* [PATCH 0/7] target/ppc: Add support for Radix partition-scoped translation
@ 2020-03-30  9:49 Cédric Le Goater
  2020-03-30  9:49 ` [PATCH 1/7] target/ppc: Enforce that the root page directory size must be at least 5 Cédric Le Goater
                   ` (6 more replies)
  0 siblings, 7 replies; 26+ messages in thread
From: Cédric Le Goater @ 2020-03-30  9:49 UTC (permalink / raw)
  To: David Gibson
  Cc: Cédric Le Goater, qemu-ppc, Greg Kurz, Nicholas Piggin, qemu-devel

Hello,

The Radix tree translation model currently supports process-scoped
translation for the PowerNV machine (Hypervisor mode) and for the
pSeries machine (Guest mode). Guests running under an emulated
Hypervisor (PowerNV machine) require a new type of Radix translation,
called partition-scoped, which is missing today.

The Radix tree translation is a two-step process. The first step,
process-scoped translation, converts an effective address to a guest
real address, and the second step, partition-scoped translation,
converts a guest real address to a host real address.

There are different cases to cover:

* Hypervisor real mode access: no Radix translation.

* Hypervisor or host application access (quadrants 0 and 3) with
  relocation on: process-scoped translation.

* Guest OS real mode access: only partition-scoped translation.

* Guest OS or guest application access (quadrants 0 and 3) with
  relocation on: both process-scoped and partition-scoped
  translations.

* Hypervisor access in quadrants 1 and 2 with relocation on: both
  process-scoped and partition-scoped translations.

The Radix partition-scoped translation walks the tree pointed to by
the first double-word of the Partition Table Entry, while
process-scoped translation walks the trees pointed to by the Process
Table Entries; the Process Table itself is located through the second
double-word of the Partition Table Entry.

Both partition-scoped and process-scoped translations follow the same
tree-walking process, so the radix tree traversal code is largely
reused. However, errors during partition-scoped translation generate
hypervisor exceptions.
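
For reference, the final dispatch in ppc_radix64_xlate() (patch 7/7)
boils down to the sketch below, with error handling and the virtual
hypervisor details omitted:

    /* simplified sketch, not the exact code */
    if (relocation) {
        /* step 1: EA -> guest RA (host RA when HV=1, quadrants 0 and 3) */
        ppc_radix64_process_scoped_xlate(cpu, rwx, eaddr, lpid, pid, pate,
                                         &g_raddr, &prot, &psize, cause_excp);
    } else {
        g_raddr = eaddr & R_EADDR_MASK;     /* real mode access */
    }

    if ((lpid != 0) || (!cpu->vhyp && !msr_hv)) {
        /* step 2: guest RA -> host RA, errors raise hypervisor interrupts */
        ppc_radix64_partition_scoped_xlate(cpu, rwx, eaddr, g_raddr, pate,
                                           raddr, &prot, &psize, 0, cause_excp);
    } else {
        *raddr = g_raddr;                   /* HV access: no second step */
    }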

Based on work from Suraj Jitindar Singh <sjitindarsingh@gmail.com>

Thanks,

C.

Cédric Le Goater (6):
  target/ppc: Introduce a relocation bool in ppc_radix64_handle_mmu_fault()
  target/ppc: Assert if HV mode is set when running under a pseries machine
  target/ppc: Introduce ppc_radix64_xlate() for Radix tree translation
  target/ppc: Rework ppc_radix64_walk_tree() for partition-scoped translation
  target/ppc: Extend ppc_radix64_check_prot() with a 'partition_scoped' bool
  target/ppc: Add support for Radix partition-scoped translation

Suraj Jitindar Singh (1):
  target/ppc: Enforce that the root page directory size must be at least
    5

 target/ppc/cpu.h         |   3 +
 target/ppc/excp_helper.c |   3 +-
 target/ppc/mmu-radix64.c | 411 +++++++++++++++++++++++++++++----------
 3 files changed, 308 insertions(+), 109 deletions(-)

-- 
2.21.1




* [PATCH 1/7] target/ppc: Enforce that the root page directory size must be at least 5
  2020-03-30  9:49 [PATCH 0/7] target/ppc: Add support for Radix partition-scoped translation Cédric Le Goater
@ 2020-03-30  9:49 ` Cédric Le Goater
  2020-03-30 10:45   ` Greg Kurz
  2020-03-31  0:57   ` David Gibson
  2020-03-30  9:49 ` [PATCH 2/7] target/ppc: Introduce a relocation bool in ppc_radix64_handle_mmu_fault() Cédric Le Goater
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 26+ messages in thread
From: Cédric Le Goater @ 2020-03-30  9:49 UTC (permalink / raw)
  To: David Gibson
  Cc: Greg Kurz, Nicholas Piggin, qemu-devel, qemu-ppc,
	Cédric Le Goater, Suraj Jitindar Singh

From: Suraj Jitindar Singh <sjitindarsingh@gmail.com>

According to the ISA the root page directory size of a radix tree for
either process- or partition-scoped translation must be >= 5.

Thus add this to the list of conditions checked when validating the
partition table entry in validate_pate().

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
 target/ppc/mmu-radix64.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
index 224e646c5094..99678570581b 100644
--- a/target/ppc/mmu-radix64.c
+++ b/target/ppc/mmu-radix64.c
@@ -212,6 +212,9 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
     if (lpid == 0 && !msr_hv) {
         return false;
     }
+    if ((pate->dw0 & PATE1_R_PRTS) < 5) {
+        return false;
+    }
     /* More checks ... */
     return true;
 }
-- 
2.21.1




* [PATCH 2/7] target/ppc: Introduce a relocation bool in ppc_radix64_handle_mmu_fault()
  2020-03-30  9:49 [PATCH 0/7] target/ppc: Add support for Radix partition-scoped translation Cédric Le Goater
  2020-03-30  9:49 ` [PATCH 1/7] target/ppc: Enforce that the root page directory size must be at least 5 Cédric Le Goater
@ 2020-03-30  9:49 ` Cédric Le Goater
  2020-03-30 13:01   ` Greg Kurz
  2020-03-31  0:58   ` David Gibson
  2020-03-30  9:49 ` [PATCH 3/7] target/ppc: Assert if HV mode is set when running under a pseries machine Cédric Le Goater
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 26+ messages in thread
From: Cédric Le Goater @ 2020-03-30  9:49 UTC (permalink / raw)
  To: David Gibson
  Cc: Greg Kurz, Nicholas Piggin, qemu-devel, qemu-ppc,
	Cédric Le Goater, Suraj Jitindar Singh

It will ease the introduction of new routines for partition-scoped
Radix translation.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
 target/ppc/mmu-radix64.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
index 99678570581b..f6007e956569 100644
--- a/target/ppc/mmu-radix64.c
+++ b/target/ppc/mmu-radix64.c
@@ -229,12 +229,13 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
     uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
     int page_size, prot, fault_cause = 0;
     ppc_v3_pate_t pate;
+    bool relocation;
 
     assert((rwx == 0) || (rwx == 1) || (rwx == 2));
 
+    relocation = ((rwx == 2) && (msr_ir == 1)) || ((rwx != 2) && (msr_dr == 1));
     /* HV or virtual hypervisor Real Mode Access */
-    if ((msr_hv || cpu->vhyp) &&
-        (((rwx == 2) && (msr_ir == 0)) || ((rwx != 2) && (msr_dr == 0)))) {
+    if (!relocation && (msr_hv || cpu->vhyp)) {
         /* In real mode top 4 effective addr bits (mostly) ignored */
         raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
 
-- 
2.21.1




* [PATCH 3/7] target/ppc: Assert if HV mode is set when running under a pseries machine
  2020-03-30  9:49 [PATCH 0/7] target/ppc: Add support for Radix partition-scoped translation Cédric Le Goater
  2020-03-30  9:49 ` [PATCH 1/7] target/ppc: Enforce that the root page directory size must be at least 5 Cédric Le Goater
  2020-03-30  9:49 ` [PATCH 2/7] target/ppc: Introduce a relocation bool in ppc_radix64_handle_mmu_fault() Cédric Le Goater
@ 2020-03-30  9:49 ` Cédric Le Goater
  2020-03-30 10:46   ` Greg Kurz
  2020-03-31  0:59   ` David Gibson
  2020-03-30  9:49 ` [PATCH 4/7] target/ppc: Introduce ppc_radix64_xlate() for Radix tree translation Cédric Le Goater
                   ` (3 subsequent siblings)
  6 siblings, 2 replies; 26+ messages in thread
From: Cédric Le Goater @ 2020-03-30  9:49 UTC (permalink / raw)
  To: David Gibson
  Cc: Greg Kurz, Nicholas Piggin, qemu-devel, qemu-ppc,
	Cédric Le Goater, Suraj Jitindar Singh

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
 target/ppc/mmu-radix64.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
index f6007e956569..d2422d1c54c9 100644
--- a/target/ppc/mmu-radix64.c
+++ b/target/ppc/mmu-radix64.c
@@ -231,6 +231,7 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
     ppc_v3_pate_t pate;
     bool relocation;
 
+    assert(!(msr_hv && cpu->vhyp));
     assert((rwx == 0) || (rwx == 1) || (rwx == 2));
 
     relocation = ((rwx == 2) && (msr_ir == 1)) || ((rwx != 2) && (msr_dr == 1));
-- 
2.21.1




* [PATCH 4/7] target/ppc: Introduce ppc_radix64_xlate() for Radix tree translation
  2020-03-30  9:49 [PATCH 0/7] target/ppc: Add support for Radix partition-scoped translation Cédric Le Goater
                   ` (2 preceding siblings ...)
  2020-03-30  9:49 ` [PATCH 3/7] target/ppc: Assert if HV mode is set when running under a pseries machine Cédric Le Goater
@ 2020-03-30  9:49 ` Cédric Le Goater
  2020-03-30 14:18   ` Greg Kurz
  2020-03-30  9:49 ` [PATCH 5/7] target/ppc: Rework ppc_radix64_walk_tree() for partition-scoped translation Cédric Le Goater
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 26+ messages in thread
From: Cédric Le Goater @ 2020-03-30  9:49 UTC (permalink / raw)
  To: David Gibson
  Cc: Greg Kurz, Nicholas Piggin, qemu-devel, qemu-ppc,
	Cédric Le Goater, Suraj Jitindar Singh

This is moving code under a new ppc_radix64_xlate() routine shared by
the MMU Radix page fault handler and the get_phys_page_debug PPC
callback. The difference being that get_phys_page_debug does not
generate exceptions.

The specific part of process-scoped Radix translation is moved under
ppc_radix64_process_scoped_xlate() in preparation of the future
support for partition-scoped Radix translation.

It should be functionally equivalent.
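
For reviewers, the resulting call chain is summarized below (this is
just a sketch of the structure, not code from the patch):

    ppc_radix64_handle_mmu_fault()       /* raises exceptions on error */
    ppc_radix64_get_phys_page_debug()    /* never raises exceptions    */
        -> ppc_radix64_xlate(..., cause_excp)
            -> ppc_radix64_process_scoped_xlate()  /* if relocation is on */
                -> ppc_radix64_walk_tree()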

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
 target/ppc/mmu-radix64.c | 201 ++++++++++++++++++++++-----------------
 1 file changed, 116 insertions(+), 85 deletions(-)

diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
index d2422d1c54c9..b4e6abcd2d35 100644
--- a/target/ppc/mmu-radix64.c
+++ b/target/ppc/mmu-radix64.c
@@ -219,17 +219,119 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
     return true;
 }
 
+static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
+                                      vaddr eaddr, uint64_t lpid, uint64_t pid,
+                                      ppc_v3_pate_t pate, hwaddr *g_raddr,
+                                      int *g_prot, int *g_page_size,
+                                      bool cause_excp)
+{
+    CPUState *cs = CPU(cpu);
+    uint64_t offset, size, prtbe_addr, prtbe0, pte;
+    int fault_cause = 0;
+    hwaddr pte_addr;
+
+    /* Index Process Table by PID to Find Corresponding Process Table Entry */
+    offset = pid * sizeof(struct prtb_entry);
+    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
+    if (offset >= size) {
+        /* offset exceeds size of the process table */
+        if (cause_excp) {
+            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
+        }
+        return 1;
+    }
+    prtbe_addr = (pate.dw1 & PATE1_R_PRTB) + offset;
+    prtbe0 = ldq_phys(cs->as, prtbe_addr);
+
+    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
+    *g_page_size = PRTBE_R_GET_RTS(prtbe0);
+    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
+                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
+                                g_raddr, g_page_size, &fault_cause, &pte_addr);
+
+    if (!(pte & R_PTE_VALID) ||
+        ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot)) {
+        /* No valid pte or access denied due to protection */
+        if (cause_excp) {
+            ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
+        }
+        return 1;
+    }
+
+    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, g_prot);
+
+    return 0;
+}
+
+static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
+                             uint64_t lpid, uint64_t pid, bool relocation,
+                             hwaddr *raddr, int *psizep, int *protp,
+                             bool cause_excp)
+{
+    ppc_v3_pate_t pate;
+    int psize, prot;
+    hwaddr g_raddr;
+
+    *psizep = INT_MAX;
+    *protp = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
+
+    /* Get Process Table */
+    if (cpu->vhyp && lpid == 0) {
+        PPCVirtualHypervisorClass *vhc;
+        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
+        vhc->get_pate(cpu->vhyp, &pate);
+    } else {
+        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
+            if (cause_excp) {
+                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
+            }
+            return 1;
+        }
+        if (!validate_pate(cpu, lpid, &pate)) {
+            if (cause_excp) {
+                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
+            }
+            return 1;
+        }
+        /* We don't support guest mode yet */
+        if (lpid != 0) {
+            error_report("PowerNV guest support Unimplemented");
+            exit(1);
+        }
+    }
+
+    /*
+     * Perform process-scoped translation if relocation enabled.
+     *
+     * - Translates an effective address to a host real address in
+     *   quadrants 0 and 3 when HV=1.
+     */
+    if (relocation) {
+        int ret = ppc_radix64_process_scoped_xlate(cpu, rwx, eaddr, lpid, pid,
+                                                   pate, &g_raddr, &prot,
+                                                   &psize, cause_excp);
+        if (ret) {
+            return ret;
+        }
+        *psizep = MIN(*psizep, psize);
+        *protp &= prot;
+    } else {
+        g_raddr = eaddr & R_EADDR_MASK;
+    }
+
+    *raddr = g_raddr;
+    return 0;
+}
+
 int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
                                  int mmu_idx)
 {
     CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
-    PPCVirtualHypervisorClass *vhc;
-    hwaddr raddr, pte_addr;
-    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
-    int page_size, prot, fault_cause = 0;
-    ppc_v3_pate_t pate;
+    uint64_t lpid = 0, pid = 0;
+    int page_size, prot;
     bool relocation;
+    hwaddr raddr;
 
     assert(!(msr_hv && cpu->vhyp));
     assert((rwx == 0) || (rwx == 1) || (rwx == 2));
@@ -268,48 +370,11 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
         return 1;
     }
 
-    /* Get Process Table */
-    if (cpu->vhyp) {
-        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
-        vhc->get_pate(cpu->vhyp, &pate);
-    } else {
-        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
-            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
-            return 1;
-        }
-        if (!validate_pate(cpu, lpid, &pate)) {
-            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
-        }
-        /* We don't support guest mode yet */
-        if (lpid != 0) {
-            error_report("PowerNV guest support Unimplemented");
-            exit(1);
-       }
-    }
-
-    /* Index Process Table by PID to Find Corresponding Process Table Entry */
-    offset = pid * sizeof(struct prtb_entry);
-    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
-    if (offset >= size) {
-        /* offset exceeds size of the process table */
-        ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
+    /* Translate eaddr to raddr (where raddr is addr qemu needs for access) */
+    if (ppc_radix64_xlate(cpu, eaddr, rwx, lpid, pid, relocation, &raddr,
+                          &page_size, &prot, 1)) {
         return 1;
     }
-    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
-
-    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
-    page_size = PRTBE_R_GET_RTS(prtbe0);
-    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
-                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
-                                &raddr, &page_size, &fault_cause, &pte_addr);
-    if (!pte || ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, &prot)) {
-        /* Couldn't get pte or access denied due to protection */
-        ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
-        return 1;
-    }
-
-    /* Update Reference and Change Bits */
-    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, &prot);
 
     tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
                  prot, mmu_idx, 1UL << page_size);
@@ -318,16 +383,13 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
 
 hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
 {
-    CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
-    PPCVirtualHypervisorClass *vhc;
-    hwaddr raddr, pte_addr;
-    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
-    int page_size, fault_cause = 0;
-    ppc_v3_pate_t pate;
+    uint64_t lpid = 0, pid = 0;
+    int psize, prot;
+    hwaddr raddr;
 
     /* Handle Real Mode */
-    if (msr_dr == 0) {
+    if ((msr_dr == 0) && (msr_hv || (cpu->vhyp && lpid == 0))) {
         /* In real mode top 4 effective addr bits (mostly) ignored */
         return eaddr & 0x0FFFFFFFFFFFFFFFULL;
     }
@@ -337,39 +399,8 @@ hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
         return -1;
     }
 
-    /* Get Process Table */
-    if (cpu->vhyp) {
-        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
-        vhc->get_pate(cpu->vhyp, &pate);
-    } else {
-        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
-            return -1;
-        }
-        if (!validate_pate(cpu, lpid, &pate)) {
-            return -1;
-        }
-        /* We don't support guest mode yet */
-        if (lpid != 0) {
-            error_report("PowerNV guest support Unimplemented");
-            exit(1);
-       }
-    }
-
-    /* Index Process Table by PID to Find Corresponding Process Table Entry */
-    offset = pid * sizeof(struct prtb_entry);
-    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
-    if (offset >= size) {
-        /* offset exceeds size of the process table */
-        return -1;
-    }
-    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
-
-    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
-    page_size = PRTBE_R_GET_RTS(prtbe0);
-    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
-                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
-                                &raddr, &page_size, &fault_cause, &pte_addr);
-    if (!pte) {
+    if (ppc_radix64_xlate(cpu, eaddr, 0, lpid, pid, msr_dr, &raddr, &psize,
+                          &prot, 0)) {
         return -1;
     }
 
-- 
2.21.1




* [PATCH 5/7] target/ppc: Rework ppc_radix64_walk_tree() for partition-scoped translation
  2020-03-30  9:49 [PATCH 0/7] target/ppc: Add support for Radix partition-scoped translation Cédric Le Goater
                   ` (3 preceding siblings ...)
  2020-03-30  9:49 ` [PATCH 4/7] target/ppc: Introduce ppc_radix64_xlate() for Radix tree translation Cédric Le Goater
@ 2020-03-30  9:49 ` Cédric Le Goater
  2020-03-30 17:00   ` Greg Kurz
  2020-03-30  9:49 ` [PATCH 6/7] target/ppc: Extend ppc_radix64_check_prot() with a 'partition_scoped' bool Cédric Le Goater
  2020-03-30  9:49 ` [PATCH 7/7] target/ppc: Add support for Radix partition-scoped translation Cédric Le Goater
  6 siblings, 1 reply; 26+ messages in thread
From: Cédric Le Goater @ 2020-03-30  9:49 UTC (permalink / raw)
  To: David Gibson
  Cc: Greg Kurz, Nicholas Piggin, qemu-devel, qemu-ppc,
	Cédric Le Goater, Suraj Jitindar Singh

The ppc_radix64_walk_tree() routine walks through the nested radix
tables to look for a PTE.

Split it in two and introduce a new routine ppc_radix64_next_level()
which we will use for partition-scoped Radix translation when
translating the process tree addresses.
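
After the split, ppc_radix64_walk_tree() becomes a loop around the new
helper, along the lines of:

    *pte_addr = base_addr + (index * sizeof(pde));
    do {
        pde = ppc_radix64_next_level(cpu, eaddr, pte_addr, &nls, psize,
                                     fault_cause);
    } while ((pde & R_PTE_VALID) && !(pde & R_PTE_LEAF));

which will allow the partition-scoped code to drive the walk one level
at a time.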

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
 target/ppc/mmu-radix64.c | 50 ++++++++++++++++++++++++++--------------
 1 file changed, 33 insertions(+), 17 deletions(-)

diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
index b4e6abcd2d35..136498111f60 100644
--- a/target/ppc/mmu-radix64.c
+++ b/target/ppc/mmu-radix64.c
@@ -162,44 +162,60 @@ static void ppc_radix64_set_rc(PowerPCCPU *cpu, int rwx, uint64_t pte,
     }
 }
 
-static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
-                                      uint64_t base_addr, uint64_t nls,
-                                      hwaddr *raddr, int *psize,
-                                      int *fault_cause, hwaddr *pte_addr)
+static uint64_t ppc_radix64_next_level(PowerPCCPU *cpu, vaddr eaddr,
+                                       uint64_t *pte_addr, uint64_t *nls,
+                                       int *psize, int *fault_cause)
 {
     CPUState *cs = CPU(cpu);
     uint64_t index, pde;
 
-    if (nls < 5) { /* Directory maps less than 2**5 entries */
+    if (*nls < 5) { /* Directory maps less than 2**5 entries */
         *fault_cause |= DSISR_R_BADCONFIG;
         return 0;
     }
 
     /* Read page <directory/table> entry from guest address space */
-    index = eaddr >> (*psize - nls); /* Shift */
-    index &= ((1UL << nls) - 1); /* Mask */
-    pde = ldq_phys(cs->as, base_addr + (index * sizeof(pde)));
-    if (!(pde & R_PTE_VALID)) { /* Invalid Entry */
+    pde = ldq_phys(cs->as, *pte_addr);
+    if (!(pde & R_PTE_VALID)) {         /* Invalid Entry */
         *fault_cause |= DSISR_NOPTE;
         return 0;
     }
 
-    *psize -= nls;
+    *psize -= *nls;
+    if (!(pde & R_PTE_LEAF)) { /* Prepare for next iteration */
+        *nls = pde & R_PDE_NLS;
+        index = eaddr >> (*psize - *nls);       /* Shift */
+        index &= ((1UL << *nls) - 1);           /* Mask */
+        *pte_addr = (pde & R_PDE_NLB) + (index * sizeof(pde));
+    }
+    return pde;
+}
+
+static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
+                                      uint64_t base_addr, uint64_t nls,
+                                      hwaddr *raddr, int *psize,
+                                      int *fault_cause, hwaddr *pte_addr)
+{
+    uint64_t index, pde;
+
+    index = eaddr >> (*psize - nls);    /* Shift */
+    index &= ((1UL << nls) - 1);       /* Mask */
+    *pte_addr = base_addr + (index * sizeof(pde));
+    do {
+        pde = ppc_radix64_next_level(cpu, eaddr, pte_addr, &nls, psize,
+                                     fault_cause);
+    } while ((pde & R_PTE_VALID) && !(pde & R_PTE_LEAF));
 
-    /* Check if Leaf Entry -> Page Table Entry -> Stop the Search */
-    if (pde & R_PTE_LEAF) {
+    /* Did we find a valid leaf? */
+    if ((pde & R_PTE_VALID) && (pde & R_PTE_LEAF)) {
         uint64_t rpn = pde & R_PTE_RPN;
         uint64_t mask = (1UL << *psize) - 1;
 
         /* Or high bits of rpn and low bits to ea to form whole real addr */
         *raddr = (rpn & ~mask) | (eaddr & mask);
-        *pte_addr = base_addr + (index * sizeof(pde));
-        return pde;
     }
 
-    /* Next Level of Radix Tree */
-    return ppc_radix64_walk_tree(cpu, eaddr, pde & R_PDE_NLB, pde & R_PDE_NLS,
-                                 raddr, psize, fault_cause, pte_addr);
+    return pde;
 }
 
 static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
-- 
2.21.1




* [PATCH 6/7] target/ppc: Extend ppc_radix64_check_prot() with a 'partition_scoped' bool
  2020-03-30  9:49 [PATCH 0/7] target/ppc: Add support for Radix partition-scoped translation Cédric Le Goater
                   ` (4 preceding siblings ...)
  2020-03-30  9:49 ` [PATCH 5/7] target/ppc: Rework ppc_radix64_walk_tree() for partition-scoped translation Cédric Le Goater
@ 2020-03-30  9:49 ` Cédric Le Goater
  2020-03-30 17:01   ` Greg Kurz
  2020-03-30 17:08   ` Greg Kurz
  2020-03-30  9:49 ` [PATCH 7/7] target/ppc: Add support for Radix partition-scoped translation Cédric Le Goater
  6 siblings, 2 replies; 26+ messages in thread
From: Cédric Le Goater @ 2020-03-30  9:49 UTC (permalink / raw)
  To: David Gibson
  Cc: Greg Kurz, Nicholas Piggin, qemu-devel, qemu-ppc,
	Cédric Le Goater, Suraj Jitindar Singh

This prepares the ground for partition-scoped Radix translation.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
 target/ppc/mmu-radix64.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
index 136498111f60..3ae29ed90d49 100644
--- a/target/ppc/mmu-radix64.c
+++ b/target/ppc/mmu-radix64.c
@@ -105,7 +105,8 @@ static void ppc_radix64_raise_si(PowerPCCPU *cpu, int rwx, vaddr eaddr,
 
 
 static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
-                                   int *fault_cause, int *prot)
+                                   int *fault_cause, int *prot,
+                                   bool partition_scoped)
 {
     CPUPPCState *env = &cpu->env;
     const int need_prot[] = { PAGE_READ, PAGE_WRITE, PAGE_EXEC };
@@ -121,11 +122,11 @@ static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
     }
 
     /* Determine permissions allowed by Encoded Access Authority */
-    if ((pte & R_PTE_EAA_PRIV) && msr_pr) { /* Insufficient Privilege */
+    if (!partition_scoped && (pte & R_PTE_EAA_PRIV) && msr_pr) {
         *prot = 0;
-    } else if (msr_pr || (pte & R_PTE_EAA_PRIV)) {
+    } else if (msr_pr || (pte & R_PTE_EAA_PRIV) || partition_scoped) {
         *prot = ppc_radix64_get_prot_eaa(pte);
-    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) */
+    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) && !partition_scoped */
         *prot = ppc_radix64_get_prot_eaa(pte);
         *prot &= ppc_radix64_get_prot_amr(cpu); /* Least combined permissions */
     }
@@ -266,7 +267,7 @@ static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
                                 g_raddr, g_page_size, &fault_cause, &pte_addr);
 
     if (!(pte & R_PTE_VALID) ||
-        ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot)) {
+        ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot, 0)) {
         /* No valid pte or access denied due to protection */
         if (cause_excp) {
             ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
-- 
2.21.1




* [PATCH 7/7] target/ppc: Add support for Radix partition-scoped translation
  2020-03-30  9:49 [PATCH 0/7] target/ppc: Add support for Radix partition-scoped translation Cédric Le Goater
                   ` (5 preceding siblings ...)
  2020-03-30  9:49 ` [PATCH 6/7] target/ppc: Extend ppc_radix64_check_prot() with a 'partition_scoped' bool Cédric Le Goater
@ 2020-03-30  9:49 ` Cédric Le Goater
  6 siblings, 0 replies; 26+ messages in thread
From: Cédric Le Goater @ 2020-03-30  9:49 UTC (permalink / raw)
  To: David Gibson
  Cc: Greg Kurz, Nicholas Piggin, qemu-devel, qemu-ppc,
	Cédric Le Goater, Suraj Jitindar Singh

The Radix tree translation model currently supports process-scoped
translation for the PowerNV machine (Hypervisor mode) and for the
pSeries machine (Guest mode). Guests running under an emulated
Hypervisor (PowerNV machine) require a new type of Radix translation,
called partition-scoped, which is missing today.

The Radix tree translation is a two-step process. The first step,
process-scoped translation, converts an effective address to a guest
real address, and the second step, partition-scoped translation,
converts a guest real address to a host real address.

There are different cases to cover:

* Hypervisor real mode access: no Radix translation.

* Hypervisor or host application access (quadrants 0 and 3) with
  relocation on: process-scoped translation.

* Guest OS real mode access: only partition-scoped translation.

* Guest OS or guest application access (quadrants 0 and 3) with
  relocation on: both process-scoped and partition-scoped
  translations.

* Hypervisor access in quadrants 1 and 2 with relocation on: both
  process-scoped and partition-scoped translations.

The Radix partition-scoped translation walks the tree pointed to by
the first double-word of the Partition Table Entry, while
process-scoped translation walks the trees pointed to by the Process
Table Entries; the Process Table itself is located through the second
double-word of the Partition Table Entry.

Both partition-scoped and process-scoped translations follow the same
tree-walking process, so the radix tree traversal code is largely
reused. However, errors during partition-scoped translation generate
hypervisor exceptions.
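
When translation is not done on behalf of the hypervisor itself, the
process table and the process-scoped tree live in guest real address
space, so each address used during the process-scoped walk is first
put through partition-scoped translation. Roughly (simplified from
ppc_radix64_process_scoped_xlate() in the patch below, error handling
omitted):

    do {
        /* translate the guest real address of the current PDE */
        ppc_radix64_partition_scoped_xlate(cpu, 0, eaddr, pte_addr, pate,
                                           &h_raddr, &h_prot, &h_page_size,
                                           1, 1);
        /* read the PDE and set up the next level's guest real address */
        pte = ppc_radix64_next_level(cpu, eaddr & R_EADDR_MASK, &h_raddr,
                                     &nls, g_page_size, &fault_cause);
        pte_addr = h_raddr;
    } while ((pte & R_PTE_VALID) && !(pte & R_PTE_LEAF));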

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
 target/ppc/cpu.h         |   3 +
 target/ppc/excp_helper.c |   3 +-
 target/ppc/mmu-radix64.c | 172 +++++++++++++++++++++++++++++++++++----
 3 files changed, 162 insertions(+), 16 deletions(-)

diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index bca4169f8062..74f0524f4fea 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -464,6 +464,9 @@ typedef struct ppc_v3_pate_t {
 #define DSISR_AMR                0x00200000
 /* Unsupported Radix Tree Configuration */
 #define DSISR_R_BADCONFIG        0x00080000
+#define DSISR_ATOMIC_RC          0x00040000
+/* Unable to translate address of (guest) pde or process/page table entry */
+#define DSISR_PRTABLE_FAULT      0x00020000
 
 /* SRR1 error code fields */
 
diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
index 73b5c28d0375..e77572fab86e 100644
--- a/target/ppc/excp_helper.c
+++ b/target/ppc/excp_helper.c
@@ -525,9 +525,10 @@ static inline void powerpc_excp(PowerPCCPU *cpu, int excp_model, int excp)
     case POWERPC_EXCP_ISEG:      /* Instruction segment exception            */
     case POWERPC_EXCP_TRACE:     /* Trace exception                          */
         break;
+    case POWERPC_EXCP_HISI:      /* Hypervisor instruction storage exception */
+        msr |= env->error_code;
     case POWERPC_EXCP_HDECR:     /* Hypervisor decrementer exception         */
     case POWERPC_EXCP_HDSI:      /* Hypervisor data storage exception        */
-    case POWERPC_EXCP_HISI:      /* Hypervisor instruction storage exception */
     case POWERPC_EXCP_HDSEG:     /* Hypervisor data segment exception        */
     case POWERPC_EXCP_HISEG:     /* Hypervisor instruction segment exception */
     case POWERPC_EXCP_SDOOR_HV:  /* Hypervisor Doorbell interrupt            */
diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
index 3ae29ed90d49..049527f40071 100644
--- a/target/ppc/mmu-radix64.c
+++ b/target/ppc/mmu-radix64.c
@@ -103,6 +103,27 @@ static void ppc_radix64_raise_si(PowerPCCPU *cpu, int rwx, vaddr eaddr,
     }
 }
 
+static void ppc_radix64_raise_hsi(PowerPCCPU *cpu, int rwx, vaddr eaddr,
+                                  hwaddr g_raddr, uint32_t cause)
+{
+    CPUState *cs = CPU(cpu);
+    CPUPPCState *env = &cpu->env;
+
+    if (rwx == 2) { /* H Instruction Storage Interrupt */
+        cs->exception_index = POWERPC_EXCP_HISI;
+        env->spr[SPR_ASDR] = g_raddr;
+        env->error_code = cause;
+    } else { /* H Data Storage Interrupt */
+        cs->exception_index = POWERPC_EXCP_HDSI;
+        if (rwx == 1) { /* Write -> Store */
+            cause |= DSISR_ISSTORE;
+        }
+        env->spr[SPR_HDSISR] = cause;
+        env->spr[SPR_HDAR] = eaddr;
+        env->spr[SPR_ASDR] = g_raddr;
+        env->error_code = 0;
+    }
+}
 
 static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
                                    int *fault_cause, int *prot,
@@ -236,6 +257,38 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
     return true;
 }
 
+static int ppc_radix64_partition_scoped_xlate(PowerPCCPU *cpu, int rwx,
+                                              vaddr eaddr, hwaddr g_raddr,
+                                              ppc_v3_pate_t pate,
+                                              hwaddr *h_raddr, int *h_prot,
+                                              int *h_page_size, bool pde_addr,
+                                              bool cause_excp)
+{
+    int fault_cause = 0;
+    hwaddr pte_addr;
+    uint64_t pte;
+
+    *h_page_size = PRTBE_R_GET_RTS(pate.dw0);
+    pte = ppc_radix64_walk_tree(cpu, g_raddr, pate.dw0 & PRTBE_R_RPDB,
+                                pate.dw0 & PRTBE_R_RPDS, h_raddr, h_page_size,
+                                &fault_cause, &pte_addr);
+    /* No valid pte or access denied due to protection */
+    if (!(pte & R_PTE_VALID) ||
+            ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, h_prot, 1)) {
+        if (pde_addr) /* address being translated was that of a guest pde */
+            fault_cause |= DSISR_PRTABLE_FAULT;
+        if (cause_excp) {
+            ppc_radix64_raise_hsi(cpu, rwx, eaddr, g_raddr, fault_cause);
+        }
+        return 1;
+    }
+
+    /* Update Reference and Change Bits */
+    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, h_prot);
+
+    return 0;
+}
+
 static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
                                       vaddr eaddr, uint64_t lpid, uint64_t pid,
                                       ppc_v3_pate_t pate, hwaddr *g_raddr,
@@ -243,9 +296,10 @@ static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
                                       bool cause_excp)
 {
     CPUState *cs = CPU(cpu);
-    uint64_t offset, size, prtbe_addr, prtbe0, pte;
-    int fault_cause = 0;
-    hwaddr pte_addr;
+    CPUPPCState *env = &cpu->env;
+    uint64_t offset, size, prtbe_addr, prtbe0, base_addr, nls, index, pte;
+    int fault_cause = 0, h_page_size, h_prot, ret;
+    hwaddr h_raddr, pte_addr;
 
     /* Index Process Table by PID to Find Corresponding Process Table Entry */
     offset = pid * sizeof(struct prtb_entry);
@@ -258,13 +312,69 @@ static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
         return 1;
     }
     prtbe_addr = (pate.dw1 & PATE1_R_PRTB) + offset;
-    prtbe0 = ldq_phys(cs->as, prtbe_addr);
+
+    if (cpu->vhyp && lpid == 0) {
+        prtbe0 = ldq_phys(cs->as, prtbe_addr);
+    } else {
+        /*
+         * Process table addresses are subject to partition-scoped
+         * translation
+         *
+         * On a Radix host, the partition-scoped page table for LPID=0
+         * is only used to translate the effective addresses of the
+         * process table entries.
+         */
+        ret = ppc_radix64_partition_scoped_xlate(cpu, 0, eaddr, prtbe_addr,
+                                                 pate, &h_raddr, &h_prot,
+                                                 &h_page_size, 1, 1);
+        if (ret) {
+            return ret;
+        }
+        prtbe0 = ldq_phys(cs->as, h_raddr);
+    }
 
     /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
     *g_page_size = PRTBE_R_GET_RTS(prtbe0);
-    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
-                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
-                                g_raddr, g_page_size, &fault_cause, &pte_addr);
+    base_addr = prtbe0 & PRTBE_R_RPDB;
+    nls = prtbe0 & PRTBE_R_RPDS;
+    if (msr_hv || (cpu->vhyp && lpid == 0)) {
+        /*
+         * Can treat process table addresses as real addresses
+         */
+        pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK, base_addr, nls,
+                                    g_raddr, g_page_size, &fault_cause,
+                                    &pte_addr);
+    } else {
+        index = (eaddr & R_EADDR_MASK) >> (*g_page_size - nls); /* Shift */
+        index &= ((1UL << nls) - 1);                            /* Mask */
+        pte_addr = base_addr + (index * sizeof(pte));
+
+        /*
+         * Each process table address is subject to a partition-scoped
+         * translation
+         */
+        do {
+            ret = ppc_radix64_partition_scoped_xlate(cpu, 0, eaddr, pte_addr,
+                                                     pate, &h_raddr, &h_prot,
+                                                     &h_page_size, 1, 1);
+            if (ret) {
+                return ret;
+            }
+
+            pte = ppc_radix64_next_level(cpu, eaddr & R_EADDR_MASK, &h_raddr,
+                                         &nls, g_page_size, &fault_cause);
+            pte_addr = h_raddr;
+        } while ((pte & R_PTE_VALID) && !(pte & R_PTE_LEAF));
+
+        /* Did we find a valid leaf? */
+        if ((pte & R_PTE_VALID) && (pte & R_PTE_LEAF)) {
+            uint64_t rpn = pte & R_PTE_RPN;
+            uint64_t mask = (1UL << *g_page_size) - 1;
+
+            /* Or high bits of rpn and low bits to ea to form whole real addr */
+            *g_raddr = (rpn & ~mask) | (eaddr & mask);
+        }
+    }
 
     if (!(pte & R_PTE_VALID) ||
         ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot, 0)) {
@@ -280,11 +390,29 @@ static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
     return 0;
 }
 
+/*
+ * Radix tree translation is a 2 steps translation process:
+ *
+ * 1. Process-scoped translation:   Guest Eff Addr  -> Guest Real Addr
+ * 2. Partition-scoped translation: Guest Real Addr -> Host Real Addr
+ *
+ *                                  MSR[HV]
+ *              +-------------+----------------+---------------+
+ *              |             |     HV = 0     |     HV = 1    |
+ *              +-------------+----------------+---------------+
+ *              | Relocation  |    Partition   |      No       |
+ *              | = Off       |     Scoped     |  Translation  |
+ *  Relocation  +-------------+----------------+---------------+
+ *              | Relocation  |   Partition &  |    Process    |
+ *              | = On        | Process Scoped |    Scoped     |
+ *              +-------------+----------------+---------------+
+ */
 static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
                              uint64_t lpid, uint64_t pid, bool relocation,
                              hwaddr *raddr, int *psizep, int *protp,
                              bool cause_excp)
 {
+    CPUPPCState *env = &cpu->env;
     ppc_v3_pate_t pate;
     int psize, prot;
     hwaddr g_raddr;
@@ -310,11 +438,6 @@ static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
             }
             return 1;
         }
-        /* We don't support guest mode yet */
-        if (lpid != 0) {
-            error_report("PowerNV guest support Unimplemented");
-            exit(1);
-        }
     }
 
     /*
@@ -322,6 +445,8 @@ static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
      *
      * - Translates an effective address to a host real address in
      *   quadrants 0 and 3 when HV=1.
+     *
+     * - Translates an effective address to a guest real address.
      */
     if (relocation) {
         int ret = ppc_radix64_process_scoped_xlate(cpu, rwx, eaddr, lpid, pid,
@@ -336,7 +461,24 @@ static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
         g_raddr = eaddr & R_EADDR_MASK;
     }
 
-    *raddr = g_raddr;
+    /*
+     * Perform partition-scoped translation if !HV or HV access to
+     * quadrants 1 or 2. Translates a guest real address to a host
+     * real address.
+     */
+    if ((lpid != 0) || (!cpu->vhyp && !msr_hv)) {
+        int ret = ppc_radix64_partition_scoped_xlate(cpu, rwx, eaddr, g_raddr,
+                                                     pate, raddr, &prot, &psize,
+                                                     0, cause_excp);
+        if (ret) {
+            return ret;
+        }
+        *psizep = MIN(*psizep, psize);
+        *protp &= prot;
+    } else {
+        *raddr = g_raddr;
+    }
+
     return 0;
 }
 
@@ -345,7 +487,7 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
 {
     CPUState *cs = CPU(cpu);
     CPUPPCState *env = &cpu->env;
-    uint64_t lpid = 0, pid = 0;
+    uint64_t pid, lpid = env->spr[SPR_LPIDR];
     int page_size, prot;
     bool relocation;
     hwaddr raddr;
@@ -355,7 +497,7 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
 
     relocation = ((rwx == 2) && (msr_ir == 1)) || ((rwx != 2) && (msr_dr == 1));
     /* HV or virtual hypervisor Real Mode Access */
-    if (!relocation && (msr_hv || cpu->vhyp)) {
+    if (!relocation && (msr_hv || (cpu->vhyp && lpid == 0))) {
         /* In real mode top 4 effective addr bits (mostly) ignored */
         raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
 
-- 
2.21.1




* Re: [PATCH 1/7] target/ppc: Enforce that the root page directory size must be at least 5
  2020-03-30  9:49 ` [PATCH 1/7] target/ppc: Enforce that the root page directory size must be at least 5 Cédric Le Goater
@ 2020-03-30 10:45   ` Greg Kurz
  2020-03-31  0:57   ` David Gibson
  1 sibling, 0 replies; 26+ messages in thread
From: Greg Kurz @ 2020-03-30 10:45 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Suraj Jitindar Singh, qemu-ppc, qemu-devel, Nicholas Piggin,
	David Gibson

On Mon, 30 Mar 2020 11:49:40 +0200
Cédric Le Goater <clg@kaod.org> wrote:

> From: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> 
> According to the ISA the root page directory size of a radix tree for
> either process- or partition-scoped translation must be >= 5.
> 
> Thus add this to the list of conditions checked when validating the
> partition table entry in validate_pate().
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> Signed-off-by: Cédric Le Goater <clg@kaod.org>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  target/ppc/mmu-radix64.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index 224e646c5094..99678570581b 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -212,6 +212,9 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
>      if (lpid == 0 && !msr_hv) {
>          return false;
>      }
> +    if ((pate->dw0 & PATE1_R_PRTS) < 5) {
> +        return false;
> +    }
>      /* More checks ... */
>      return true;
>  }




* Re: [PATCH 3/7] target/ppc: Assert if HV mode is set when running under a pseries machine
  2020-03-30  9:49 ` [PATCH 3/7] target/ppc: Assert if HV mode is set when running under a pseries machine Cédric Le Goater
@ 2020-03-30 10:46   ` Greg Kurz
  2020-03-31  0:59   ` David Gibson
  1 sibling, 0 replies; 26+ messages in thread
From: Greg Kurz @ 2020-03-30 10:46 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Suraj Jitindar Singh, qemu-ppc, qemu-devel, Nicholas Piggin,
	David Gibson

On Mon, 30 Mar 2020 11:49:42 +0200
Cédric Le Goater <clg@kaod.org> wrote:

> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> Signed-off-by: Cédric Le Goater <clg@kaod.org>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  target/ppc/mmu-radix64.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index f6007e956569..d2422d1c54c9 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -231,6 +231,7 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>      ppc_v3_pate_t pate;
>      bool relocation;
>  
> +    assert(!(msr_hv && cpu->vhyp));
>      assert((rwx == 0) || (rwx == 1) || (rwx == 2));
>  
>      relocation = ((rwx == 2) && (msr_ir == 1)) || ((rwx != 2) && (msr_dr == 1));




* Re: [PATCH 2/7] target/ppc: Introduce a relocation bool in ppc_radix64_handle_mmu_fault()
  2020-03-30  9:49 ` [PATCH 2/7] target/ppc: Introduce a relocation bool in ppc_radix64_handle_mmu_fault() Cédric Le Goater
@ 2020-03-30 13:01   ` Greg Kurz
  2020-03-31  0:58   ` David Gibson
  1 sibling, 0 replies; 26+ messages in thread
From: Greg Kurz @ 2020-03-30 13:01 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Suraj Jitindar Singh, qemu-ppc, qemu-devel, Nicholas Piggin,
	David Gibson

On Mon, 30 Mar 2020 11:49:41 +0200
Cédric Le Goater <clg@kaod.org> wrote:

> It will ease the introduction of new routines for partition-scoped
> Radix translation.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> Signed-off-by: Cédric Le Goater <clg@kaod.org>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  target/ppc/mmu-radix64.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index 99678570581b..f6007e956569 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -229,12 +229,13 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>      uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
>      int page_size, prot, fault_cause = 0;
>      ppc_v3_pate_t pate;
> +    bool relocation;
>  
>      assert((rwx == 0) || (rwx == 1) || (rwx == 2));
>  
> +    relocation = ((rwx == 2) && (msr_ir == 1)) || ((rwx != 2) && (msr_dr == 1));
>      /* HV or virtual hypervisor Real Mode Access */
> -    if ((msr_hv || cpu->vhyp) &&
> -        (((rwx == 2) && (msr_ir == 0)) || ((rwx != 2) && (msr_dr == 0)))) {
> +    if (!relocation && (msr_hv || cpu->vhyp)) {
>          /* In real mode top 4 effective addr bits (mostly) ignored */
>          raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
>  




* Re: [PATCH 4/7] target/ppc: Introduce ppc_radix64_xlate() for Radix tree translation
  2020-03-30  9:49 ` [PATCH 4/7] target/ppc: Introduce ppc_radix64_xlate() for Radix tree translation Cédric Le Goater
@ 2020-03-30 14:18   ` Greg Kurz
  2020-03-30 15:24     ` Cédric Le Goater
  0 siblings, 1 reply; 26+ messages in thread
From: Greg Kurz @ 2020-03-30 14:18 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Suraj Jitindar Singh, qemu-ppc, qemu-devel, Nicholas Piggin,
	David Gibson

On Mon, 30 Mar 2020 11:49:43 +0200
Cédric Le Goater <clg@kaod.org> wrote:

> This is moving code under a new ppc_radix64_xlate() routine shared by
> the MMU Radix page fault handler and the get_phys_page_debug PPC
> callback. The difference being that get_phys_page_debug does not
> generate exceptions.
> 
> The specific part of process-scoped Radix translation is moved under
> ppc_radix64_process_scoped_xlate() in preparation of the future
> support for partition-scoped Radix translation.
> 
> It should be functionally equivalent.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> Signed-off-by: Cédric Le Goater <clg@kaod.org>
> ---
>  target/ppc/mmu-radix64.c | 201 ++++++++++++++++++++++-----------------
>  1 file changed, 116 insertions(+), 85 deletions(-)
> 
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index d2422d1c54c9..b4e6abcd2d35 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -219,17 +219,119 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
>      return true;
>  }
>  
> +static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
> +                                      vaddr eaddr, uint64_t lpid, uint64_t pid,
> +                                      ppc_v3_pate_t pate, hwaddr *g_raddr,
> +                                      int *g_prot, int *g_page_size,
> +                                      bool cause_excp)
> +{
> +    CPUState *cs = CPU(cpu);
> +    uint64_t offset, size, prtbe_addr, prtbe0, pte;
> +    int fault_cause = 0;
> +    hwaddr pte_addr;
> +
> +    /* Index Process Table by PID to Find Corresponding Process Table Entry */
> +    offset = pid * sizeof(struct prtb_entry);
> +    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
> +    if (offset >= size) {
> +        /* offset exceeds size of the process table */
> +        if (cause_excp) {
> +            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> +        }
> +        return 1;
> +    }
> +    prtbe_addr = (pate.dw1 & PATE1_R_PRTB) + offset;
> +    prtbe0 = ldq_phys(cs->as, prtbe_addr);
> +
> +    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
> +    *g_page_size = PRTBE_R_GET_RTS(prtbe0);
> +    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
> +                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
> +                                g_raddr, g_page_size, &fault_cause, &pte_addr);
> +
> +    if (!(pte & R_PTE_VALID) ||
> +        ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot)) {

Current code checks !pte instead of !(pte & R_PTE_VALID)... and it seems
that ppc_radix64_walk_tree() already handles that:

    if (!(pde & R_PTE_VALID)) { /* Invalid Entry */
        *fault_cause |= DSISR_NOPTE;
        return 0;
    }

> +        /* No valid pte or access denied due to protection */
> +        if (cause_excp) {
> +            ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
> +        }
> +        return 1;
> +    }
> +
> +    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, g_prot);
> +
> +    return 0;
> +}
> +
> +static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
> +                             uint64_t lpid, uint64_t pid, bool relocation,
> +                             hwaddr *raddr, int *psizep, int *protp,
> +                             bool cause_excp)
> +{
> +    ppc_v3_pate_t pate;
> +    int psize, prot;
> +    hwaddr g_raddr;
> +
> +    *psizep = INT_MAX;
> +    *protp = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
> +
> +    /* Get Process Table */
> +    if (cpu->vhyp && lpid == 0) {

Current code doesn't check lpid == 0. Not sure what it's for...
especially since env->spr[SPR_LPIDR] is always 0 with pseries machine
types AFAICT... is it even possible to have lpid != 0 here?


Rest LGTM.

> +        PPCVirtualHypervisorClass *vhc;
> +        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
> +        vhc->get_pate(cpu->vhyp, &pate);
> +    } else {
> +        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
> +            if (cause_excp) {
> +                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> +            }
> +            return 1;
> +        }
> +        if (!validate_pate(cpu, lpid, &pate)) {
> +            if (cause_excp) {
> +                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
> +            }
> +            return 1;
> +        }
> +        /* We don't support guest mode yet */
> +        if (lpid != 0) {
> +            error_report("PowerNV guest support Unimplemented");
> +            exit(1);
> +        }
> +    }
> +
> +    /*
> +     * Perform process-scoped translation if relocation enabled.
> +     *
> +     * - Translates an effective address to a host real address in
> +     *   quadrants 0 and 3 when HV=1.
> +     */
> +    if (relocation) {
> +        int ret = ppc_radix64_process_scoped_xlate(cpu, rwx, eaddr, lpid, pid,
> +                                                   pate, &g_raddr, &prot,
> +                                                   &psize, cause_excp);
> +        if (ret) {
> +            return ret;
> +        }
> +        *psizep = MIN(*psizep, psize);
> +        *protp &= prot;
> +    } else {
> +        g_raddr = eaddr & R_EADDR_MASK;
> +    }
> +
> +    *raddr = g_raddr;
> +    return 0;
> +}
> +
>  int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>                                   int mmu_idx)
>  {
>      CPUState *cs = CPU(cpu);
>      CPUPPCState *env = &cpu->env;
> -    PPCVirtualHypervisorClass *vhc;
> -    hwaddr raddr, pte_addr;
> -    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
> -    int page_size, prot, fault_cause = 0;
> -    ppc_v3_pate_t pate;
> +    uint64_t lpid = 0, pid = 0;
> +    int page_size, prot;
>      bool relocation;
> +    hwaddr raddr;
>  
>      assert(!(msr_hv && cpu->vhyp));
>      assert((rwx == 0) || (rwx == 1) || (rwx == 2));
> @@ -268,48 +370,11 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>          return 1;
>      }
>  
> -    /* Get Process Table */
> -    if (cpu->vhyp) {
> -        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
> -        vhc->get_pate(cpu->vhyp, &pate);
> -    } else {
> -        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
> -            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> -            return 1;
> -        }
> -        if (!validate_pate(cpu, lpid, &pate)) {
> -            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
> -        }
> -        /* We don't support guest mode yet */
> -        if (lpid != 0) {
> -            error_report("PowerNV guest support Unimplemented");
> -            exit(1);
> -       }
> -    }
> -
> -    /* Index Process Table by PID to Find Corresponding Process Table Entry */
> -    offset = pid * sizeof(struct prtb_entry);
> -    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
> -    if (offset >= size) {
> -        /* offset exceeds size of the process table */
> -        ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> +    /* Translate eaddr to raddr (where raddr is addr qemu needs for access) */
> +    if (ppc_radix64_xlate(cpu, eaddr, rwx, lpid, pid, relocation, &raddr,
> +                          &page_size, &prot, 1)) {
>          return 1;
>      }
> -    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
> -
> -    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
> -    page_size = PRTBE_R_GET_RTS(prtbe0);
> -    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
> -                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
> -                                &raddr, &page_size, &fault_cause, &pte_addr);
> -    if (!pte || ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, &prot)) {
> -        /* Couldn't get pte or access denied due to protection */
> -        ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
> -        return 1;
> -    }
> -
> -    /* Update Reference and Change Bits */
> -    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, &prot);
>  
>      tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
>                   prot, mmu_idx, 1UL << page_size);
> @@ -318,16 +383,13 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>  
>  hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
>  {
> -    CPUState *cs = CPU(cpu);
>      CPUPPCState *env = &cpu->env;
> -    PPCVirtualHypervisorClass *vhc;
> -    hwaddr raddr, pte_addr;
> -    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
> -    int page_size, fault_cause = 0;
> -    ppc_v3_pate_t pate;
> +    uint64_t lpid = 0, pid = 0;
> +    int psize, prot;
> +    hwaddr raddr;
>  
>      /* Handle Real Mode */
> -    if (msr_dr == 0) {
> +    if ((msr_dr == 0) && (msr_hv || (cpu->vhyp && lpid == 0))) {
>          /* In real mode top 4 effective addr bits (mostly) ignored */
>          return eaddr & 0x0FFFFFFFFFFFFFFFULL;
>      }
> @@ -337,39 +399,8 @@ hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
>          return -1;
>      }
>  
> -    /* Get Process Table */
> -    if (cpu->vhyp) {
> -        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
> -        vhc->get_pate(cpu->vhyp, &pate);
> -    } else {
> -        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
> -            return -1;
> -        }
> -        if (!validate_pate(cpu, lpid, &pate)) {
> -            return -1;
> -        }
> -        /* We don't support guest mode yet */
> -        if (lpid != 0) {
> -            error_report("PowerNV guest support Unimplemented");
> -            exit(1);
> -       }
> -    }
> -
> -    /* Index Process Table by PID to Find Corresponding Process Table Entry */
> -    offset = pid * sizeof(struct prtb_entry);
> -    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
> -    if (offset >= size) {
> -        /* offset exceeds size of the process table */
> -        return -1;
> -    }
> -    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
> -
> -    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
> -    page_size = PRTBE_R_GET_RTS(prtbe0);
> -    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
> -                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
> -                                &raddr, &page_size, &fault_cause, &pte_addr);
> -    if (!pte) {
> +    if (ppc_radix64_xlate(cpu, eaddr, 0, lpid, pid, msr_dr, &raddr, &psize,
> +                          &prot, 0)) {
>          return -1;
>      }
>  



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4/7] target/ppc: Introduce ppc_radix64_xlate() for Radix tree translation
  2020-03-30 14:18   ` Greg Kurz
@ 2020-03-30 15:24     ` Cédric Le Goater
  2020-03-30 15:34       ` Cédric Le Goater
  0 siblings, 1 reply; 26+ messages in thread
From: Cédric Le Goater @ 2020-03-30 15:24 UTC (permalink / raw)
  To: Greg Kurz
  Cc: Suraj Jitindar Singh, qemu-ppc, qemu-devel, Nicholas Piggin,
	David Gibson

On 3/30/20 4:18 PM, Greg Kurz wrote:
> On Mon, 30 Mar 2020 11:49:43 +0200
> Cédric Le Goater <clg@kaod.org> wrote:
> 
>> This is moving code under a new ppc_radix64_xlate() routine shared by
>> the MMU Radix page fault handler and the get_phys_page_debug PPC
>> callback. The difference being that get_phys_page_debug does not
>> generate exceptions.
>>
>> The specific part of process-scoped Radix translation is moved under
>> ppc_radix64_process_scoped_xlate() in preparation of the future
>> support for partition-scoped Radix translation.
>>
>> It should be functionally equivalent.
>>
>> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
>> Signed-off-by: Cédric Le Goater <clg@kaod.org>
>> ---
>>  target/ppc/mmu-radix64.c | 201 ++++++++++++++++++++++-----------------
>>  1 file changed, 116 insertions(+), 85 deletions(-)
>>
>> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
>> index d2422d1c54c9..b4e6abcd2d35 100644
>> --- a/target/ppc/mmu-radix64.c
>> +++ b/target/ppc/mmu-radix64.c
>> @@ -219,17 +219,119 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
>>      return true;
>>  }
>>  
>> +static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
>> +                                      vaddr eaddr, uint64_t lpid, uint64_t pid,
>> +                                      ppc_v3_pate_t pate, hwaddr *g_raddr,
>> +                                      int *g_prot, int *g_page_size,
>> +                                      bool cause_excp)
>> +{
>> +    CPUState *cs = CPU(cpu);
>> +    uint64_t offset, size, prtbe_addr, prtbe0, pte;
>> +    int fault_cause = 0;
>> +    hwaddr pte_addr;
>> +
>> +    /* Index Process Table by PID to Find Corresponding Process Table Entry */
>> +    offset = pid * sizeof(struct prtb_entry);
>> +    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
>> +    if (offset >= size) {
>> +        /* offset exceeds size of the process table */
>> +        if (cause_excp) {
>> +            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
>> +        }
>> +        return 1;
>> +    }
>> +    prtbe_addr = (pate.dw1 & PATE1_R_PRTB) + offset;
>> +    prtbe0 = ldq_phys(cs->as, prtbe_addr);
>> +
>> +    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
>> +    *g_page_size = PRTBE_R_GET_RTS(prtbe0);
>> +    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
>> +                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
>> +                                g_raddr, g_page_size, &fault_cause, &pte_addr);
>> +
>> +    if (!(pte & R_PTE_VALID) ||
>> +        ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot)) {
> 
> Current code checks !pte instead of !(pte & R_PTE_VALID)... and it seems
> that ppc_radix64_walk_tree() already handles that:
> 
>     if (!(pde & R_PTE_VALID)) { /* Invalid Entry */
>         *fault_cause |= DSISR_NOPTE;
>         return 0;
>     }

There are more changes in the following patches when partition-scoped
translation is introduced.

> 
>> +        /* No valid pte or access denied due to protection */
>> +        if (cause_excp) {
>> +            ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
>> +        }
>> +        return 1;
>> +    }
>> +
>> +    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, g_prot);
>> +
>> +    return 0;
>> +}
>> +
>> +static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>> +                             uint64_t lpid, uint64_t pid, bool relocation,
>> +                             hwaddr *raddr, int *psizep, int *protp,
>> +                             bool cause_excp)
>> +{
>> +    ppc_v3_pate_t pate;
>> +    int psize, prot;
>> +    hwaddr g_raddr;
>> +
>> +    *psizep = INT_MAX;
>> +    *protp = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
>> +
>> +    /* Get Process Table */
>> +    if (cpu->vhyp && lpid == 0) {
> 
>> Current code doesn't check lpid == 0. Not sure I see what it's for...

cpu->vhyp means a pseries machine, lpid == 0 means accessing quadrant3, 
so it's the kernel.

> especially env->spr[SPR_LPIDR] is always 0 with pseries machine types
> AFAICT... is it even possible to have lpid != 0 here ?

When under PowerNV, SPR_LPIDR can be set, but not under pseries.

C.

> 
> 
> Rest LGTM.
> 
>> +        PPCVirtualHypervisorClass *vhc;
>> +        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
>> +        vhc->get_pate(cpu->vhyp, &pate);
>> +    } else {
>> +        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
>> +            if (cause_excp) {
>> +                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
>> +            }
>> +            return 1;
>> +        }
>> +        if (!validate_pate(cpu, lpid, &pate)) {
>> +            if (cause_excp) {
>> +                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
>> +            }
>> +            return 1;
>> +        }
>> +        /* We don't support guest mode yet */
>> +        if (lpid != 0) {
>> +            error_report("PowerNV guest support Unimplemented");
>> +            exit(1);
>> +        }
>> +    }
>> +
>> +    /*
>> +     * Perform process-scoped translation if relocation enabled.
>> +     *
>> +     * - Translates an effective address to a host real address in
>> +     *   quadrants 0 and 3 when HV=1.
>> +     */
>> +    if (relocation) {
>> +        int ret = ppc_radix64_process_scoped_xlate(cpu, rwx, eaddr, lpid, pid,
>> +                                                   pate, &g_raddr, &prot,
>> +                                                   &psize, cause_excp);
>> +        if (ret) {
>> +            return ret;
>> +        }
>> +        *psizep = MIN(*psizep, psize);
>> +        *protp &= prot;
>> +    } else {
>> +        g_raddr = eaddr & R_EADDR_MASK;
>> +    }
>> +
>> +    *raddr = g_raddr;
>> +    return 0;
>> +}
>> +
>>  int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>>                                   int mmu_idx)
>>  {
>>      CPUState *cs = CPU(cpu);
>>      CPUPPCState *env = &cpu->env;
>> -    PPCVirtualHypervisorClass *vhc;
>> -    hwaddr raddr, pte_addr;
>> -    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
>> -    int page_size, prot, fault_cause = 0;
>> -    ppc_v3_pate_t pate;
>> +    uint64_t lpid = 0, pid = 0;
>> +    int page_size, prot;
>>      bool relocation;
>> +    hwaddr raddr;
>>  
>>      assert(!(msr_hv && cpu->vhyp));
>>      assert((rwx == 0) || (rwx == 1) || (rwx == 2));
>> @@ -268,48 +370,11 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>>          return 1;
>>      }
>>  
>> -    /* Get Process Table */
>> -    if (cpu->vhyp) {
>> -        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
>> -        vhc->get_pate(cpu->vhyp, &pate);
>> -    } else {
>> -        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
>> -            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
>> -            return 1;
>> -        }
>> -        if (!validate_pate(cpu, lpid, &pate)) {
>> -            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
>> -        }
>> -        /* We don't support guest mode yet */
>> -        if (lpid != 0) {
>> -            error_report("PowerNV guest support Unimplemented");
>> -            exit(1);
>> -       }
>> -    }
>> -
>> -    /* Index Process Table by PID to Find Corresponding Process Table Entry */
>> -    offset = pid * sizeof(struct prtb_entry);
>> -    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
>> -    if (offset >= size) {
>> -        /* offset exceeds size of the process table */
>> -        ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
>> +    /* Translate eaddr to raddr (where raddr is addr qemu needs for access) */
>> +    if (ppc_radix64_xlate(cpu, eaddr, rwx, lpid, pid, relocation, &raddr,
>> +                          &page_size, &prot, 1)) {
>>          return 1;
>>      }
>> -    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
>> -
>> -    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
>> -    page_size = PRTBE_R_GET_RTS(prtbe0);
>> -    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
>> -                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
>> -                                &raddr, &page_size, &fault_cause, &pte_addr);
>> -    if (!pte || ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, &prot)) {
>> -        /* Couldn't get pte or access denied due to protection */
>> -        ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
>> -        return 1;
>> -    }
>> -
>> -    /* Update Reference and Change Bits */
>> -    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, &prot);
>>  
>>      tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
>>                   prot, mmu_idx, 1UL << page_size);
>> @@ -318,16 +383,13 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>>  
>>  hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
>>  {
>> -    CPUState *cs = CPU(cpu);
>>      CPUPPCState *env = &cpu->env;
>> -    PPCVirtualHypervisorClass *vhc;
>> -    hwaddr raddr, pte_addr;
>> -    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
>> -    int page_size, fault_cause = 0;
>> -    ppc_v3_pate_t pate;
>> +    uint64_t lpid = 0, pid = 0;
>> +    int psize, prot;
>> +    hwaddr raddr;
>>  
>>      /* Handle Real Mode */
>> -    if (msr_dr == 0) {
>> +    if ((msr_dr == 0) && (msr_hv || (cpu->vhyp && lpid == 0))) {
>>          /* In real mode top 4 effective addr bits (mostly) ignored */
>>          return eaddr & 0x0FFFFFFFFFFFFFFFULL;
>>      }
>> @@ -337,39 +399,8 @@ hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
>>          return -1;
>>      }
>>  
>> -    /* Get Process Table */
>> -    if (cpu->vhyp) {
>> -        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
>> -        vhc->get_pate(cpu->vhyp, &pate);
>> -    } else {
>> -        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
>> -            return -1;
>> -        }
>> -        if (!validate_pate(cpu, lpid, &pate)) {
>> -            return -1;
>> -        }
>> -        /* We don't support guest mode yet */
>> -        if (lpid != 0) {
>> -            error_report("PowerNV guest support Unimplemented");
>> -            exit(1);
>> -       }
>> -    }
>> -
>> -    /* Index Process Table by PID to Find Corresponding Process Table Entry */
>> -    offset = pid * sizeof(struct prtb_entry);
>> -    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
>> -    if (offset >= size) {
>> -        /* offset exceeds size of the process table */
>> -        return -1;
>> -    }
>> -    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
>> -
>> -    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
>> -    page_size = PRTBE_R_GET_RTS(prtbe0);
>> -    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
>> -                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
>> -                                &raddr, &page_size, &fault_cause, &pte_addr);
>> -    if (!pte) {
>> +    if (ppc_radix64_xlate(cpu, eaddr, 0, lpid, pid, msr_dr, &raddr, &psize,
>> +                          &prot, 0)) {
>>          return -1;
>>      }
>>  
> 



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4/7] target/ppc: Introduce ppc_radix64_xlate() for Radix tree translation
  2020-03-30 15:24     ` Cédric Le Goater
@ 2020-03-30 15:34       ` Cédric Le Goater
  2020-03-30 17:07         ` Greg Kurz
  2020-03-31  1:13         ` David Gibson
  0 siblings, 2 replies; 26+ messages in thread
From: Cédric Le Goater @ 2020-03-30 15:34 UTC (permalink / raw)
  To: Greg Kurz
  Cc: Suraj Jitindar Singh, qemu-ppc, qemu-devel, Nicholas Piggin,
	David Gibson

>>> +        /* No valid pte or access denied due to protection */
>>> +        if (cause_excp) {
>>> +            ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
>>> +        }
>>> +        return 1;
>>> +    }
>>> +
>>> +    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, g_prot);
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> +static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>>> +                             uint64_t lpid, uint64_t pid, bool relocation,
>>> +                             hwaddr *raddr, int *psizep, int *protp,
>>> +                             bool cause_excp)
>>> +{
>>> +    ppc_v3_pate_t pate;
>>> +    int psize, prot;
>>> +    hwaddr g_raddr;
>>> +
>>> +    *psizep = INT_MAX;
>>> +    *protp = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
>>> +
>>> +    /* Get Process Table */
>>> +    if (cpu->vhyp && lpid == 0) {
>>
>> Current code doesn't check lpid == 0. Not sure I see what it's for...
> 
> cpu->vhyp means a pseries machine, lpid == 0 means accessing quadrant3, 
> so it's the kernel.

Sorry. I misread that. It would be pid == 0 for the kernel.

So yes, the test cpu->vhyp && lpid == 0 might be a bit overkill, given 
that lpid is always 0 when running under a QEMU pseries machine.
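
To make that explicit, here is how I picture the quadrant (the top two
bits of the effective address) selecting the (lpid, pid) pair when
HV=1. This is a sketch from my reading of the ISA, not a quote of
mmu-radix64.c, so take the helper name and the details as assumptions;
with HV=0, quadrants 1 and 2 are invalid and LPIDR is used throughout:

static void radix_quadrant_ids(CPUPPCState *env, vaddr eaddr,
                               uint64_t *lpid, uint64_t *pid)
{
    switch (eaddr >> 62) {              /* quadrant = top two EA bits */
    case 0:                             /* quadrant 0: host application */
        *lpid = 0;
        *pid = env->spr[SPR_BOOKS_PID];
        break;
    case 1:                             /* quadrant 1: guest application */
        *lpid = env->spr[SPR_LPIDR];
        *pid = env->spr[SPR_BOOKS_PID];
        break;
    case 2:                             /* quadrant 2: guest kernel */
        *lpid = env->spr[SPR_LPIDR];
        *pid = 0;
        break;
    case 3:                             /* quadrant 3: host kernel */
        *lpid = 0;
        *pid = 0;
        break;
    }
}

Under a pseries machine LPIDR stays at 0, so pid == 0 really is the
only thing that singles out the kernel.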


C.

> 
>> especially env->spr[SPR_LPIDR] is always 0 with pseries machine types
>> AFAICT... is it even possible to have lpid != 0 here ?
> 
> When under PowerNV, SPR_LPIDR can be set, but not under pseries.
> 
> C.
> 
>>
>>
>> Rest LGTM.
>>
>>> +        PPCVirtualHypervisorClass *vhc;
>>> +        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
>>> +        vhc->get_pate(cpu->vhyp, &pate);
>>> +    } else {
>>> +        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
>>> +            if (cause_excp) {
>>> +                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
>>> +            }
>>> +            return 1;
>>> +        }
>>> +        if (!validate_pate(cpu, lpid, &pate)) {
>>> +            if (cause_excp) {
>>> +                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
>>> +            }
>>> +            return 1;
>>> +        }
>>> +        /* We don't support guest mode yet */
>>> +        if (lpid != 0) {
>>> +            error_report("PowerNV guest support Unimplemented");
>>> +            exit(1);
>>> +        }
>>> +    }
>>> +
>>> +    /*
>>> +     * Perform process-scoped translation if relocation enabled.
>>> +     *
>>> +     * - Translates an effective address to a host real address in
>>> +     *   quadrants 0 and 3 when HV=1.
>>> +     */
>>> +    if (relocation) {
>>> +        int ret = ppc_radix64_process_scoped_xlate(cpu, rwx, eaddr, lpid, pid,
>>> +                                                   pate, &g_raddr, &prot,
>>> +                                                   &psize, cause_excp);
>>> +        if (ret) {
>>> +            return ret;
>>> +        }
>>> +        *psizep = MIN(*psizep, psize);
>>> +        *protp &= prot;
>>> +    } else {
>>> +        g_raddr = eaddr & R_EADDR_MASK;
>>> +    }
>>> +
>>> +    *raddr = g_raddr;
>>> +    return 0;
>>> +}
>>> +
>>>  int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>>>                                   int mmu_idx)
>>>  {
>>>      CPUState *cs = CPU(cpu);
>>>      CPUPPCState *env = &cpu->env;
>>> -    PPCVirtualHypervisorClass *vhc;
>>> -    hwaddr raddr, pte_addr;
>>> -    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
>>> -    int page_size, prot, fault_cause = 0;
>>> -    ppc_v3_pate_t pate;
>>> +    uint64_t lpid = 0, pid = 0;
>>> +    int page_size, prot;
>>>      bool relocation;
>>> +    hwaddr raddr;
>>>  
>>>      assert(!(msr_hv && cpu->vhyp));
>>>      assert((rwx == 0) || (rwx == 1) || (rwx == 2));
>>> @@ -268,48 +370,11 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>>>          return 1;
>>>      }
>>>  
>>> -    /* Get Process Table */
>>> -    if (cpu->vhyp) {
>>> -        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
>>> -        vhc->get_pate(cpu->vhyp, &pate);
>>> -    } else {
>>> -        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
>>> -            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
>>> -            return 1;
>>> -        }
>>> -        if (!validate_pate(cpu, lpid, &pate)) {
>>> -            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
>>> -        }
>>> -        /* We don't support guest mode yet */
>>> -        if (lpid != 0) {
>>> -            error_report("PowerNV guest support Unimplemented");
>>> -            exit(1);
>>> -       }
>>> -    }
>>> -
>>> -    /* Index Process Table by PID to Find Corresponding Process Table Entry */
>>> -    offset = pid * sizeof(struct prtb_entry);
>>> -    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
>>> -    if (offset >= size) {
>>> -        /* offset exceeds size of the process table */
>>> -        ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
>>> +    /* Translate eaddr to raddr (where raddr is addr qemu needs for access) */
>>> +    if (ppc_radix64_xlate(cpu, eaddr, rwx, lpid, pid, relocation, &raddr,
>>> +                          &page_size, &prot, 1)) {
>>>          return 1;
>>>      }
>>> -    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
>>> -
>>> -    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
>>> -    page_size = PRTBE_R_GET_RTS(prtbe0);
>>> -    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
>>> -                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
>>> -                                &raddr, &page_size, &fault_cause, &pte_addr);
>>> -    if (!pte || ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, &prot)) {
>>> -        /* Couldn't get pte or access denied due to protection */
>>> -        ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
>>> -        return 1;
>>> -    }
>>> -
>>> -    /* Update Reference and Change Bits */
>>> -    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, &prot);
>>>  
>>>      tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
>>>                   prot, mmu_idx, 1UL << page_size);
>>> @@ -318,16 +383,13 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>>>  
>>>  hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
>>>  {
>>> -    CPUState *cs = CPU(cpu);
>>>      CPUPPCState *env = &cpu->env;
>>> -    PPCVirtualHypervisorClass *vhc;
>>> -    hwaddr raddr, pte_addr;
>>> -    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
>>> -    int page_size, fault_cause = 0;
>>> -    ppc_v3_pate_t pate;
>>> +    uint64_t lpid = 0, pid = 0;
>>> +    int psize, prot;
>>> +    hwaddr raddr;
>>>  
>>>      /* Handle Real Mode */
>>> -    if (msr_dr == 0) {
>>> +    if ((msr_dr == 0) && (msr_hv || (cpu->vhyp && lpid == 0))) {
>>>          /* In real mode top 4 effective addr bits (mostly) ignored */
>>>          return eaddr & 0x0FFFFFFFFFFFFFFFULL;
>>>      }
>>> @@ -337,39 +399,8 @@ hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
>>>          return -1;
>>>      }
>>>  
>>> -    /* Get Process Table */
>>> -    if (cpu->vhyp) {
>>> -        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
>>> -        vhc->get_pate(cpu->vhyp, &pate);
>>> -    } else {
>>> -        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
>>> -            return -1;
>>> -        }
>>> -        if (!validate_pate(cpu, lpid, &pate)) {
>>> -            return -1;
>>> -        }
>>> -        /* We don't support guest mode yet */
>>> -        if (lpid != 0) {
>>> -            error_report("PowerNV guest support Unimplemented");
>>> -            exit(1);
>>> -       }
>>> -    }
>>> -
>>> -    /* Index Process Table by PID to Find Corresponding Process Table Entry */
>>> -    offset = pid * sizeof(struct prtb_entry);
>>> -    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
>>> -    if (offset >= size) {
>>> -        /* offset exceeds size of the process table */
>>> -        return -1;
>>> -    }
>>> -    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
>>> -
>>> -    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
>>> -    page_size = PRTBE_R_GET_RTS(prtbe0);
>>> -    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
>>> -                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
>>> -                                &raddr, &page_size, &fault_cause, &pte_addr);
>>> -    if (!pte) {
>>> +    if (ppc_radix64_xlate(cpu, eaddr, 0, lpid, pid, msr_dr, &raddr, &psize,
>>> +                          &prot, 0)) {
>>>          return -1;
>>>      }
>>>  
>>
> 



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 5/7] target/ppc: Rework ppc_radix64_walk_tree() for partition-scoped translation
  2020-03-30  9:49 ` [PATCH 5/7] target/ppc: Rework ppc_radix64_walk_tree() for partition-scoped translation Cédric Le Goater
@ 2020-03-30 17:00   ` Greg Kurz
  2020-03-31  9:10     ` Cédric Le Goater
  0 siblings, 1 reply; 26+ messages in thread
From: Greg Kurz @ 2020-03-30 17:00 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Suraj Jitindar Singh, qemu-ppc, qemu-devel, Nicholas Piggin,
	David Gibson

On Mon, 30 Mar 2020 11:49:44 +0200
Cédric Le Goater <clg@kaod.org> wrote:

> The ppc_radix64_walk_tree() routine walks through the nested radix
> tables to look for a PTE.
> 
> Split it two and introduce a new routine ppc_radix64_next_level()

Split it in two...

> which we will use for partition-scoped Radix translation when
> translating the process tree addresses.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> Signed-off-by: Cédric Le Goater <clg@kaod.org>
> ---
>  target/ppc/mmu-radix64.c | 50 ++++++++++++++++++++++++++--------------
>  1 file changed, 33 insertions(+), 17 deletions(-)
> 
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index b4e6abcd2d35..136498111f60 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -162,44 +162,60 @@ static void ppc_radix64_set_rc(PowerPCCPU *cpu, int rwx, uint64_t pte,
>      }
>  }
>  
> -static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
> -                                      uint64_t base_addr, uint64_t nls,
> -                                      hwaddr *raddr, int *psize,
> -                                      int *fault_cause, hwaddr *pte_addr)
> +static uint64_t ppc_radix64_next_level(PowerPCCPU *cpu, vaddr eaddr,
> +                                       uint64_t *pte_addr, uint64_t *nls,
> +                                       int *psize, int *fault_cause)
>  {
>      CPUState *cs = CPU(cpu);
>      uint64_t index, pde;
>  
> -    if (nls < 5) { /* Directory maps less than 2**5 entries */
> +    if (*nls < 5) { /* Directory maps less than 2**5 entries */
>          *fault_cause |= DSISR_R_BADCONFIG;
>          return 0;
>      }
>  

I think this should stay in the caller...

>      /* Read page <directory/table> entry from guest address space */
> -    index = eaddr >> (*psize - nls); /* Shift */
> -    index &= ((1UL << nls) - 1); /* Mask */
> -    pde = ldq_phys(cs->as, base_addr + (index * sizeof(pde)));
> -    if (!(pde & R_PTE_VALID)) { /* Invalid Entry */
> +    pde = ldq_phys(cs->as, *pte_addr);
> +    if (!(pde & R_PTE_VALID)) {         /* Invalid Entry */
>          *fault_cause |= DSISR_NOPTE;
>          return 0;
>      }
>  
> -    *psize -= nls;
> +    *psize -= *nls;
> +    if (!(pde & R_PTE_LEAF)) { /* Prepare for next iteration */
> +        *nls = pde & R_PDE_NLS;
> +        index = eaddr >> (*psize - *nls);       /* Shift */
> +        index &= ((1UL << *nls) - 1);           /* Mask */
> +        *pte_addr = (pde & R_PDE_NLB) + (index * sizeof(pde));
> +    }
> +    return pde;
> +}
> +
> +static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
> +                                      uint64_t base_addr, uint64_t nls,
> +                                      hwaddr *raddr, int *psize,
> +                                      int *fault_cause, hwaddr *pte_addr)
> +{
> +    uint64_t index, pde;
> +
> +    index = eaddr >> (*psize - nls);    /* Shift */
> +    index &= ((1UL << nls) - 1);       /* Mask */
> +    *pte_addr = base_addr + (index * sizeof(pde));
> +    do {

... here, so that we keep a well-established "bad config" path at
the root level, just like the current code does.

Since ppc_radix64_walk_tree() now calls ppc_radix64_next_level() in a
loop instead of recursing, and since ppc_radix64_next_level() returns
the nls value for the next level, it really makes sense to have this
check in ppc_radix64_walk_tree() and maybe put an assert in
ppc_radix64_next_level().
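
Something along the lines of the sketch below is what I mean. It is
only an illustration of the intent, untested and not the final code:

static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
                                      uint64_t base_addr, uint64_t nls,
                                      hwaddr *raddr, int *psize,
                                      int *fault_cause, hwaddr *pte_addr)
{
    uint64_t index, pde;

    if (nls < 5) {  /* Root directory maps less than 2**5 entries */
        *fault_cause |= DSISR_R_BADCONFIG;
        return 0;
    }

    index = eaddr >> (*psize - nls);    /* Shift */
    index &= ((1UL << nls) - 1);        /* Mask */
    *pte_addr = base_addr + (index * sizeof(pde));
    do {
        /* ppc_radix64_next_level() would only assert(*nls >= 5) */
        pde = ppc_radix64_next_level(cpu, eaddr, pte_addr, &nls, psize,
                                     fault_cause);
        if ((pde & R_PTE_VALID) && !(pde & R_PTE_LEAF) && nls < 5) {
            /* Bad config at the next level, caught by the caller */
            *fault_cause |= DSISR_R_BADCONFIG;
            return 0;
        }
    } while ((pde & R_PTE_VALID) && !(pde & R_PTE_LEAF));

    /* Did we find a valid leaf? */
    if ((pde & R_PTE_VALID) && (pde & R_PTE_LEAF)) {
        uint64_t rpn = pde & R_PTE_RPN;
        uint64_t mask = (1UL << *psize) - 1;

        /* Or high bits of rpn and low bits of ea to form whole real addr */
        *raddr = (rpn & ~mask) | (eaddr & mask);
    }

    return pde;
}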

> +        pde = ppc_radix64_next_level(cpu, eaddr, pte_addr, &nls, psize,
> +                                     fault_cause);
> +    } while ((pde & R_PTE_VALID) && !(pde & R_PTE_LEAF));
>  
> -    /* Check if Leaf Entry -> Page Table Entry -> Stop the Search */
> -    if (pde & R_PTE_LEAF) {
> +    /* Did we find a valid leaf? */
> +    if ((pde & R_PTE_VALID) && (pde & R_PTE_LEAF)) {
>          uint64_t rpn = pde & R_PTE_RPN;
>          uint64_t mask = (1UL << *psize) - 1;
>  
>          /* Or high bits of rpn and low bits to ea to form whole real addr */
>          *raddr = (rpn & ~mask) | (eaddr & mask);
> -        *pte_addr = base_addr + (index * sizeof(pde));
> -        return pde;
>      }
>  
> -    /* Next Level of Radix Tree */
> -    return ppc_radix64_walk_tree(cpu, eaddr, pde & R_PDE_NLB, pde & R_PDE_NLS,
> -                                 raddr, psize, fault_cause, pte_addr);
> +    return pde;
>  }
>  
>  static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 6/7] target/ppc: Extend ppc_radix64_check_prot() with a 'partition_scoped' bool
  2020-03-30  9:49 ` [PATCH 6/7] target/ppc: Extend ppc_radix64_check_prot() with a 'partition_scoped' bool Cédric Le Goater
@ 2020-03-30 17:01   ` Greg Kurz
  2020-03-31  8:22     ` Cédric Le Goater
  2020-03-30 17:08   ` Greg Kurz
  1 sibling, 1 reply; 26+ messages in thread
From: Greg Kurz @ 2020-03-30 17:01 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Suraj Jitindar Singh, qemu-ppc, qemu-devel, Nicholas Piggin,
	David Gibson

On Mon, 30 Mar 2020 11:49:45 +0200
Cédric Le Goater <clg@kaod.org> wrote:

> This prepares ground for partition-scoped Radix translation.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> Signed-off-by: Cédric Le Goater <clg@kaod.org>
> ---
>  target/ppc/mmu-radix64.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index 136498111f60..3ae29ed90d49 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -105,7 +105,8 @@ static void ppc_radix64_raise_si(PowerPCCPU *cpu, int rwx, vaddr eaddr,
>  
>  
>  static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
> -                                   int *fault_cause, int *prot)
> +                                   int *fault_cause, int *prot,
> +                                   bool partition_scoped)
>  {
>      CPUPPCState *env = &cpu->env;
>      const int need_prot[] = { PAGE_READ, PAGE_WRITE, PAGE_EXEC };
> @@ -121,11 +122,11 @@ static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
>      }
>  
>      /* Determine permissions allowed by Encoded Access Authority */
> -    if ((pte & R_PTE_EAA_PRIV) && msr_pr) { /* Insufficient Privilege */
> +    if (!partition_scoped && (pte & R_PTE_EAA_PRIV) && msr_pr) {
>          *prot = 0;
> -    } else if (msr_pr || (pte & R_PTE_EAA_PRIV)) {
> +    } else if (msr_pr || (pte & R_PTE_EAA_PRIV) || partition_scoped) {
>          *prot = ppc_radix64_get_prot_eaa(pte);
> -    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) */
> +    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) && !partition_scoped */
>          *prot = ppc_radix64_get_prot_eaa(pte);
>          *prot &= ppc_radix64_get_prot_amr(cpu); /* Least combined permissions */
>      }
> @@ -266,7 +267,7 @@ static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
>                                  g_raddr, g_page_size, &fault_cause, &pte_addr);
>  
>      if (!(pte & R_PTE_VALID) ||
> -        ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot)) {
> +        ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot, 0)) {

Maybe pass false since ppc_radix64_check_prot() expects a bool ?

Apart from that,

Reviewed-by: Greg Kurz <groug@kaod.org>

>          /* No valid pte or access denied due to protection */
>          if (cause_excp) {
>              ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4/7] target/ppc: Introduce ppc_radix64_xlate() for Radix tree translation
  2020-03-30 15:34       ` Cédric Le Goater
@ 2020-03-30 17:07         ` Greg Kurz
  2020-03-31  1:13         ` David Gibson
  1 sibling, 0 replies; 26+ messages in thread
From: Greg Kurz @ 2020-03-30 17:07 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Suraj Jitindar Singh, qemu-ppc, qemu-devel, Nicholas Piggin,
	David Gibson

On Mon, 30 Mar 2020 17:34:40 +0200
Cédric Le Goater <clg@kaod.org> wrote:

> >>> +        /* No valid pte or access denied due to protection */
> >>> +        if (cause_excp) {
> >>> +            ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
> >>> +        }
> >>> +        return 1;
> >>> +    }
> >>> +
> >>> +    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, g_prot);
> >>> +
> >>> +    return 0;
> >>> +}
> >>> +
> >>> +static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
> >>> +                             uint64_t lpid, uint64_t pid, bool relocation,
> >>> +                             hwaddr *raddr, int *psizep, int *protp,
> >>> +                             bool cause_excp)
> >>> +{
> >>> +    ppc_v3_pate_t pate;
> >>> +    int psize, prot;
> >>> +    hwaddr g_raddr;
> >>> +
> >>> +    *psizep = INT_MAX;
> >>> +    *protp = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
> >>> +
> >>> +    /* Get Process Table */
> >>> +    if (cpu->vhyp && lpid == 0) {
> >>
> >> Current code doesn't check lpid == 0. Not sure I see what it's for...
> > 
> > cpu->vhyp means a pseries machine, lpid == 0 means accessing quadrant3, 
> > so it's the kernel.
> 
> Sorry. I misread that. It would be pid == 0 for the kernel.
> 

Yes.

> So yes, the test cpu->vhyp && lpid == 0 might be a bit overkill, given 
> that lpid is always 0 when running under a QEMU pseries machine.
> 

That's my thinking as well.

> 
> C.
> 
> > 
> >> especially env->spr[SPR_LPIDR] is always 0 with pseries machine types
> >> AFAICT... is it even possible to have lpid != 0 here ?
> > 
> > When under PowerNV, SPR_LPIDR can be set, but not under pseries.
> > 
> > C.
> > 
> >>
> >>
> >> Rest LGTM.
> >>

Just a few nits below...

> >>> +        PPCVirtualHypervisorClass *vhc;
> >>> +        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
> >>> +        vhc->get_pate(cpu->vhyp, &pate);
> >>> +    } else {
> >>> +        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
> >>> +            if (cause_excp) {
> >>> +                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> >>> +            }
> >>> +            return 1;
> >>> +        }
> >>> +        if (!validate_pate(cpu, lpid, &pate)) {
> >>> +            if (cause_excp) {
> >>> +                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
> >>> +            }
> >>> +            return 1;
> >>> +        }
> >>> +        /* We don't support guest mode yet */
> >>> +        if (lpid != 0) {
> >>> +            error_report("PowerNV guest support Unimplemented");
> >>> +            exit(1);
> >>> +        }
> >>> +    }
> >>> +
> >>> +    /*
> >>> +     * Perform process-scoped translation if relocation enabled.
> >>> +     *
> >>> +     * - Translates an effective address to a host real address in
> >>> +     *   quadrants 0 and 3 when HV=1.
> >>> +     */
> >>> +    if (relocation) {
> >>> +        int ret = ppc_radix64_process_scoped_xlate(cpu, rwx, eaddr, lpid, pid,
> >>> +                                                   pate, &g_raddr, &prot,
> >>> +                                                   &psize, cause_excp);
> >>> +        if (ret) {
> >>> +            return ret;
> >>> +        }
> >>> +        *psizep = MIN(*psizep, psize);
> >>> +        *protp &= prot;
> >>> +    } else {
> >>> +        g_raddr = eaddr & R_EADDR_MASK;
> >>> +    }
> >>> +
> >>> +    *raddr = g_raddr;
> >>> +    return 0;
> >>> +}
> >>> +
> >>>  int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
> >>>                                   int mmu_idx)
> >>>  {
> >>>      CPUState *cs = CPU(cpu);
> >>>      CPUPPCState *env = &cpu->env;
> >>> -    PPCVirtualHypervisorClass *vhc;
> >>> -    hwaddr raddr, pte_addr;
> >>> -    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
> >>> -    int page_size, prot, fault_cause = 0;
> >>> -    ppc_v3_pate_t pate;
> >>> +    uint64_t lpid = 0, pid = 0;
> >>> +    int page_size, prot;
> >>>      bool relocation;
> >>> +    hwaddr raddr;
> >>>  
> >>>      assert(!(msr_hv && cpu->vhyp));
> >>>      assert((rwx == 0) || (rwx == 1) || (rwx == 2));
> >>> @@ -268,48 +370,11 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
> >>>          return 1;
> >>>      }
> >>>  
> >>> -    /* Get Process Table */
> >>> -    if (cpu->vhyp) {
> >>> -        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
> >>> -        vhc->get_pate(cpu->vhyp, &pate);
> >>> -    } else {
> >>> -        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
> >>> -            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> >>> -            return 1;
> >>> -        }
> >>> -        if (!validate_pate(cpu, lpid, &pate)) {
> >>> -            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
> >>> -        }
> >>> -        /* We don't support guest mode yet */
> >>> -        if (lpid != 0) {
> >>> -            error_report("PowerNV guest support Unimplemented");
> >>> -            exit(1);
> >>> -       }
> >>> -    }
> >>> -
> >>> -    /* Index Process Table by PID to Find Corresponding Process Table Entry */
> >>> -    offset = pid * sizeof(struct prtb_entry);
> >>> -    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
> >>> -    if (offset >= size) {
> >>> -        /* offset exceeds size of the process table */
> >>> -        ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> >>> +    /* Translate eaddr to raddr (where raddr is addr qemu needs for access) */
> >>> +    if (ppc_radix64_xlate(cpu, eaddr, rwx, lpid, pid, relocation, &raddr,
> >>> +                          &page_size, &prot, 1)) {

... pass true instead of 1 and...

> >>>          return 1;
> >>>      }
> >>> -    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
> >>> -
> >>> -    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
> >>> -    page_size = PRTBE_R_GET_RTS(prtbe0);
> >>> -    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
> >>> -                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
> >>> -                                &raddr, &page_size, &fault_cause, &pte_addr);
> >>> -    if (!pte || ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, &prot)) {
> >>> -        /* Couldn't get pte or access denied due to protection */
> >>> -        ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
> >>> -        return 1;
> >>> -    }
> >>> -
> >>> -    /* Update Reference and Change Bits */
> >>> -    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, &prot);
> >>>  
> >>>      tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
> >>>                   prot, mmu_idx, 1UL << page_size);
> >>> @@ -318,16 +383,13 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
> >>>  
> >>>  hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
> >>>  {
> >>> -    CPUState *cs = CPU(cpu);
> >>>      CPUPPCState *env = &cpu->env;
> >>> -    PPCVirtualHypervisorClass *vhc;
> >>> -    hwaddr raddr, pte_addr;
> >>> -    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
> >>> -    int page_size, fault_cause = 0;
> >>> -    ppc_v3_pate_t pate;
> >>> +    uint64_t lpid = 0, pid = 0;
> >>> +    int psize, prot;
> >>> +    hwaddr raddr;
> >>>  
> >>>      /* Handle Real Mode */
> >>> -    if (msr_dr == 0) {
> >>> +    if ((msr_dr == 0) && (msr_hv || (cpu->vhyp && lpid == 0))) {
> >>>          /* In real mode top 4 effective addr bits (mostly) ignored */
> >>>          return eaddr & 0x0FFFFFFFFFFFFFFFULL;
> >>>      }
> >>> @@ -337,39 +399,8 @@ hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
> >>>          return -1;
> >>>      }
> >>>  
> >>> -    /* Get Process Table */
> >>> -    if (cpu->vhyp) {
> >>> -        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
> >>> -        vhc->get_pate(cpu->vhyp, &pate);
> >>> -    } else {
> >>> -        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
> >>> -            return -1;
> >>> -        }
> >>> -        if (!validate_pate(cpu, lpid, &pate)) {
> >>> -            return -1;
> >>> -        }
> >>> -        /* We don't support guest mode yet */
> >>> -        if (lpid != 0) {
> >>> -            error_report("PowerNV guest support Unimplemented");
> >>> -            exit(1);
> >>> -       }
> >>> -    }
> >>> -
> >>> -    /* Index Process Table by PID to Find Corresponding Process Table Entry */
> >>> -    offset = pid * sizeof(struct prtb_entry);
> >>> -    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
> >>> -    if (offset >= size) {
> >>> -        /* offset exceeds size of the process table */
> >>> -        return -1;
> >>> -    }
> >>> -    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
> >>> -
> >>> -    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
> >>> -    page_size = PRTBE_R_GET_RTS(prtbe0);
> >>> -    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
> >>> -                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
> >>> -                                &raddr, &page_size, &fault_cause, &pte_addr);
> >>> -    if (!pte) {
> >>> +    if (ppc_radix64_xlate(cpu, eaddr, 0, lpid, pid, msr_dr, &raddr, &psize,
> >>> +                          &prot, 0)) {

... false here since ppc_radix64_xlate() takes a bool argument.

> >>>          return -1;
> >>>      }
> >>>  
> >>
> > 
> 



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 6/7] target/ppc: Extend ppc_radix64_check_prot() with a 'partition_scoped' bool
  2020-03-30  9:49 ` [PATCH 6/7] target/ppc: Extend ppc_radix64_check_prot() with a 'partition_scoped' bool Cédric Le Goater
  2020-03-30 17:01   ` Greg Kurz
@ 2020-03-30 17:08   ` Greg Kurz
  1 sibling, 0 replies; 26+ messages in thread
From: Greg Kurz @ 2020-03-30 17:08 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Suraj Jitindar Singh, qemu-ppc, qemu-devel, Nicholas Piggin,
	David Gibson

On Mon, 30 Mar 2020 11:49:45 +0200
Cédric Le Goater <clg@kaod.org> wrote:

> This prepares ground for partition-scoped Radix translation.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> Signed-off-by: Cédric Le Goater <clg@kaod.org>
> ---
>  target/ppc/mmu-radix64.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index 136498111f60..3ae29ed90d49 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -105,7 +105,8 @@ static void ppc_radix64_raise_si(PowerPCCPU *cpu, int rwx, vaddr eaddr,
>  
>  
>  static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
> -                                   int *fault_cause, int *prot)
> +                                   int *fault_cause, int *prot,
> +                                   bool partition_scoped)
>  {
>      CPUPPCState *env = &cpu->env;
>      const int need_prot[] = { PAGE_READ, PAGE_WRITE, PAGE_EXEC };
> @@ -121,11 +122,11 @@ static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
>      }
>  
>      /* Determine permissions allowed by Encoded Access Authority */
> -    if ((pte & R_PTE_EAA_PRIV) && msr_pr) { /* Insufficient Privilege */
> +    if (!partition_scoped && (pte & R_PTE_EAA_PRIV) && msr_pr) {
>          *prot = 0;
> -    } else if (msr_pr || (pte & R_PTE_EAA_PRIV)) {
> +    } else if (msr_pr || (pte & R_PTE_EAA_PRIV) || partition_scoped) {
>          *prot = ppc_radix64_get_prot_eaa(pte);
> -    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) */
> +    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) && !partition_scoped */
>          *prot = ppc_radix64_get_prot_eaa(pte);
>          *prot &= ppc_radix64_get_prot_amr(cpu); /* Least combined permissions */
>      }
> @@ -266,7 +267,7 @@ static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
>                                  g_raddr, g_page_size, &fault_cause, &pte_addr);
>  
>      if (!(pte & R_PTE_VALID) ||
> -        ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot)) {
> +        ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot, 0)) {

Maybe pass false instead of 0 ?

Anyway,

Reviewed-by: Greg Kurz <groug@kaod.org>

>          /* No valid pte or access denied due to protection */
>          if (cause_excp) {
>              ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 1/7] target/ppc: Enforce that the root page directory size must be at least 5
  2020-03-30  9:49 ` [PATCH 1/7] target/ppc: Enforce that the root page directory size must be at least 5 Cédric Le Goater
  2020-03-30 10:45   ` Greg Kurz
@ 2020-03-31  0:57   ` David Gibson
  1 sibling, 0 replies; 26+ messages in thread
From: David Gibson @ 2020-03-31  0:57 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Suraj Jitindar Singh, qemu-ppc, Greg Kurz, Nicholas Piggin, qemu-devel

[-- Attachment #1: Type: text/plain, Size: 1366 bytes --]

On Mon, Mar 30, 2020 at 11:49:40AM +0200, Cédric Le Goater wrote:
> From: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> 
> According to the ISA the root page directory size of a radix tree for
> either process- or partition-scoped translation must be >= 5.
> 
> Thus add this to the list of conditions checked when validating the
> partition table entry in validate_pate();
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> Signed-off-by: Cédric Le Goater <clg@kaod.org>

Applied to ppc-for-5.1.

> ---
>  target/ppc/mmu-radix64.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index 224e646c5094..99678570581b 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -212,6 +212,9 @@ static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
>      if (lpid == 0 && !msr_hv) {
>          return false;
>      }
> +    if ((pate->dw0 & PATE1_R_PRTS) < 5) {
> +        return false;
> +    }
>      /* More checks ... */
>      return true;
>  }

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 2/7] target/ppc: Introduce a relocation bool in ppc_radix64_handle_mmu_fault()
  2020-03-30  9:49 ` [PATCH 2/7] target/ppc: Introduce a relocation bool in ppc_radix64_handle_mmu_fault() Cédric Le Goater
  2020-03-30 13:01   ` Greg Kurz
@ 2020-03-31  0:58   ` David Gibson
  1 sibling, 0 replies; 26+ messages in thread
From: David Gibson @ 2020-03-31  0:58 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Suraj Jitindar Singh, qemu-ppc, Greg Kurz, Nicholas Piggin, qemu-devel

[-- Attachment #1: Type: text/plain, Size: 1550 bytes --]

On Mon, Mar 30, 2020 at 11:49:41AM +0200, Cédric Le Goater wrote:
> It will ease the introduction of new routines for partition-scoped
> Radix translation.
> 
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> Signed-off-by: Cédric Le Goater <clg@kaod.org>

Applied to ppc-for-5.1.

> ---
>  target/ppc/mmu-radix64.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index 99678570581b..f6007e956569 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -229,12 +229,13 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>      uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
>      int page_size, prot, fault_cause = 0;
>      ppc_v3_pate_t pate;
> +    bool relocation;
>  
>      assert((rwx == 0) || (rwx == 1) || (rwx == 2));
>  
> +    relocation = ((rwx == 2) && (msr_ir == 1)) || ((rwx != 2) && (msr_dr == 1));
>      /* HV or virtual hypervisor Real Mode Access */
> -    if ((msr_hv || cpu->vhyp) &&
> -        (((rwx == 2) && (msr_ir == 0)) || ((rwx != 2) && (msr_dr == 0)))) {
> +    if (!relocation && (msr_hv || cpu->vhyp)) {
>          /* In real mode top 4 effective addr bits (mostly) ignored */
>          raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
>  

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 3/7] target/ppc: Assert if HV mode is set when running under a pseries machine
  2020-03-30  9:49 ` [PATCH 3/7] target/ppc: Assert if HV mode is set when running under a pseries machine Cédric Le Goater
  2020-03-30 10:46   ` Greg Kurz
@ 2020-03-31  0:59   ` David Gibson
  1 sibling, 0 replies; 26+ messages in thread
From: David Gibson @ 2020-03-31  0:59 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Suraj Jitindar Singh, qemu-ppc, Greg Kurz, Nicholas Piggin, qemu-devel

[-- Attachment #1: Type: text/plain, Size: 1007 bytes --]

On Mon, Mar 30, 2020 at 11:49:42AM +0200, Cédric Le Goater wrote:
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> Signed-off-by: Cédric Le Goater <clg@kaod.org>

Applied to ppc-for-5.1.

> ---
>  target/ppc/mmu-radix64.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> index f6007e956569..d2422d1c54c9 100644
> --- a/target/ppc/mmu-radix64.c
> +++ b/target/ppc/mmu-radix64.c
> @@ -231,6 +231,7 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>      ppc_v3_pate_t pate;
>      bool relocation;
>  
> +    assert(!(msr_hv && cpu->vhyp));
>      assert((rwx == 0) || (rwx == 1) || (rwx == 2));
>  
>      relocation = ((rwx == 2) && (msr_ir == 1)) || ((rwx != 2) && (msr_dr == 1));

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4/7] target/ppc: Introduce ppc_radix64_xlate() for Radix tree translation
  2020-03-30 15:34       ` Cédric Le Goater
  2020-03-30 17:07         ` Greg Kurz
@ 2020-03-31  1:13         ` David Gibson
  2020-03-31  8:15           ` Cédric Le Goater
  1 sibling, 1 reply; 26+ messages in thread
From: David Gibson @ 2020-03-31  1:13 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Suraj Jitindar Singh, qemu-ppc, Greg Kurz, Nicholas Piggin, qemu-devel

[-- Attachment #1: Type: text/plain, Size: 9811 bytes --]

On Mon, Mar 30, 2020 at 05:34:40PM +0200, Cédric Le Goater wrote:
> >>> +        /* No valid pte or access denied due to protection */
> >>> +        if (cause_excp) {
> >>> +            ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
> >>> +        }
> >>> +        return 1;
> >>> +    }
> >>> +
> >>> +    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, g_prot);
> >>> +
> >>> +    return 0;
> >>> +}
> >>> +
> >>> +static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
> >>> +                             uint64_t lpid, uint64_t pid, bool relocation,
> >>> +                             hwaddr *raddr, int *psizep, int *protp,
> >>> +                             bool cause_excp)
> >>> +{
> >>> +    ppc_v3_pate_t pate;
> >>> +    int psize, prot;
> >>> +    hwaddr g_raddr;
> >>> +
> >>> +    *psizep = INT_MAX;
> >>> +    *protp = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
> >>> +
> >>> +    /* Get Process Table */
> >>> +    if (cpu->vhyp && lpid == 0) {
> >>
> >> Current code doesn't check lpid == 0. Not sure I see what it's for...
> > 
> > cpu->vhyp means a pseries machine, lpid == 0 means accessing quadrant3, 
> > so it's the kernel.
> 
> Sorry. I misread that. It would be pid == 0 for the kernel.
> 
> So yes, the test cpu->vhyp && lpid == 0 might be a bit overkill, given 
> that lpid is always 0 when running under a QEMU pseries machine.

Overkill and conceptually incorrect.  When in vhyp mode we're not
modelling the hypervisor part of the CPU, which means that really the
LPID doesn't exist, so we shouldn't be looking at it (though it will
always be 0 in practice).
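
Concretely, I'd expect the branch to look more like the sketch below
(an illustration only, not something I'm applying as-is):

/* With a virtual hypervisor the partition table always comes from the
 * machine; there is no modelled LPID to look at. */
if (cpu->vhyp) {
    PPCVirtualHypervisorClass *vhc;
    vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
    vhc->get_pate(cpu->vhyp, &pate);
} else {
    if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
        if (cause_excp) {
            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
        }
        return 1;
    }
    if (!validate_pate(cpu, lpid, &pate)) {
        if (cause_excp) {
            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
        }
        return 1;
    }
    /* We don't support guest mode yet */
    if (lpid != 0) {
        error_report("PowerNV guest support Unimplemented");
        exit(1);
    }
}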

> 
> 
> C.
> 
> > 
> >> especially env->spr[SPR_LPIDR] is always 0 with pseries machine types
> >> AFAICT... is it even possible to have lpid != 0 here ?
> > 
> > When under PowerNV, SPR_LPIDR can be set, but not under pseries.
> > 
> > C.
> > 
> >>
> >>
> >> Rest LGTM.
> >>
> >>> +        PPCVirtualHypervisorClass *vhc;
> >>> +        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
> >>> +        vhc->get_pate(cpu->vhyp, &pate);
> >>> +    } else {
> >>> +        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
> >>> +            if (cause_excp) {
> >>> +                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> >>> +            }
> >>> +            return 1;
> >>> +        }
> >>> +        if (!validate_pate(cpu, lpid, &pate)) {
> >>> +            if (cause_excp) {
> >>> +                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
> >>> +            }
> >>> +            return 1;
> >>> +        }
> >>> +        /* We don't support guest mode yet */
> >>> +        if (lpid != 0) {
> >>> +            error_report("PowerNV guest support Unimplemented");
> >>> +            exit(1);
> >>> +        }
> >>> +    }
> >>> +
> >>> +    /*
> >>> +     * Perform process-scoped translation if relocation enabled.
> >>> +     *
> >>> +     * - Translates an effective address to a host real address in
> >>> +     *   quadrants 0 and 3 when HV=1.
> >>> +     */
> >>> +    if (relocation) {
> >>> +        int ret = ppc_radix64_process_scoped_xlate(cpu, rwx, eaddr, lpid, pid,
> >>> +                                                   pate, &g_raddr, &prot,
> >>> +                                                   &psize, cause_excp);
> >>> +        if (ret) {
> >>> +            return ret;
> >>> +        }
> >>> +        *psizep = MIN(*psizep, psize);
> >>> +        *protp &= prot;
> >>> +    } else {
> >>> +        g_raddr = eaddr & R_EADDR_MASK;
> >>> +    }
> >>> +
> >>> +    *raddr = g_raddr;
> >>> +    return 0;
> >>> +}
> >>> +
> >>>  int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
> >>>                                   int mmu_idx)
> >>>  {
> >>>      CPUState *cs = CPU(cpu);
> >>>      CPUPPCState *env = &cpu->env;
> >>> -    PPCVirtualHypervisorClass *vhc;
> >>> -    hwaddr raddr, pte_addr;
> >>> -    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
> >>> -    int page_size, prot, fault_cause = 0;
> >>> -    ppc_v3_pate_t pate;
> >>> +    uint64_t lpid = 0, pid = 0;
> >>> +    int page_size, prot;
> >>>      bool relocation;
> >>> +    hwaddr raddr;
> >>>  
> >>>      assert(!(msr_hv && cpu->vhyp));
> >>>      assert((rwx == 0) || (rwx == 1) || (rwx == 2));
> >>> @@ -268,48 +370,11 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
> >>>          return 1;
> >>>      }
> >>>  
> >>> -    /* Get Process Table */
> >>> -    if (cpu->vhyp) {
> >>> -        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
> >>> -        vhc->get_pate(cpu->vhyp, &pate);
> >>> -    } else {
> >>> -        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
> >>> -            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> >>> -            return 1;
> >>> -        }
> >>> -        if (!validate_pate(cpu, lpid, &pate)) {
> >>> -            ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
> >>> -        }
> >>> -        /* We don't support guest mode yet */
> >>> -        if (lpid != 0) {
> >>> -            error_report("PowerNV guest support Unimplemented");
> >>> -            exit(1);
> >>> -       }
> >>> -    }
> >>> -
> >>> -    /* Index Process Table by PID to Find Corresponding Process Table Entry */
> >>> -    offset = pid * sizeof(struct prtb_entry);
> >>> -    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
> >>> -    if (offset >= size) {
> >>> -        /* offset exceeds size of the process table */
> >>> -        ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
> >>> +    /* Translate eaddr to raddr (where raddr is addr qemu needs for access) */
> >>> +    if (ppc_radix64_xlate(cpu, eaddr, rwx, lpid, pid, relocation, &raddr,
> >>> +                          &page_size, &prot, 1)) {
> >>>          return 1;
> >>>      }
> >>> -    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
> >>> -
> >>> -    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
> >>> -    page_size = PRTBE_R_GET_RTS(prtbe0);
> >>> -    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
> >>> -                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
> >>> -                                &raddr, &page_size, &fault_cause, &pte_addr);
> >>> -    if (!pte || ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, &prot)) {
> >>> -        /* Couldn't get pte or access denied due to protection */
> >>> -        ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
> >>> -        return 1;
> >>> -    }
> >>> -
> >>> -    /* Update Reference and Change Bits */
> >>> -    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, &prot);
> >>>  
> >>>      tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
> >>>                   prot, mmu_idx, 1UL << page_size);
> >>> @@ -318,16 +383,13 @@ int ppc_radix64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr, int rwx,
> >>>  
> >>>  hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
> >>>  {
> >>> -    CPUState *cs = CPU(cpu);
> >>>      CPUPPCState *env = &cpu->env;
> >>> -    PPCVirtualHypervisorClass *vhc;
> >>> -    hwaddr raddr, pte_addr;
> >>> -    uint64_t lpid = 0, pid = 0, offset, size, prtbe0, pte;
> >>> -    int page_size, fault_cause = 0;
> >>> -    ppc_v3_pate_t pate;
> >>> +    uint64_t lpid = 0, pid = 0;
> >>> +    int psize, prot;
> >>> +    hwaddr raddr;
> >>>  
> >>>      /* Handle Real Mode */
> >>> -    if (msr_dr == 0) {
> >>> +    if ((msr_dr == 0) && (msr_hv || (cpu->vhyp && lpid == 0))) {
> >>>          /* In real mode top 4 effective addr bits (mostly) ignored */
> >>>          return eaddr & 0x0FFFFFFFFFFFFFFFULL;
> >>>      }
> >>> @@ -337,39 +399,8 @@ hwaddr ppc_radix64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong eaddr)
> >>>          return -1;
> >>>      }
> >>>  
> >>> -    /* Get Process Table */
> >>> -    if (cpu->vhyp) {
> >>> -        vhc = PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
> >>> -        vhc->get_pate(cpu->vhyp, &pate);
> >>> -    } else {
> >>> -        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
> >>> -            return -1;
> >>> -        }
> >>> -        if (!validate_pate(cpu, lpid, &pate)) {
> >>> -            return -1;
> >>> -        }
> >>> -        /* We don't support guest mode yet */
> >>> -        if (lpid != 0) {
> >>> -            error_report("PowerNV guest support Unimplemented");
> >>> -            exit(1);
> >>> -       }
> >>> -    }
> >>> -
> >>> -    /* Index Process Table by PID to Find Corresponding Process Table Entry */
> >>> -    offset = pid * sizeof(struct prtb_entry);
> >>> -    size = 1ULL << ((pate.dw1 & PATE1_R_PRTS) + 12);
> >>> -    if (offset >= size) {
> >>> -        /* offset exceeds size of the process table */
> >>> -        return -1;
> >>> -    }
> >>> -    prtbe0 = ldq_phys(cs->as, (pate.dw1 & PATE1_R_PRTB) + offset);
> >>> -
> >>> -    /* Walk Radix Tree from Process Table Entry to Convert EA to RA */
> >>> -    page_size = PRTBE_R_GET_RTS(prtbe0);
> >>> -    pte = ppc_radix64_walk_tree(cpu, eaddr & R_EADDR_MASK,
> >>> -                                prtbe0 & PRTBE_R_RPDB, prtbe0 & PRTBE_R_RPDS,
> >>> -                                &raddr, &page_size, &fault_cause, &pte_addr);
> >>> -    if (!pte) {
> >>> +    if (ppc_radix64_xlate(cpu, eaddr, 0, lpid, pid, msr_dr, &raddr, &psize,
> >>> +                          &prot, 0)) {
> >>>          return -1;
> >>>      }
> >>>  
> >>
> > 
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 4/7] target/ppc: Introduce ppc_radix64_xlate() for Radix tree translation
  2020-03-31  1:13         ` David Gibson
@ 2020-03-31  8:15           ` Cédric Le Goater
  0 siblings, 0 replies; 26+ messages in thread
From: Cédric Le Goater @ 2020-03-31  8:15 UTC (permalink / raw)
  To: David Gibson
  Cc: Suraj Jitindar Singh, qemu-ppc, Greg Kurz, Nicholas Piggin, qemu-devel

On 3/31/20 3:13 AM, David Gibson wrote:
> On Mon, Mar 30, 2020 at 05:34:40PM +0200, Cédric Le Goater wrote:
>>>>> +        /* No valid pte or access denied due to protection */
>>>>> +        if (cause_excp) {
>>>>> +            ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
>>>>> +        }
>>>>> +        return 1;
>>>>> +    }
>>>>> +
>>>>> +    ppc_radix64_set_rc(cpu, rwx, pte, pte_addr, g_prot);
>>>>> +
>>>>> +    return 0;
>>>>> +}
>>>>> +
>>>>> +static int ppc_radix64_xlate(PowerPCCPU *cpu, vaddr eaddr, int rwx,
>>>>> +                             uint64_t lpid, uint64_t pid, bool relocation,
>>>>> +                             hwaddr *raddr, int *psizep, int *protp,
>>>>> +                             bool cause_excp)
>>>>> +{
>>>>> +    ppc_v3_pate_t pate;
>>>>> +    int psize, prot;
>>>>> +    hwaddr g_raddr;
>>>>> +
>>>>> +    *psizep = INT_MAX;
>>>>> +    *protp = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
>>>>> +
>>>>> +    /* Get Process Table */
>>>>> +    if (cpu->vhyp && lpid == 0) {
>>>>
>>>> Current code doesn't check lpid == 0. Not sure to see what it's for...
>>>
>>> cpu->vhyp means a pseries machine, lpid == 0 means accessing quadrant3, 
>>> so it's the kernel.
>>
>> Sorry. I misread that. It would be pid == 0 for the kernel.
>>
>> So yes, the test cpu->vhyp && lpid == 0 might be a bit overkill, given 
>> that lpid is always 0 when running under a QEMU pseries machine.
> 
> Overkill and conceptually incorrect.  When in vhyp mode we're not
> modelling the hypervisor part of the CPU, which means that really the
> LPID doesn't exist, so we shouldn't be looking at it (though it will
> always be 0 in practice).

I will remove the extra check. This has the nice effect of cleaning up 
a couple of weird changes in patch 7.
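
The lookup in ppc_radix64_xlate() then keys off cpu->vhyp alone. A rough
sketch of what the respin could look like, reusing the helpers already in
mmu-radix64.c (the exact error paths here are only an assumption):

    /* Get Process Table */
    if (cpu->vhyp) {
        /*
         * pseries machine: the virtual hypervisor provides the partition
         * table entry directly, the LPID is not modelled at all.
         */
        PPCVirtualHypervisorClass *vhc =
            PPC_VIRTUAL_HYPERVISOR_GET_CLASS(cpu->vhyp);
        vhc->get_pate(cpu->vhyp, &pate);
    } else {
        /* PowerNV machine: look up the entry by LPID and sanity-check it */
        if (!ppc64_v3_get_pate(cpu, lpid, &pate)) {
            if (cause_excp) {
                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_NOPTE);
            }
            return 1;
        }
        if (!validate_pate(cpu, lpid, &pate)) {
            if (cause_excp) {
                ppc_radix64_raise_si(cpu, rwx, eaddr, DSISR_R_BADCONFIG);
            }
            return 1;
        }
    }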

Thanks,

C.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 6/7] target/ppc: Extend ppc_radix64_check_prot() with a 'partition_scoped' bool
  2020-03-30 17:01   ` Greg Kurz
@ 2020-03-31  8:22     ` Cédric Le Goater
  0 siblings, 0 replies; 26+ messages in thread
From: Cédric Le Goater @ 2020-03-31  8:22 UTC (permalink / raw)
  To: Greg Kurz
  Cc: Suraj Jitindar Singh, qemu-ppc, qemu-devel, Nicholas Piggin,
	David Gibson

On 3/30/20 7:01 PM, Greg Kurz wrote:
> On Mon, 30 Mar 2020 11:49:45 +0200
> Cédric Le Goater <clg@kaod.org> wrote:
> 
>> This prepares ground for partition-scoped Radix translation.
>>
>> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
>> Signed-off-by: Cédric Le Goater <clg@kaod.org>
>> ---
>>  target/ppc/mmu-radix64.c | 11 ++++++-----
>>  1 file changed, 6 insertions(+), 5 deletions(-)
>>
>> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
>> index 136498111f60..3ae29ed90d49 100644
>> --- a/target/ppc/mmu-radix64.c
>> +++ b/target/ppc/mmu-radix64.c
>> @@ -105,7 +105,8 @@ static void ppc_radix64_raise_si(PowerPCCPU *cpu, int rwx, vaddr eaddr,
>>  
>>  
>>  static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
>> -                                   int *fault_cause, int *prot)
>> +                                   int *fault_cause, int *prot,
>> +                                   bool partition_scoped)
>>  {
>>      CPUPPCState *env = &cpu->env;
>>      const int need_prot[] = { PAGE_READ, PAGE_WRITE, PAGE_EXEC };
>> @@ -121,11 +122,11 @@ static bool ppc_radix64_check_prot(PowerPCCPU *cpu, int rwx, uint64_t pte,
>>      }
>>  
>>      /* Determine permissions allowed by Encoded Access Authority */
>> -    if ((pte & R_PTE_EAA_PRIV) && msr_pr) { /* Insufficient Privilege */
>> +    if (!partition_scoped && (pte & R_PTE_EAA_PRIV) && msr_pr) {
>>          *prot = 0;
>> -    } else if (msr_pr || (pte & R_PTE_EAA_PRIV)) {
>> +    } else if (msr_pr || (pte & R_PTE_EAA_PRIV) || partition_scoped) {
>>          *prot = ppc_radix64_get_prot_eaa(pte);
>> -    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) */
>> +    } else { /* !msr_pr && !(pte & R_PTE_EAA_PRIV) && !partition_scoped */
>>          *prot = ppc_radix64_get_prot_eaa(pte);
>>          *prot &= ppc_radix64_get_prot_amr(cpu); /* Least combined permissions */
>>      }
>> @@ -266,7 +267,7 @@ static int ppc_radix64_process_scoped_xlate(PowerPCCPU *cpu, int rwx,
>>                                  g_raddr, g_page_size, &fault_cause, &pte_addr);
>>  
>>      if (!(pte & R_PTE_VALID) ||
>> -        ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot)) {
>> +        ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot, 0)) {
> 
> Maybe pass false since ppc_radix64_check_prot() expects a bool?

Sure,
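
i.e. the check would then read something like:

    if (!(pte & R_PTE_VALID) ||
        ppc_radix64_check_prot(cpu, rwx, pte, &fault_cause, g_prot, false)) {
        /* No valid pte or access denied due to protection */
        if (cause_excp) {
            ppc_radix64_raise_si(cpu, rwx, eaddr, fault_cause);
        }
        return 1;
    }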

> Apart from that,
> 
> Reviewed-by: Greg Kurz <groug@kaod.org>

Thanks,

C.



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 5/7] target/ppc: Rework ppc_radix64_walk_tree() for partition-scoped translation
  2020-03-30 17:00   ` Greg Kurz
@ 2020-03-31  9:10     ` Cédric Le Goater
  2020-03-31  9:49       ` Greg Kurz
  0 siblings, 1 reply; 26+ messages in thread
From: Cédric Le Goater @ 2020-03-31  9:10 UTC (permalink / raw)
  To: Greg Kurz
  Cc: Suraj Jitindar Singh, qemu-ppc, qemu-devel, Nicholas Piggin,
	David Gibson

On 3/30/20 7:00 PM, Greg Kurz wrote:
> On Mon, 30 Mar 2020 11:49:44 +0200
> Cédric Le Goater <clg@kaod.org> wrote:
> 
>> The ppc_radix64_walk_tree() routine walks through the nested radix
>> tables to look for a PTE.
>>
>> Split it two and introduce a new routine ppc_radix64_next_level()
> 
> Split it in two...
> 
>> which we will use for partition-scoped Radix translation when
>> translating the process tree addresses.
>>
>> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
>> Signed-off-by: Cédric Le Goater <clg@kaod.org>
>> ---
>>  target/ppc/mmu-radix64.c | 50 ++++++++++++++++++++++++++--------------
>>  1 file changed, 33 insertions(+), 17 deletions(-)
>>
>> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
>> index b4e6abcd2d35..136498111f60 100644
>> --- a/target/ppc/mmu-radix64.c
>> +++ b/target/ppc/mmu-radix64.c
>> @@ -162,44 +162,60 @@ static void ppc_radix64_set_rc(PowerPCCPU *cpu, int rwx, uint64_t pte,
>>      }
>>  }
>>  
>> -static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
>> -                                      uint64_t base_addr, uint64_t nls,
>> -                                      hwaddr *raddr, int *psize,
>> -                                      int *fault_cause, hwaddr *pte_addr)
>> +static uint64_t ppc_radix64_next_level(PowerPCCPU *cpu, vaddr eaddr,
>> +                                       uint64_t *pte_addr, uint64_t *nls,
>> +                                       int *psize, int *fault_cause)
>>  {
>>      CPUState *cs = CPU(cpu);
>>      uint64_t index, pde;
>>  
>> -    if (nls < 5) { /* Directory maps less than 2**5 entries */
>> +    if (*nls < 5) { /* Directory maps less than 2**5 entries */
>>          *fault_cause |= DSISR_R_BADCONFIG;
>>          return 0;
>>      }
>>  
> 
> I think this should stay in the caller...
> 
>>      /* Read page <directory/table> entry from guest address space */
>> -    index = eaddr >> (*psize - nls); /* Shift */
>> -    index &= ((1UL << nls) - 1); /* Mask */
>> -    pde = ldq_phys(cs->as, base_addr + (index * sizeof(pde)));
>> -    if (!(pde & R_PTE_VALID)) { /* Invalid Entry */
>> +    pde = ldq_phys(cs->as, *pte_addr);
>> +    if (!(pde & R_PTE_VALID)) {         /* Invalid Entry */
>>          *fault_cause |= DSISR_NOPTE;
>>          return 0;
>>      }
>>  
>> -    *psize -= nls;
>> +    *psize -= *nls;
>> +    if (!(pde & R_PTE_LEAF)) { /* Prepare for next iteration */
>> +        *nls = pde & R_PDE_NLS;
>> +        index = eaddr >> (*psize - *nls);       /* Shift */
>> +        index &= ((1UL << *nls) - 1);           /* Mask */
>> +        *pte_addr = (pde & R_PDE_NLB) + (index * sizeof(pde));
>> +    }
>> +    return pde;
>> +}
>> +
>> +static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
>> +                                      uint64_t base_addr, uint64_t nls,
>> +                                      hwaddr *raddr, int *psize,
>> +                                      int *fault_cause, hwaddr *pte_addr)
>> +{
>> +    uint64_t index, pde;
>> +
>> +    index = eaddr >> (*psize - nls);    /* Shift */
>> +    index &= ((1UL << nls) - 1);       /* Mask */
>> +    *pte_addr = base_addr + (index * sizeof(pde));
>> +    do {
> 
> ... here. So that we have a well established "bad config" path at
> the root level, just like the current code has.
> 
> Since the ppc_radix64_walk_tree() now calls ppc_radix64_next_level()
> in a loop instead of recursing, and since ppc_radix64_next_level()
> returns the nls value for the next level, it really makes sense to
> have this check in ppc_radix64_walk_tree() and maybe put an assert
> in ppc_radix64_next_level().

ppc_radix64_next_level() also covers the needs of partition-scoped
translation, which translates each process table address. See PATCH 7.

I'd rather not duplicate more code and leave it as it is.
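
For the record, hoisting the nls check into the walk loop would give roughly
the sketch below (ppc_radix64_next_level() could then merely assert on nls);
the direct callers of ppc_radix64_next_level() added in PATCH 7 would have to
carry the same check, which is the duplication I want to avoid:

    static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
                                          uint64_t base_addr, uint64_t nls,
                                          hwaddr *raddr, int *psize,
                                          int *fault_cause, hwaddr *pte_addr)
    {
        uint64_t index, pde;

        index = eaddr >> (*psize - nls);    /* Shift */
        index &= ((1UL << nls) - 1);        /* Mask */
        *pte_addr = base_addr + (index * sizeof(pde));
        do {
            if (nls < 5) { /* Directory maps less than 2**5 entries */
                *fault_cause |= DSISR_R_BADCONFIG;
                return 0;
            }
            pde = ppc_radix64_next_level(cpu, eaddr, pte_addr, &nls, psize,
                                         fault_cause);
        } while ((pde & R_PTE_VALID) && !(pde & R_PTE_LEAF));

        /* Did we find a valid leaf? */
        if ((pde & R_PTE_VALID) && (pde & R_PTE_LEAF)) {
            uint64_t rpn = pde & R_PTE_RPN;
            uint64_t mask = (1UL << *psize) - 1;

            /* Or high bits of rpn and low bits of ea to form whole real addr */
            *raddr = (rpn & ~mask) | (eaddr & mask);
        }

        return pde;
    }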
 
Thanks,

C.

> 
>> +        pde = ppc_radix64_next_level(cpu, eaddr, pte_addr, &nls, psize,
>> +                                     fault_cause);
>> +    } while ((pde & R_PTE_VALID) && !(pde & R_PTE_LEAF));
>>  
>> -    /* Check if Leaf Entry -> Page Table Entry -> Stop the Search */
>> -    if (pde & R_PTE_LEAF) {
>> +    /* Did we find a valid leaf? */
>> +    if ((pde & R_PTE_VALID) && (pde & R_PTE_LEAF)) {
>>          uint64_t rpn = pde & R_PTE_RPN;
>>          uint64_t mask = (1UL << *psize) - 1;
>>  
>>          /* Or high bits of rpn and low bits to ea to form whole real addr */
>>          *raddr = (rpn & ~mask) | (eaddr & mask);
>> -        *pte_addr = base_addr + (index * sizeof(pde));
>> -        return pde;
>>      }
>>  
>> -    /* Next Level of Radix Tree */
>> -    return ppc_radix64_walk_tree(cpu, eaddr, pde & R_PDE_NLB, pde & R_PDE_NLS,
>> -                                 raddr, psize, fault_cause, pte_addr);
>> +    return pde;
>>  }
>>  
>>  static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
> 



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH 5/7] target/ppc: Rework ppc_radix64_walk_tree() for partition-scoped translation
  2020-03-31  9:10     ` Cédric Le Goater
@ 2020-03-31  9:49       ` Greg Kurz
  0 siblings, 0 replies; 26+ messages in thread
From: Greg Kurz @ 2020-03-31  9:49 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Suraj Jitindar Singh, qemu-ppc, qemu-devel, Nicholas Piggin,
	David Gibson

On Tue, 31 Mar 2020 11:10:20 +0200
Cédric Le Goater <clg@kaod.org> wrote:

> On 3/30/20 7:00 PM, Greg Kurz wrote:
> > On Mon, 30 Mar 2020 11:49:44 +0200
> > Cédric Le Goater <clg@kaod.org> wrote:
> > 
> >> The ppc_radix64_walk_tree() routine walks through the nested radix
> >> tables to look for a PTE.
> >>
> >> Split it two and introduce a new routine ppc_radix64_next_level()
> > 
> > Split it in two...
> > 
> >> which we will use for partition-scoped Radix translation when
> >> translating the process tree addresses.
> >>
> >> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
> >> Signed-off-by: Cédric Le Goater <clg@kaod.org>
> >> ---
> >>  target/ppc/mmu-radix64.c | 50 ++++++++++++++++++++++++++--------------
> >>  1 file changed, 33 insertions(+), 17 deletions(-)
> >>
> >> diff --git a/target/ppc/mmu-radix64.c b/target/ppc/mmu-radix64.c
> >> index b4e6abcd2d35..136498111f60 100644
> >> --- a/target/ppc/mmu-radix64.c
> >> +++ b/target/ppc/mmu-radix64.c
> >> @@ -162,44 +162,60 @@ static void ppc_radix64_set_rc(PowerPCCPU *cpu, int rwx, uint64_t pte,
> >>      }
> >>  }
> >>  
> >> -static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
> >> -                                      uint64_t base_addr, uint64_t nls,
> >> -                                      hwaddr *raddr, int *psize,
> >> -                                      int *fault_cause, hwaddr *pte_addr)
> >> +static uint64_t ppc_radix64_next_level(PowerPCCPU *cpu, vaddr eaddr,
> >> +                                       uint64_t *pte_addr, uint64_t *nls,
> >> +                                       int *psize, int *fault_cause)
> >>  {
> >>      CPUState *cs = CPU(cpu);
> >>      uint64_t index, pde;
> >>  
> >> -    if (nls < 5) { /* Directory maps less than 2**5 entries */
> >> +    if (*nls < 5) { /* Directory maps less than 2**5 entries */
> >>          *fault_cause |= DSISR_R_BADCONFIG;
> >>          return 0;
> >>      }
> >>  
> > 
> > I think this should stay in the caller...
> > 
> >>      /* Read page <directory/table> entry from guest address space */
> >> -    index = eaddr >> (*psize - nls); /* Shift */
> >> -    index &= ((1UL << nls) - 1); /* Mask */
> >> -    pde = ldq_phys(cs->as, base_addr + (index * sizeof(pde)));
> >> -    if (!(pde & R_PTE_VALID)) { /* Invalid Entry */
> >> +    pde = ldq_phys(cs->as, *pte_addr);
> >> +    if (!(pde & R_PTE_VALID)) {         /* Invalid Entry */
> >>          *fault_cause |= DSISR_NOPTE;
> >>          return 0;
> >>      }
> >>  
> >> -    *psize -= nls;
> >> +    *psize -= *nls;
> >> +    if (!(pde & R_PTE_LEAF)) { /* Prepare for next iteration */
> >> +        *nls = pde & R_PDE_NLS;
> >> +        index = eaddr >> (*psize - *nls);       /* Shift */
> >> +        index &= ((1UL << *nls) - 1);           /* Mask */
> >> +        *pte_addr = (pde & R_PDE_NLB) + (index * sizeof(pde));
> >> +    }
> >> +    return pde;
> >> +}
> >> +
> >> +static uint64_t ppc_radix64_walk_tree(PowerPCCPU *cpu, vaddr eaddr,
> >> +                                      uint64_t base_addr, uint64_t nls,
> >> +                                      hwaddr *raddr, int *psize,
> >> +                                      int *fault_cause, hwaddr *pte_addr)
> >> +{
> >> +    uint64_t index, pde;
> >> +
> >> +    index = eaddr >> (*psize - nls);    /* Shift */
> >> +    index &= ((1UL << nls) - 1);       /* Mask */
> >> +    *pte_addr = base_addr + (index * sizeof(pde));
> >> +    do {
> > 
> > ... here. So that we have a well established "bad config" path at
> > the root level, just like the current code has.
> > 
> > Since the ppc_radix64_walk_tree() now calls ppc_radix64_next_level()
> > in a loop instead of recursing, and since ppc_radix64_next_level()
> > returns the nls value for the next level, it really makes sense to
> > have this check in ppc_radix64_walk_tree() and maybe put an assert
> > in ppc_radix64_next_level().
> 
> ppc_radix64_next_level() also covers the needs of partition-scoped
> translation, which translates each process table address. See PATCH 7.
> 
> I'd rather not duplicate more code and leave it as it is.
>  

These patches change the behaviour of some existing paths in a
non-trivial way. I'll wait for the respin and try to review again.

> Thanks,
> 
> C.
> 
> > 
> >> +        pde = ppc_radix64_next_level(cpu, eaddr, pte_addr, &nls, psize,
> >> +                                     fault_cause);
> >> +    } while ((pde & R_PTE_VALID) && !(pde & R_PTE_LEAF));
> >>  
> >> -    /* Check if Leaf Entry -> Page Table Entry -> Stop the Search */
> >> -    if (pde & R_PTE_LEAF) {
> >> +    /* Did we find a valid leaf? */
> >> +    if ((pde & R_PTE_VALID) && (pde & R_PTE_LEAF)) {
> >>          uint64_t rpn = pde & R_PTE_RPN;
> >>          uint64_t mask = (1UL << *psize) - 1;
> >>  
> >>          /* Or high bits of rpn and low bits to ea to form whole real addr */
> >>          *raddr = (rpn & ~mask) | (eaddr & mask);
> >> -        *pte_addr = base_addr + (index * sizeof(pde));
> >> -        return pde;
> >>      }
> >>  
> >> -    /* Next Level of Radix Tree */
> >> -    return ppc_radix64_walk_tree(cpu, eaddr, pde & R_PDE_NLB, pde & R_PDE_NLS,
> >> -                                 raddr, psize, fault_cause, pte_addr);
> >> +    return pde;
> >>  }
> >>  
> >>  static bool validate_pate(PowerPCCPU *cpu, uint64_t lpid, ppc_v3_pate_t *pate)
> > 
> 



^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2020-03-31  9:50 UTC | newest]

Thread overview: 26+ messages
2020-03-30  9:49 [PATCH 0/7] target/ppc: Add support for Radix partition-scoped translation Cédric Le Goater
2020-03-30  9:49 ` [PATCH 1/7] target/ppc: Enforce that the root page directory size must be at least 5 Cédric Le Goater
2020-03-30 10:45   ` Greg Kurz
2020-03-31  0:57   ` David Gibson
2020-03-30  9:49 ` [PATCH 2/7] target/ppc: Introduce a relocation bool in ppc_radix64_handle_mmu_fault() Cédric Le Goater
2020-03-30 13:01   ` Greg Kurz
2020-03-31  0:58   ` David Gibson
2020-03-30  9:49 ` [PATCH 3/7] target/ppc: Assert if HV mode is set when running under a pseries machine Cédric Le Goater
2020-03-30 10:46   ` Greg Kurz
2020-03-31  0:59   ` David Gibson
2020-03-30  9:49 ` [PATCH 4/7] target/ppc: Introduce ppc_radix64_xlate() for Radix tree translation Cédric Le Goater
2020-03-30 14:18   ` Greg Kurz
2020-03-30 15:24     ` Cédric Le Goater
2020-03-30 15:34       ` Cédric Le Goater
2020-03-30 17:07         ` Greg Kurz
2020-03-31  1:13         ` David Gibson
2020-03-31  8:15           ` Cédric Le Goater
2020-03-30  9:49 ` [PATCH 5/7] target/ppc: Rework ppc_radix64_walk_tree() for partition-scoped translation Cédric Le Goater
2020-03-30 17:00   ` Greg Kurz
2020-03-31  9:10     ` Cédric Le Goater
2020-03-31  9:49       ` Greg Kurz
2020-03-30  9:49 ` [PATCH 6/7] target/ppc: Extend ppc_radix64_check_prot() with a 'partition_scoped' bool Cédric Le Goater
2020-03-30 17:01   ` Greg Kurz
2020-03-31  8:22     ` Cédric Le Goater
2020-03-30 17:08   ` Greg Kurz
2020-03-30  9:49 ` [PATCH 7/7] target/ppc: Add support for Radix partition-scoped translation Cédric Le Goater
