From: David Gibson <david@gibson.dropbear.id.au>
To: Alexander Graf <agraf@suse.de>
Cc: lvivier@redhat.com, thuth@redhat.com, aik@ozlabs.ru,
	qemu-devel@nongnu.org, qemu-ppc@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 04/10] target-ppc: Rework SLB page size lookup
Date: Wed, 27 Jan 2016 11:59:18 +1100	[thread overview]
Message-ID: <20160127005918.GI16692@voom.fritz.box> (raw)
In-Reply-To: <56A679C9.4090704@suse.de>

On Mon, Jan 25, 2016 at 08:38:49PM +0100, Alexander Graf wrote:
> 
> 
> On 01/25/2016 06:15 AM, David Gibson wrote:
> >Currently, the ppc_hash64_page_shift() function looks up a page size based
> >on information in an SLB entry.  It open codes the bit translation for
> >existing CPUs, however different CPU models can have different SLB
> >encodings.  We already store those in the 'sps' table in CPUPPCState, but
> >we don't currently enforce that that actually matches the logic in
> >ppc_hash64_page_shift.
> >
> >This patch reworks lookup of page size from SLB in several ways:
> >   * ppc_store_slb() will now fail (triggering an illegal instruction
> >     exception) if given a bad SLB page size encoding
> >   * On success ppc_store_slb() stores a pointer to the relevant entry in
> >     the page size table in the SLB entry.  This is looked up directly from
> >     the published table of page size encodings, so can't get out of sync.
> >   * ppc_hash64_htab_lookup() and others now use this precached page size
> >     information rather than decoding the SLB values
> >   * Adjust ppc_hash64_pte_raddr() to take a page shift directly
> >     instead of an SLB entry.  Both callers now have easy access to this,
> >     and we'll need the flexibility shortly.
> >
> >Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> >---
> >  target-ppc/cpu.h        |  1 +
> >  target-ppc/machine.c    | 20 ++++++++++++++
> >  target-ppc/mmu-hash64.c | 71 ++++++++++++++++++++++++++-----------------------
> >  3 files changed, 59 insertions(+), 33 deletions(-)
> >
> >diff --git a/target-ppc/cpu.h b/target-ppc/cpu.h
> >index 2bc96b4..0820390 100644
> >--- a/target-ppc/cpu.h
> >+++ b/target-ppc/cpu.h
> >@@ -419,6 +419,7 @@ typedef struct ppc_slb_t ppc_slb_t;
> >  struct ppc_slb_t {
> >      uint64_t esid;
> >      uint64_t vsid;
> >+    const struct ppc_one_seg_page_size *sps;
> >  };
> >  #define MAX_SLB_ENTRIES         64
> >diff --git a/target-ppc/machine.c b/target-ppc/machine.c
> >index b61c060..ca62d3e 100644
> >--- a/target-ppc/machine.c
> >+++ b/target-ppc/machine.c
> >@@ -2,6 +2,7 @@
> >  #include "hw/boards.h"
> >  #include "sysemu/kvm.h"
> >  #include "helper_regs.h"
> >+#include "mmu-hash64.h"
> >  static int cpu_load_old(QEMUFile *f, void *opaque, int version_id)
> >  {
> >@@ -352,11 +353,30 @@ static bool slb_needed(void *opaque)
> >      return (cpu->env.mmu_model & POWERPC_MMU_64);
> >  }
> >+static int slb_post_load(void *opaque, int version_id)
> >+{
> >+    PowerPCCPU *cpu = opaque;
> >+    CPUPPCState *env = &cpu->env;
> >+    int i;
> >+
> >+    /* We've pulled in the raw esid and vsid values from the migration
> >+     * stream, but we need to recompute the page size pointers */
> >+    for (i = 0; i < env->slb_nr; i++) {
> >+        if (ppc_store_slb(cpu, i, env->slb[i].esid, env->slb[i].vsid) < 0) {
> >+            /* Migration source had bad values in its SLB */
> >+            return -1;
> >+        }
> >+    }
> >+
> >+    return 0;
> >+}
> >+
> >  static const VMStateDescription vmstate_slb = {
> >      .name = "cpu/slb",
> >      .version_id = 1,
> >      .minimum_version_id = 1,
> >      .needed = slb_needed,
> >+    .post_load = slb_post_load,
> >      .fields = (VMStateField[]) {
> >          VMSTATE_INT32_EQUAL(env.slb_nr, PowerPCCPU),
> >          VMSTATE_SLB_ARRAY(env.slb, PowerPCCPU, MAX_SLB_ENTRIES),
> >diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> >index 5a6d33b..28ad361 100644
> >--- a/target-ppc/mmu-hash64.c
> >+++ b/target-ppc/mmu-hash64.c
> >@@ -19,6 +19,7 @@
> >   */
> >  #include "cpu.h"
> >  #include "exec/helper-proto.h"
> >+#include "qemu/error-report.h"
> >  #include "sysemu/kvm.h"
> >  #include "kvm_ppc.h"
> >  #include "mmu-hash64.h"
> >@@ -140,6 +141,8 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> >  {
> >      CPUPPCState *env = &cpu->env;
> >      ppc_slb_t *slb = &env->slb[slot];
> >+    const struct ppc_one_seg_page_size *sps = NULL;
> >+    int i;
> >      if (slot >= env->slb_nr) {
> >          return -1; /* Bad slot number */
> >@@ -154,8 +157,29 @@ int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> >          return -1; /* 1T segment on MMU that doesn't support it */
> >      }
> >+    for (i = 0; i < PPC_PAGE_SIZES_MAX_SZ; i++) {
> >+        const struct ppc_one_seg_page_size *sps1 = &env->sps.sps[i];
> >+
> >+        if (!sps1->page_shift) {
> >+            break;
> >+        }
> >+
> >+        if ((vsid & SLB_VSID_LLP_MASK) == sps1->slb_enc) {
> >+            sps = sps1;
> >+            break;
> >+        }
> >+    }
> >+
> >+    if (!sps) {
> >+        error_report("Bad page size encoding in SLB store: slot "TARGET_FMT_lu
> >+                     " esid 0x"TARGET_FMT_lx" vsid 0x"TARGET_FMT_lx,
> >+                     slot, esid, vsid);
> >+        return -1;
> >+    }
> >+
> >      slb->esid = esid;
> >      slb->vsid = vsid;
> >+    slb->sps = sps;
> >      LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016" PRIx64
> >              " %016" PRIx64 "\n", __func__, slot, esid, vsid,
> >@@ -394,24 +418,6 @@ static hwaddr ppc_hash64_pteg_search(PowerPCCPU *cpu, hwaddr hash,
> >      return -1;
> >  }
> >-static uint64_t ppc_hash64_page_shift(ppc_slb_t *slb)
> >-{
> >-    uint64_t epnshift;
> >-
> >-    /* Page size according to the SLB, which we use to generate the
> >-     * EPN for hash table lookup..  When we implement more recent MMU
> >-     * extensions this might be different from the actual page size
> >-     * encoded in the PTE */
> >-    if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_4K) {
> >-        epnshift = TARGET_PAGE_BITS;
> >-    } else if ((slb->vsid & SLB_VSID_LLP_MASK) == SLB_VSID_64K) {
> >-        epnshift = TARGET_PAGE_BITS_64K;
> >-    } else {
> >-        epnshift = TARGET_PAGE_BITS_16M;
> >-    }
> >-    return epnshift;
> >-}
> >-
> >  static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
> >                                       ppc_slb_t *slb, target_ulong eaddr,
> >                                       ppc_hash_pte64_t *pte)
> >@@ -419,21 +425,24 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
> >      CPUPPCState *env = &cpu->env;
> >      hwaddr pte_offset;
> >      hwaddr hash;
> >-    uint64_t vsid, epnshift, epnmask, epn, ptem;
> >+    uint64_t vsid, epnmask, epn, ptem;
> >+
> >+    /* The SLB store path should prevent any bad page size encodings
> >+     * getting in there, so: */
> >+    assert(slb->sps);
> >-    epnshift = ppc_hash64_page_shift(slb);
> >-    epnmask = ~((1ULL << epnshift) - 1);
> >+    epnmask = ~((1ULL << slb->sps->page_shift) - 1);
> >      if (slb->vsid & SLB_VSID_B) {
> >          /* 1TB segment */
> >          vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT_1T;
> >          epn = (eaddr & ~SEGMENT_MASK_1T) & epnmask;
> >-        hash = vsid ^ (vsid << 25) ^ (epn >> epnshift);
> >+        hash = vsid ^ (vsid << 25) ^ (epn >> slb->sps->page_shift);
> >      } else {
> >          /* 256M segment */
> >          vsid = (slb->vsid & SLB_VSID_VSID) >> SLB_VSID_SHIFT;
> >          epn = (eaddr & ~SEGMENT_MASK_256M) & epnmask;
> >-        hash = vsid ^ (epn >> epnshift);
> >+        hash = vsid ^ (epn >> slb->sps->page_shift);
> >      }
> >      ptem = (slb->vsid & SLB_VSID_PTEM) | ((epn >> 16) & HPTE64_V_AVPN);
> >@@ -465,17 +474,12 @@ static hwaddr ppc_hash64_htab_lookup(PowerPCCPU *cpu,
> >      return pte_offset;
> >  }
> >-static hwaddr ppc_hash64_pte_raddr(ppc_slb_t *slb, ppc_hash_pte64_t pte,
> >+static hwaddr ppc_hash64_pte_raddr(unsigned page_shift, ppc_hash_pte64_t pte,
> >                                     target_ulong eaddr)
> >  {
> >-    hwaddr mask;
> >-    int target_page_bits;
> >+    hwaddr mask = (1ULL << page_shift) - 1;
> >      hwaddr rpn = pte.pte1 & HPTE64_R_RPN;
> >-    /*
> >-     * We support 4K, 64K and 16M now
> >-     */
> >-    target_page_bits = ppc_hash64_page_shift(slb);
> >-    mask = (1ULL << target_page_bits) - 1;
> >+
> >      return (rpn & ~mask) | (eaddr & mask);
> >  }
> 
> Isn't this basically deposit64 by now?

True.  I'll remove ppc_hash64_pte_raddr() entirely and use deposit64
instead.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



Thread overview: 24+ messages
2016-01-25  5:15 [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 01/10] target-ppc: Remove unused kvmppc_read_segment_page_sizes() stub David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 02/10] target-ppc: Convert mmu-hash{32, 64}.[ch] from CPUPPCState to PowerPCCPU David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 03/10] target-ppc: Rework ppc_store_slb David Gibson
2016-01-25 19:22   ` Alexander Graf
2016-01-27  0:04     ` David Gibson
2016-01-27  7:32       ` Thomas Huth
2016-01-27 10:07         ` David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 04/10] target-ppc: Rework SLB page size lookup David Gibson
2016-01-25 19:38   ` Alexander Graf
2016-01-27  0:59     ` David Gibson [this message]
2016-01-25  5:15 ` [Qemu-devel] [PATCH 05/10] target-ppc: Use actual page size encodings from HPTE David Gibson
2016-01-25 13:26   ` David Gibson
2016-01-25 20:18   ` Alexander Graf
2016-01-27  0:40     ` David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 06/10] target-ppc: Remove unused mmu models from ppc_tlb_invalidate_one David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 07/10] target-ppc: Split 44x tlbiva from ppc_tlb_invalidate_one() David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 08/10] target-ppc: Add new TLB invalidate by HPTE call for hash64 MMUs David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 09/10] target-ppc: Helper to determine page size information from hpte alone David Gibson
2016-01-25  5:15 ` [Qemu-devel] [PATCH 10/10] target-ppc: Allow more page sizes for POWER7 & POWER8 in TCG David Gibson
2016-01-25 11:10 ` [Qemu-devel] [PATCH 00/10] Clean up page size handling for ppc 64-bit hash MMUs with TCG David Gibson
2016-01-25 20:36   ` Alexander Graf
2016-01-26  5:41     ` David Gibson
2016-01-26  7:09       ` Alexander Graf
