On Mon, Jan 25, 2016 at 08:22:51PM +0100, Alexander Graf wrote:
> 
> 
> On 01/25/2016 06:15 AM, David Gibson wrote:
> >ppc_store_slb updates the SLB for PPC cpus with 64-bit hash MMUs.
> >Currently it takes two parameters, which contain values encoded as the
> >register arguments to the slbmte instruction, one register contains the
> >ESID portion of the SLBE and also the slot number, the other contains the
> >VSID portion of the SLBE.
> >
> >We're shortly going to want to do some SLB updates from other code where
> >it is more convenient to supply the slot number and ESID separately, so
> >rework this function and its callers to work this way.
> >
> >As a bonus, this slightly simplifies the emulation of segment registers for
> >when running a 32-bit OS on a 64-bit CPU.
> >
> >Signed-off-by: David Gibson
> >---
> >  target-ppc/kvm.c        |  2 +-
> >  target-ppc/mmu-hash64.c | 24 +++++++++++++-----------
> >  target-ppc/mmu-hash64.h |  3 ++-
> >  target-ppc/mmu_helper.c | 14 +++++---------
> >  4 files changed, 21 insertions(+), 22 deletions(-)
> >
> >diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
> >index 98d7ba6..18c7ba2 100644
> >--- a/target-ppc/kvm.c
> >+++ b/target-ppc/kvm.c
> >@@ -1205,7 +1205,7 @@ int kvm_arch_get_registers(CPUState *cs)
> >              * Only restore valid entries
> >              */
> >             if (rb & SLB_ESID_V) {
> >-                ppc_store_slb(cpu, rb, rs);
> >+                ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfff, rs);
> >             }
> >         }
> > #endif
> >diff --git a/target-ppc/mmu-hash64.c b/target-ppc/mmu-hash64.c
> >index 03e25fd..5a6d33b 100644
> >--- a/target-ppc/mmu-hash64.c
> >+++ b/target-ppc/mmu-hash64.c
> >@@ -135,28 +135,30 @@ void helper_slbie(CPUPPCState *env, target_ulong addr)
> >     }
> > }
> >
> >-int ppc_store_slb(PowerPCCPU *cpu, target_ulong rb, target_ulong rs)
> >+int ppc_store_slb(PowerPCCPU *cpu, target_ulong slot,
> >+                  target_ulong esid, target_ulong vsid)
> > {
> >     CPUPPCState *env = &cpu->env;
> >-    int slot = rb & 0xfff;
> >     ppc_slb_t *slb = &env->slb[slot];
> >
> >-    if (rb & (0x1000 - env->slb_nr)) {
> >-        return -1; /* Reserved bits set or slot too high */
> >+    if (slot >= env->slb_nr) {
> >+        return -1; /* Bad slot number */
> >+    }
> >+    if (esid & ~(SLB_ESID_ESID | SLB_ESID_V)) {
> >+        return -1; /* Reserved bits set */
> >     }
> >-    if (rs & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
> >+    if (vsid & (SLB_VSID_B & ~SLB_VSID_B_1T)) {
> >         return -1; /* Bad segment size */
> >     }
> >-    if ((rs & SLB_VSID_B) && !(env->mmu_model & POWERPC_MMU_1TSEG)) {
> >+    if ((vsid & SLB_VSID_B) && !(env->mmu_model & POWERPC_MMU_1TSEG)) {
> >         return -1; /* 1T segment on MMU that doesn't support it */
> >     }
> >
> >-    /* Mask out the slot number as we store the entry */
> >-    slb->esid = rb & (SLB_ESID_ESID | SLB_ESID_V);
> >-    slb->vsid = rs;
> >+    slb->esid = esid;
> >+    slb->vsid = vsid;
> >
> >     LOG_SLB("%s: %d " TARGET_FMT_lx " - " TARGET_FMT_lx " => %016" PRIx64
> >-            " %016" PRIx64 "\n", __func__, slot, rb, rs,
> >+            " %016" PRIx64 "\n", __func__, slot, esid, vsid,
> >             slb->esid, slb->vsid);
> >
> >     return 0;
> >@@ -196,7 +198,7 @@ void helper_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
> > {
> >     PowerPCCPU *cpu = ppc_env_get_cpu(env);
> >
> >-    if (ppc_store_slb(cpu, rb, rs) < 0) {
> >+    if (ppc_store_slb(cpu, rb & 0xfff, rb & ~0xfff, rs) < 0) {
> 
> This might truncate the esid to 32bits on 32bits hosts, no? Should be
> 0xfffULL instead.

Good point, nice catch.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson