From: Alex Bennée
Subject: Re: [Qemu-devel] [RFC v6 10/14] softmmu: Simplify helper_*_st_name, wrap unaligned code
Date: Fri, 08 Jan 2016 11:19:30 +0000
Message-ID: <87ziwgb9od.fsf@linaro.org>
In-reply-to: <1450082498-27109-11-git-send-email-a.rigo@virtualopensystems.com>
References: <1450082498-27109-1-git-send-email-a.rigo@virtualopensystems.com> <1450082498-27109-11-git-send-email-a.rigo@virtualopensystems.com>
To: Alvise Rigo
Cc: mttcg@listserver.greensocs.com, claudio.fontana@huawei.com, qemu-devel@nongnu.org, pbonzini@redhat.com, jani.kokkonen@huawei.com, tech@virtualopensystems.com, rth@twiddle.net

Alvise Rigo writes:

> Attempting to simplify the helper_*_st_name, wrap the
> do_unaligned_access code into an inline function.
> Remove also the goto statement.
>
> Suggested-by: Jani Kokkonen
> Suggested-by: Claudio Fontana
> Signed-off-by: Alvise Rigo
> ---
>  softmmu_template.h | 96 ++++++++++++++++++++++++++++++++++--------------------
>  1 file changed, 60 insertions(+), 36 deletions(-)
>
> diff --git a/softmmu_template.h b/softmmu_template.h
> index d3d5902..92f92b1 100644
> --- a/softmmu_template.h
> +++ b/softmmu_template.h
> @@ -370,6 +370,32 @@ static inline void glue(io_write, SUFFIX)(CPUArchState *env,
>                                            iotlbentry->attrs);
>  }
>
> +static inline void glue(helper_le_st_name, _do_unl_access)(CPUArchState *env,
> +                                                            DATA_TYPE val,
> +                                                            target_ulong addr,
> +                                                            TCGMemOpIdx oi,
> +                                                            unsigned mmu_idx,
> +                                                            uintptr_t retaddr)
> +{
> +    int i;
> +
> +    if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
> +        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
> +                             mmu_idx, retaddr);
> +    }
> +    /* XXX: not efficient, but simple */
> +    /* Note: relies on the fact that tlb_fill() does not remove the
> +     * previous page from the TLB cache. */
> +    for (i = DATA_SIZE - 1; i >= 0; i--) {
> +        /* Little-endian extract. */
> +        uint8_t val8 = val >> (i * 8);
> +        /* Note the adjustment at the beginning of the function.
> +           Undo that for the recursion. */
> +        glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
> +                                        oi, retaddr + GETPC_ADJ);
> +    }
> +}
> +
>  void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> @@ -433,7 +459,8 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
>          return;
>      } else {
>          if ((addr & (DATA_SIZE - 1)) != 0) {
> -            goto do_unaligned_access;
> +            glue(helper_le_st_name, _do_unl_access)(env, val, addr, mmu_idx,
> +                                                    oi, retaddr);

I've just noticed this drops an implicit return.
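The goto used to land in a block that ended with a return, so the
converted call site presumably wants to keep that early exit rather
than falling through to the iotlbentry/MMIO code below. Something
along these lines, an untested sketch with the arguments in the
order of the new helper's prototype:

        if ((addr & (DATA_SIZE - 1)) != 0) {
            glue(helper_le_st_name, _do_unl_access)(env, val, addr, oi,
                                                    mmu_idx, retaddr);
            /* The old goto target finished with a return, so exit here
             * instead of carrying on to the MMIO write below. */
            return;
        }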
However, I'm seeing if I can put together an RFC clean-up patch set for
softmmu based on these plus a few other clean-ups. I'll CC you when it's
done.

>          }
>          iotlbentry = &env->iotlb[mmu_idx][index];
>
> @@ -449,23 +476,8 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
>      if (DATA_SIZE > 1
>          && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
>                       >= TARGET_PAGE_SIZE)) {
> -        int i;
> -    do_unaligned_access:
> -        if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
> -            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
> -                                 mmu_idx, retaddr);
> -        }
> -        /* XXX: not efficient, but simple */
> -        /* Note: relies on the fact that tlb_fill() does not remove the
> -         * previous page from the TLB cache. */
> -        for (i = DATA_SIZE - 1; i >= 0; i--) {
> -            /* Little-endian extract. */
> -            uint8_t val8 = val >> (i * 8);
> -            /* Note the adjustment at the beginning of the function.
> -               Undo that for the recursion. */
> -            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
> -                                            oi, retaddr + GETPC_ADJ);
> -        }
> +        glue(helper_le_st_name, _do_unl_access)(env, val, addr, oi, mmu_idx,
> +                                                retaddr);
>          return;
>      }
>
> @@ -485,6 +497,32 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
>  }
>
>  #if DATA_SIZE > 1
> +static inline void glue(helper_be_st_name, _do_unl_access)(CPUArchState *env,
> +                                                            DATA_TYPE val,
> +                                                            target_ulong addr,
> +                                                            TCGMemOpIdx oi,
> +                                                            unsigned mmu_idx,
> +                                                            uintptr_t retaddr)
> +{
> +    int i;
> +
> +    if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
> +        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
> +                             mmu_idx, retaddr);
> +    }
> +    /* XXX: not efficient, but simple */
> +    /* Note: relies on the fact that tlb_fill() does not remove the
> +     * previous page from the TLB cache. */
> +    for (i = DATA_SIZE - 1; i >= 0; i--) {
> +        /* Big-endian extract. */
> +        uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
> +        /* Note the adjustment at the beginning of the function.
> +           Undo that for the recursion. */
> +        glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
> +                                        oi, retaddr + GETPC_ADJ);
> +    }
> +}
> +
>  void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> @@ -548,7 +586,8 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
>          return;
>      } else {
>          if ((addr & (DATA_SIZE - 1)) != 0) {
> -            goto do_unaligned_access;
> +            glue(helper_be_st_name, _do_unl_access)(env, val, addr, mmu_idx,
> +                                                    oi, retaddr);
>          }
>          iotlbentry = &env->iotlb[mmu_idx][index];
>
> @@ -564,23 +603,8 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
>      if (DATA_SIZE > 1
>          && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
>                       >= TARGET_PAGE_SIZE)) {
> -        int i;
> -    do_unaligned_access:
> -        if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
> -            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
> -                                 mmu_idx, retaddr);
> -        }
> -        /* XXX: not efficient, but simple */
> -        /* Note: relies on the fact that tlb_fill() does not remove the
> -         * previous page from the TLB cache. */
> -        for (i = DATA_SIZE - 1; i >= 0; i--) {
> -            /* Big-endian extract. */
> -            uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
> -            /* Note the adjustment at the beginning of the function.
> -               Undo that for the recursion. */
> -            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
> -                                            oi, retaddr + GETPC_ADJ);
> -        }
> +        glue(helper_be_st_name, _do_unl_access)(env, val, addr, oi, mmu_idx,
> +                                                retaddr);
>          return;
>      }

--
Alex Bennée