From: Richard Henderson <richard.henderson@linaro.org>
To: Rebecca Cran <rebecca@nuviainc.com>,
	Peter Maydell <peter.maydell@linaro.org>,
	qemu-arm@nongnu.org
Cc: qemu-devel@nongnu.org
Subject: Re: [PATCH v4 1/3] target/arm: Add support for FEAT_TLBIRANGE
Date: Tue, 16 Mar 2021 12:09:45 -0600	[thread overview]
Message-ID: <8124cf7a-1634-7232-465a-172aeec47d07@linaro.org> (raw)
In-Reply-To: <20210316154910.25804-2-rebecca@nuviainc.com>

On 3/16/21 9:49 AM, Rebecca Cran wrote:
> +    for (page = addr; page < (addr + length); page += TARGET_PAGE_SIZE) {

This test means that it's impossible to flush the last page of the address 
space (where addr + length wraps to 0).  I think it's better to do

   for (l = 0; l < length; l += TARGET_PAGE_SIZE) {
       page = addr + l;
       ...
   }

> +        for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
> +            if ((idxmap >> mmu_idx) & 1) {
> +                tlb_flush_page_bits_locked(env, mmu_idx, page, bits);

Hmm.  I'm not keen on this.  Structured this way, you're not able to notice the 
special cases within tlb_flush_page_bits_locked, where we flush the entire tlb 
-- and in that case you do not need to continue the outer loop for this mmuidx.

> +                tb_flush_jmp_cache(cpu, page);

This does not need to be in the mmuidx loop.  But since the above means that the 
mmuidx loop should become the outer loop, this would go in a separate page loop by 
itself.

> +void tlb_flush_page_range_bits_by_mmuidx(CPUState *cpu,
> +                                         target_ulong addr,
> +                                         target_ulong length,
> +                                         uint16_t idxmap,
> +                                         unsigned bits)
> +{
> +    TLBFlushPageRangeBitsByMMUIdxData d;
> +    TLBFlushPageRangeBitsByMMUIdxData *p;
> +
> +    /* This should already be page aligned */
> +    addr &= TARGET_PAGE_BITS;
> +
> +    /* If all bits are significant, this devolves to tlb_flush_page. */
> +    if (bits >= TARGET_LONG_BITS) {
> +        tlb_flush_page_by_mmuidx(cpu, addr, idxmap);
> +        return;
> +    }

This case is incorrect: even when all bits are significant, there is still a 
whole range of pages to flush, so this cannot devolve to a single 
tlb_flush_page_by_mmuidx call.

The cputlb changes should have remained a separate patch.

> @@ -4759,6 +4759,241 @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
>                                                     ARMMMUIdxBit_SE3, bits);
>   }
>   
> +#ifdef TARGET_AARCH64
> +static uint64_t tlbi_aa64_range_get_length(CPUARMState *env,
> +                                           uint64_t value)
> +{
> +    unsigned int page_size;
> +    unsigned int page_size_granule;
> +    uint64_t num;
> +    uint64_t scale;
> +    uint64_t exponent;
> +    uint64_t length;
> +
> +    num = extract64(value, 39, 4);
> +    scale = extract64(value, 44, 2);
> +    page_size_granule = extract64(value, 46, 2);
> +
> +    switch (page_size_granule) {
> +    case 1:
> +      page_size = 4096;

Indentation is off?

> +      break;
> +    case 2:
> +      page_size = 16384;
> +      break;
> +    case 3:
> +      page_size = 65536;

You might as well have this as page_shift = {12,14,16}, or perhaps page_shift = 
page_size_granule * 2 + 10 instead of the full switch.

> +    exponent = (5 * scale) + 1;
> +    length = ((num + 1) << exponent) * page_size;

   length = (num + 1) << (exponent + page_shift);

> +    mask = vae1_tlbmask(env);
> +    if (regime_has_2_ranges(mask)) {

You can't pass in mask.

All of the mmuidx will have the same form, so ctz32(mask) would pick out the 
mmuidx for the first bit.

> +    if (regime_has_2_ranges(secure ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2)) {

Again.  Only this time we know that E2 & SE2 have one range.  Only (S)EL1&0 and 
(S)EL2&0 have two ranges.

> +    if (regime_has_2_ranges(ARMMMUIdxBit_SE3)) {

Likewise, E3 has only one range.


r~


Thread overview: 7+ messages
2021-03-16 15:49 [PATCH v4 0/3] target/arm: Add support for FEAT_TLBIOS and FEAT_TLBIRANGE Rebecca Cran
2021-03-16 15:49 ` [PATCH v4 1/3] target/arm: Add support for FEAT_TLBIRANGE Rebecca Cran
2021-03-16 18:09   ` Richard Henderson [this message]
2021-03-16 21:13     ` Rebecca Cran
2021-03-16 15:49 ` [PATCH v4 2/3] target/arm: Add support for FEAT_TLBIOS Rebecca Cran
2021-03-16 15:49 ` [PATCH v4 3/3] target/arm: set ID_AA64ISAR0.TLB to 2 for max AARCH64 CPU type Rebecca Cran
2021-03-16 17:03 ` [PATCH v4 0/3] target/arm: Add support for FEAT_TLBIOS and FEAT_TLBIRANGE no-reply
