From: David Hildenbrand <david@redhat.com>
To: Richard Henderson <richard.henderson@linaro.org>, qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org
Subject: Re: [Qemu-devel] [PATCH 3/6] cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK
Date: Mon, 26 Aug 2019 10:36:57 +0200
Message-ID: <4a9697ec-dad4-db33-5a43-569093f0742a@redhat.com>
In-Reply-To: <20190824213451.31118-4-richard.henderson@linaro.org>

On 24.08.19 23:34, Richard Henderson wrote:
> We had two different mechanisms to force a recheck of the tlb.
> 
> Before TLB_RECHECK was introduced, we had a PAGE_WRITE_INV bit
> that would immediately set TLB_INVALID_MASK, which automatically
> means that a second check of the TLB entry fails.
> 
> We can use the same mechanism to handle small pages.
> Conserve TLB_* bits by removing TLB_RECHECK.
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  include/exec/cpu-all.h |  5 +--
>  accel/tcg/cputlb.c     | 86 +++++++++++-------------------------------
>  2 files changed, 24 insertions(+), 67 deletions(-)
> 
> diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
> index 8323094648..8d07ae23a5 100644
> --- a/include/exec/cpu-all.h
> +++ b/include/exec/cpu-all.h
> @@ -329,14 +329,11 @@ CPUArchState *cpu_copy(CPUArchState *env);
>  #define TLB_NOTDIRTY        (1 << (TARGET_PAGE_BITS - 2))
>  /* Set if TLB entry is an IO callback.  */
>  #define TLB_MMIO            (1 << (TARGET_PAGE_BITS - 3))
> -/* Set if TLB entry must have MMU lookup repeated for every access */
> -#define TLB_RECHECK         (1 << (TARGET_PAGE_BITS - 4))
>  
>  /* Use this mask to check interception with an alignment mask
>   * in a TCG backend.
>   */
> -#define TLB_FLAGS_MASK  (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
> -                         | TLB_RECHECK)
> +#define TLB_FLAGS_MASK  (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO)
>  
>  /**
>   * tlb_hit_page: return true if page aligned @addr is a hit against the
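
An aside for readers following the mechanics: all the TLB_* flags live
in the low bits of the entry's address field, below TARGET_PAGE_BITS,
so an entry that still carries a flag bit can never survive a compare
against a page-aligned lookup address.  A self-contained toy model of
that trick -- deliberately simplified, not QEMU code, all names
invented for illustration:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_BITS    12
    #define PAGE_MASK    (~((uint64_t)(1u << PAGE_BITS) - 1))
    #define INVALID_BIT  ((uint64_t)1 << (PAGE_BITS - 1))

    /* Shaped like the fast-path hit test: an entry with a flag bit
     * still set can never equal a page-aligned address. */
    static bool hit(uint64_t tlb_addr, uint64_t addr)
    {
        return (addr & PAGE_MASK) == tlb_addr;
    }

    int main(void)
    {
        uint64_t page = 0x1000;
        printf("clean entry:   %d\n", hit(page, page));               /* 1 */
        printf("invalid entry: %d\n", hit(page | INVALID_BIT, page)); /* 0 */
        return 0;
    }

So folding TLB_RECHECK into TLB_INVALID_MASK keeps the "miss on every
access" behaviour while freeing up a flag bit.
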
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index d9787cc893..c9576bebcf 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -732,11 +732,8 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
>  
>      address = vaddr_page;
>      if (size < TARGET_PAGE_SIZE) {
> -        /*
> -         * Slow-path the TLB entries; we will repeat the MMU check and TLB
> -         * fill on every access.
> -         */
> -        address |= TLB_RECHECK;
> +        /* Repeat the MMU check and TLB fill on every access.  */
> +        address |= TLB_INVALID_MASK;
>      }
>      if (attrs.byte_swap) {
>          /* Force the access through the I/O slow path.  */
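
For anyone wondering when size < TARGET_PAGE_SIZE actually occurs: if I
remember correctly, the classic case is an M-profile MPU region smaller
than the target page, which is what TLB_RECHECK was originally
introduced for.  A hypothetical fill for a 256-byte region would look
roughly like this (illustrative call only; the real callers live in the
target code):

    /* The entry is written with TLB_INVALID_MASK already set, so
     * tlb_hit() fails on every lookup and each access goes back
     * through tlb_fill() -- exactly what TLB_RECHECK used to arrange. */
    tlb_set_page_with_attrs(cs, vaddr, paddr, attrs, prot, mmu_idx,
                            256 /* < TARGET_PAGE_SIZE */);
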
> @@ -1026,10 +1023,15 @@ static bool victim_tlb_hit(CPUArchState *env, size_t mmu_idx, size_t index,
>    victim_tlb_hit(env, mmu_idx, index, offsetof(CPUTLBEntry, TY), \
>                   (ADDR) & TARGET_PAGE_MASK)
>  
> -/* NOTE: this function can trigger an exception */
> -/* NOTE2: the returned address is not exactly the physical address: it
> - * is actually a ram_addr_t (in system mode; the user mode emulation
> - * version of this function returns a guest virtual address).
> +/*
> + * Return a ram_addr_t for the virtual address for execution.
> + *
> + * Return -1 if we can't translate and execute from an entire page
> + * of RAM.  This will force us to execute by loading and translating
> + * one insn at a time, without caching.
> + *
> + * NOTE: This function will trigger an exception if the page is
> + * not executable.
>   */
>  tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
>  {
> @@ -1043,19 +1045,20 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
>              tlb_fill(env_cpu(env), addr, 0, MMU_INST_FETCH, mmu_idx, 0);
>              index = tlb_index(env, mmu_idx, addr);
>              entry = tlb_entry(env, mmu_idx, addr);
> +
> +            if (unlikely(entry->addr_code & TLB_INVALID_MASK)) {
> +                /*
> +                 * The MMU protection covers a smaller range than a target
> +                 * page, so we must redo the MMU check for every insn.
> +                 */
> +                return -1;
> +            }
>          }
>          assert(tlb_hit(entry->addr_code, addr));
>      }
>  
> -    if (unlikely(entry->addr_code & (TLB_RECHECK | TLB_MMIO))) {
> -        /*
> -         * Return -1 if we can't translate and execute from an entire
> -         * page of RAM here, which will cause us to execute by loading
> -         * and translating one insn at a time, without caching:
> -         *  - TLB_RECHECK: means the MMU protection covers a smaller range
> -         *    than a target page, so we must redo the MMU check every insn
> -         *  - TLB_MMIO: region is not backed by RAM
> -         */
> +    if (unlikely(entry->addr_code & TLB_MMIO)) {
> +        /* The region is not backed by RAM.  */
>          return -1;
>      }
>  
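
For context, the -1 convention is consumed in tb_gen_code(); if I
remember the code correctly, an untranslatable page makes it emit an
uncached single-insn TB, roughly:

    /* Paraphrased from accel/tcg/translate-all.c, from memory;
     * not part of this patch. */
    phys_pc = get_page_addr_code(env, pc);
    if (phys_pc == -1) {
        /* Translate one insn at a time and do not cache the TB. */
        cflags |= CF_NOCACHE | 1;
    }
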
> @@ -1180,7 +1183,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
>      }
>  
>      /* Notice an IO access or a needs-MMU-lookup access */
> -    if (unlikely(tlb_addr & (TLB_MMIO | TLB_RECHECK))) {
> +    if (unlikely(tlb_addr & TLB_MMIO)) {
>          /* There's really nothing that can be done to
>             support this apart from stop-the-world.  */
>          goto stop_the_world;
> @@ -1258,6 +1261,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>              entry = tlb_entry(env, mmu_idx, addr);
>          }
>          tlb_addr = code_read ? entry->addr_code : entry->addr_read;
> +        tlb_addr &= ~TLB_INVALID_MASK;
>      }
>  
>      /* Handle an IO access.  */
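
This local clear is the subtle bit of the whole patch, so spelling it
out with comments (an annotated sketch of the two lines above, not new
code):

    tlb_addr = code_read ? entry->addr_code : entry->addr_read;
    /*
     * For a sub-page mapping the refill above has just set
     * TLB_INVALID_MASK in the entry again, but tlb_fill() already
     * validated *this* access.  Strip the bit from the local copy
     * only: it stays set in the TLB entry itself, so the next access
     * misses again and repeats the MMU check.
     */
    tlb_addr &= ~TLB_INVALID_MASK;
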
> @@ -1265,27 +1269,6 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>          if ((addr & (size - 1)) != 0) {
>              goto do_unaligned_access;
>          }
> -
> -        if (tlb_addr & TLB_RECHECK) {
> -            /*
> -             * This is a TLB_RECHECK access, where the MMU protection
> -             * covers a smaller range than a target page, and we must
> -             * repeat the MMU check here. This tlb_fill() call might
> -             * longjump out if this access should cause a guest exception.
> -             */
> -            tlb_fill(env_cpu(env), addr, size,
> -                     access_type, mmu_idx, retaddr);
> -            index = tlb_index(env, mmu_idx, addr);
> -            entry = tlb_entry(env, mmu_idx, addr);
> -
> -            tlb_addr = code_read ? entry->addr_code : entry->addr_read;
> -            tlb_addr &= ~TLB_RECHECK;
> -            if (!(tlb_addr & ~TARGET_PAGE_MASK)) {
> -                /* RAM access */
> -                goto do_aligned_access;
> -            }
> -        }
> -
>          return io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
>                          mmu_idx, addr, retaddr, access_type, op);
>      }
> @@ -1314,7 +1297,6 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>          return res & MAKE_64BIT_MASK(0, size * 8);
>      }
>  
> - do_aligned_access:
>      haddr = (void *)((uintptr_t)addr + entry->addend);
>      switch (op) {
>      case MO_UB:
> @@ -1509,27 +1491,6 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
>          if ((addr & (size - 1)) != 0) {
>              goto do_unaligned_access;
>          }
> -
> -        if (tlb_addr & TLB_RECHECK) {
> -            /*
> -             * This is a TLB_RECHECK access, where the MMU protection
> -             * covers a smaller range than a target page, and we must
> -             * repeat the MMU check here. This tlb_fill() call might
> -             * longjump out if this access should cause a guest exception.
> -             */
> -            tlb_fill(env_cpu(env), addr, size, MMU_DATA_STORE,
> -                     mmu_idx, retaddr);
> -            index = tlb_index(env, mmu_idx, addr);
> -            entry = tlb_entry(env, mmu_idx, addr);
> -
> -            tlb_addr = tlb_addr_write(entry);
> -            tlb_addr &= ~TLB_RECHECK;
> -            if (!(tlb_addr & ~TARGET_PAGE_MASK)) {
> -                /* RAM access */
> -                goto do_aligned_access;
> -            }
> -        }
> -
>          io_writex(env, &env_tlb(env)->d[mmu_idx].iotlb[index], mmu_idx,
>                    val, addr, retaddr, op);
>          return;
> @@ -1579,7 +1540,6 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
>          return;
>      }
>  
> - do_aligned_access:
>      haddr = (void *)((uintptr_t)addr + entry->addend);
>      switch (op) {
>      case MO_UB:
> 

Much better

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 

Thanks,

David / dhildenb

