* [Qemu-devel] [PATCH v3 00/10] tcg: Direct block chaining clean-up
@ 2016-04-10 21:45 Sergey Fedorov
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 01/10] tcg: Clean up direct block chaining data fields Sergey Fedorov
                   ` (9 more replies)
  0 siblings, 10 replies; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-10 21:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Bennée, Sergey Fedorov, Paolo Bonzini,
	Peter Crosthwaite, Richard Henderson

From: Sergey Fedorov <serge.fdrv@gmail.com>

This series combines a set of patches meant to improve the overall code
structure and readability of the direct block chaining mechanism. It is
also a step towards making TB chaining thread-safe.

The series' tree can be found in a public git repository [1].

[1] https://github.com/sergefdrv/qemu/tree/tb-chaining-cleanup-v3

Summary of changes:
 Changes in v3:
  * New patch to clean up safety checks [PATCH v3 09/10]
  * New patch to eliminate unneeded checks in user-mode [PATCH v3 10/10]
 Changes in v2:
  * Eliminated duplicate dereference of 'ptb' in tb_jmp_remove() [PATCH v2 2/8]
  * Tweaked a comment [PATCH v2 4/8]
  * Complete rewrite [PATCH v2 5/8]
  * Tweaked a comment; eliminated duplicate dereference of 'ptb' in
    tb_jmp_unlink() [PATCH v2 8/8]

Sergey Fedorov (10):
  tcg: Clean up direct block chaining data fields
  tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB
  tcg: Rearrange tb_link_page() to avoid forward declaration
  tcg: Init TB's direct jumps before making it visible
  tcg: Clarify thread safety check in tb_add_jump()
  tcg: Rename tb_jmp_remove() to tb_remove_from_jmp_list()
  tcg: Extract removing of jumps to TB from tb_phys_invalidate()
  tcg: Clean up tb_jmp_unlink()
  tcg: Clean up direct block chaining safety checks
  tcg: Moderate direct block chaining safety checks in user mode

 cpu-exec.c                    |   7 +-
 include/exec/exec-all.h       |  70 ++++++----
 target-alpha/translate.c      |  13 +-
 target-arm/translate-a64.c    |  11 +-
 target-arm/translate.c        |  25 +++-
 target-cris/translate.c       |  24 +++-
 target-i386/translate.c       |  31 +++--
 target-lm32/translate.c       |  29 ++++-
 target-m68k/translate.c       |  26 +++-
 target-microblaze/translate.c |  23 +++-
 target-mips/translate.c       |  28 +++-
 target-moxie/translate.c      |  29 ++++-
 target-openrisc/translate.c   |  28 +++-
 target-ppc/translate.c        |  28 +++-
 target-s390x/translate.c      |  25 +++-
 target-sh4/translate.c        |  29 ++++-
 target-sparc/translate.c      |  32 ++++-
 target-tricore/translate.c    |  28 +++-
 target-unicore32/translate.c  |  24 +++-
 target-xtensa/translate.c     |  20 +++
 tcg/aarch64/tcg-target.inc.c  |   7 +-
 tcg/arm/tcg-target.inc.c      |   8 +-
 tcg/i386/tcg-target.inc.c     |   8 +-
 tcg/ia64/tcg-target.inc.c     |   6 +-
 tcg/mips/tcg-target.inc.c     |   8 +-
 tcg/ppc/tcg-target.inc.c      |   6 +-
 tcg/s390/tcg-target.inc.c     |  11 +-
 tcg/sparc/tcg-target.inc.c    |   9 +-
 tcg/tcg.h                     |   6 +-
 tcg/tci/tcg-target.inc.c      |  10 +-
 translate-all.c               | 292 ++++++++++++++++++++++--------------------
 31 files changed, 605 insertions(+), 296 deletions(-)

-- 
2.8.1


* [Qemu-devel] [PATCH v3 01/10] tcg: Clean up direct block chaining data fields
  2016-04-10 21:45 [Qemu-devel] [PATCH v3 00/10] tcg: Direct block chaining clean-up Sergey Fedorov
@ 2016-04-10 21:45 ` Sergey Fedorov
  2016-04-19 10:02   ` Alex Bennée
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 02/10] tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB Sergey Fedorov
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-10 21:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Bennée, Sergey Fedorov, Paolo Bonzini,
	Peter Crosthwaite, Richard Henderson, Sergey Fedorov,
	Claudio Fontana, Andrzej Zaborowski, Aurelien Jarno,
	Vassili Karpov (malc),
	Alexander Graf, Blue Swirl, Stefan Weil, qemu-arm

From: Sergey Fedorov <serge.fdrv@gmail.com>

Briefly describe in a comment how direct block chaining is done. It
should help in understanding the data fields that follow.

Rename some fields in TranslationBlock and TCGContext structures to
better reflect their purpose (dropping excessive 'tb_' prefix in
TranslationBlock but keeping it in TCGContext):
   tb_next_offset  =>  jmp_reset_offset
   tb_jmp_offset   =>  jmp_insn_offset
   tb_next         =>  jmp_target_addr
   jmp_next        =>  jmp_list_next
   jmp_first       =>  jmp_list_first

Avoid using a magic constant for the invalid offset used to indicate
that no n-th jump was generated; use a named constant instead.

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
---
 include/exec/exec-all.h      | 44 ++++++++++++++++++++++++--------------
 tcg/aarch64/tcg-target.inc.c |  7 +++---
 tcg/arm/tcg-target.inc.c     |  8 +++----
 tcg/i386/tcg-target.inc.c    |  8 +++----
 tcg/ia64/tcg-target.inc.c    |  6 +++---
 tcg/mips/tcg-target.inc.c    |  8 +++----
 tcg/ppc/tcg-target.inc.c     |  6 +++---
 tcg/s390/tcg-target.inc.c    | 11 +++++-----
 tcg/sparc/tcg-target.inc.c   |  9 ++++----
 tcg/tcg.h                    |  6 +++---
 tcg/tci/tcg-target.inc.c     | 10 ++++-----
 translate-all.c              | 51 +++++++++++++++++++++++---------------------
 12 files changed, 96 insertions(+), 78 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 736209505a68..38a149cc5e0c 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -257,20 +257,32 @@ struct TranslationBlock {
     struct TranslationBlock *page_next[2];
     tb_page_addr_t page_addr[2];
 
-    /* the following data are used to directly call another TB from
-       the code of this one. */
-    uint16_t tb_next_offset[2]; /* offset of original jump target */
+    /* The following data are used to directly call another TB from
+     * the code of this one. This can be done either by emitting direct or
+     * indirect native jump instructions. These jumps are reset so that the TB
+     * just continues its execution. The TB can be linked to another one by
+     * setting one of the jump targets (or patching the jump instruction). Only
+     * two such jumps are supported.
+     */
+    uint16_t jmp_reset_offset[2]; /* offset of original jump target */
+#define TB_JMP_RESET_OFFSET_INVALID 0xffff /* indicates no jump generated */
 #ifdef USE_DIRECT_JUMP
-    uint16_t tb_jmp_offset[2]; /* offset of jump instruction */
+    uint16_t jmp_insn_offset[2]; /* offset of native jump instruction */
 #else
-    uintptr_t tb_next[2]; /* address of jump generated code */
+    uintptr_t jmp_target_addr[2]; /* target address for indirect jump */
 #endif
-    /* list of TBs jumping to this one. This is a circular list using
-       the two least significant bits of the pointers to tell what is
-       the next pointer: 0 = jmp_next[0], 1 = jmp_next[1], 2 =
-       jmp_first */
-    struct TranslationBlock *jmp_next[2];
-    struct TranslationBlock *jmp_first;
+    /* Each TB has an associated circular list of TBs jumping to this one.
+     * jmp_list_first points to the first TB jumping to this one.
+     * jmp_list_next is used to point to the next TB in a list.
+     * Since each TB can have two jumps, it can participate in two lists.
+     * The two least significant bits of a pointer are used to choose which
+     * data field holds a pointer to the next TB:
+     * 0 => jmp_list_next[0], 1 => jmp_list_next[1], 2 => jmp_list_first.
+     * In other words, 0/1 tells which jump is used in the pointed TB,
+     * and 2 means that this is a pointer back to the target TB of this list.
+     */
+    struct TranslationBlock *jmp_list_next[2];
+    struct TranslationBlock *jmp_list_first;
 };
 
 #include "qemu/thread.h"
@@ -359,7 +371,7 @@ void tb_set_jmp_target1(uintptr_t jmp_addr, uintptr_t addr);
 static inline void tb_set_jmp_target(TranslationBlock *tb,
                                      int n, uintptr_t addr)
 {
-    uint16_t offset = tb->tb_jmp_offset[n];
+    uint16_t offset = tb->jmp_insn_offset[n];
     tb_set_jmp_target1((uintptr_t)(tb->tc_ptr + offset), addr);
 }
 
@@ -369,7 +381,7 @@ static inline void tb_set_jmp_target(TranslationBlock *tb,
 static inline void tb_set_jmp_target(TranslationBlock *tb,
                                      int n, uintptr_t addr)
 {
-    tb->tb_next[n] = addr;
+    tb->jmp_target_addr[n] = addr;
 }
 
 #endif
@@ -378,7 +390,7 @@ static inline void tb_add_jump(TranslationBlock *tb, int n,
                                TranslationBlock *tb_next)
 {
     /* NOTE: this test is only needed for thread safety */
-    if (!tb->jmp_next[n]) {
+    if (!tb->jmp_list_next[n]) {
         qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
                                "Linking TBs %p [" TARGET_FMT_lx
                                "] index %d -> %p [" TARGET_FMT_lx "]\n",
@@ -388,8 +400,8 @@ static inline void tb_add_jump(TranslationBlock *tb, int n,
         tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc_ptr);
 
         /* add in TB jmp circular list */
-        tb->jmp_next[n] = tb_next->jmp_first;
-        tb_next->jmp_first = (TranslationBlock *)((uintptr_t)(tb) | (n));
+        tb->jmp_list_next[n] = tb_next->jmp_list_first;
+        tb_next->jmp_list_first = (TranslationBlock *)((uintptr_t)tb | n);
     }
 }
 
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index 0ed10a974121..08efdf41da48 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -1294,12 +1294,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
 #ifndef USE_DIRECT_JUMP
 #error "USE_DIRECT_JUMP required for aarch64"
 #endif
-        assert(s->tb_jmp_offset != NULL); /* consistency for USE_DIRECT_JUMP */
-        s->tb_jmp_offset[a0] = tcg_current_code_size(s);
+        /* consistency for USE_DIRECT_JUMP */
+        assert(s->tb_jmp_insn_offset != NULL);
+        s->tb_jmp_insn_offset[a0] = tcg_current_code_size(s);
         /* actual branch destination will be patched by
            aarch64_tb_set_jmp_target later, beware retranslation. */
         tcg_out_goto_noaddr(s);
-        s->tb_next_offset[a0] = tcg_current_code_size(s);
+        s->tb_jmp_reset_offset[a0] = tcg_current_code_size(s);
         break;
 
     case INDEX_op_br:
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index 3edf6a6f971c..a9147620b073 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1647,17 +1647,17 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
         tcg_out_goto(s, COND_AL, tb_ret_addr);
         break;
     case INDEX_op_goto_tb:
-        if (s->tb_jmp_offset) {
+        if (s->tb_jmp_insn_offset) {
             /* Direct jump method */
-            s->tb_jmp_offset[args[0]] = tcg_current_code_size(s);
+            s->tb_jmp_insn_offset[args[0]] = tcg_current_code_size(s);
             tcg_out_b_noaddr(s, COND_AL);
         } else {
             /* Indirect jump method */
-            intptr_t ptr = (intptr_t)(s->tb_next + args[0]);
+            intptr_t ptr = (intptr_t)(s->tb_jmp_target_addr + args[0]);
             tcg_out_movi32(s, COND_AL, TCG_REG_R0, ptr & ~0xfff);
             tcg_out_ld32_12(s, COND_AL, TCG_REG_PC, TCG_REG_R0, ptr & 0xfff);
         }
-        s->tb_next_offset[args[0]] = tcg_current_code_size(s);
+        s->tb_jmp_reset_offset[args[0]] = tcg_current_code_size(s);
         break;
     case INDEX_op_br:
         tcg_out_goto_label(s, COND_AL, arg_label(args[0]));
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 9187d34caf6d..2f98cae97f3b 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -1775,17 +1775,17 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
         tcg_out_jmp(s, tb_ret_addr);
         break;
     case INDEX_op_goto_tb:
-        if (s->tb_jmp_offset) {
+        if (s->tb_jmp_insn_offset) {
             /* direct jump method */
             tcg_out8(s, OPC_JMP_long); /* jmp im */
-            s->tb_jmp_offset[args[0]] = tcg_current_code_size(s);
+            s->tb_jmp_insn_offset[args[0]] = tcg_current_code_size(s);
             tcg_out32(s, 0);
         } else {
             /* indirect jump method */
             tcg_out_modrm_offset(s, OPC_GRP5, EXT5_JMPN_Ev, -1,
-                                 (intptr_t)(s->tb_next + args[0]));
+                                 (intptr_t)(s->tb_jmp_target_addr + args[0]));
         }
-        s->tb_next_offset[args[0]] = tcg_current_code_size(s);
+        s->tb_jmp_reset_offset[args[0]] = tcg_current_code_size(s);
         break;
     case INDEX_op_br:
         tcg_out_jxx(s, JCC_JMP, arg_label(args[0]), 0);
diff --git a/tcg/ia64/tcg-target.inc.c b/tcg/ia64/tcg-target.inc.c
index 62d654943c20..261861f90c3a 100644
--- a/tcg/ia64/tcg-target.inc.c
+++ b/tcg/ia64/tcg-target.inc.c
@@ -881,13 +881,13 @@ static void tcg_out_exit_tb(TCGContext *s, tcg_target_long arg)
 
 static inline void tcg_out_goto_tb(TCGContext *s, TCGArg arg)
 {
-    if (s->tb_jmp_offset) {
+    if (s->tb_jmp_insn_offset) {
         /* direct jump method */
         tcg_abort();
     } else {
         /* indirect jump method */
         tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R2,
-                     (tcg_target_long)(s->tb_next + arg));
+                     (tcg_target_long)(s->tb_jmp_target_addr + arg));
         tcg_out_bundle(s, MmI,
                        tcg_opc_m1 (TCG_REG_P0, OPC_LD8_M1,
                                    TCG_REG_R2, TCG_REG_R2),
@@ -900,7 +900,7 @@ static inline void tcg_out_goto_tb(TCGContext *s, TCGArg arg)
                        tcg_opc_b4 (TCG_REG_P0, OPC_BR_SPTK_MANY_B4,
                                    TCG_REG_B6));
     }
-    s->tb_next_offset[arg] = tcg_current_code_size(s);
+    s->tb_jmp_reset_offset[arg] = tcg_current_code_size(s);
 }
 
 static inline void tcg_out_jmp(TCGContext *s, TCGArg addr)
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index 682e19897db0..b8c240b32196 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1397,19 +1397,19 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
         }
         break;
     case INDEX_op_goto_tb:
-        if (s->tb_jmp_offset) {
+        if (s->tb_jmp_insn_offset) {
             /* direct jump method */
-            s->tb_jmp_offset[a0] = tcg_current_code_size(s);
+            s->tb_jmp_insn_offset[a0] = tcg_current_code_size(s);
             /* Avoid clobbering the address during retranslation.  */
             tcg_out32(s, OPC_J | (*(uint32_t *)s->code_ptr & 0x3ffffff));
         } else {
             /* indirect jump method */
             tcg_out_ld(s, TCG_TYPE_PTR, TCG_TMP0, TCG_REG_ZERO,
-                       (uintptr_t)(s->tb_next + a0));
+                       (uintptr_t)(s->tb_jmp_target_addr + a0));
             tcg_out_opc_reg(s, OPC_JR, 0, TCG_TMP0, 0);
         }
         tcg_out_nop(s);
-        s->tb_next_offset[a0] = tcg_current_code_size(s);
+        s->tb_jmp_reset_offset[a0] = tcg_current_code_size(s);
         break;
     case INDEX_op_br:
         tcg_out_brcond(s, TCG_COND_EQ, TCG_REG_ZERO, TCG_REG_ZERO,
diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
index 8c1c2dfa9b22..10394079ad9d 100644
--- a/tcg/ppc/tcg-target.inc.c
+++ b/tcg/ppc/tcg-target.inc.c
@@ -1894,17 +1894,17 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
         tcg_out_b(s, 0, tb_ret_addr);
         break;
     case INDEX_op_goto_tb:
-        tcg_debug_assert(s->tb_jmp_offset);
+        tcg_debug_assert(s->tb_jmp_insn_offset);
         /* Direct jump.  Ensure the next insns are 8-byte aligned. */
         if ((uintptr_t)s->code_ptr & 7) {
             tcg_out32(s, NOP);
         }
-        s->tb_jmp_offset[args[0]] = tcg_current_code_size(s);
+        s->tb_jmp_insn_offset[args[0]] = tcg_current_code_size(s);
         /* To be replaced by either a branch+nop or a load into TMP1.  */
         s->code_ptr += 2;
         tcg_out32(s, MTSPR | RS(TCG_REG_TMP1) | CTR);
         tcg_out32(s, BCCTR | BO_ALWAYS);
-        s->tb_next_offset[args[0]] = tcg_current_code_size(s);
+        s->tb_jmp_reset_offset[args[0]] = tcg_current_code_size(s);
         break;
     case INDEX_op_br:
         {
diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
index fbf97bb2e15d..e95b04b0e278 100644
--- a/tcg/s390/tcg-target.inc.c
+++ b/tcg/s390/tcg-target.inc.c
@@ -1715,17 +1715,18 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
 
     case INDEX_op_goto_tb:
-        if (s->tb_jmp_offset) {
+        if (s->tb_jmp_insn_offset) {
             tcg_out16(s, RIL_BRCL | (S390_CC_ALWAYS << 4));
-            s->tb_jmp_offset[args[0]] = tcg_current_code_size(s);
+            s->tb_jmp_insn_offset[args[0]] = tcg_current_code_size(s);
             s->code_ptr += 2;
         } else {
-            /* load address stored at s->tb_next + args[0] */
-            tcg_out_ld_abs(s, TCG_TYPE_PTR, TCG_TMP0, s->tb_next + args[0]);
+            /* load address stored at s->tb_jmp_target_addr + args[0] */
+            tcg_out_ld_abs(s, TCG_TYPE_PTR, TCG_TMP0,
+                           s->tb_jmp_target_addr + args[0]);
             /* and go there */
             tcg_out_insn(s, RR, BCR, S390_CC_ALWAYS, TCG_TMP0);
         }
-        s->tb_next_offset[args[0]] = tcg_current_code_size(s);
+        s->tb_jmp_reset_offset[args[0]] = tcg_current_code_size(s);
         break;
 
     OP_32_64(ld8u):
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 54df1bc42432..a611885a2aaf 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -1229,18 +1229,19 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         }
         break;
     case INDEX_op_goto_tb:
-        if (s->tb_jmp_offset) {
+        if (s->tb_jmp_insn_offset) {
             /* direct jump method */
-            s->tb_jmp_offset[a0] = tcg_current_code_size(s);
+            s->tb_jmp_insn_offset[a0] = tcg_current_code_size(s);
             /* Make sure to preserve links during retranslation.  */
             tcg_out32(s, CALL | (*s->code_ptr & ~INSN_OP(-1)));
         } else {
             /* indirect jump method */
-            tcg_out_ld_ptr(s, TCG_REG_T1, (uintptr_t)(s->tb_next + a0));
+            tcg_out_ld_ptr(s, TCG_REG_T1,
+                           (uintptr_t)(s->tb_jmp_target_addr + a0));
             tcg_out_arithi(s, TCG_REG_G0, TCG_REG_T1, 0, JMPL);
         }
         tcg_out_nop(s);
-        s->tb_next_offset[a0] = tcg_current_code_size(s);
+        s->tb_jmp_reset_offset[a0] = tcg_current_code_size(s);
         break;
     case INDEX_op_br:
         tcg_out_bpcc(s, COND_A, BPCC_PT, arg_label(a0));
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 40c8fbe2ae64..ea4ff02308fc 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -510,9 +510,9 @@ struct TCGContext {
 
     /* goto_tb support */
     tcg_insn_unit *code_buf;
-    uintptr_t *tb_next;
-    uint16_t *tb_next_offset;
-    uint16_t *tb_jmp_offset; /* != NULL if USE_DIRECT_JUMP */
+    uint16_t *tb_jmp_reset_offset; /* tb->jmp_reset_offset */
+    uint16_t *tb_jmp_insn_offset; /* tb->jmp_insn_offset if USE_DIRECT_JUMP */
+    uintptr_t *tb_jmp_target_addr; /* tb->jmp_target_addr if !USE_DIRECT_JUMP */
 
     /* liveness analysis */
     uint16_t *op_dead_args; /* for each operation, each bit tells if the
diff --git a/tcg/tci/tcg-target.inc.c b/tcg/tci/tcg-target.inc.c
index 4afe4d7a8d59..4e91687abd1c 100644
--- a/tcg/tci/tcg-target.inc.c
+++ b/tcg/tci/tcg-target.inc.c
@@ -553,17 +553,17 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
         tcg_out64(s, args[0]);
         break;
     case INDEX_op_goto_tb:
-        if (s->tb_jmp_offset) {
+        if (s->tb_jmp_insn_offset) {
             /* Direct jump method. */
-            assert(args[0] < ARRAY_SIZE(s->tb_jmp_offset));
-            s->tb_jmp_offset[args[0]] = tcg_current_code_size(s);
+            assert(args[0] < ARRAY_SIZE(s->tb_jmp_insn_offset));
+            s->tb_jmp_insn_offset[args[0]] = tcg_current_code_size(s);
             tcg_out32(s, 0);
         } else {
             /* Indirect jump method. */
             TODO();
         }
-        assert(args[0] < ARRAY_SIZE(s->tb_next_offset));
-        s->tb_next_offset[args[0]] = tcg_current_code_size(s);
+        assert(args[0] < ARRAY_SIZE(s->tb_jmp_reset_offset));
+        s->tb_jmp_reset_offset[args[0]] = tcg_current_code_size(s);
         break;
     case INDEX_op_br:
         tci_out_label(s, arg_label(args[0]));
diff --git a/translate-all.c b/translate-all.c
index 8329ea60eeda..33ca06c663d4 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -930,7 +930,7 @@ static inline void tb_jmp_remove(TranslationBlock *tb, int n)
     TranslationBlock *tb1, **ptb;
     unsigned int n1;
 
-    ptb = &tb->jmp_next[n];
+    ptb = &tb->jmp_list_next[n];
     tb1 = *ptb;
     if (tb1) {
         /* find tb(n) in circular list */
@@ -942,15 +942,15 @@ static inline void tb_jmp_remove(TranslationBlock *tb, int n)
                 break;
             }
             if (n1 == 2) {
-                ptb = &tb1->jmp_first;
+                ptb = &tb1->jmp_list_first;
             } else {
-                ptb = &tb1->jmp_next[n1];
+                ptb = &tb1->jmp_list_next[n1];
             }
         }
         /* now we can suppress tb(n) from the list */
-        *ptb = tb->jmp_next[n];
+        *ptb = tb->jmp_list_next[n];
 
-        tb->jmp_next[n] = NULL;
+        tb->jmp_list_next[n] = NULL;
     }
 }
 
@@ -958,7 +958,8 @@ static inline void tb_jmp_remove(TranslationBlock *tb, int n)
    another TB */
 static inline void tb_reset_jump(TranslationBlock *tb, int n)
 {
-    tb_set_jmp_target(tb, n, (uintptr_t)(tb->tc_ptr + tb->tb_next_offset[n]));
+    uintptr_t addr = (uintptr_t)(tb->tc_ptr + tb->jmp_reset_offset[n]);
+    tb_set_jmp_target(tb, n, addr);
 }
 
 /* invalidate one TB */
@@ -1002,19 +1003,21 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
     tb_jmp_remove(tb, 1);
 
     /* suppress any remaining jumps to this TB */
-    tb1 = tb->jmp_first;
+    tb1 = tb->jmp_list_first;
     for (;;) {
         n1 = (uintptr_t)tb1 & 3;
         if (n1 == 2) {
             break;
         }
         tb1 = (TranslationBlock *)((uintptr_t)tb1 & ~3);
-        tb2 = tb1->jmp_next[n1];
+        tb2 = tb1->jmp_list_next[n1];
         tb_reset_jump(tb1, n1);
-        tb1->jmp_next[n1] = NULL;
+        tb1->jmp_list_next[n1] = NULL;
         tb1 = tb2;
     }
-    tb->jmp_first = (TranslationBlock *)((uintptr_t)tb | 2); /* fail safe */
+
+    /* fail safe */
+    tb->jmp_list_first = (TranslationBlock *)((uintptr_t)tb | 2);
 
     tcg_ctx.tb_ctx.tb_phys_invalidate_count++;
 }
@@ -1099,15 +1102,15 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
     trace_translate_block(tb, tb->pc, tb->tc_ptr);
 
     /* generate machine code */
-    tb->tb_next_offset[0] = 0xffff;
-    tb->tb_next_offset[1] = 0xffff;
-    tcg_ctx.tb_next_offset = tb->tb_next_offset;
+    tb->jmp_reset_offset[0] = TB_JMP_RESET_OFFSET_INVALID;
+    tb->jmp_reset_offset[1] = TB_JMP_RESET_OFFSET_INVALID;
+    tcg_ctx.tb_jmp_reset_offset = tb->jmp_reset_offset;
 #ifdef USE_DIRECT_JUMP
-    tcg_ctx.tb_jmp_offset = tb->tb_jmp_offset;
-    tcg_ctx.tb_next = NULL;
+    tcg_ctx.tb_jmp_insn_offset = tb->jmp_insn_offset;
+    tcg_ctx.tb_jmp_target_addr = NULL;
 #else
-    tcg_ctx.tb_jmp_offset = NULL;
-    tcg_ctx.tb_next = tb->tb_next;
+    tcg_ctx.tb_jmp_insn_offset = NULL;
+    tcg_ctx.tb_jmp_target_addr = tb->jmp_target_addr;
 #endif
 
 #ifdef CONFIG_PROFILER
@@ -1488,15 +1491,15 @@ static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
         tb->page_addr[1] = -1;
     }
 
-    tb->jmp_first = (TranslationBlock *)((uintptr_t)tb | 2);
-    tb->jmp_next[0] = NULL;
-    tb->jmp_next[1] = NULL;
+    tb->jmp_list_first = (TranslationBlock *)((uintptr_t)tb | 2);
+    tb->jmp_list_next[0] = NULL;
+    tb->jmp_list_next[1] = NULL;
 
     /* init original jump addresses */
-    if (tb->tb_next_offset[0] != 0xffff) {
+    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
         tb_reset_jump(tb, 0);
     }
-    if (tb->tb_next_offset[1] != 0xffff) {
+    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
         tb_reset_jump(tb, 1);
     }
 
@@ -1689,9 +1692,9 @@ void dump_exec_info(FILE *f, fprintf_function cpu_fprintf)
         if (tb->page_addr[1] != -1) {
             cross_page++;
         }
-        if (tb->tb_next_offset[0] != 0xffff) {
+        if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
             direct_jmp_count++;
-            if (tb->tb_next_offset[1] != 0xffff) {
+            if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
                 direct_jmp2_count++;
             }
         }
-- 
2.8.1


* [Qemu-devel] [PATCH v3 02/10] tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB
  2016-04-10 21:45 [Qemu-devel] [PATCH v3 00/10] tcg: Direct block chaining clean-up Sergey Fedorov
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 01/10] tcg: Clean up direct block chaining data fields Sergey Fedorov
@ 2016-04-10 21:45 ` Sergey Fedorov
  2016-04-19 10:34   ` Alex Bennée
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 03/10] tcg: Rearrange tb_link_page() to avoid forward declaration Sergey Fedorov
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-10 21:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Bennée, Sergey Fedorov, Paolo Bonzini,
	Peter Crosthwaite, Richard Henderson, Sergey Fedorov

From: Sergey Fedorov <serge.fdrv@gmail.com>

These fields do not contain pure pointers to a TranslationBlock
structure, so uintptr_t is the most appropriate type for them.
Also add an assert to ensure that the two least significant bits of the
pointer are zero before it is assigned to jmp_list_first.

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
---

Changes in v2:
 * Eliminated duplicate dereference of 'ptb' in tb_jmp_remove()

 include/exec/exec-all.h | 13 ++++++++-----
 translate-all.c         | 38 ++++++++++++++++++++------------------
 2 files changed, 28 insertions(+), 23 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 38a149cc5e0c..b055716ed690 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -275,14 +275,16 @@ struct TranslationBlock {
      * jmp_list_first points to the first TB jumping to this one.
      * jmp_list_next is used to point to the next TB in a list.
      * Since each TB can have two jumps, it can participate in two lists.
-     * The two least significant bits of a pointer are used to choose which
-     * data field holds a pointer to the next TB:
+     * jmp_list_first and jmp_list_next are 4-byte aligned pointers to a
+     * TranslationBlock structure, but their two least significant bits
+     * are used to encode which data field of the pointed TB should
+     * be used to traverse the list further from that TB:
      * 0 => jmp_list_next[0], 1 => jmp_list_next[1], 2 => jmp_list_first.
      * In other words, 0/1 tells which jump is used in the pointed TB,
      * and 2 means that this is a pointer back to the target TB of this list.
      */
-    struct TranslationBlock *jmp_list_next[2];
-    struct TranslationBlock *jmp_list_first;
+    uintptr_t jmp_list_next[2];
+    uintptr_t jmp_list_first;
 };
 
 #include "qemu/thread.h"
@@ -401,7 +403,8 @@ static inline void tb_add_jump(TranslationBlock *tb, int n,
 
         /* add in TB jmp circular list */
         tb->jmp_list_next[n] = tb_next->jmp_list_first;
-        tb_next->jmp_list_first = (TranslationBlock *)((uintptr_t)tb | n);
+        assert(((uintptr_t)tb & 3) == 0);
+        tb_next->jmp_list_first = (uintptr_t)tb | n;
     }
 }
 
diff --git a/translate-all.c b/translate-all.c
index 33ca06c663d4..ba71ff73f55f 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -927,17 +927,17 @@ static inline void tb_page_remove(TranslationBlock **ptb, TranslationBlock *tb)
 
 static inline void tb_jmp_remove(TranslationBlock *tb, int n)
 {
-    TranslationBlock *tb1, **ptb;
+    TranslationBlock *tb1;
+    uintptr_t *ptb, ntb;
     unsigned int n1;
 
     ptb = &tb->jmp_list_next[n];
-    tb1 = *ptb;
-    if (tb1) {
+    if (*ptb) {
         /* find tb(n) in circular list */
         for (;;) {
-            tb1 = *ptb;
-            n1 = (uintptr_t)tb1 & 3;
-            tb1 = (TranslationBlock *)((uintptr_t)tb1 & ~3);
+            ntb = *ptb;
+            n1 = ntb & 3;
+            tb1 = (TranslationBlock *)(ntb & ~3);
             if (n1 == n && tb1 == tb) {
                 break;
             }
@@ -950,7 +950,7 @@ static inline void tb_jmp_remove(TranslationBlock *tb, int n)
         /* now we can suppress tb(n) from the list */
         *ptb = tb->jmp_list_next[n];
 
-        tb->jmp_list_next[n] = NULL;
+        tb->jmp_list_next[n] = (uintptr_t)NULL;
     }
 }
 
@@ -969,7 +969,7 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
     PageDesc *p;
     unsigned int h, n1;
     tb_page_addr_t phys_pc;
-    TranslationBlock *tb1, *tb2;
+    uintptr_t tb1, tb2;
 
     /* remove the TB from the hash list */
     phys_pc = tb->page_addr[0] + (tb->pc & ~TARGET_PAGE_MASK);
@@ -1005,19 +1005,20 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
     /* suppress any remaining jumps to this TB */
     tb1 = tb->jmp_list_first;
     for (;;) {
-        n1 = (uintptr_t)tb1 & 3;
+        TranslationBlock *tmp_tb;
+        n1 = tb1 & 3;
         if (n1 == 2) {
             break;
         }
-        tb1 = (TranslationBlock *)((uintptr_t)tb1 & ~3);
-        tb2 = tb1->jmp_list_next[n1];
-        tb_reset_jump(tb1, n1);
-        tb1->jmp_list_next[n1] = NULL;
+        tmp_tb = (TranslationBlock *)(tb1 & ~3);
+        tb2 = tmp_tb->jmp_list_next[n1];
+        tb_reset_jump(tmp_tb, n1);
+        tmp_tb->jmp_list_next[n1] = (uintptr_t)NULL;
         tb1 = tb2;
     }
 
-    /* fail safe */
-    tb->jmp_list_first = (TranslationBlock *)((uintptr_t)tb | 2);
+    assert(((uintptr_t)tb & 3) == 0);
+    tb->jmp_list_first = (uintptr_t)tb | 2; /* fail safe */
 
     tcg_ctx.tb_ctx.tb_phys_invalidate_count++;
 }
@@ -1491,9 +1492,10 @@ static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
         tb->page_addr[1] = -1;
     }
 
-    tb->jmp_list_first = (TranslationBlock *)((uintptr_t)tb | 2);
-    tb->jmp_list_next[0] = NULL;
-    tb->jmp_list_next[1] = NULL;
+    assert(((uintptr_t)tb & 3) == 0);
+    tb->jmp_list_first = (uintptr_t)tb | 2;
+    tb->jmp_list_next[0] = (uintptr_t)NULL;
+    tb->jmp_list_next[1] = (uintptr_t)NULL;
 
     /* init original jump addresses */
     if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
-- 
2.8.1


* [Qemu-devel] [PATCH v3 03/10] tcg: Rearrange tb_link_page() to avoid forward declaration
  2016-04-10 21:45 [Qemu-devel] [PATCH v3 00/10] tcg: Direct block chaining clean-up Sergey Fedorov
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 01/10] tcg: Clean up direct block chaining data fields Sergey Fedorov
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 02/10] tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB Sergey Fedorov
@ 2016-04-10 21:45 ` Sergey Fedorov
  2016-04-18 17:20   ` Alex Bennée
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 04/10] tcg: Init TB's direct jumps before making it visible Sergey Fedorov
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-10 21:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Bennée, Sergey Fedorov, Paolo Bonzini,
	Peter Crosthwaite, Richard Henderson, Sergey Fedorov

From: Sergey Fedorov <serge.fdrv@gmail.com>

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 translate-all.c | 204 ++++++++++++++++++++++++++++----------------------------
 1 file changed, 101 insertions(+), 103 deletions(-)

diff --git a/translate-all.c b/translate-all.c
index ba71ff73f55f..7ac7916f2792 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -153,8 +153,6 @@ void tb_lock_reset(void)
 #endif
 }
 
-static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
-                         tb_page_addr_t phys_page2);
 static TranslationBlock *tb_find_pc(uintptr_t tc_ptr);
 
 void cpu_gen_init(void)
@@ -1052,6 +1050,107 @@ static void build_page_bitmap(PageDesc *p)
     }
 }
 
+/* add the tb in the target page and protect it if necessary
+ *
+ * Called with mmap_lock held for user-mode emulation.
+ */
+static inline void tb_alloc_page(TranslationBlock *tb,
+                                 unsigned int n, tb_page_addr_t page_addr)
+{
+    PageDesc *p;
+#ifndef CONFIG_USER_ONLY
+    bool page_already_protected;
+#endif
+
+    tb->page_addr[n] = page_addr;
+    p = page_find_alloc(page_addr >> TARGET_PAGE_BITS, 1);
+    tb->page_next[n] = p->first_tb;
+#ifndef CONFIG_USER_ONLY
+    page_already_protected = p->first_tb != NULL;
+#endif
+    p->first_tb = (TranslationBlock *)((uintptr_t)tb | n);
+    invalidate_page_bitmap(p);
+
+#if defined(CONFIG_USER_ONLY)
+    if (p->flags & PAGE_WRITE) {
+        target_ulong addr;
+        PageDesc *p2;
+        int prot;
+
+        /* force the host page as non writable (writes will have a
+           page fault + mprotect overhead) */
+        page_addr &= qemu_host_page_mask;
+        prot = 0;
+        for (addr = page_addr; addr < page_addr + qemu_host_page_size;
+            addr += TARGET_PAGE_SIZE) {
+
+            p2 = page_find(addr >> TARGET_PAGE_BITS);
+            if (!p2) {
+                continue;
+            }
+            prot |= p2->flags;
+            p2->flags &= ~PAGE_WRITE;
+          }
+        mprotect(g2h(page_addr), qemu_host_page_size,
+                 (prot & PAGE_BITS) & ~PAGE_WRITE);
+#ifdef DEBUG_TB_INVALIDATE
+        printf("protecting code page: 0x" TARGET_FMT_lx "\n",
+               page_addr);
+#endif
+    }
+#else
+    /* if some code is already present, then the pages are already
+       protected. So we handle the case where only the first TB is
+       allocated in a physical page */
+    if (!page_already_protected) {
+        tlb_protect_code(page_addr);
+    }
+#endif
+}
+
+/* add a new TB and link it to the physical page tables. phys_page2 is
+ * (-1) to indicate that only one page contains the TB.
+ *
+ * Called with mmap_lock held for user-mode emulation.
+ */
+static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
+                         tb_page_addr_t phys_page2)
+{
+    unsigned int h;
+    TranslationBlock **ptb;
+
+    /* add in the physical hash table */
+    h = tb_phys_hash_func(phys_pc);
+    ptb = &tcg_ctx.tb_ctx.tb_phys_hash[h];
+    tb->phys_hash_next = *ptb;
+    *ptb = tb;
+
+    /* add in the page list */
+    tb_alloc_page(tb, 0, phys_pc & TARGET_PAGE_MASK);
+    if (phys_page2 != -1) {
+        tb_alloc_page(tb, 1, phys_page2);
+    } else {
+        tb->page_addr[1] = -1;
+    }
+
+    assert(((uintptr_t)tb & 3) == 0);
+    tb->jmp_list_first = (uintptr_t)tb | 2;
+    tb->jmp_list_next[0] = (uintptr_t)NULL;
+    tb->jmp_list_next[1] = (uintptr_t)NULL;
+
+    /* init original jump addresses */
+    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
+        tb_reset_jump(tb, 0);
+    }
+    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
+        tb_reset_jump(tb, 1);
+    }
+
+#ifdef DEBUG_TB_CHECK
+    tb_page_check();
+#endif
+}
+
 /* Called with mmap_lock held for user mode emulation.  */
 TranslationBlock *tb_gen_code(CPUState *cpu,
                               target_ulong pc, target_ulong cs_base,
@@ -1409,107 +1508,6 @@ static void tb_invalidate_phys_page(tb_page_addr_t addr,
 }
 #endif
 
-/* add the tb in the target page and protect it if necessary
- *
- * Called with mmap_lock held for user-mode emulation.
- */
-static inline void tb_alloc_page(TranslationBlock *tb,
-                                 unsigned int n, tb_page_addr_t page_addr)
-{
-    PageDesc *p;
-#ifndef CONFIG_USER_ONLY
-    bool page_already_protected;
-#endif
-
-    tb->page_addr[n] = page_addr;
-    p = page_find_alloc(page_addr >> TARGET_PAGE_BITS, 1);
-    tb->page_next[n] = p->first_tb;
-#ifndef CONFIG_USER_ONLY
-    page_already_protected = p->first_tb != NULL;
-#endif
-    p->first_tb = (TranslationBlock *)((uintptr_t)tb | n);
-    invalidate_page_bitmap(p);
-
-#if defined(CONFIG_USER_ONLY)
-    if (p->flags & PAGE_WRITE) {
-        target_ulong addr;
-        PageDesc *p2;
-        int prot;
-
-        /* force the host page as non writable (writes will have a
-           page fault + mprotect overhead) */
-        page_addr &= qemu_host_page_mask;
-        prot = 0;
-        for (addr = page_addr; addr < page_addr + qemu_host_page_size;
-            addr += TARGET_PAGE_SIZE) {
-
-            p2 = page_find(addr >> TARGET_PAGE_BITS);
-            if (!p2) {
-                continue;
-            }
-            prot |= p2->flags;
-            p2->flags &= ~PAGE_WRITE;
-          }
-        mprotect(g2h(page_addr), qemu_host_page_size,
-                 (prot & PAGE_BITS) & ~PAGE_WRITE);
-#ifdef DEBUG_TB_INVALIDATE
-        printf("protecting code page: 0x" TARGET_FMT_lx "\n",
-               page_addr);
-#endif
-    }
-#else
-    /* if some code is already present, then the pages are already
-       protected. So we handle the case where only the first TB is
-       allocated in a physical page */
-    if (!page_already_protected) {
-        tlb_protect_code(page_addr);
-    }
-#endif
-}
-
-/* add a new TB and link it to the physical page tables. phys_page2 is
- * (-1) to indicate that only one page contains the TB.
- *
- * Called with mmap_lock held for user-mode emulation.
- */
-static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
-                         tb_page_addr_t phys_page2)
-{
-    unsigned int h;
-    TranslationBlock **ptb;
-
-    /* add in the physical hash table */
-    h = tb_phys_hash_func(phys_pc);
-    ptb = &tcg_ctx.tb_ctx.tb_phys_hash[h];
-    tb->phys_hash_next = *ptb;
-    *ptb = tb;
-
-    /* add in the page list */
-    tb_alloc_page(tb, 0, phys_pc & TARGET_PAGE_MASK);
-    if (phys_page2 != -1) {
-        tb_alloc_page(tb, 1, phys_page2);
-    } else {
-        tb->page_addr[1] = -1;
-    }
-
-    assert(((uintptr_t)tb & 3) == 0);
-    tb->jmp_list_first = (uintptr_t)tb | 2;
-    tb->jmp_list_next[0] = (uintptr_t)NULL;
-    tb->jmp_list_next[1] = (uintptr_t)NULL;
-
-    /* init original jump addresses */
-    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
-        tb_reset_jump(tb, 0);
-    }
-    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
-        tb_reset_jump(tb, 1);
-    }
-
-#ifdef DEBUG_TB_CHECK
-    tb_page_check();
-#endif
-}
-
 /* find the TB 'tb' such that tb[0].tc_ptr <= tc_ptr <
    tb[1].tc_ptr. Return NULL if not found */
 static TranslationBlock *tb_find_pc(uintptr_t tc_ptr)
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Qemu-devel] [PATCH v3 04/10] tcg: Init TB's direct jumps before making it visible
  2016-04-10 21:45 [Qemu-devel] [PATCH v3 00/10] tcg: Direct block chaining clean-up Sergey Fedorov
                   ` (2 preceding siblings ...)
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 03/10] tcg: Rearrange tb_link_page() to avoid forward declaration Sergey Fedorov
@ 2016-04-10 21:45 ` Sergey Fedorov
  2016-04-19 10:55   ` Alex Bennée
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 05/10] tcg: Clarify thread safety check in tb_add_jump() Sergey Fedorov
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-10 21:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Bennée, Sergey Fedorov, Paolo Bonzini,
	Peter Crosthwaite, Richard Henderson, Sergey Fedorov

From: Sergey Fedorov <serge.fdrv@gmail.com>

Initialize a TB's direct jump list data fields and reset its jumps before
tb_link_page() puts it into the physical hash table and the physical
page list. This way the TB is completely initialized before it becomes
visible.
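
The ordering this enforces is the usual initialize-then-publish discipline:
every field is written before the structure becomes reachable from shared
data. A minimal sketch of that pattern (illustrative names, not QEMU's
actual structures; full thread safety would also need a release barrier on
the publishing store):

```c
#include <stddef.h>

typedef struct Obj {
    int field_a;
    int field_b;
    struct Obj *hash_next;
} Obj;

static Obj *hash_head; /* stands in for a physical hash table bucket */

/* Fully initialize obj *before* linking it, so no reader can observe
 * a half-constructed object once it is reachable. */
static void publish(Obj *obj, int a, int b)
{
    obj->field_a = a;
    obj->field_b = b;
    obj->hash_next = hash_head;
    hash_head = obj; /* publication point: obj is now visible */
}
```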

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
---

Changes in v2:
 * Tweaked a comment

 translate-all.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/translate-all.c b/translate-all.c
index 7ac7916f2792..dfa7f0d64e76 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -1133,19 +1133,6 @@ static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
         tb->page_addr[1] = -1;
     }
 
-    assert(((uintptr_t)tb & 3) == 0);
-    tb->jmp_list_first = (uintptr_t)tb | 2;
-    tb->jmp_list_next[0] = (uintptr_t)NULL;
-    tb->jmp_list_next[1] = (uintptr_t)NULL;
-
-    /* init original jump addresses */
-    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
-        tb_reset_jump(tb, 0);
-    }
-    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
-        tb_reset_jump(tb, 1);
-    }
-
 #ifdef DEBUG_TB_CHECK
     tb_page_check();
 #endif
@@ -1254,6 +1241,20 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
         ROUND_UP((uintptr_t)gen_code_buf + gen_code_size + search_size,
                  CODE_GEN_ALIGN);
 
+    /* init jump list */
+    assert(((uintptr_t)tb & 3) == 0);
+    tb->jmp_list_first = (uintptr_t)tb | 2;
+    tb->jmp_list_next[0] = (uintptr_t)NULL;
+    tb->jmp_list_next[1] = (uintptr_t)NULL;
+
+    /* init original jump addresses which have been set during tcg_gen_code() */
+    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
+        tb_reset_jump(tb, 0);
+    }
+    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
+        tb_reset_jump(tb, 1);
+    }
+
     /* check next page if needed */
     virt_page2 = (pc + tb->size - 1) & TARGET_PAGE_MASK;
     phys_page2 = -1;
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Qemu-devel] [PATCH v3 05/10] tcg: Clarify thread safety check in tb_add_jump()
  2016-04-10 21:45 [Qemu-devel] [PATCH v3 00/10] tcg: Direct block chaining clean-up Sergey Fedorov
                   ` (3 preceding siblings ...)
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 04/10] tcg: Init TB's direct jumps before making it visible Sergey Fedorov
@ 2016-04-10 21:45 ` Sergey Fedorov
  2016-04-19 11:01   ` Alex Bennée
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 06/10] tcg: Rename tb_jmp_remove() to tb_remove_from_jmp_list() Sergey Fedorov
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-10 21:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Bennée, Sergey Fedorov, Paolo Bonzini,
	Peter Crosthwaite, Richard Henderson, Sergey Fedorov

From: Sergey Fedorov <serge.fdrv@gmail.com>

The check is to make sure that another thread hasn't already done the
same while we were outside of tb_lock. Mention this in a comment.
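
The shape of the check is a plain recheck-after-reacquiring-the-lock: two
threads may both decide to link the same jump, so whichever enters second
must notice the work is already done and back off. A sketch with
hypothetical names (a single slot standing in for tb->jmp_list_next[n],
not QEMU's actual API):

```c
#include <stdint.h>

/* 0 means "not linked yet"; assumed to be inspected with the lock held */
static uintptr_t jmp_slot;

/* Returns 1 if we performed the linking, 0 if another thread already
 * did it while we were outside of the lock. */
static int try_link(uintptr_t target)
{
    if (jmp_slot) {
        return 0;      /* already linked; nothing to do */
    }
    jmp_slot = target; /* stands in for patching the native jump */
    return 1;
}
```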

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
---

Changes in v2:
 * Typo fixed in the commit title
 * Complete rewrite of the commit body and the patch based on Paolo's comments

 include/exec/exec-all.h | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index b055716ed690..8e81ef5fb2c2 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -391,21 +391,24 @@ static inline void tb_set_jmp_target(TranslationBlock *tb,
 static inline void tb_add_jump(TranslationBlock *tb, int n,
                                TranslationBlock *tb_next)
 {
-    /* NOTE: this test is only needed for thread safety */
-    if (!tb->jmp_list_next[n]) {
-        qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
-                               "Linking TBs %p [" TARGET_FMT_lx
-                               "] index %d -> %p [" TARGET_FMT_lx "]\n",
-                               tb->tc_ptr, tb->pc, n,
-                               tb_next->tc_ptr, tb_next->pc);
-        /* patch the native jump address */
-        tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc_ptr);
-
-        /* add in TB jmp circular list */
-        tb->jmp_list_next[n] = tb_next->jmp_list_first;
-        assert(((uintptr_t)tb & 3) == 0);
-        tb_next->jmp_list_first = (uintptr_t)tb | n;
+    if (tb->jmp_list_next[n]) {
+        /* Another thread has already done this while we were
+         * outside of the lock; nothing to do in this case */
+        return;
     }
+    qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
+                           "Linking TBs %p [" TARGET_FMT_lx
+                           "] index %d -> %p [" TARGET_FMT_lx "]\n",
+                           tb->tc_ptr, tb->pc, n,
+                           tb_next->tc_ptr, tb_next->pc);
+
+    /* patch the native jump address */
+    tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc_ptr);
+
+    /* add in TB jmp circular list */
+    tb->jmp_list_next[n] = tb_next->jmp_list_first;
+    assert(((uintptr_t)tb & 3) == 0);
+    tb_next->jmp_list_first = (uintptr_t)tb | n;
 }
 
 /* GETRA is the true target of the return instruction that we'll execute,
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Qemu-devel] [PATCH v3 06/10] tcg: Rename tb_jmp_remove() to tb_remove_from_jmp_list()
  2016-04-10 21:45 [Qemu-devel] [PATCH v3 00/10] tcg: Direct block chaining clean-up Sergey Fedorov
                   ` (4 preceding siblings ...)
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 05/10] tcg: Clarify thread safety check in tb_add_jump() Sergey Fedorov
@ 2016-04-10 21:45 ` Sergey Fedorov
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 07/10] tcg: Extract removing of jumps to TB from tb_phys_invalidate() Sergey Fedorov
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-10 21:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Bennée, Sergey Fedorov, Paolo Bonzini,
	Peter Crosthwaite, Richard Henderson, Sergey Fedorov

From: Sergey Fedorov <serge.fdrv@gmail.com>

tb_jmp_remove() was only used to remove a TB from the list of all TBs
jumping to the same TB, namely the n-th jump destination of the given TB.
Add a comment briefly describing the function's behavior and rename it to
better reflect its purpose.
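
These jump lists pack the jump index into the two low bits of an aligned
TB pointer, with the value 2 marking the list head (each TB has only jumps
0 and 1). A sketch of that encoding with a stand-in struct (names are
illustrative, not QEMU's):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for TranslationBlock: only the jump-list fields */
typedef struct FakeTB {
    uintptr_t jmp_list_next[2]; /* tagged links, one per outgoing jump */
    uintptr_t jmp_list_first;   /* tagged head of "who jumps to me" list */
} FakeTB;

/* Pack a (TB, jump index) pair into one word.  Works because TBs are
 * at least 4-byte aligned, so the two low bits are free; n is 0 or 1
 * for a real entry, and 2 marks the list head. */
static uintptr_t tb_jmp_entry(FakeTB *tb, unsigned int n)
{
    assert(((uintptr_t)tb & 3) == 0);
    return (uintptr_t)tb | n;
}

static FakeTB *tb_jmp_entry_tb(uintptr_t e)
{
    return (FakeTB *)(e & ~(uintptr_t)3);
}

static unsigned int tb_jmp_entry_n(uintptr_t e)
{
    return e & 3;
}
```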

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 translate-all.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/translate-all.c b/translate-all.c
index dfa7f0d64e76..c2a800a7d8a0 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -923,7 +923,8 @@ static inline void tb_page_remove(TranslationBlock **ptb, TranslationBlock *tb)
     }
 }
 
-static inline void tb_jmp_remove(TranslationBlock *tb, int n)
+/* remove the TB from a list of TBs jumping to the n-th jump target of the TB */
+static inline void tb_remove_from_jmp_list(TranslationBlock *tb, int n)
 {
     TranslationBlock *tb1;
     uintptr_t *ptb, ntb;
@@ -997,8 +998,8 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
     }
 
     /* suppress this TB from the two jump lists */
-    tb_jmp_remove(tb, 0);
-    tb_jmp_remove(tb, 1);
+    tb_remove_from_jmp_list(tb, 0);
+    tb_remove_from_jmp_list(tb, 1);
 
     /* suppress any remaining jumps to this TB */
     tb1 = tb->jmp_list_first;
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Qemu-devel] [PATCH v3 07/10] tcg: Extract removing of jumps to TB from tb_phys_invalidate()
  2016-04-10 21:45 [Qemu-devel] [PATCH v3 00/10] tcg: Direct block chaining clean-up Sergey Fedorov
                   ` (5 preceding siblings ...)
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 06/10] tcg: Rename tb_jmp_remove() to tb_remove_from_jmp_list() Sergey Fedorov
@ 2016-04-10 21:45 ` Sergey Fedorov
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 08/10] tcg: Clean up tb_jmp_unlink() Sergey Fedorov
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-10 21:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Bennée, Sergey Fedorov, Paolo Bonzini,
	Peter Crosthwaite, Richard Henderson, Sergey Fedorov

From: Sergey Fedorov <serge.fdrv@gmail.com>

Move the code for removing jumps to a TB out of tb_phys_invalidate() to
a separate static inline function tb_jmp_unlink(). This simplifies
tb_phys_invalidate() and improves code structure.

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 translate-all.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/translate-all.c b/translate-all.c
index c2a800a7d8a0..2d82ce5735a1 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -961,14 +961,37 @@ static inline void tb_reset_jump(TranslationBlock *tb, int n)
     tb_set_jmp_target(tb, n, addr);
 }
 
+/* remove any jumps to the TB */
+static inline void tb_jmp_unlink(TranslationBlock *tb)
+{
+    uintptr_t tb1, tb2;
+    unsigned int n1;
+
+    tb1 = tb->jmp_list_first;
+    for (;;) {
+        TranslationBlock *tmp_tb;
+        n1 = tb1 & 3;
+        if (n1 == 2) {
+            break;
+        }
+        tmp_tb = (TranslationBlock *)(tb1 & ~3);
+        tb2 = tmp_tb->jmp_list_next[n1];
+        tb_reset_jump(tmp_tb, n1);
+        tmp_tb->jmp_list_next[n1] = (uintptr_t)NULL;
+        tb1 = tb2;
+    }
+
+    assert(((uintptr_t)tb & 3) == 0);
+    tb->jmp_list_first = (uintptr_t)tb | 2; /* fail safe */
+}
+
 /* invalidate one TB */
 void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
 {
     CPUState *cpu;
     PageDesc *p;
-    unsigned int h, n1;
+    unsigned int h;
     tb_page_addr_t phys_pc;
-    uintptr_t tb1, tb2;
 
     /* remove the TB from the hash list */
     phys_pc = tb->page_addr[0] + (tb->pc & ~TARGET_PAGE_MASK);
@@ -1002,22 +1025,7 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
     tb_remove_from_jmp_list(tb, 1);
 
     /* suppress any remaining jumps to this TB */
-    tb1 = tb->jmp_list_first;
-    for (;;) {
-        TranslationBlock *tmp_tb;
-        n1 = tb1 & 3;
-        if (n1 == 2) {
-            break;
-        }
-        tmp_tb = (TranslationBlock *)(tb1 & ~3);
-        tb2 = tmp_tb->jmp_list_next[n1];
-        tb_reset_jump(tmp_tb, n1);
-        tmp_tb->jmp_list_next[n1] = (uintptr_t)NULL;
-        tb1 = tb2;
-    }
-
-    assert(((uintptr_t)tb & 3) == 0);
-    tb->jmp_list_first = (uintptr_t)tb | 2; /* fail safe */
+    tb_jmp_unlink(tb);
 
     tcg_ctx.tb_ctx.tb_phys_invalidate_count++;
 }
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Qemu-devel] [PATCH v3 08/10] tcg: Clean up tb_jmp_unlink()
  2016-04-10 21:45 [Qemu-devel] [PATCH v3 00/10] tcg: Direct block chaining clean-up Sergey Fedorov
                   ` (6 preceding siblings ...)
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 07/10] tcg: Extract removing of jumps to TB from tb_phys_invalidate() Sergey Fedorov
@ 2016-04-10 21:45 ` Sergey Fedorov
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 09/10] tcg: Clean up direct block chaining safety checks Sergey Fedorov
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 10/10] tcg: Moderate direct block chaining safety checks in user mode Sergey Fedorov
  9 siblings, 0 replies; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-10 21:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Bennée, Sergey Fedorov, Paolo Bonzini,
	Peter Crosthwaite, Richard Henderson, Sergey Fedorov

From: Sergey Fedorov <serge.fdrv@gmail.com>

Unify the code of this function with tb_remove_from_jmp_list(). Making
these functions similar improves their readability. This could also be a
step towards making this function thread-safe.
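
The unified shape of both functions is the classic pointer-to-pointer list
walk: instead of tracking the previous node, keep a pointer to the link
that points at the current node, so removal is a single store and the head
needs no special case. An illustrative sketch on a plain singly-linked
list (not QEMU code):

```c
#include <stddef.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Remove the first node whose value matches, walking a pointer to the
 * incoming link rather than the nodes themselves. */
static void list_remove(Node **head, int value)
{
    Node **pp = head;
    for (;;) {
        Node *n = *pp;
        if (!n) {
            return;        /* not found */
        }
        if (n->value == value) {
            *pp = n->next; /* splice the node out through the link */
            return;
        }
        pp = &n->next;     /* advance to the next link */
    }
}
```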

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---

Changes in v2:
 * Tweaked a comment to better describe jmp_list_{next|first} usage
 * Eliminated duplicate dereference of 'ptb' in tb_jmp_unlink()

 translate-all.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/translate-all.c b/translate-all.c
index 2d82ce5735a1..485bc9ccdfbe 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -964,25 +964,22 @@ static inline void tb_reset_jump(TranslationBlock *tb, int n)
 /* remove any jumps to the TB */
 static inline void tb_jmp_unlink(TranslationBlock *tb)
 {
-    uintptr_t tb1, tb2;
+    TranslationBlock *tb1;
+    uintptr_t *ptb, ntb;
     unsigned int n1;
 
-    tb1 = tb->jmp_list_first;
+    ptb = &tb->jmp_list_first;
     for (;;) {
-        TranslationBlock *tmp_tb;
-        n1 = tb1 & 3;
+        ntb = *ptb;
+        n1 = ntb & 3;
+        tb1 = (TranslationBlock *)(ntb & ~3);
         if (n1 == 2) {
             break;
         }
-        tmp_tb = (TranslationBlock *)(tb1 & ~3);
-        tb2 = tmp_tb->jmp_list_next[n1];
-        tb_reset_jump(tmp_tb, n1);
-        tmp_tb->jmp_list_next[n1] = (uintptr_t)NULL;
-        tb1 = tb2;
+        tb_reset_jump(tb1, n1);
+        *ptb = tb1->jmp_list_next[n1];
+        tb1->jmp_list_next[n1] = (uintptr_t)NULL;
     }
-
-    assert(((uintptr_t)tb & 3) == 0);
-    tb->jmp_list_first = (uintptr_t)tb | 2; /* fail safe */
 }
 
 /* invalidate one TB */
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [Qemu-devel] [PATCH v3 09/10] tcg: Clean up direct block chaining safety checks
  2016-04-10 21:45 [Qemu-devel] [PATCH v3 00/10] tcg: Direct block chaining clean-up Sergey Fedorov
                   ` (7 preceding siblings ...)
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 08/10] tcg: Clean up tb_jmp_unlink() Sergey Fedorov
@ 2016-04-10 21:45 ` Sergey Fedorov
  2016-04-19 11:37   ` Alex Bennée
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 10/10] tcg: Moderate direct block chaining safety checks in user mode Sergey Fedorov
  9 siblings, 1 reply; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-10 21:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Bennée, Sergey Fedorov, Paolo Bonzini,
	Peter Crosthwaite, Richard Henderson, Sergey Fedorov,
	Peter Maydell, Edgar E. Iglesias, Eduardo Habkost, Michael Walle,
	Aurelien Jarno, Leon Alrae, Anthony Green, Jia Liu,
	Alexander Graf, Blue Swirl, Mark Cave-Ayland, Bastian Koppelmann,
	Guan Xuetao, Max Filippov, qemu-arm, qemu-ppc

From: Sergey Fedorov <serge.fdrv@gmail.com>

We don't take care of direct jumps when the address mapping changes. Thus
we must make sure that generated direct jumps remain valid even if the
address mapping changes. This is safe because we only allow a TB to
execute if it was generated from pages that match the current mapping.

Document the reason for these destination-PC checks in comments in the
translation code.

Some targets with variable-length instructions allow a TB to straddle a
page boundary. However, both TB pages are checked against the current
address mapping when looking up TBs, so it is safe to do direct jumps
into either page. Correct the checks for those targets.

Given that, we can safely patch a TB which spans two pages. Remove the
unnecessary check in cpu_exec() and allow such TBs to be patched.
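
The per-target use_goto_tb/gen_goto_tb checks all reduce to comparing the
page numbers of the jump source and destination. A sketch of the
single-page form of that test, assuming 4 KiB target pages (the page size
and names below are illustrative, not taken from any particular target):

```c
#include <stdint.h>

#define FAKE_PAGE_BITS 12 /* assume 4 KiB target pages for illustration */
#define FAKE_PAGE_MASK (~(uint64_t)((1u << FAKE_PAGE_BITS) - 1))

/* A direct jump with goto_tb is safe only if the destination lies in a
 * page the TB was generated from, because nothing revalidates the jump
 * when the address mapping changes. */
static int use_goto_tb(uint64_t tb_pc, uint64_t dest)
{
    return ((tb_pc ^ dest) & FAKE_PAGE_MASK) == 0;
}
```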

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
---
 cpu-exec.c                    | 7 ++-----
 target-alpha/translate.c      | 5 ++++-
 target-arm/translate-a64.c    | 5 ++++-
 target-arm/translate.c        | 7 ++++++-
 target-cris/translate.c       | 8 +++++++-
 target-i386/translate.c       | 7 +++++--
 target-lm32/translate.c       | 4 ++++
 target-m68k/translate.c       | 6 +++++-
 target-microblaze/translate.c | 4 ++++
 target-mips/translate.c       | 4 ++++
 target-moxie/translate.c      | 4 ++++
 target-openrisc/translate.c   | 4 ++++
 target-ppc/translate.c        | 4 ++++
 target-s390x/translate.c      | 9 ++++++---
 target-sh4/translate.c        | 4 ++++
 target-sparc/translate.c      | 4 ++++
 target-tricore/translate.c    | 4 ++++
 target-unicore32/translate.c  | 4 ++++
 target-xtensa/translate.c     | 8 ++++++++
 19 files changed, 87 insertions(+), 15 deletions(-)

diff --git a/cpu-exec.c b/cpu-exec.c
index bbfcbfb54385..065cc9159477 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -508,11 +508,8 @@ int cpu_exec(CPUState *cpu)
                     next_tb = 0;
                     tcg_ctx.tb_ctx.tb_invalidated_flag = 0;
                 }
-                /* see if we can patch the calling TB. When the TB
-                   spans two pages, we cannot safely do a direct
-                   jump. */
-                if (next_tb != 0 && tb->page_addr[1] == -1
-                    && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
+                /* See if we can patch the calling TB. */
+                if (next_tb != 0 && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
                     tb_add_jump((TranslationBlock *)(next_tb & ~TB_EXIT_MASK),
                                 next_tb & TB_EXIT_MASK, tb);
                 }
diff --git a/target-alpha/translate.c b/target-alpha/translate.c
index 5b86992dd367..5fa66309ce2e 100644
--- a/target-alpha/translate.c
+++ b/target-alpha/translate.c
@@ -464,7 +464,10 @@ static bool use_goto_tb(DisasContext *ctx, uint64_t dest)
     if (in_superpage(ctx, dest)) {
         return true;
     }
-    /* Check for the dest on the same page as the start of the TB.  */
+    /* Direct jumps with goto_tb are only safe within the page this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     return ((ctx->tb->pc ^ dest) & TARGET_PAGE_MASK) == 0;
 }
 
diff --git a/target-arm/translate-a64.c b/target-arm/translate-a64.c
index b13cff756ad1..bf8471c7fa99 100644
--- a/target-arm/translate-a64.c
+++ b/target-arm/translate-a64.c
@@ -274,7 +274,10 @@ static inline bool use_goto_tb(DisasContext *s, int n, uint64_t dest)
         return false;
     }
 
-    /* Only link tbs from inside the same guest page */
+    /* Direct jumps with goto_tb are only safe within the page this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if ((s->tb->pc & TARGET_PAGE_MASK) != (dest & TARGET_PAGE_MASK)) {
         return false;
     }
diff --git a/target-arm/translate.c b/target-arm/translate.c
index 940ec8d981d1..aeb3e84e8d40 100644
--- a/target-arm/translate.c
+++ b/target-arm/translate.c
@@ -4054,7 +4054,12 @@ static inline void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
     TranslationBlock *tb;
 
     tb = s->tb;
-    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
+    /* Direct jumps with goto_tb are only safe within the pages this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
+    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
+        ((s->pc - 1) & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
         tcg_gen_goto_tb(n);
         gen_set_pc_im(s, dest);
         tcg_gen_exit_tb((uintptr_t)tb + n);
diff --git a/target-cris/translate.c b/target-cris/translate.c
index a73176c118b0..7acac076167e 100644
--- a/target-cris/translate.c
+++ b/target-cris/translate.c
@@ -524,7 +524,13 @@ static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
 {
     TranslationBlock *tb;
     tb = dc->tb;
-    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
+
+    /* Direct jumps with goto_tb are only safe within the pages this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
+    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
+        (dc->ppc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
         tcg_gen_goto_tb(n);
         tcg_gen_movi_tl(env_pc, dest);
                 tcg_gen_exit_tb((uintptr_t)tb + n);
diff --git a/target-i386/translate.c b/target-i386/translate.c
index 1a1214dcb12e..d0f440fc7697 100644
--- a/target-i386/translate.c
+++ b/target-i386/translate.c
@@ -2092,9 +2092,12 @@ static inline void gen_goto_tb(DisasContext *s, int tb_num, target_ulong eip)
 
     pc = s->cs_base + eip;
     tb = s->tb;
-    /* NOTE: we handle the case where the TB spans two pages here */
+    /* Direct jumps with goto_tb are only safe within the pages this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) ||
-        (pc & TARGET_PAGE_MASK) == ((s->pc - 1) & TARGET_PAGE_MASK))  {
+        (pc & TARGET_PAGE_MASK) == (s->pc_start & TARGET_PAGE_MASK))  {
         /* jump to same page: we can use a direct jump */
         tcg_gen_goto_tb(tb_num);
         gen_jmp_im(eip);
diff --git a/target-lm32/translate.c b/target-lm32/translate.c
index 256a51f8498f..18d648ffc729 100644
--- a/target-lm32/translate.c
+++ b/target-lm32/translate.c
@@ -138,6 +138,10 @@ static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
     TranslationBlock *tb;
 
     tb = dc->tb;
+    /* Direct jumps with goto_tb are only safe within the page this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
             likely(!dc->singlestep_enabled)) {
         tcg_gen_goto_tb(n);
diff --git a/target-m68k/translate.c b/target-m68k/translate.c
index 7560c3a80841..282da60cbcca 100644
--- a/target-m68k/translate.c
+++ b/target-m68k/translate.c
@@ -861,7 +861,11 @@ static void gen_jmp_tb(DisasContext *s, int n, uint32_t dest)
     if (unlikely(s->singlestep_enabled)) {
         gen_exception(s, dest, EXCP_DEBUG);
     } else if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
-               (s->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
+               (s->insn_pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
+        /* Direct jumps with goto_tb are only safe within the page this TB
+         * resides in because we don't take care of direct jumps when address
+         * mapping changes.
+         */
         tcg_gen_goto_tb(n);
         tcg_gen_movi_i32(QREG_PC, dest);
         tcg_gen_exit_tb((uintptr_t)tb + n);
diff --git a/target-microblaze/translate.c b/target-microblaze/translate.c
index f944965a14e1..6028750ba7de 100644
--- a/target-microblaze/translate.c
+++ b/target-microblaze/translate.c
@@ -128,6 +128,10 @@ static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
 {
     TranslationBlock *tb;
     tb = dc->tb;
+    /* Direct jumps with goto_tb are only safe within the page this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
         tcg_gen_goto_tb(n);
         tcg_gen_movi_tl(cpu_SR[SR_PC], dest);
diff --git a/target-mips/translate.c b/target-mips/translate.c
index a3a05ec66dd2..37b834d2df59 100644
--- a/target-mips/translate.c
+++ b/target-mips/translate.c
@@ -4195,6 +4195,10 @@ static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
 {
     TranslationBlock *tb;
     tb = ctx->tb;
+    /* Direct jumps with goto_tb are only safe within the page this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
         likely(!ctx->singlestep_enabled)) {
         tcg_gen_goto_tb(n);
diff --git a/target-moxie/translate.c b/target-moxie/translate.c
index a437e2ab6026..fb99038399fa 100644
--- a/target-moxie/translate.c
+++ b/target-moxie/translate.c
@@ -127,6 +127,10 @@ static inline void gen_goto_tb(CPUMoxieState *env, DisasContext *ctx,
     TranslationBlock *tb;
     tb = ctx->tb;
 
+    /* Direct jumps with goto_tb are only safe within the page this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
         !ctx->singlestep_enabled) {
         tcg_gen_goto_tb(n);
diff --git a/target-openrisc/translate.c b/target-openrisc/translate.c
index 5d0ab442a872..da60fae26a75 100644
--- a/target-openrisc/translate.c
+++ b/target-openrisc/translate.c
@@ -194,6 +194,10 @@ static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
 {
     TranslationBlock *tb;
     tb = dc->tb;
+    /* Direct jumps with goto_tb are only safe within the page this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
                                        likely(!dc->singlestep_enabled)) {
         tcg_gen_movi_tl(cpu_pc, dest);
diff --git a/target-ppc/translate.c b/target-ppc/translate.c
index 6f0e7b4face6..92ded8ec433b 100644
--- a/target-ppc/translate.c
+++ b/target-ppc/translate.c
@@ -3832,6 +3832,10 @@ static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
     if (NARROW_MODE(ctx)) {
         dest = (uint32_t) dest;
     }
+    /* Direct jumps with goto_tb are only safe within the page this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
         likely(!ctx->singlestep_enabled)) {
         tcg_gen_goto_tb(n);
diff --git a/target-s390x/translate.c b/target-s390x/translate.c
index c871ef2bb302..9f6ae60622b2 100644
--- a/target-s390x/translate.c
+++ b/target-s390x/translate.c
@@ -608,9 +608,12 @@ static void gen_op_calc_cc(DisasContext *s)
 
 static int use_goto_tb(DisasContext *s, uint64_t dest)
 {
-    /* NOTE: we handle the case where the TB spans two pages here */
-    return (((dest & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK)
-             || (dest & TARGET_PAGE_MASK) == ((s->pc - 1) & TARGET_PAGE_MASK))
+    /* Direct jumps with goto_tb are only safe within the pages this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
+    return (((dest & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK) ||
+             (dest & TARGET_PAGE_MASK) == (s->pc & TARGET_PAGE_MASK))
             && !s->singlestep_enabled
             && !(s->tb->cflags & CF_LAST_IO)
             && !(s->tb->flags & FLAG_MASK_PER));
diff --git a/target-sh4/translate.c b/target-sh4/translate.c
index 7c189680a7a4..660692d06090 100644
--- a/target-sh4/translate.c
+++ b/target-sh4/translate.c
@@ -210,6 +210,10 @@ static void gen_goto_tb(DisasContext * ctx, int n, target_ulong dest)
     TranslationBlock *tb;
     tb = ctx->tb;
 
+    /* Direct jumps with goto_tb are only safe within the pages this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
 	!ctx->singlestep_enabled) {
 	/* Use a direct jump if in same page and singlestep not enabled */
diff --git a/target-sparc/translate.c b/target-sparc/translate.c
index 58572c34cf3e..d09a500e8bd4 100644
--- a/target-sparc/translate.c
+++ b/target-sparc/translate.c
@@ -309,6 +309,10 @@ static inline void gen_goto_tb(DisasContext *s, int tb_num,
     TranslationBlock *tb;
 
     tb = s->tb;
+    /* Direct jumps with goto_tb are only safe within the pages this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) &&
         (npc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) &&
         !s->singlestep)  {
diff --git a/target-tricore/translate.c b/target-tricore/translate.c
index 912bf226bedc..b67154a3f410 100644
--- a/target-tricore/translate.c
+++ b/target-tricore/translate.c
@@ -3240,6 +3240,10 @@ static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
 {
     TranslationBlock *tb;
     tb = ctx->tb;
+    /* Direct jumps with goto_tb are only safe within the pages this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
         likely(!ctx->singlestep_enabled)) {
         tcg_gen_goto_tb(n);
diff --git a/target-unicore32/translate.c b/target-unicore32/translate.c
index 39af3af05f15..04dcbb0bb466 100644
--- a/target-unicore32/translate.c
+++ b/target-unicore32/translate.c
@@ -1094,6 +1094,10 @@ static inline void gen_goto_tb(DisasContext *s, int n, uint32_t dest)
     TranslationBlock *tb;
 
     tb = s->tb;
+    /* Direct jumps with goto_tb are only safe within the page this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
         tcg_gen_goto_tb(n);
         gen_set_pc_im(dest);
diff --git a/target-xtensa/translate.c b/target-xtensa/translate.c
index 989448846902..7eeb279e5030 100644
--- a/target-xtensa/translate.c
+++ b/target-xtensa/translate.c
@@ -418,6 +418,10 @@ static void gen_jump(DisasContext *dc, TCGv dest)
 static void gen_jumpi(DisasContext *dc, uint32_t dest, int slot)
 {
     TCGv_i32 tmp = tcg_const_i32(dest);
+    /* Direct jumps with goto_tb are only safe within the pages this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if (((dc->tb->pc ^ dest) & TARGET_PAGE_MASK) != 0) {
         slot = -1;
     }
@@ -446,6 +450,10 @@ static void gen_callw(DisasContext *dc, int callinc, TCGv_i32 dest)
 static void gen_callwi(DisasContext *dc, int callinc, uint32_t dest, int slot)
 {
     TCGv_i32 tmp = tcg_const_i32(dest);
+    /* Direct jumps with goto_tb are only safe within the pages this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes.
+     */
     if (((dc->tb->pc ^ dest) & TARGET_PAGE_MASK) != 0) {
         slot = -1;
     }
-- 
2.8.1


* [Qemu-devel] [PATCH v3 10/10] tcg: Moderate direct block chaining safety checks in user mode
  2016-04-10 21:45 [Qemu-devel] [PATCH v3 00/10] tcg: Direct block chaining clean-up Sergey Fedorov
                   ` (8 preceding siblings ...)
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 09/10] tcg: Clean up direct block chaining safety checks Sergey Fedorov
@ 2016-04-10 21:45 ` Sergey Fedorov
  2016-04-19 13:10   ` Alex Bennée
  9 siblings, 1 reply; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-10 21:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alex Bennée, Sergey Fedorov, Paolo Bonzini,
	Peter Crosthwaite, Richard Henderson, Riku Voipio, Blue Swirl,
	Sergey Fedorov, Peter Maydell, Edgar E. Iglesias,
	Eduardo Habkost, Michael Walle, Aurelien Jarno, Leon Alrae,
	Anthony Green, Jia Liu, Alexander Graf, Mark Cave-Ayland,
	Bastian Koppelmann, Guan Xuetao, Max Filippov, qemu-arm,
	qemu-ppc

From: Sergey Fedorov <serge.fdrv@gmail.com>

In user mode, address translation is static, TBs are always invalidated
properly, and direct jumps are reset when mappings change. Thus the
destination address is always valid for direct jumps, and there is no need
to restrict it to the pages the TB resides in.

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
---
 target-alpha/translate.c      | 10 +++++++++-
 target-arm/translate-a64.c    |  8 +++++++-
 target-arm/translate.c        | 26 ++++++++++++++++++--------
 target-cris/translate.c       | 26 ++++++++++++++++++--------
 target-i386/translate.c       | 30 ++++++++++++++++++++----------
 target-lm32/translate.c       | 27 ++++++++++++++++++++-------
 target-m68k/translate.c       | 30 ++++++++++++++++++++----------
 target-microblaze/translate.c | 23 +++++++++++++++++------
 target-mips/translate.c       | 28 +++++++++++++++++++++-------
 target-moxie/translate.c      | 29 +++++++++++++++++++++--------
 target-openrisc/translate.c   | 28 +++++++++++++++++++++-------
 target-ppc/translate.c        | 32 +++++++++++++++++++++++---------
 target-s390x/translate.c      | 20 +++++++++++++++-----
 target-sh4/translate.c        | 27 ++++++++++++++++++++-------
 target-sparc/translate.c      | 32 +++++++++++++++++++++++---------
 target-tricore/translate.c    | 28 +++++++++++++++++++++-------
 target-unicore32/translate.c  | 24 +++++++++++++++++-------
 target-xtensa/translate.c     | 16 ++++++++++++++--
 18 files changed, 325 insertions(+), 119 deletions(-)

diff --git a/target-alpha/translate.c b/target-alpha/translate.c
index 5fa66309ce2e..684559e694bd 100644
--- a/target-alpha/translate.c
+++ b/target-alpha/translate.c
@@ -464,11 +464,19 @@ static bool use_goto_tb(DisasContext *ctx, uint64_t dest)
     if (in_superpage(ctx, dest)) {
         return true;
     }
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the page this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
      */
     return ((ctx->tb->pc ^ dest) & TARGET_PAGE_MASK) == 0;
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
+     */
+    return true;
+#endif
 }
 
 static ExitStatus gen_bdirect(DisasContext *ctx, int ra, int32_t disp)
diff --git a/target-arm/translate-a64.c b/target-arm/translate-a64.c
index bf8471c7fa99..c2fccfe5a038 100644
--- a/target-arm/translate-a64.c
+++ b/target-arm/translate-a64.c
@@ -274,14 +274,20 @@ static inline bool use_goto_tb(DisasContext *s, int n, uint64_t dest)
         return false;
     }
 
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the page this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
      */
     if ((s->tb->pc & TARGET_PAGE_MASK) != (dest & TARGET_PAGE_MASK)) {
         return false;
     }
+#endif
 
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
+     */
     return true;
 }
 
diff --git a/target-arm/translate.c b/target-arm/translate.c
index aeb3e84e8d40..ae6fd7f1c313 100644
--- a/target-arm/translate.c
+++ b/target-arm/translate.c
@@ -4049,20 +4049,30 @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
     return 0;
 }
 
-static inline void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
+static inline bool use_goto_tb(DisasContext *s, target_ulong dest)
 {
-    TranslationBlock *tb;
-
-    tb = s->tb;
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the pages this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
      */
-    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
-        ((s->pc - 1) & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
+    return (s->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
+           ((s->pc - 1) & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
+     */
+    return true;
+#endif
+}
+
+static inline void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
+{
+    if (use_goto_tb(s, dest)) {
         tcg_gen_goto_tb(n);
         gen_set_pc_im(s, dest);
-        tcg_gen_exit_tb((uintptr_t)tb + n);
+        tcg_gen_exit_tb((uintptr_t)s->tb + n);
     } else {
         gen_set_pc_im(s, dest);
         tcg_gen_exit_tb(0);
diff --git a/target-cris/translate.c b/target-cris/translate.c
index 7acac076167e..cfd3933486d7 100644
--- a/target-cris/translate.c
+++ b/target-cris/translate.c
@@ -520,20 +520,30 @@ static void t_gen_cc_jmp(TCGv pc_true, TCGv pc_false)
     gen_set_label(l1);
 }
 
-static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
+static inline bool use_goto_tb(DisasContext *dc, target_ulong dest)
 {
-    TranslationBlock *tb;
-    tb = dc->tb;
-
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the pages this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
      */
-    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
-        (dc->ppc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
+    return (dc->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
+           (dc->ppc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
+     */
+    return true;
+#endif
+}
+
+static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
+{
+    if (use_goto_tb(dc, dest)) {
         tcg_gen_goto_tb(n);
         tcg_gen_movi_tl(env_pc, dest);
-                tcg_gen_exit_tb((uintptr_t)tb + n);
+                tcg_gen_exit_tb((uintptr_t)dc->tb + n);
     } else {
         tcg_gen_movi_tl(env_pc, dest);
         tcg_gen_exit_tb(0);
diff --git a/target-i386/translate.c b/target-i386/translate.c
index d0f440fc7697..3b7c2202a377 100644
--- a/target-i386/translate.c
+++ b/target-i386/translate.c
@@ -2085,23 +2085,33 @@ static inline int insn_const_size(TCGMemOp ot)
     }
 }
 
-static inline void gen_goto_tb(DisasContext *s, int tb_num, target_ulong eip)
+static inline bool use_goto_tb(DisasContext *s, target_ulong pc)
 {
-    TranslationBlock *tb;
-    target_ulong pc;
-
-    pc = s->cs_base + eip;
-    tb = s->tb;
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the pages this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
+     */
+    return (pc & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK) ||
+           (pc & TARGET_PAGE_MASK) == (s->pc_start & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
      */
-    if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) ||
-        (pc & TARGET_PAGE_MASK) == (s->pc_start & TARGET_PAGE_MASK))  {
+    return true;
+#endif
+}
+
+static inline void gen_goto_tb(DisasContext *s, int tb_num, target_ulong eip)
+{
+    target_ulong pc = s->cs_base + eip;
+
+    if (use_goto_tb(s, pc))  {
         /* jump to same page: we can use a direct jump */
         tcg_gen_goto_tb(tb_num);
         gen_jmp_im(eip);
-        tcg_gen_exit_tb((uintptr_t)tb + tb_num);
+        tcg_gen_exit_tb((uintptr_t)s->tb + tb_num);
     } else {
         /* jump to another page: currently not optimized */
         gen_jmp_im(eip);
diff --git a/target-lm32/translate.c b/target-lm32/translate.c
index 18d648ffc729..9b516da47faa 100644
--- a/target-lm32/translate.c
+++ b/target-lm32/translate.c
@@ -133,20 +133,33 @@ static inline void t_gen_illegal_insn(DisasContext *dc)
     gen_helper_ill(cpu_env);
 }
 
-static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
+static inline bool use_goto_tb(DisasContext *dc, target_ulong dest)
 {
-    TranslationBlock *tb;
+    if (unlikely(dc->singlestep_enabled)) {
+        return false;
+    }
 
-    tb = dc->tb;
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the page this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
      */
-    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
-            likely(!dc->singlestep_enabled)) {
+    return (dc->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
+     */
+    return true;
+#endif
+}
+
+static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
+{
+    if (use_goto_tb(dc, dest)) {
         tcg_gen_goto_tb(n);
         tcg_gen_movi_tl(cpu_pc, dest);
-        tcg_gen_exit_tb((uintptr_t)tb + n);
+        tcg_gen_exit_tb((uintptr_t)dc->tb + n);
     } else {
         tcg_gen_movi_tl(cpu_pc, dest);
         if (dc->singlestep_enabled) {
diff --git a/target-m68k/translate.c b/target-m68k/translate.c
index 282da60cbcca..454953608f93 100644
--- a/target-m68k/translate.c
+++ b/target-m68k/translate.c
@@ -852,23 +852,33 @@ static inline void gen_addr_fault(DisasContext *s)
         }                                                               \
     } while (0)
 
+static inline bool use_goto_tb(DisasContext *s, uint32_t dest)
+{
+#ifndef CONFIG_USER_ONLY
+    /* Direct jumps with goto_tb are only safe within the page this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes in system mode.
+     */
+    return (s->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
+           (s->insn_pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
+     */
+    return true;
+#endif
+}
+
 /* Generate a jump to an immediate address.  */
 static void gen_jmp_tb(DisasContext *s, int n, uint32_t dest)
 {
-    TranslationBlock *tb;
-
-    tb = s->tb;
     if (unlikely(s->singlestep_enabled)) {
         gen_exception(s, dest, EXCP_DEBUG);
-    } else if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
-               (s->insn_pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
-        /* Direct jumps with goto_tb are only safe within the page this TB
-         * resides in because we don't take care of direct jumps when address
-         * mapping changes.
-         */
+    } else if (use_goto_tb(s, dest)) {
         tcg_gen_goto_tb(n);
         tcg_gen_movi_i32(QREG_PC, dest);
-        tcg_gen_exit_tb((uintptr_t)tb + n);
+        tcg_gen_exit_tb((uintptr_t)s->tb + n);
     } else {
         gen_jmp_im(s, dest);
         tcg_gen_exit_tb(0);
diff --git a/target-microblaze/translate.c b/target-microblaze/translate.c
index 6028750ba7de..1fe0ba77ef06 100644
--- a/target-microblaze/translate.c
+++ b/target-microblaze/translate.c
@@ -124,18 +124,29 @@ static inline void t_gen_raise_exception(DisasContext *dc, uint32_t index)
     dc->is_jmp = DISAS_UPDATE;
 }
 
-static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
+static inline bool use_goto_tb(DisasContext *dc, target_ulong dest)
 {
-    TranslationBlock *tb;
-    tb = dc->tb;
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the page this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
+     */
+    return (dc->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
      */
-    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
+    return true;
+#endif
+}
+
+static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
+{
+    if (use_goto_tb(dc, dest)) {
         tcg_gen_goto_tb(n);
         tcg_gen_movi_tl(cpu_SR[SR_PC], dest);
-        tcg_gen_exit_tb((uintptr_t)tb + n);
+        tcg_gen_exit_tb((uintptr_t)dc->tb + n);
     } else {
         tcg_gen_movi_tl(cpu_SR[SR_PC], dest);
         tcg_gen_exit_tb(0);
diff --git a/target-mips/translate.c b/target-mips/translate.c
index 37b834d2df59..700496e1d265 100644
--- a/target-mips/translate.c
+++ b/target-mips/translate.c
@@ -4191,19 +4191,33 @@ static void gen_trap (DisasContext *ctx, uint32_t opc,
     tcg_temp_free(t1);
 }
 
-static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
+static inline bool use_goto_tb(DisasContext *ctx, target_ulong dest)
 {
-    TranslationBlock *tb;
-    tb = ctx->tb;
+    if (unlikely(ctx->singlestep_enabled)) {
+        return false;
+    }
+
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the page this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
+     */
+    return (ctx->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
      */
-    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
-        likely(!ctx->singlestep_enabled)) {
+    return true;
+#endif
+}
+
+static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
+{
+    if (use_goto_tb(ctx, dest)) {
         tcg_gen_goto_tb(n);
         gen_save_pc(dest);
-        tcg_gen_exit_tb((uintptr_t)tb + n);
+        tcg_gen_exit_tb((uintptr_t)ctx->tb + n);
     } else {
         gen_save_pc(dest);
         if (ctx->singlestep_enabled) {
diff --git a/target-moxie/translate.c b/target-moxie/translate.c
index fb99038399fa..d911e200d9f6 100644
--- a/target-moxie/translate.c
+++ b/target-moxie/translate.c
@@ -121,21 +121,34 @@ void moxie_translate_init(void)
     done_init = 1;
 }
 
-static inline void gen_goto_tb(CPUMoxieState *env, DisasContext *ctx,
-                               int n, target_ulong dest)
+static inline bool use_goto_tb(DisasContext *ctx, target_ulong dest)
 {
-    TranslationBlock *tb;
-    tb = ctx->tb;
+    if (unlikely(ctx->singlestep_enabled)) {
+        return false;
+    }
 
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the page this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
      */
-    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
-        !ctx->singlestep_enabled) {
+    return (ctx->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
+     */
+    return true;
+#endif
+}
+
+static inline void gen_goto_tb(CPUMoxieState *env, DisasContext *ctx,
+                               int n, target_ulong dest)
+{
+    if (use_goto_tb(ctx, dest)) {
         tcg_gen_goto_tb(n);
         tcg_gen_movi_i32(cpu_pc, dest);
-        tcg_gen_exit_tb((uintptr_t)tb + n);
+        tcg_gen_exit_tb((uintptr_t)ctx->tb + n);
     } else {
         tcg_gen_movi_i32(cpu_pc, dest);
         if (ctx->singlestep_enabled) {
diff --git a/target-openrisc/translate.c b/target-openrisc/translate.c
index da60fae26a75..be9deaa7a574 100644
--- a/target-openrisc/translate.c
+++ b/target-openrisc/translate.c
@@ -190,19 +190,33 @@ static void check_ov64s(DisasContext *dc)
 }
 #endif*/
 
-static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
+static inline bool use_goto_tb(DisasContext *dc, target_ulong dest)
 {
-    TranslationBlock *tb;
-    tb = dc->tb;
+    if (unlikely(dc->singlestep_enabled)) {
+        return false;
+    }
+
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the page this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
+     */
+    return (dc->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
      */
-    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
-                                       likely(!dc->singlestep_enabled)) {
+    return true;
+#endif
+}
+
+static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
+{
+    if (use_goto_tb(dc, dest)) {
         tcg_gen_movi_tl(cpu_pc, dest);
         tcg_gen_goto_tb(n);
-        tcg_gen_exit_tb((uintptr_t)tb + n);
+        tcg_gen_exit_tb((uintptr_t)dc->tb + n);
     } else {
         tcg_gen_movi_tl(cpu_pc, dest);
         if (dc->singlestep_enabled) {
diff --git a/target-ppc/translate.c b/target-ppc/translate.c
index 92ded8ec433b..21518decf9ed 100644
--- a/target-ppc/translate.c
+++ b/target-ppc/translate.c
@@ -3824,23 +3824,37 @@ static inline void gen_update_cfar(DisasContext *ctx, target_ulong nip)
 #endif
 }
 
+static inline bool use_goto_tb(DisasContext *ctx, target_ulong dest)
+{
+    if (unlikely(ctx->singlestep_enabled)) {
+        return false;
+    }
+
+#ifndef CONFIG_USER_ONLY
+    /* Direct jumps with goto_tb are only safe within the page this TB resides
+     * in because we don't take care of direct jumps when address mapping
+     * changes in system mode.
+     */
+    return (ctx->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
+     */
+    return true;
+#endif
+}
+
 /***                                Branch                                 ***/
 static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
 {
-    TranslationBlock *tb;
-    tb = ctx->tb;
     if (NARROW_MODE(ctx)) {
         dest = (uint32_t) dest;
     }
-    /* Direct jumps with goto_tb are only safe within the page this TB resides
-     * in because we don't take care of direct jumps when address mapping
-     * changes.
-     */
-    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
-        likely(!ctx->singlestep_enabled)) {
+    if (use_goto_tb(ctx, dest)) {
         tcg_gen_goto_tb(n);
         tcg_gen_movi_tl(cpu_nip, dest & ~3);
-        tcg_gen_exit_tb((uintptr_t)tb + n);
+        tcg_gen_exit_tb((uintptr_t)ctx->tb + n);
     } else {
         tcg_gen_movi_tl(cpu_nip, dest & ~3);
         if (unlikely(ctx->singlestep_enabled)) {
diff --git a/target-s390x/translate.c b/target-s390x/translate.c
index 9f6ae60622b2..dab1b74b2667 100644
--- a/target-s390x/translate.c
+++ b/target-s390x/translate.c
@@ -608,15 +608,25 @@ static void gen_op_calc_cc(DisasContext *s)
 
 static int use_goto_tb(DisasContext *s, uint64_t dest)
 {
+    if (unlikely(s->singlestep_enabled) ||
+        (s->tb->cflags & CF_LAST_IO) ||
+        (s->tb->flags & FLAG_MASK_PER)) {
+        return false;
+    }
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the pages this TB resides
      * in because we don't take care of direct jumps when address mapping
      * changes.
      */
-    return (((dest & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK) ||
-             (dest & TARGET_PAGE_MASK) == (s->pc & TARGET_PAGE_MASK))
-            && !s->singlestep_enabled
-            && !(s->tb->cflags & CF_LAST_IO)
-            && !(s->tb->flags & FLAG_MASK_PER));
+    return (dest & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK) ||
+           (dest & TARGET_PAGE_MASK) == (s->pc & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
+     */
+    return true;
+#endif
 }
 
 static void account_noninline_branch(DisasContext *s, int cc_op)
diff --git a/target-sh4/translate.c b/target-sh4/translate.c
index 660692d06090..355c1c943918 100644
--- a/target-sh4/translate.c
+++ b/target-sh4/translate.c
@@ -205,21 +205,34 @@ static void gen_write_sr(TCGv src)
     tcg_gen_andi_i32(cpu_sr_t, cpu_sr_t, 1);
 }
 
-static void gen_goto_tb(DisasContext * ctx, int n, target_ulong dest)
+static inline bool use_goto_tb(DisasContext *ctx, target_ulong dest)
 {
-    TranslationBlock *tb;
-    tb = ctx->tb;
+    if (unlikely(ctx->singlestep_enabled)) {
+        return false;
+    }
 
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the pages this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
+     */
+    return (ctx->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
      */
-    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
-	!ctx->singlestep_enabled) {
+    return true;
+#endif
+}
+
+static void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
+{
+    if (use_goto_tb(ctx, dest)) {
 	/* Use a direct jump if in same page and singlestep not enabled */
         tcg_gen_goto_tb(n);
         tcg_gen_movi_i32(cpu_pc, dest);
-        tcg_gen_exit_tb((uintptr_t)tb + n);
+        tcg_gen_exit_tb((uintptr_t)ctx->tb + n);
     } else {
         tcg_gen_movi_i32(cpu_pc, dest);
         if (ctx->singlestep_enabled)
diff --git a/target-sparc/translate.c b/target-sparc/translate.c
index d09a500e8bd4..8f59a101c4e5 100644
--- a/target-sparc/translate.c
+++ b/target-sparc/translate.c
@@ -303,24 +303,38 @@ static inline TCGv gen_dest_gpr(DisasContext *dc, int reg)
     }
 }
 
-static inline void gen_goto_tb(DisasContext *s, int tb_num,
-                               target_ulong pc, target_ulong npc)
+static inline bool use_goto_tb(DisasContext *s, target_ulong pc,
+                               target_ulong npc)
 {
-    TranslationBlock *tb;
+    if (unlikely(s->singlestep)) {
+        return false;
+    }
 
-    tb = s->tb;
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the pages this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
+     */
+    return (pc & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK) &&
+           (npc & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
      */
-    if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) &&
-        (npc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) &&
-        !s->singlestep)  {
+    return true;
+#endif
+}
+
+static inline void gen_goto_tb(DisasContext *s, int tb_num,
+                               target_ulong pc, target_ulong npc)
+{
+    if (use_goto_tb(s, pc, npc))  {
         /* jump to same page: we can use a direct jump */
         tcg_gen_goto_tb(tb_num);
         tcg_gen_movi_tl(cpu_pc, pc);
         tcg_gen_movi_tl(cpu_npc, npc);
-        tcg_gen_exit_tb((uintptr_t)tb + tb_num);
+        tcg_gen_exit_tb((uintptr_t)s->tb + tb_num);
     } else {
         /* jump to another page: currently not optimized */
         tcg_gen_movi_tl(cpu_pc, pc);
diff --git a/target-tricore/translate.c b/target-tricore/translate.c
index b67154a3f410..a648b5e67eae 100644
--- a/target-tricore/translate.c
+++ b/target-tricore/translate.c
@@ -3236,19 +3236,33 @@ static inline void gen_save_pc(target_ulong pc)
     tcg_gen_movi_tl(cpu_PC, pc);
 }
 
-static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
+static inline bool use_goto_tb(DisasContext *ctx, target_ulong dest)
 {
-    TranslationBlock *tb;
-    tb = ctx->tb;
+    if (unlikely(ctx->singlestep_enabled)) {
+        return false;
+    }
+
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the pages this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
      */
-    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
-        likely(!ctx->singlestep_enabled)) {
+    return (ctx->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
+     */
+    return true;
+#endif
+}
+
+static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
+{
+    if (use_goto_tb(ctx, dest)) {
         tcg_gen_goto_tb(n);
         gen_save_pc(dest);
-        tcg_gen_exit_tb((uintptr_t)tb + n);
+        tcg_gen_exit_tb((uintptr_t)ctx->tb + n);
     } else {
         gen_save_pc(dest);
         if (ctx->singlestep_enabled) {
diff --git a/target-unicore32/translate.c b/target-unicore32/translate.c
index 04dcbb0bb466..af30f2563490 100644
--- a/target-unicore32/translate.c
+++ b/target-unicore32/translate.c
@@ -1089,19 +1089,29 @@ static void disas_ucf64_insn(CPUUniCore32State *env, DisasContext *s, uint32_t i
     }
 }
 
-static inline void gen_goto_tb(DisasContext *s, int n, uint32_t dest)
+static inline bool use_goto_tb(DisasContext *s, uint32_t dest)
 {
-    TranslationBlock *tb;
-
-    tb = s->tb;
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the page this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
+     */
+    return (s->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
+#else
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
      */
-    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
+    return true;
+#endif
+}
+
+static inline void gen_goto_tb(DisasContext *s, int n, uint32_t dest)
+{
+    if (use_goto_tb(s, dest)) {
         tcg_gen_goto_tb(n);
         gen_set_pc_im(dest);
-        tcg_gen_exit_tb((uintptr_t)tb + n);
+        tcg_gen_exit_tb((uintptr_t)s->tb + n);
     } else {
         gen_set_pc_im(dest);
         tcg_gen_exit_tb(0);
diff --git a/target-xtensa/translate.c b/target-xtensa/translate.c
index 7eeb279e5030..6886c389c919 100644
--- a/target-xtensa/translate.c
+++ b/target-xtensa/translate.c
@@ -418,13 +418,19 @@ static void gen_jump(DisasContext *dc, TCGv dest)
 static void gen_jumpi(DisasContext *dc, uint32_t dest, int slot)
 {
     TCGv_i32 tmp = tcg_const_i32(dest);
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the pages this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
      */
     if (((dc->tb->pc ^ dest) & TARGET_PAGE_MASK) != 0) {
         slot = -1;
     }
+#endif
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
+     */
     gen_jump_slot(dc, tmp, slot);
     tcg_temp_free(tmp);
 }
@@ -450,13 +456,19 @@ static void gen_callw(DisasContext *dc, int callinc, TCGv_i32 dest)
 static void gen_callwi(DisasContext *dc, int callinc, uint32_t dest, int slot)
 {
     TCGv_i32 tmp = tcg_const_i32(dest);
+#ifndef CONFIG_USER_ONLY
     /* Direct jumps with goto_tb are only safe within the pages this TB resides
      * in because we don't take care of direct jumps when address mapping
-     * changes.
+     * changes in system mode.
      */
     if (((dc->tb->pc ^ dest) & TARGET_PAGE_MASK) != 0) {
         slot = -1;
     }
+#endif
+    /* In user mode, there's only a static address translation, so the
+     * destination address is always valid. TBs are always invalidated properly
+     * and direct jumps are reset when mapping attributes change.
+     */
     gen_callw_slot(dc, callinc, tmp, slot);
     tcg_temp_free(tmp);
 }
-- 
2.8.1
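The per-target rewrites in this patch all reduce to the same page-locality test: a direct `goto_tb` jump is only allowed when the destination lies in the same guest page as the TB's own pc. A minimal standalone sketch of that check (hypothetical `TARGET_PAGE_BITS` value and plain integers instead of QEMU's `target_ulong`; not QEMU's actual code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for QEMU's TARGET_PAGE_MASK, assuming 4 KiB pages. */
#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK (~(uint64_t)((1u << TARGET_PAGE_BITS) - 1))

/* Core of the system-mode use_goto_tb() condition: the TB's pc and the
 * jump destination must share a page, because address-mapping changes
 * are not tracked across direct jumps. */
static bool same_page(uint64_t tb_pc, uint64_t dest)
{
    return (tb_pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
}
```

In user mode the refactored helpers skip this test entirely and return true, since the guest-to-host translation is static and TBs are invalidated explicitly when mappings change.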

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH v3 03/10] tcg: Rearrange tb_link_page() to avoid forward declaration
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 03/10] tcg: Rearrange tb_link_page() to avoid forward declaration Sergey Fedorov
@ 2016-04-18 17:20   ` Alex Bennée
  2016-04-18 17:59     ` Sergey Fedorov
  0 siblings, 1 reply; 26+ messages in thread
From: Alex Bennée @ 2016-04-18 17:20 UTC (permalink / raw)
  To: Sergey Fedorov
  Cc: qemu-devel, Sergey Fedorov, Paolo Bonzini, Peter Crosthwaite,
	Richard Henderson


Sergey Fedorov <sergey.fedorov@linaro.org> writes:

> From: Sergey Fedorov <serge.fdrv@gmail.com>
>
> Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

This clashes with the tcg clean-up patches. Should this series always be
applied first?

> ---
>  translate-all.c | 204 ++++++++++++++++++++++++++++----------------------------
>  1 file changed, 101 insertions(+), 103 deletions(-)
>
> diff --git a/translate-all.c b/translate-all.c
> index ba71ff73f55f..7ac7916f2792 100644
> --- a/translate-all.c
> +++ b/translate-all.c
> @@ -153,8 +153,6 @@ void tb_lock_reset(void)
>  #endif
>  }
>
> -static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
> -                         tb_page_addr_t phys_page2);
>  static TranslationBlock *tb_find_pc(uintptr_t tc_ptr);
>
>  void cpu_gen_init(void)
> @@ -1052,6 +1050,107 @@ static void build_page_bitmap(PageDesc *p)
>      }
>  }
>
> +/* add the tb in the target page and protect it if necessary
> + *
> + * Called with mmap_lock held for user-mode emulation.
> + */
> +static inline void tb_alloc_page(TranslationBlock *tb,
> +                                 unsigned int n, tb_page_addr_t page_addr)
> +{
> +    PageDesc *p;
> +#ifndef CONFIG_USER_ONLY
> +    bool page_already_protected;
> +#endif
> +
> +    tb->page_addr[n] = page_addr;
> +    p = page_find_alloc(page_addr >> TARGET_PAGE_BITS, 1);
> +    tb->page_next[n] = p->first_tb;
> +#ifndef CONFIG_USER_ONLY
> +    page_already_protected = p->first_tb != NULL;
> +#endif
> +    p->first_tb = (TranslationBlock *)((uintptr_t)tb | n);
> +    invalidate_page_bitmap(p);
> +
> +#if defined(CONFIG_USER_ONLY)
> +    if (p->flags & PAGE_WRITE) {
> +        target_ulong addr;
> +        PageDesc *p2;
> +        int prot;
> +
> +        /* force the host page as non writable (writes will have a
> +           page fault + mprotect overhead) */
> +        page_addr &= qemu_host_page_mask;
> +        prot = 0;
> +        for (addr = page_addr; addr < page_addr + qemu_host_page_size;
> +            addr += TARGET_PAGE_SIZE) {
> +
> +            p2 = page_find(addr >> TARGET_PAGE_BITS);
> +            if (!p2) {
> +                continue;
> +            }
> +            prot |= p2->flags;
> +            p2->flags &= ~PAGE_WRITE;
> +          }
> +        mprotect(g2h(page_addr), qemu_host_page_size,
> +                 (prot & PAGE_BITS) & ~PAGE_WRITE);
> +#ifdef DEBUG_TB_INVALIDATE
> +        printf("protecting code page: 0x" TARGET_FMT_lx "\n",
> +               page_addr);
> +#endif
> +    }
> +#else
> +    /* if some code is already present, then the pages are already
> +       protected. So we handle the case where only the first TB is
> +       allocated in a physical page */
> +    if (!page_already_protected) {
> +        tlb_protect_code(page_addr);
> +    }
> +#endif
> +}
> +
> +/* add a new TB and link it to the physical page tables. phys_page2 is
> + * (-1) to indicate that only one page contains the TB.
> + *
> + * Called with mmap_lock held for user-mode emulation.
> + */
> +static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
> +                         tb_page_addr_t phys_page2)
> +{
> +    unsigned int h;
> +    TranslationBlock **ptb;
> +
> +    /* add in the physical hash table */
> +    h = tb_phys_hash_func(phys_pc);
> +    ptb = &tcg_ctx.tb_ctx.tb_phys_hash[h];
> +    tb->phys_hash_next = *ptb;
> +    *ptb = tb;
> +
> +    /* add in the page list */
> +    tb_alloc_page(tb, 0, phys_pc & TARGET_PAGE_MASK);
> +    if (phys_page2 != -1) {
> +        tb_alloc_page(tb, 1, phys_page2);
> +    } else {
> +        tb->page_addr[1] = -1;
> +    }
> +
> +    assert(((uintptr_t)tb & 3) == 0);
> +    tb->jmp_list_first = (uintptr_t)tb | 2;
> +    tb->jmp_list_next[0] = (uintptr_t)NULL;
> +    tb->jmp_list_next[1] = (uintptr_t)NULL;
> +
> +    /* init original jump addresses */
> +    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
> +        tb_reset_jump(tb, 0);
> +    }
> +    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
> +        tb_reset_jump(tb, 1);
> +    }
> +
> +#ifdef DEBUG_TB_CHECK
> +    tb_page_check();
> +#endif
> +}
> +
>  /* Called with mmap_lock held for user mode emulation.  */
>  TranslationBlock *tb_gen_code(CPUState *cpu,
>                                target_ulong pc, target_ulong cs_base,
> @@ -1409,107 +1508,6 @@ static void tb_invalidate_phys_page(tb_page_addr_t addr,
>  }
>  #endif
>
> -/* add the tb in the target page and protect it if necessary
> - *
> - * Called with mmap_lock held for user-mode emulation.
> - */
> -static inline void tb_alloc_page(TranslationBlock *tb,
> -                                 unsigned int n, tb_page_addr_t page_addr)
> -{
> -    PageDesc *p;
> -#ifndef CONFIG_USER_ONLY
> -    bool page_already_protected;
> -#endif
> -
> -    tb->page_addr[n] = page_addr;
> -    p = page_find_alloc(page_addr >> TARGET_PAGE_BITS, 1);
> -    tb->page_next[n] = p->first_tb;
> -#ifndef CONFIG_USER_ONLY
> -    page_already_protected = p->first_tb != NULL;
> -#endif
> -    p->first_tb = (TranslationBlock *)((uintptr_t)tb | n);
> -    invalidate_page_bitmap(p);
> -
> -#if defined(CONFIG_USER_ONLY)
> -    if (p->flags & PAGE_WRITE) {
> -        target_ulong addr;
> -        PageDesc *p2;
> -        int prot;
> -
> -        /* force the host page as non writable (writes will have a
> -           page fault + mprotect overhead) */
> -        page_addr &= qemu_host_page_mask;
> -        prot = 0;
> -        for (addr = page_addr; addr < page_addr + qemu_host_page_size;
> -            addr += TARGET_PAGE_SIZE) {
> -
> -            p2 = page_find(addr >> TARGET_PAGE_BITS);
> -            if (!p2) {
> -                continue;
> -            }
> -            prot |= p2->flags;
> -            p2->flags &= ~PAGE_WRITE;
> -          }
> -        mprotect(g2h(page_addr), qemu_host_page_size,
> -                 (prot & PAGE_BITS) & ~PAGE_WRITE);
> -#ifdef DEBUG_TB_INVALIDATE
> -        printf("protecting code page: 0x" TARGET_FMT_lx "\n",
> -               page_addr);
> -#endif
> -    }
> -#else
> -    /* if some code is already present, then the pages are already
> -       protected. So we handle the case where only the first TB is
> -       allocated in a physical page */
> -    if (!page_already_protected) {
> -        tlb_protect_code(page_addr);
> -    }
> -#endif
> -}
> -
> -/* add a new TB and link it to the physical page tables. phys_page2 is
> - * (-1) to indicate that only one page contains the TB.
> - *
> - * Called with mmap_lock held for user-mode emulation.
> - */
> -static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
> -                         tb_page_addr_t phys_page2)
> -{
> -    unsigned int h;
> -    TranslationBlock **ptb;
> -
> -    /* add in the physical hash table */
> -    h = tb_phys_hash_func(phys_pc);
> -    ptb = &tcg_ctx.tb_ctx.tb_phys_hash[h];
> -    tb->phys_hash_next = *ptb;
> -    *ptb = tb;
> -
> -    /* add in the page list */
> -    tb_alloc_page(tb, 0, phys_pc & TARGET_PAGE_MASK);
> -    if (phys_page2 != -1) {
> -        tb_alloc_page(tb, 1, phys_page2);
> -    } else {
> -        tb->page_addr[1] = -1;
> -    }
> -
> -    assert(((uintptr_t)tb & 3) == 0);
> -    tb->jmp_list_first = (uintptr_t)tb | 2;
> -    tb->jmp_list_next[0] = (uintptr_t)NULL;
> -    tb->jmp_list_next[1] = (uintptr_t)NULL;
> -
> -    /* init original jump addresses */
> -    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
> -        tb_reset_jump(tb, 0);
> -    }
> -    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
> -        tb_reset_jump(tb, 1);
> -    }
> -
> -#ifdef DEBUG_TB_CHECK
> -    tb_page_check();
> -#endif
> -}
> -
>  /* find the TB 'tb' such that tb[0].tc_ptr <= tc_ptr <
>     tb[1].tc_ptr. Return NULL if not found */
>  static TranslationBlock *tb_find_pc(uintptr_t tc_ptr)


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v3 03/10] tcg: Rearrange tb_link_page() to avoid forward declaration
  2016-04-18 17:20   ` Alex Bennée
@ 2016-04-18 17:59     ` Sergey Fedorov
  0 siblings, 0 replies; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-18 17:59 UTC (permalink / raw)
  To: Alex Bennée, Sergey Fedorov
  Cc: qemu-devel, Paolo Bonzini, Peter Crosthwaite, Richard Henderson

On 18/04/16 20:20, Alex Bennée wrote:
> Sergey Fedorov <sergey.fedorov@linaro.org> writes:
>
>> From: Sergey Fedorov <serge.fdrv@gmail.com>
>>
>> Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
>> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
>> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
> This clashes with the tcg clean-up patches. Should this series always be
> applied first?
>

I didn't try to combine those series. I thought it wasn't important which
series comes first, since that could be determined by the actual order in
which they are merged into the mainline. If the series are combined, I would
like the direct block chaining clean-ups to come first. I can base the next
respin of the TCG clean-ups on top of this series, if desirable.

Kind regards,
Sergey
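
The comment added by patch 01 below describes a circular list of incoming jumps threaded through tagged pointers: the two least significant bits of each stored pointer select the next field (0/1 = `jmp_list_next[0/1]` of the pointed TB, 2 = back at the list head). The scheme can be sketched standalone (hypothetical mini struct, not QEMU's `TranslationBlock`):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mini-TB: two outgoing jumps, so membership in up to two
 * per-target lists.  The low two bits of each stored pointer encode which
 * field of the pointed TB holds the next element. */
struct tb {
    uintptr_t jmp_list_next[2];
    uintptr_t jmp_list_first;
};

static void tb_init(struct tb *tb)
{
    tb->jmp_list_first = (uintptr_t)tb | 2;  /* tag 2: empty circular list */
    tb->jmp_list_next[0] = 0;
    tb->jmp_list_next[1] = 0;
}

/* Link tb's jump n onto tb_next's incoming-jump list (list insertion
 * only; patching the native jump is omitted here). */
static void tb_add_jump_list(struct tb *tb, int n, struct tb *tb_next)
{
    tb->jmp_list_next[n] = tb_next->jmp_list_first;
    tb_next->jmp_list_first = (uintptr_t)tb | n;
}

/* Count the TBs on tb's incoming-jump list by walking until the tag-2
 * back-pointer to the head is reached. */
static int tb_count_jumps(struct tb *tb)
{
    int count = 0;
    uintptr_t ptr = tb->jmp_list_first;
    while ((ptr & 3) != 2) {
        struct tb *src = (struct tb *)(ptr & ~(uintptr_t)3);
        count++;
        ptr = src->jmp_list_next[ptr & 3];
    }
    assert((ptr & ~(uintptr_t)3) == (uintptr_t)tb); /* list closes at head */
    return count;
}
```

Tag value 2 doubles as the list terminator, which is why the quoted `tb_link_page()` hunk initializes `jmp_list_first` to `(uintptr_t)tb | 2`.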


* Re: [Qemu-devel] [PATCH v3 01/10] tcg: Clean up direct block chaining data fields
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 01/10] tcg: Clean up direct block chaining data fields Sergey Fedorov
@ 2016-04-19 10:02   ` Alex Bennée
  0 siblings, 0 replies; 26+ messages in thread
From: Alex Bennée @ 2016-04-19 10:02 UTC (permalink / raw)
  To: Sergey Fedorov
  Cc: qemu-devel, Sergey Fedorov, Paolo Bonzini, Peter Crosthwaite,
	Richard Henderson, Claudio Fontana, Andrzej Zaborowski,
	Aurelien Jarno, Vassili Karpov (malc),
	Alexander Graf, Blue Swirl, Stefan Weil, qemu-arm


Sergey Fedorov <sergey.fedorov@linaro.org> writes:

> From: Sergey Fedorov <serge.fdrv@gmail.com>
>
> Briefly describe in a comment how direct block chaining is done. It
> should help in understanding of the following data fields.
>
> Rename some fields in TranslationBlock and TCGContext structures to
> better reflect their purpose (dropping excessive 'tb_' prefix in
> TranslationBlock but keeping it in TCGContext):
>    tb_next_offset  =>  jmp_reset_offset
>    tb_jmp_offset   =>  jmp_insn_offset
>    tb_next         =>  jmp_target_addr
>    jmp_next        =>  jmp_list_next
>    jmp_first       =>  jmp_list_first
>
> Avoid using a magic constant as an invalid offset which is used to
> indicate that there's no n-th jump generated.
>
> Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  include/exec/exec-all.h      | 44 ++++++++++++++++++++++++--------------
>  tcg/aarch64/tcg-target.inc.c |  7 +++---
>  tcg/arm/tcg-target.inc.c     |  8 +++----
>  tcg/i386/tcg-target.inc.c    |  8 +++----
>  tcg/ia64/tcg-target.inc.c    |  6 +++---
>  tcg/mips/tcg-target.inc.c    |  8 +++----
>  tcg/ppc/tcg-target.inc.c     |  6 +++---
>  tcg/s390/tcg-target.inc.c    | 11 +++++-----
>  tcg/sparc/tcg-target.inc.c   |  9 ++++----
>  tcg/tcg.h                    |  6 +++---
>  tcg/tci/tcg-target.inc.c     | 10 ++++-----
>  translate-all.c              | 51 +++++++++++++++++++++++---------------------
>  12 files changed, 96 insertions(+), 78 deletions(-)
>
> diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
> index 736209505a68..38a149cc5e0c 100644
> --- a/include/exec/exec-all.h
> +++ b/include/exec/exec-all.h
> @@ -257,20 +257,32 @@ struct TranslationBlock {
>      struct TranslationBlock *page_next[2];
>      tb_page_addr_t page_addr[2];
>
> -    /* the following data are used to directly call another TB from
> -       the code of this one. */
> -    uint16_t tb_next_offset[2]; /* offset of original jump target */
> +    /* The following data are used to directly call another TB from
> +     * the code of this one. This can be done either by emitting direct or
> +     * indirect native jump instructions. These jumps are reset so that the TB
> +     * just continues its execution. The TB can be linked to another one by
> +     * setting one of the jump targets (or patching the jump instruction). Only
> +     * two such jumps are supported.
> +     */
> +    uint16_t jmp_reset_offset[2]; /* offset of original jump target */
> +#define TB_JMP_RESET_OFFSET_INVALID 0xffff /* indicates no jump generated */
>  #ifdef USE_DIRECT_JUMP
> -    uint16_t tb_jmp_offset[2]; /* offset of jump instruction */
> +    uint16_t jmp_insn_offset[2]; /* offset of native jump instruction */
>  #else
> -    uintptr_t tb_next[2]; /* address of jump generated code */
> +    uintptr_t jmp_target_addr[2]; /* target address for indirect jump */
>  #endif
> -    /* list of TBs jumping to this one. This is a circular list using
> -       the two least significant bits of the pointers to tell what is
> -       the next pointer: 0 = jmp_next[0], 1 = jmp_next[1], 2 =
> -       jmp_first */
> -    struct TranslationBlock *jmp_next[2];
> -    struct TranslationBlock *jmp_first;
> +    /* Each TB has an associated circular list of TBs jumping to this one.
> +     * jmp_list_first points to the first TB jumping to this one.
> +     * jmp_list_next is used to point to the next TB in a list.
> +     * Since each TB can have two jumps, it can participate in two lists.
> +     * The two least significant bits of a pointer are used to choose which
> +     * data field holds a pointer to the next TB:
> +     * 0 => jmp_list_next[0], 1 => jmp_list_next[1], 2 => jmp_list_first.
> +     * In other words, 0/1 tells which jump is used in the pointed TB,
> +     * and 2 means that this is a pointer back to the target TB of this list.
> +     */
> +    struct TranslationBlock *jmp_list_next[2];
> +    struct TranslationBlock *jmp_list_first;
>  };
>
>  #include "qemu/thread.h"
> @@ -359,7 +371,7 @@ void tb_set_jmp_target1(uintptr_t jmp_addr, uintptr_t addr);
>  static inline void tb_set_jmp_target(TranslationBlock *tb,
>                                       int n, uintptr_t addr)
>  {
> -    uint16_t offset = tb->tb_jmp_offset[n];
> +    uint16_t offset = tb->jmp_insn_offset[n];
>      tb_set_jmp_target1((uintptr_t)(tb->tc_ptr + offset), addr);
>  }
>
> @@ -369,7 +381,7 @@ static inline void tb_set_jmp_target(TranslationBlock *tb,
>  static inline void tb_set_jmp_target(TranslationBlock *tb,
>                                       int n, uintptr_t addr)
>  {
> -    tb->tb_next[n] = addr;
> +    tb->jmp_target_addr[n] = addr;
>  }
>
>  #endif
> @@ -378,7 +390,7 @@ static inline void tb_add_jump(TranslationBlock *tb, int n,
>                                 TranslationBlock *tb_next)
>  {
>      /* NOTE: this test is only needed for thread safety */
> -    if (!tb->jmp_next[n]) {
> +    if (!tb->jmp_list_next[n]) {
>          qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
>                                 "Linking TBs %p [" TARGET_FMT_lx
>                                 "] index %d -> %p [" TARGET_FMT_lx "]\n",
> @@ -388,8 +400,8 @@ static inline void tb_add_jump(TranslationBlock *tb, int n,
>          tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc_ptr);
>
>          /* add in TB jmp circular list */
> -        tb->jmp_next[n] = tb_next->jmp_first;
> -        tb_next->jmp_first = (TranslationBlock *)((uintptr_t)(tb) | (n));
> +        tb->jmp_list_next[n] = tb_next->jmp_list_first;
> +        tb_next->jmp_list_first = (TranslationBlock *)((uintptr_t)tb | n);
>      }
>  }
>
> diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
> index 0ed10a974121..08efdf41da48 100644
> --- a/tcg/aarch64/tcg-target.inc.c
> +++ b/tcg/aarch64/tcg-target.inc.c
> @@ -1294,12 +1294,13 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
>  #ifndef USE_DIRECT_JUMP
>  #error "USE_DIRECT_JUMP required for aarch64"
>  #endif
> -        assert(s->tb_jmp_offset != NULL); /* consistency for USE_DIRECT_JUMP */
> -        s->tb_jmp_offset[a0] = tcg_current_code_size(s);
> +        /* consistency for USE_DIRECT_JUMP */
> +        assert(s->tb_jmp_insn_offset != NULL);
> +        s->tb_jmp_insn_offset[a0] = tcg_current_code_size(s);
>          /* actual branch destination will be patched by
>             aarch64_tb_set_jmp_target later, beware retranslation. */
>          tcg_out_goto_noaddr(s);
> -        s->tb_next_offset[a0] = tcg_current_code_size(s);
> +        s->tb_jmp_reset_offset[a0] = tcg_current_code_size(s);
>          break;
>
>      case INDEX_op_br:
> diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
> index 3edf6a6f971c..a9147620b073 100644
> --- a/tcg/arm/tcg-target.inc.c
> +++ b/tcg/arm/tcg-target.inc.c
> @@ -1647,17 +1647,17 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
>          tcg_out_goto(s, COND_AL, tb_ret_addr);
>          break;
>      case INDEX_op_goto_tb:
> -        if (s->tb_jmp_offset) {
> +        if (s->tb_jmp_insn_offset) {
>              /* Direct jump method */
> -            s->tb_jmp_offset[args[0]] = tcg_current_code_size(s);
> +            s->tb_jmp_insn_offset[args[0]] = tcg_current_code_size(s);
>              tcg_out_b_noaddr(s, COND_AL);
>          } else {
>              /* Indirect jump method */
> -            intptr_t ptr = (intptr_t)(s->tb_next + args[0]);
> +            intptr_t ptr = (intptr_t)(s->tb_jmp_target_addr + args[0]);
>              tcg_out_movi32(s, COND_AL, TCG_REG_R0, ptr & ~0xfff);
>              tcg_out_ld32_12(s, COND_AL, TCG_REG_PC, TCG_REG_R0, ptr & 0xfff);
>          }
> -        s->tb_next_offset[args[0]] = tcg_current_code_size(s);
> +        s->tb_jmp_reset_offset[args[0]] = tcg_current_code_size(s);
>          break;
>      case INDEX_op_br:
>          tcg_out_goto_label(s, COND_AL, arg_label(args[0]));
> diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
> index 9187d34caf6d..2f98cae97f3b 100644
> --- a/tcg/i386/tcg-target.inc.c
> +++ b/tcg/i386/tcg-target.inc.c
> @@ -1775,17 +1775,17 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
>          tcg_out_jmp(s, tb_ret_addr);
>          break;
>      case INDEX_op_goto_tb:
> -        if (s->tb_jmp_offset) {
> +        if (s->tb_jmp_insn_offset) {
>              /* direct jump method */
>              tcg_out8(s, OPC_JMP_long); /* jmp im */
> -            s->tb_jmp_offset[args[0]] = tcg_current_code_size(s);
> +            s->tb_jmp_insn_offset[args[0]] = tcg_current_code_size(s);
>              tcg_out32(s, 0);
>          } else {
>              /* indirect jump method */
>              tcg_out_modrm_offset(s, OPC_GRP5, EXT5_JMPN_Ev, -1,
> -                                 (intptr_t)(s->tb_next + args[0]));
> +                                 (intptr_t)(s->tb_jmp_target_addr + args[0]));
>          }
> -        s->tb_next_offset[args[0]] = tcg_current_code_size(s);
> +        s->tb_jmp_reset_offset[args[0]] = tcg_current_code_size(s);
>          break;
>      case INDEX_op_br:
>          tcg_out_jxx(s, JCC_JMP, arg_label(args[0]), 0);
> diff --git a/tcg/ia64/tcg-target.inc.c b/tcg/ia64/tcg-target.inc.c
> index 62d654943c20..261861f90c3a 100644
> --- a/tcg/ia64/tcg-target.inc.c
> +++ b/tcg/ia64/tcg-target.inc.c
> @@ -881,13 +881,13 @@ static void tcg_out_exit_tb(TCGContext *s, tcg_target_long arg)
>
>  static inline void tcg_out_goto_tb(TCGContext *s, TCGArg arg)
>  {
> -    if (s->tb_jmp_offset) {
> +    if (s->tb_jmp_insn_offset) {
>          /* direct jump method */
>          tcg_abort();
>      } else {
>          /* indirect jump method */
>          tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_R2,
> -                     (tcg_target_long)(s->tb_next + arg));
> +                     (tcg_target_long)(s->tb_jmp_target_addr + arg));
>          tcg_out_bundle(s, MmI,
>                         tcg_opc_m1 (TCG_REG_P0, OPC_LD8_M1,
>                                     TCG_REG_R2, TCG_REG_R2),
> @@ -900,7 +900,7 @@ static inline void tcg_out_goto_tb(TCGContext *s, TCGArg arg)
>                         tcg_opc_b4 (TCG_REG_P0, OPC_BR_SPTK_MANY_B4,
>                                     TCG_REG_B6));
>      }
> -    s->tb_next_offset[arg] = tcg_current_code_size(s);
> +    s->tb_jmp_reset_offset[arg] = tcg_current_code_size(s);
>  }
>
>  static inline void tcg_out_jmp(TCGContext *s, TCGArg addr)
> diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
> index 682e19897db0..b8c240b32196 100644
> --- a/tcg/mips/tcg-target.inc.c
> +++ b/tcg/mips/tcg-target.inc.c
> @@ -1397,19 +1397,19 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
>          }
>          break;
>      case INDEX_op_goto_tb:
> -        if (s->tb_jmp_offset) {
> +        if (s->tb_jmp_insn_offset) {
>              /* direct jump method */
> -            s->tb_jmp_offset[a0] = tcg_current_code_size(s);
> +            s->tb_jmp_insn_offset[a0] = tcg_current_code_size(s);
>              /* Avoid clobbering the address during retranslation.  */
>              tcg_out32(s, OPC_J | (*(uint32_t *)s->code_ptr & 0x3ffffff));
>          } else {
>              /* indirect jump method */
>              tcg_out_ld(s, TCG_TYPE_PTR, TCG_TMP0, TCG_REG_ZERO,
> -                       (uintptr_t)(s->tb_next + a0));
> +                       (uintptr_t)(s->tb_jmp_target_addr + a0));
>              tcg_out_opc_reg(s, OPC_JR, 0, TCG_TMP0, 0);
>          }
>          tcg_out_nop(s);
> -        s->tb_next_offset[a0] = tcg_current_code_size(s);
> +        s->tb_jmp_reset_offset[a0] = tcg_current_code_size(s);
>          break;
>      case INDEX_op_br:
>          tcg_out_brcond(s, TCG_COND_EQ, TCG_REG_ZERO, TCG_REG_ZERO,
> diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
> index 8c1c2dfa9b22..10394079ad9d 100644
> --- a/tcg/ppc/tcg-target.inc.c
> +++ b/tcg/ppc/tcg-target.inc.c
> @@ -1894,17 +1894,17 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
>          tcg_out_b(s, 0, tb_ret_addr);
>          break;
>      case INDEX_op_goto_tb:
> -        tcg_debug_assert(s->tb_jmp_offset);
> +        tcg_debug_assert(s->tb_jmp_insn_offset);
>          /* Direct jump.  Ensure the next insns are 8-byte aligned. */
>          if ((uintptr_t)s->code_ptr & 7) {
>              tcg_out32(s, NOP);
>          }
> -        s->tb_jmp_offset[args[0]] = tcg_current_code_size(s);
> +        s->tb_jmp_insn_offset[args[0]] = tcg_current_code_size(s);
>          /* To be replaced by either a branch+nop or a load into TMP1.  */
>          s->code_ptr += 2;
>          tcg_out32(s, MTSPR | RS(TCG_REG_TMP1) | CTR);
>          tcg_out32(s, BCCTR | BO_ALWAYS);
> -        s->tb_next_offset[args[0]] = tcg_current_code_size(s);
> +        s->tb_jmp_reset_offset[args[0]] = tcg_current_code_size(s);
>          break;
>      case INDEX_op_br:
>          {
> diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
> index fbf97bb2e15d..e95b04b0e278 100644
> --- a/tcg/s390/tcg-target.inc.c
> +++ b/tcg/s390/tcg-target.inc.c
> @@ -1715,17 +1715,18 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
>          break;
>
>      case INDEX_op_goto_tb:
> -        if (s->tb_jmp_offset) {
> +        if (s->tb_jmp_insn_offset) {
>              tcg_out16(s, RIL_BRCL | (S390_CC_ALWAYS << 4));
> -            s->tb_jmp_offset[args[0]] = tcg_current_code_size(s);
> +            s->tb_jmp_insn_offset[args[0]] = tcg_current_code_size(s);
>              s->code_ptr += 2;
>          } else {
> -            /* load address stored at s->tb_next + args[0] */
> -            tcg_out_ld_abs(s, TCG_TYPE_PTR, TCG_TMP0, s->tb_next + args[0]);
> +            /* load address stored at s->tb_jmp_target_addr + args[0] */
> +            tcg_out_ld_abs(s, TCG_TYPE_PTR, TCG_TMP0,
> +                           s->tb_jmp_target_addr + args[0]);
>              /* and go there */
>              tcg_out_insn(s, RR, BCR, S390_CC_ALWAYS, TCG_TMP0);
>          }
> -        s->tb_next_offset[args[0]] = tcg_current_code_size(s);
> +        s->tb_jmp_reset_offset[args[0]] = tcg_current_code_size(s);
>          break;
>
>      OP_32_64(ld8u):
> diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
> index 54df1bc42432..a611885a2aaf 100644
> --- a/tcg/sparc/tcg-target.inc.c
> +++ b/tcg/sparc/tcg-target.inc.c
> @@ -1229,18 +1229,19 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
>          }
>          break;
>      case INDEX_op_goto_tb:
> -        if (s->tb_jmp_offset) {
> +        if (s->tb_jmp_insn_offset) {
>              /* direct jump method */
> -            s->tb_jmp_offset[a0] = tcg_current_code_size(s);
> +            s->tb_jmp_insn_offset[a0] = tcg_current_code_size(s);
>              /* Make sure to preserve links during retranslation.  */
>              tcg_out32(s, CALL | (*s->code_ptr & ~INSN_OP(-1)));
>          } else {
>              /* indirect jump method */
> -            tcg_out_ld_ptr(s, TCG_REG_T1, (uintptr_t)(s->tb_next + a0));
> +            tcg_out_ld_ptr(s, TCG_REG_T1,
> +                           (uintptr_t)(s->tb_jmp_target_addr + a0));
>              tcg_out_arithi(s, TCG_REG_G0, TCG_REG_T1, 0, JMPL);
>          }
>          tcg_out_nop(s);
> -        s->tb_next_offset[a0] = tcg_current_code_size(s);
> +        s->tb_jmp_reset_offset[a0] = tcg_current_code_size(s);
>          break;
>      case INDEX_op_br:
>          tcg_out_bpcc(s, COND_A, BPCC_PT, arg_label(a0));
> diff --git a/tcg/tcg.h b/tcg/tcg.h
> index 40c8fbe2ae64..ea4ff02308fc 100644
> --- a/tcg/tcg.h
> +++ b/tcg/tcg.h
> @@ -510,9 +510,9 @@ struct TCGContext {
>
>      /* goto_tb support */
>      tcg_insn_unit *code_buf;
> -    uintptr_t *tb_next;
> -    uint16_t *tb_next_offset;
> -    uint16_t *tb_jmp_offset; /* != NULL if USE_DIRECT_JUMP */
> +    uint16_t *tb_jmp_reset_offset; /* tb->jmp_reset_offset */
> +    uint16_t *tb_jmp_insn_offset; /* tb->jmp_insn_offset if USE_DIRECT_JUMP */
> +    uintptr_t *tb_jmp_target_addr; /* tb->jmp_target_addr if !USE_DIRECT_JUMP */
>
>      /* liveness analysis */
>      uint16_t *op_dead_args; /* for each operation, each bit tells if the
> diff --git a/tcg/tci/tcg-target.inc.c b/tcg/tci/tcg-target.inc.c
> index 4afe4d7a8d59..4e91687abd1c 100644
> --- a/tcg/tci/tcg-target.inc.c
> +++ b/tcg/tci/tcg-target.inc.c
> @@ -553,17 +553,17 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
>          tcg_out64(s, args[0]);
>          break;
>      case INDEX_op_goto_tb:
> -        if (s->tb_jmp_offset) {
> +        if (s->tb_jmp_insn_offset) {
>              /* Direct jump method. */
> -            assert(args[0] < ARRAY_SIZE(s->tb_jmp_offset));
> -            s->tb_jmp_offset[args[0]] = tcg_current_code_size(s);
> +            assert(args[0] < ARRAY_SIZE(s->tb_jmp_insn_offset));
> +            s->tb_jmp_insn_offset[args[0]] = tcg_current_code_size(s);
>              tcg_out32(s, 0);
>          } else {
>              /* Indirect jump method. */
>              TODO();
>          }
> -        assert(args[0] < ARRAY_SIZE(s->tb_next_offset));
> -        s->tb_next_offset[args[0]] = tcg_current_code_size(s);
> +        assert(args[0] < ARRAY_SIZE(s->tb_jmp_reset_offset));
> +        s->tb_jmp_reset_offset[args[0]] = tcg_current_code_size(s);
>          break;
>      case INDEX_op_br:
>          tci_out_label(s, arg_label(args[0]));
> diff --git a/translate-all.c b/translate-all.c
> index 8329ea60eeda..33ca06c663d4 100644
> --- a/translate-all.c
> +++ b/translate-all.c
> @@ -930,7 +930,7 @@ static inline void tb_jmp_remove(TranslationBlock *tb, int n)
>      TranslationBlock *tb1, **ptb;
>      unsigned int n1;
>
> -    ptb = &tb->jmp_next[n];
> +    ptb = &tb->jmp_list_next[n];
>      tb1 = *ptb;
>      if (tb1) {
>          /* find tb(n) in circular list */
> @@ -942,15 +942,15 @@ static inline void tb_jmp_remove(TranslationBlock *tb, int n)
>                  break;
>              }
>              if (n1 == 2) {
> -                ptb = &tb1->jmp_first;
> +                ptb = &tb1->jmp_list_first;
>              } else {
> -                ptb = &tb1->jmp_next[n1];
> +                ptb = &tb1->jmp_list_next[n1];
>              }
>          }
>          /* now we can suppress tb(n) from the list */
> -        *ptb = tb->jmp_next[n];
> +        *ptb = tb->jmp_list_next[n];
>
> -        tb->jmp_next[n] = NULL;
> +        tb->jmp_list_next[n] = NULL;
>      }
>  }
>
> @@ -958,7 +958,8 @@ static inline void tb_jmp_remove(TranslationBlock *tb, int n)
>     another TB */
>  static inline void tb_reset_jump(TranslationBlock *tb, int n)
>  {
> -    tb_set_jmp_target(tb, n, (uintptr_t)(tb->tc_ptr + tb->tb_next_offset[n]));
> +    uintptr_t addr = (uintptr_t)(tb->tc_ptr + tb->jmp_reset_offset[n]);
> +    tb_set_jmp_target(tb, n, addr);
>  }
>
>  /* invalidate one TB */
> @@ -1002,19 +1003,21 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
>      tb_jmp_remove(tb, 1);
>
>      /* suppress any remaining jumps to this TB */
> -    tb1 = tb->jmp_first;
> +    tb1 = tb->jmp_list_first;
>      for (;;) {
>          n1 = (uintptr_t)tb1 & 3;
>          if (n1 == 2) {
>              break;
>          }
>          tb1 = (TranslationBlock *)((uintptr_t)tb1 & ~3);
> -        tb2 = tb1->jmp_next[n1];
> +        tb2 = tb1->jmp_list_next[n1];
>          tb_reset_jump(tb1, n1);
> -        tb1->jmp_next[n1] = NULL;
> +        tb1->jmp_list_next[n1] = NULL;
>          tb1 = tb2;
>      }
> -    tb->jmp_first = (TranslationBlock *)((uintptr_t)tb | 2); /* fail safe */
> +
> +    /* fail safe */
> +    tb->jmp_list_first = (TranslationBlock *)((uintptr_t)tb | 2);
>
>      tcg_ctx.tb_ctx.tb_phys_invalidate_count++;
>  }
> @@ -1099,15 +1102,15 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
>      trace_translate_block(tb, tb->pc, tb->tc_ptr);
>
>      /* generate machine code */
> -    tb->tb_next_offset[0] = 0xffff;
> -    tb->tb_next_offset[1] = 0xffff;
> -    tcg_ctx.tb_next_offset = tb->tb_next_offset;
> +    tb->jmp_reset_offset[0] = TB_JMP_RESET_OFFSET_INVALID;
> +    tb->jmp_reset_offset[1] = TB_JMP_RESET_OFFSET_INVALID;
> +    tcg_ctx.tb_jmp_reset_offset = tb->jmp_reset_offset;
>  #ifdef USE_DIRECT_JUMP
> -    tcg_ctx.tb_jmp_offset = tb->tb_jmp_offset;
> -    tcg_ctx.tb_next = NULL;
> +    tcg_ctx.tb_jmp_insn_offset = tb->jmp_insn_offset;
> +    tcg_ctx.tb_jmp_target_addr = NULL;
>  #else
> -    tcg_ctx.tb_jmp_offset = NULL;
> -    tcg_ctx.tb_next = tb->tb_next;
> +    tcg_ctx.tb_jmp_insn_offset = NULL;
> +    tcg_ctx.tb_jmp_target_addr = tb->jmp_target_addr;
>  #endif
>
>  #ifdef CONFIG_PROFILER
> @@ -1488,15 +1491,15 @@ static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
>          tb->page_addr[1] = -1;
>      }
>
> -    tb->jmp_first = (TranslationBlock *)((uintptr_t)tb | 2);
> -    tb->jmp_next[0] = NULL;
> -    tb->jmp_next[1] = NULL;
> +    tb->jmp_list_first = (TranslationBlock *)((uintptr_t)tb | 2);
> +    tb->jmp_list_next[0] = NULL;
> +    tb->jmp_list_next[1] = NULL;
>
>      /* init original jump addresses */
> -    if (tb->tb_next_offset[0] != 0xffff) {
> +    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
>          tb_reset_jump(tb, 0);
>      }
> -    if (tb->tb_next_offset[1] != 0xffff) {
> +    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
>          tb_reset_jump(tb, 1);
>      }
>
> @@ -1689,9 +1692,9 @@ void dump_exec_info(FILE *f, fprintf_function cpu_fprintf)
>          if (tb->page_addr[1] != -1) {
>              cross_page++;
>          }
> -        if (tb->tb_next_offset[0] != 0xffff) {
> +        if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
>              direct_jmp_count++;
> -            if (tb->tb_next_offset[1] != 0xffff) {
> +            if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
>                  direct_jmp2_count++;
>              }
>          }


--
Alex Bennée

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH v3 02/10] tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 02/10] tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB Sergey Fedorov
@ 2016-04-19 10:34   ` Alex Bennée
  0 siblings, 0 replies; 26+ messages in thread
From: Alex Bennée @ 2016-04-19 10:34 UTC (permalink / raw)
  To: Sergey Fedorov
  Cc: qemu-devel, Sergey Fedorov, Paolo Bonzini, Peter Crosthwaite,
	Richard Henderson


Sergey Fedorov <sergey.fedorov@linaro.org> writes:

> From: Sergey Fedorov <serge.fdrv@gmail.com>
>
> These fields do not contain pure pointers to a TranslationBlock
> structure. So uintptr_t is the most appropriate type for them.
> Also add an assert to ensure that the two least significant bits of the
> pointer are zero before assigning it to jmp_list_first.
>
> Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>
> Changes in v2:
>  * Eliminated duplicate dereference of 'ptb' in tb_jmp_remove()
>
>  include/exec/exec-all.h | 13 ++++++++-----
>  translate-all.c         | 38 ++++++++++++++++++++------------------
>  2 files changed, 28 insertions(+), 23 deletions(-)
>
> diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
> index 38a149cc5e0c..b055716ed690 100644
> --- a/include/exec/exec-all.h
> +++ b/include/exec/exec-all.h
> @@ -275,14 +275,16 @@ struct TranslationBlock {
>       * jmp_list_first points to the first TB jumping to this one.
>       * jmp_list_next is used to point to the next TB in a list.
>       * Since each TB can have two jumps, it can participate in two lists.
> -     * The two least significant bits of a pointer are used to choose which
> -     * data field holds a pointer to the next TB:
> +     * jmp_list_first and jmp_list_next are 4-byte aligned pointers to a
> +     * TranslationBlock structure, but the two least significant bits of
> +     * them are used to encode which data field of the pointed TB should
> +     * be used to traverse the list further from that TB:
>       * 0 => jmp_list_next[0], 1 => jmp_list_next[1], 2 => jmp_list_first.
>       * In other words, 0/1 tells which jump is used in the pointed TB,
>       * and 2 means that this is a pointer back to the target TB of this list.
>       */
> -    struct TranslationBlock *jmp_list_next[2];
> -    struct TranslationBlock *jmp_list_first;
> +    uintptr_t jmp_list_next[2];
> +    uintptr_t jmp_list_first;
>  };
>
>  #include "qemu/thread.h"
> @@ -401,7 +403,8 @@ static inline void tb_add_jump(TranslationBlock *tb, int n,
>
>          /* add in TB jmp circular list */
>          tb->jmp_list_next[n] = tb_next->jmp_list_first;
> -        tb_next->jmp_list_first = (TranslationBlock *)((uintptr_t)tb | n);
> +        assert(((uintptr_t)tb & 3) == 0);
> +        tb_next->jmp_list_first = (uintptr_t)tb | n;
>      }
>  }
>
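[Editorial note: the tagged-pointer encoding documented in the comment above can be sketched in standalone C. The struct and helper names below are hypothetical stand-ins, not the actual QEMU definitions; only the tagging scheme (two low bits of a 4-byte-aligned pointer select the next list field: 0/1 => jmp_list_next[n], 2 => jmp_list_first) is taken from the patch.]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for TranslationBlock; field names mirror the patch. */
struct tb_sketch {
    uintptr_t jmp_list_next[2];
    uintptr_t jmp_list_first;
};

/* Pack a pointer and a 2-bit tag (0/1 = which jump of the pointed TB,
 * 2 = back-pointer to the list's target TB). */
static uintptr_t tb_tag(struct tb_sketch *tb, unsigned n)
{
    assert(((uintptr_t)tb & 3) == 0); /* alignment keeps the tag bits free */
    assert(n <= 2);
    return (uintptr_t)tb | n;
}

/* Recover the pointer and the tag from a packed value. */
static struct tb_sketch *tb_untag(uintptr_t v, unsigned *n)
{
    *n = v & 3;
    return (struct tb_sketch *)(v & ~(uintptr_t)3);
}
```

This is why the patch switches the fields to uintptr_t: the stored values are pointer-plus-tag words, not dereferenceable pointers.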
> diff --git a/translate-all.c b/translate-all.c
> index 33ca06c663d4..ba71ff73f55f 100644
> --- a/translate-all.c
> +++ b/translate-all.c
> @@ -927,17 +927,17 @@ static inline void tb_page_remove(TranslationBlock **ptb, TranslationBlock *tb)
>
>  static inline void tb_jmp_remove(TranslationBlock *tb, int n)
>  {
> -    TranslationBlock *tb1, **ptb;
> +    TranslationBlock *tb1;
> +    uintptr_t *ptb, ntb;
>      unsigned int n1;
>
>      ptb = &tb->jmp_list_next[n];
> -    tb1 = *ptb;
> -    if (tb1) {
> +    if (*ptb) {
>          /* find tb(n) in circular list */
>          for (;;) {
> -            tb1 = *ptb;
> -            n1 = (uintptr_t)tb1 & 3;
> -            tb1 = (TranslationBlock *)((uintptr_t)tb1 & ~3);
> +            ntb = *ptb;
> +            n1 = ntb & 3;
> +            tb1 = (TranslationBlock *)(ntb & ~3);
>              if (n1 == n && tb1 == tb) {
>                  break;
>              }
> @@ -950,7 +950,7 @@ static inline void tb_jmp_remove(TranslationBlock *tb, int n)
>          /* now we can suppress tb(n) from the list */
>          *ptb = tb->jmp_list_next[n];
>
> -        tb->jmp_list_next[n] = NULL;
> +        tb->jmp_list_next[n] = (uintptr_t)NULL;
>      }
>  }
>
> @@ -969,7 +969,7 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
>      PageDesc *p;
>      unsigned int h, n1;
>      tb_page_addr_t phys_pc;
> -    TranslationBlock *tb1, *tb2;
> +    uintptr_t tb1, tb2;
>
>      /* remove the TB from the hash list */
>      phys_pc = tb->page_addr[0] + (tb->pc & ~TARGET_PAGE_MASK);
> @@ -1005,19 +1005,20 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
>      /* suppress any remaining jumps to this TB */
>      tb1 = tb->jmp_list_first;
>      for (;;) {
> -        n1 = (uintptr_t)tb1 & 3;
> +        TranslationBlock *tmp_tb;
> +        n1 = tb1 & 3;
>          if (n1 == 2) {
>              break;
>          }
> -        tb1 = (TranslationBlock *)((uintptr_t)tb1 & ~3);
> -        tb2 = tb1->jmp_list_next[n1];
> -        tb_reset_jump(tb1, n1);
> -        tb1->jmp_list_next[n1] = NULL;
> +        tmp_tb = (TranslationBlock *)(tb1 & ~3);
> +        tb2 = tmp_tb->jmp_list_next[n1];
> +        tb_reset_jump(tmp_tb, n1);
> +        tmp_tb->jmp_list_next[n1] = (uintptr_t)NULL;
>          tb1 = tb2;
>      }
>
> -    /* fail safe */
> -    tb->jmp_list_first = (TranslationBlock *)((uintptr_t)tb | 2);
> +    assert(((uintptr_t)tb & 3) == 0);
> +    tb->jmp_list_first = (uintptr_t)tb | 2; /* fail safe */
>
>      tcg_ctx.tb_ctx.tb_phys_invalidate_count++;
>  }
> @@ -1491,9 +1492,10 @@ static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
>          tb->page_addr[1] = -1;
>      }
>
> -    tb->jmp_list_first = (TranslationBlock *)((uintptr_t)tb | 2);
> -    tb->jmp_list_next[0] = NULL;
> -    tb->jmp_list_next[1] = NULL;
> +    assert(((uintptr_t)tb & 3) == 0);
> +    tb->jmp_list_first = (uintptr_t)tb | 2;
> +    tb->jmp_list_next[0] = (uintptr_t)NULL;
> +    tb->jmp_list_next[1] = (uintptr_t)NULL;
>
>      /* init original jump addresses */
>      if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v3 04/10] tcg: Init TB's direct jumps before making it visible
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 04/10] tcg: Init TB's direct jumps before making it visible Sergey Fedorov
@ 2016-04-19 10:55   ` Alex Bennée
  2016-04-19 12:42     ` Sergey Fedorov
  0 siblings, 1 reply; 26+ messages in thread
From: Alex Bennée @ 2016-04-19 10:55 UTC (permalink / raw)
  To: Sergey Fedorov
  Cc: qemu-devel, Sergey Fedorov, Paolo Bonzini, Peter Crosthwaite,
	Richard Henderson


Sergey Fedorov <sergey.fedorov@linaro.org> writes:

> From: Sergey Fedorov <serge.fdrv@gmail.com>
>
> Initialize TB's direct jump list data fields and reset the jumps before
> tb_link_page() puts it into the physical hash table and the physical
> page list. So TB is completely initialized before it becomes visible.
>
> Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
> ---
>
> Changes in v2:
>  * Tweaked a comment
>
>  translate-all.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)
>
> diff --git a/translate-all.c b/translate-all.c
> index 7ac7916f2792..dfa7f0d64e76 100644
> --- a/translate-all.c
> +++ b/translate-all.c
> @@ -1133,19 +1133,6 @@ static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
>          tb->page_addr[1] = -1;
>      }
>
> -    assert(((uintptr_t)tb & 3) == 0);
> -    tb->jmp_list_first = (uintptr_t)tb | 2;
> -    tb->jmp_list_next[0] = (uintptr_t)NULL;
> -    tb->jmp_list_next[1] = (uintptr_t)NULL;
> -
> -    /* init original jump addresses */
> -    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
> -        tb_reset_jump(tb, 0);
> -    }
> -    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
> -        tb_reset_jump(tb, 1);
> -    }
> -
>  #ifdef DEBUG_TB_CHECK
>      tb_page_check();
>  #endif
> @@ -1254,6 +1241,20 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
>          ROUND_UP((uintptr_t)gen_code_buf + gen_code_size + search_size,
>                   CODE_GEN_ALIGN);
>
> +    /* init jump list */
> +    assert(((uintptr_t)tb & 3) == 0);
> +    tb->jmp_list_first = (uintptr_t)tb | 2;
> +    tb->jmp_list_next[0] = (uintptr_t)NULL;
> +    tb->jmp_list_next[1] = (uintptr_t)NULL;
> +
> +    /* init original jump addresses which have been set during tcg_gen_code() */
> +    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
> +        tb_reset_jump(tb, 0);
> +    }
> +    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
> +        tb_reset_jump(tb, 1);
> +    }
> +

If we are really concerned about ensuring everything is set before we
insert the TB into the list, shouldn't we have an explicit write barrier
before we call tb_link_page()?
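[Editorial note: the initialize-then-barrier-then-publish pattern the reviewer is asking about can be sketched in standalone C11. QEMU itself would use its own barrier/atomic helpers such as smp_wmb(); the struct, field values, and function names below are hypothetical illustrations of the pattern only.]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct tb_stub { int jmp_reset_offset[2]; };

static struct tb_stub storage;
static _Atomic(struct tb_stub *) published;

/* Writer: fully initialize every field first, then issue a release
 * fence so those stores cannot be reordered past the pointer store
 * that makes the TB visible to other threads. */
static void publish_tb(void)
{
    storage.jmp_reset_offset[0] = 0xffff;
    storage.jmp_reset_offset[1] = 0xffff;
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&published, &storage, memory_order_relaxed);
}

/* Reader: an acquire load pairs with the release fence above, so any
 * TB observed through 'published' is seen fully initialized. */
static struct tb_stub *lookup_tb(void)
{
    return atomic_load_explicit(&published, memory_order_acquire);
}
```

Without the fence, a reader on a weakly ordered architecture could observe the published pointer before the field initializations.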

>      /* check next page if needed */
>      virt_page2 = (pc + tb->size - 1) & TARGET_PAGE_MASK;
>      phys_page2 = -1;


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v3 05/10] tcg: Clarify thread safety check in tb_add_jump()
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 05/10] tcg: Clarify thread safety check in tb_add_jump() Sergey Fedorov
@ 2016-04-19 11:01   ` Alex Bennée
  2016-04-19 12:49     ` Sergey Fedorov
  0 siblings, 1 reply; 26+ messages in thread
From: Alex Bennée @ 2016-04-19 11:01 UTC (permalink / raw)
  To: Sergey Fedorov
  Cc: qemu-devel, Sergey Fedorov, Paolo Bonzini, Peter Crosthwaite,
	Richard Henderson


Sergey Fedorov <sergey.fedorov@linaro.org> writes:

> From: Sergey Fedorov <serge.fdrv@gmail.com>
>
> The check is to make sure that another thread hasn't already done the
> same while we were outside of tb_lock. Mention this in a comment.
>
> Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
> ---
>
> Changes in v2:
>  * Typo fixed in the commit title
>  * Complete rewrite of the commit body and the patch based on Paolo's comments
>
>  include/exec/exec-all.h | 31 +++++++++++++++++--------------
>  1 file changed, 17 insertions(+), 14 deletions(-)
>
> diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
> index b055716ed690..8e81ef5fb2c2 100644
> --- a/include/exec/exec-all.h
> +++ b/include/exec/exec-all.h
> @@ -391,21 +391,24 @@ static inline void tb_set_jmp_target(TranslationBlock *tb,
>  static inline void tb_add_jump(TranslationBlock *tb, int n,
>                                 TranslationBlock *tb_next)
>  {
> -    /* NOTE: this test is only needed for thread safety */
> -    if (!tb->jmp_list_next[n]) {
> -        qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
> -                               "Linking TBs %p [" TARGET_FMT_lx
> -                               "] index %d -> %p [" TARGET_FMT_lx "]\n",
> -                               tb->tc_ptr, tb->pc, n,
> -                               tb_next->tc_ptr, tb_next->pc);
> -        /* patch the native jump address */
> -        tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc_ptr);
> -
> -        /* add in TB jmp circular list */
> -        tb->jmp_list_next[n] = tb_next->jmp_list_first;
> -        assert(((uintptr_t)tb & 3) == 0);
> -        tb_next->jmp_list_first = (uintptr_t)tb | n;
> +    if (tb->jmp_list_next[n]) {
> +        /* Another thread has already done this while we were
> +         * outside of the lock; nothing to do in this case */
> +        return;
>      }
> +    qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
> +                           "Linking TBs %p [" TARGET_FMT_lx
> +                           "] index %d -> %p [" TARGET_FMT_lx "]\n",
> +                           tb->tc_ptr, tb->pc, n,
> +                           tb_next->tc_ptr, tb_next->pc);
> +
> +    /* patch the native jump address */
> +    tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc_ptr);
> +
> +    /* add in TB jmp circular list */
> +    tb->jmp_list_next[n] = tb_next->jmp_list_first;
> +    assert(((uintptr_t)tb & 3) == 0);

I think this assert can be dropped. The only call explicitly masks with
TB_EXIT_MASK (which would be a better choice than the number 3 anyway)
so something really strange would have had to happen in the intervening
few lines.
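[Editorial note: for context, the value the reviewer refers to packs the exit index into the low bits of the TB pointer returned from execution. A minimal decode sketch follows; TB_EXIT_MASK's value of 3 is assumed from the two-bit tag scheme described in this series, and the helper name is hypothetical.]

```c
#include <assert.h>
#include <stdint.h>

#define TB_EXIT_MASK ((uintptr_t)3) /* assumed: two low bits hold the exit index */

/* Split a packed 'next_tb' value into the TB pointer and the exit index. */
static void *tb_from_exit(uintptr_t next_tb, unsigned *exit_idx)
{
    *exit_idx = (unsigned)(next_tb & TB_EXIT_MASK);
    return (void *)(next_tb & ~TB_EXIT_MASK);
}
```

Since the caller masks with TB_EXIT_MASK before tb_add_jump(), the pointer argument's low bits are already clear, which is the reviewer's point about the assert.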

Otherwise:

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> +    tb_next->jmp_list_first = (uintptr_t)tb | n;
>  }
>
>  /* GETRA is the true target of the return instruction that we'll execute,


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v3 09/10] tcg: Clean up direct block chaining safety checks
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 09/10] tcg: Clean up direct block chaining safety checks Sergey Fedorov
@ 2016-04-19 11:37   ` Alex Bennée
  2016-04-19 13:02     ` Sergey Fedorov
  0 siblings, 1 reply; 26+ messages in thread
From: Alex Bennée @ 2016-04-19 11:37 UTC (permalink / raw)
  To: Sergey Fedorov
  Cc: qemu-devel, Sergey Fedorov, Paolo Bonzini, Peter Crosthwaite,
	Richard Henderson, Peter Maydell, Edgar E. Iglesias,
	Eduardo Habkost, Michael Walle, Aurelien Jarno, Leon Alrae,
	Anthony Green, Jia Liu, Alexander Graf, Blue Swirl,
	Mark Cave-Ayland, Bastian Koppelmann, Guan Xuetao, Max Filippov,
	qemu-arm, qemu-ppc


Sergey Fedorov <sergey.fedorov@linaro.org> writes:

> From: Sergey Fedorov <serge.fdrv@gmail.com>
>
> We don't take care of direct jumps when the address mapping changes. Thus we
> must be sure to generate direct jumps so that they remain valid even if the
> address mapping changes. However, we only allow a TB to be executed if it was
> generated from pages which match the current mapping.
>
> Document in comments in the translation code the reason for these checks
> on a destination PC.
>
> Some targets with variable-length instructions allow a TB to straddle a
> page boundary. However, we make sure that both TB pages match the
> current address mapping when looking up TBs. So it is safe to do direct
> jumps into both pages. Correct the checks for those targets.
>
> Given that, we can safely patch a TB which spans two pages. Remove the
> unnecessary check in cpu_exec() and allow such TBs to be patched.
>
> Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
> ---
>  cpu-exec.c                    | 7 ++-----
>  target-alpha/translate.c      | 5 ++++-
>  target-arm/translate-a64.c    | 5 ++++-
>  target-arm/translate.c        | 7 ++++++-
>  target-cris/translate.c       | 8 +++++++-
>  target-i386/translate.c       | 7 +++++--
>  target-lm32/translate.c       | 4 ++++
>  target-m68k/translate.c       | 6 +++++-
>  target-microblaze/translate.c | 4 ++++
>  target-mips/translate.c       | 4 ++++
>  target-moxie/translate.c      | 4 ++++
>  target-openrisc/translate.c   | 4 ++++
>  target-ppc/translate.c        | 4 ++++
>  target-s390x/translate.c      | 9 ++++++---
>  target-sh4/translate.c        | 4 ++++
>  target-sparc/translate.c      | 4 ++++
>  target-tricore/translate.c    | 4 ++++
>  target-unicore32/translate.c  | 4 ++++
>  target-xtensa/translate.c     | 8 ++++++++
>  19 files changed, 87 insertions(+), 15 deletions(-)
>
> diff --git a/cpu-exec.c b/cpu-exec.c
> index bbfcbfb54385..065cc9159477 100644
> --- a/cpu-exec.c
> +++ b/cpu-exec.c
> @@ -508,11 +508,8 @@ int cpu_exec(CPUState *cpu)
>                      next_tb = 0;
>                      tcg_ctx.tb_ctx.tb_invalidated_flag = 0;
>                  }
> -                /* see if we can patch the calling TB. When the TB
> -                   spans two pages, we cannot safely do a direct
> -                   jump. */
> -                if (next_tb != 0 && tb->page_addr[1] == -1
> -                    && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
> +                /* See if we can patch the calling TB. */
> +                if (next_tb != 0 && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
>                      tb_add_jump((TranslationBlock *)(next_tb & ~TB_EXIT_MASK),
>                                  next_tb & TB_EXIT_MASK, tb);
>                  }

We've already discussed on IRC the confusion of next_tb ;-)

> diff --git a/target-alpha/translate.c b/target-alpha/translate.c
> index 5b86992dd367..5fa66309ce2e 100644
> --- a/target-alpha/translate.c
> +++ b/target-alpha/translate.c
> @@ -464,7 +464,10 @@ static bool use_goto_tb(DisasContext *ctx, uint64_t dest)
>      if (in_superpage(ctx, dest)) {
>          return true;
>      }
> -    /* Check for the dest on the same page as the start of the TB.  */
> +    /* Direct jumps with goto_tb are only safe within the page this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */

I'm all for harmonising the comments but I think for the common case we
could refer to a central location for the page linking rules and only
expand the comment for subtle differences between the front ends.

>      return ((ctx->tb->pc ^ dest) & TARGET_PAGE_MASK) == 0;
>  }
>
> diff --git a/target-arm/translate-a64.c b/target-arm/translate-a64.c
> index b13cff756ad1..bf8471c7fa99 100644
> --- a/target-arm/translate-a64.c
> +++ b/target-arm/translate-a64.c
> @@ -274,7 +274,10 @@ static inline bool use_goto_tb(DisasContext *s, int n, uint64_t dest)
>          return false;
>      }
>
> -    /* Only link tbs from inside the same guest page */
> +    /* Direct jumps with goto_tb are only safe within the page this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if ((s->tb->pc & TARGET_PAGE_MASK) != (dest & TARGET_PAGE_MASK)) {
>          return false;
>      }
> diff --git a/target-arm/translate.c b/target-arm/translate.c
> index 940ec8d981d1..aeb3e84e8d40 100644
> --- a/target-arm/translate.c
> +++ b/target-arm/translate.c
> @@ -4054,7 +4054,12 @@ static inline void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
>      TranslationBlock *tb;
>
>      tb = s->tb;
> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
> +    /* Direct jumps with goto_tb are only safe within the pages this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
> +    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
> +        ((s->pc - 1) & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {

Isn't this check avoided by the fact that the translate loop bails out on end_of_page?
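[Editorial note: the page-mask comparisons these hunks add in every front end all reduce to the same helper. A standalone sketch, assuming a 4 KiB target page for illustration — QEMU derives TARGET_PAGE_BITS/TARGET_PAGE_MASK per target, so the size here is not authoritative.]

```c
#include <assert.h>
#include <stdint.h>

#define TARGET_PAGE_BITS 12 /* assumed 4 KiB page for illustration */
#define TARGET_PAGE_MASK (~(((uint64_t)1 << TARGET_PAGE_BITS) - 1))

/* A direct jump is only safe when the destination lies on a page the
 * TB itself covers, because mapping changes for other pages are not
 * tracked against the generated jump. */
static int same_page(uint64_t a, uint64_t b)
{
    return (a & TARGET_PAGE_MASK) == (b & TARGET_PAGE_MASK);
}
```

The two-page variants in this patch simply apply the helper twice: once against the TB's start page and once against the page of the TB's last instruction.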

>          tcg_gen_goto_tb(n);
>          gen_set_pc_im(s, dest);
>          tcg_gen_exit_tb((uintptr_t)tb + n);
> diff --git a/target-cris/translate.c b/target-cris/translate.c
> index a73176c118b0..7acac076167e 100644
> --- a/target-cris/translate.c
> +++ b/target-cris/translate.c
> @@ -524,7 +524,13 @@ static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
>  {
>      TranslationBlock *tb;
>      tb = dc->tb;
> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
> +
> +    /* Direct jumps with goto_tb are only safe within the pages this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
> +    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
> +        (dc->ppc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
>          tcg_gen_goto_tb(n);
>          tcg_gen_movi_tl(env_pc, dest);
>                  tcg_gen_exit_tb((uintptr_t)tb + n);
> diff --git a/target-i386/translate.c b/target-i386/translate.c
> index 1a1214dcb12e..d0f440fc7697 100644
> --- a/target-i386/translate.c
> +++ b/target-i386/translate.c
> @@ -2092,9 +2092,12 @@ static inline void gen_goto_tb(DisasContext *s, int tb_num, target_ulong eip)
>
>      pc = s->cs_base + eip;
>      tb = s->tb;
> -    /* NOTE: we handle the case where the TB spans two pages here */
> +    /* Direct jumps with goto_tb are only safe within the pages this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) ||
> -        (pc & TARGET_PAGE_MASK) == ((s->pc - 1) & TARGET_PAGE_MASK))  {
> +        (pc & TARGET_PAGE_MASK) == (s->pc_start & TARGET_PAGE_MASK))  {
>          /* jump to same page: we can use a direct jump */
>          tcg_gen_goto_tb(tb_num);
>          gen_jmp_im(eip);
> diff --git a/target-lm32/translate.c b/target-lm32/translate.c
> index 256a51f8498f..18d648ffc729 100644
> --- a/target-lm32/translate.c
> +++ b/target-lm32/translate.c
> @@ -138,6 +138,10 @@ static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
>      TranslationBlock *tb;
>
>      tb = dc->tb;
> +    /* Direct jumps with goto_tb are only safe within the page this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
>              likely(!dc->singlestep_enabled)) {
>          tcg_gen_goto_tb(n);
> diff --git a/target-m68k/translate.c b/target-m68k/translate.c
> index 7560c3a80841..282da60cbcca 100644
> --- a/target-m68k/translate.c
> +++ b/target-m68k/translate.c
> @@ -861,7 +861,11 @@ static void gen_jmp_tb(DisasContext *s, int n, uint32_t dest)
>      if (unlikely(s->singlestep_enabled)) {
>          gen_exception(s, dest, EXCP_DEBUG);
>      } else if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
> -               (s->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
> +               (s->insn_pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
> +        /* Direct jumps with goto_tb are only safe within the page this TB
> +         * resides in because we don't take care of direct jumps when address
> +         * mapping changes.
> +         */
>          tcg_gen_goto_tb(n);
>          tcg_gen_movi_i32(QREG_PC, dest);
>          tcg_gen_exit_tb((uintptr_t)tb + n);
> diff --git a/target-microblaze/translate.c b/target-microblaze/translate.c
> index f944965a14e1..6028750ba7de 100644
> --- a/target-microblaze/translate.c
> +++ b/target-microblaze/translate.c
> @@ -128,6 +128,10 @@ static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
>  {
>      TranslationBlock *tb;
>      tb = dc->tb;
> +    /* Direct jumps with goto_tb are only safe within the page this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
>          tcg_gen_goto_tb(n);
>          tcg_gen_movi_tl(cpu_SR[SR_PC], dest);
> diff --git a/target-mips/translate.c b/target-mips/translate.c
> index a3a05ec66dd2..37b834d2df59 100644
> --- a/target-mips/translate.c
> +++ b/target-mips/translate.c
> @@ -4195,6 +4195,10 @@ static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
>  {
>      TranslationBlock *tb;
>      tb = ctx->tb;
> +    /* Direct jumps with goto_tb are only safe within the page this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
>          likely(!ctx->singlestep_enabled)) {
>          tcg_gen_goto_tb(n);
> diff --git a/target-moxie/translate.c b/target-moxie/translate.c
> index a437e2ab6026..fb99038399fa 100644
> --- a/target-moxie/translate.c
> +++ b/target-moxie/translate.c
> @@ -127,6 +127,10 @@ static inline void gen_goto_tb(CPUMoxieState *env, DisasContext *ctx,
>      TranslationBlock *tb;
>      tb = ctx->tb;
>
> +    /* Direct jumps with goto_tb are only safe within the page this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
>          !ctx->singlestep_enabled) {
>          tcg_gen_goto_tb(n);
> diff --git a/target-openrisc/translate.c b/target-openrisc/translate.c
> index 5d0ab442a872..da60fae26a75 100644
> --- a/target-openrisc/translate.c
> +++ b/target-openrisc/translate.c
> @@ -194,6 +194,10 @@ static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
>  {
>      TranslationBlock *tb;
>      tb = dc->tb;
> +    /* Direct jumps with goto_tb are only safe within the page this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
>                                         likely(!dc->singlestep_enabled)) {
>          tcg_gen_movi_tl(cpu_pc, dest);
> diff --git a/target-ppc/translate.c b/target-ppc/translate.c
> index 6f0e7b4face6..92ded8ec433b 100644
> --- a/target-ppc/translate.c
> +++ b/target-ppc/translate.c
> @@ -3832,6 +3832,10 @@ static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
>      if (NARROW_MODE(ctx)) {
>          dest = (uint32_t) dest;
>      }
> +    /* Direct jumps with goto_tb are only safe within the page this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
>          likely(!ctx->singlestep_enabled)) {
>          tcg_gen_goto_tb(n);
> diff --git a/target-s390x/translate.c b/target-s390x/translate.c
> index c871ef2bb302..9f6ae60622b2 100644
> --- a/target-s390x/translate.c
> +++ b/target-s390x/translate.c
> @@ -608,9 +608,12 @@ static void gen_op_calc_cc(DisasContext *s)
>
>  static int use_goto_tb(DisasContext *s, uint64_t dest)
>  {
> -    /* NOTE: we handle the case where the TB spans two pages here */
> -    return (((dest & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK)
> -             || (dest & TARGET_PAGE_MASK) == ((s->pc - 1) & TARGET_PAGE_MASK))
> +    /* Direct jumps with goto_tb are only safe within the pages this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
> +    return (((dest & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK) ||
> +             (dest & TARGET_PAGE_MASK) == (s->pc & TARGET_PAGE_MASK))
>              && !s->singlestep_enabled
>              && !(s->tb->cflags & CF_LAST_IO)
>              && !(s->tb->flags & FLAG_MASK_PER));
> diff --git a/target-sh4/translate.c b/target-sh4/translate.c
> index 7c189680a7a4..660692d06090 100644
> --- a/target-sh4/translate.c
> +++ b/target-sh4/translate.c
> @@ -210,6 +210,10 @@ static void gen_goto_tb(DisasContext * ctx, int n, target_ulong dest)
>      TranslationBlock *tb;
>      tb = ctx->tb;
>
> +    /* Direct jumps with goto_tb are only safe within the pages this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
>  	!ctx->singlestep_enabled) {
>  	/* Use a direct jump if in same page and singlestep not enabled */
> diff --git a/target-sparc/translate.c b/target-sparc/translate.c
> index 58572c34cf3e..d09a500e8bd4 100644
> --- a/target-sparc/translate.c
> +++ b/target-sparc/translate.c
> @@ -309,6 +309,10 @@ static inline void gen_goto_tb(DisasContext *s, int tb_num,
>      TranslationBlock *tb;
>
>      tb = s->tb;
> +    /* Direct jumps with goto_tb are only safe within the pages this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) &&
>          (npc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) &&
>          !s->singlestep)  {
> diff --git a/target-tricore/translate.c b/target-tricore/translate.c
> index 912bf226bedc..b67154a3f410 100644
> --- a/target-tricore/translate.c
> +++ b/target-tricore/translate.c
> @@ -3240,6 +3240,10 @@ static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
>  {
>      TranslationBlock *tb;
>      tb = ctx->tb;
> +    /* Direct jumps with goto_tb are only safe within the pages this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
>          likely(!ctx->singlestep_enabled)) {
>          tcg_gen_goto_tb(n);
> diff --git a/target-unicore32/translate.c b/target-unicore32/translate.c
> index 39af3af05f15..04dcbb0bb466 100644
> --- a/target-unicore32/translate.c
> +++ b/target-unicore32/translate.c
> @@ -1094,6 +1094,10 @@ static inline void gen_goto_tb(DisasContext *s, int n, uint32_t dest)
>      TranslationBlock *tb;
>
>      tb = s->tb;
> +    /* Direct jumps with goto_tb are only safe within the page this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
>          tcg_gen_goto_tb(n);
>          gen_set_pc_im(dest);
> diff --git a/target-xtensa/translate.c b/target-xtensa/translate.c
> index 989448846902..7eeb279e5030 100644
> --- a/target-xtensa/translate.c
> +++ b/target-xtensa/translate.c
> @@ -418,6 +418,10 @@ static void gen_jump(DisasContext *dc, TCGv dest)
>  static void gen_jumpi(DisasContext *dc, uint32_t dest, int slot)
>  {
>      TCGv_i32 tmp = tcg_const_i32(dest);
> +    /* Direct jumps with goto_tb are only safe within the pages this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if (((dc->tb->pc ^ dest) & TARGET_PAGE_MASK) != 0) {
>          slot = -1;
>      }
> @@ -446,6 +450,10 @@ static void gen_callw(DisasContext *dc, int callinc, TCGv_i32 dest)
>  static void gen_callwi(DisasContext *dc, int callinc, uint32_t dest, int slot)
>  {
>      TCGv_i32 tmp = tcg_const_i32(dest);
> +    /* Direct jumps with goto_tb are only safe within the pages this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes.
> +     */
>      if (((dc->tb->pc ^ dest) & TARGET_PAGE_MASK) != 0) {
>          slot = -1;
>      }


--
Alex Bennée

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH v3 04/10] tcg: Init TB's direct jumps before making it visible
  2016-04-19 10:55   ` Alex Bennée
@ 2016-04-19 12:42     ` Sergey Fedorov
  2016-04-19 13:07       ` Alex Bennée
  0 siblings, 1 reply; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-19 12:42 UTC (permalink / raw)
  To: Alex Bennée, Sergey Fedorov
  Cc: qemu-devel, Paolo Bonzini, Peter Crosthwaite, Richard Henderson

On 19/04/16 13:55, Alex Bennée wrote:
> Sergey Fedorov <sergey.fedorov@linaro.org> writes:
>
>> From: Sergey Fedorov <serge.fdrv@gmail.com>
>>
>> Initialize TB's direct jump list data fields and reset the jumps before
>> tb_link_page() puts it into the physical hash table and the physical
>> page list. So TB is completely initialized before it becomes visible.
>>
>> Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
>> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
>> ---
>>
>> Changes in v2:
>>  * Tweaked a comment
>>
>>  translate-all.c | 27 ++++++++++++++-------------
>>  1 file changed, 14 insertions(+), 13 deletions(-)
>>
>> diff --git a/translate-all.c b/translate-all.c
>> index 7ac7916f2792..dfa7f0d64e76 100644
>> --- a/translate-all.c
>> +++ b/translate-all.c
>> @@ -1133,19 +1133,6 @@ static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
>>          tb->page_addr[1] = -1;
>>      }
>>
>> -    assert(((uintptr_t)tb & 3) == 0);
>> -    tb->jmp_list_first = (uintptr_t)tb | 2;
>> -    tb->jmp_list_next[0] = (uintptr_t)NULL;
>> -    tb->jmp_list_next[1] = (uintptr_t)NULL;
>> -
>> -    /* init original jump addresses */
>> -    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
>> -        tb_reset_jump(tb, 0);
>> -    }
>> -    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
>> -        tb_reset_jump(tb, 1);
>> -    }
>> -
>>  #ifdef DEBUG_TB_CHECK
>>      tb_page_check();
>>  #endif
>> @@ -1254,6 +1241,20 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
>>          ROUND_UP((uintptr_t)gen_code_buf + gen_code_size + search_size,
>>                   CODE_GEN_ALIGN);
>>
>> +    /* init jump list */
>> +    assert(((uintptr_t)tb & 3) == 0);
>> +    tb->jmp_list_first = (uintptr_t)tb | 2;
>> +    tb->jmp_list_next[0] = (uintptr_t)NULL;
>> +    tb->jmp_list_next[1] = (uintptr_t)NULL;
>> +
>> +    /* init original jump addresses which have been set during tcg_gen_code() */
>> +    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
>> +        tb_reset_jump(tb, 0);
>> +    }
>> +    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
>> +        tb_reset_jump(tb, 1);
>> +    }
>> +
> If we are really concerned about ensuring everything is set before we
> insert the TB into the list should we not have an explicit write barrier
> before we call to link the page?

Currently, it is synchronized by 'tb_lock', so there's no need for a memory
barrier here. This is a simple rearrangement of code to a more suitable
place, and maybe just a preparation for relaxing the locking scheme in the
future. Putting a barrier in this patch would be premature and add
unnecessary overhead. Do you think it's worth mentioning that in the
commit message?

Kind regards,
Sergey

>
>>      /* check next page if needed */
>>      virt_page2 = (pc + tb->size - 1) & TARGET_PAGE_MASK;
>>      phys_page2 = -1;
>


* Re: [Qemu-devel] [PATCH v3 05/10] tcg: Clarify thread safety check in tb_add_jump()
  2016-04-19 11:01   ` Alex Bennée
@ 2016-04-19 12:49     ` Sergey Fedorov
  2016-04-19 15:27       ` Alex Bennée
  0 siblings, 1 reply; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-19 12:49 UTC (permalink / raw)
  To: Alex Bennée, Sergey Fedorov
  Cc: qemu-devel, Paolo Bonzini, Peter Crosthwaite, Richard Henderson

On 19/04/16 14:01, Alex Bennée wrote:
> Sergey Fedorov <sergey.fedorov@linaro.org> writes:
>
>> From: Sergey Fedorov <serge.fdrv@gmail.com>
>>
>> The check is to make sure that another thread hasn't already done the
>> same while we were outside of tb_lock. Mention this in a comment.
>>
>> Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
>> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
>> ---
>>
>> Changes in v2:
>>  * Typo fixed in the commit title
>>  * Complete rewrite of the commit body and the patch based on Paolo's comments
>>
>>  include/exec/exec-all.h | 31 +++++++++++++++++--------------
>>  1 file changed, 17 insertions(+), 14 deletions(-)
>>
>> diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
>> index b055716ed690..8e81ef5fb2c2 100644
>> --- a/include/exec/exec-all.h
>> +++ b/include/exec/exec-all.h
>> @@ -391,21 +391,24 @@ static inline void tb_set_jmp_target(TranslationBlock *tb,
>>  static inline void tb_add_jump(TranslationBlock *tb, int n,
>>                                 TranslationBlock *tb_next)
>>  {
>> -    /* NOTE: this test is only needed for thread safety */
>> -    if (!tb->jmp_list_next[n]) {
>> -        qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
>> -                               "Linking TBs %p [" TARGET_FMT_lx
>> -                               "] index %d -> %p [" TARGET_FMT_lx "]\n",
>> -                               tb->tc_ptr, tb->pc, n,
>> -                               tb_next->tc_ptr, tb_next->pc);
>> -        /* patch the native jump address */
>> -        tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc_ptr);
>> -
>> -        /* add in TB jmp circular list */
>> -        tb->jmp_list_next[n] = tb_next->jmp_list_first;
>> -        assert(((uintptr_t)tb & 3) == 0);
>> -        tb_next->jmp_list_first = (uintptr_t)tb | n;
>> +    if (tb->jmp_list_next[n]) {
>> +        /* Another thread has already done this while we were
>> +         * outside of the lock; nothing to do in this case */
>> +        return;
>>      }
>> +    qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
>> +                           "Linking TBs %p [" TARGET_FMT_lx
>> +                           "] index %d -> %p [" TARGET_FMT_lx "]\n",
>> +                           tb->tc_ptr, tb->pc, n,
>> +                           tb_next->tc_ptr, tb_next->pc);
>> +
>> +    /* patch the native jump address */
>> +    tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc_ptr);
>> +
>> +    /* add in TB jmp circular list */
>> +    tb->jmp_list_next[n] = tb_next->jmp_list_first;
>> +    assert(((uintptr_t)tb & 3) == 0);
> I think this assert can be dropped. The only call explicitly masks with
> TB_EXIT_MASK (which would be a better choice than the number 3 anyway)
> so something really strange would have had to happen in the intervening
> few lines.

What about the same assert in tb_gen_code()?

Kind regards,
Sergey

>
> Otherwise:
>
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
>
>> +    tb_next->jmp_list_first = (uintptr_t)tb | n;
>>  }
>>
>>  /* GETRA is the true target of the return instruction that we'll execute,
>


* Re: [Qemu-devel] [PATCH v3 09/10] tcg: Clean up direct block chaining safety checks
  2016-04-19 11:37   ` Alex Bennée
@ 2016-04-19 13:02     ` Sergey Fedorov
  2016-04-19 14:53       ` Alex Bennée
  0 siblings, 1 reply; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-19 13:02 UTC (permalink / raw)
  To: Alex Bennée, Sergey Fedorov
  Cc: qemu-devel, Paolo Bonzini, Peter Crosthwaite, Richard Henderson,
	Peter Maydell, Edgar E. Iglesias, Eduardo Habkost, Michael Walle,
	Aurelien Jarno, Leon Alrae, Anthony Green, Jia Liu,
	Alexander Graf, Blue Swirl, Mark Cave-Ayland, Bastian Koppelmann,
	Guan Xuetao, Max Filippov, qemu-arm, qemu-ppc

On 19/04/16 14:37, Alex Bennée wrote:
> Sergey Fedorov <sergey.fedorov@linaro.org> writes:
(snip)
>> diff --git a/cpu-exec.c b/cpu-exec.c
>> index bbfcbfb54385..065cc9159477 100644
>> --- a/cpu-exec.c
>> +++ b/cpu-exec.c
>> @@ -508,11 +508,8 @@ int cpu_exec(CPUState *cpu)
>>                      next_tb = 0;
>>                      tcg_ctx.tb_ctx.tb_invalidated_flag = 0;
>>                  }
>> -                /* see if we can patch the calling TB. When the TB
>> -                   spans two pages, we cannot safely do a direct
>> -                   jump. */
>> -                if (next_tb != 0 && tb->page_addr[1] == -1
>> -                    && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
>> +                /* See if we can patch the calling TB. */
>> +                if (next_tb != 0 && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
>>                      tb_add_jump((TranslationBlock *)(next_tb & ~TB_EXIT_MASK),
>>                                  next_tb & TB_EXIT_MASK, tb);
>>                  }
> We've already discussed on IRC the confusion of next_tb ;-)

Yes, I'd rather add a patch cleaning up the 'next_tb' naming to the "tcg:
Misc clean-up patches" series. This series deals with direct block chaining
only.

>> diff --git a/target-alpha/translate.c b/target-alpha/translate.c
>> index 5b86992dd367..5fa66309ce2e 100644
>> --- a/target-alpha/translate.c
>> +++ b/target-alpha/translate.c
>> @@ -464,7 +464,10 @@ static bool use_goto_tb(DisasContext *ctx, uint64_t dest)
>>      if (in_superpage(ctx, dest)) {
>>          return true;
>>      }
>> -    /* Check for the dest on the same page as the start of the TB.  */
>> +    /* Direct jumps with goto_tb are only safe within the page this TB resides
>> +     * in because we don't take care of direct jumps when address mapping
>> +     * changes.
>> +     */
> I'm all for harmonising the comments but I think for the common case we
> could refer to a central location for the page linking rules and only
> expand the comment for subtle differences between the front ends.

You mean it'd be better to put a detailed explanation in a comment on,
e.g., tlb_flush() and refer to it from every place like this?


>>      return ((ctx->tb->pc ^ dest) & TARGET_PAGE_MASK) == 0;
>>  }
(snip)
>> diff --git a/target-arm/translate.c b/target-arm/translate.c
>> index 940ec8d981d1..aeb3e84e8d40 100644
>> --- a/target-arm/translate.c
>> +++ b/target-arm/translate.c
>> @@ -4054,7 +4054,12 @@ static inline void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
>>      TranslationBlock *tb;
>>
>>      tb = s->tb;
>> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
>> +    /* Direct jumps with goto_tb are only safe within the pages this TB resides
>> +     * in because we don't take care of direct jumps when address mapping
>> +     * changes.
>> +     */
>> +    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
>> +        ((s->pc - 1) & TARGET_PAGE_MASK) == (dest &
>> TARGET_PAGE_MASK)) {
> Isn't this check avoided by the fact the translate loop bails on end_of_page?

A TB can straddle a page boundary in Thumb mode: it can begin with a T32
instruction that itself straddles a page boundary, so this check should be
useful :)

Kind regards,
Sergey


* Re: [Qemu-devel] [PATCH v3 04/10] tcg: Init TB's direct jumps before making it visible
  2016-04-19 12:42     ` Sergey Fedorov
@ 2016-04-19 13:07       ` Alex Bennée
  0 siblings, 0 replies; 26+ messages in thread
From: Alex Bennée @ 2016-04-19 13:07 UTC (permalink / raw)
  To: Sergey Fedorov
  Cc: Sergey Fedorov, qemu-devel, Paolo Bonzini, Peter Crosthwaite,
	Richard Henderson


Sergey Fedorov <serge.fdrv@gmail.com> writes:

> On 19/04/16 13:55, Alex Bennée wrote:
>> Sergey Fedorov <sergey.fedorov@linaro.org> writes:
>>
>>> From: Sergey Fedorov <serge.fdrv@gmail.com>
>>>
>>> Initialize TB's direct jump list data fields and reset the jumps before
>>> tb_link_page() puts it into the physical hash table and the physical
>>> page list. So TB is completely initialized before it becomes visible.
>>>
>>> Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
>>> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
>>> ---
>>>
>>> Changes in v2:
>>>  * Tweaked a comment
>>>
>>>  translate-all.c | 27 ++++++++++++++-------------
>>>  1 file changed, 14 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/translate-all.c b/translate-all.c
>>> index 7ac7916f2792..dfa7f0d64e76 100644
>>> --- a/translate-all.c
>>> +++ b/translate-all.c
>>> @@ -1133,19 +1133,6 @@ static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
>>>          tb->page_addr[1] = -1;
>>>      }
>>>
>>> -    assert(((uintptr_t)tb & 3) == 0);
>>> -    tb->jmp_list_first = (uintptr_t)tb | 2;
>>> -    tb->jmp_list_next[0] = (uintptr_t)NULL;
>>> -    tb->jmp_list_next[1] = (uintptr_t)NULL;
>>> -
>>> -    /* init original jump addresses */
>>> -    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
>>> -        tb_reset_jump(tb, 0);
>>> -    }
>>> -    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
>>> -        tb_reset_jump(tb, 1);
>>> -    }
>>> -
>>>  #ifdef DEBUG_TB_CHECK
>>>      tb_page_check();
>>>  #endif
>>> @@ -1254,6 +1241,20 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
>>>          ROUND_UP((uintptr_t)gen_code_buf + gen_code_size + search_size,
>>>                   CODE_GEN_ALIGN);
>>>
>>> +    /* init jump list */
>>> +    assert(((uintptr_t)tb & 3) == 0);
>>> +    tb->jmp_list_first = (uintptr_t)tb | 2;
>>> +    tb->jmp_list_next[0] = (uintptr_t)NULL;
>>> +    tb->jmp_list_next[1] = (uintptr_t)NULL;
>>> +
>>> +    /* init original jump addresses which have been set during tcg_gen_code() */
>>> +    if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
>>> +        tb_reset_jump(tb, 0);
>>> +    }
>>> +    if (tb->jmp_reset_offset[1] != TB_JMP_RESET_OFFSET_INVALID) {
>>> +        tb_reset_jump(tb, 1);
>>> +    }
>>> +
>> If we are really concerned about ensuring everything is set before we
>> insert the TB into the list should we not have an explicit write barrier
>> before we call to link the page?
>
> Currently, it is synchronized by 'tb_lock', so there's no need for a memory
> barrier here. This is a simple rearrangement of code to a more suitable
> place, and maybe just a preparation for relaxing the locking scheme in the
> future. Putting a barrier in this patch would be premature and add
> unnecessary overhead. Do you think it's worth mentioning that in the
> commit message?

Good point. Maybe it would be better to clean up the comment in
tb_link_page() with the assumption that memory consistency for linking
the new TB is either explicit (user mode, tb_lock) or implicit in
single-threaded softmmu emulation.

We can then update the comment when we add the MTTCG patches.

>
> Kind regards,
> Sergey
>
>>
>>>      /* check next page if needed */
>>>      virt_page2 = (pc + tb->size - 1) & TARGET_PAGE_MASK;
>>>      phys_page2 = -1;
>>


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v3 10/10] tcg: Moderate direct block chaining safety checks in user mode
  2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 10/10] tcg: Moderate direct block chaining safety checks in user mode Sergey Fedorov
@ 2016-04-19 13:10   ` Alex Bennée
  2016-04-19 13:17     ` Sergey Fedorov
  0 siblings, 1 reply; 26+ messages in thread
From: Alex Bennée @ 2016-04-19 13:10 UTC (permalink / raw)
  To: Sergey Fedorov
  Cc: qemu-devel, Sergey Fedorov, Paolo Bonzini, Peter Crosthwaite,
	Richard Henderson, Riku Voipio, Blue Swirl, Peter Maydell,
	Edgar E. Iglesias, Eduardo Habkost, Michael Walle,
	Aurelien Jarno, Leon Alrae, Anthony Green, Jia Liu,
	Alexander Graf, Mark Cave-Ayland, Bastian Koppelmann,
	Guan Xuetao, Max Filippov, qemu-arm, qemu-ppc


Sergey Fedorov <sergey.fedorov@linaro.org> writes:

> From: Sergey Fedorov <serge.fdrv@gmail.com>
>
> In user mode, there's only a static address translation, TBs are always
> invalidated properly and direct jumps are reset when mappings change.
> Thus the destination address is always valid for direct jumps and
> there's no need to restrict it to the pages the TB resides in.
>
> Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
> ---
>  target-alpha/translate.c      | 10 +++++++++-
>  target-arm/translate-a64.c    |  8 +++++++-
>  target-arm/translate.c        | 26 ++++++++++++++++++--------
>  target-cris/translate.c       | 26 ++++++++++++++++++--------
>  target-i386/translate.c       | 30 ++++++++++++++++++++----------
>  target-lm32/translate.c       | 27 ++++++++++++++++++++-------
>  target-m68k/translate.c       | 30 ++++++++++++++++++++----------
>  target-microblaze/translate.c | 23 +++++++++++++++++------
>  target-mips/translate.c       | 28 +++++++++++++++++++++-------
>  target-moxie/translate.c      | 29 +++++++++++++++++++++--------
>  target-openrisc/translate.c   | 28 +++++++++++++++++++++-------
>  target-ppc/translate.c        | 32 +++++++++++++++++++++++---------
>  target-s390x/translate.c      | 20 +++++++++++++++-----
>  target-sh4/translate.c        | 27 ++++++++++++++++++++-------
>  target-sparc/translate.c      | 32 +++++++++++++++++++++++---------
>  target-tricore/translate.c    | 28 +++++++++++++++++++++-------
>  target-unicore32/translate.c  | 24 +++++++++++++++++-------
>  target-xtensa/translate.c     | 16 ++++++++++++++--
>  18 files changed, 325 insertions(+), 119 deletions(-)
>
> diff --git a/target-alpha/translate.c b/target-alpha/translate.c
> index 5fa66309ce2e..684559e694bd 100644
> --- a/target-alpha/translate.c
> +++ b/target-alpha/translate.c
> @@ -464,11 +464,19 @@ static bool use_goto_tb(DisasContext *ctx, uint64_t dest)
>      if (in_superpage(ctx, dest)) {
>          return true;
>      }
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the page this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
>       */
>      return ((ctx->tb->pc ^ dest) & TARGET_PAGE_MASK) == 0;
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
> +     */
> +    return true;

The same comment as before with all this repeating boilerplate commentary.

> +#endif
>  }
>
>  static ExitStatus gen_bdirect(DisasContext *ctx, int ra, int32_t disp)
> diff --git a/target-arm/translate-a64.c b/target-arm/translate-a64.c
> index bf8471c7fa99..c2fccfe5a038 100644
> --- a/target-arm/translate-a64.c
> +++ b/target-arm/translate-a64.c
> @@ -274,14 +274,20 @@ static inline bool use_goto_tb(DisasContext *s, int n, uint64_t dest)
>          return false;
>      }
>
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the page this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
>       */
>      if ((s->tb->pc & TARGET_PAGE_MASK) != (dest & TARGET_PAGE_MASK)) {
>          return false;
>      }
> +#endif
>
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
> +     */
>      return true;
>  }
>
> diff --git a/target-arm/translate.c b/target-arm/translate.c
> index aeb3e84e8d40..ae6fd7f1c313 100644
> --- a/target-arm/translate.c
> +++ b/target-arm/translate.c
> @@ -4049,20 +4049,30 @@ static int disas_vfp_insn(DisasContext *s, uint32_t insn)
>      return 0;
>  }
>
> -static inline void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
> +static inline bool use_goto_tb(DisasContext *s, target_ulong dest)
>  {
> -    TranslationBlock *tb;
> -
> -    tb = s->tb;
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the pages this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
>       */
> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
> -        ((s->pc - 1) & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
> +    return (s->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
> +           ((s->pc - 1) & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
> +     */
> +    return true;
> +#endif
> +}
> +
> +static inline void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
> +{
> +    if (use_goto_tb(s, dest)) {
>          tcg_gen_goto_tb(n);
>          gen_set_pc_im(s, dest);
> -        tcg_gen_exit_tb((uintptr_t)tb + n);
> +        tcg_gen_exit_tb((uintptr_t)s->tb + n);
>      } else {
>          gen_set_pc_im(s, dest);
>          tcg_gen_exit_tb(0);
> diff --git a/target-cris/translate.c b/target-cris/translate.c
> index 7acac076167e..cfd3933486d7 100644
> --- a/target-cris/translate.c
> +++ b/target-cris/translate.c
> @@ -520,20 +520,30 @@ static void t_gen_cc_jmp(TCGv pc_true, TCGv pc_false)
>      gen_set_label(l1);
>  }
>
> -static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
> +static inline bool use_goto_tb(DisasContext *dc, target_ulong dest)
>  {
> -    TranslationBlock *tb;
> -    tb = dc->tb;
> -
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the pages this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
>       */
> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
> -        (dc->ppc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
> +    return (dc->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
> +           (dc->ppc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
> +     */
> +    return true;
> +#endif
> +}
> +
> +static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
> +{
> +    if (use_goto_tb(dc, dest)) {
>          tcg_gen_goto_tb(n);
>          tcg_gen_movi_tl(env_pc, dest);
> -                tcg_gen_exit_tb((uintptr_t)tb + n);
> +                tcg_gen_exit_tb((uintptr_t)dc->tb + n);
>      } else {
>          tcg_gen_movi_tl(env_pc, dest);
>          tcg_gen_exit_tb(0);
> diff --git a/target-i386/translate.c b/target-i386/translate.c
> index d0f440fc7697..3b7c2202a377 100644
> --- a/target-i386/translate.c
> +++ b/target-i386/translate.c
> @@ -2085,23 +2085,33 @@ static inline int insn_const_size(TCGMemOp ot)
>      }
>  }
>
> -static inline void gen_goto_tb(DisasContext *s, int tb_num, target_ulong eip)
> +static inline bool use_goto_tb(DisasContext *s, target_ulong pc)
>  {
> -    TranslationBlock *tb;
> -    target_ulong pc;
> -
> -    pc = s->cs_base + eip;
> -    tb = s->tb;
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the pages this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
> +     */
> +    return (pc & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK) ||
> +           (pc & TARGET_PAGE_MASK) == (s->pc_start & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
>       */
> -    if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) ||
> -        (pc & TARGET_PAGE_MASK) == (s->pc_start & TARGET_PAGE_MASK))  {
> +    return true;
> +#endif
> +}
> +
> +static inline void gen_goto_tb(DisasContext *s, int tb_num, target_ulong eip)
> +{
> +    target_ulong pc = s->cs_base + eip;
> +
> +    if (use_goto_tb(s, pc))  {
>          /* jump to same page: we can use a direct jump */
>          tcg_gen_goto_tb(tb_num);
>          gen_jmp_im(eip);
> -        tcg_gen_exit_tb((uintptr_t)tb + tb_num);
> +        tcg_gen_exit_tb((uintptr_t)s->tb + tb_num);
>      } else {
>          /* jump to another page: currently not optimized */
>          gen_jmp_im(eip);
> diff --git a/target-lm32/translate.c b/target-lm32/translate.c
> index 18d648ffc729..9b516da47faa 100644
> --- a/target-lm32/translate.c
> +++ b/target-lm32/translate.c
> @@ -133,20 +133,33 @@ static inline void t_gen_illegal_insn(DisasContext *dc)
>      gen_helper_ill(cpu_env);
>  }
>
> -static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
> +static inline bool use_goto_tb(DisasContext *dc, target_ulong dest)
>  {
> -    TranslationBlock *tb;
> +    if (unlikely(dc->singlestep_enabled)) {
> +        return false;
> +    }
>
> -    tb = dc->tb;
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the page this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
>       */
> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
> -            likely(!dc->singlestep_enabled)) {
> +    return (dc->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
> +     */
> +    return true;
> +#endif
> +}
> +
> +static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
> +{
> +    if (use_goto_tb(dc, dest)) {
>          tcg_gen_goto_tb(n);
>          tcg_gen_movi_tl(cpu_pc, dest);
> -        tcg_gen_exit_tb((uintptr_t)tb + n);
> +        tcg_gen_exit_tb((uintptr_t)dc->tb + n);
>      } else {
>          tcg_gen_movi_tl(cpu_pc, dest);
>          if (dc->singlestep_enabled) {
> diff --git a/target-m68k/translate.c b/target-m68k/translate.c
> index 282da60cbcca..454953608f93 100644
> --- a/target-m68k/translate.c
> +++ b/target-m68k/translate.c
> @@ -852,23 +852,33 @@ static inline void gen_addr_fault(DisasContext *s)
>          }                                                               \
>      } while (0)
>
> +static inline bool use_goto_tb(DisasContext *s, uint32_t dest)
> +{
> +#ifndef CONFIG_USER_ONLY
> +    /* Direct jumps with goto_tb are only safe within the page this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes in system mode.
> +     */
> +    return (s->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
> +           (s->insn_pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
> +     */
> +    return true;
> +#endif
> +}
> +
>  /* Generate a jump to an immediate address.  */
>  static void gen_jmp_tb(DisasContext *s, int n, uint32_t dest)
>  {
> -    TranslationBlock *tb;
> -
> -    tb = s->tb;
>      if (unlikely(s->singlestep_enabled)) {
>          gen_exception(s, dest, EXCP_DEBUG);
> -    } else if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
> -               (s->insn_pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
> -        /* Direct jumps with goto_tb are only safe within the page this TB
> -         * resides in because we don't take care of direct jumps when address
> -         * mapping changes.
> -         */
> +    } else if (use_goto_tb(s, dest)) {
>          tcg_gen_goto_tb(n);
>          tcg_gen_movi_i32(QREG_PC, dest);
> -        tcg_gen_exit_tb((uintptr_t)tb + n);
> +        tcg_gen_exit_tb((uintptr_t)s->tb + n);
>      } else {
>          gen_jmp_im(s, dest);
>          tcg_gen_exit_tb(0);
> diff --git a/target-microblaze/translate.c b/target-microblaze/translate.c
> index 6028750ba7de..1fe0ba77ef06 100644
> --- a/target-microblaze/translate.c
> +++ b/target-microblaze/translate.c
> @@ -124,18 +124,29 @@ static inline void t_gen_raise_exception(DisasContext *dc, uint32_t index)
>      dc->is_jmp = DISAS_UPDATE;
>  }
>
> -static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
> +static inline bool use_goto_tb(DisasContext *dc, target_ulong dest)
>  {
> -    TranslationBlock *tb;
> -    tb = dc->tb;
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the page this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
> +     */
> +    return (dc->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
>       */
> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
> +    return true;
> +#endif
> +}
> +
> +static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
> +{
> +    if (use_goto_tb(dc, dest)) {
>          tcg_gen_goto_tb(n);
>          tcg_gen_movi_tl(cpu_SR[SR_PC], dest);
> -        tcg_gen_exit_tb((uintptr_t)tb + n);
> +        tcg_gen_exit_tb((uintptr_t)dc->tb + n);
>      } else {
>          tcg_gen_movi_tl(cpu_SR[SR_PC], dest);
>          tcg_gen_exit_tb(0);
> diff --git a/target-mips/translate.c b/target-mips/translate.c
> index 37b834d2df59..700496e1d265 100644
> --- a/target-mips/translate.c
> +++ b/target-mips/translate.c
> @@ -4191,19 +4191,33 @@ static void gen_trap (DisasContext *ctx, uint32_t opc,
>      tcg_temp_free(t1);
>  }
>
> -static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
> +static inline bool use_goto_tb(DisasContext *ctx, target_ulong dest)
>  {
> -    TranslationBlock *tb;
> -    tb = ctx->tb;
> +    if (unlikely(ctx->singlestep_enabled)) {
> +        return false;
> +    }
> +
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the page this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
> +     */
> +    return (ctx->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
>       */
> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
> -        likely(!ctx->singlestep_enabled)) {
> +    return true;
> +#endif
> +}
> +
> +static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
> +{
> +    if (use_goto_tb(ctx, dest)) {
>          tcg_gen_goto_tb(n);
>          gen_save_pc(dest);
> -        tcg_gen_exit_tb((uintptr_t)tb + n);
> +        tcg_gen_exit_tb((uintptr_t)ctx->tb + n);
>      } else {
>          gen_save_pc(dest);
>          if (ctx->singlestep_enabled) {
> diff --git a/target-moxie/translate.c b/target-moxie/translate.c
> index fb99038399fa..d911e200d9f6 100644
> --- a/target-moxie/translate.c
> +++ b/target-moxie/translate.c
> @@ -121,21 +121,34 @@ void moxie_translate_init(void)
>      done_init = 1;
>  }
>
> -static inline void gen_goto_tb(CPUMoxieState *env, DisasContext *ctx,
> -                               int n, target_ulong dest)
> +static inline bool use_goto_tb(DisasContext *ctx, target_ulong dest)
>  {
> -    TranslationBlock *tb;
> -    tb = ctx->tb;
> +    if (unlikely(ctx->singlestep_enabled)) {
> +        return false;
> +    }
>
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the page this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
>       */
> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
> -        !ctx->singlestep_enabled) {
> +    return (ctx->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
> +     */
> +    return true;
> +#endif
> +}
> +
> +static inline void gen_goto_tb(CPUMoxieState *env, DisasContext *ctx,
> +                               int n, target_ulong dest)
> +{
> +    if (use_goto_tb(ctx, dest)) {
>          tcg_gen_goto_tb(n);
>          tcg_gen_movi_i32(cpu_pc, dest);
> -        tcg_gen_exit_tb((uintptr_t)tb + n);
> +        tcg_gen_exit_tb((uintptr_t)ctx->tb + n);
>      } else {
>          tcg_gen_movi_i32(cpu_pc, dest);
>          if (ctx->singlestep_enabled) {
> diff --git a/target-openrisc/translate.c b/target-openrisc/translate.c
> index da60fae26a75..be9deaa7a574 100644
> --- a/target-openrisc/translate.c
> +++ b/target-openrisc/translate.c
> @@ -190,19 +190,33 @@ static void check_ov64s(DisasContext *dc)
>  }
>  #endif*/
>
> -static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
> +static inline bool use_goto_tb(DisasContext *dc, target_ulong dest)
>  {
> -    TranslationBlock *tb;
> -    tb = dc->tb;
> +    if (unlikely(dc->singlestep_enabled)) {
> +        return false;
> +    }
> +
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the page this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
> +     */
> +    return (dc->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
>       */
> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
> -                                       likely(!dc->singlestep_enabled)) {
> +    return true;
> +#endif
> +}
> +
> +static void gen_goto_tb(DisasContext *dc, int n, target_ulong dest)
> +{
> +    if (use_goto_tb(dc, dest)) {
>          tcg_gen_movi_tl(cpu_pc, dest);
>          tcg_gen_goto_tb(n);
> -        tcg_gen_exit_tb((uintptr_t)tb + n);
> +        tcg_gen_exit_tb((uintptr_t)dc->tb + n);
>      } else {
>          tcg_gen_movi_tl(cpu_pc, dest);
>          if (dc->singlestep_enabled) {
> diff --git a/target-ppc/translate.c b/target-ppc/translate.c
> index 92ded8ec433b..21518decf9ed 100644
> --- a/target-ppc/translate.c
> +++ b/target-ppc/translate.c
> @@ -3824,23 +3824,37 @@ static inline void gen_update_cfar(DisasContext *ctx, target_ulong nip)
>  #endif
>  }
>
> +static inline bool use_goto_tb(DisasContext *ctx, target_ulong dest)
> +{
> +    if (unlikely(ctx->singlestep_enabled)) {
> +        return false;
> +    }
> +
> +#ifndef CONFIG_USER_ONLY
> +    /* Direct jumps with goto_tb are only safe within the page this TB resides
> +     * in because we don't take care of direct jumps when address mapping
> +     * changes in system mode.
> +     */
> +    return (ctx->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
> +     */
> +    return true;
> +#endif
> +}
> +
>  /***                                Branch                                 ***/
>  static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
>  {
> -    TranslationBlock *tb;
> -    tb = ctx->tb;
>      if (NARROW_MODE(ctx)) {
>          dest = (uint32_t) dest;
>      }
> -    /* Direct jumps with goto_tb are only safe within the page this TB resides
> -     * in because we don't take care of direct jumps when address mapping
> -     * changes.
> -     */
> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
> -        likely(!ctx->singlestep_enabled)) {
> +    if (use_goto_tb(ctx, dest)) {
>          tcg_gen_goto_tb(n);
>          tcg_gen_movi_tl(cpu_nip, dest & ~3);
> -        tcg_gen_exit_tb((uintptr_t)tb + n);
> +        tcg_gen_exit_tb((uintptr_t)ctx->tb + n);
>      } else {
>          tcg_gen_movi_tl(cpu_nip, dest & ~3);
>          if (unlikely(ctx->singlestep_enabled)) {
> diff --git a/target-s390x/translate.c b/target-s390x/translate.c
> index 9f6ae60622b2..dab1b74b2667 100644
> --- a/target-s390x/translate.c
> +++ b/target-s390x/translate.c
> @@ -608,15 +608,25 @@ static void gen_op_calc_cc(DisasContext *s)
>
>  static int use_goto_tb(DisasContext *s, uint64_t dest)
>  {
> +    if (unlikely(s->singlestep_enabled) ||
> +        (s->tb->cflags & CF_LAST_IO) ||
> +        (s->tb->flags & FLAG_MASK_PER)) {
> +        return false;
> +    }
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the pages this TB resides
>       * in because we don't take care of direct jumps when address mapping
>       * changes.
>       */
> -    return (((dest & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK) ||
> -             (dest & TARGET_PAGE_MASK) == (s->pc & TARGET_PAGE_MASK))
> -            && !s->singlestep_enabled
> -            && !(s->tb->cflags & CF_LAST_IO)
> -            && !(s->tb->flags & FLAG_MASK_PER));
> +    return (dest & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK) ||
> +           (dest & TARGET_PAGE_MASK) == (s->pc & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
> +     */
> +    return true;
> +#endif
>  }
>
>  static void account_noninline_branch(DisasContext *s, int cc_op)
> diff --git a/target-sh4/translate.c b/target-sh4/translate.c
> index 660692d06090..355c1c943918 100644
> --- a/target-sh4/translate.c
> +++ b/target-sh4/translate.c
> @@ -205,21 +205,34 @@ static void gen_write_sr(TCGv src)
>      tcg_gen_andi_i32(cpu_sr_t, cpu_sr_t, 1);
>  }
>
> -static void gen_goto_tb(DisasContext * ctx, int n, target_ulong dest)
> +static inline bool use_goto_tb(DisasContext *ctx, target_ulong dest)
>  {
> -    TranslationBlock *tb;
> -    tb = ctx->tb;
> +    if (unlikely(ctx->singlestep_enabled)) {
> +        return false;
> +    }
>
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the pages this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
> +     */
> +    return (ctx->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
>       */
> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
> -	!ctx->singlestep_enabled) {
> +    return true;
> +#endif
> +}
> +
> +static void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
> +{
> +    if (use_goto_tb(ctx, dest)) {
>  	/* Use a direct jump if in same page and singlestep not enabled */
>          tcg_gen_goto_tb(n);
>          tcg_gen_movi_i32(cpu_pc, dest);
> -        tcg_gen_exit_tb((uintptr_t)tb + n);
> +        tcg_gen_exit_tb((uintptr_t)ctx->tb + n);
>      } else {
>          tcg_gen_movi_i32(cpu_pc, dest);
>          if (ctx->singlestep_enabled)
> diff --git a/target-sparc/translate.c b/target-sparc/translate.c
> index d09a500e8bd4..8f59a101c4e5 100644
> --- a/target-sparc/translate.c
> +++ b/target-sparc/translate.c
> @@ -303,24 +303,38 @@ static inline TCGv gen_dest_gpr(DisasContext *dc, int reg)
>      }
>  }
>
> -static inline void gen_goto_tb(DisasContext *s, int tb_num,
> -                               target_ulong pc, target_ulong npc)
> +static inline bool use_goto_tb(DisasContext *s, target_ulong pc,
> +                               target_ulong npc)
>  {
> -    TranslationBlock *tb;
> +    if (unlikely(s->singlestep)) {
> +        return false;
> +    }
>
> -    tb = s->tb;
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the pages this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
> +     */
> +    return (pc & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK) &&
> +           (npc & TARGET_PAGE_MASK) == (s->tb->pc & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
>       */
> -    if ((pc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) &&
> -        (npc & TARGET_PAGE_MASK) == (tb->pc & TARGET_PAGE_MASK) &&
> -        !s->singlestep)  {
> +    return true;
> +#endif
> +}
> +
> +static inline void gen_goto_tb(DisasContext *s, int tb_num,
> +                               target_ulong pc, target_ulong npc)
> +{
> +    if (use_goto_tb(s, pc, npc))  {
>          /* jump to same page: we can use a direct jump */
>          tcg_gen_goto_tb(tb_num);
>          tcg_gen_movi_tl(cpu_pc, pc);
>          tcg_gen_movi_tl(cpu_npc, npc);
> -        tcg_gen_exit_tb((uintptr_t)tb + tb_num);
> +        tcg_gen_exit_tb((uintptr_t)s->tb + tb_num);
>      } else {
>          /* jump to another page: currently not optimized */
>          tcg_gen_movi_tl(cpu_pc, pc);
> diff --git a/target-tricore/translate.c b/target-tricore/translate.c
> index b67154a3f410..a648b5e67eae 100644
> --- a/target-tricore/translate.c
> +++ b/target-tricore/translate.c
> @@ -3236,19 +3236,33 @@ static inline void gen_save_pc(target_ulong pc)
>      tcg_gen_movi_tl(cpu_PC, pc);
>  }
>
> -static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
> +static inline bool use_goto_tb(DisasContext *ctx, target_ulong dest)
>  {
> -    TranslationBlock *tb;
> -    tb = ctx->tb;
> +    if (unlikely(ctx->singlestep_enabled)) {
> +        return false;
> +    }
> +
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the pages this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
>       */
> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) &&
> -        likely(!ctx->singlestep_enabled)) {
> +    return (ctx->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
> +     */
> +    return true;
> +#endif
> +}
> +
> +static inline void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest)
> +{
> +    if (use_goto_tb(ctx, dest)) {
>          tcg_gen_goto_tb(n);
>          gen_save_pc(dest);
> -        tcg_gen_exit_tb((uintptr_t)tb + n);
> +        tcg_gen_exit_tb((uintptr_t)ctx->tb + n);
>      } else {
>          gen_save_pc(dest);
>          if (ctx->singlestep_enabled) {
> diff --git a/target-unicore32/translate.c b/target-unicore32/translate.c
> index 04dcbb0bb466..af30f2563490 100644
> --- a/target-unicore32/translate.c
> +++ b/target-unicore32/translate.c
> @@ -1089,19 +1089,29 @@ static void disas_ucf64_insn(CPUUniCore32State *env, DisasContext *s, uint32_t i
>      }
>  }
>
> -static inline void gen_goto_tb(DisasContext *s, int n, uint32_t dest)
> +static inline bool use_goto_tb(DisasContext *s, uint32_t dest)
>  {
> -    TranslationBlock *tb;
> -
> -    tb = s->tb;
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the page this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
> +     */
> +    return (s->tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK);
> +#else
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
>       */
> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
> +    return true;
> +#endif
> +}
> +
> +static inline void gen_goto_tb(DisasContext *s, int n, uint32_t dest)
> +{
> +    if (use_goto_tb(s, dest)) {
>          tcg_gen_goto_tb(n);
>          gen_set_pc_im(dest);
> -        tcg_gen_exit_tb((uintptr_t)tb + n);
> +        tcg_gen_exit_tb((uintptr_t)s->tb + n);
>      } else {
>          gen_set_pc_im(dest);
>          tcg_gen_exit_tb(0);
> diff --git a/target-xtensa/translate.c b/target-xtensa/translate.c
> index 7eeb279e5030..6886c389c919 100644
> --- a/target-xtensa/translate.c
> +++ b/target-xtensa/translate.c
> @@ -418,13 +418,19 @@ static void gen_jump(DisasContext *dc, TCGv dest)
>  static void gen_jumpi(DisasContext *dc, uint32_t dest, int slot)
>  {
>      TCGv_i32 tmp = tcg_const_i32(dest);
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the pages this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
>       */
>      if (((dc->tb->pc ^ dest) & TARGET_PAGE_MASK) != 0) {
>          slot = -1;
>      }
> +#endif
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
> +     */
>      gen_jump_slot(dc, tmp, slot);
>      tcg_temp_free(tmp);
>  }
> @@ -450,13 +456,19 @@ static void gen_callw(DisasContext *dc, int callinc, TCGv_i32 dest)
>  static void gen_callwi(DisasContext *dc, int callinc, uint32_t dest, int slot)
>  {
>      TCGv_i32 tmp = tcg_const_i32(dest);
> +#ifndef CONFIG_USER_ONLY
>      /* Direct jumps with goto_tb are only safe within the pages this TB resides
>       * in because we don't take care of direct jumps when address mapping
> -     * changes.
> +     * changes in system mode.
>       */
>      if (((dc->tb->pc ^ dest) & TARGET_PAGE_MASK) != 0) {
>          slot = -1;
>      }
> +#endif
> +    /* In user mode, there's only a static address translation, so the
> +     * destination address is always valid. TBs are always invalidated properly
> +     * and direct jumps are reset when mapping attributes change.
> +     */
>      gen_callw_slot(dc, callinc, tmp, slot);
>      tcg_temp_free(tmp);
>  }

Otherwise:

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

--
Alex Bennée

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH v3 10/10] tcg: Moderate direct block chaining safety checks in user mode
  2016-04-19 13:10   ` Alex Bennée
@ 2016-04-19 13:17     ` Sergey Fedorov
  0 siblings, 0 replies; 26+ messages in thread
From: Sergey Fedorov @ 2016-04-19 13:17 UTC (permalink / raw)
  To: Alex Bennée, Sergey Fedorov
  Cc: qemu-devel, Paolo Bonzini, Peter Crosthwaite, Richard Henderson,
	Riku Voipio, Blue Swirl, Peter Maydell, Edgar E. Iglesias,
	Eduardo Habkost, Michael Walle, Aurelien Jarno, Leon Alrae,
	Anthony Green, Jia Liu, Alexander Graf, Mark Cave-Ayland,
	Bastian Koppelmann, Guan Xuetao, Max Filippov, qemu-arm,
	qemu-ppc

On 19/04/16 16:10, Alex Bennée wrote:
> Sergey Fedorov <sergey.fedorov@linaro.org> writes:
(snip)
>> diff --git a/target-alpha/translate.c b/target-alpha/translate.c
>> index 5fa66309ce2e..684559e694bd 100644
>> --- a/target-alpha/translate.c
>> +++ b/target-alpha/translate.c
>> @@ -464,11 +464,19 @@ static bool use_goto_tb(DisasContext *ctx, uint64_t dest)
>>      if (in_superpage(ctx, dest)) {
>>          return true;
>>      }
>> +#ifndef CONFIG_USER_ONLY
>>      /* Direct jumps with goto_tb are only safe within the page this TB resides
>>       * in because we don't take care of direct jumps when address mapping
>> -     * changes.
>> +     * changes in system mode.
>>       */
>>      return ((ctx->tb->pc ^ dest) & TARGET_PAGE_MASK) == 0;
>> +#else
>> +    /* In user mode, there's only a static address translation, so the
>> +     * destination address is always valid. TBs are always invalidated properly
>> +     * and direct jumps are reset when mapping attributes change.
>> +     */
>> +    return true;
> The same comment as before with all this repeating boilerplate commentary.

Except this time I can't think of a central place to put such a comment.
Maybe just drop the comment and get by with just the commit message?

Kind regards,
Sergey


* Re: [Qemu-devel] [PATCH v3 09/10] tcg: Clean up direct block chaining safety checks
  2016-04-19 13:02     ` Sergey Fedorov
@ 2016-04-19 14:53       ` Alex Bennée
  0 siblings, 0 replies; 26+ messages in thread
From: Alex Bennée @ 2016-04-19 14:53 UTC (permalink / raw)
  To: Sergey Fedorov
  Cc: Sergey Fedorov, qemu-devel, Paolo Bonzini, Peter Crosthwaite,
	Richard Henderson, Peter Maydell, Edgar E. Iglesias,
	Eduardo Habkost, Michael Walle, Aurelien Jarno, Leon Alrae,
	Anthony Green, Jia Liu, Alexander Graf, Blue Swirl,
	Mark Cave-Ayland, Bastian Koppelmann, Guan Xuetao, Max Filippov,
	qemu-arm, qemu-ppc


Sergey Fedorov <serge.fdrv@gmail.com> writes:

> On 19/04/16 14:37, Alex Bennée wrote:
>> Sergey Fedorov <sergey.fedorov@linaro.org> writes:
> (snip)
>>> diff --git a/cpu-exec.c b/cpu-exec.c
>>> index bbfcbfb54385..065cc9159477 100644
>>> --- a/cpu-exec.c
>>> +++ b/cpu-exec.c
>>> @@ -508,11 +508,8 @@ int cpu_exec(CPUState *cpu)
>>>                      next_tb = 0;
>>>                      tcg_ctx.tb_ctx.tb_invalidated_flag = 0;
>>>                  }
>>> -                /* see if we can patch the calling TB. When the TB
>>> -                   spans two pages, we cannot safely do a direct
>>> -                   jump. */
>>> -                if (next_tb != 0 && tb->page_addr[1] == -1
>>> -                    && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
>>> +                /* See if we can patch the calling TB. */
>>> +                if (next_tb != 0 && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
>>>                      tb_add_jump((TranslationBlock *)(next_tb & ~TB_EXIT_MASK),
>>>                                  next_tb & TB_EXIT_MASK, tb);
>>>                  }
>> We've already discussed on IRC the confusion of next_tb ;-)
>
> Yes, I'd rather add a patch to clean up 'next_tb' naming to "tcg: Misc
> clean-up patches" series. This series is to deal with direct block
> chaining only.

Yeah that makes sense.

>
>>> diff --git a/target-alpha/translate.c b/target-alpha/translate.c
>>> index 5b86992dd367..5fa66309ce2e 100644
>>> --- a/target-alpha/translate.c
>>> +++ b/target-alpha/translate.c
>>> @@ -464,7 +464,10 @@ static bool use_goto_tb(DisasContext *ctx, uint64_t dest)
>>>      if (in_superpage(ctx, dest)) {
>>>          return true;
>>>      }
>>> -    /* Check for the dest on the same page as the start of the TB.  */
>>> +    /* Direct jumps with goto_tb are only safe within the page this TB resides
>>> +     * in because we don't take care of direct jumps when address mapping
>>> +     * changes.
>>> +     */
>> I'm all for harmonising the comments but I think for the common case we
>> could refer to a central location for the page linking rules and only
>> expand the comment for subtle differences between the front ends.
>
> You mean, it'd be better to put a detailed explanation in a comment for
> e.g. tlb_flush() and refer to it in every place like this?

Yes.

>
>
>>>      return ((ctx->tb->pc ^ dest) & TARGET_PAGE_MASK) == 0;
>>>  }
> (snip)
>>> diff --git a/target-arm/translate.c b/target-arm/translate.c
>>> index 940ec8d981d1..aeb3e84e8d40 100644
>>> --- a/target-arm/translate.c
>>> +++ b/target-arm/translate.c
>>> @@ -4054,7 +4054,12 @@ static inline void gen_goto_tb(DisasContext *s, int n, target_ulong dest)
>>>      TranslationBlock *tb;
>>>
>>>      tb = s->tb;
>>> -    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK)) {
>>> +    /* Direct jumps with goto_tb are only safe within the pages this TB resides
>>> +     * in because we don't take care of direct jumps when address mapping
>>> +     * changes.
>>> +     */
>>> +    if ((tb->pc & TARGET_PAGE_MASK) == (dest & TARGET_PAGE_MASK) ||
>>> +        ((s->pc - 1) & TARGET_PAGE_MASK) == (dest &
>>> TARGET_PAGE_MASK)) {
>> Isn't this check avoided by the fact the translate loop bails on end_of_page?
>
> A TB can straddle a page boundary in Thumb mode. A TB can begin with a
> T32 instruction straddling a page boundary, thus this check should be
> useful :)

OK.

>
> Kind regards,
> Sergey


--
Alex Bennée


* Re: [Qemu-devel] [PATCH v3 05/10] tcg: Clarify thread safety check in tb_add_jump()
  2016-04-19 12:49     ` Sergey Fedorov
@ 2016-04-19 15:27       ` Alex Bennée
  0 siblings, 0 replies; 26+ messages in thread
From: Alex Bennée @ 2016-04-19 15:27 UTC (permalink / raw)
  To: Sergey Fedorov
  Cc: Sergey Fedorov, qemu-devel, Paolo Bonzini, Peter Crosthwaite,
	Richard Henderson


Sergey Fedorov <serge.fdrv@gmail.com> writes:

> On 19/04/16 14:01, Alex Bennée wrote:
>> Sergey Fedorov <sergey.fedorov@linaro.org> writes:
>>
>>> From: Sergey Fedorov <serge.fdrv@gmail.com>
>>>
>>> The check is to make sure that another thread hasn't already done the
>>> same while we were outside of tb_lock. Mention this in a comment.
>>>
>>> Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
>>> Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
>>> ---
>>>
>>> Changes in v2:
>>>  * Typo fixed in the commit title
>>>  * Complete rewrite of the commit body and the patch based on Paolo's comments
>>>
>>>  include/exec/exec-all.h | 31 +++++++++++++++++--------------
>>>  1 file changed, 17 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
>>> index b055716ed690..8e81ef5fb2c2 100644
>>> --- a/include/exec/exec-all.h
>>> +++ b/include/exec/exec-all.h
>>> @@ -391,21 +391,24 @@ static inline void tb_set_jmp_target(TranslationBlock *tb,
>>>  static inline void tb_add_jump(TranslationBlock *tb, int n,
>>>                                 TranslationBlock *tb_next)
>>>  {
>>> -    /* NOTE: this test is only needed for thread safety */
>>> -    if (!tb->jmp_list_next[n]) {
>>> -        qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
>>> -                               "Linking TBs %p [" TARGET_FMT_lx
>>> -                               "] index %d -> %p [" TARGET_FMT_lx "]\n",
>>> -                               tb->tc_ptr, tb->pc, n,
>>> -                               tb_next->tc_ptr, tb_next->pc);
>>> -        /* patch the native jump address */
>>> -        tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc_ptr);
>>> -
>>> -        /* add in TB jmp circular list */
>>> -        tb->jmp_list_next[n] = tb_next->jmp_list_first;
>>> -        assert(((uintptr_t)tb & 3) == 0);
>>> -        tb_next->jmp_list_first = (uintptr_t)tb | n;
>>> +    if (tb->jmp_list_next[n]) {
>>> +        /* Another thread has already done this while we were
>>> +         * outside of the lock; nothing to do in this case */
>>> +        return;
>>>      }
>>> +    qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
>>> +                           "Linking TBs %p [" TARGET_FMT_lx
>>> +                           "] index %d -> %p [" TARGET_FMT_lx "]\n",
>>> +                           tb->tc_ptr, tb->pc, n,
>>> +                           tb_next->tc_ptr, tb_next->pc);
>>> +
>>> +    /* patch the native jump address */
>>> +    tb_set_jmp_target(tb, n, (uintptr_t)tb_next->tc_ptr);
>>> +
>>> +    /* add in TB jmp circular list */
>>> +    tb->jmp_list_next[n] = tb_next->jmp_list_first;
>>> +    assert(((uintptr_t)tb & 3) == 0);
>> I think this assert can be dropped. The only call explicitly masks with
>> TB_EXIT_MASK (which would be a better choice than the number 3 anyway)
>> so something really strange would have had to happen in the intervening
>> few lines.
>
> What about the same assert in tb_gen_code()?

I think in tb_link_page it is reasonable, although it's fairly unlikely
that tb_alloc is going to start spitting out unaligned
TranslationBlocks.
>
> Kind regards,
> Sergey
>
>>
>> Otherwise:
>>
>> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
>>
>>> +    tb_next->jmp_list_first = (uintptr_t)tb | n;
>>>  }
>>>
>>>  /* GETRA is the true target of the return instruction that we'll execute,
>>


--
Alex Bennée


End of thread (latest message: 2016-04-19 15:27 UTC)

Thread overview: 26+ messages
2016-04-10 21:45 [Qemu-devel] [PATCH v3 00/10] tcg: Direct block chaining clean-up Sergey Fedorov
2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 01/10] tcg: Clean up direct block chaining data fields Sergey Fedorov
2016-04-19 10:02   ` Alex Bennée
2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 02/10] tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB Sergey Fedorov
2016-04-19 10:34   ` Alex Bennée
2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 03/10] tcg: Rearrange tb_link_page() to avoid forward declaration Sergey Fedorov
2016-04-18 17:20   ` Alex Bennée
2016-04-18 17:59     ` Sergey Fedorov
2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 04/10] tcg: Init TB's direct jumps before making it visible Sergey Fedorov
2016-04-19 10:55   ` Alex Bennée
2016-04-19 12:42     ` Sergey Fedorov
2016-04-19 13:07       ` Alex Bennée
2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 05/10] tcg: Clarify thread safety check in tb_add_jump() Sergey Fedorov
2016-04-19 11:01   ` Alex Bennée
2016-04-19 12:49     ` Sergey Fedorov
2016-04-19 15:27       ` Alex Bennée
2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 06/10] tcg: Rename tb_jmp_remove() to tb_remove_from_jmp_list() Sergey Fedorov
2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 07/10] tcg: Extract removing of jumps to TB from tb_phys_invalidate() Sergey Fedorov
2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 08/10] tcg: Clean up tb_jmp_unlink() Sergey Fedorov
2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 09/10] tcg: Clean up direct block chaining safety checks Sergey Fedorov
2016-04-19 11:37   ` Alex Bennée
2016-04-19 13:02     ` Sergey Fedorov
2016-04-19 14:53       ` Alex Bennée
2016-04-10 21:45 ` [Qemu-devel] [PATCH v3 10/10] tcg: Moderate direct block chaining safety checks in user mode Sergey Fedorov
2016-04-19 13:10   ` Alex Bennée
2016-04-19 13:17     ` Sergey Fedorov
