* [Qemu-devel] [PATCH 00/10 v10] tilegx: Firstly add tilegx target for linux-user
@ 2015-05-10 22:36 Chen Gang
2015-05-10 22:38 ` [Qemu-devel] [PATCH 01/10 v10] linux-user: tilegx: Firstly add architecture related features Chen Gang
` (9 more replies)
0 siblings, 10 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-10 22:36 UTC (permalink / raw)
To: Peter Maydell, Andreas Färber, rth, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
At present, it can run into glibc _init_malloc(), but cause assertion,
which should be fixed, next.
Since it already has quite a few of code, so send patches firstly, and
next, continue fixing the issue.
Chen Gang (10):
linux-user: tilegx: Firstly add architecture related features
linux-user: Support tilegx architecture in linux-user
linux-user/syscall.c: conditionalize syscalls which are not defined in
tilegx
target-tilegx: Add opcode basic implementation from Tilera Corporation
target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
target-tilegx: Add special register information from Tilera
Corporation
target-tilegx: Add cpu basic features for linux-user
target-tilegx: Add helper features for linux-user
target-tilegx: Generate tcg instructions to execute to _init_malloc in
glib
target-tilegx: Add TILE-Gx building files
configure | 2 +
default-configs/tilegx-linux-user.mak | 1 +
include/elf.h | 2 +
linux-user/elfload.c | 23 +
linux-user/main.c | 148 ++
linux-user/syscall.c | 50 +-
linux-user/syscall_defs.h | 14 +-
linux-user/tilegx/syscall.h | 35 +
linux-user/tilegx/syscall_nr.h | 278 ++++
linux-user/tilegx/target_cpu.h | 35 +
linux-user/tilegx/target_signal.h | 29 +
linux-user/tilegx/target_structs.h | 48 +
linux-user/tilegx/termbits.h | 285 ++++
target-tilegx/Makefile.objs | 1 +
target-tilegx/cpu.c | 143 ++
target-tilegx/cpu.h | 156 ++
target-tilegx/helper.c | 41 +
target-tilegx/helper.h | 3 +
target-tilegx/opcode_tilegx.h | 1405 ++++++++++++++++
target-tilegx/spr_def_64.h | 216 +++
target-tilegx/translate.c | 2889 +++++++++++++++++++++++++++++++++
21 files changed, 5798 insertions(+), 6 deletions(-)
create mode 100644 default-configs/tilegx-linux-user.mak
create mode 100644 linux-user/tilegx/syscall.h
create mode 100644 linux-user/tilegx/syscall_nr.h
create mode 100644 linux-user/tilegx/target_cpu.h
create mode 100644 linux-user/tilegx/target_signal.h
create mode 100644 linux-user/tilegx/target_structs.h
create mode 100644 linux-user/tilegx/termbits.h
create mode 100644 target-tilegx/Makefile.objs
create mode 100644 target-tilegx/cpu.c
create mode 100644 target-tilegx/cpu.h
create mode 100644 target-tilegx/helper.c
create mode 100644 target-tilegx/helper.h
create mode 100644 target-tilegx/opcode_tilegx.h
create mode 100644 target-tilegx/spr_def_64.h
create mode 100644 target-tilegx/translate.c
--
1.9.3
^ permalink raw reply [flat|nested] 32+ messages in thread
* [Qemu-devel] [PATCH 01/10 v10] linux-user: tilegx: Firstly add architecture related features
2015-05-10 22:36 [Qemu-devel] [PATCH 00/10 v10] tilegx: Firstly add tilegx target for linux-user Chen Gang
@ 2015-05-10 22:38 ` Chen Gang
2015-05-10 22:39 ` [Qemu-devel] [PATCH 02/10 v10] linux-user: Support tilegx architecture in linux-user Chen Gang
` (8 subsequent siblings)
9 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-10 22:38 UTC (permalink / raw)
To: Peter Maydell, Andreas Färber, rth, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
They are based on Linux kernel tilegx architecture for 64 bit binary,
and also based on tilegx ABI reference document, and also reference from
other targets implementations.
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
linux-user/tilegx/syscall.h | 35 +++++
linux-user/tilegx/syscall_nr.h | 278 ++++++++++++++++++++++++++++++++++++
linux-user/tilegx/target_cpu.h | 35 +++++
linux-user/tilegx/target_signal.h | 29 ++++
linux-user/tilegx/target_structs.h | 48 +++++++
linux-user/tilegx/termbits.h | 285 +++++++++++++++++++++++++++++++++++++
6 files changed, 710 insertions(+)
create mode 100644 linux-user/tilegx/syscall.h
create mode 100644 linux-user/tilegx/syscall_nr.h
create mode 100644 linux-user/tilegx/target_cpu.h
create mode 100644 linux-user/tilegx/target_signal.h
create mode 100644 linux-user/tilegx/target_structs.h
create mode 100644 linux-user/tilegx/termbits.h
diff --git a/linux-user/tilegx/syscall.h b/linux-user/tilegx/syscall.h
new file mode 100644
index 0000000..df55ec7
--- /dev/null
+++ b/linux-user/tilegx/syscall.h
@@ -0,0 +1,35 @@
+#ifndef TILEGX_SYSCALLS_H
+#define TILEGX_SYSCALLS_H
+
+#define UNAME_MACHINE "tilegx"
+#define UNAME_MINIMUM_RELEASE "3.19"
+
+typedef uint64_t tilegx_reg_t;
+
+struct target_pt_regs {
+
+ union {
+ /* Saved main processor registers; 56..63 are special. */
+ tilegx_reg_t regs[56];
+ struct {
+ tilegx_reg_t __regs[53];
+ tilegx_reg_t tp; /* aliases regs[TREG_TP] */
+ tilegx_reg_t sp; /* aliases regs[TREG_SP] */
+ tilegx_reg_t lr; /* aliases regs[TREG_LR] */
+ };
+ };
+
+ /* Saved special registers. */
+ tilegx_reg_t pc; /* stored in EX_CONTEXT_K_0 */
+ tilegx_reg_t ex1; /* stored in EX_CONTEXT_K_1 (PL and ICS bit) */
+ tilegx_reg_t faultnum; /* fault number (INT_SWINT_1 for syscall) */
+ tilegx_reg_t orig_r0; /* r0 at syscall entry, else zero */
+ tilegx_reg_t flags; /* flags (see below) */
+ tilegx_reg_t cmpexch; /* value of CMPEXCH_VALUE SPR at interrupt */
+ tilegx_reg_t pad[2];
+};
+
+#define TARGET_MLOCKALL_MCL_CURRENT 1
+#define TARGET_MLOCKALL_MCL_FUTURE 2
+
+#endif
diff --git a/linux-user/tilegx/syscall_nr.h b/linux-user/tilegx/syscall_nr.h
new file mode 100644
index 0000000..8121154
--- /dev/null
+++ b/linux-user/tilegx/syscall_nr.h
@@ -0,0 +1,278 @@
+#ifndef TILEGX_SYSCALL_NR
+#define TILEGX_SYSCALL_NR
+
+/*
+ * Copy from linux kernel asm-generic/unistd.h, which tilegx uses.
+ */
+#define TARGET_NR_io_setup 0
+#define TARGET_NR_io_destroy 1
+#define TARGET_NR_io_submit 2
+#define TARGET_NR_io_cancel 3
+#define TARGET_NR_io_getevents 4
+#define TARGET_NR_setxattr 5
+#define TARGET_NR_lsetxattr 6
+#define TARGET_NR_fsetxattr 7
+#define TARGET_NR_getxattr 8
+#define TARGET_NR_lgetxattr 9
+#define TARGET_NR_fgetxattr 10
+#define TARGET_NR_listxattr 11
+#define TARGET_NR_llistxattr 12
+#define TARGET_NR_flistxattr 13
+#define TARGET_NR_removexattr 14
+#define TARGET_NR_lremovexattr 15
+#define TARGET_NR_fremovexattr 16
+#define TARGET_NR_getcwd 17
+#define TARGET_NR_lookup_dcookie 18
+#define TARGET_NR_eventfd2 19
+#define TARGET_NR_epoll_create1 20
+#define TARGET_NR_epoll_ctl 21
+#define TARGET_NR_epoll_pwait 22
+#define TARGET_NR_dup 23
+#define TARGET_NR_dup3 24
+#define TARGET_NR_fcntl 25
+#define TARGET_NR_inotify_init1 26
+#define TARGET_NR_inotify_add_watch 27
+#define TARGET_NR_inotify_rm_watch 28
+#define TARGET_NR_ioctl 29
+#define TARGET_NR_ioprio_set 30
+#define TARGET_NR_ioprio_get 31
+#define TARGET_NR_flock 32
+#define TARGET_NR_mknodat 33
+#define TARGET_NR_mkdirat 34
+#define TARGET_NR_unlinkat 35
+#define TARGET_NR_symlinkat 36
+#define TARGET_NR_linkat 37
+#define TARGET_NR_renameat 38
+#define TARGET_NR_umount2 39
+#define TARGET_NR_mount 40
+#define TARGET_NR_pivot_root 41
+#define TARGET_NR_nfsservctl 42
+#define TARGET_NR_statfs 43
+#define TARGET_NR_fstatfs 44
+#define TARGET_NR_truncate 45
+#define TARGET_NR_ftruncate 46
+#define TARGET_NR_fallocate 47
+#define TARGET_NR_faccessat 48
+#define TARGET_NR_chdir 49
+#define TARGET_NR_fchdir 50
+#define TARGET_NR_chroot 51
+#define TARGET_NR_fchmod 52
+#define TARGET_NR_fchmodat 53
+#define TARGET_NR_fchownat 54
+#define TARGET_NR_fchown 55
+#define TARGET_NR_openat 56
+#define TARGET_NR_close 57
+#define TARGET_NR_vhangup 58
+#define TARGET_NR_pipe2 59
+#define TARGET_NR_quotactl 60
+#define TARGET_NR_getdents64 61
+#define TARGET_NR_lseek 62
+#define TARGET_NR_read 63
+#define TARGET_NR_write 64
+#define TARGET_NR_readv 65
+#define TARGET_NR_writev 66
+#define TARGET_NR_pread64 67
+#define TARGET_NR_pwrite64 68
+#define TARGET_NR_preadv 69
+#define TARGET_NR_pwritev 70
+#define TARGET_NR_sendfile 71
+#define TARGET_NR_pselect6 72
+#define TARGET_NR_ppoll 73
+#define TARGET_NR_signalfd4 74
+#define TARGET_NR_vmsplice 75
+#define TARGET_NR_splice 76
+#define TARGET_NR_tee 77
+#define TARGET_NR_readlinkat 78
+#define TARGET_NR_fstatat 79
+#define TARGET_NR_fstat 80
+#define TARGET_NR_sync 81
+#define TARGET_NR_fsync 82
+#define TARGET_NR_fdatasync 83
+#define TARGET_NR_sync_file_range 84 /* For tilegx, no range2 */
+#define TARGET_NR_timerfd_create 85
+#define TARGET_NR_timerfd_settime 86
+#define TARGET_NR_timerfd_gettime 87
+#define TARGET_NR_utimensat 88
+#define TARGET_NR_acct 89
+#define TARGET_NR_capget 90
+#define TARGET_NR_capset 91
+#define TARGET_NR_personality 92
+#define TARGET_NR_exit 93
+#define TARGET_NR_exit_group 94
+#define TARGET_NR_waitid 95
+#define TARGET_NR_set_tid_address 96
+#define TARGET_NR_unshare 97
+#define TARGET_NR_futex 98
+#define TARGET_NR_set_robust_list 99
+#define TARGET_NR_get_robust_list 100
+#define TARGET_NR_nanosleep 101
+#define TARGET_NR_getitimer 102
+#define TARGET_NR_setitimer 103
+#define TARGET_NR_kexec_load 104
+#define TARGET_NR_init_module 105
+#define TARGET_NR_delete_module 106
+#define TARGET_NR_timer_create 107
+#define TARGET_NR_timer_gettime 108
+#define TARGET_NR_timer_getoverrun 109
+#define TARGET_NR_timer_settime 110
+#define TARGET_NR_timer_delete 111
+#define TARGET_NR_clock_settime 112
+#define TARGET_NR_clock_gettime 113
+#define TARGET_NR_clock_getres 114
+#define TARGET_NR_clock_nanosleep 115
+#define TARGET_NR_syslog 116
+#define TARGET_NR_ptrace 117
+#define TARGET_NR_sched_setparam 118
+#define TARGET_NR_sched_setscheduler 119
+#define TARGET_NR_sched_getscheduler 120
+#define TARGET_NR_sched_getparam 121
+#define TARGET_NR_sched_setaffinity 122
+#define TARGET_NR_sched_getaffinity 123
+#define TARGET_NR_sched_yield 124
+#define TARGET_NR_sched_get_priority_max 125
+#define TARGET_NR_sched_get_priority_min 126
+#define TARGET_NR_sched_rr_get_interval 127
+#define TARGET_NR_restart_syscall 128
+#define TARGET_NR_kill 129
+#define TARGET_NR_tkill 130
+#define TARGET_NR_tgkill 131
+#define TARGET_NR_sigaltstack 132
+#define TARGET_NR_rt_sigsuspend 133
+#define TARGET_NR_rt_sigaction 134
+#define TARGET_NR_rt_sigprocmask 135
+#define TARGET_NR_rt_sigpending 136
+#define TARGET_NR_rt_sigtimedwait 137
+#define TARGET_NR_rt_sigqueueinfo 138
+#define TARGET_NR_rt_sigreturn 139
+#define TARGET_NR_setpriority 140
+#define TARGET_NR_getpriority 141
+#define TARGET_NR_reboot 142
+#define TARGET_NR_setregid 143
+#define TARGET_NR_setgid 144
+#define TARGET_NR_setreuid 145
+#define TARGET_NR_setuid 146
+#define TARGET_NR_setresuid 147
+#define TARGET_NR_getresuid 148
+#define TARGET_NR_setresgid 149
+#define TARGET_NR_getresgid 150
+#define TARGET_NR_setfsuid 151
+#define TARGET_NR_setfsgid 152
+#define TARGET_NR_times 153
+#define TARGET_NR_setpgid 154
+#define TARGET_NR_getpgid 155
+#define TARGET_NR_getsid 156
+#define TARGET_NR_setsid 157
+#define TARGET_NR_getgroups 158
+#define TARGET_NR_setgroups 159
+#define TARGET_NR_uname 160
+#define TARGET_NR_sethostname 161
+#define TARGET_NR_setdomainname 162
+#define TARGET_NR_getrlimit 163
+#define TARGET_NR_setrlimit 164
+#define TARGET_NR_getrusage 165
+#define TARGET_NR_umask 166
+#define TARGET_NR_prctl 167
+#define TARGET_NR_getcpu 168
+#define TARGET_NR_gettimeofday 169
+#define TARGET_NR_settimeofday 170
+#define TARGET_NR_adjtimex 171
+#define TARGET_NR_getpid 172
+#define TARGET_NR_getppid 173
+#define TARGET_NR_getuid 174
+#define TARGET_NR_geteuid 175
+#define TARGET_NR_getgid 176
+#define TARGET_NR_getegid 177
+#define TARGET_NR_gettid 178
+#define TARGET_NR_sysinfo 179
+#define TARGET_NR_mq_open 180
+#define TARGET_NR_mq_unlink 181
+#define TARGET_NR_mq_timedsend 182
+#define TARGET_NR_mq_timedreceive 183
+#define TARGET_NR_mq_notify 184
+#define TARGET_NR_mq_getsetattr 185
+#define TARGET_NR_msgget 186
+#define TARGET_NR_msgctl 187
+#define TARGET_NR_msgrcv 188
+#define TARGET_NR_msgsnd 189
+#define TARGET_NR_semget 190
+#define TARGET_NR_semctl 191
+#define TARGET_NR_semtimedop 192
+#define TARGET_NR_semop 193
+#define TARGET_NR_shmget 194
+#define TARGET_NR_shmctl 195
+#define TARGET_NR_shmat 196
+#define TARGET_NR_shmdt 197
+#define TARGET_NR_socket 198
+#define TARGET_NR_socketpair 199
+#define TARGET_NR_bind 200
+#define TARGET_NR_listen 201
+#define TARGET_NR_accept 202
+#define TARGET_NR_connect 203
+#define TARGET_NR_getsockname 204
+#define TARGET_NR_getpeername 205
+#define TARGET_NR_sendto 206
+#define TARGET_NR_recvfrom 207
+#define TARGET_NR_setsockopt 208
+#define TARGET_NR_getsockopt 209
+#define TARGET_NR_shutdown 210
+#define TARGET_NR_sendmsg 211
+#define TARGET_NR_recvmsg 212
+#define TARGET_NR_readahead 213
+#define TARGET_NR_brk 214
+#define TARGET_NR_munmap 215
+#define TARGET_NR_mremap 216
+#define TARGET_NR_add_key 217
+#define TARGET_NR_request_key 218
+#define TARGET_NR_keyctl 219
+#define TARGET_NR_clone 220
+#define TARGET_NR_execve 221
+#define TARGET_NR_mmap 222
+#define TARGET_NR_fadvise64 223
+#define TARGET_NR_swapon 224
+#define TARGET_NR_swapoff 225
+#define TARGET_NR_mprotect 226
+#define TARGET_NR_msync 227
+#define TARGET_NR_mlock 228
+#define TARGET_NR_munlock 229
+#define TARGET_NR_mlockall 230
+#define TARGET_NR_munlockall 231
+#define TARGET_NR_mincore 232
+#define TARGET_NR_madvise 233
+#define TARGET_NR_remap_file_pages 234
+#define TARGET_NR_mbind 235
+#define TARGET_NR_get_mempolicy 236
+#define TARGET_NR_set_mempolicy 237
+#define TARGET_NR_migrate_pages 238
+#define TARGET_NR_move_pages 239
+#define TARGET_NR_rt_tgsigqueueinfo 240
+#define TARGET_NR_perf_event_open 241
+#define TARGET_NR_accept4 242
+#define TARGET_NR_recvmmsg 243
+
+#define TARGET_NR_arch_specific_syscall 244
+#define TARGET_NR_cacheflush 245 /* tilegx own syscall */
+
+#define TARGET_NR_wait4 260
+#define TARGET_NR_prlimit64 261
+#define TARGET_NR_fanotify_init 262
+#define TARGET_NR_fanotify_mark 263
+#define TARGET_NR_name_to_handle_at 264
+#define TARGET_NR_open_by_handle_at 265
+#define TARGET_NR_clock_adjtime 266
+#define TARGET_NR_syncfs 267
+#define TARGET_NR_setns 268
+#define TARGET_NR_sendmmsg 269
+#define TARGET_NR_process_vm_readv 270
+#define TARGET_NR_process_vm_writev 271
+#define TARGET_NR_kcmp 272
+#define TARGET_NR_finit_module 273
+#define TARGET_NR_sched_setattr 274
+#define TARGET_NR_sched_getattr 275
+#define TARGET_NR_renameat2 276
+#define TARGET_NR_seccomp 277
+#define TARGET_NR_getrandom 278
+#define TARGET_NR_memfd_create 279
+#define TARGET_NR_bpf 280
+#define TARGET_NR_execveat 281
+
+#endif
diff --git a/linux-user/tilegx/target_cpu.h b/linux-user/tilegx/target_cpu.h
new file mode 100644
index 0000000..c96e81d
--- /dev/null
+++ b/linux-user/tilegx/target_cpu.h
@@ -0,0 +1,35 @@
+/*
+ * TILE-Gx specific CPU ABI and functions for linux-user
+ *
+ * Copyright (c) 2015 Chen Gang
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef TARGET_CPU_H
+#define TARGET_CPU_H
+
+static inline void cpu_clone_regs(CPUTLGState *env, target_ulong newsp)
+{
+ if (newsp) {
+ env->regs[TILEGX_R_SP] = newsp;
+ }
+ env->regs[TILEGX_R_RE] = 0;
+}
+
+static inline void cpu_set_tls(CPUTLGState *env, target_ulong newtls)
+{
+ env->regs[TILEGX_R_TP] = newtls;
+}
+
+#endif
diff --git a/linux-user/tilegx/target_signal.h b/linux-user/tilegx/target_signal.h
new file mode 100644
index 0000000..0746edb
--- /dev/null
+++ b/linux-user/tilegx/target_signal.h
@@ -0,0 +1,29 @@
+#ifndef TARGET_SIGNAL_H
+#define TARGET_SIGNAL_H
+
+#include "cpu.h"
+
+/* this struct defines a stack used during syscall handling */
+
+typedef struct target_sigaltstack {
+ abi_ulong ss_sp;
+ abi_int ss_flags;
+ abi_int dummy;
+ abi_ulong ss_size;
+} target_stack_t;
+
+/*
+ * sigaltstack controls
+ */
+#define TARGET_SS_ONSTACK 1
+#define TARGET_SS_DISABLE 2
+
+#define TARGET_MINSIGSTKSZ 2048
+#define TARGET_SIGSTKSZ 8192
+
+static inline abi_ulong get_sp_from_cpustate(CPUTLGState *state)
+{
+ return state->regs[TILEGX_R_SP];
+}
+
+#endif /* TARGET_SIGNAL_H */
diff --git a/linux-user/tilegx/target_structs.h b/linux-user/tilegx/target_structs.h
new file mode 100644
index 0000000..0f9838c
--- /dev/null
+++ b/linux-user/tilegx/target_structs.h
@@ -0,0 +1,48 @@
+/*
+ * TILE-Gx specific structures for linux-user
+ *
+ * Copyright (c) 2015 Chen Gang
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef TARGET_STRUCTS_H
+#define TARGET_STRUCTS_H
+
+struct target_ipc_perm {
+ abi_int __key; /* Key. */
+ abi_uint uid; /* Owner's user ID. */
+ abi_uint gid; /* Owner's group ID. */
+ abi_uint cuid; /* Creator's user ID. */
+ abi_uint cgid; /* Creator's group ID. */
+ abi_uint mode; /* Read/write permission. */
+ abi_ushort __seq; /* Sequence number. */
+ abi_ushort __pad2;
+ abi_ulong __unused1;
+ abi_ulong __unused2;
+};
+
+struct target_shmid_ds {
+ struct target_ipc_perm shm_perm; /* operation permission struct */
+ abi_long shm_segsz; /* size of segment in bytes */
+ abi_ulong shm_atime; /* time of last shmat() */
+ abi_ulong shm_dtime; /* time of last shmdt() */
+ abi_ulong shm_ctime; /* time of last change by shmctl() */
+ abi_int shm_cpid; /* pid of creator */
+ abi_int shm_lpid; /* pid of last shmop */
+ abi_ulong shm_nattch; /* number of current attaches */
+ abi_ulong __unused4;
+ abi_ulong __unused5;
+};
+
+#endif
diff --git a/linux-user/tilegx/termbits.h b/linux-user/tilegx/termbits.h
new file mode 100644
index 0000000..39bc8ac
--- /dev/null
+++ b/linux-user/tilegx/termbits.h
@@ -0,0 +1,285 @@
+#ifndef TILEGX_TERMBITS_H
+#define TILEGX_TERMBITS_H
+
+/* From asm-generic/termbits.h, which is used by tilegx */
+
+#define TARGET_NCCS 19
+struct target_termios {
+ unsigned int c_iflag; /* input mode flags */
+ unsigned int c_oflag; /* output mode flags */
+ unsigned int c_cflag; /* control mode flags */
+ unsigned int c_lflag; /* local mode flags */
+ unsigned char c_line; /* line discipline */
+ unsigned char c_cc[TARGET_NCCS]; /* control characters */
+};
+
+struct target_termios2 {
+ unsigned int c_iflag; /* input mode flags */
+ unsigned int c_oflag; /* output mode flags */
+ unsigned int c_cflag; /* control mode flags */
+ unsigned int c_lflag; /* local mode flags */
+ unsigned char c_line; /* line discipline */
+ unsigned char c_cc[TARGET_NCCS]; /* control characters */
+ unsigned int c_ispeed; /* input speed */
+ unsigned int c_ospeed; /* output speed */
+};
+
+struct target_ktermios {
+ unsigned int c_iflag; /* input mode flags */
+ unsigned int c_oflag; /* output mode flags */
+ unsigned int c_cflag; /* control mode flags */
+ unsigned int c_lflag; /* local mode flags */
+ unsigned char c_line; /* line discipline */
+ unsigned char c_cc[TARGET_NCCS]; /* control characters */
+ unsigned int c_ispeed; /* input speed */
+ unsigned int c_ospeed; /* output speed */
+};
+
+/* c_cc characters */
+#define TARGET_VINTR 0
+#define TARGET_VQUIT 1
+#define TARGET_VERASE 2
+#define TARGET_VKILL 3
+#define TARGET_VEOF 4
+#define TARGET_VTIME 5
+#define TARGET_VMIN 6
+#define TARGET_VSWTC 7
+#define TARGET_VSTART 8
+#define TARGET_VSTOP 9
+#define TARGET_VSUSP 10
+#define TARGET_VEOL 11
+#define TARGET_VREPRINT 12
+#define TARGET_VDISCARD 13
+#define TARGET_VWERASE 14
+#define TARGET_VLNEXT 15
+#define TARGET_VEOL2 16
+
+/* c_iflag bits */
+#define TARGET_IGNBRK 0000001
+#define TARGET_BRKINT 0000002
+#define TARGET_IGNPAR 0000004
+#define TARGET_PARMRK 0000010
+#define TARGET_INPCK 0000020
+#define TARGET_ISTRIP 0000040
+#define TARGET_INLCR 0000100
+#define TARGET_IGNCR 0000200
+#define TARGET_ICRNL 0000400
+#define TARGET_IUCLC 0001000
+#define TARGET_IXON 0002000
+#define TARGET_IXANY 0004000
+#define TARGET_IXOFF 0010000
+#define TARGET_IMAXBEL 0020000
+#define TARGET_IUTF8 0040000
+
+/* c_oflag bits */
+#define TARGET_OPOST 0000001
+#define TARGET_OLCUC 0000002
+#define TARGET_ONLCR 0000004
+#define TARGET_OCRNL 0000010
+#define TARGET_ONOCR 0000020
+#define TARGET_ONLRET 0000040
+#define TARGET_OFILL 0000100
+#define TARGET_OFDEL 0000200
+#define TARGET_NLDLY 0000400
+#define TARGET_NL0 0000000
+#define TARGET_NL1 0000400
+#define TARGET_CRDLY 0003000
+#define TARGET_CR0 0000000
+#define TARGET_CR1 0001000
+#define TARGET_CR2 0002000
+#define TARGET_CR3 0003000
+#define TARGET_TABDLY 0014000
+#define TARGET_TAB0 0000000
+#define TARGET_TAB1 0004000
+#define TARGET_TAB2 0010000
+#define TARGET_TAB3 0014000
+#define TARGET_XTABS 0014000
+#define TARGET_BSDLY 0020000
+#define TARGET_BS0 0000000
+#define TARGET_BS1 0020000
+#define TARGET_VTDLY 0040000
+#define TARGET_VT0 0000000
+#define TARGET_VT1 0040000
+#define TARGET_FFDLY 0100000
+#define TARGET_FF0 0000000
+#define TARGET_FF1 0100000
+
+/* c_cflag bit meaning */
+#define TARGET_CBAUD 0010017
+#define TARGET_B0 0000000 /* hang up */
+#define TARGET_B50 0000001
+#define TARGET_B75 0000002
+#define TARGET_B110 0000003
+#define TARGET_B134 0000004
+#define TARGET_B150 0000005
+#define TARGET_B200 0000006
+#define TARGET_B300 0000007
+#define TARGET_B600 0000010
+#define TARGET_B1200 0000011
+#define TARGET_B1800 0000012
+#define TARGET_B2400 0000013
+#define TARGET_B4800 0000014
+#define TARGET_B9600 0000015
+#define TARGET_B19200 0000016
+#define TARGET_B38400 0000017
+#define TARGET_EXTA TARGET_B19200
+#define TARGET_EXTB TARGET_B38400
+#define TARGET_CSIZE 0000060
+#define TARGET_CS5 0000000
+#define TARGET_CS6 0000020
+#define TARGET_CS7 0000040
+#define TARGET_CS8 0000060
+#define TARGET_CSTOPB 0000100
+#define TARGET_CREAD 0000200
+#define TARGET_PARENB 0000400
+#define TARGET_PARODD 0001000
+#define TARGET_HUPCL 0002000
+#define TARGET_CLOCAL 0004000
+#define TARGET_CBAUDEX 0010000
+#define TARGET_BOTHER 0010000
+#define TARGET_B57600 0010001
+#define TARGET_B115200 0010002
+#define TARGET_B230400 0010003
+#define TARGET_B460800 0010004
+#define TARGET_B500000 0010005
+#define TARGET_B576000 0010006
+#define TARGET_B921600 0010007
+#define TARGET_B1000000 0010010
+#define TARGET_B1152000 0010011
+#define TARGET_B1500000 0010012
+#define TARGET_B2000000 0010013
+#define TARGET_B2500000 0010014
+#define TARGET_B3000000 0010015
+#define TARGET_B3500000 0010016
+#define TARGET_B4000000 0010017
+#define TARGET_CIBAUD 002003600000 /* input baud rate */
+#define TARGET_CMSPAR 010000000000 /* mark or space (stick) parity */
+#define TARGET_CRTSCTS 020000000000 /* flow control */
+
+#define TARGET_IBSHIFT 16 /* Shift from CBAUD to CIBAUD */
+
+/* c_lflag bits */
+#define TARGET_ISIG 0000001
+#define TARGET_ICANON 0000002
+#define TARGET_XCASE 0000004
+#define TARGET_ECHO 0000010
+#define TARGET_ECHOE 0000020
+#define TARGET_ECHOK 0000040
+#define TARGET_ECHONL 0000100
+#define TARGET_NOFLSH 0000200
+#define TARGET_TOSTOP 0000400
+#define TARGET_ECHOCTL 0001000
+#define TARGET_ECHOPRT 0002000
+#define TARGET_ECHOKE 0004000
+#define TARGET_FLUSHO 0010000
+#define TARGET_PENDIN 0040000
+#define TARGET_IEXTEN 0100000
+#define TARGET_EXTPROC 0200000
+
+/* tcflow() and TCXONC use these */
+#define TARGET_TCOOFF 0
+#define TARGET_TCOON 1
+#define TARGET_TCIOFF 2
+#define TARGET_TCION 3
+
+/* tcflush() and TCFLSH use these */
+#define TARGET_TCIFLUSH 0
+#define TARGET_TCOFLUSH 1
+#define TARGET_TCIOFLUSH 2
+
+/* tcsetattr uses these */
+#define TARGET_TCSANOW 0
+#define TARGET_TCSADRAIN 1
+#define TARGET_TCSAFLUSH 2
+
+/* From asm-generic/ioctls.h, which is used by tilegx */
+
+#define TARGET_TCGETS 0x5401
+#define TARGET_TCSETS 0x5402
+#define TARGET_TCSETSW 0x5403
+#define TARGET_TCSETSF 0x5404
+#define TARGET_TCGETA 0x5405
+#define TARGET_TCSETA 0x5406
+#define TARGET_TCSETAW 0x5407
+#define TARGET_TCSETAF 0x5408
+#define TARGET_TCSBRK 0x5409
+#define TARGET_TCXONC 0x540A
+#define TARGET_TCFLSH 0x540B
+#define TARGET_TIOCEXCL 0x540C
+#define TARGET_TIOCNXCL 0x540D
+#define TARGET_TIOCSCTTY 0x540E
+#define TARGET_TIOCGPGRP 0x540F
+#define TARGET_TIOCSPGRP 0x5410
+#define TARGET_TIOCOUTQ 0x5411
+#define TARGET_TIOCSTI 0x5412
+#define TARGET_TIOCGWINSZ 0x5413
+#define TARGET_TIOCSWINSZ 0x5414
+#define TARGET_TIOCMGET 0x5415
+#define TARGET_TIOCMBIS 0x5416
+#define TARGET_TIOCMBIC 0x5417
+#define TARGET_TIOCMSET 0x5418
+#define TARGET_TIOCGSOFTCAR 0x5419
+#define TARGET_TIOCSSOFTCAR 0x541A
+#define TARGET_FIONREAD 0x541B
+#define TARGET_TIOCINQ TARGET_FIONREAD
+#define TARGET_TIOCLINUX 0x541C
+#define TARGET_TIOCCONS 0x541D
+#define TARGET_TIOCGSERIAL 0x541E
+#define TARGET_TIOCSSERIAL 0x541F
+#define TARGET_TIOCPKT 0x5420
+#define TARGET_FIONBIO 0x5421
+#define TARGET_TIOCNOTTY 0x5422
+#define TARGET_TIOCSETD 0x5423
+#define TARGET_TIOCGETD 0x5424
+#define TARGET_TCSBRKP 0x5425
+#define TARGET_TIOCSBRK 0x5427
+#define TARGET_TIOCCBRK 0x5428
+#define TARGET_TIOCGSID 0x5429
+#define TARGET_TCGETS2 TARGET_IOR('T', 0x2A, struct termios2)
+#define TARGET_TCSETS2 TARGET_IOW('T', 0x2B, struct termios2)
+#define TARGET_TCSETSW2 TARGET_IOW('T', 0x2C, struct termios2)
+#define TARGET_TCSETSF2 TARGET_IOW('T', 0x2D, struct termios2)
+#define TARGET_TIOCGRS485 0x542E
+#define TARGET_TIOCSRS485 0x542F
+#define TARGET_TIOCGPTN TARGET_IOR('T', 0x30, unsigned int)
+#define TARGET_TIOCSPTLCK TARGET_IOW('T', 0x31, int)
+#define TARGET_TIOCGDEV TARGET_IOR('T', 0x32, unsigned int)
+#define TARGET_TCGETX 0x5432
+#define TARGET_TCSETX 0x5433
+#define TARGET_TCSETXF 0x5434
+#define TARGET_TCSETXW 0x5435
+#define TARGET_TIOCSIG TARGET_IOW('T', 0x36, int)
+#define TARGET_TIOCVHANGUP 0x5437
+#define TARGET_TIOCGPKT TARGET_IOR('T', 0x38, int)
+#define TARGET_TIOCGPTLCK TARGET_IOR('T', 0x39, int)
+#define TARGET_TIOCGEXCL TARGET_IOR('T', 0x40, int)
+
+#define TARGET_FIONCLEX 0x5450
+#define TARGET_FIOCLEX 0x5451
+#define TARGET_FIOASYNC 0x5452
+#define TARGET_TIOCSERCONFIG 0x5453
+#define TARGET_TIOCSERGWILD 0x5454
+#define TARGET_TIOCSERSWILD 0x5455
+#define TARGET_TIOCGLCKTRMIOS 0x5456
+#define TARGET_TIOCSLCKTRMIOS 0x5457
+#define TARGET_TIOCSERGSTRUCT 0x5458
+#define TARGET_TIOCSERGETLSR 0x5459
+#define TARGET_TIOCSERGETMULTI 0x545A
+#define TARGET_TIOCSERSETMULTI 0x545B
+
+#define TARGET_TIOCMIWAIT 0x545C
+#define TARGET_TIOCGICOUNT 0x545D
+#define TARGET_FIOQSIZE 0x5460
+
+#define TARGET_TIOCPKT_DATA 0
+#define TARGET_TIOCPKT_FLUSHREAD 1
+#define TARGET_TIOCPKT_FLUSHWRITE 2
+#define TARGET_TIOCPKT_STOP 4
+#define TARGET_TIOCPKT_START 8
+#define TARGET_TIOCPKT_NOSTOP 16
+#define TARGET_TIOCPKT_DOSTOP 32
+#define TARGET_TIOCPKT_IOCTL 64
+
+#define TARGET_TIOCSER_TEMT 0x01
+
+#endif
--
1.9.3
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [Qemu-devel] [PATCH 02/10 v10] linux-user: Support tilegx architecture in linux-user
2015-05-10 22:36 [Qemu-devel] [PATCH 00/10 v10] tilegx: Firstly add tilegx target for linux-user Chen Gang
2015-05-10 22:38 ` [Qemu-devel] [PATCH 01/10 v10] linux-user: tilegx: Firstly add architecture related features Chen Gang
@ 2015-05-10 22:39 ` Chen Gang
2015-05-10 22:40 ` [Qemu-devel] [PATCH 03/10 v10] linux-user/syscall.c: Conditionalize syscalls which are not defined in tilegx Chen Gang
` (7 subsequent siblings)
9 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-10 22:39 UTC (permalink / raw)
To: Peter Maydell, Andreas Färber, rth, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
Add main working flow feature, system call processing feature, and elf64
tilegx binary loading feature, based on Linux kernel tilegx 64-bit
implementation.
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
include/elf.h | 2 +
linux-user/elfload.c | 23 +++++++
linux-user/main.c | 148 ++++++++++++++++++++++++++++++++++++++++++++++
linux-user/syscall_defs.h | 14 +++--
4 files changed, 182 insertions(+), 5 deletions(-)
diff --git a/include/elf.h b/include/elf.h
index 3e75f05..154144e 100644
--- a/include/elf.h
+++ b/include/elf.h
@@ -133,6 +133,8 @@ typedef int64_t Elf64_Sxword;
#define EM_AARCH64 183
+#define EM_TILEGX 191 /* TILE-Gx */
+
/* This is the info that is needed to parse the dynamic section of the file */
#define DT_NULL 0
#define DT_NEEDED 1
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 0ba9706..fbf9212 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -1189,6 +1189,29 @@ static inline void init_thread(struct target_pt_regs *regs, struct image_info *i
#endif /* TARGET_S390X */
+#ifdef TARGET_TILEGX
+
+/* 42 bits real used address, a half for user mode */
+#define ELF_START_MMAP (0x00000020000000000ULL)
+
+#define elf_check_arch(x) ((x) == EM_TILEGX)
+
+#define ELF_CLASS ELFCLASS64
+#define ELF_DATA ELFDATA2LSB
+#define ELF_ARCH EM_TILEGX
+
+static inline void init_thread(struct target_pt_regs *regs,
+ struct image_info *infop)
+{
+ regs->pc = infop->entry;
+ regs->sp = infop->start_stack;
+
+}
+
+#define ELF_EXEC_PAGESIZE 65536 /* TILE-Gx page size is 64KB */
+
+#endif /* TARGET_TILEGX */
+
#ifndef ELF_PLATFORM
#define ELF_PLATFORM (NULL)
#endif
diff --git a/linux-user/main.c b/linux-user/main.c
index 3f32db0..38fa01c 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -3416,6 +3416,143 @@ void cpu_loop(CPUS390XState *env)
#endif /* TARGET_S390X */
+#ifdef TARGET_TILEGX
+
+static uint64_t get_regval(CPUTLGState *env, uint8_t reg)
+{
+ if (likely(reg < TILEGX_R_COUNT)) {
+ return env->regs[reg];
+ } else if (reg != TILEGX_R_ZERO) {
+ fprintf(stderr, "invalid register r%d for reading.\n", reg);
+ g_assert_not_reached();
+ }
+ return 0;
+}
+
+static void set_regval(CPUTLGState *env, uint8_t reg, uint64_t val)
+{
+ if (likely(reg < TILEGX_R_COUNT)) {
+ env->regs[reg] = val;
+ } else if (reg != TILEGX_R_ZERO) {
+ fprintf(stderr, "invalid register r%d for writing.\n", reg);
+ g_assert_not_reached();
+ }
+}
+
+/*
+ * Compare the 8-byte contents of the CmpValue SPR with the 8-byte value in
+ * memory at the address held in the first source register. If the values are
+ * not equal, then no memory operation is performed. If the values are equal,
+ * the 8-byte quantity from the second source register is written into memory
+ * at the address held in the first source register. In either case, the result
+ * of the instruc- tion is the value read from memory. The compare and write to
+ * memory are atomic and thus can be used for synchronization purposes. This
+ * instruction only operates for addresses aligned to a 8-byte boundary.
+ * Unaligned memory access causes an Unaligned Data Reference interrupt.
+ *
+ * Functional Description (64-bit)
+ * uint64_t memVal = memoryReadDoubleWord (rf[SrcA]);
+ * rf[Dest] = memVal;
+ * if (memVal == SPR[CmpValueSPR])
+ * memoryWriteDoubleWord (rf[SrcA], rf[SrcB]);
+ *
+ * Functional Description (32-bit)
+ * uint64_t memVal = signExtend32 (memoryReadWord (rf[SrcA]));
+ * rf[Dest] = memVal;
+ * if (memVal == signExtend32 (SPR[CmpValueSPR]))
+ * memoryWriteWord (rf[SrcA], rf[SrcB]);
+ *
+ *
+ * For exch(4), will no cmp spr.
+ */
+static void do_exch(CPUTLGState *env, int8_t quad, int8_t cmp)
+{
+ uint8_t rdst, rsrc, rsrcb;
+ target_ulong addr, tmp;
+ target_long val, sprval;
+ target_siginfo_t info;
+
+ start_exclusive();
+
+ rdst = (env->cmpexch >> 16) & 0xff;
+ rsrc = (env->cmpexch >> 8) & 0xff;
+ rsrcb = env->cmpexch & 0xff;
+
+ addr = get_regval(env, rsrc);
+ if (quad ? get_user_s64(val, addr) : get_user_s32(val, addr)) {
+ goto do_sigsegv;
+ }
+ tmp = (target_ulong)val; /* rdst may be the same to rsrcb, so buffer it */
+
+ if (cmp) {
+ if (quad) {
+ sprval = (target_long)env->spregs[TILEGX_SPR_CMPEXCH];
+ } else {
+ sprval = (int32_t)(env->spregs[TILEGX_SPR_CMPEXCH] & 0xffffffff);
+ }
+ }
+
+ if (!cmp || val == sprval) {
+ val = get_regval(env, rsrcb);
+ if (quad ? put_user_u64(val, addr) : put_user_u32(val, addr)) {
+ goto do_sigsegv;
+ }
+ }
+
+ set_regval(env, rdst, tmp);
+
+ end_exclusive();
+ return;
+
+do_sigsegv:
+ end_exclusive();
+
+ info.si_signo = TARGET_SIGSEGV;
+ info.si_errno = 0;
+ info.si_code = TARGET_SEGV_MAPERR;
+ info._sifields._sigfault._addr = addr;
+ queue_signal(env, TARGET_SIGSEGV, &info);
+}
+
+void cpu_loop(CPUTLGState *env)
+{
+ CPUState *cs = CPU(tilegx_env_get_cpu(env));
+ int trapnr;
+
+ while (1) {
+ cpu_exec_start(cs);
+ trapnr = cpu_tilegx_exec(env);
+ cpu_exec_end(cs);
+ switch (trapnr) {
+ case TILEGX_EXCP_SYSCALL:
+ env->regs[TILEGX_R_RE] = do_syscall(env, env->regs[TILEGX_R_NR],
+ env->regs[0], env->regs[1],
+ env->regs[2], env->regs[3],
+ env->regs[4], env->regs[5],
+ env->regs[6], env->regs[7]);
+ break;
+ case TILEGX_EXCP_OPCODE_EXCH:
+ do_exch(env, 1, 0);
+ break;
+ case TILEGX_EXCP_OPCODE_EXCH4:
+ do_exch(env, 0, 0);
+ break;
+ case TILEGX_EXCP_OPCODE_CMPEXCH:
+ do_exch(env, 1, 1);
+ break;
+ case TILEGX_EXCP_OPCODE_CMPEXCH4:
+ do_exch(env, 0, 1);
+ break;
+ default:
+ fprintf(stderr, "trapnr is %d[0x%x].\n", trapnr, trapnr);
+ g_assert_not_reached();
+ }
+ process_pending_signals(env);
+ }
+}
+
+#endif
+
THREAD CPUState *thread_cpu;
void task_settid(TaskState *ts)
@@ -4389,6 +4526,17 @@ int main(int argc, char **argv, char **envp)
env->psw.mask = regs->psw.mask;
env->psw.addr = regs->psw.addr;
}
+#elif defined(TARGET_TILEGX)
+ {
+ int i;
+ for (i = 0; i < TILEGX_R_COUNT; i++) {
+ env->regs[i] = regs->regs[i];
+ }
+ for (i = 0; i < TILEGX_SPR_COUNT; i++) {
+ env->spregs[i] = 0;
+ }
+ env->pc = regs->pc;
+ }
#else
#error unsupported target CPU
#endif
diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
index edd5f3c..e6af073 100644
--- a/linux-user/syscall_defs.h
+++ b/linux-user/syscall_defs.h
@@ -64,8 +64,9 @@
#endif
#if defined(TARGET_I386) || defined(TARGET_ARM) || defined(TARGET_SH4) \
- || defined(TARGET_M68K) || defined(TARGET_CRIS) || defined(TARGET_UNICORE32) \
- || defined(TARGET_S390X) || defined(TARGET_OPENRISC)
+ || defined(TARGET_M68K) || defined(TARGET_CRIS) \
+ || defined(TARGET_UNICORE32) || defined(TARGET_S390X) \
+ || defined(TARGET_OPENRISC) || defined(TARGET_TILEGX)
#define TARGET_IOC_SIZEBITS 14
#define TARGET_IOC_DIRBITS 2
@@ -365,7 +366,8 @@ int do_sigaction(int sig, const struct target_sigaction *act,
|| defined(TARGET_PPC) || defined(TARGET_MIPS) || defined(TARGET_SH4) \
|| defined(TARGET_M68K) || defined(TARGET_ALPHA) || defined(TARGET_CRIS) \
|| defined(TARGET_MICROBLAZE) || defined(TARGET_UNICORE32) \
- || defined(TARGET_S390X) || defined(TARGET_OPENRISC)
+ || defined(TARGET_S390X) || defined(TARGET_OPENRISC) \
+ || defined(TARGET_TILEGX)
#if defined(TARGET_SPARC)
#define TARGET_SA_NOCLDSTOP 8u
@@ -1871,7 +1873,7 @@ struct target_stat {
abi_ulong target_st_ctime_nsec;
unsigned int __unused[2];
};
-#elif defined(TARGET_OPENRISC)
+#elif defined(TARGET_OPENRISC) || defined(TARGET_TILEGX)
/* These are the asm-generic versions of the stat and stat64 structures */
@@ -2264,7 +2266,9 @@ struct target_flock {
struct target_flock64 {
short l_type;
short l_whence;
-#if defined(TARGET_PPC) || defined(TARGET_X86_64) || defined(TARGET_MIPS) || defined(TARGET_SPARC) || defined(TARGET_HPPA) || defined (TARGET_MICROBLAZE)
+#if defined(TARGET_PPC) || defined(TARGET_X86_64) || defined(TARGET_MIPS) \
+ || defined(TARGET_SPARC) || defined(TARGET_HPPA) \
+ || defined(TARGET_MICROBLAZE) || defined(TARGET_TILEGX)
int __pad;
#endif
unsigned long long l_start;
--
1.9.3
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [Qemu-devel] [PATCH 03/10 v10] linux-user/syscall.c: Conditionalize syscalls which are not defined in tilegx
2015-05-10 22:36 [Qemu-devel] [PATCH 00/10 v10] tilegx: Firstly add tilegx target for linux-user Chen Gang
2015-05-10 22:38 ` [Qemu-devel] [PATCH 01/10 v10] linux-user: tilegx: Firstly add architecture related features Chen Gang
2015-05-10 22:39 ` [Qemu-devel] [PATCH 02/10 v10] linux-user: Support tilegx architecture in linux-user Chen Gang
@ 2015-05-10 22:40 ` Chen Gang
2015-05-10 22:41 ` [Qemu-devel] [PATCH 04/10 v10] target-tilegx: Add opcode basic implementation from Tilera Corporation Chen Gang
` (6 subsequent siblings)
9 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-10 22:40 UTC (permalink / raw)
To: Peter Maydell, Andreas Färber, rth, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
Some of architectures (e.g. tilegx), several syscall macros are not
supported, so switch them.
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
linux-user/syscall.c | 50 +++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 49 insertions(+), 1 deletion(-)
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index 1622ad6..a503673 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -213,7 +213,7 @@ static int gettid(void) {
return -ENOSYS;
}
#endif
-#ifdef __NR_getdents
+#if defined(TARGET_NR_getdents) && defined(__NR_getdents)
_syscall3(int, sys_getdents, uint, fd, struct linux_dirent *, dirp, uint, count);
#endif
#if !defined(__NR_getdents) || \
@@ -5581,6 +5581,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
ret = get_errno(write(arg1, p, arg3));
unlock_user(p, arg2, 0);
break;
+#ifdef TARGET_NR_open
case TARGET_NR_open:
if (!(p = lock_user_string(arg1)))
goto efault;
@@ -5589,6 +5590,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
arg3));
unlock_user(p, arg1, 0);
break;
+#endif
case TARGET_NR_openat:
if (!(p = lock_user_string(arg2)))
goto efault;
@@ -5603,9 +5605,11 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
case TARGET_NR_brk:
ret = do_brk(arg1);
break;
+#ifdef TARGET_NR_fork
case TARGET_NR_fork:
ret = get_errno(do_fork(cpu_env, SIGCHLD, 0, 0, 0, 0));
break;
+#endif
#ifdef TARGET_NR_waitpid
case TARGET_NR_waitpid:
{
@@ -5640,6 +5644,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg1, 0);
break;
#endif
+#ifdef TARGET_NR_link
case TARGET_NR_link:
{
void * p2;
@@ -5653,6 +5658,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg1, 0);
}
break;
+#endif
#if defined(TARGET_NR_linkat)
case TARGET_NR_linkat:
{
@@ -5670,12 +5676,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_unlink
case TARGET_NR_unlink:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(unlink(p));
unlock_user(p, arg1, 0);
break;
+#endif
#if defined(TARGET_NR_unlinkat)
case TARGET_NR_unlinkat:
if (!(p = lock_user_string(arg2)))
@@ -5792,12 +5800,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_mknod
case TARGET_NR_mknod:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(mknod(p, arg2, arg3));
unlock_user(p, arg1, 0);
break;
+#endif
#if defined(TARGET_NR_mknodat)
case TARGET_NR_mknodat:
if (!(p = lock_user_string(arg2)))
@@ -5806,12 +5816,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg2, 0);
break;
#endif
+#ifdef TARGET_NR_chmod
case TARGET_NR_chmod:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(chmod(p, arg2));
unlock_user(p, arg1, 0);
break;
+#endif
#ifdef TARGET_NR_break
case TARGET_NR_break:
goto unimplemented;
@@ -5946,6 +5958,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_utimes
case TARGET_NR_utimes:
{
struct timeval *tvp, tv[2];
@@ -5964,6 +5977,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg1, 0);
}
break;
+#endif
#if defined(TARGET_NR_futimesat)
case TARGET_NR_futimesat:
{
@@ -5992,12 +6006,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
case TARGET_NR_gtty:
goto unimplemented;
#endif
+#ifdef TARGET_NR_access
case TARGET_NR_access:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(access(path(p), arg2));
unlock_user(p, arg1, 0);
break;
+#endif
#if defined(TARGET_NR_faccessat) && defined(__NR_faccessat)
case TARGET_NR_faccessat:
if (!(p = lock_user_string(arg2)))
@@ -6022,6 +6038,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
case TARGET_NR_kill:
ret = get_errno(kill(arg1, target_to_host_signal(arg2)));
break;
+#ifdef TARGET_NR_rename
case TARGET_NR_rename:
{
void *p2;
@@ -6035,6 +6052,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg1, 0);
}
break;
+#endif
#if defined(TARGET_NR_renameat)
case TARGET_NR_renameat:
{
@@ -6050,12 +6068,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_mkdir
case TARGET_NR_mkdir:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(mkdir(p, arg2));
unlock_user(p, arg1, 0);
break;
+#endif
#if defined(TARGET_NR_mkdirat)
case TARGET_NR_mkdirat:
if (!(p = lock_user_string(arg2)))
@@ -6064,18 +6084,22 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg2, 0);
break;
#endif
+#ifdef TARGET_NR_rmdir
case TARGET_NR_rmdir:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(rmdir(p));
unlock_user(p, arg1, 0);
break;
+#endif
case TARGET_NR_dup:
ret = get_errno(dup(arg1));
break;
+#ifdef TARGET_NR_pipe
case TARGET_NR_pipe:
ret = do_pipe(cpu_env, arg1, 0, 0);
break;
+#endif
#ifdef TARGET_NR_pipe2
case TARGET_NR_pipe2:
ret = do_pipe(cpu_env, arg1,
@@ -6160,11 +6184,15 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
ret = get_errno(chroot(p));
unlock_user(p, arg1, 0);
break;
+#ifdef TARGET_NR_ustat
case TARGET_NR_ustat:
goto unimplemented;
+#endif
+#ifdef TARGET_NR_dup2
case TARGET_NR_dup2:
ret = get_errno(dup2(arg1, arg2));
break;
+#endif
#if defined(CONFIG_DUP3) && defined(TARGET_NR_dup3)
case TARGET_NR_dup3:
ret = get_errno(dup3(arg1, arg2, arg3));
@@ -6175,9 +6203,11 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
ret = get_errno(getppid());
break;
#endif
+#ifdef TARGET_NR_getpgrp
case TARGET_NR_getpgrp:
ret = get_errno(getpgrp());
break;
+#endif
case TARGET_NR_setsid:
ret = get_errno(setsid());
break;
@@ -6753,6 +6783,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_symlink
case TARGET_NR_symlink:
{
void *p2;
@@ -6766,6 +6797,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg1, 0);
}
break;
+#endif
#if defined(TARGET_NR_symlinkat)
case TARGET_NR_symlinkat:
{
@@ -6785,6 +6817,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
case TARGET_NR_oldlstat:
goto unimplemented;
#endif
+#ifdef TARGET_NR_readlink
case TARGET_NR_readlink:
{
void *p2;
@@ -6815,6 +6848,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
unlock_user(p, arg1, 0);
}
break;
+#endif
#if defined(TARGET_NR_readlinkat)
case TARGET_NR_readlinkat:
{
@@ -7214,22 +7248,28 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
}
break;
+#ifdef TARGET_NR_stat
case TARGET_NR_stat:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(stat(path(p), &st));
unlock_user(p, arg1, 0);
goto do_stat;
+#endif
+#ifdef TARGET_NR_lstat
case TARGET_NR_lstat:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(lstat(path(p), &st));
unlock_user(p, arg1, 0);
goto do_stat;
+#endif
case TARGET_NR_fstat:
{
ret = get_errno(fstat(arg1, &st));
+#if defined(TARGET_NR_stat) || defined(TARGET_NR_lstat)
do_stat:
+#endif
if (!is_error(ret)) {
struct target_stat *target_st;
@@ -7517,6 +7557,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_getdents
case TARGET_NR_getdents:
#ifdef __NR_getdents
#if TARGET_ABI_BITS == 32 && HOST_LONG_BITS == 64
@@ -7647,6 +7688,7 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
#endif
break;
+#endif /* TARGET_NR_getdents */
#if defined(TARGET_NR_getdents64) && defined(__NR_getdents64)
case TARGET_NR_getdents64:
{
@@ -7786,11 +7828,13 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
ret = get_errno(fdatasync(arg1));
break;
#endif
+#ifdef TARGET_NR__sysctl
case TARGET_NR__sysctl:
/* We don't implement this, but ENOTDIR is always a safe
return value. */
ret = -TARGET_ENOTDIR;
break;
+#endif
case TARGET_NR_sched_getaffinity:
{
unsigned int mask_size;
@@ -8237,12 +8281,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
ret = host_to_target_stat64(cpu_env, arg3, &st);
break;
#endif
+#ifdef TARGET_NR_lchown
case TARGET_NR_lchown:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(lchown(p, low2highuid(arg2), low2highgid(arg3)));
unlock_user(p, arg1, 0);
break;
+#endif
#ifdef TARGET_NR_getuid
case TARGET_NR_getuid:
ret = get_errno(high2lowuid(getuid()));
@@ -8365,12 +8411,14 @@ abi_long do_syscall(void *cpu_env, int num, abi_long arg1,
}
break;
#endif
+#ifdef TARGET_NR_chown
case TARGET_NR_chown:
if (!(p = lock_user_string(arg1)))
goto efault;
ret = get_errno(chown(p, low2highuid(arg2), low2highgid(arg3)));
unlock_user(p, arg1, 0);
break;
+#endif
case TARGET_NR_setuid:
ret = get_errno(setuid(low2highuid(arg1)));
break;
--
1.9.3
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [Qemu-devel] [PATCH 04/10 v10] target-tilegx: Add opcode basic implementation from Tilera Corporation
2015-05-10 22:36 [Qemu-devel] [PATCH 00/10 v10] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (2 preceding siblings ...)
2015-05-10 22:40 ` [Qemu-devel] [PATCH 03/10 v10] linux-user/syscall.c: Conditionalize syscalls which are not defined in tilegx Chen Gang
@ 2015-05-10 22:41 ` Chen Gang
2015-05-10 22:43 ` [Qemu-devel] [PATCH 06/10 v10] target-tilegx: Add special register information " Chen Gang
` (5 subsequent siblings)
9 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-10 22:41 UTC (permalink / raw)
To: Peter Maydell, Andreas Färber, rth, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
It is copied from Linux kernel "arch/tile/include/uapi/arch/
opcode_tilegx.h".
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
target-tilegx/opcode_tilegx.h | 1406 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 1406 insertions(+)
create mode 100644 target-tilegx/opcode_tilegx.h
diff --git a/target-tilegx/opcode_tilegx.h b/target-tilegx/opcode_tilegx.h
new file mode 100644
index 0000000..d76ff2d
--- /dev/null
+++ b/target-tilegx/opcode_tilegx.h
@@ -0,0 +1,1406 @@
+/* TILE-Gx opcode information.
+ *
+ * Copyright 2011 Tilera Corporation. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ * NON INFRINGEMENT. See the GNU General Public License for
+ * more details.
+ *
+ *
+ *
+ *
+ *
+ */
+
+#ifndef __ARCH_OPCODE_H__
+#define __ARCH_OPCODE_H__
+
+#ifndef __ASSEMBLER__
+
+typedef unsigned long long tilegx_bundle_bits;
+
+/* These are the bits that determine if a bundle is in the X encoding. */
+#define TILEGX_BUNDLE_MODE_MASK ((tilegx_bundle_bits)3 << 62)
+
+enum
+{
+ /* Maximum number of instructions in a bundle (2 for X, 3 for Y). */
+ TILEGX_MAX_INSTRUCTIONS_PER_BUNDLE = 3,
+
+ /* How many different pipeline encodings are there? X0, X1, Y0, Y1, Y2. */
+ TILEGX_NUM_PIPELINE_ENCODINGS = 5,
+
+ /* Log base 2 of TILEGX_BUNDLE_SIZE_IN_BYTES. */
+ TILEGX_LOG2_BUNDLE_SIZE_IN_BYTES = 3,
+
+ /* Instructions take this many bytes. */
+ TILEGX_BUNDLE_SIZE_IN_BYTES = 1 << TILEGX_LOG2_BUNDLE_SIZE_IN_BYTES,
+
+ /* Log base 2 of TILEGX_BUNDLE_ALIGNMENT_IN_BYTES. */
+ TILEGX_LOG2_BUNDLE_ALIGNMENT_IN_BYTES = 3,
+
+ /* Bundles should be aligned modulo this number of bytes. */
+ TILEGX_BUNDLE_ALIGNMENT_IN_BYTES =
+ (1 << TILEGX_LOG2_BUNDLE_ALIGNMENT_IN_BYTES),
+
+ /* Number of registers (some are magic, such as network I/O). */
+ TILEGX_NUM_REGISTERS = 64,
+};
+
+/* Make a few "tile_" variables to simplify common code between
+ architectures. */
+
+typedef tilegx_bundle_bits tile_bundle_bits;
+#define TILE_BUNDLE_SIZE_IN_BYTES TILEGX_BUNDLE_SIZE_IN_BYTES
+#define TILE_BUNDLE_ALIGNMENT_IN_BYTES TILEGX_BUNDLE_ALIGNMENT_IN_BYTES
+#define TILE_LOG2_BUNDLE_ALIGNMENT_IN_BYTES \
+ TILEGX_LOG2_BUNDLE_ALIGNMENT_IN_BYTES
+#define TILE_BPT_BUNDLE TILEGX_BPT_BUNDLE
+
+/* 64-bit pattern for a { bpt ; nop } bundle. */
+#define TILEGX_BPT_BUNDLE 0x286a44ae51485000ULL
+
+static __inline unsigned int
+get_BFEnd_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_BFOpcodeExtension_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 24)) & 0xf);
+}
+
+static __inline unsigned int
+get_BFStart_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 18)) & 0x3f);
+}
+
+static __inline unsigned int
+get_BrOff_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 31)) & 0x0000003f) |
+ (((unsigned int)(n >> 37)) & 0x0001ffc0);
+}
+
+static __inline unsigned int
+get_BrType_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 54)) & 0x1f);
+}
+
+static __inline unsigned int
+get_Dest_Imm8_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 31)) & 0x0000003f) |
+ (((unsigned int)(n >> 43)) & 0x000000c0);
+}
+
+static __inline unsigned int
+get_Dest_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 0)) & 0x3f);
+}
+
+static __inline unsigned int
+get_Dest_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 31)) & 0x3f);
+}
+
+static __inline unsigned int
+get_Dest_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 0)) & 0x3f);
+}
+
+static __inline unsigned int
+get_Dest_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 31)) & 0x3f);
+}
+
+static __inline unsigned int
+get_Imm16_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0xffff);
+}
+
+static __inline unsigned int
+get_Imm16_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0xffff);
+}
+
+static __inline unsigned int
+get_Imm8OpcodeExtension_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 20)) & 0xff);
+}
+
+static __inline unsigned int
+get_Imm8OpcodeExtension_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 51)) & 0xff);
+}
+
+static __inline unsigned int
+get_Imm8_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0xff);
+}
+
+static __inline unsigned int
+get_Imm8_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0xff);
+}
+
+static __inline unsigned int
+get_Imm8_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0xff);
+}
+
+static __inline unsigned int
+get_Imm8_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0xff);
+}
+
+static __inline unsigned int
+get_JumpOff_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 31)) & 0x7ffffff);
+}
+
+static __inline unsigned int
+get_JumpOpcodeExtension_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 58)) & 0x1);
+}
+
+static __inline unsigned int
+get_MF_Imm14_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 37)) & 0x3fff);
+}
+
+static __inline unsigned int
+get_MT_Imm14_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 31)) & 0x0000003f) |
+ (((unsigned int)(n >> 37)) & 0x00003fc0);
+}
+
+static __inline unsigned int
+get_Mode(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 62)) & 0x3);
+}
+
+static __inline unsigned int
+get_Opcode_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 28)) & 0x7);
+}
+
+static __inline unsigned int
+get_Opcode_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 59)) & 0x7);
+}
+
+static __inline unsigned int
+get_Opcode_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 27)) & 0xf);
+}
+
+static __inline unsigned int
+get_Opcode_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 58)) & 0xf);
+}
+
+static __inline unsigned int
+get_Opcode_Y2(tilegx_bundle_bits n)
+{
+ return (((n >> 26)) & 0x00000001) |
+ (((unsigned int)(n >> 56)) & 0x00000002);
+}
+
+static __inline unsigned int
+get_RRROpcodeExtension_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 18)) & 0x3ff);
+}
+
+static __inline unsigned int
+get_RRROpcodeExtension_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 49)) & 0x3ff);
+}
+
+static __inline unsigned int
+get_RRROpcodeExtension_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 18)) & 0x3);
+}
+
+static __inline unsigned int
+get_RRROpcodeExtension_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 49)) & 0x3);
+}
+
+static __inline unsigned int
+get_ShAmt_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_ShAmt_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0x3f);
+}
+
+static __inline unsigned int
+get_ShAmt_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_ShAmt_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0x3f);
+}
+
+static __inline unsigned int
+get_ShiftOpcodeExtension_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 18)) & 0x3ff);
+}
+
+static __inline unsigned int
+get_ShiftOpcodeExtension_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 49)) & 0x3ff);
+}
+
+static __inline unsigned int
+get_ShiftOpcodeExtension_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 18)) & 0x3);
+}
+
+static __inline unsigned int
+get_ShiftOpcodeExtension_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 49)) & 0x3);
+}
+
+static __inline unsigned int
+get_SrcA_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 6)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcA_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 37)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcA_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 6)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcA_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 37)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcA_Y2(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 20)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcBDest_Y2(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 51)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcB_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcB_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcB_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_SrcB_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0x3f);
+}
+
+static __inline unsigned int
+get_UnaryOpcodeExtension_X0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_UnaryOpcodeExtension_X1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0x3f);
+}
+
+static __inline unsigned int
+get_UnaryOpcodeExtension_Y0(tilegx_bundle_bits num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((n >> 12)) & 0x3f);
+}
+
+static __inline unsigned int
+get_UnaryOpcodeExtension_Y1(tilegx_bundle_bits n)
+{
+ return (((unsigned int)(n >> 43)) & 0x3f);
+}
+
+
+static __inline int
+sign_extend(int n, int num_bits)
+{
+ int shift = (int)(sizeof(int) * 8 - num_bits);
+ return (n << shift) >> shift;
+}
+
+
+
+static __inline tilegx_bundle_bits
+create_BFEnd_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_BFOpcodeExtension_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0xf) << 24);
+}
+
+static __inline tilegx_bundle_bits
+create_BFStart_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 18);
+}
+
+static __inline tilegx_bundle_bits
+create_BrOff_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x0000003f)) << 31) |
+ (((tilegx_bundle_bits)(n & 0x0001ffc0)) << 37);
+}
+
+static __inline tilegx_bundle_bits
+create_BrType_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x1f)) << 54);
+}
+
+static __inline tilegx_bundle_bits
+create_Dest_Imm8_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x0000003f)) << 31) |
+ (((tilegx_bundle_bits)(n & 0x000000c0)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_Dest_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 0);
+}
+
+static __inline tilegx_bundle_bits
+create_Dest_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 31);
+}
+
+static __inline tilegx_bundle_bits
+create_Dest_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 0);
+}
+
+static __inline tilegx_bundle_bits
+create_Dest_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 31);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm16_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0xffff) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm16_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0xffff)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm8OpcodeExtension_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0xff) << 20);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm8OpcodeExtension_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0xff)) << 51);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm8_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0xff) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm8_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0xff)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm8_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0xff) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_Imm8_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0xff)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_JumpOff_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x7ffffff)) << 31);
+}
+
+static __inline tilegx_bundle_bits
+create_JumpOpcodeExtension_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x1)) << 58);
+}
+
+static __inline tilegx_bundle_bits
+create_MF_Imm14_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3fff)) << 37);
+}
+
+static __inline tilegx_bundle_bits
+create_MT_Imm14_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x0000003f)) << 31) |
+ (((tilegx_bundle_bits)(n & 0x00003fc0)) << 37);
+}
+
+static __inline tilegx_bundle_bits
+create_Mode(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3)) << 62);
+}
+
+static __inline tilegx_bundle_bits
+create_Opcode_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x7) << 28);
+}
+
+static __inline tilegx_bundle_bits
+create_Opcode_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x7)) << 59);
+}
+
+static __inline tilegx_bundle_bits
+create_Opcode_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0xf) << 27);
+}
+
+static __inline tilegx_bundle_bits
+create_Opcode_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0xf)) << 58);
+}
+
+static __inline tilegx_bundle_bits
+create_Opcode_Y2(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x00000001) << 26) |
+ (((tilegx_bundle_bits)(n & 0x00000002)) << 56);
+}
+
+static __inline tilegx_bundle_bits
+create_RRROpcodeExtension_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3ff) << 18);
+}
+
+static __inline tilegx_bundle_bits
+create_RRROpcodeExtension_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3ff)) << 49);
+}
+
+static __inline tilegx_bundle_bits
+create_RRROpcodeExtension_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3) << 18);
+}
+
+static __inline tilegx_bundle_bits
+create_RRROpcodeExtension_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3)) << 49);
+}
+
+static __inline tilegx_bundle_bits
+create_ShAmt_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_ShAmt_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_ShAmt_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_ShAmt_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_ShiftOpcodeExtension_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3ff) << 18);
+}
+
+static __inline tilegx_bundle_bits
+create_ShiftOpcodeExtension_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3ff)) << 49);
+}
+
+static __inline tilegx_bundle_bits
+create_ShiftOpcodeExtension_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3) << 18);
+}
+
+static __inline tilegx_bundle_bits
+create_ShiftOpcodeExtension_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3)) << 49);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcA_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 6);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcA_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 37);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcA_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 6);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcA_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 37);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcA_Y2(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 20);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcBDest_Y2(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 51);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcB_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcB_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcB_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_SrcB_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_UnaryOpcodeExtension_X0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_UnaryOpcodeExtension_X1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
+}
+
+static __inline tilegx_bundle_bits
+create_UnaryOpcodeExtension_Y0(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return ((n & 0x3f) << 12);
+}
+
+static __inline tilegx_bundle_bits
+create_UnaryOpcodeExtension_Y1(int num)
+{
+ const unsigned int n = (unsigned int)num;
+ return (((tilegx_bundle_bits)(n & 0x3f)) << 43);
+}
+
+
+enum
+{
+ ADDI_IMM8_OPCODE_X0 = 1,
+ ADDI_IMM8_OPCODE_X1 = 1,
+ ADDI_OPCODE_Y0 = 0,
+ ADDI_OPCODE_Y1 = 1,
+ ADDLI_OPCODE_X0 = 1,
+ ADDLI_OPCODE_X1 = 0,
+ ADDXI_IMM8_OPCODE_X0 = 2,
+ ADDXI_IMM8_OPCODE_X1 = 2,
+ ADDXI_OPCODE_Y0 = 1,
+ ADDXI_OPCODE_Y1 = 2,
+ ADDXLI_OPCODE_X0 = 2,
+ ADDXLI_OPCODE_X1 = 1,
+ ADDXSC_RRR_0_OPCODE_X0 = 1,
+ ADDXSC_RRR_0_OPCODE_X1 = 1,
+ ADDX_RRR_0_OPCODE_X0 = 2,
+ ADDX_RRR_0_OPCODE_X1 = 2,
+ ADDX_RRR_0_OPCODE_Y0 = 0,
+ ADDX_SPECIAL_0_OPCODE_Y1 = 0,
+ ADD_RRR_0_OPCODE_X0 = 3,
+ ADD_RRR_0_OPCODE_X1 = 3,
+ ADD_RRR_0_OPCODE_Y0 = 1,
+ ADD_SPECIAL_0_OPCODE_Y1 = 1,
+ ANDI_IMM8_OPCODE_X0 = 3,
+ ANDI_IMM8_OPCODE_X1 = 3,
+ ANDI_OPCODE_Y0 = 2,
+ ANDI_OPCODE_Y1 = 3,
+ AND_RRR_0_OPCODE_X0 = 4,
+ AND_RRR_0_OPCODE_X1 = 4,
+ AND_RRR_5_OPCODE_Y0 = 0,
+ AND_RRR_5_OPCODE_Y1 = 0,
+ BEQZT_BRANCH_OPCODE_X1 = 16,
+ BEQZ_BRANCH_OPCODE_X1 = 17,
+ BFEXTS_BF_OPCODE_X0 = 4,
+ BFEXTU_BF_OPCODE_X0 = 5,
+ BFINS_BF_OPCODE_X0 = 6,
+ BF_OPCODE_X0 = 3,
+ BGEZT_BRANCH_OPCODE_X1 = 18,
+ BGEZ_BRANCH_OPCODE_X1 = 19,
+ BGTZT_BRANCH_OPCODE_X1 = 20,
+ BGTZ_BRANCH_OPCODE_X1 = 21,
+ BLBCT_BRANCH_OPCODE_X1 = 22,
+ BLBC_BRANCH_OPCODE_X1 = 23,
+ BLBST_BRANCH_OPCODE_X1 = 24,
+ BLBS_BRANCH_OPCODE_X1 = 25,
+ BLEZT_BRANCH_OPCODE_X1 = 26,
+ BLEZ_BRANCH_OPCODE_X1 = 27,
+ BLTZT_BRANCH_OPCODE_X1 = 28,
+ BLTZ_BRANCH_OPCODE_X1 = 29,
+ BNEZT_BRANCH_OPCODE_X1 = 30,
+ BNEZ_BRANCH_OPCODE_X1 = 31,
+ BRANCH_OPCODE_X1 = 2,
+ CMOVEQZ_RRR_0_OPCODE_X0 = 5,
+ CMOVEQZ_RRR_4_OPCODE_Y0 = 0,
+ CMOVNEZ_RRR_0_OPCODE_X0 = 6,
+ CMOVNEZ_RRR_4_OPCODE_Y0 = 1,
+ CMPEQI_IMM8_OPCODE_X0 = 4,
+ CMPEQI_IMM8_OPCODE_X1 = 4,
+ CMPEQI_OPCODE_Y0 = 3,
+ CMPEQI_OPCODE_Y1 = 4,
+ CMPEQ_RRR_0_OPCODE_X0 = 7,
+ CMPEQ_RRR_0_OPCODE_X1 = 5,
+ CMPEQ_RRR_3_OPCODE_Y0 = 0,
+ CMPEQ_RRR_3_OPCODE_Y1 = 2,
+ CMPEXCH4_RRR_0_OPCODE_X1 = 6,
+ CMPEXCH_RRR_0_OPCODE_X1 = 7,
+ CMPLES_RRR_0_OPCODE_X0 = 8,
+ CMPLES_RRR_0_OPCODE_X1 = 8,
+ CMPLES_RRR_2_OPCODE_Y0 = 0,
+ CMPLES_RRR_2_OPCODE_Y1 = 0,
+ CMPLEU_RRR_0_OPCODE_X0 = 9,
+ CMPLEU_RRR_0_OPCODE_X1 = 9,
+ CMPLEU_RRR_2_OPCODE_Y0 = 1,
+ CMPLEU_RRR_2_OPCODE_Y1 = 1,
+ CMPLTSI_IMM8_OPCODE_X0 = 5,
+ CMPLTSI_IMM8_OPCODE_X1 = 5,
+ CMPLTSI_OPCODE_Y0 = 4,
+ CMPLTSI_OPCODE_Y1 = 5,
+ CMPLTS_RRR_0_OPCODE_X0 = 10,
+ CMPLTS_RRR_0_OPCODE_X1 = 10,
+ CMPLTS_RRR_2_OPCODE_Y0 = 2,
+ CMPLTS_RRR_2_OPCODE_Y1 = 2,
+ CMPLTUI_IMM8_OPCODE_X0 = 6,
+ CMPLTUI_IMM8_OPCODE_X1 = 6,
+ CMPLTU_RRR_0_OPCODE_X0 = 11,
+ CMPLTU_RRR_0_OPCODE_X1 = 11,
+ CMPLTU_RRR_2_OPCODE_Y0 = 3,
+ CMPLTU_RRR_2_OPCODE_Y1 = 3,
+ CMPNE_RRR_0_OPCODE_X0 = 12,
+ CMPNE_RRR_0_OPCODE_X1 = 12,
+ CMPNE_RRR_3_OPCODE_Y0 = 1,
+ CMPNE_RRR_3_OPCODE_Y1 = 3,
+ CMULAF_RRR_0_OPCODE_X0 = 13,
+ CMULA_RRR_0_OPCODE_X0 = 14,
+ CMULFR_RRR_0_OPCODE_X0 = 15,
+ CMULF_RRR_0_OPCODE_X0 = 16,
+ CMULHR_RRR_0_OPCODE_X0 = 17,
+ CMULH_RRR_0_OPCODE_X0 = 18,
+ CMUL_RRR_0_OPCODE_X0 = 19,
+ CNTLZ_UNARY_OPCODE_X0 = 1,
+ CNTLZ_UNARY_OPCODE_Y0 = 1,
+ CNTTZ_UNARY_OPCODE_X0 = 2,
+ CNTTZ_UNARY_OPCODE_Y0 = 2,
+ CRC32_32_RRR_0_OPCODE_X0 = 20,
+ CRC32_8_RRR_0_OPCODE_X0 = 21,
+ DBLALIGN2_RRR_0_OPCODE_X0 = 22,
+ DBLALIGN2_RRR_0_OPCODE_X1 = 13,
+ DBLALIGN4_RRR_0_OPCODE_X0 = 23,
+ DBLALIGN4_RRR_0_OPCODE_X1 = 14,
+ DBLALIGN6_RRR_0_OPCODE_X0 = 24,
+ DBLALIGN6_RRR_0_OPCODE_X1 = 15,
+ DBLALIGN_RRR_0_OPCODE_X0 = 25,
+ DRAIN_UNARY_OPCODE_X1 = 1,
+ DTLBPR_UNARY_OPCODE_X1 = 2,
+ EXCH4_RRR_0_OPCODE_X1 = 16,
+ EXCH_RRR_0_OPCODE_X1 = 17,
+ FDOUBLE_ADDSUB_RRR_0_OPCODE_X0 = 26,
+ FDOUBLE_ADD_FLAGS_RRR_0_OPCODE_X0 = 27,
+ FDOUBLE_MUL_FLAGS_RRR_0_OPCODE_X0 = 28,
+ FDOUBLE_PACK1_RRR_0_OPCODE_X0 = 29,
+ FDOUBLE_PACK2_RRR_0_OPCODE_X0 = 30,
+ FDOUBLE_SUB_FLAGS_RRR_0_OPCODE_X0 = 31,
+ FDOUBLE_UNPACK_MAX_RRR_0_OPCODE_X0 = 32,
+ FDOUBLE_UNPACK_MIN_RRR_0_OPCODE_X0 = 33,
+ FETCHADD4_RRR_0_OPCODE_X1 = 18,
+ FETCHADDGEZ4_RRR_0_OPCODE_X1 = 19,
+ FETCHADDGEZ_RRR_0_OPCODE_X1 = 20,
+ FETCHADD_RRR_0_OPCODE_X1 = 21,
+ FETCHAND4_RRR_0_OPCODE_X1 = 22,
+ FETCHAND_RRR_0_OPCODE_X1 = 23,
+ FETCHOR4_RRR_0_OPCODE_X1 = 24,
+ FETCHOR_RRR_0_OPCODE_X1 = 25,
+ FINV_UNARY_OPCODE_X1 = 3,
+ FLUSHWB_UNARY_OPCODE_X1 = 4,
+ FLUSH_UNARY_OPCODE_X1 = 5,
+ FNOP_UNARY_OPCODE_X0 = 3,
+ FNOP_UNARY_OPCODE_X1 = 6,
+ FNOP_UNARY_OPCODE_Y0 = 3,
+ FNOP_UNARY_OPCODE_Y1 = 8,
+ FSINGLE_ADD1_RRR_0_OPCODE_X0 = 34,
+ FSINGLE_ADDSUB2_RRR_0_OPCODE_X0 = 35,
+ FSINGLE_MUL1_RRR_0_OPCODE_X0 = 36,
+ FSINGLE_MUL2_RRR_0_OPCODE_X0 = 37,
+ FSINGLE_PACK1_UNARY_OPCODE_X0 = 4,
+ FSINGLE_PACK1_UNARY_OPCODE_Y0 = 4,
+ FSINGLE_PACK2_RRR_0_OPCODE_X0 = 38,
+ FSINGLE_SUB1_RRR_0_OPCODE_X0 = 39,
+ ICOH_UNARY_OPCODE_X1 = 7,
+ ILL_UNARY_OPCODE_X1 = 8,
+ ILL_UNARY_OPCODE_Y1 = 9,
+ IMM8_OPCODE_X0 = 4,
+ IMM8_OPCODE_X1 = 3,
+ INV_UNARY_OPCODE_X1 = 9,
+ IRET_UNARY_OPCODE_X1 = 10,
+ JALRP_UNARY_OPCODE_X1 = 11,
+ JALRP_UNARY_OPCODE_Y1 = 10,
+ JALR_UNARY_OPCODE_X1 = 12,
+ JALR_UNARY_OPCODE_Y1 = 11,
+ JAL_JUMP_OPCODE_X1 = 0,
+ JRP_UNARY_OPCODE_X1 = 13,
+ JRP_UNARY_OPCODE_Y1 = 12,
+ JR_UNARY_OPCODE_X1 = 14,
+ JR_UNARY_OPCODE_Y1 = 13,
+ JUMP_OPCODE_X1 = 4,
+ J_JUMP_OPCODE_X1 = 1,
+ LD1S_ADD_IMM8_OPCODE_X1 = 7,
+ LD1S_OPCODE_Y2 = 0,
+ LD1S_UNARY_OPCODE_X1 = 15,
+ LD1U_ADD_IMM8_OPCODE_X1 = 8,
+ LD1U_OPCODE_Y2 = 1,
+ LD1U_UNARY_OPCODE_X1 = 16,
+ LD2S_ADD_IMM8_OPCODE_X1 = 9,
+ LD2S_OPCODE_Y2 = 2,
+ LD2S_UNARY_OPCODE_X1 = 17,
+ LD2U_ADD_IMM8_OPCODE_X1 = 10,
+ LD2U_OPCODE_Y2 = 3,
+ LD2U_UNARY_OPCODE_X1 = 18,
+ LD4S_ADD_IMM8_OPCODE_X1 = 11,
+ LD4S_OPCODE_Y2 = 1,
+ LD4S_UNARY_OPCODE_X1 = 19,
+ LD4U_ADD_IMM8_OPCODE_X1 = 12,
+ LD4U_OPCODE_Y2 = 2,
+ LD4U_UNARY_OPCODE_X1 = 20,
+ LDNA_UNARY_OPCODE_X1 = 21,
+ LDNT1S_ADD_IMM8_OPCODE_X1 = 13,
+ LDNT1S_UNARY_OPCODE_X1 = 22,
+ LDNT1U_ADD_IMM8_OPCODE_X1 = 14,
+ LDNT1U_UNARY_OPCODE_X1 = 23,
+ LDNT2S_ADD_IMM8_OPCODE_X1 = 15,
+ LDNT2S_UNARY_OPCODE_X1 = 24,
+ LDNT2U_ADD_IMM8_OPCODE_X1 = 16,
+ LDNT2U_UNARY_OPCODE_X1 = 25,
+ LDNT4S_ADD_IMM8_OPCODE_X1 = 17,
+ LDNT4S_UNARY_OPCODE_X1 = 26,
+ LDNT4U_ADD_IMM8_OPCODE_X1 = 18,
+ LDNT4U_UNARY_OPCODE_X1 = 27,
+ LDNT_ADD_IMM8_OPCODE_X1 = 19,
+ LDNT_UNARY_OPCODE_X1 = 28,
+ LD_ADD_IMM8_OPCODE_X1 = 20,
+ LD_OPCODE_Y2 = 3,
+ LD_UNARY_OPCODE_X1 = 29,
+ LNK_UNARY_OPCODE_X1 = 30,
+ LNK_UNARY_OPCODE_Y1 = 14,
+ LWNA_ADD_IMM8_OPCODE_X1 = 21,
+ MFSPR_IMM8_OPCODE_X1 = 22,
+ MF_UNARY_OPCODE_X1 = 31,
+ MM_BF_OPCODE_X0 = 7,
+ MNZ_RRR_0_OPCODE_X0 = 40,
+ MNZ_RRR_0_OPCODE_X1 = 26,
+ MNZ_RRR_4_OPCODE_Y0 = 2,
+ MNZ_RRR_4_OPCODE_Y1 = 2,
+ MODE_OPCODE_YA2 = 1,
+ MODE_OPCODE_YB2 = 2,
+ MODE_OPCODE_YC2 = 3,
+ MTSPR_IMM8_OPCODE_X1 = 23,
+ MULAX_RRR_0_OPCODE_X0 = 41,
+ MULAX_RRR_3_OPCODE_Y0 = 2,
+ MULA_HS_HS_RRR_0_OPCODE_X0 = 42,
+ MULA_HS_HS_RRR_9_OPCODE_Y0 = 0,
+ MULA_HS_HU_RRR_0_OPCODE_X0 = 43,
+ MULA_HS_LS_RRR_0_OPCODE_X0 = 44,
+ MULA_HS_LU_RRR_0_OPCODE_X0 = 45,
+ MULA_HU_HU_RRR_0_OPCODE_X0 = 46,
+ MULA_HU_HU_RRR_9_OPCODE_Y0 = 1,
+ MULA_HU_LS_RRR_0_OPCODE_X0 = 47,
+ MULA_HU_LU_RRR_0_OPCODE_X0 = 48,
+ MULA_LS_LS_RRR_0_OPCODE_X0 = 49,
+ MULA_LS_LS_RRR_9_OPCODE_Y0 = 2,
+ MULA_LS_LU_RRR_0_OPCODE_X0 = 50,
+ MULA_LU_LU_RRR_0_OPCODE_X0 = 51,
+ MULA_LU_LU_RRR_9_OPCODE_Y0 = 3,
+ MULX_RRR_0_OPCODE_X0 = 52,
+ MULX_RRR_3_OPCODE_Y0 = 3,
+ MUL_HS_HS_RRR_0_OPCODE_X0 = 53,
+ MUL_HS_HS_RRR_8_OPCODE_Y0 = 0,
+ MUL_HS_HU_RRR_0_OPCODE_X0 = 54,
+ MUL_HS_LS_RRR_0_OPCODE_X0 = 55,
+ MUL_HS_LU_RRR_0_OPCODE_X0 = 56,
+ MUL_HU_HU_RRR_0_OPCODE_X0 = 57,
+ MUL_HU_HU_RRR_8_OPCODE_Y0 = 1,
+ MUL_HU_LS_RRR_0_OPCODE_X0 = 58,
+ MUL_HU_LU_RRR_0_OPCODE_X0 = 59,
+ MUL_LS_LS_RRR_0_OPCODE_X0 = 60,
+ MUL_LS_LS_RRR_8_OPCODE_Y0 = 2,
+ MUL_LS_LU_RRR_0_OPCODE_X0 = 61,
+ MUL_LU_LU_RRR_0_OPCODE_X0 = 62,
+ MUL_LU_LU_RRR_8_OPCODE_Y0 = 3,
+ MZ_RRR_0_OPCODE_X0 = 63,
+ MZ_RRR_0_OPCODE_X1 = 27,
+ MZ_RRR_4_OPCODE_Y0 = 3,
+ MZ_RRR_4_OPCODE_Y1 = 3,
+ NAP_UNARY_OPCODE_X1 = 32,
+ NOP_UNARY_OPCODE_X0 = 5,
+ NOP_UNARY_OPCODE_X1 = 33,
+ NOP_UNARY_OPCODE_Y0 = 5,
+ NOP_UNARY_OPCODE_Y1 = 15,
+ NOR_RRR_0_OPCODE_X0 = 64,
+ NOR_RRR_0_OPCODE_X1 = 28,
+ NOR_RRR_5_OPCODE_Y0 = 1,
+ NOR_RRR_5_OPCODE_Y1 = 1,
+ ORI_IMM8_OPCODE_X0 = 7,
+ ORI_IMM8_OPCODE_X1 = 24,
+ OR_RRR_0_OPCODE_X0 = 65,
+ OR_RRR_0_OPCODE_X1 = 29,
+ OR_RRR_5_OPCODE_Y0 = 2,
+ OR_RRR_5_OPCODE_Y1 = 2,
+ PCNT_UNARY_OPCODE_X0 = 6,
+ PCNT_UNARY_OPCODE_Y0 = 6,
+ REVBITS_UNARY_OPCODE_X0 = 7,
+ REVBITS_UNARY_OPCODE_Y0 = 7,
+ REVBYTES_UNARY_OPCODE_X0 = 8,
+ REVBYTES_UNARY_OPCODE_Y0 = 8,
+ ROTLI_SHIFT_OPCODE_X0 = 1,
+ ROTLI_SHIFT_OPCODE_X1 = 1,
+ ROTLI_SHIFT_OPCODE_Y0 = 0,
+ ROTLI_SHIFT_OPCODE_Y1 = 0,
+ ROTL_RRR_0_OPCODE_X0 = 66,
+ ROTL_RRR_0_OPCODE_X1 = 30,
+ ROTL_RRR_6_OPCODE_Y0 = 0,
+ ROTL_RRR_6_OPCODE_Y1 = 0,
+ RRR_0_OPCODE_X0 = 5,
+ RRR_0_OPCODE_X1 = 5,
+ RRR_0_OPCODE_Y0 = 5,
+ RRR_0_OPCODE_Y1 = 6,
+ RRR_1_OPCODE_Y0 = 6,
+ RRR_1_OPCODE_Y1 = 7,
+ RRR_2_OPCODE_Y0 = 7,
+ RRR_2_OPCODE_Y1 = 8,
+ RRR_3_OPCODE_Y0 = 8,
+ RRR_3_OPCODE_Y1 = 9,
+ RRR_4_OPCODE_Y0 = 9,
+ RRR_4_OPCODE_Y1 = 10,
+ RRR_5_OPCODE_Y0 = 10,
+ RRR_5_OPCODE_Y1 = 11,
+ RRR_6_OPCODE_Y0 = 11,
+ RRR_6_OPCODE_Y1 = 12,
+ RRR_7_OPCODE_Y0 = 12,
+ RRR_7_OPCODE_Y1 = 13,
+ RRR_8_OPCODE_Y0 = 13,
+ RRR_9_OPCODE_Y0 = 14,
+ SHIFT_OPCODE_X0 = 6,
+ SHIFT_OPCODE_X1 = 6,
+ SHIFT_OPCODE_Y0 = 15,
+ SHIFT_OPCODE_Y1 = 14,
+ SHL16INSLI_OPCODE_X0 = 7,
+ SHL16INSLI_OPCODE_X1 = 7,
+ SHL1ADDX_RRR_0_OPCODE_X0 = 67,
+ SHL1ADDX_RRR_0_OPCODE_X1 = 31,
+ SHL1ADDX_RRR_7_OPCODE_Y0 = 1,
+ SHL1ADDX_RRR_7_OPCODE_Y1 = 1,
+ SHL1ADD_RRR_0_OPCODE_X0 = 68,
+ SHL1ADD_RRR_0_OPCODE_X1 = 32,
+ SHL1ADD_RRR_1_OPCODE_Y0 = 0,
+ SHL1ADD_RRR_1_OPCODE_Y1 = 0,
+ SHL2ADDX_RRR_0_OPCODE_X0 = 69,
+ SHL2ADDX_RRR_0_OPCODE_X1 = 33,
+ SHL2ADDX_RRR_7_OPCODE_Y0 = 2,
+ SHL2ADDX_RRR_7_OPCODE_Y1 = 2,
+ SHL2ADD_RRR_0_OPCODE_X0 = 70,
+ SHL2ADD_RRR_0_OPCODE_X1 = 34,
+ SHL2ADD_RRR_1_OPCODE_Y0 = 1,
+ SHL2ADD_RRR_1_OPCODE_Y1 = 1,
+ SHL3ADDX_RRR_0_OPCODE_X0 = 71,
+ SHL3ADDX_RRR_0_OPCODE_X1 = 35,
+ SHL3ADDX_RRR_7_OPCODE_Y0 = 3,
+ SHL3ADDX_RRR_7_OPCODE_Y1 = 3,
+ SHL3ADD_RRR_0_OPCODE_X0 = 72,
+ SHL3ADD_RRR_0_OPCODE_X1 = 36,
+ SHL3ADD_RRR_1_OPCODE_Y0 = 2,
+ SHL3ADD_RRR_1_OPCODE_Y1 = 2,
+ SHLI_SHIFT_OPCODE_X0 = 2,
+ SHLI_SHIFT_OPCODE_X1 = 2,
+ SHLI_SHIFT_OPCODE_Y0 = 1,
+ SHLI_SHIFT_OPCODE_Y1 = 1,
+ SHLXI_SHIFT_OPCODE_X0 = 3,
+ SHLXI_SHIFT_OPCODE_X1 = 3,
+ SHLX_RRR_0_OPCODE_X0 = 73,
+ SHLX_RRR_0_OPCODE_X1 = 37,
+ SHL_RRR_0_OPCODE_X0 = 74,
+ SHL_RRR_0_OPCODE_X1 = 38,
+ SHL_RRR_6_OPCODE_Y0 = 1,
+ SHL_RRR_6_OPCODE_Y1 = 1,
+ SHRSI_SHIFT_OPCODE_X0 = 4,
+ SHRSI_SHIFT_OPCODE_X1 = 4,
+ SHRSI_SHIFT_OPCODE_Y0 = 2,
+ SHRSI_SHIFT_OPCODE_Y1 = 2,
+ SHRS_RRR_0_OPCODE_X0 = 75,
+ SHRS_RRR_0_OPCODE_X1 = 39,
+ SHRS_RRR_6_OPCODE_Y0 = 2,
+ SHRS_RRR_6_OPCODE_Y1 = 2,
+ SHRUI_SHIFT_OPCODE_X0 = 5,
+ SHRUI_SHIFT_OPCODE_X1 = 5,
+ SHRUI_SHIFT_OPCODE_Y0 = 3,
+ SHRUI_SHIFT_OPCODE_Y1 = 3,
+ SHRUXI_SHIFT_OPCODE_X0 = 6,
+ SHRUXI_SHIFT_OPCODE_X1 = 6,
+ SHRUX_RRR_0_OPCODE_X0 = 76,
+ SHRUX_RRR_0_OPCODE_X1 = 40,
+ SHRU_RRR_0_OPCODE_X0 = 77,
+ SHRU_RRR_0_OPCODE_X1 = 41,
+ SHRU_RRR_6_OPCODE_Y0 = 3,
+ SHRU_RRR_6_OPCODE_Y1 = 3,
+ SHUFFLEBYTES_RRR_0_OPCODE_X0 = 78,
+ ST1_ADD_IMM8_OPCODE_X1 = 25,
+ ST1_OPCODE_Y2 = 0,
+ ST1_RRR_0_OPCODE_X1 = 42,
+ ST2_ADD_IMM8_OPCODE_X1 = 26,
+ ST2_OPCODE_Y2 = 1,
+ ST2_RRR_0_OPCODE_X1 = 43,
+ ST4_ADD_IMM8_OPCODE_X1 = 27,
+ ST4_OPCODE_Y2 = 2,
+ ST4_RRR_0_OPCODE_X1 = 44,
+ STNT1_ADD_IMM8_OPCODE_X1 = 28,
+ STNT1_RRR_0_OPCODE_X1 = 45,
+ STNT2_ADD_IMM8_OPCODE_X1 = 29,
+ STNT2_RRR_0_OPCODE_X1 = 46,
+ STNT4_ADD_IMM8_OPCODE_X1 = 30,
+ STNT4_RRR_0_OPCODE_X1 = 47,
+ STNT_ADD_IMM8_OPCODE_X1 = 31,
+ STNT_RRR_0_OPCODE_X1 = 48,
+ ST_ADD_IMM8_OPCODE_X1 = 32,
+ ST_OPCODE_Y2 = 3,
+ ST_RRR_0_OPCODE_X1 = 49,
+ SUBXSC_RRR_0_OPCODE_X0 = 79,
+ SUBXSC_RRR_0_OPCODE_X1 = 50,
+ SUBX_RRR_0_OPCODE_X0 = 80,
+ SUBX_RRR_0_OPCODE_X1 = 51,
+ SUBX_RRR_0_OPCODE_Y0 = 2,
+ SUBX_RRR_0_OPCODE_Y1 = 2,
+ SUB_RRR_0_OPCODE_X0 = 81,
+ SUB_RRR_0_OPCODE_X1 = 52,
+ SUB_RRR_0_OPCODE_Y0 = 3,
+ SUB_RRR_0_OPCODE_Y1 = 3,
+ SWINT0_UNARY_OPCODE_X1 = 34,
+ SWINT1_UNARY_OPCODE_X1 = 35,
+ SWINT2_UNARY_OPCODE_X1 = 36,
+ SWINT3_UNARY_OPCODE_X1 = 37,
+ TBLIDXB0_UNARY_OPCODE_X0 = 9,
+ TBLIDXB0_UNARY_OPCODE_Y0 = 9,
+ TBLIDXB1_UNARY_OPCODE_X0 = 10,
+ TBLIDXB1_UNARY_OPCODE_Y0 = 10,
+ TBLIDXB2_UNARY_OPCODE_X0 = 11,
+ TBLIDXB2_UNARY_OPCODE_Y0 = 11,
+ TBLIDXB3_UNARY_OPCODE_X0 = 12,
+ TBLIDXB3_UNARY_OPCODE_Y0 = 12,
+ UNARY_RRR_0_OPCODE_X0 = 82,
+ UNARY_RRR_0_OPCODE_X1 = 53,
+ UNARY_RRR_1_OPCODE_Y0 = 3,
+ UNARY_RRR_1_OPCODE_Y1 = 3,
+ V1ADDI_IMM8_OPCODE_X0 = 8,
+ V1ADDI_IMM8_OPCODE_X1 = 33,
+ V1ADDUC_RRR_0_OPCODE_X0 = 83,
+ V1ADDUC_RRR_0_OPCODE_X1 = 54,
+ V1ADD_RRR_0_OPCODE_X0 = 84,
+ V1ADD_RRR_0_OPCODE_X1 = 55,
+ V1ADIFFU_RRR_0_OPCODE_X0 = 85,
+ V1AVGU_RRR_0_OPCODE_X0 = 86,
+ V1CMPEQI_IMM8_OPCODE_X0 = 9,
+ V1CMPEQI_IMM8_OPCODE_X1 = 34,
+ V1CMPEQ_RRR_0_OPCODE_X0 = 87,
+ V1CMPEQ_RRR_0_OPCODE_X1 = 56,
+ V1CMPLES_RRR_0_OPCODE_X0 = 88,
+ V1CMPLES_RRR_0_OPCODE_X1 = 57,
+ V1CMPLEU_RRR_0_OPCODE_X0 = 89,
+ V1CMPLEU_RRR_0_OPCODE_X1 = 58,
+ V1CMPLTSI_IMM8_OPCODE_X0 = 10,
+ V1CMPLTSI_IMM8_OPCODE_X1 = 35,
+ V1CMPLTS_RRR_0_OPCODE_X0 = 90,
+ V1CMPLTS_RRR_0_OPCODE_X1 = 59,
+ V1CMPLTUI_IMM8_OPCODE_X0 = 11,
+ V1CMPLTUI_IMM8_OPCODE_X1 = 36,
+ V1CMPLTU_RRR_0_OPCODE_X0 = 91,
+ V1CMPLTU_RRR_0_OPCODE_X1 = 60,
+ V1CMPNE_RRR_0_OPCODE_X0 = 92,
+ V1CMPNE_RRR_0_OPCODE_X1 = 61,
+ V1DDOTPUA_RRR_0_OPCODE_X0 = 161,
+ V1DDOTPUSA_RRR_0_OPCODE_X0 = 93,
+ V1DDOTPUS_RRR_0_OPCODE_X0 = 94,
+ V1DDOTPU_RRR_0_OPCODE_X0 = 162,
+ V1DOTPA_RRR_0_OPCODE_X0 = 95,
+ V1DOTPUA_RRR_0_OPCODE_X0 = 163,
+ V1DOTPUSA_RRR_0_OPCODE_X0 = 96,
+ V1DOTPUS_RRR_0_OPCODE_X0 = 97,
+ V1DOTPU_RRR_0_OPCODE_X0 = 164,
+ V1DOTP_RRR_0_OPCODE_X0 = 98,
+ V1INT_H_RRR_0_OPCODE_X0 = 99,
+ V1INT_H_RRR_0_OPCODE_X1 = 62,
+ V1INT_L_RRR_0_OPCODE_X0 = 100,
+ V1INT_L_RRR_0_OPCODE_X1 = 63,
+ V1MAXUI_IMM8_OPCODE_X0 = 12,
+ V1MAXUI_IMM8_OPCODE_X1 = 37,
+ V1MAXU_RRR_0_OPCODE_X0 = 101,
+ V1MAXU_RRR_0_OPCODE_X1 = 64,
+ V1MINUI_IMM8_OPCODE_X0 = 13,
+ V1MINUI_IMM8_OPCODE_X1 = 38,
+ V1MINU_RRR_0_OPCODE_X0 = 102,
+ V1MINU_RRR_0_OPCODE_X1 = 65,
+ V1MNZ_RRR_0_OPCODE_X0 = 103,
+ V1MNZ_RRR_0_OPCODE_X1 = 66,
+ V1MULTU_RRR_0_OPCODE_X0 = 104,
+ V1MULUS_RRR_0_OPCODE_X0 = 105,
+ V1MULU_RRR_0_OPCODE_X0 = 106,
+ V1MZ_RRR_0_OPCODE_X0 = 107,
+ V1MZ_RRR_0_OPCODE_X1 = 67,
+ V1SADAU_RRR_0_OPCODE_X0 = 108,
+ V1SADU_RRR_0_OPCODE_X0 = 109,
+ V1SHLI_SHIFT_OPCODE_X0 = 7,
+ V1SHLI_SHIFT_OPCODE_X1 = 7,
+ V1SHL_RRR_0_OPCODE_X0 = 110,
+ V1SHL_RRR_0_OPCODE_X1 = 68,
+ V1SHRSI_SHIFT_OPCODE_X0 = 8,
+ V1SHRSI_SHIFT_OPCODE_X1 = 8,
+ V1SHRS_RRR_0_OPCODE_X0 = 111,
+ V1SHRS_RRR_0_OPCODE_X1 = 69,
+ V1SHRUI_SHIFT_OPCODE_X0 = 9,
+ V1SHRUI_SHIFT_OPCODE_X1 = 9,
+ V1SHRU_RRR_0_OPCODE_X0 = 112,
+ V1SHRU_RRR_0_OPCODE_X1 = 70,
+ V1SUBUC_RRR_0_OPCODE_X0 = 113,
+ V1SUBUC_RRR_0_OPCODE_X1 = 71,
+ V1SUB_RRR_0_OPCODE_X0 = 114,
+ V1SUB_RRR_0_OPCODE_X1 = 72,
+ V2ADDI_IMM8_OPCODE_X0 = 14,
+ V2ADDI_IMM8_OPCODE_X1 = 39,
+ V2ADDSC_RRR_0_OPCODE_X0 = 115,
+ V2ADDSC_RRR_0_OPCODE_X1 = 73,
+ V2ADD_RRR_0_OPCODE_X0 = 116,
+ V2ADD_RRR_0_OPCODE_X1 = 74,
+ V2ADIFFS_RRR_0_OPCODE_X0 = 117,
+ V2AVGS_RRR_0_OPCODE_X0 = 118,
+ V2CMPEQI_IMM8_OPCODE_X0 = 15,
+ V2CMPEQI_IMM8_OPCODE_X1 = 40,
+ V2CMPEQ_RRR_0_OPCODE_X0 = 119,
+ V2CMPEQ_RRR_0_OPCODE_X1 = 75,
+ V2CMPLES_RRR_0_OPCODE_X0 = 120,
+ V2CMPLES_RRR_0_OPCODE_X1 = 76,
+ V2CMPLEU_RRR_0_OPCODE_X0 = 121,
+ V2CMPLEU_RRR_0_OPCODE_X1 = 77,
+ V2CMPLTSI_IMM8_OPCODE_X0 = 16,
+ V2CMPLTSI_IMM8_OPCODE_X1 = 41,
+ V2CMPLTS_RRR_0_OPCODE_X0 = 122,
+ V2CMPLTS_RRR_0_OPCODE_X1 = 78,
+ V2CMPLTUI_IMM8_OPCODE_X0 = 17,
+ V2CMPLTUI_IMM8_OPCODE_X1 = 42,
+ V2CMPLTU_RRR_0_OPCODE_X0 = 123,
+ V2CMPLTU_RRR_0_OPCODE_X1 = 79,
+ V2CMPNE_RRR_0_OPCODE_X0 = 124,
+ V2CMPNE_RRR_0_OPCODE_X1 = 80,
+ V2DOTPA_RRR_0_OPCODE_X0 = 125,
+ V2DOTP_RRR_0_OPCODE_X0 = 126,
+ V2INT_H_RRR_0_OPCODE_X0 = 127,
+ V2INT_H_RRR_0_OPCODE_X1 = 81,
+ V2INT_L_RRR_0_OPCODE_X0 = 128,
+ V2INT_L_RRR_0_OPCODE_X1 = 82,
+ V2MAXSI_IMM8_OPCODE_X0 = 18,
+ V2MAXSI_IMM8_OPCODE_X1 = 43,
+ V2MAXS_RRR_0_OPCODE_X0 = 129,
+ V2MAXS_RRR_0_OPCODE_X1 = 83,
+ V2MINSI_IMM8_OPCODE_X0 = 19,
+ V2MINSI_IMM8_OPCODE_X1 = 44,
+ V2MINS_RRR_0_OPCODE_X0 = 130,
+ V2MINS_RRR_0_OPCODE_X1 = 84,
+ V2MNZ_RRR_0_OPCODE_X0 = 131,
+ V2MNZ_RRR_0_OPCODE_X1 = 85,
+ V2MULFSC_RRR_0_OPCODE_X0 = 132,
+ V2MULS_RRR_0_OPCODE_X0 = 133,
+ V2MULTS_RRR_0_OPCODE_X0 = 134,
+ V2MZ_RRR_0_OPCODE_X0 = 135,
+ V2MZ_RRR_0_OPCODE_X1 = 86,
+ V2PACKH_RRR_0_OPCODE_X0 = 136,
+ V2PACKH_RRR_0_OPCODE_X1 = 87,
+ V2PACKL_RRR_0_OPCODE_X0 = 137,
+ V2PACKL_RRR_0_OPCODE_X1 = 88,
+ V2PACKUC_RRR_0_OPCODE_X0 = 138,
+ V2PACKUC_RRR_0_OPCODE_X1 = 89,
+ V2SADAS_RRR_0_OPCODE_X0 = 139,
+ V2SADAU_RRR_0_OPCODE_X0 = 140,
+ V2SADS_RRR_0_OPCODE_X0 = 141,
+ V2SADU_RRR_0_OPCODE_X0 = 142,
+ V2SHLI_SHIFT_OPCODE_X0 = 10,
+ V2SHLI_SHIFT_OPCODE_X1 = 10,
+ V2SHLSC_RRR_0_OPCODE_X0 = 143,
+ V2SHLSC_RRR_0_OPCODE_X1 = 90,
+ V2SHL_RRR_0_OPCODE_X0 = 144,
+ V2SHL_RRR_0_OPCODE_X1 = 91,
+ V2SHRSI_SHIFT_OPCODE_X0 = 11,
+ V2SHRSI_SHIFT_OPCODE_X1 = 11,
+ V2SHRS_RRR_0_OPCODE_X0 = 145,
+ V2SHRS_RRR_0_OPCODE_X1 = 92,
+ V2SHRUI_SHIFT_OPCODE_X0 = 12,
+ V2SHRUI_SHIFT_OPCODE_X1 = 12,
+ V2SHRU_RRR_0_OPCODE_X0 = 146,
+ V2SHRU_RRR_0_OPCODE_X1 = 93,
+ V2SUBSC_RRR_0_OPCODE_X0 = 147,
+ V2SUBSC_RRR_0_OPCODE_X1 = 94,
+ V2SUB_RRR_0_OPCODE_X0 = 148,
+ V2SUB_RRR_0_OPCODE_X1 = 95,
+ V4ADDSC_RRR_0_OPCODE_X0 = 149,
+ V4ADDSC_RRR_0_OPCODE_X1 = 96,
+ V4ADD_RRR_0_OPCODE_X0 = 150,
+ V4ADD_RRR_0_OPCODE_X1 = 97,
+ V4INT_H_RRR_0_OPCODE_X0 = 151,
+ V4INT_H_RRR_0_OPCODE_X1 = 98,
+ V4INT_L_RRR_0_OPCODE_X0 = 152,
+ V4INT_L_RRR_0_OPCODE_X1 = 99,
+ V4PACKSC_RRR_0_OPCODE_X0 = 153,
+ V4PACKSC_RRR_0_OPCODE_X1 = 100,
+ V4SHLSC_RRR_0_OPCODE_X0 = 154,
+ V4SHLSC_RRR_0_OPCODE_X1 = 101,
+ V4SHL_RRR_0_OPCODE_X0 = 155,
+ V4SHL_RRR_0_OPCODE_X1 = 102,
+ V4SHRS_RRR_0_OPCODE_X0 = 156,
+ V4SHRS_RRR_0_OPCODE_X1 = 103,
+ V4SHRU_RRR_0_OPCODE_X0 = 157,
+ V4SHRU_RRR_0_OPCODE_X1 = 104,
+ V4SUBSC_RRR_0_OPCODE_X0 = 158,
+ V4SUBSC_RRR_0_OPCODE_X1 = 105,
+ V4SUB_RRR_0_OPCODE_X0 = 159,
+ V4SUB_RRR_0_OPCODE_X1 = 106,
+ WH64_UNARY_OPCODE_X1 = 38,
+ XORI_IMM8_OPCODE_X0 = 20,
+ XORI_IMM8_OPCODE_X1 = 45,
+ XOR_RRR_0_OPCODE_X0 = 160,
+ XOR_RRR_0_OPCODE_X1 = 107,
+ XOR_RRR_5_OPCODE_Y0 = 3,
+ XOR_RRR_5_OPCODE_Y1 = 3
+};
+
+
+#endif /* __ASSEMBLER__ */
+
+#endif /* __ARCH_OPCODE_H__ */
--
1.9.3
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [Qemu-devel] [PATCH 06/10 v10] target-tilegx: Add special register information from Tilera Corporation
2015-05-10 22:36 [Qemu-devel] [PATCH 00/10 v10] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (3 preceding siblings ...)
2015-05-10 22:41 ` [Qemu-devel] [PATCH 04/10 v10] target-tilegx: Add opcode basic implementation from Tilera Corporation Chen Gang
@ 2015-05-10 22:43 ` Chen Gang
2015-05-10 22:44 ` [Qemu-devel] [PATCH 07/10 v10] target-tilegx: Add cpu basic features for linux-user Chen Gang
` (4 subsequent siblings)
9 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-10 22:43 UTC (permalink / raw)
To: Peter Maydell, Andreas Färber, rth, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
The related copy is from Linux kernel "arch/tile/include/uapi/arch/
spr_def_64.h".
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
target-tilegx/spr_def_64.h | 216 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 216 insertions(+)
create mode 100644 target-tilegx/spr_def_64.h
diff --git a/target-tilegx/spr_def_64.h b/target-tilegx/spr_def_64.h
new file mode 100644
index 0000000..67a6c17
--- /dev/null
+++ b/target-tilegx/spr_def_64.h
@@ -0,0 +1,216 @@
+/*
+ * Copyright 2011 Tilera Corporation. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation, version 2.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+ * NON INFRINGEMENT. See the GNU General Public License for
+ * more details.
+ */
+
+#ifndef __DOXYGEN__
+
+#ifndef __ARCH_SPR_DEF_64_H__
+#define __ARCH_SPR_DEF_64_H__
+
+#define SPR_AUX_PERF_COUNT_0 0x2105
+#define SPR_AUX_PERF_COUNT_1 0x2106
+#define SPR_AUX_PERF_COUNT_CTL 0x2107
+#define SPR_AUX_PERF_COUNT_STS 0x2108
+#define SPR_CMPEXCH_VALUE 0x2780
+#define SPR_CYCLE 0x2781
+#define SPR_DONE 0x2705
+#define SPR_DSTREAM_PF 0x2706
+#define SPR_EVENT_BEGIN 0x2782
+#define SPR_EVENT_END 0x2783
+#define SPR_EX_CONTEXT_0_0 0x2580
+#define SPR_EX_CONTEXT_0_1 0x2581
+#define SPR_EX_CONTEXT_0_1__PL_SHIFT 0
+#define SPR_EX_CONTEXT_0_1__PL_RMASK 0x3
+#define SPR_EX_CONTEXT_0_1__PL_MASK 0x3
+#define SPR_EX_CONTEXT_0_1__ICS_SHIFT 2
+#define SPR_EX_CONTEXT_0_1__ICS_RMASK 0x1
+#define SPR_EX_CONTEXT_0_1__ICS_MASK 0x4
+#define SPR_EX_CONTEXT_1_0 0x2480
+#define SPR_EX_CONTEXT_1_1 0x2481
+#define SPR_EX_CONTEXT_1_1__PL_SHIFT 0
+#define SPR_EX_CONTEXT_1_1__PL_RMASK 0x3
+#define SPR_EX_CONTEXT_1_1__PL_MASK 0x3
+#define SPR_EX_CONTEXT_1_1__ICS_SHIFT 2
+#define SPR_EX_CONTEXT_1_1__ICS_RMASK 0x1
+#define SPR_EX_CONTEXT_1_1__ICS_MASK 0x4
+#define SPR_EX_CONTEXT_2_0 0x2380
+#define SPR_EX_CONTEXT_2_1 0x2381
+#define SPR_EX_CONTEXT_2_1__PL_SHIFT 0
+#define SPR_EX_CONTEXT_2_1__PL_RMASK 0x3
+#define SPR_EX_CONTEXT_2_1__PL_MASK 0x3
+#define SPR_EX_CONTEXT_2_1__ICS_SHIFT 2
+#define SPR_EX_CONTEXT_2_1__ICS_RMASK 0x1
+#define SPR_EX_CONTEXT_2_1__ICS_MASK 0x4
+#define SPR_FAIL 0x2707
+#define SPR_IDN_AVAIL_EN 0x1a05
+#define SPR_IDN_DATA_AVAIL 0x0a80
+#define SPR_IDN_DEADLOCK_TIMEOUT 0x1806
+#define SPR_IDN_DEMUX_COUNT_0 0x0a05
+#define SPR_IDN_DEMUX_COUNT_1 0x0a06
+#define SPR_IDN_DIRECTION_PROTECT 0x1405
+#define SPR_IDN_PENDING 0x0a08
+#define SPR_ILL_TRANS_REASON__I_STREAM_VA_RMASK 0x1
+#define SPR_INTCTRL_0_STATUS 0x2505
+#define SPR_INTCTRL_1_STATUS 0x2405
+#define SPR_INTCTRL_2_STATUS 0x2305
+#define SPR_INTERRUPT_CRITICAL_SECTION 0x2708
+#define SPR_INTERRUPT_MASK_0 0x2506
+#define SPR_INTERRUPT_MASK_1 0x2406
+#define SPR_INTERRUPT_MASK_2 0x2306
+#define SPR_INTERRUPT_MASK_RESET_0 0x2507
+#define SPR_INTERRUPT_MASK_RESET_1 0x2407
+#define SPR_INTERRUPT_MASK_RESET_2 0x2307
+#define SPR_INTERRUPT_MASK_SET_0 0x2508
+#define SPR_INTERRUPT_MASK_SET_1 0x2408
+#define SPR_INTERRUPT_MASK_SET_2 0x2308
+#define SPR_INTERRUPT_VECTOR_BASE_0 0x2509
+#define SPR_INTERRUPT_VECTOR_BASE_1 0x2409
+#define SPR_INTERRUPT_VECTOR_BASE_2 0x2309
+#define SPR_INTERRUPT_VECTOR_BASE_3 0x2209
+#define SPR_IPI_EVENT_0 0x1f05
+#define SPR_IPI_EVENT_1 0x1e05
+#define SPR_IPI_EVENT_2 0x1d05
+#define SPR_IPI_EVENT_RESET_0 0x1f06
+#define SPR_IPI_EVENT_RESET_1 0x1e06
+#define SPR_IPI_EVENT_RESET_2 0x1d06
+#define SPR_IPI_EVENT_SET_0 0x1f07
+#define SPR_IPI_EVENT_SET_1 0x1e07
+#define SPR_IPI_EVENT_SET_2 0x1d07
+#define SPR_IPI_MASK_0 0x1f08
+#define SPR_IPI_MASK_1 0x1e08
+#define SPR_IPI_MASK_2 0x1d08
+#define SPR_IPI_MASK_RESET_0 0x1f09
+#define SPR_IPI_MASK_RESET_1 0x1e09
+#define SPR_IPI_MASK_RESET_2 0x1d09
+#define SPR_IPI_MASK_SET_0 0x1f0a
+#define SPR_IPI_MASK_SET_1 0x1e0a
+#define SPR_IPI_MASK_SET_2 0x1d0a
+#define SPR_MPL_AUX_PERF_COUNT_SET_0 0x2100
+#define SPR_MPL_AUX_PERF_COUNT_SET_1 0x2101
+#define SPR_MPL_AUX_PERF_COUNT_SET_2 0x2102
+#define SPR_MPL_AUX_TILE_TIMER_SET_0 0x1700
+#define SPR_MPL_AUX_TILE_TIMER_SET_1 0x1701
+#define SPR_MPL_AUX_TILE_TIMER_SET_2 0x1702
+#define SPR_MPL_IDN_ACCESS_SET_0 0x0a00
+#define SPR_MPL_IDN_ACCESS_SET_1 0x0a01
+#define SPR_MPL_IDN_ACCESS_SET_2 0x0a02
+#define SPR_MPL_IDN_AVAIL_SET_0 0x1a00
+#define SPR_MPL_IDN_AVAIL_SET_1 0x1a01
+#define SPR_MPL_IDN_AVAIL_SET_2 0x1a02
+#define SPR_MPL_IDN_COMPLETE_SET_0 0x0500
+#define SPR_MPL_IDN_COMPLETE_SET_1 0x0501
+#define SPR_MPL_IDN_COMPLETE_SET_2 0x0502
+#define SPR_MPL_IDN_FIREWALL_SET_0 0x1400
+#define SPR_MPL_IDN_FIREWALL_SET_1 0x1401
+#define SPR_MPL_IDN_FIREWALL_SET_2 0x1402
+#define SPR_MPL_IDN_TIMER_SET_0 0x1800
+#define SPR_MPL_IDN_TIMER_SET_1 0x1801
+#define SPR_MPL_IDN_TIMER_SET_2 0x1802
+#define SPR_MPL_INTCTRL_0_SET_0 0x2500
+#define SPR_MPL_INTCTRL_0_SET_1 0x2501
+#define SPR_MPL_INTCTRL_0_SET_2 0x2502
+#define SPR_MPL_INTCTRL_1_SET_0 0x2400
+#define SPR_MPL_INTCTRL_1_SET_1 0x2401
+#define SPR_MPL_INTCTRL_1_SET_2 0x2402
+#define SPR_MPL_INTCTRL_2_SET_0 0x2300
+#define SPR_MPL_INTCTRL_2_SET_1 0x2301
+#define SPR_MPL_INTCTRL_2_SET_2 0x2302
+#define SPR_MPL_IPI_0 0x1f04
+#define SPR_MPL_IPI_0_SET_0 0x1f00
+#define SPR_MPL_IPI_0_SET_1 0x1f01
+#define SPR_MPL_IPI_0_SET_2 0x1f02
+#define SPR_MPL_IPI_1 0x1e04
+#define SPR_MPL_IPI_1_SET_0 0x1e00
+#define SPR_MPL_IPI_1_SET_1 0x1e01
+#define SPR_MPL_IPI_1_SET_2 0x1e02
+#define SPR_MPL_IPI_2 0x1d04
+#define SPR_MPL_IPI_2_SET_0 0x1d00
+#define SPR_MPL_IPI_2_SET_1 0x1d01
+#define SPR_MPL_IPI_2_SET_2 0x1d02
+#define SPR_MPL_PERF_COUNT_SET_0 0x2000
+#define SPR_MPL_PERF_COUNT_SET_1 0x2001
+#define SPR_MPL_PERF_COUNT_SET_2 0x2002
+#define SPR_MPL_UDN_ACCESS_SET_0 0x0b00
+#define SPR_MPL_UDN_ACCESS_SET_1 0x0b01
+#define SPR_MPL_UDN_ACCESS_SET_2 0x0b02
+#define SPR_MPL_UDN_AVAIL_SET_0 0x1b00
+#define SPR_MPL_UDN_AVAIL_SET_1 0x1b01
+#define SPR_MPL_UDN_AVAIL_SET_2 0x1b02
+#define SPR_MPL_UDN_COMPLETE_SET_0 0x0600
+#define SPR_MPL_UDN_COMPLETE_SET_1 0x0601
+#define SPR_MPL_UDN_COMPLETE_SET_2 0x0602
+#define SPR_MPL_UDN_FIREWALL_SET_0 0x1500
+#define SPR_MPL_UDN_FIREWALL_SET_1 0x1501
+#define SPR_MPL_UDN_FIREWALL_SET_2 0x1502
+#define SPR_MPL_UDN_TIMER_SET_0 0x1900
+#define SPR_MPL_UDN_TIMER_SET_1 0x1901
+#define SPR_MPL_UDN_TIMER_SET_2 0x1902
+#define SPR_MPL_WORLD_ACCESS_SET_0 0x2700
+#define SPR_MPL_WORLD_ACCESS_SET_1 0x2701
+#define SPR_MPL_WORLD_ACCESS_SET_2 0x2702
+#define SPR_PASS 0x2709
+#define SPR_PERF_COUNT_0 0x2005
+#define SPR_PERF_COUNT_1 0x2006
+#define SPR_PERF_COUNT_CTL 0x2007
+#define SPR_PERF_COUNT_DN_CTL 0x2008
+#define SPR_PERF_COUNT_STS 0x2009
+#define SPR_PROC_STATUS 0x2784
+#define SPR_SIM_CONTROL 0x2785
+#define SPR_SINGLE_STEP_CONTROL_0 0x0405
+#define SPR_SINGLE_STEP_CONTROL_0__CANCELED_MASK 0x1
+#define SPR_SINGLE_STEP_CONTROL_0__INHIBIT_MASK 0x2
+#define SPR_SINGLE_STEP_CONTROL_1 0x0305
+#define SPR_SINGLE_STEP_CONTROL_1__CANCELED_MASK 0x1
+#define SPR_SINGLE_STEP_CONTROL_1__INHIBIT_MASK 0x2
+#define SPR_SINGLE_STEP_CONTROL_2 0x0205
+#define SPR_SINGLE_STEP_CONTROL_2__CANCELED_MASK 0x1
+#define SPR_SINGLE_STEP_CONTROL_2__INHIBIT_MASK 0x2
+#define SPR_SINGLE_STEP_EN_0_0 0x250a
+#define SPR_SINGLE_STEP_EN_0_1 0x240a
+#define SPR_SINGLE_STEP_EN_0_2 0x230a
+#define SPR_SINGLE_STEP_EN_1_0 0x250b
+#define SPR_SINGLE_STEP_EN_1_1 0x240b
+#define SPR_SINGLE_STEP_EN_1_2 0x230b
+#define SPR_SINGLE_STEP_EN_2_0 0x250c
+#define SPR_SINGLE_STEP_EN_2_1 0x240c
+#define SPR_SINGLE_STEP_EN_2_2 0x230c
+#define SPR_SYSTEM_SAVE_0_0 0x2582
+#define SPR_SYSTEM_SAVE_0_1 0x2583
+#define SPR_SYSTEM_SAVE_0_2 0x2584
+#define SPR_SYSTEM_SAVE_0_3 0x2585
+#define SPR_SYSTEM_SAVE_1_0 0x2482
+#define SPR_SYSTEM_SAVE_1_1 0x2483
+#define SPR_SYSTEM_SAVE_1_2 0x2484
+#define SPR_SYSTEM_SAVE_1_3 0x2485
+#define SPR_SYSTEM_SAVE_2_0 0x2382
+#define SPR_SYSTEM_SAVE_2_1 0x2383
+#define SPR_SYSTEM_SAVE_2_2 0x2384
+#define SPR_SYSTEM_SAVE_2_3 0x2385
+#define SPR_TILE_COORD 0x270b
+#define SPR_TILE_RTF_HWM 0x270c
+#define SPR_TILE_TIMER_CONTROL 0x1605
+#define SPR_UDN_AVAIL_EN 0x1b05
+#define SPR_UDN_DATA_AVAIL 0x0b80
+#define SPR_UDN_DEADLOCK_TIMEOUT 0x1906
+#define SPR_UDN_DEMUX_COUNT_0 0x0b05
+#define SPR_UDN_DEMUX_COUNT_1 0x0b06
+#define SPR_UDN_DEMUX_COUNT_2 0x0b07
+#define SPR_UDN_DEMUX_COUNT_3 0x0b08
+#define SPR_UDN_DIRECTION_PROTECT 0x1505
+#define SPR_UDN_PENDING 0x0b0a
+#define SPR_WATCH_MASK 0x200a
+#define SPR_WATCH_VAL 0x200b
+
+#endif /* !defined(__ARCH_SPR_DEF_64_H__) */
+
+#endif /* !defined(__DOXYGEN__) */
--
1.9.3
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [Qemu-devel] [PATCH 07/10 v10] target-tilegx: Add cpu basic features for linux-user
2015-05-10 22:36 [Qemu-devel] [PATCH 00/10 v10] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (4 preceding siblings ...)
2015-05-10 22:43 ` [Qemu-devel] [PATCH 06/10 v10] target-tilegx: Add special register information " Chen Gang
@ 2015-05-10 22:44 ` Chen Gang
2015-05-10 22:44 ` [Qemu-devel] [PATCH 08/10 v10] target-tilegx: Add helper " Chen Gang
` (3 subsequent siblings)
9 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-10 22:44 UTC (permalink / raw)
To: Peter Maydell, Andreas Färber, rth, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
It implements minimized cpu features for linux-user.
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
target-tilegx/cpu.c | 143 +++++++++++++++++++++++++++++++++++++++++++++++
target-tilegx/cpu.h | 156 ++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 299 insertions(+)
create mode 100644 target-tilegx/cpu.c
create mode 100644 target-tilegx/cpu.h
diff --git a/target-tilegx/cpu.c b/target-tilegx/cpu.c
new file mode 100644
index 0000000..663fcb6
--- /dev/null
+++ b/target-tilegx/cpu.c
@@ -0,0 +1,143 @@
+/*
+ * QEMU TILE-Gx CPU
+ *
+ * Copyright (c) 2015 Chen Gang
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see
+ * <http://www.gnu.org/licenses/lgpl-2.1.html>
+ */
+
+#include "cpu.h"
+#include "qemu-common.h"
+#include "hw/qdev-properties.h"
+#include "migration/vmstate.h"
+
+TileGXCPU *cpu_tilegx_init(const char *cpu_model)
+{
+ TileGXCPU *cpu;
+
+ cpu = TILEGX_CPU(object_new(TYPE_TILEGX_CPU));
+
+ object_property_set_bool(OBJECT(cpu), true, "realized", NULL);
+
+ return cpu;
+}
+
+static void tilegx_cpu_set_pc(CPUState *cs, vaddr value)
+{
+ TileGXCPU *cpu = TILEGX_CPU(cs);
+
+ cpu->env.pc = value;
+}
+
+static bool tilegx_cpu_has_work(CPUState *cs)
+{
+ return true;
+}
+
+static void tilegx_cpu_reset(CPUState *s)
+{
+ TileGXCPU *cpu = TILEGX_CPU(s);
+ TileGXCPUClass *tcc = TILEGX_CPU_GET_CLASS(cpu);
+ CPUTLGState *env = &cpu->env;
+
+ tcc->parent_reset(s);
+
+ memset(env, 0, sizeof(CPUTLGState));
+ tlb_flush(s, 1);
+}
+
+static void tilegx_cpu_realizefn(DeviceState *dev, Error **errp)
+{
+ CPUState *cs = CPU(dev);
+ TileGXCPUClass *tcc = TILEGX_CPU_GET_CLASS(dev);
+
+ cpu_reset(cs);
+ qemu_init_vcpu(cs);
+
+ tcc->parent_realize(dev, errp);
+}
+
+static void tilegx_cpu_initfn(Object *obj)
+{
+ CPUState *cs = CPU(obj);
+ TileGXCPU *cpu = TILEGX_CPU(obj);
+ CPUTLGState *env = &cpu->env;
+ static bool tcg_initialized;
+
+ cs->env_ptr = env;
+ cpu_exec_init(env);
+
+ if (tcg_enabled() && !tcg_initialized) {
+ tcg_initialized = true;
+ tilegx_tcg_init();
+ }
+}
+
+static void tilegx_cpu_do_interrupt(CPUState *cs)
+{
+ cs->exception_index = -1;
+}
+
+static int tilegx_cpu_handle_mmu_fault(CPUState *cs, vaddr address, int rw,
+ int mmu_idx)
+{
+ cpu_dump_state(cs, stderr, fprintf, 0);
+ return 1;
+}
+
+static bool tilegx_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
+{
+ if (interrupt_request & CPU_INTERRUPT_HARD) {
+ tilegx_cpu_do_interrupt(cs);
+ return true;
+ }
+ return false;
+}
+
+static void tilegx_cpu_class_init(ObjectClass *oc, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(oc);
+ CPUClass *cc = CPU_CLASS(oc);
+ TileGXCPUClass *tcc = TILEGX_CPU_CLASS(oc);
+
+ tcc->parent_realize = dc->realize;
+ dc->realize = tilegx_cpu_realizefn;
+
+ tcc->parent_reset = cc->reset;
+ cc->reset = tilegx_cpu_reset;
+
+ cc->has_work = tilegx_cpu_has_work;
+ cc->do_interrupt = tilegx_cpu_do_interrupt;
+ cc->cpu_exec_interrupt = tilegx_cpu_exec_interrupt;
+ cc->set_pc = tilegx_cpu_set_pc;
+ cc->handle_mmu_fault = tilegx_cpu_handle_mmu_fault;
+ cc->gdb_num_core_regs = 0;
+}
+
+static const TypeInfo tilegx_cpu_type_info = {
+ .name = TYPE_TILEGX_CPU,
+ .parent = TYPE_CPU,
+ .instance_size = sizeof(TileGXCPU),
+ .instance_init = tilegx_cpu_initfn,
+ .class_size = sizeof(TileGXCPUClass),
+ .class_init = tilegx_cpu_class_init,
+};
+
+static void tilegx_cpu_register_types(void)
+{
+ type_register_static(&tilegx_cpu_type_info);
+}
+
+type_init(tilegx_cpu_register_types)
diff --git a/target-tilegx/cpu.h b/target-tilegx/cpu.h
new file mode 100644
index 0000000..30f1828
--- /dev/null
+++ b/target-tilegx/cpu.h
@@ -0,0 +1,156 @@
+/*
+ * TILE-Gx virtual CPU header
+ *
+ * Copyright (c) 2015 Chen Gang
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef CPU_TILEGX_H
+#define CPU_TILEGX_H
+
+#include "config.h"
+#include "qemu-common.h"
+
+#define TARGET_LONG_BITS 64
+
+#define CPUArchState struct CPUTLGState
+
+#include "exec/cpu-defs.h"
+
+/* TILE-Gx common register alias */
+#define TILEGX_R_RE 0 /* 0 register, for function/syscall return value */
+#define TILEGX_R_NR 10 /* 10 register, for syscall number */
+#define TILEGX_R_BP 52 /* 52 register, optional frame pointer */
+#define TILEGX_R_TP 53 /* TP register, thread local storage data */
+#define TILEGX_R_SP 54 /* SP register, stack pointer */
+#define TILEGX_R_LR 55 /* LR register, may save pc, but it is not pc */
+#define TILEGX_R_ZERO 63 /* Zero register, always zero */
+#define TILEGX_R_COUNT 56 /* Only 56 registers are really useful */
+#define TILEGX_R_NOREG 255 /* Invalid register value */
+
+/* TILE-Gx special registers used by outside */
+enum {
+ TILEGX_SPR_CMPEXCH = 0,
+ TILEGX_SPR_COUNT
+};
+
+typedef struct CPUTLGState {
+ uint64_t regs[TILEGX_R_COUNT]; /* Common used registers by outside */
+ uint64_t spregs[TILEGX_SPR_COUNT]; /* Special used registers by outside */
+ uint64_t pc; /* Current pc */
+
+#if defined(CONFIG_USER_ONLY)
+ uint32_t cmpexch; /* cmpexch(4) information */
+#endif
+
+ CPU_COMMON
+} CPUTLGState;
+
+#include "qom/cpu.h"
+
+#define TYPE_TILEGX_CPU "tilegx-cpu"
+
+#define TILEGX_CPU_CLASS(klass) \
+ OBJECT_CLASS_CHECK(TileGXCPUClass, (klass), TYPE_TILEGX_CPU)
+#define TILEGX_CPU(obj) \
+ OBJECT_CHECK(TileGXCPU, (obj), TYPE_TILEGX_CPU)
+#define TILEGX_CPU_GET_CLASS(obj) \
+ OBJECT_GET_CLASS(TileGXCPUClass, (obj), TYPE_TILEGX_CPU)
+
+/**
+ * TileGXCPUClass:
+ * @parent_realize: The parent class' realize handler.
+ * @parent_reset: The parent class' reset handler.
+ *
+ * A Tile-Gx CPU model.
+ */
+typedef struct TileGXCPUClass {
+ /*< private >*/
+ CPUClass parent_class;
+ /*< public >*/
+
+ DeviceRealize parent_realize;
+ void (*parent_reset)(CPUState *cpu);
+} TileGXCPUClass;
+
+/**
+ * TileGXCPU:
+ * @env: #CPUTLGState
+ *
+ * A Tile-GX CPU.
+ */
+typedef struct TileGXCPU {
+ /*< private >*/
+ CPUState parent_obj;
+ /*< public >*/
+
+ CPUTLGState env;
+} TileGXCPU;
+
+static inline TileGXCPU *tilegx_env_get_cpu(CPUTLGState *env)
+{
+ return container_of(env, TileGXCPU, env);
+}
+
+#define ENV_GET_CPU(e) CPU(tilegx_env_get_cpu(e))
+
+#define ENV_OFFSET offsetof(TileGXCPU, env)
+
+/* TILE-Gx memory attributes */
+#define TARGET_PAGE_BITS 16 /* TILE-Gx uses 64KB page size */
+#define MMAP_SHIFT TARGET_PAGE_BITS
+#define TARGET_PHYS_ADDR_SPACE_BITS 42 /* It has 42 bit physical addresses */
+#define TARGET_VIRT_ADDR_SPACE_BITS 64 /* It has 64 bit virtual addresses */
+#define MMU_USER_IDX 0 /* Current memory operation is in user mode */
+
+/* Exception numbers */
+enum {
+ TILEGX_EXCP_NONE = 0,
+ TILEGX_EXCP_SYSCALL = 1,
+ TILEGX_EXCP_OPCODE_UNKNOWN = 0x101,
+ TILEGX_EXCP_OPCODE_UNIMPLEMENTED = 0x102,
+ TILEGX_EXCP_OPCODE_CMPEXCH = 0x103,
+ TILEGX_EXCP_OPCODE_CMPEXCH4 = 0x104,
+ TILEGX_EXCP_OPCODE_EXCH = 0x105,
+ TILEGX_EXCP_OPCODE_EXCH4 = 0x106,
+ TILEGX_EXCP_REG_UNSUPPORTED = 0x181,
+ TILEGX_EXCP_UNALIGNMENT = 0x201,
+ TILEGX_EXCP_DBUG_BREAK = 0x301
+};
+
+#include "exec/cpu-all.h"
+
+void tilegx_tcg_init(void);
+int cpu_tilegx_exec(CPUTLGState *s);
+int cpu_tilegx_signal_handler(int host_signum, void *pinfo, void *puc);
+
+TileGXCPU *cpu_tilegx_init(const char *cpu_model);
+
+#define cpu_init(cpu_model) CPU(cpu_tilegx_init(cpu_model))
+
+#define cpu_exec cpu_tilegx_exec
+#define cpu_gen_code cpu_tilegx_gen_code
+#define cpu_signal_handler cpu_tilegx_signal_handler
+
+static inline void cpu_get_tb_cpu_state(CPUTLGState *env, target_ulong *pc,
+ target_ulong *cs_base, int *flags)
+{
+ *pc = env->pc;
+ *cs_base = 0;
+ *flags = 0;
+}
+
+#include "exec/exec-all.h"
+
+#endif
--
1.9.3
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [Qemu-devel] [PATCH 08/10 v10] target-tilegx: Add helper features for linux-user
2015-05-10 22:36 [Qemu-devel] [PATCH 00/10 v10] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (5 preceding siblings ...)
2015-05-10 22:44 ` [Qemu-devel] [PATCH 07/10 v10] target-tilegx: Add cpu basic features for linux-user Chen Gang
@ 2015-05-10 22:44 ` Chen Gang
2015-05-10 22:45 ` [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib Chen Gang
` (2 subsequent siblings)
9 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-10 22:44 UTC (permalink / raw)
To: Peter Maydell, Andreas Färber, rth, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
Add several helpers for translation.
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
target-tilegx/helper.c | 41 +++++++++++++++++++++++++++++++++++++++++
target-tilegx/helper.h | 3 +++
2 files changed, 44 insertions(+)
create mode 100644 target-tilegx/helper.c
create mode 100644 target-tilegx/helper.h
diff --git a/target-tilegx/helper.c b/target-tilegx/helper.c
new file mode 100644
index 0000000..5fc53a8
--- /dev/null
+++ b/target-tilegx/helper.c
@@ -0,0 +1,41 @@
+/*
+ * QEMU TILE-Gx helpers
+ *
+ * Copyright (c) 2015 Chen Gang
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see
+ * <http://www.gnu.org/licenses/lgpl-2.1.html>
+ */
+
+#include "cpu.h"
+#include "qemu-common.h"
+#include "exec/helper-proto.h"
+
+void helper_exception(CPUTLGState *env, uint32_t excp)
+{
+ CPUState *cs = CPU(tilegx_env_get_cpu(env));
+
+ cs->exception_index = excp;
+ cpu_loop_exit(cs);
+}
+
+uint64_t helper_cntlz(uint64_t arg)
+{
+ return clz64(arg);
+}
+
+uint64_t helper_cnttz(uint64_t arg)
+{
+ return ctz64(arg);
+}
diff --git a/target-tilegx/helper.h b/target-tilegx/helper.h
new file mode 100644
index 0000000..15f841f
--- /dev/null
+++ b/target-tilegx/helper.h
@@ -0,0 +1,3 @@
+DEF_HELPER_2(exception, noreturn, env, i32)
+DEF_HELPER_FLAGS_1(cntlz, TCG_CALL_NO_RWG_SE, i64, i64)
+DEF_HELPER_FLAGS_1(cnttz, TCG_CALL_NO_RWG_SE, i64, i64)
--
1.9.3
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib
2015-05-10 22:36 [Qemu-devel] [PATCH 00/10 v10] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (6 preceding siblings ...)
2015-05-10 22:44 ` [Qemu-devel] [PATCH 08/10 v10] target-tilegx: Add helper " Chen Gang
@ 2015-05-10 22:45 ` Chen Gang
2015-05-11 16:55 ` Richard Henderson
2015-05-10 22:46 ` [Qemu-devel] [PATCH 10/10 v10] target-tilegx: Add TILE-Gx building files Chen Gang
[not found] ` <BLU437-SMTP59B35F884A72A991334DBB9DC0@phx.gbl>
9 siblings, 1 reply; 32+ messages in thread
From: Chen Gang @ 2015-05-10 22:45 UTC (permalink / raw)
To: Peter Maydell, Andreas Färber, rth, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
Generate related tcg instructions, and qemu tilegx runs to _init_malloc,
but causes assert in _init_malloc.
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
target-tilegx/translate.c | 2889 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 2889 insertions(+)
create mode 100644 target-tilegx/translate.c
diff --git a/target-tilegx/translate.c b/target-tilegx/translate.c
new file mode 100644
index 0000000..3d7d327
--- /dev/null
+++ b/target-tilegx/translate.c
@@ -0,0 +1,2889 @@
+/*
+ * QEMU TILE-Gx CPU
+ *
+ * Copyright (c) 2015 Chen Gang
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see
+ * <http://www.gnu.org/licenses/lgpl-2.1.html>
+ */
+
+#include "cpu.h"
+#include "qemu/log.h"
+#include "disas/disas.h"
+#include "tcg-op.h"
+#include "exec/cpu_ldst.h"
+#include "opcode_tilegx.h"
+#include "spr_def_64.h"
+
+#define FMT64X "%016" PRIx64
+
+#define TILEGX_OPCODE_MAX_X0 164 /* include 164 */
+#define TILEGX_OPCODE_MAX_X1 107 /* include 107 */
+#define TILEGX_OPCODE_MAX_Y0 15 /* include 15 */
+#define TILEGX_OPCODE_MAX_Y1 15 /* include 15 */
+#define TILEGX_OPCODE_MAX_Y2 3 /* include 3 */
+
+static TCGv_ptr cpu_env;
+static TCGv cpu_pc;
+static TCGv cpu_regs[TILEGX_R_COUNT];
+static TCGv cpu_spregs[TILEGX_SPR_COUNT];
+#if defined(CONFIG_USER_ONLY)
+static TCGv_i32 cpu_cmpexch;
+#endif
+
+static const char * const reg_names[] = {
+ "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
+ "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15",
+ "r16", "r17", "r18", "r19", "r20", "r21", "r22", "r23",
+ "r24", "r25", "r26", "r27", "r28", "r29", "r30", "r31",
+ "r32", "r33", "r34", "r35", "r36", "r37", "r38", "r39",
+ "r40", "r41", "r42", "r43", "r44", "r45", "r46", "r47",
+ "r48", "r49", "r50", "r51", "bp", "tp", "sp", "lr"
+};
+
+static const char * const spreg_names[] = {
+ "cmpexch"
+};
+
+/* It is for temporary registers */
+typedef struct DisasContextTemp {
+ uint8_t idx; /* index */
+ TCGv val; /* value */
+} DisasContextTemp;
+
+/* This is the state at translation time. */
+typedef struct DisasContext {
+ uint64_t pc; /* Current pc */
+ uint64_t exception; /* Current exception */
+
+ TCGv zero; /* For zero register */
+
+ DisasContextTemp *tmp_regcur; /* Current temporary registers */
+ DisasContextTemp tmp_regs[TILEGX_MAX_INSTRUCTIONS_PER_BUNDLE];
+ /* All temporary registers */
+ struct {
+ TCGCond cond; /* Branch condition */
+ TCGv dest; /* pc jump destination, if will jump */
+ TCGv val1; /* Firt value for condition comparing */
+ TCGv val2; /* Second value for condition comparing */
+ } jmp; /* Jump object, only once in each TB block */
+} DisasContext;
+
+#include "exec/gen-icount.h"
+
+static TCGv load_zero(DisasContext *dc)
+{
+ if (TCGV_IS_UNUSED_I64(dc->zero)) {
+ dc->zero = tcg_const_local_i64(0);
+ }
+ return dc->zero;
+}
+
+static TCGv load_gr(DisasContext *dc, uint8_t reg)
+{
+ if (likely(reg < TILEGX_R_COUNT)) {
+ return cpu_regs[reg];
+ } else if (reg != TILEGX_R_ZERO) {
+ dc->exception = TILEGX_EXCP_REG_UNSUPPORTED;
+ }
+ return load_zero(dc);
+}
+
+static TCGv dest_gr(DisasContext *dc, uint8_t rdst)
+{
+ DisasContextTemp *tmp = dc->tmp_regcur;
+ tmp->idx = rdst;
+ tmp->val = tcg_temp_new_i64();
+ return tmp->val;
+}
+
+static void gen_exception(DisasContext *dc, int num)
+{
+ TCGv_i32 tmp = tcg_const_i32(num);
+
+ gen_helper_exception(cpu_env, tmp);
+ tcg_temp_free_i32(tmp);
+}
+
+/* mfspr can be only in X1 pipe, so it doesn't need to be bufferd */
+static void gen_mfspr(struct DisasContext *dc, uint8_t rdst, uint16_t imm14)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "mfspr r%d, 0x%x\n", rdst, imm14);
+
+ if (rdst >= TILEGX_R_COUNT) {
+ if (rdst != TILEGX_R_ZERO) {
+ dc->exception = TILEGX_EXCP_REG_UNSUPPORTED;
+ }
+ return;
+ }
+
+ switch (imm14) {
+ case SPR_CMPEXCH_VALUE:
+ tcg_gen_mov_i64(cpu_regs[rdst], cpu_spregs[TILEGX_SPR_CMPEXCH]);
+ return;
+ default:
+ qemu_log_mask(LOG_UNIMP, "UNIMP mfspr 0x%x.\n", imm14);
+ }
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+/* mtspr can be only in X1 pipe, so it doesn't need to be bufferd */
+static void gen_mtspr(struct DisasContext *dc, uint8_t rsrc, uint16_t imm14)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "mtspr 0x%x, r%d\n", imm14, rsrc);
+
+ switch (imm14) {
+ case SPR_CMPEXCH_VALUE:
+ tcg_gen_mov_i64(cpu_spregs[TILEGX_SPR_CMPEXCH], load_gr(dc, rsrc));
+ return;
+ default:
+ qemu_log_mask(LOG_UNIMP, "UNIMP mtspr 0x%x.\n", imm14);
+ }
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void gen_cmpltsi(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int8_t imm8)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltsi r%d, r%d, %d\n",
+ rdst, rsrc, imm8);
+ tcg_gen_setcondi_i64(TCG_COND_LTU, dest_gr(dc, rdst), load_gr(dc, rsrc),
+ (int64_t)imm8);
+}
+
+static void gen_cmpltui(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t imm8)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltui r%d, r%d, %d\n",
+ rdst, rsrc, imm8);
+ tcg_gen_setcondi_i64(TCG_COND_LTU,
+ dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
+}
+
+static void gen_cmpeqi(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t imm8)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpeqi r%d, r%d, %d\n", rdst, rsrc, imm8);
+ tcg_gen_setcondi_i64(TCG_COND_EQ,
+ dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
+}
+
+static void gen_cmp(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb, TCGCond cond)
+{
+ const char *prefix;
+
+ switch (cond) {
+ case TCG_COND_EQ:
+ prefix = "eq";
+ break;
+ case TCG_COND_LE:
+ prefix = "les";
+ break;
+ case TCG_COND_LEU:
+ prefix = "leu";
+ break;
+ case TCG_COND_LT:
+ prefix = "lts";
+ break;
+ case TCG_COND_LTU:
+ prefix = "ltu";
+ break;
+ case TCG_COND_NE:
+ prefix = "ne";
+ break;
+ default:
+ dc->exception = TILEGX_EXCP_OPCODE_UNKNOWN;
+ return;
+ }
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmp%s r%d, r%d, r%d\n",
+ prefix, rdst, rsrc, rsrcb);
+ tcg_gen_setcond_i64(cond, dest_gr(dc, rdst), load_gr(dc, rsrc),
+ load_gr(dc, rsrcb));
+}
+
+static void gen_exch(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb, int excp)
+{
+ const char *prefix, *width;
+
+ switch (excp) {
+ case TILEGX_EXCP_OPCODE_EXCH4:
+ prefix = "";
+ width = "4";
+ break;
+ case TILEGX_EXCP_OPCODE_EXCH:
+ prefix = "";
+ width = "";
+ break;
+ case TILEGX_EXCP_OPCODE_CMPEXCH4:
+ prefix = "cmp";
+ width = "4";
+ break;
+ case TILEGX_EXCP_OPCODE_CMPEXCH:
+ prefix = "cmp";
+ width = "";
+ break;
+ default:
+ dc->exception = TILEGX_EXCP_OPCODE_UNKNOWN;
+ return;
+ }
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "%sexch%s r%d, r%d, r%d\n",
+ prefix, width, rdst, rsrc, rsrcb);
+#if defined(CONFIG_USER_ONLY)
+ tcg_gen_movi_i32(cpu_cmpexch, (rdst << 16) | (rsrc << 8) | rsrcb);
+ tcg_gen_movi_i64(cpu_pc, dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
+ dc->exception = excp;
+#else
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+#endif
+}
+
+/*
+ * uint64_t output = 0;
+ * uint32_t counter;
+ * for (counter = 0; counter < (WORD_SIZE / BYTE_SIZE); counter++)
+ * {
+ * int8_t srca = getByte (rf[SrcA], counter);
+ * int8_t srcb = signExtend8 (Imm8);
+ * output = setByte (output, counter, ((srca == srcb) ? 1 : 0));
+ * }
+ * rf[Dest] = output;
+ */
+static void gen_v1cmpeqi(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t imm8)
+{
+ int count;
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "v1cmpeqi r%d, r%d, %d\n",
+ rdst, rsrc, imm8);
+
+ tcg_gen_movi_i64(vdst, 0);
+
+ for (count = 0; count < 8; count++) {
+ tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), (8 - count - 1) * 8);
+ tcg_gen_andi_i64(tmp, tmp, 0xff);
+ tcg_gen_setcondi_i64(TCG_COND_EQ, tmp, tmp, imm8);
+ tcg_gen_or_i64(vdst, vdst, tmp);
+ tcg_gen_shli_i64(vdst, vdst, 8);
+ }
+
+ tcg_temp_free_i64(tmp);
+}
+
+/*
+ * Description
+ *
+ * Interleave the four low-order bytes of the first operand with the four
+ * low-order bytes of the second operand. The low-order byte of the result will
+ * be the low-order byte of the second operand. For example if the first operand
+ * contains the packed bytes {A7,A6,A5,A4,A3,A2,A1,A0} and the second operand
+ * contains the packed bytes {B7,B6,B5,B4,B3,B2,B1,B0} then the result will be
+ * {A3,B3,A2,B2,A1,B1,A0,B0}.
+ *
+ * Functional Description
+ *
+ * uint64_t output = 0;
+ * uint32_t counter;
+ * for (counter = 0; counter < (WORD_SIZE / BYTE_SIZE); counter++)
+ * {
+ * bool asel = ((counter & 1) == 1);
+ * int in_sel = 0 + counter / 2;
+ * int8_t srca = getByte (rf[SrcA], in_sel);
+ * int8_t srcb = getByte (rf[SrcB], in_sel);
+ * output = setByte (output, counter, (asel ? srca : srcb));
+ * }
+ * rf[Dest] = output;
+ */
+static void gen_v1int_l(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ int count;
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "v1int_l r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+
+ tcg_gen_movi_i64(vdst, 0);
+ for (count = 0; count < 4; count++) {
+
+ tcg_gen_shli_i64(vdst, vdst, 8);
+
+ tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), (4 - count - 1) * 8);
+ tcg_gen_andi_i64(tmp, tmp, 0xff);
+ tcg_gen_or_i64(vdst, vdst, tmp);
+ tcg_gen_shli_i64(vdst, vdst, 8);
+
+ tcg_gen_shri_i64(tmp, load_gr(dc, rsrcb), (4 - count - 1) * 8);
+ tcg_gen_andi_i64(tmp, tmp, 0xff);
+ tcg_gen_or_i64(vdst, vdst, tmp);
+ }
+ tcg_temp_free_i64(tmp);
+}
+
+/*
+ * Functional Description
+ *
+ * uint64_t output = 0;
+ * uint32_t counter;
+ * for (counter = 0; counter < (WORD_SIZE / 32); counter++)
+ * {
+ * bool asel = ((counter & 1) == 1);
+ * int in_sel = 0 + counter / 2;
+ * int32_t srca = get4Byte (rf[SrcA], in_sel);
+ * int32_t srcb = get4Byte (rf[SrcB], in_sel);
+ * output = set4Byte (output, counter, (asel ? srca : srcb));
+ * }
+ * rf[Dest] = output;
+*/
+
+static void gen_v4int_l(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "v4int_l r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrc), 0xffffffff);
+ tcg_gen_shli_i64(vdst, vdst, 8);
+ tcg_gen_andi_i64(tmp, load_gr(dc, rsrcb), 0xffffffff);
+ tcg_gen_or_i64(vdst, vdst, tmp);
+
+ tcg_temp_free_i64(tmp);
+}
+
+static void gen_cmoveqz(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmoveqz r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+ tcg_gen_movcond_i64(TCG_COND_EQ, dest_gr(dc, rdst), load_gr(dc, rsrc),
+ load_zero(dc), load_gr(dc, rsrcb), load_gr(dc, rdst));
+}
+
+static void gen_cmovnez(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmovnez r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+ tcg_gen_movcond_i64(TCG_COND_NE, dest_gr(dc, rdst), load_gr(dc, rsrc),
+ load_zero(dc), load_gr(dc, rsrcb), load_gr(dc, rdst));
+}
+
+static void gen_mnz(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "mnz r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 1);
+ tcg_gen_movcond_i64(TCG_COND_NE, vdst, load_gr(dc, rsrc),
+ load_zero(dc), vdst, load_zero(dc));
+}
+
+static void gen_mz(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "mz r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 1);
+ tcg_gen_movcond_i64(TCG_COND_EQ, vdst, load_gr(dc, rsrc),
+ load_zero(dc), vdst, load_zero(dc));
+}
+
+static void gen_add(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "add r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_add_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_addimm(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int16_t imm)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "add(l)i r%d, r%d, %d\n",
+ rdst, rsrc, imm);
+ tcg_gen_addi_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), imm);
+}
+
+static void gen_addx(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ /* High bits have no effect with low bits, so addx and addxsc are merged. */
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "addx(sc) r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+ tcg_gen_add_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb));
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+static void gen_addximm(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int16_t imm)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "addx(l)i r%d, r%d, %d\n",
+ rdst, rsrc, imm);
+ tcg_gen_addi_i64(vdst, load_gr(dc, rsrc), imm);
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+static void gen_sub(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "sub r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_sub_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_subx(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "subx r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_sub_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb));
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+/*
+ * uint64_t mask = 0;
+ * int64_t background = ((rf[SrcA] >> BFEnd) & 1) ? -1ULL : 0ULL;
+ * mask = ((-1ULL) ^ ((-1ULL << ((BFEnd - BFStart) & 63)) << 1));
+ * uint64_t rot_src = (((uint64_t) rf[SrcA]) >> BFStart)
+ * | (rf[SrcA] << (64 - BFStart));
+ * rf[Dest] = (rot_src & mask) | (background & ~mask);
+ */
+static void gen_bfexts(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc,
+ uint8_t start, uint8_t end)
+{
+ uint64_t mask = (-1ULL) ^ ((-1ULL << ((end - start) & 63)) << 1);
+ TCGv vldst = tcg_temp_local_new_i64();
+ TCGv tmp = tcg_temp_local_new_i64();
+ TCGv pmsk = tcg_const_i64(-1ULL);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "bfexts r%d, r%d, %d, %d\n",
+ rdst, rsrc, start, end);
+
+ tcg_gen_rotri_i64(vldst, load_gr(dc, rsrc), start);
+ tcg_gen_andi_i64(vldst, vldst, mask);
+
+ tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), end);
+ tcg_gen_andi_i64(tmp, tmp, 1);
+ tcg_gen_movcond_i64(TCG_COND_EQ, tmp, tmp, load_zero(dc),
+ load_zero(dc), pmsk);
+ tcg_gen_andi_i64(tmp, tmp, ~mask);
+ tcg_gen_or_i64(dest_gr(dc, rdst), vldst, tmp);
+
+ tcg_temp_free_i64(pmsk);
+ tcg_temp_free_i64(tmp);
+ tcg_temp_free_i64(vldst);
+}
+
+/*
+ * The related functional description for bfextu in isa document:
+ *
+ * uint64_t mask = 0;
+ * mask = (-1ULL) ^ ((-1ULL << ((BFEnd - BFStart) & 63)) << 1);
+ * uint64_t rot_src = (((uint64_t) rf[SrcA]) >> BFStart)
+ * | (rf[SrcA] << (64 - BFStart));
+ * rf[Dest] = rot_src & mask;
+ */
+static void gen_bfextu(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc,
+ uint8_t start, uint8_t end)
+{
+ uint64_t mask = (-1ULL) ^ ((-1ULL << ((end - start) & 63)) << 1);
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "bfextu r%d, r%d, %d, %d\n",
+ rdst, rsrc, start, end);
+
+ tcg_gen_rotri_i64(vdst, load_gr(dc, rsrc), start);
+ tcg_gen_andi_i64(vdst, vdst, mask);
+}
+
+/*
+ * mask = (start <= end) ? ((-1ULL << start) ^ ((-1ULL << end) << 1))
+ * : ((-1ULL << start) | (-1ULL >> (63 - end)));
+ * uint64_t rot_src = (rf[SrcA] << start)
+ * | ((uint64_t) rf[SrcA] >> (64 - start));
+ * rf[Dest] = (rot_src & mask) | (rf[Dest] & (-1ULL ^ mask));
+ */
+static void gen_bfins(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc,
+ uint8_t start, uint8_t end)
+{
+ uint64_t mask = (start <= end) ? ((-1ULL << start) ^ ((-1ULL << end) << 1))
+ : ((-1ULL << start) | (-1ULL >> (63 - end)));
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "bfins r%d, r%d, %d, %d\n",
+ rdst, rsrc, start, end);
+
+ tcg_gen_rotli_i64(tmp, load_gr(dc, rsrc), start);
+
+ tcg_gen_andi_i64(tmp, tmp, mask);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rdst), -1ULL ^ mask);
+ tcg_gen_or_i64(vdst, vdst, tmp);
+
+ tcg_temp_free_i64(tmp);
+}
+
+static void gen_or(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "or r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_or_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_ori(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int8_t imm8)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "ori r%d, r%d, %d\n", rdst, rsrc, imm8);
+ tcg_gen_ori_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
+}
+
+static void gen_xor(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "xor r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_xor_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_nor(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "nor r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_nor_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_and(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "and r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_and_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), load_gr(dc, rsrcb));
+}
+
+static void gen_andi(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int8_t imm8)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "andi r%d, r%d, %d\n", rdst, rsrc, imm8);
+ tcg_gen_andi_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
+}
+
+static void gen_mulx(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "mulx r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+
+ tcg_gen_mul_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb));
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+/* FIXME: mul?_??_?? may need to be re-constructed, next. */
+
+static void gen_mul_hu_lu(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "mul_hu_lu r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+
+ tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), 32);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 0xffffffff);
+ tcg_gen_mul_i64(vdst, tmp, vdst);
+
+ tcg_temp_free_i64(tmp);
+}
+
+static void gen_mula_hu_lu(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "mula_hu_lu r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+
+ tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), 32);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 0xffffffff);
+ tcg_gen_mul_i64(vdst, tmp, vdst);
+ tcg_gen_add_i64(vdst, load_gr(dc, rdst), vdst);
+
+ tcg_temp_free_i64(tmp);
+}
+
+static void gen_mula_lu_lu(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "mula_lu_lu r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+
+ tcg_gen_andi_i64(tmp, load_gr(dc, rsrc), 0xffffffff);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 0xffffffff);
+ tcg_gen_mul_i64(vdst, tmp, vdst);
+ tcg_gen_add_i64(vdst, load_gr(dc, rdst), vdst);
+
+ tcg_temp_free_i64(tmp);
+}
+
+static void gen_shlx(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shlx r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 31);
+ tcg_gen_shl_i64(vdst, load_gr(dc, rsrc), vdst);
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+static void gen_shl(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shl r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 63);
+ tcg_gen_shl_i64(vdst, load_gr(dc, rsrc), vdst);
+}
+
+static void gen_shli(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t shamt)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shli r%d, r%d, %u\n", rdst, rsrc, shamt);
+ tcg_gen_shli_i64(vdst, load_gr(dc, rsrc), shamt);
+}
+
+static void gen_shlxi(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t shamt)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shlxi r%d, r%d, %u\n", rdst, rsrc, shamt);
+ tcg_gen_shli_i64(vdst, load_gr(dc, rsrc), shamt & 31);
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+static void gen_shladd(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb,
+ uint8_t shift, uint8_t cast)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shl%dadd%s r%d, r%d, r%d\n",
+ shift, cast ? "x" : "", rdst, rsrc, rsrcb);
+ tcg_gen_shli_i64(vdst, load_gr(dc, rsrc), shift);
+ tcg_gen_add_i64(vdst, vdst, load_gr(dc, rsrcb));
+ if (cast) {
+ tcg_gen_ext32s_i64(vdst, vdst);
+ }
+}
+
+static void gen_shl16insli(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint16_t uimm16)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shl16insli r%d, r%d, 0x%x\n",
+ rdst, rsrc, uimm16);
+ tcg_gen_shli_i64(vdst, load_gr(dc, rsrc), 16);
+ tcg_gen_ori_i64(vdst, vdst, uimm16);
+}
+
+static void gen_shrs(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shrs r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 63);
+ tcg_gen_sar_i64(vdst, load_gr(dc, rsrc), vdst);
+}
+
+static void gen_shrux(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shrux r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 31);
+ tcg_gen_andi_i64(tmp, load_gr(dc, rsrc), 0xffffffff);
+ tcg_gen_shr_i64(vdst, tmp, vdst);
+ tcg_gen_ext32s_i64(vdst, vdst);
+
+ tcg_temp_free_i64(tmp);
+}
+
+static void gen_shru(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shru r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrcb), 63);
+ tcg_gen_shr_i64(vdst, load_gr(dc, rsrc), vdst);
+}
+
+/*
+ Functional Description
+ uint64_t a = rf[SrcA];
+ uint64_t b = rf[SrcB];
+ uint64_t d = rf[Dest];
+ uint64_t output = 0;
+ unsigned int counter;
+ for (counter = 0; counter < (WORD_SIZE / BYTE_SIZE); counter++)
+ {
+ int sel = getByte (b, counter) & 0xf;
+ uint8_t byte = (sel < 8) ? getByte (d, sel) : getByte (a, (sel - 8));
+ output = setByte (output, counter, byte);
+ }
+ rf[Dest] = output;
+ */
+static void gen_shufflebytes(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ int counter;
+ TCGv vldst = tcg_temp_local_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shufflebytes r%d, r%d, r%d\n",
+ rdst, rsrc, rsrcb);
+
+ tcg_gen_movi_i64(vldst, 0);
+ for (counter = 0; counter < 8; counter++) {
+ TCGv tmpv = tcg_temp_local_new_i64();
+ TCGv tmp8 = tcg_const_i64(8);
+ TCGv tmps = tcg_temp_new_i64();
+ TCGv tmpd = tcg_temp_new_i64();
+ TCGv sel = tcg_temp_new_i64();
+
+ tcg_gen_shli_i64(vldst, vldst, 8);
+
+ tcg_gen_shri_i64(sel, load_gr(dc, rsrcb), (8 - counter - 1) * 8);
+ tcg_gen_andi_i64(sel, sel, 0x0f);
+
+ tcg_gen_andi_i64(tmpd, sel, 0x07);
+ tcg_gen_muli_i64(tmpd, tmpd, 8);
+ tcg_gen_mov_i64(tmps, tmpd);
+ tcg_gen_shr_i64(tmpd, load_gr(dc, rdst), tmpd);
+ tcg_gen_shr_i64(tmps, load_gr(dc, rsrc), tmps);
+
+ tcg_gen_movcond_i64(TCG_COND_LT, tmpv, sel, tmp8, tmpd, tmps);
+
+ tcg_gen_or_i64(vldst, vldst, tmpv);
+
+ tcg_temp_free_i64(sel);
+ tcg_temp_free_i64(tmp8);
+ tcg_temp_free_i64(tmps);
+ tcg_temp_free_i64(tmpd);
+ tcg_temp_free_i64(tmpv);
+ }
+ tcg_gen_mov_i64(dest_gr(dc, rdst), vldst);
+ tcg_temp_free_i64(vldst);
+}
+
+static void gen_shrsi(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t shamt)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shrsi r%d, r%d, %u\n", rdst, rsrc, shamt);
+ tcg_gen_sari_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), shamt);
+}
+
+static void gen_shrui(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t shamt)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shrui r%d, r%d, %u\n", rdst, rsrc, shamt);
+ tcg_gen_shri_i64(dest_gr(dc, rdst), load_gr(dc, rsrc), shamt);
+}
+
+static void gen_shruxi(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t shamt)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "shruxi r%d, r%d, %u\n",
+ rdst, rsrc, shamt);
+ tcg_gen_andi_i64(vdst, load_gr(dc, rsrc), 0xffffffff);
+ tcg_gen_shri_i64(vdst, vdst, shamt & 31);
+ tcg_gen_ext32s_i64(vdst, vdst);
+}
+
+static void gen_dblalign(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
+{
+ TCGv vdst = dest_gr(dc, rdst);
+ TCGv mask = tcg_temp_new_i64();
+ TCGv tmp = tcg_temp_new_i64();
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "dblalign r%d, r%d, r%d\n", rdst, rsrc, rsrcb);
+
+ tcg_gen_andi_i64(mask, load_gr(dc, rsrcb), 7);
+ tcg_gen_muli_i64(mask, mask, 8);
+ tcg_gen_shr_i64(vdst, load_gr(dc, rdst), mask);
+
+ tcg_gen_movi_i64(tmp, 64);
+ tcg_gen_sub_i64(mask, tmp, mask);
+ tcg_gen_shl_i64(mask, load_gr(dc, rsrc), mask);
+
+ tcg_gen_or_i64(vdst, vdst, mask);
+
+ tcg_temp_free_i64(tmp);
+ tcg_temp_free_i64(mask);
+}
+
+static void gen_cntlz(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "cntlz r%d, r%d\n", rdst, rsrc);
+ gen_helper_cntlz(dest_gr(dc, rdst), load_gr(dc, rsrc));
+}
+
+static void gen_cnttz(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "ctz r%d, r%d\n", rdst, rsrc);
+ gen_helper_cnttz(dest_gr(dc, rdst), load_gr(dc, rsrc));
+}
+
+/* FIXME: At present, skip unalignment checking */
+static void gen_ld(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, TCGMemOp ops)
+{
+ const char *prefix;
+
+ switch (ops) {
+ case MO_LEQ:
+ prefix = "(na)";
+ break;
+ case MO_UB:
+ prefix = "1u";
+ break;
+ case MO_SB:
+ prefix = "1s";
+ break;
+ case MO_LESW:
+ prefix = "2s";
+ break;
+ case MO_LEUW:
+ prefix = "2u";
+ break;
+ case MO_LESL:
+ prefix = "4s";
+ break;
+ case MO_LEUL:
+ prefix = "4u";
+ break;
+ default:
+ dc->exception = TILEGX_EXCP_OPCODE_UNKNOWN;
+ return;
+ }
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "ld%s r%d, r%d\n", prefix, rdst, rsrc);
+ tcg_gen_qemu_ld_i64(dest_gr(dc, rdst), load_gr(dc, rsrc),
+ MMU_USER_IDX, ops);
+}
+
+/* FIXME: At present, skip unalignment checking */
+static void gen_ld_add(struct DisasContext *dc,
+ uint8_t rdst, uint8_t rsrc, int8_t imm8, TCGMemOp ops)
+{
+ const char *prefix;
+
+ switch (ops) {
+ case MO_LEQ:
+ prefix = "(na)";
+ break;
+ case MO_UB:
+ prefix = "1u";
+ break;
+ case MO_SB:
+ prefix = "1s";
+ break;
+ case MO_LESW:
+ prefix = "2s";
+ break;
+ case MO_LEUW:
+ prefix = "2u";
+ break;
+ case MO_LESL:
+ prefix = "4s";
+ break;
+ case MO_LEUL:
+ prefix = "4u";
+ break;
+ default:
+ dc->exception = TILEGX_EXCP_OPCODE_UNKNOWN;
+ return;
+ }
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "ld%s_add r%d, r%d, %d\n",
+ prefix, rdst, rsrc, imm8);
+
+ tcg_gen_qemu_ld_i64(dest_gr(dc, rdst), load_gr(dc, rsrc),
+ MMU_USER_IDX, ops);
+ /*
+ * Each pipe only have one temp val which is already used, and it is only
+ * for pipe X1, so can use real register
+ */
+ if (rsrc < TILEGX_R_COUNT) {
+ tcg_gen_addi_i64(cpu_regs[rsrc], load_gr(dc, rsrc), imm8);
+ }
+}
+
+/* FIXME: At present, skip unalignment checking */
+static void gen_st(struct DisasContext *dc,
+ uint8_t rsrc, uint8_t rsrcb, TCGMemOp ops)
+{
+ const char *prefix;
+
+ switch (ops) {
+ case MO_LEQ:
+ prefix = "";
+ break;
+ case MO_UB:
+ prefix = "1";
+ break;
+ case MO_LEUW:
+ prefix = "2";
+ break;
+ case MO_LEUL:
+ prefix = "4";
+ break;
+ default:
+ dc->exception = TILEGX_EXCP_OPCODE_UNKNOWN;
+ return;
+ }
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "st%s r%d, r%d\n", prefix, rsrc, rsrcb);
+ tcg_gen_qemu_st_i64(load_gr(dc, rsrcb), load_gr(dc, rsrc),
+ MMU_USER_IDX, ops);
+}
+
+/* FIXME: At present, skip unalignment checking */
+static void gen_st_add(struct DisasContext *dc,
+ uint8_t rsrc, uint8_t rsrcb, uint8_t imm8, TCGMemOp ops)
+{
+ const char *prefix;
+
+ switch (ops) {
+ case MO_LEQ:
+ prefix = "";
+ break;
+ case MO_UB:
+ prefix = "1";
+ break;
+ case MO_LEUW:
+ prefix = "2";
+ break;
+ case MO_LEUL:
+ prefix = "4";
+ break;
+ default:
+ dc->exception = TILEGX_EXCP_OPCODE_UNKNOWN;
+ return;
+ }
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "st%s_add r%d, r%d, %d\n",
+ prefix, rsrc, rsrcb, imm8);
+ tcg_gen_qemu_st_i64(load_gr(dc, rsrcb), load_gr(dc, rsrc),
+ MMU_USER_IDX, ops);
+ tcg_gen_addi_i64(dest_gr(dc, rsrc), load_gr(dc, rsrc), imm8);
+}
+
+static void gen_lnk(struct DisasContext *dc, uint8_t rdst)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "lnk r%d\n", rdst);
+ tcg_gen_movi_i64(dest_gr(dc, rdst), dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
+}
+
+static void gen_b(struct DisasContext *dc,
+ uint8_t rsrc, int32_t off, TCGCond cond)
+{
+ uint64_t pos = dc->pc + (int64_t)off * TILEGX_BUNDLE_SIZE_IN_BYTES;
+ const char *prefix;
+
+ switch (cond) {
+ case TCG_COND_EQ:
+ prefix = "eqz(t)";
+ break;
+ case TCG_COND_NE:
+ prefix = "nez(t)";
+ break;
+ case TCG_COND_LE:
+ prefix = "lez(t)";
+ break;
+ case TCG_COND_LT:
+ prefix = "ltz(t)";
+ break;
+ default:
+ dc->exception = TILEGX_EXCP_OPCODE_UNKNOWN;
+ return;
+ }
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM,
+ "b%s r%d, %d ([" TARGET_FMT_lx "] %s)\n",
+ prefix, rsrc, off, pos, lookup_symbol(pos));
+
+ dc->jmp.dest = tcg_temp_new_i64();
+ dc->jmp.val1 = tcg_temp_new_i64();
+ dc->jmp.val2 = tcg_temp_new_i64();
+
+ dc->jmp.cond = cond;
+ tcg_gen_movi_i64(dc->jmp.dest, pos);
+ tcg_gen_mov_i64(dc->jmp.val1, load_gr(dc, rsrc));
+ tcg_gen_movi_i64(dc->jmp.val2, 0);
+
+ return;
+}
+
+static int gen_blbc(struct DisasContext *dc, uint8_t rsrc, int32_t off)
+{
+ uint64_t pos = dc->pc + (int64_t)off * TILEGX_BUNDLE_SIZE_IN_BYTES;
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM,
+ "blbc(t) r%d, %d ([" TARGET_FMT_lx "] %s)\n",
+ rsrc, off, pos, lookup_symbol(pos));
+
+ dc->jmp.dest = tcg_temp_new_i64();
+ dc->jmp.val1 = tcg_temp_new_i64();
+ dc->jmp.val2 = tcg_temp_new_i64();
+
+ dc->jmp.cond = TCG_COND_EQ;
+ tcg_gen_movi_i64(dc->jmp.dest, pos);
+ tcg_gen_mov_i64(dc->jmp.val1, load_gr(dc, rsrc));
+ tcg_gen_andi_i64(dc->jmp.val1, dc->jmp.val1, 1ULL);
+ tcg_gen_movi_i64(dc->jmp.val2, 0);
+
+ return 0;
+}
+
+static int gen_blbs(struct DisasContext *dc, uint8_t rsrc, int32_t off)
+{
+ uint64_t pos = dc->pc + (int64_t)off * TILEGX_BUNDLE_SIZE_IN_BYTES;
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM,
+ "blbs(t) r%d, %d ([" TARGET_FMT_lx "] %s)\n",
+ rsrc, off, pos, lookup_symbol(pos));
+
+ dc->jmp.dest = tcg_temp_new_i64();
+ dc->jmp.val1 = tcg_temp_new_i64();
+ dc->jmp.val2 = tcg_temp_new_i64();
+
+ dc->jmp.cond = TCG_COND_NE;
+ tcg_gen_movi_i64(dc->jmp.dest, pos);
+ tcg_gen_mov_i64(dc->jmp.val1, load_gr(dc, rsrc));
+ tcg_gen_andi_i64(dc->jmp.val1, dc->jmp.val1, 1ULL);
+ tcg_gen_movi_i64(dc->jmp.val2, 0);
+
+ return 0;
+}
+
+/* For memory fence */
+static void gen_mf(struct DisasContext *dc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "mf\n");
+ /* FIXME: Do we need any implementation for it? I guess no. */
+}
+
+/* Write hitt 64 bytes. It is about cache. */
+static void gen_wh64(struct DisasContext *dc, uint8_t rsrc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "wh64 r%d\n", rsrc);
+ /* FIXME: Do we need any implementation for it? I guess no. */
+}
+
+static void gen_jr(struct DisasContext *dc, uint8_t rsrc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "jr(p) r%d\n", rsrc);
+
+ dc->jmp.dest = tcg_temp_new_i64();
+
+ dc->jmp.cond = TCG_COND_ALWAYS;
+ tcg_gen_andi_i64(dc->jmp.dest, load_gr(dc, rsrc), ~(sizeof(uint64_t) - 1));
+}
+
+static void gen_jalr(struct DisasContext *dc, uint8_t rsrc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "jalr(p) r%d\n", rsrc);
+
+ dc->jmp.dest = tcg_temp_new_i64();
+ tcg_gen_movi_i64(dest_gr(dc, TILEGX_R_LR),
+ dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
+
+ dc->jmp.cond = TCG_COND_ALWAYS;
+ tcg_gen_andi_i64(dc->jmp.dest, load_gr(dc, rsrc), ~(sizeof(uint64_t) - 1));
+}
+
+static void gen_j(struct DisasContext *dc, int off)
+{
+ uint64_t pos = dc->pc + (int64_t)off * TILEGX_BUNDLE_SIZE_IN_BYTES;
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM,
+ "j %d ([" TARGET_FMT_lx "] %s)\n",
+ off, pos, lookup_symbol(pos));
+
+ dc->jmp.dest = tcg_temp_new_i64();
+
+ dc->jmp.cond = TCG_COND_ALWAYS;
+ tcg_gen_movi_i64(dc->jmp.dest, pos);
+}
+
+static void gen_jal(struct DisasContext *dc, int off)
+{
+ uint64_t pos = dc->pc + (int64_t)off * TILEGX_BUNDLE_SIZE_IN_BYTES;
+
+ qemu_log_mask(CPU_LOG_TB_IN_ASM,
+ "jal %d ([" TARGET_FMT_lx "] %s)\n",
+ off, pos, lookup_symbol(pos));
+
+
+ dc->jmp.dest = tcg_temp_new_i64();
+ tcg_gen_movi_i64(dest_gr(dc, TILEGX_R_LR),
+ dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
+
+ dc->jmp.cond = TCG_COND_ALWAYS;
+ tcg_gen_movi_i64(dc->jmp.dest, pos);
+}
+
+static void gen_swint1(struct DisasContext *dc)
+{
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "swint1\n");
+
+ tcg_gen_movi_i64(cpu_pc, dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
+ dc->exception = TILEGX_EXCP_SYSCALL;
+}
+
+static void decode_rrr_0_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case ADD_RRR_0_OPCODE_Y0:
+ gen_add(dc, rdst, rsrc, rsrcb);
+ return;
+ case ADDX_RRR_0_OPCODE_Y0:
+ gen_addx(dc, rdst, rsrc, rsrcb);
+ return;
+ case SUBX_RRR_0_OPCODE_Y0:
+ gen_subx(dc, rdst, rsrc, rsrcb);
+ return;
+ case SUB_RRR_0_OPCODE_Y0:
+ gen_sub(dc, rdst, rsrc, rsrcb);
+ return;
+ default:
+ break;
+ }
+
+ qemu_log_mask(LOG_UNIMP, "UNIMP rrr_0_opcode_y0, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_rrr_1_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case UNARY_RRR_1_OPCODE_Y0:
+ switch (get_UnaryOpcodeExtension_Y0(bundle)) {
+ case CNTLZ_UNARY_OPCODE_Y0:
+ gen_cntlz(dc, rdst, rsrc);
+ return;
+ case CNTTZ_UNARY_OPCODE_Y0:
+ gen_cnttz(dc, rdst, rsrc);
+ return;
+ case NOP_UNARY_OPCODE_Y0:
+ case FNOP_UNARY_OPCODE_Y0:
+ if (!rsrc && !rdst) {
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "(f)nop\n");
+ return;
+ }
+ break;
+ case FSINGLE_PACK1_UNARY_OPCODE_Y0:
+ case PCNT_UNARY_OPCODE_Y0:
+ case REVBITS_UNARY_OPCODE_Y0:
+ case REVBYTES_UNARY_OPCODE_Y0:
+ case TBLIDXB0_UNARY_OPCODE_Y0:
+ case TBLIDXB1_UNARY_OPCODE_Y0:
+ case TBLIDXB2_UNARY_OPCODE_Y0:
+ case TBLIDXB3_UNARY_OPCODE_Y0:
+ default:
+ break;
+ }
+ break;
+ case SHL1ADD_RRR_1_OPCODE_Y0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 1, 0);
+ return;
+ case SHL2ADD_RRR_1_OPCODE_Y0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 2, 0);
+ return;
+ case SHL3ADD_RRR_1_OPCODE_Y0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 3, 0);
+ return;
+ default:
+ break;
+ }
+
+ qemu_log_mask(LOG_UNIMP, "UNIMP rrr_1_opcode_y0, [" FMT64X "]\n", bundle);
+}
+
+static void decode_rrr_2_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case CMPLES_RRR_2_OPCODE_Y0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE);
+ return;
+ case CMPLEU_RRR_2_OPCODE_Y0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU);
+ return;
+ case CMPLTS_RRR_2_OPCODE_Y0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT);
+ return;
+ case CMPLTU_RRR_2_OPCODE_Y0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU);
+ return;
+ default:
+ break;
+ }
+
+ qemu_log_mask(LOG_UNIMP, "UNIMP rrr_2_opcode_y0, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_rrr_3_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case CMPEQ_RRR_3_OPCODE_Y0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ);
+ return;
+ case CMPNE_RRR_3_OPCODE_Y0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE);
+ return;
+ case MULX_RRR_3_OPCODE_Y0:
+ gen_mulx(dc, rdst, rsrc, rsrcb);
+ return;
+ case MULAX_RRR_3_OPCODE_Y0:
+ default:
+ break;
+ }
+
+ qemu_log_mask(LOG_UNIMP, "UNIMP rrr_3_opcode_y0, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_rrr_4_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case CMOVNEZ_RRR_4_OPCODE_Y0:
+ gen_cmovnez(dc, rdst, rsrc, rsrcb);
+ return;
+ case CMOVEQZ_RRR_4_OPCODE_Y0:
+ gen_cmoveqz(dc, rdst, rsrc, rsrcb);
+ return;
+ case MNZ_RRR_4_OPCODE_Y0:
+ gen_mnz(dc, rdst, rsrc, rsrcb);
+ return;
+ case MZ_RRR_4_OPCODE_Y0:
+ gen_mz(dc, rdst, rsrc, rsrcb);
+ return;
+ default:
+ break;
+ }
+
+ qemu_log_mask(LOG_UNIMP, "UNIMP rrr_4_opcode_y0, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_rrr_5_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case OR_RRR_5_OPCODE_Y0:
+ gen_or(dc, rdst, rsrc, rsrcb);
+ return;
+ case AND_RRR_5_OPCODE_Y0:
+ gen_and(dc, rdst, rsrc, rsrcb);
+ return;
+ case NOR_RRR_5_OPCODE_Y0:
+ gen_nor(dc, rdst, rsrc, rsrcb);
+ return;
+ case XOR_RRR_5_OPCODE_Y0:
+ gen_xor(dc, rdst, rsrc, rsrcb);
+ return;
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP rrr_5_opcode_y0, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_rrr_6_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rsrcb = get_SrcB_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_RRROpcodeExtension_Y0(bundle)) {
+ case SHL_RRR_6_OPCODE_Y0:
+ gen_shl(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRS_RRR_6_OPCODE_Y0:
+ gen_shrs(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRU_RRR_6_OPCODE_Y0:
+ gen_shru(dc, rdst, rsrc, rsrcb);
+ return;
+ case ROTL_RRR_6_OPCODE_Y0:
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP rrr_6_opcode_y0, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_shift_opcode_y0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t shamt = get_ShAmt_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+
+ switch (get_ShiftOpcodeExtension_Y0(bundle)) {
+ case SHLI_SHIFT_OPCODE_Y0:
+ gen_shli(dc, rdst, rsrc, shamt);
+ return;
+ case SHRUI_SHIFT_OPCODE_Y0:
+ gen_shrui(dc, rdst, rsrc, shamt);
+ return;
+ case SHRSI_SHIFT_OPCODE_Y0:
+ gen_shrsi(dc, rdst, rsrc, shamt);
+ return;
+ case ROTLI_SHIFT_OPCODE_Y0:
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP shift_opcode_y0, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_rrr_1_opcode_y1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y1(bundle);
+ uint8_t rsrcb = get_SrcB_Y1(bundle);
+ uint8_t rdst = get_Dest_Y1(bundle);
+
+ switch (get_RRROpcodeExtension_Y1(bundle)) {
+ case UNARY_RRR_1_OPCODE_Y1:
+ switch (get_UnaryOpcodeExtension_Y1(bundle)) {
+ case NOP_UNARY_OPCODE_Y1:
+ case FNOP_UNARY_OPCODE_Y1:
+ if (!rsrc && !rdst) {
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "(f)nop\n");
+ return;
+ }
+ break;
+ case JALRP_UNARY_OPCODE_Y1:
+ case JALR_UNARY_OPCODE_Y1:
+ if (!rdst) {
+ gen_jalr(dc, rsrc);
+ return;
+ }
+ break;
+ case JR_UNARY_OPCODE_Y1:
+ case JRP_UNARY_OPCODE_Y1:
+ if (!rdst) {
+ gen_jr(dc, rsrc);
+ return;
+ }
+ break;
+ case LNK_UNARY_OPCODE_Y1:
+ if (!rsrc) {
+ gen_lnk(dc, rdst);
+ return;
+ }
+ break;
+ case ILL_UNARY_OPCODE_Y1:
+ default:
+ break;
+ }
+ break;
+ case SHL1ADD_RRR_1_OPCODE_Y1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 1, 0);
+ return;
+ case SHL2ADD_RRR_1_OPCODE_Y1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 2, 0);
+ return;
+ case SHL3ADD_RRR_1_OPCODE_Y1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 3, 0);
+ return;
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP rrr_1_opcode_y1, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_rrr_3_opcode_y1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y1(bundle);
+ uint8_t rsrcb = get_SrcB_Y1(bundle);
+ uint8_t rdst = get_Dest_Y1(bundle);
+
+ switch (get_RRROpcodeExtension_Y1(bundle)) {
+ case CMPEQ_RRR_3_OPCODE_Y1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ);
+ return;
+ case CMPNE_RRR_3_OPCODE_Y1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE);
+ return;
+ default:
+ break;
+ }
+
+ qemu_log_mask(LOG_UNIMP, "UNIMP rrr_3_opcode_y1, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_rrr_5_opcode_y1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_Y1(bundle);
+ uint8_t rsrcb = get_SrcB_Y1(bundle);
+ uint8_t rdst = get_Dest_Y1(bundle);
+
+ switch (get_RRROpcodeExtension_Y1(bundle)) {
+ case OR_RRR_5_OPCODE_Y1:
+ gen_or(dc, rdst, rsrc, rsrcb);
+ return;
+ case AND_RRR_5_OPCODE_Y1:
+ gen_and(dc, rdst, rsrc, rsrcb);
+ return;
+ case NOR_RRR_5_OPCODE_Y1:
+ gen_nor(dc, rdst, rsrc, rsrcb);
+ return;
+ case XOR_RRR_5_OPCODE_Y1:
+ gen_xor(dc, rdst, rsrc, rsrcb);
+ return;
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP rrr_5_opcode_y1, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_ldst0_opcode_y2(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrca = get_SrcA_Y2(bundle);
+ uint8_t rsrcbdst = get_SrcBDest_Y2(bundle);
+
+ switch (get_Mode(bundle)) {
+ case MODE_OPCODE_YA2:
+ gen_ld(dc, rsrcbdst, rsrca, MO_SB);
+ return;
+ case MODE_OPCODE_YC2:
+ gen_st(dc, rsrca, rsrcbdst, MO_UB);
+ return;
+ case MODE_OPCODE_YB2:
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP ldst0_opcode_y2, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_ldst1_opcode_y2(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = (uint8_t)get_SrcA_Y2(bundle);
+ uint8_t rsrcbdst = (uint8_t)get_SrcBDest_Y2(bundle);
+
+ switch (get_Mode(bundle)) {
+ case MODE_OPCODE_YA2:
+ if (rsrcbdst == TILEGX_R_ZERO) {
+ /* Need nothing */
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "prefetch r%d\n", rsrc);
+ return;
+ }
+ gen_ld(dc, rsrcbdst, rsrc, MO_UB);
+ return;
+ case MODE_OPCODE_YB2:
+ gen_ld(dc, rsrcbdst, rsrc, MO_LESL);
+ return;
+ case MODE_OPCODE_YC2:
+ gen_st(dc, rsrc, rsrcbdst, MO_LEUW);
+ return;
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP ldst1_opcode_y2, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_ldst2_opcode_y2(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrca = get_SrcA_Y2(bundle);
+ uint8_t rsrcbdst = get_SrcBDest_Y2(bundle);
+
+ switch (get_Mode(bundle)) {
+ case MODE_OPCODE_YC2:
+ gen_st(dc, rsrca, rsrcbdst, MO_LEUL);
+ return;
+ case MODE_OPCODE_YA2:
+ case MODE_OPCODE_YB2:
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP ldst2_opcode_y2, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_ldst3_opcode_y2(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrca = get_SrcA_Y2(bundle);
+ uint8_t rsrcbdst = get_SrcBDest_Y2(bundle);
+
+ switch (get_Mode(bundle)) {
+ case MODE_OPCODE_YB2:
+ gen_ld(dc, rsrcbdst, rsrca, MO_LEQ);
+ return;
+ case MODE_OPCODE_YC2:
+ gen_st(dc, rsrca, rsrcbdst, MO_LEQ);
+ return;
+ case MODE_OPCODE_YA2:
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP ldst3_opcode_y2, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_bf_opcode_x0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X0(bundle);
+ uint8_t rdst = get_Dest_X0(bundle);
+ uint8_t start = get_BFStart_X0(bundle);
+ uint8_t end = get_BFEnd_X0(bundle);
+
+ switch (get_BFOpcodeExtension_X0(bundle)) {
+ case BFEXTS_BF_OPCODE_X0:
+ gen_bfexts(dc, rdst, rsrc, start, end);
+ return;
+ case BFEXTU_BF_OPCODE_X0:
+ gen_bfextu(dc, rdst, rsrc, start, end);
+ return;
+ case BFINS_BF_OPCODE_X0:
+ gen_bfins(dc, rdst, rsrc, start, end);
+ return;
+ case MM_BF_OPCODE_X0:
+ default:
+ break;
+ }
+
+ qemu_log_mask(LOG_UNIMP, "UNIMP bf_opcode_x0, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_imm8_opcode_x0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X0(bundle);
+ uint8_t rdst = get_Dest_X0(bundle);
+ int8_t imm8 = get_Imm8_X0(bundle);
+
+ switch (get_Imm8OpcodeExtension_X0(bundle)) {
+ case ADDI_IMM8_OPCODE_X0:
+ gen_addimm(dc, rdst, rsrc, imm8);
+ return;
+ case ADDXI_IMM8_OPCODE_X0:
+ gen_addximm(dc, rdst, rsrc, imm8);
+ return;
+ case ANDI_IMM8_OPCODE_X0:
+ gen_andi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPEQI_IMM8_OPCODE_X0:
+ gen_cmpeqi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPLTSI_IMM8_OPCODE_X0:
+ gen_cmpltsi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPLTUI_IMM8_OPCODE_X0:
+ gen_cmpltui(dc, rdst, rsrc, imm8);
+ return;
+ case ORI_IMM8_OPCODE_X0:
+ gen_ori(dc, rdst, rsrc, imm8);
+ return;
+ case V1ADDI_IMM8_OPCODE_X0:
+ case V1CMPEQI_IMM8_OPCODE_X0:
+ case V1CMPLTSI_IMM8_OPCODE_X0:
+ case V1CMPLTUI_IMM8_OPCODE_X0:
+ case V1MAXUI_IMM8_OPCODE_X0:
+ case V1MINUI_IMM8_OPCODE_X0:
+ case V2ADDI_IMM8_OPCODE_X0:
+ case V2CMPEQI_IMM8_OPCODE_X0:
+ case V2CMPLTSI_IMM8_OPCODE_X0:
+ case V2CMPLTUI_IMM8_OPCODE_X0:
+ case V2MAXSI_IMM8_OPCODE_X0:
+ case V2MINSI_IMM8_OPCODE_X0:
+ case XORI_IMM8_OPCODE_X0:
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP imm8_opcode_x0, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_rrr_0_opcode_x0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X0(bundle);
+ uint8_t rsrcb = get_SrcB_X0(bundle);
+ uint8_t rdst = get_Dest_X0(bundle);
+
+ switch (get_RRROpcodeExtension_X0(bundle)) {
+ case ADD_RRR_0_OPCODE_X0:
+ gen_add(dc, rdst, rsrc, rsrcb);
+ return;
+ case ADDXSC_RRR_0_OPCODE_X0:
+ case ADDX_RRR_0_OPCODE_X0:
+ gen_addx(dc, rdst, rsrc, rsrcb);
+ return;
+ case AND_RRR_0_OPCODE_X0:
+ gen_and(dc, rdst, rsrc, rsrcb);
+ return;
+ case CMOVEQZ_RRR_0_OPCODE_X0:
+ gen_cmoveqz(dc, rdst, rsrc, rsrcb);
+ return;
+ case CMOVNEZ_RRR_0_OPCODE_X0:
+ gen_cmovnez(dc, rdst, rsrc, rsrcb);
+ return;
+ case CMPEQ_RRR_0_OPCODE_X0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ);
+ return;
+ case CMPLES_RRR_0_OPCODE_X0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE);
+ return;
+ case CMPLTS_RRR_0_OPCODE_X0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT);
+ return;
+ case CMPLTU_RRR_0_OPCODE_X0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU);
+ return;
+ case CMPLEU_RRR_0_OPCODE_X0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU);
+ return;
+ case CMPNE_RRR_0_OPCODE_X0:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE);
+ return;
+ case DBLALIGN_RRR_0_OPCODE_X0:
+ gen_dblalign(dc, rdst, rsrc, rsrcb);
+ return;
+ case MNZ_RRR_0_OPCODE_X0:
+ gen_mnz(dc, rdst, rsrc, rsrcb);
+ return;
+ case MZ_RRR_0_OPCODE_X0:
+ gen_mz(dc, rdst, rsrc, rsrcb);
+ return;
+ case MULX_RRR_0_OPCODE_X0:
+ gen_mulx(dc, rdst, rsrc, rsrcb);
+ return;
+ case MULA_HU_LU_RRR_0_OPCODE_X0:
+ gen_mula_hu_lu(dc, rdst, rsrc, rsrcb);
+ return;
+ case MULA_LU_LU_RRR_0_OPCODE_X0:
+ gen_mula_lu_lu(dc, rdst, rsrc, rsrcb);
+ return;
+ case MUL_HU_LU_RRR_0_OPCODE_X0:
+ gen_mul_hu_lu(dc, rdst, rsrc, rsrcb);
+ return;
+ case NOR_RRR_0_OPCODE_X0:
+ gen_nor(dc, rdst, rsrc, rsrcb);
+ return;
+ case OR_RRR_0_OPCODE_X0:
+ gen_or(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHL_RRR_0_OPCODE_X0:
+ gen_shl(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHL1ADDX_RRR_0_OPCODE_X0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 1, 1);
+ return;
+ case SHL1ADD_RRR_0_OPCODE_X0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 1, 0);
+ return;
+ case SHL2ADDX_RRR_0_OPCODE_X0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 2, 1);
+ return;
+ case SHL2ADD_RRR_0_OPCODE_X0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 2, 0);
+ return;
+ case SHL3ADDX_RRR_0_OPCODE_X0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 3, 1);
+ return;
+ case SHL3ADD_RRR_0_OPCODE_X0:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 3, 0);
+ return;
+ case SHLX_RRR_0_OPCODE_X0:
+ gen_shlx(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRS_RRR_0_OPCODE_X0:
+ gen_shrs(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRUX_RRR_0_OPCODE_X0:
+ gen_shrux(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRU_RRR_0_OPCODE_X0:
+ gen_shru(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHUFFLEBYTES_RRR_0_OPCODE_X0:
+ gen_shufflebytes(dc, rdst, rsrc, rsrcb);
+ return;
+ case SUBX_RRR_0_OPCODE_X0:
+ gen_subx(dc, rdst, rsrc, rsrcb);
+ return;
+ case SUB_RRR_0_OPCODE_X0:
+ gen_sub(dc, rdst, rsrc, rsrcb);
+ return;
+ case UNARY_RRR_0_OPCODE_X0:
+ switch (get_UnaryOpcodeExtension_X0(bundle)) {
+ case CNTLZ_UNARY_OPCODE_X0:
+ gen_cntlz(dc, rdst, rsrc);
+ return;
+ case CNTTZ_UNARY_OPCODE_X0:
+ gen_cnttz(dc, rdst, rsrc);
+ return;
+ case FNOP_UNARY_OPCODE_X0:
+ case NOP_UNARY_OPCODE_X0:
+ if (!rsrc && !rdst) {
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "(f)nop\n");
+ return;
+ }
+ break;
+ case FSINGLE_PACK1_UNARY_OPCODE_X0:
+ case PCNT_UNARY_OPCODE_X0:
+ case REVBITS_UNARY_OPCODE_X0:
+ case REVBYTES_UNARY_OPCODE_X0:
+ case TBLIDXB0_UNARY_OPCODE_X0:
+ case TBLIDXB1_UNARY_OPCODE_X0:
+ case TBLIDXB2_UNARY_OPCODE_X0:
+ case TBLIDXB3_UNARY_OPCODE_X0:
+ default:
+ break;
+ }
+ break;
+ case V1INT_L_RRR_0_OPCODE_X0:
+ gen_v1int_l(dc, rdst, rsrc, rsrcb);
+ return;
+ case V4INT_L_RRR_0_OPCODE_X0:
+ gen_v4int_l(dc, rdst, rsrc, rsrcb);
+ return;
+ case XOR_RRR_0_OPCODE_X0:
+ gen_xor(dc, rdst, rsrc, rsrcb);
+ return;
+ case CMULAF_RRR_0_OPCODE_X0:
+ case CMULA_RRR_0_OPCODE_X0:
+ case CMULFR_RRR_0_OPCODE_X0:
+ case CMULF_RRR_0_OPCODE_X0:
+ case CMULHR_RRR_0_OPCODE_X0:
+ case CMULH_RRR_0_OPCODE_X0:
+ case CMUL_RRR_0_OPCODE_X0:
+ case CRC32_32_RRR_0_OPCODE_X0:
+ case CRC32_8_RRR_0_OPCODE_X0:
+ case DBLALIGN2_RRR_0_OPCODE_X0:
+ case DBLALIGN4_RRR_0_OPCODE_X0:
+ case DBLALIGN6_RRR_0_OPCODE_X0:
+ case FDOUBLE_ADDSUB_RRR_0_OPCODE_X0:
+ case FDOUBLE_ADD_FLAGS_RRR_0_OPCODE_X0:
+ case FDOUBLE_MUL_FLAGS_RRR_0_OPCODE_X0:
+ case FDOUBLE_PACK1_RRR_0_OPCODE_X0:
+ case FDOUBLE_PACK2_RRR_0_OPCODE_X0:
+ case FDOUBLE_SUB_FLAGS_RRR_0_OPCODE_X0:
+ case FDOUBLE_UNPACK_MAX_RRR_0_OPCODE_X0:
+ case FDOUBLE_UNPACK_MIN_RRR_0_OPCODE_X0:
+ case FSINGLE_ADD1_RRR_0_OPCODE_X0:
+ case FSINGLE_ADDSUB2_RRR_0_OPCODE_X0:
+ case FSINGLE_MUL1_RRR_0_OPCODE_X0:
+ case FSINGLE_MUL2_RRR_0_OPCODE_X0:
+ case FSINGLE_PACK2_RRR_0_OPCODE_X0:
+ case FSINGLE_SUB1_RRR_0_OPCODE_X0:
+ case MULAX_RRR_0_OPCODE_X0:
+ case MULA_HS_HS_RRR_0_OPCODE_X0:
+ case MULA_HS_HU_RRR_0_OPCODE_X0:
+ case MULA_HS_LS_RRR_0_OPCODE_X0:
+ case MULA_HS_LU_RRR_0_OPCODE_X0:
+ case MULA_HU_HU_RRR_0_OPCODE_X0:
+ case MULA_HU_LS_RRR_0_OPCODE_X0:
+ case MULA_LS_LS_RRR_0_OPCODE_X0:
+ case MULA_LS_LU_RRR_0_OPCODE_X0:
+ case MUL_HS_HS_RRR_0_OPCODE_X0:
+ case MUL_HS_HU_RRR_0_OPCODE_X0:
+ case MUL_HS_LS_RRR_0_OPCODE_X0:
+ case MUL_HS_LU_RRR_0_OPCODE_X0:
+ case MUL_HU_HU_RRR_0_OPCODE_X0:
+ case MUL_HU_LS_RRR_0_OPCODE_X0:
+ case MUL_LS_LS_RRR_0_OPCODE_X0:
+ case MUL_LS_LU_RRR_0_OPCODE_X0:
+ case MUL_LU_LU_RRR_0_OPCODE_X0:
+ case ROTL_RRR_0_OPCODE_X0:
+ case SUBXSC_RRR_0_OPCODE_X0:
+ case V1ADDUC_RRR_0_OPCODE_X0:
+ case V1ADD_RRR_0_OPCODE_X0:
+ case V1ADIFFU_RRR_0_OPCODE_X0:
+ case V1AVGU_RRR_0_OPCODE_X0:
+ case V1CMPEQ_RRR_0_OPCODE_X0:
+ case V1CMPLES_RRR_0_OPCODE_X0:
+ case V1CMPLEU_RRR_0_OPCODE_X0:
+ case V1CMPLTS_RRR_0_OPCODE_X0:
+ case V1CMPLTU_RRR_0_OPCODE_X0:
+ case V1CMPNE_RRR_0_OPCODE_X0:
+ case V1DDOTPUSA_RRR_0_OPCODE_X0:
+ case V1DDOTPUS_RRR_0_OPCODE_X0:
+ case V1DOTPA_RRR_0_OPCODE_X0:
+ case V1DOTPUSA_RRR_0_OPCODE_X0:
+ case V1DOTPUS_RRR_0_OPCODE_X0:
+ case V1DOTP_RRR_0_OPCODE_X0:
+ case V1MAXU_RRR_0_OPCODE_X0:
+ case V1MINU_RRR_0_OPCODE_X0:
+ case V1MNZ_RRR_0_OPCODE_X0:
+ case V1MULTU_RRR_0_OPCODE_X0:
+ case V1MULUS_RRR_0_OPCODE_X0:
+ case V1MULU_RRR_0_OPCODE_X0:
+ case V1MZ_RRR_0_OPCODE_X0:
+ case V1SADAU_RRR_0_OPCODE_X0:
+ case V1SADU_RRR_0_OPCODE_X0:
+ case V1SHL_RRR_0_OPCODE_X0:
+ case V1SHRS_RRR_0_OPCODE_X0:
+ case V1SHRU_RRR_0_OPCODE_X0:
+ case V1SUBUC_RRR_0_OPCODE_X0:
+ case V1SUB_RRR_0_OPCODE_X0:
+ case V1INT_H_RRR_0_OPCODE_X0:
+ case V2INT_H_RRR_0_OPCODE_X0:
+ case V2INT_L_RRR_0_OPCODE_X0:
+ case V4INT_H_RRR_0_OPCODE_X0:
+ case V2ADDSC_RRR_0_OPCODE_X0:
+ case V2ADD_RRR_0_OPCODE_X0:
+ case V2ADIFFS_RRR_0_OPCODE_X0:
+ case V2AVGS_RRR_0_OPCODE_X0:
+ case V2CMPEQ_RRR_0_OPCODE_X0:
+ case V2CMPLES_RRR_0_OPCODE_X0:
+ case V2CMPLEU_RRR_0_OPCODE_X0:
+ case V2CMPLTS_RRR_0_OPCODE_X0:
+ case V2CMPLTU_RRR_0_OPCODE_X0:
+ case V2CMPNE_RRR_0_OPCODE_X0:
+ case V2DOTPA_RRR_0_OPCODE_X0:
+ case V2DOTP_RRR_0_OPCODE_X0:
+ case V2MAXS_RRR_0_OPCODE_X0:
+ case V2MINS_RRR_0_OPCODE_X0:
+ case V2MNZ_RRR_0_OPCODE_X0:
+ case V2MULFSC_RRR_0_OPCODE_X0:
+ case V2MULS_RRR_0_OPCODE_X0:
+ case V2MULTS_RRR_0_OPCODE_X0:
+ case V2MZ_RRR_0_OPCODE_X0:
+ case V2PACKH_RRR_0_OPCODE_X0:
+ case V2PACKL_RRR_0_OPCODE_X0:
+ case V2PACKUC_RRR_0_OPCODE_X0:
+ case V2SADAS_RRR_0_OPCODE_X0:
+ case V2SADAU_RRR_0_OPCODE_X0:
+ case V2SADS_RRR_0_OPCODE_X0:
+ case V2SADU_RRR_0_OPCODE_X0:
+ case V2SHLSC_RRR_0_OPCODE_X0:
+ case V2SHL_RRR_0_OPCODE_X0:
+ case V2SHRS_RRR_0_OPCODE_X0:
+ case V2SHRU_RRR_0_OPCODE_X0:
+ case V2SUBSC_RRR_0_OPCODE_X0:
+ case V2SUB_RRR_0_OPCODE_X0:
+ case V4ADDSC_RRR_0_OPCODE_X0:
+ case V4ADD_RRR_0_OPCODE_X0:
+ case V4PACKSC_RRR_0_OPCODE_X0:
+ case V4SHLSC_RRR_0_OPCODE_X0:
+ case V4SHL_RRR_0_OPCODE_X0:
+ case V4SHRS_RRR_0_OPCODE_X0:
+ case V4SHRU_RRR_0_OPCODE_X0:
+ case V4SUBSC_RRR_0_OPCODE_X0:
+ case V4SUB_RRR_0_OPCODE_X0:
+ case V1DDOTPUA_RRR_0_OPCODE_X0:
+ case V1DDOTPU_RRR_0_OPCODE_X0:
+ case V1DOTPUA_RRR_0_OPCODE_X0:
+ case V1DOTPU_RRR_0_OPCODE_X0:
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP rrr_0_opcode_x0, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_shift_opcode_x0(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X0(bundle);
+ uint8_t rdst = get_Dest_X0(bundle);
+ uint8_t shamt = get_ShAmt_X0(bundle);
+
+ switch (get_ShiftOpcodeExtension_X0(bundle)) {
+ case SHLI_SHIFT_OPCODE_X0:
+ gen_shli(dc, rdst, rsrc, shamt);
+ return;
+ case SHLXI_SHIFT_OPCODE_X0:
+ gen_shlxi(dc, rdst, rsrc, shamt);
+ return;
+ case SHRSI_SHIFT_OPCODE_X0:
+ gen_shrsi(dc, rdst, rsrc, shamt);
+ return;
+ case SHRUI_SHIFT_OPCODE_X0:
+ gen_shrui(dc, rdst, rsrc, shamt);
+ return;
+ case SHRUXI_SHIFT_OPCODE_X0:
+ gen_shruxi(dc, rdst, rsrc, shamt);
+ return;
+ case ROTLI_SHIFT_OPCODE_X0:
+ case V1SHLI_SHIFT_OPCODE_X0:
+ case V1SHRSI_SHIFT_OPCODE_X0:
+ case V1SHRUI_SHIFT_OPCODE_X0:
+ case V2SHLI_SHIFT_OPCODE_X0:
+ case V2SHRSI_SHIFT_OPCODE_X0:
+ case V2SHRUI_SHIFT_OPCODE_X0:
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP shift_opcode_x0, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_branch_opcode_x1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t src = get_SrcA_X1(bundle);
+ int32_t off = sign_extend(get_BrOff_X1(bundle), 17);
+
+ switch (get_BrType_X1(bundle)) {
+ case BEQZT_BRANCH_OPCODE_X1:
+ case BEQZ_BRANCH_OPCODE_X1:
+ gen_b(dc, src, off, TCG_COND_EQ);
+ return;
+ case BNEZT_BRANCH_OPCODE_X1:
+ case BNEZ_BRANCH_OPCODE_X1:
+ gen_b(dc, src, off, TCG_COND_NE);
+ return;
+ case BLBCT_BRANCH_OPCODE_X1:
+ case BLBC_BRANCH_OPCODE_X1:
+ gen_blbc(dc, src, off);
+ return;
+ case BLBST_BRANCH_OPCODE_X1:
+ case BLBS_BRANCH_OPCODE_X1:
+ gen_blbs(dc, src, off);
+ return;
+ case BLEZT_BRANCH_OPCODE_X1:
+ case BLEZ_BRANCH_OPCODE_X1:
+ gen_b(dc, src, off, TCG_COND_LE);
+ return;
+ case BLTZT_BRANCH_OPCODE_X1:
+ case BLTZ_BRANCH_OPCODE_X1:
+ gen_b(dc, src, off, TCG_COND_LT);
+ return;
+ case BGEZT_BRANCH_OPCODE_X1:
+ case BGEZ_BRANCH_OPCODE_X1:
+ case BGTZT_BRANCH_OPCODE_X1:
+ case BGTZ_BRANCH_OPCODE_X1:
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP branch_opcode_x1, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_imm8_opcode_x1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X1(bundle);
+ uint8_t rdst = get_Dest_X1(bundle);
+ int8_t imm8 = get_Imm8_X1(bundle);
+ uint8_t rsrcb = get_SrcB_X1(bundle);
+ int8_t dimm8 = get_Dest_Imm8_X1(bundle);
+
+ switch (get_Imm8OpcodeExtension_X1(bundle)) {
+ case ADDI_IMM8_OPCODE_X1:
+ gen_addimm(dc, rdst, rsrc, imm8);
+ return;
+ case ADDXI_IMM8_OPCODE_X1:
+ gen_addximm(dc, rdst, rsrc, imm8);
+ return;
+ case ANDI_IMM8_OPCODE_X1:
+ gen_andi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPEQI_IMM8_OPCODE_X1:
+ gen_cmpeqi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPLTSI_IMM8_OPCODE_X1:
+ gen_cmpltsi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPLTUI_IMM8_OPCODE_X1:
+ gen_cmpltui(dc, rdst, rsrc, imm8);
+ return;
+ case LD1S_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_SB);
+ return;
+ case LD1U_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_UB);
+ return;
+ case LD2S_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_LESW);
+ return;
+ case LD2U_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_LEUW);
+ return;
+ case LD4S_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_LESL);
+ return;
+ case LD4U_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_LEUL);
+ return;
+ case LD_ADD_IMM8_OPCODE_X1:
+ gen_ld_add(dc, rdst, rsrc, imm8, MO_LEQ);
+ return;
+ case MFSPR_IMM8_OPCODE_X1:
+ gen_mfspr(dc, rdst, get_MF_Imm14_X1(bundle));
+ return;
+ case MTSPR_IMM8_OPCODE_X1:
+ gen_mtspr(dc, rsrc, get_MT_Imm14_X1(bundle));
+ return;
+ case ORI_IMM8_OPCODE_X1:
+ gen_ori(dc, rdst, rsrc, imm8);
+ return;
+ case ST_ADD_IMM8_OPCODE_X1:
+ gen_st_add(dc, rsrc, rsrcb, dimm8, MO_LEQ);
+ return;
+ case ST1_ADD_IMM8_OPCODE_X1:
+ gen_st_add(dc, rsrc, rsrcb, dimm8, MO_UB);
+ return;
+ case ST2_ADD_IMM8_OPCODE_X1:
+ gen_st_add(dc, rsrc, rsrcb, dimm8, MO_LEUW);
+ return;
+ case ST4_ADD_IMM8_OPCODE_X1:
+ gen_st_add(dc, rsrc, rsrcb, dimm8, MO_LEUL);
+ return;
+ case V1CMPEQI_IMM8_OPCODE_X1:
+ gen_v1cmpeqi(dc, rdst, rsrc, imm8);
+ return;
+ case LDNT1S_ADD_IMM8_OPCODE_X1:
+ case LDNT1U_ADD_IMM8_OPCODE_X1:
+ case LDNT2S_ADD_IMM8_OPCODE_X1:
+ case LDNT2U_ADD_IMM8_OPCODE_X1:
+ case LDNT4S_ADD_IMM8_OPCODE_X1:
+ case LDNT4U_ADD_IMM8_OPCODE_X1:
+ case LDNT_ADD_IMM8_OPCODE_X1:
+ case LWNA_ADD_IMM8_OPCODE_X1:
+ case STNT1_ADD_IMM8_OPCODE_X1:
+ case STNT2_ADD_IMM8_OPCODE_X1:
+ case STNT4_ADD_IMM8_OPCODE_X1:
+ case STNT_ADD_IMM8_OPCODE_X1:
+ case V1ADDI_IMM8_OPCODE_X1:
+ case V1CMPLTSI_IMM8_OPCODE_X1:
+ case V1CMPLTUI_IMM8_OPCODE_X1:
+ case V1MAXUI_IMM8_OPCODE_X1:
+ case V1MINUI_IMM8_OPCODE_X1:
+ case V2ADDI_IMM8_OPCODE_X1:
+ case V2CMPEQI_IMM8_OPCODE_X1:
+ case V2CMPLTSI_IMM8_OPCODE_X1:
+ case V2CMPLTUI_IMM8_OPCODE_X1:
+ case V2MAXSI_IMM8_OPCODE_X1:
+ case V2MINSI_IMM8_OPCODE_X1:
+ case XORI_IMM8_OPCODE_X1:
+ default:
+ qemu_log_mask(LOG_UNIMP, "UNIMP opcode ext: %u\n",
+ get_Imm8OpcodeExtension_X1(bundle));
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP imm8_opcode_x1, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_jump_opcode_x1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ int off = sign_extend(get_JumpOff_X1(bundle), 27);
+
+ switch (get_JumpOpcodeExtension_X1(bundle)) {
+ case JAL_JUMP_OPCODE_X1:
+ gen_jal(dc, off);
+ return;
+ case J_JUMP_OPCODE_X1:
+ gen_j(dc, off);
+ return;
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP jump_opcode_x1, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_rrr_0_opcode_x1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X1(bundle);
+ uint8_t rsrcb = get_SrcB_X1(bundle);
+ uint8_t rdst = get_Dest_X1(bundle);
+
+ switch (get_RRROpcodeExtension_X1(bundle)) {
+ case ADDX_RRR_0_OPCODE_X1:
+ case ADDXSC_RRR_0_OPCODE_X1:
+ gen_addx(dc, rdst, rsrc, rsrcb);
+ return;
+ case ADD_RRR_0_OPCODE_X1:
+ gen_add(dc, rdst, rsrc, rsrcb);
+ return;
+ case AND_RRR_0_OPCODE_X1:
+ gen_and(dc, rdst, rsrc, rsrcb);
+ return;
+ case CMPEQ_RRR_0_OPCODE_X1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_EQ);
+ return;
+ case CMPLES_RRR_0_OPCODE_X1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LE);
+ return;
+ case CMPLEU_RRR_0_OPCODE_X1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LEU);
+ return;
+ case CMPEXCH4_RRR_0_OPCODE_X1:
+ gen_exch(dc, rdst, rsrc, rsrcb, TILEGX_EXCP_OPCODE_CMPEXCH4);
+ return;
+ case CMPEXCH_RRR_0_OPCODE_X1:
+ gen_exch(dc, rdst, rsrc, rsrcb, TILEGX_EXCP_OPCODE_CMPEXCH);
+ return;
+ case CMPLTS_RRR_0_OPCODE_X1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LT);
+ return;
+ case CMPLTU_RRR_0_OPCODE_X1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_LTU);
+ return;
+ case EXCH4_RRR_0_OPCODE_X1:
+ gen_exch(dc, rdst, rsrc, rsrcb, TILEGX_EXCP_OPCODE_EXCH4);
+ return;
+ case EXCH_RRR_0_OPCODE_X1:
+ gen_exch(dc, rdst, rsrc, rsrcb, TILEGX_EXCP_OPCODE_EXCH);
+ return;
+ case MZ_RRR_0_OPCODE_X1:
+ gen_mz(dc, rdst, rsrc, rsrcb);
+ return;
+ case MNZ_RRR_0_OPCODE_X1:
+ gen_mnz(dc, rdst, rsrc, rsrcb);
+ return;
+ case NOR_RRR_0_OPCODE_X1:
+ gen_nor(dc, rdst, rsrc, rsrcb);
+ return;
+ case OR_RRR_0_OPCODE_X1:
+ gen_or(dc, rdst, rsrc, rsrcb);
+ return;
+ case CMPNE_RRR_0_OPCODE_X1:
+ gen_cmp(dc, rdst, rsrc, rsrcb, TCG_COND_NE);
+ return;
+ case SHL1ADDX_RRR_0_OPCODE_X1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 1, 1);
+ return;
+ case SHL1ADD_RRR_0_OPCODE_X1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 1, 0);
+ return;
+ case SHL2ADDX_RRR_0_OPCODE_X1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 2, 1);
+ return;
+ case SHL2ADD_RRR_0_OPCODE_X1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 2, 0);
+ return;
+ case SHL3ADDX_RRR_0_OPCODE_X1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 3, 1);
+ return;
+ case SHL3ADD_RRR_0_OPCODE_X1:
+ gen_shladd(dc, rdst, rsrc, rsrcb, 3, 0);
+ return;
+ case SHLX_RRR_0_OPCODE_X1:
+ gen_shlx(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHL_RRR_0_OPCODE_X1:
+ gen_shl(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRS_RRR_0_OPCODE_X1:
+ gen_shrs(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRUX_RRR_0_OPCODE_X1:
+ gen_shrux(dc, rdst, rsrc, rsrcb);
+ return;
+ case SHRU_RRR_0_OPCODE_X1:
+ gen_shru(dc, rdst, rsrc, rsrcb);
+ return;
+ case ST1_RRR_0_OPCODE_X1:
+ if (!rdst) {
+ gen_st(dc, rsrc, rsrcb, MO_UB);
+ return;
+ }
+ break;
+ case ST2_RRR_0_OPCODE_X1:
+ if (!rdst) {
+ gen_st(dc, rsrc, rsrcb, MO_LEUW);
+ return;
+ }
+ break;
+ case ST4_RRR_0_OPCODE_X1:
+ if (!rdst) {
+ gen_st(dc, rsrc, rsrcb, MO_LEUL);
+ return;
+ }
+ break;
+ case ST_RRR_0_OPCODE_X1:
+ if (!rdst) {
+ gen_st(dc, rsrc, rsrcb, MO_LEQ);
+ return;
+ }
+ break;
+ case SUB_RRR_0_OPCODE_X1:
+ gen_sub(dc, rdst, rsrc, rsrcb);
+ return;
+ case SUBX_RRR_0_OPCODE_X1:
+ gen_subx(dc, rdst, rsrc, rsrcb);
+ return;
+ case UNARY_RRR_0_OPCODE_X1:
+ switch (get_UnaryOpcodeExtension_X1(bundle)) {
+ case NOP_UNARY_OPCODE_X1:
+ case FNOP_UNARY_OPCODE_X1:
+ if (!rdst && !rsrc) {
+ qemu_log_mask(CPU_LOG_TB_IN_ASM, "(f)nop\n");
+ return;
+ }
+ break;
+ case JALRP_UNARY_OPCODE_X1:
+ case JALR_UNARY_OPCODE_X1:
+ if (!rdst) {
+ gen_jalr(dc, rsrc);
+ return;
+ }
+ break;
+ case JRP_UNARY_OPCODE_X1:
+ case JR_UNARY_OPCODE_X1:
+ if (!rdst) {
+ gen_jr(dc, rsrc);
+ return;
+ }
+ break;
+ case LD1S_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_SB);
+ return;
+ case LD1U_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_UB);
+ return;
+ case LD2S_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_LESW);
+ return;
+ case LD2U_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_LEUW);
+ return;
+ case LD4U_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_LEUL);
+ return;
+ case LD4S_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_LESL);
+ return;
+ case LDNA_UNARY_OPCODE_X1:
+ case LD_UNARY_OPCODE_X1:
+ gen_ld(dc, rdst, rsrc, MO_LEQ);
+ return;
+ case LNK_UNARY_OPCODE_X1:
+ if (!rsrc) {
+ gen_lnk(dc, rdst);
+ return;
+ }
+ break;
+ case MF_UNARY_OPCODE_X1:
+ if (!rdst && !rsrc) {
+ gen_mf(dc);
+ return;
+ }
+ case SWINT1_UNARY_OPCODE_X1:
+ if (!rsrc && !rdst) {
+ gen_swint1(dc);
+ return;
+ }
+ break;
+ case WH64_UNARY_OPCODE_X1:
+ if (!rdst) {
+ gen_wh64(dc, rsrc);
+ return;
+ }
+ break;
+ case DRAIN_UNARY_OPCODE_X1:
+ case DTLBPR_UNARY_OPCODE_X1:
+ case FINV_UNARY_OPCODE_X1:
+ case FLUSHWB_UNARY_OPCODE_X1:
+ case FLUSH_UNARY_OPCODE_X1:
+ case ICOH_UNARY_OPCODE_X1:
+ case ILL_UNARY_OPCODE_X1:
+ case INV_UNARY_OPCODE_X1:
+ case IRET_UNARY_OPCODE_X1:
+ case LDNT1S_UNARY_OPCODE_X1:
+ case LDNT1U_UNARY_OPCODE_X1:
+ case LDNT2S_UNARY_OPCODE_X1:
+ case LDNT2U_UNARY_OPCODE_X1:
+ case LDNT4S_UNARY_OPCODE_X1:
+ case LDNT4U_UNARY_OPCODE_X1:
+ case LDNT_UNARY_OPCODE_X1:
+ case NAP_UNARY_OPCODE_X1:
+ case SWINT0_UNARY_OPCODE_X1:
+ case SWINT2_UNARY_OPCODE_X1:
+ case SWINT3_UNARY_OPCODE_X1:
+ default:
+ break;
+ }
+ break;
+ case V1INT_L_RRR_0_OPCODE_X1:
+ gen_v1int_l(dc, rdst, rsrc, rsrcb);
+ return;
+ case V4INT_L_RRR_0_OPCODE_X1:
+ gen_v4int_l(dc, rdst, rsrc, rsrcb);
+ return;
+ case XOR_RRR_0_OPCODE_X1:
+ gen_xor(dc, rdst, rsrc, rsrcb);
+ return;
+ case DBLALIGN2_RRR_0_OPCODE_X1:
+ case DBLALIGN4_RRR_0_OPCODE_X1:
+ case DBLALIGN6_RRR_0_OPCODE_X1:
+ case FETCHADD4_RRR_0_OPCODE_X1:
+ case FETCHADDGEZ4_RRR_0_OPCODE_X1:
+ case FETCHADDGEZ_RRR_0_OPCODE_X1:
+ case FETCHADD_RRR_0_OPCODE_X1:
+ case FETCHAND4_RRR_0_OPCODE_X1:
+ case FETCHAND_RRR_0_OPCODE_X1:
+ case FETCHOR4_RRR_0_OPCODE_X1:
+ case FETCHOR_RRR_0_OPCODE_X1:
+ case ROTL_RRR_0_OPCODE_X1:
+ case STNT1_RRR_0_OPCODE_X1:
+ case STNT2_RRR_0_OPCODE_X1:
+ case STNT4_RRR_0_OPCODE_X1:
+ case STNT_RRR_0_OPCODE_X1:
+ case SUBXSC_RRR_0_OPCODE_X1:
+ case V1INT_H_RRR_0_OPCODE_X1:
+ case V2INT_H_RRR_0_OPCODE_X1:
+ case V2INT_L_RRR_0_OPCODE_X1:
+ case V4INT_H_RRR_0_OPCODE_X1:
+ case V1ADDUC_RRR_0_OPCODE_X1:
+ case V1ADD_RRR_0_OPCODE_X1:
+ case V1CMPEQ_RRR_0_OPCODE_X1:
+ case V1CMPLES_RRR_0_OPCODE_X1:
+ case V1CMPLEU_RRR_0_OPCODE_X1:
+ case V1CMPLTS_RRR_0_OPCODE_X1:
+ case V1CMPLTU_RRR_0_OPCODE_X1:
+ case V1CMPNE_RRR_0_OPCODE_X1:
+ case V1MAXU_RRR_0_OPCODE_X1:
+ case V1MINU_RRR_0_OPCODE_X1:
+ case V1MNZ_RRR_0_OPCODE_X1:
+ case V1MZ_RRR_0_OPCODE_X1:
+ case V1SHL_RRR_0_OPCODE_X1:
+ case V1SHRS_RRR_0_OPCODE_X1:
+ case V1SHRU_RRR_0_OPCODE_X1:
+ case V1SUBUC_RRR_0_OPCODE_X1:
+ case V1SUB_RRR_0_OPCODE_X1:
+ case V2ADDSC_RRR_0_OPCODE_X1:
+ case V2ADD_RRR_0_OPCODE_X1:
+ case V2CMPEQ_RRR_0_OPCODE_X1:
+ case V2CMPLES_RRR_0_OPCODE_X1:
+ case V2CMPLEU_RRR_0_OPCODE_X1:
+ case V2CMPLTS_RRR_0_OPCODE_X1:
+ case V2CMPLTU_RRR_0_OPCODE_X1:
+ case V2CMPNE_RRR_0_OPCODE_X1:
+ case V2MAXS_RRR_0_OPCODE_X1:
+ case V2MINS_RRR_0_OPCODE_X1:
+ case V2MNZ_RRR_0_OPCODE_X1:
+ case V2MZ_RRR_0_OPCODE_X1:
+ case V2PACKH_RRR_0_OPCODE_X1:
+ case V2PACKL_RRR_0_OPCODE_X1:
+ case V2PACKUC_RRR_0_OPCODE_X1:
+ case V2SHLSC_RRR_0_OPCODE_X1:
+ case V2SHL_RRR_0_OPCODE_X1:
+ case V2SHRS_RRR_0_OPCODE_X1:
+ case V2SHRU_RRR_0_OPCODE_X1:
+ case V2SUBSC_RRR_0_OPCODE_X1:
+ case V2SUB_RRR_0_OPCODE_X1:
+ case V4ADDSC_RRR_0_OPCODE_X1:
+ case V4ADD_RRR_0_OPCODE_X1:
+ case V4PACKSC_RRR_0_OPCODE_X1:
+ case V4SHLSC_RRR_0_OPCODE_X1:
+ case V4SHL_RRR_0_OPCODE_X1:
+ case V4SHRS_RRR_0_OPCODE_X1:
+ case V4SHRU_RRR_0_OPCODE_X1:
+ case V4SUBSC_RRR_0_OPCODE_X1:
+ case V4SUB_RRR_0_OPCODE_X1:
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP rrr_0_opcode_x1, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_shift_opcode_x1(struct DisasContext *dc,
+ tilegx_bundle_bits bundle)
+{
+ uint8_t rsrc = get_SrcA_X1(bundle);
+ uint8_t rdst = get_Dest_X1(bundle);
+ uint8_t shamt = get_ShAmt_X1(bundle);
+
+ switch (get_ShiftOpcodeExtension_X1(bundle)) {
+ case SHLI_SHIFT_OPCODE_X1:
+ gen_shli(dc, rdst, rsrc, shamt);
+ return;
+ case SHLXI_SHIFT_OPCODE_X1:
+ gen_shlxi(dc, rdst, rsrc, shamt);
+ return;
+ case SHRSI_SHIFT_OPCODE_X1:
+ gen_shrsi(dc, rdst, rsrc, shamt);
+ return;
+ case SHRUI_SHIFT_OPCODE_X1:
+ gen_shrui(dc, rdst, rsrc, shamt);
+ return;
+ case SHRUXI_SHIFT_OPCODE_X1:
+ gen_shruxi(dc, rdst, rsrc, shamt);
+ return;
+ case ROTLI_SHIFT_OPCODE_X1:
+ case V1SHLI_SHIFT_OPCODE_X1:
+ case V1SHRSI_SHIFT_OPCODE_X1:
+ case V1SHRUI_SHIFT_OPCODE_X1:
+ case V2SHLI_SHIFT_OPCODE_X1:
+ case V2SHRSI_SHIFT_OPCODE_X1:
+ case V2SHRUI_SHIFT_OPCODE_X1:
+ default:
+ break;
+ }
+ qemu_log_mask(LOG_UNIMP, "UNIMP shift_opcode_x1, [" FMT64X "]\n", bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+}
+
+static void decode_y0(struct DisasContext *dc, tilegx_bundle_bits bundle)
+{
+ unsigned int opcode = get_Opcode_Y0(bundle);
+ uint8_t rsrc = get_SrcA_Y0(bundle);
+ uint8_t rdst = get_Dest_Y0(bundle);
+ int8_t imm8 = get_Imm8_Y0(bundle);
+
+ dc->tmp_regcur = dc->tmp_regs + 0;
+
+ switch (opcode) {
+ case ADDI_OPCODE_Y0:
+ gen_addimm(dc, rdst, rsrc, imm8);
+ return;
+ case ADDXI_OPCODE_Y0:
+ gen_addximm(dc, rdst, rsrc, imm8);
+ return;
+ case ANDI_OPCODE_Y0:
+ gen_andi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPEQI_OPCODE_Y0:
+ gen_cmpeqi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPLTSI_OPCODE_Y0:
+ gen_cmpltsi(dc, rdst, rsrc, imm8);
+ return;
+ case RRR_0_OPCODE_Y0:
+ decode_rrr_0_opcode_y0(dc, bundle);
+ return;
+ case RRR_1_OPCODE_Y0:
+ decode_rrr_1_opcode_y0(dc, bundle);
+ return;
+ case RRR_2_OPCODE_Y0:
+ decode_rrr_2_opcode_y0(dc, bundle);
+ return;
+ case RRR_3_OPCODE_Y0:
+ decode_rrr_3_opcode_y0(dc, bundle);
+ return;
+ case RRR_4_OPCODE_Y0:
+ decode_rrr_4_opcode_y0(dc, bundle);
+ return;
+ case RRR_5_OPCODE_Y0:
+ decode_rrr_5_opcode_y0(dc, bundle);
+ return;
+ case RRR_6_OPCODE_Y0:
+ decode_rrr_6_opcode_y0(dc, bundle);
+ return;
+ case SHIFT_OPCODE_Y0:
+ decode_shift_opcode_y0(dc, bundle);
+ return;
+ case RRR_7_OPCODE_Y0:
+ case RRR_8_OPCODE_Y0:
+ case RRR_9_OPCODE_Y0:
+ default:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP y0, opcode %d, bundle [" FMT64X "]\n",
+ opcode, bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+ return;
+ }
+}
+
+static void decode_y1(struct DisasContext *dc, tilegx_bundle_bits bundle)
+{
+ unsigned int opcode = get_Opcode_Y1(bundle);
+ uint8_t rsrc = get_SrcA_Y1(bundle);
+ uint8_t rdst = get_Dest_Y1(bundle);
+ int8_t imm8 = get_Imm8_Y1(bundle);
+
+ dc->tmp_regcur = dc->tmp_regs + 1;
+
+ switch (opcode) {
+ case ADDI_OPCODE_Y1:
+ gen_addimm(dc, rdst, rsrc, imm8);
+ return;
+ case ADDXI_OPCODE_Y1:
+ gen_addximm(dc, rdst, rsrc, imm8);
+ return;
+ case ANDI_OPCODE_Y1:
+ gen_andi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPEQI_OPCODE_Y1:
+ gen_cmpeqi(dc, rdst, rsrc, imm8);
+ return;
+ case CMPLTSI_OPCODE_Y1:
+ gen_cmpltsi(dc, rdst, rsrc, imm8);
+ return;
+ case RRR_1_OPCODE_Y1:
+ decode_rrr_1_opcode_y1(dc, bundle);
+ return;
+ case RRR_3_OPCODE_Y1:
+ decode_rrr_3_opcode_y1(dc, bundle);
+ return;
+ case RRR_5_OPCODE_Y1:
+ decode_rrr_5_opcode_y1(dc, bundle);
+ return;
+ case RRR_0_OPCODE_Y1:
+ case RRR_2_OPCODE_Y1:
+ case RRR_4_OPCODE_Y1:
+ case RRR_6_OPCODE_Y1:
+ case RRR_7_OPCODE_Y1:
+ case SHIFT_OPCODE_Y1:
+ default:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP y1, opcode %d, bundle [" FMT64X "]\n",
+ opcode, bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+ return;
+ }
+}
+
+static void decode_y2(struct DisasContext *dc, tilegx_bundle_bits bundle)
+{
+ unsigned int opcode = get_Opcode_Y2(bundle);
+
+ dc->tmp_regcur = dc->tmp_regs + 2;
+
+ switch (opcode) {
+ case 0: /* LD1S_OPCODE_Y2, ST1_OPCODE_Y2 */
+ decode_ldst0_opcode_y2(dc, bundle);
+ return;
+ case 1: /* LD4S_OPCODE_Y2, LD1U_OPCODE_Y2, ST2_OPCODE_Y2 */
+ decode_ldst1_opcode_y2(dc, bundle);
+ return;
+ case 2: /* LD2S_OPCODE_Y2, LD4U_OPCODE_Y2, ST4_OPCODE_Y2 */
+ decode_ldst2_opcode_y2(dc, bundle);
+ return;
+ case 3: /* LD_OPCODE_Y2, ST_OPCODE_Y2, LD2U_OPCODE_Y2 */
+ decode_ldst3_opcode_y2(dc, bundle);
+ return;
+ default:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP y2, opcode %d, bundle [" FMT64X "]\n",
+ opcode, bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+ return;
+ }
+}
+
+static void decode_x0(struct DisasContext *dc, tilegx_bundle_bits bundle)
+{
+ unsigned int opcode = get_Opcode_X0(bundle);
+ uint8_t rsrc = get_SrcA_X0(bundle);
+ uint8_t rdst = get_Dest_X0(bundle);
+ int16_t imm16 = get_Imm16_X0(bundle);
+
+
+ dc->tmp_regcur = dc->tmp_regs + 0;
+
+ switch (opcode) {
+ case ADDLI_OPCODE_X0:
+ gen_addimm(dc, rdst, rsrc, imm16);
+ return;
+ case ADDXLI_OPCODE_X0:
+ gen_addximm(dc, rdst, rsrc, imm16);
+ return;
+ case BF_OPCODE_X0:
+ decode_bf_opcode_x0(dc, bundle);
+ return;
+ case IMM8_OPCODE_X0:
+ decode_imm8_opcode_x0(dc, bundle);
+ return;
+ case RRR_0_OPCODE_X0:
+ decode_rrr_0_opcode_x0(dc, bundle);
+ return;
+ case SHIFT_OPCODE_X0:
+ decode_shift_opcode_x0(dc, bundle);
+ return;
+ case SHL16INSLI_OPCODE_X0:
+ gen_shl16insli(dc, rdst, rsrc, (uint16_t)imm16);
+ return;
+ default:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP x0, opcode %d, bundle [" FMT64X "]\n",
+ opcode, bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+ return;
+ }
+}
+
+static void decode_x1(struct DisasContext *dc, tilegx_bundle_bits bundle)
+{
+ unsigned int opcode = get_Opcode_X1(bundle);
+ uint8_t rsrc = (uint8_t)get_SrcA_X1(bundle);
+ uint8_t rdst = (uint8_t)get_Dest_X1(bundle);
+ int16_t imm16 = (int16_t)get_Imm16_X1(bundle);
+
+ dc->tmp_regcur = dc->tmp_regs + 1;
+
+ switch (opcode) {
+ case ADDLI_OPCODE_X1:
+ gen_addimm(dc, rdst, rsrc, imm16);
+ return;
+ case ADDXLI_OPCODE_X1:
+ gen_addximm(dc, rdst, rsrc, imm16);
+ return;
+ case BRANCH_OPCODE_X1:
+ decode_branch_opcode_x1(dc, bundle);
+ return;
+ case IMM8_OPCODE_X1:
+ decode_imm8_opcode_x1(dc, bundle);
+ return;
+ case JUMP_OPCODE_X1:
+ decode_jump_opcode_x1(dc, bundle);
+ return;
+ case RRR_0_OPCODE_X1:
+ decode_rrr_0_opcode_x1(dc, bundle);
+ return;
+ case SHIFT_OPCODE_X1:
+ decode_shift_opcode_x1(dc, bundle);
+ return;
+ case SHL16INSLI_OPCODE_X1:
+ gen_shl16insli(dc, rdst, rsrc, (uint16_t)imm16);
+ return;
+ default:
+ qemu_log_mask(LOG_UNIMP,
+ "UNIMP x1, opcode %d, bundle [" FMT64X "]\n",
+ opcode, bundle);
+ dc->exception = TILEGX_EXCP_OPCODE_UNIMPLEMENTED;
+ return;
+ }
+}
+
+static void translate_one_bundle(struct DisasContext *dc, uint64_t bundle)
+{
+ int i;
+ TCGv tmp;
+
+ for (i = 0; i < TILEGX_MAX_INSTRUCTIONS_PER_BUNDLE; i++) {
+ dc->tmp_regs[i].idx = TILEGX_R_NOREG;
+ TCGV_UNUSED_I64(dc->tmp_regs[i].val);
+ }
+
+ if (unlikely(qemu_loglevel_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT))) {
+ tcg_gen_debug_insn_start(dc->pc);
+ }
+
+ if (get_Mode(bundle)) {
+ decode_y0(dc, bundle);
+ decode_y1(dc, bundle);
+ decode_y2(dc, bundle);
+ } else {
+ decode_x0(dc, bundle);
+ decode_x1(dc, bundle);
+ }
+
+ for (i = 0; i < TILEGX_MAX_INSTRUCTIONS_PER_BUNDLE; i++) {
+ if (dc->tmp_regs[i].idx == TILEGX_R_NOREG) {
+ continue;
+ }
+ if (dc->tmp_regs[i].idx < TILEGX_R_COUNT) {
+ tcg_gen_mov_i64(cpu_regs[dc->tmp_regs[i].idx], dc->tmp_regs[i].val);
+ }
+ tcg_temp_free_i64(dc->tmp_regs[i].val);
+ }
+
+ if (dc->jmp.cond != TCG_COND_NEVER) {
+ if (dc->jmp.cond == TCG_COND_ALWAYS) {
+ tcg_gen_mov_i64(cpu_pc, dc->jmp.dest);
+ } else {
+ tmp = tcg_const_i64(dc->pc + TILEGX_BUNDLE_SIZE_IN_BYTES);
+ tcg_gen_movcond_i64(dc->jmp.cond, cpu_pc,
+ dc->jmp.val1, dc->jmp.val2,
+ dc->jmp.dest, tmp);
+ tcg_temp_free_i64(dc->jmp.val1);
+ tcg_temp_free_i64(dc->jmp.val2);
+ tcg_temp_free_i64(tmp);
+ }
+ tcg_temp_free_i64(dc->jmp.dest);
+ tcg_gen_exit_tb(0);
+ }
+}
+
+static inline void gen_intermediate_code_internal(TileGXCPU *cpu,
+ TranslationBlock *tb,
+ bool search_pc)
+{
+ DisasContext ctx;
+ DisasContext *dc = &ctx;
+
+ CPUTLGState *env = &cpu->env;
+ uint64_t pc_start = tb->pc;
+ uint64_t next_page_start = (pc_start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
+ int j, lj = -1;
+ int num_insns = 0;
+ int max_insns = tb->cflags & CF_COUNT_MASK;
+
+ dc->pc = pc_start;
+ dc->exception = TILEGX_EXCP_NONE;
+ dc->jmp.cond = TCG_COND_NEVER;
+ TCGV_UNUSED_I64(dc->jmp.dest);
+ TCGV_UNUSED_I64(dc->jmp.val1);
+ TCGV_UNUSED_I64(dc->jmp.val2);
+
+ if (!max_insns) {
+ max_insns = CF_COUNT_MASK;
+ }
+ gen_tb_start(tb);
+
+ do {
+ TCGV_UNUSED_I64(dc->zero);
+ if (search_pc) {
+ j = tcg_op_buf_count();
+ if (lj < j) {
+ lj++;
+ while (lj < j) {
+ tcg_ctx.gen_opc_instr_start[lj++] = 0;
+ }
+ }
+ tcg_ctx.gen_opc_pc[lj] = dc->pc;
+ tcg_ctx.gen_opc_instr_start[lj] = 1;
+ tcg_ctx.gen_opc_icount[lj] = num_insns;
+ }
+ translate_one_bundle(dc, cpu_ldq_data(env, dc->pc));
+ num_insns++;
+ dc->pc += TILEGX_BUNDLE_SIZE_IN_BYTES;
+ if (dc->exception != TILEGX_EXCP_NONE) {
+ gen_exception(dc, dc->exception);
+ break;
+ }
+ } while (dc->jmp.cond == TCG_COND_NEVER && dc->pc < next_page_start
+ && num_insns < max_insns && !tcg_op_buf_full());
+
+ gen_tb_end(tb, num_insns);
+ if (search_pc) {
+ j = tcg_op_buf_count();
+ lj++;
+ while (lj <= j) {
+ tcg_ctx.gen_opc_instr_start[lj++] = 0;
+ }
+ } else {
+ tb->size = dc->pc - pc_start;
+ tb->icount = num_insns;
+ }
+
+ return;
+}
+
+void gen_intermediate_code(CPUTLGState *env, struct TranslationBlock *tb)
+{
+ gen_intermediate_code_internal(tilegx_env_get_cpu(env), tb, false);
+}
+
+void gen_intermediate_code_pc(CPUTLGState *env, struct TranslationBlock *tb)
+{
+ gen_intermediate_code_internal(tilegx_env_get_cpu(env), tb, true);
+}
+
+void restore_state_to_opc(CPUTLGState *env, TranslationBlock *tb, int pc_pos)
+{
+ env->pc = tcg_ctx.gen_opc_pc[pc_pos];
+}
+
+void tilegx_tcg_init(void)
+{
+ int i;
+
+ cpu_env = tcg_global_reg_new_ptr(TCG_AREG0, "env");
+ cpu_pc = tcg_global_mem_new_i64(TCG_AREG0, offsetof(CPUTLGState, pc), "pc");
+ for (i = 0; i < TILEGX_R_COUNT; i++) {
+ cpu_regs[i] = tcg_global_mem_new_i64(TCG_AREG0,
+ offsetof(CPUTLGState, regs[i]),
+ reg_names[i]);
+ }
+ for (i = 0; i < TILEGX_SPR_COUNT; i++) {
+ cpu_spregs[i] = tcg_global_mem_new_i64(TCG_AREG0,
+ offsetof(CPUTLGState, spregs[i]),
+ spreg_names[i]);
+ }
+#if defined(CONFIG_USER_ONLY)
+ cpu_cmpexch = tcg_global_mem_new_i32(TCG_AREG0,
+ offsetof(CPUTLGState, cmpexch),
+ "cmpexch_info");
+#endif
+}
--
1.9.3
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [Qemu-devel] [PATCH 10/10 v10] target-tilegx: Add TILE-Gx building files
2015-05-10 22:36 [Qemu-devel] [PATCH 00/10 v10] tilegx: Firstly add tilegx target for linux-user Chen Gang
` (7 preceding siblings ...)
2015-05-10 22:45 ` [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib Chen Gang
@ 2015-05-10 22:46 ` Chen Gang
[not found] ` <BLU437-SMTP59B35F884A72A991334DBB9DC0@phx.gbl>
9 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-10 22:46 UTC (permalink / raw)
To: Peter Maydell, Andreas Färber, rth, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
Add related configuration, make files for tilegx. Now, qemu tilegx can
pass building.
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
---
configure | 2 ++
default-configs/tilegx-linux-user.mak | 1 +
target-tilegx/Makefile.objs | 1 +
3 files changed, 4 insertions(+)
create mode 100644 default-configs/tilegx-linux-user.mak
create mode 100644 target-tilegx/Makefile.objs
diff --git a/configure b/configure
index b18aa9e..0a32741 100755
--- a/configure
+++ b/configure
@@ -5243,6 +5243,8 @@ case "$target_name" in
s390x)
gdb_xml_files="s390x-core64.xml s390-acr.xml s390-fpr.xml"
;;
+ tilegx)
+ ;;
tricore)
;;
unicore32)
diff --git a/default-configs/tilegx-linux-user.mak b/default-configs/tilegx-linux-user.mak
new file mode 100644
index 0000000..3e47493
--- /dev/null
+++ b/default-configs/tilegx-linux-user.mak
@@ -0,0 +1 @@
+# Default configuration for tilegx-linux-user
diff --git a/target-tilegx/Makefile.objs b/target-tilegx/Makefile.objs
new file mode 100644
index 0000000..8b3dc76
--- /dev/null
+++ b/target-tilegx/Makefile.objs
@@ -0,0 +1 @@
+obj-y += cpu.o translate.o helper.o
--
1.9.3
^ permalink raw reply related [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
[not found] ` <BLU437-SMTP59B35F884A72A991334DBB9DC0@phx.gbl>
@ 2015-05-11 16:01 ` Richard Henderson
2015-05-11 21:06 ` Chen Gang
0 siblings, 1 reply; 32+ messages in thread
From: Richard Henderson @ 2015-05-11 16:01 UTC (permalink / raw)
To: Chen Gang, Peter Maydell, Andreas Färber, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
On 05/10/2015 03:42 PM, Chen Gang wrote:
> -static __inline unsigned int
> +static inline uint8_t
> get_BFEnd_X0(tilegx_bundle_bits num)
Do not change these casts to uint8_t. It's unnecessary churn.
r~
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib
2015-05-10 22:45 ` [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib Chen Gang
@ 2015-05-11 16:55 ` Richard Henderson
2015-05-11 21:26 ` Chen Gang
` (4 more replies)
0 siblings, 5 replies; 32+ messages in thread
From: Richard Henderson @ 2015-05-11 16:55 UTC (permalink / raw)
To: Chen Gang, Peter Maydell, Andreas Färber, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
On 05/10/2015 03:45 PM, Chen Gang wrote:
> +static void gen_cmpltsi(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, int8_t imm8)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltsi r%d, r%d, %d\n",
> + rdst, rsrc, imm8);
> + tcg_gen_setcondi_i64(TCG_COND_LTU, dest_gr(dc, rdst), load_gr(dc, rsrc),
> + (int64_t)imm8);
> +}
> +
> +static void gen_cmpltui(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltui r%d, r%d, %d\n",
> + rdst, rsrc, imm8);
> + tcg_gen_setcondi_i64(TCG_COND_LTU,
> + dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
> +}
> +
> +static void gen_cmpeqi(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
> +{
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpeqi r%d, r%d, %d\n", rdst, rsrc, imm8);
> + tcg_gen_setcondi_i64(TCG_COND_EQ,
> + dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
> +}
Merge these.
> +
> +static void gen_cmp(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb, TCGCond cond)
> +{
> + const char *prefix;
> +
> + switch (cond) {
> + case TCG_COND_EQ:
> + prefix = "eq";
> + break;
> + case TCG_COND_LE:
> + prefix = "les";
> + break;
> + case TCG_COND_LEU:
> + prefix = "leu";
> + break;
> + case TCG_COND_LT:
> + prefix = "lts";
> + break;
> + case TCG_COND_LTU:
> + prefix = "ltu";
> + break;
> + case TCG_COND_NE:
> + prefix = "ne";
> + break;
> + default:
> + dc->exception = TILEGX_EXCP_OPCODE_UNKNOWN;
> + return;
> + }
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmp%s r%d, r%d, r%d\n",
> + prefix, rdst, rsrc, rsrcb);
Better to just pass down the opcode name with the TCGCond rather than trying to
recreate it. Then there's no need for a switch, nor a need for a confusing
TILEGX_EXCP_OPCODE_UNKNOWN path.
> +static void gen_exch(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb, int excp)
> +{
> + const char *prefix, *width;
> +
> + switch (excp) {
> + case TILEGX_EXCP_OPCODE_EXCH4:
> + prefix = "";
> + width = "4";
> + break;
> + case TILEGX_EXCP_OPCODE_EXCH:
> + prefix = "";
> + width = "";
> + break;
> + case TILEGX_EXCP_OPCODE_CMPEXCH4:
> + prefix = "cmp";
> + width = "4";
> + break;
> + case TILEGX_EXCP_OPCODE_CMPEXCH:
> + prefix = "cmp";
> + width = "";
> + break;
> + default:
> + dc->exception = TILEGX_EXCP_OPCODE_UNKNOWN;
> + return;
> + }
Likewise.
> +static void gen_v1cmpeqi(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
> +{
> + int count;
> + TCGv vdst = dest_gr(dc, rdst);
> + TCGv tmp = tcg_temp_new_i64();
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "v1cmpeqi r%d, r%d, %d\n",
> + rdst, rsrc, imm8);
> +
> + tcg_gen_movi_i64(vdst, 0);
> +
> + for (count = 0; count < 8; count++) {
> + tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), (8 - count - 1) * 8);
> + tcg_gen_andi_i64(tmp, tmp, 0xff);
> + tcg_gen_setcondi_i64(TCG_COND_EQ, tmp, tmp, imm8);
> + tcg_gen_or_i64(vdst, vdst, tmp);
> + tcg_gen_shli_i64(vdst, vdst, 8);
For all of these vector instructions, I would encourage you to use helpers to
extract and insert values. Extraction has little choice but to use a shift and
a mask, as you use here. But insertion can use tcg_gen_deposit_i64. I think
that is a lot easier to reason with than your construction here which
sequentially shifts vdst.
E.g.
static inline void
extract_v1(TCGv out, TCGv in, unsigned byte)
{
tcg_gen_shri_i64(out, in, byte * 8);
tcg_gen_ext8u_i64(out, out);
}
static inline void
insert_v1(TCGv out, TCGv in, unsigned byte)
{
tcg_gen_deposit_i64(out, out, in, byte * 8, 8);
}
This loop then becomes
TCGv vsrc = load_gr(dc, src);
for (count = 0; count < 8; ++count) {
extract_v1(tmp, vsrc, count);
tcg_gen_setcondi_i64(TCG_COND_EQ, tmp, tmp, imm8);
insert_v1(vdst, tmp, count);
}
> +static void gen_v1int_l(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + int count;
> + TCGv vdst = dest_gr(dc, rdst);
> + TCGv tmp = tcg_temp_new_i64();
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "v1int_l r%d, r%d, r%d\n",
> + rdst, rsrc, rsrcb);
> +
> + tcg_gen_movi_i64(vdst, 0);
> + for (count = 0; count < 4; count++) {
> +
> + tcg_gen_shli_i64(vdst, vdst, 8);
> +
> + tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), (4 - count - 1) * 8);
> + tcg_gen_andi_i64(tmp, tmp, 0xff);
> + tcg_gen_or_i64(vdst, vdst, tmp);
> + tcg_gen_shli_i64(vdst, vdst, 8);
> +
> + tcg_gen_shri_i64(tmp, load_gr(dc, rsrcb), (4 - count - 1) * 8);
> + tcg_gen_andi_i64(tmp, tmp, 0xff);
> + tcg_gen_or_i64(vdst, vdst, tmp);
> + }
> + tcg_temp_free_i64(tmp);
TCGv vsrc = load_gr(dc, rsrc);
TCGv vsrcb = load_gr(dc, rsrcb);
for (count = 0; count < 4; ++count) {
extract_v1(tmp, vsrc, count);
insert_v1(vdst, tmp, count * 2);
extract_v1(tmp, vsrcb, count);
insert_v1(vdst, tmp, count * 2 + 1);
}
> +}
> +
> +/*
> + * Functional Description
> + *
> + * uint64_t output = 0;
> + * uint32_t counter;
> + * for (counter = 0; counter < (WORD_SIZE / 32); counter++)
> + * {
> + * bool asel = ((counter & 1) == 1);
> + * int in_sel = 0 + counter / 2;
> + * int32_t srca = get4Byte (rf[SrcA], in_sel);
> + * int32_t srcb = get4Byte (rf[SrcB], in_sel);
> + * output = set4Byte (output, counter, (asel ? srca : srcb));
> + * }
> + * rf[Dest] = output;
> +*/
> +
> +static void gen_v4int_l(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
> +{
> + TCGv vdst = dest_gr(dc, rdst);
> + TCGv tmp = tcg_temp_new_i64();
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "v4int_l r%d, r%d, r%d\n",
> + rdst, rsrc, rsrcb);
> +
> + tcg_gen_andi_i64(vdst, load_gr(dc, rsrc), 0xffffffff);
> + tcg_gen_shli_i64(vdst, vdst, 8);
> + tcg_gen_andi_i64(tmp, load_gr(dc, rsrcb), 0xffffffff);
> + tcg_gen_or_i64(vdst, vdst, tmp);
And herein is a bug, that I'd hope using the helper functions would avoid: you
shift by 8 instead of 32. This function simplifies to
tcg_gen_deposit_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb),
32, 32);
> +/*
> + * uint64_t mask = 0;
> + * int64_t background = ((rf[SrcA] >> BFEnd) & 1) ? -1ULL : 0ULL;
> + * mask = ((-1ULL) ^ ((-1ULL << ((BFEnd - BFStart) & 63)) << 1));
> + * uint64_t rot_src = (((uint64_t) rf[SrcA]) >> BFStart)
> + * | (rf[SrcA] << (64 - BFStart));
> + * rf[Dest] = (rot_src & mask) | (background & ~mask);
> + */
> +static void gen_bfexts(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc,
> + uint8_t start, uint8_t end)
> +{
> + uint64_t mask = (-1ULL) ^ ((-1ULL << ((end - start) & 63)) << 1);
> + TCGv vldst = tcg_temp_local_new_i64();
> + TCGv tmp = tcg_temp_local_new_i64();
> + TCGv pmsk = tcg_const_i64(-1ULL);
Why the local temps?
> +
> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "bfexts r%d, r%d, %d, %d\n",
> + rdst, rsrc, start, end);
> +
> + tcg_gen_rotri_i64(vldst, load_gr(dc, rsrc), start);
> + tcg_gen_andi_i64(vldst, vldst, mask);
> +
> + tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), end);
> + tcg_gen_andi_i64(tmp, tmp, 1);
> + tcg_gen_movcond_i64(TCG_COND_EQ, tmp, tmp, load_zero(dc),
> + load_zero(dc), pmsk);
This movcond is equivalent to negation.
> +/*
> + Functional Description
> + uint64_t a = rf[SrcA];
> + uint64_t b = rf[SrcB];
> + uint64_t d = rf[Dest];
> + uint64_t output = 0;
> + unsigned int counter;
> + for (counter = 0; counter < (WORD_SIZE / BYTE_SIZE); counter++)
> + {
> + int sel = getByte (b, counter) & 0xf;
> + uint8_t byte = (sel < 8) ? getByte (d, sel) : getByte (a, (sel - 8));
> + output = setByte (output, counter, byte);
> + }
> + rf[Dest] = output;
> + */
> +static void gen_shufflebytes(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
I strongly suggest this be moved to op_helper.c. It's too big.
> +/* FIXME: At present, skip unalignment checking */
> +static void gen_ld(struct DisasContext *dc,
> + uint8_t rdst, uint8_t rsrc, TCGMemOp ops)
Alignment checks would never be done here anyway.
Again, pass down the opcode string rather than rebuild.
r~
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
2015-05-11 16:01 ` [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using Richard Henderson
@ 2015-05-11 21:06 ` Chen Gang
2015-05-11 22:06 ` Richard Henderson
0 siblings, 1 reply; 32+ messages in thread
From: Chen Gang @ 2015-05-11 21:06 UTC (permalink / raw)
To: Richard Henderson, Peter Maydell, Andreas Färber, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
On 5/12/15 00:01, Richard Henderson wrote:
> On 05/10/2015 03:42 PM, Chen Gang wrote:
>> -static __inline unsigned int
>> +static inline uint8_t
>> get_BFEnd_X0(tilegx_bundle_bits num)
>
> Do not change these casts to uint8_t. It's unnecessary churn.
>
For me, it is enough to return uint8_t, and the caller really treats it
as uint8_t. So for the function declaration, uint8_t is more precise
than unsigned int for return type.
Thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib
2015-05-11 16:55 ` Richard Henderson
@ 2015-05-11 21:26 ` Chen Gang
2015-05-26 21:39 ` Chen Gang
2015-05-14 14:56 ` Chen Gang
` (3 subsequent siblings)
4 siblings, 1 reply; 32+ messages in thread
From: Chen Gang @ 2015-05-11 21:26 UTC (permalink / raw)
To: Richard Henderson, Peter Maydell, Andreas Färber, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
Firstly, thank you very much for your response quickly!
On 5/12/15 00:55, Richard Henderson wrote:
> On 05/10/2015 03:45 PM, Chen Gang wrote:
>> +static void gen_cmpltsi(struct DisasContext *dc,
>> + uint8_t rdst, uint8_t rsrc, int8_t imm8)
>> +{
>> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltsi r%d, r%d, %d\n",
>> + rdst, rsrc, imm8);
>> + tcg_gen_setcondi_i64(TCG_COND_LTU, dest_gr(dc, rdst), load_gr(dc, rsrc),
>> + (int64_t)imm8);
>> +}
>> +
>> +static void gen_cmpltui(struct DisasContext *dc,
>> + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
>> +{
>> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltui r%d, r%d, %d\n",
>> + rdst, rsrc, imm8);
>> + tcg_gen_setcondi_i64(TCG_COND_LTU,
>> + dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
>> +}
>> +
>> +static void gen_cmpeqi(struct DisasContext *dc,
>> + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
>> +{
>> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpeqi r%d, r%d, %d\n", rdst, rsrc, imm8);
>> + tcg_gen_setcondi_i64(TCG_COND_EQ,
>> + dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
>> +}
>
> Merge these.
>
OK, thanks.
>> +
>> +static void gen_cmp(struct DisasContext *dc,
>> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb, TCGCond cond)
>> +{
>> + const char *prefix;
>> +
>> + switch (cond) {
>> + case TCG_COND_EQ:
>> + prefix = "eq";
>> + break;
>> + case TCG_COND_LE:
>> + prefix = "les";
>> + break;
>> + case TCG_COND_LEU:
>> + prefix = "leu";
>> + break;
>> + case TCG_COND_LT:
>> + prefix = "lts";
>> + break;
>> + case TCG_COND_LTU:
>> + prefix = "ltu";
>> + break;
>> + case TCG_COND_NE:
>> + prefix = "ne";
>> + break;
>> + default:
>> + dc->exception = TILEGX_EXCP_OPCODE_UNKNOWN;
>> + return;
>> + }
>> +
>> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmp%s r%d, r%d, r%d\n",
>> + prefix, rdst, rsrc, rsrcb);
>
> Better to just pass down the opcode name with the TCGCond rather than trying to
> recreate it. Then there's no need for a switch, nor a need for a confusing
> TILEGX_EXCP_OPCODE_UNKNOWN path.
>
>> +static void gen_exch(struct DisasContext *dc,
>> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb, int excp)
>> +{
>> + const char *prefix, *width;
>> +
>> + switch (excp) {
>> + case TILEGX_EXCP_OPCODE_EXCH4:
>> + prefix = "";
>> + width = "4";
>> + break;
>> + case TILEGX_EXCP_OPCODE_EXCH:
>> + prefix = "";
>> + width = "";
>> + break;
>> + case TILEGX_EXCP_OPCODE_CMPEXCH4:
>> + prefix = "cmp";
>> + width = "4";
>> + break;
>> + case TILEGX_EXCP_OPCODE_CMPEXCH:
>> + prefix = "cmp";
>> + width = "";
>> + break;
>> + default:
>> + dc->exception = TILEGX_EXCP_OPCODE_UNKNOWN;
>> + return;
>> + }
>
> Likewise.
>
OK, thanks.
>> +static void gen_v1cmpeqi(struct DisasContext *dc,
>> + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
>> +{
>> + int count;
>> + TCGv vdst = dest_gr(dc, rdst);
>> + TCGv tmp = tcg_temp_new_i64();
>> +
>> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "v1cmpeqi r%d, r%d, %d\n",
>> + rdst, rsrc, imm8);
>> +
>> + tcg_gen_movi_i64(vdst, 0);
>> +
>> + for (count = 0; count < 8; count++) {
>> + tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), (8 - count - 1) * 8);
>> + tcg_gen_andi_i64(tmp, tmp, 0xff);
>> + tcg_gen_setcondi_i64(TCG_COND_EQ, tmp, tmp, imm8);
>> + tcg_gen_or_i64(vdst, vdst, tmp);
>> + tcg_gen_shli_i64(vdst, vdst, 8);
>
> For all of these vector instructions, I would encourage you to use helpers to
> extract and insert values. Extraction has little choice but to use a shift and
> a mask, as you use here. But insertion can use tcg_gen_deposit_i64. I think
> that is a lot easier to reason with than your construction here which
> sequentially shifts vdst.
>
> E.g.
>
> static inline void
> extract_v1(TCGv out, TCGv in, unsigned byte)
> {
> tcg_gen_shri_i64(out, in, byte * 8);
> tcg_gen_ext8u_i64(out, out);
> }
>
> static inline void
> insert_v1(TCGv out, TCGv in, unsigned byte)
> {
> tcg_gen_deposit_i64(out, out, in, byte * 8, 8);
> }
>
>
> This loop then becomes
>
> TCGv vsrc = load_gr(dc, src);
> for (count = 0; count < 8; ++count) {
> extract_v1(tmp, vsrc, count);
> tcg_gen_setcondi_i64(TCG_COND_EQ, tmp, tmp, imm8);
> insert_v1(vdst, tmp, count);
> }
>
>> +static void gen_v1int_l(struct DisasContext *dc,
>> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
>> +{
>> + int count;
>> + TCGv vdst = dest_gr(dc, rdst);
>> + TCGv tmp = tcg_temp_new_i64();
>> +
>> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "v1int_l r%d, r%d, r%d\n",
>> + rdst, rsrc, rsrcb);
>> +
>> + tcg_gen_movi_i64(vdst, 0);
>> + for (count = 0; count < 4; count++) {
>> +
>> + tcg_gen_shli_i64(vdst, vdst, 8);
>> +
>> + tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), (4 - count - 1) * 8);
>> + tcg_gen_andi_i64(tmp, tmp, 0xff);
>> + tcg_gen_or_i64(vdst, vdst, tmp);
>> + tcg_gen_shli_i64(vdst, vdst, 8);
>> +
>> + tcg_gen_shri_i64(tmp, load_gr(dc, rsrcb), (4 - count - 1) * 8);
>> + tcg_gen_andi_i64(tmp, tmp, 0xff);
>> + tcg_gen_or_i64(vdst, vdst, tmp);
>> + }
>> + tcg_temp_free_i64(tmp);
>
> TCGv vsrc = load_gr(dc, rsrc);
> TCGv vsrcb = load_gr(dc, rsrcb);
>
> for (count = 0; count < 4; ++count) {
> extract_v1(tmp, vsrc, count);
> insert_v1(vdst, tmp, count * 2);
> extract_v1(tmp, vsrcb, count);
> insert_v1(vdst, tmp, count * 2 + 1);
> }
>
>
OK, thanks.
>> +}
>> +
>> +/*
>> + * Functional Description
>> + *
>> + * uint64_t output = 0;
>> + * uint32_t counter;
>> + * for (counter = 0; counter < (WORD_SIZE / 32); counter++)
>> + * {
>> + * bool asel = ((counter & 1) == 1);
>> + * int in_sel = 0 + counter / 2;
>> + * int32_t srca = get4Byte (rf[SrcA], in_sel);
>> + * int32_t srcb = get4Byte (rf[SrcB], in_sel);
>> + * output = set4Byte (output, counter, (asel ? srca : srcb));
>> + * }
>> + * rf[Dest] = output;
>> +*/
>> +
>> +static void gen_v4int_l(struct DisasContext *dc,
>> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
>> +{
>> + TCGv vdst = dest_gr(dc, rdst);
>> + TCGv tmp = tcg_temp_new_i64();
>> +
>> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "v4int_l r%d, r%d, r%d\n",
>> + rdst, rsrc, rsrcb);
>> +
>> + tcg_gen_andi_i64(vdst, load_gr(dc, rsrc), 0xffffffff);
>> + tcg_gen_shli_i64(vdst, vdst, 8);
>> + tcg_gen_andi_i64(tmp, load_gr(dc, rsrcb), 0xffffffff);
>> + tcg_gen_or_i64(vdst, vdst, tmp);
>
> And herein is a bug, that I'd hope using the helper functions would avoid: you
> shift by 8 instead of 32. This function simplifies to
>
OK, thank you very much.
> tcg_gen_deposit_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb),
> 32, 32);
>
OK, thanks.
>> +/*
>> + * uint64_t mask = 0;
>> + * int64_t background = ((rf[SrcA] >> BFEnd) & 1) ? -1ULL : 0ULL;
>> + * mask = ((-1ULL) ^ ((-1ULL << ((BFEnd - BFStart) & 63)) << 1));
>> + * uint64_t rot_src = (((uint64_t) rf[SrcA]) >> BFStart)
>> + * | (rf[SrcA] << (64 - BFStart));
>> + * rf[Dest] = (rot_src & mask) | (background & ~mask);
>> + */
>> +static void gen_bfexts(struct DisasContext *dc, uint8_t rdst, uint8_t rsrc,
>> + uint8_t start, uint8_t end)
>> +{
>> + uint64_t mask = (-1ULL) ^ ((-1ULL << ((end - start) & 63)) << 1);
>> + TCGv vldst = tcg_temp_local_new_i64();
>> + TCGv tmp = tcg_temp_local_new_i64();
>> + TCGv pmsk = tcg_const_i64(-1ULL);
>
> Why the local temps?
>
Excuse me, I am not quite sure whether tcg_gen_movcond_i64() belongs to
the branch or not.
>> +
>> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "bfexts r%d, r%d, %d, %d\n",
>> + rdst, rsrc, start, end);
>> +
>> + tcg_gen_rotri_i64(vldst, load_gr(dc, rsrc), start);
>> + tcg_gen_andi_i64(vldst, vldst, mask);
>> +
>> + tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), end);
>> + tcg_gen_andi_i64(tmp, tmp, 1);
>> + tcg_gen_movcond_i64(TCG_COND_EQ, tmp, tmp, load_zero(dc),
>> + load_zero(dc), pmsk);
>
> This movcond is equivalent to negation.
>
OK, thanks.
>> +/*
>> + Functional Description
>> + uint64_t a = rf[SrcA];
>> + uint64_t b = rf[SrcB];
>> + uint64_t d = rf[Dest];
>> + uint64_t output = 0;
>> + unsigned int counter;
>> + for (counter = 0; counter < (WORD_SIZE / BYTE_SIZE); counter++)
>> + {
>> + int sel = getByte (b, counter) & 0xf;
>> + uint8_t byte = (sel < 8) ? getByte (d, sel) : getByte (a, (sel - 8));
>> + output = setByte (output, counter, byte);
>> + }
>> + rf[Dest] = output;
>> + */
>> +static void gen_shufflebytes(struct DisasContext *dc,
>> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
>
> I strongly suggest this be moved to op_helper.c. It's too big.
>
OK, thanks.
>> +/* FIXME: At present, skip unalignment checking */
>> +static void gen_ld(struct DisasContext *dc,
>> + uint8_t rdst, uint8_t rsrc, TCGMemOp ops)
>
> Alignment checks would never be done here anyway.
OK, thanks.
> Again, pass down the opcode string rather than rebuild.
>
OK, thanks.
Thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
2015-05-11 21:06 ` Chen Gang
@ 2015-05-11 22:06 ` Richard Henderson
2015-05-12 0:43 ` gchen gchen
0 siblings, 1 reply; 32+ messages in thread
From: Richard Henderson @ 2015-05-11 22:06 UTC (permalink / raw)
To: Chen Gang, Peter Maydell, Andreas Färber, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
On 05/11/2015 02:06 PM, Chen Gang wrote:
> On 5/12/15 00:01, Richard Henderson wrote:
>> On 05/10/2015 03:42 PM, Chen Gang wrote:
>>> -static __inline unsigned int
>>> +static inline uint8_t
>>> get_BFEnd_X0(tilegx_bundle_bits num)
>>
>> Do not change these casts to uint8_t. It's unnecessary churn.
>>
>
> For me, it is enough to return uint8_t, and the caller really treats it
> as uint8_t. So for the function declaration, uint8_t is more precise
> than unsigned int for return type.
I don't want to argue about this anymore. Drop all the uint8_t and uint16_t.
r~
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
2015-05-11 22:06 ` Richard Henderson
@ 2015-05-12 0:43 ` gchen gchen
2015-05-12 10:56 ` Chen Gang
0 siblings, 1 reply; 32+ messages in thread
From: gchen gchen @ 2015-05-12 0:43 UTC (permalink / raw)
To: rth, peter.maydell, afaerber, cmetcalf; +Cc: walt, riku.voipio, qemu-devel
[-- Attachment #1: Type: text/plain, Size: 1111 bytes --]
For me, I still stick to uint8_t, since all callers and callee always treat it as uint8_t. It will make the code more clearer for readers.
> Date: Mon, 11 May 2015 15:06:48 -0700
> From: rth@twiddle.net
> To: xili_gchen_5257@hotmail.com; peter.maydell@linaro.org; afaerber@suse.de; cmetcalf@ezchip.com
> CC: riku.voipio@iki.fi; walt@tilera.com; qemu-devel@nongnu.org
> Subject: Re: [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
>
> On 05/11/2015 02:06 PM, Chen Gang wrote:
> > On 5/12/15 00:01, Richard Henderson wrote:
> >> On 05/10/2015 03:42 PM, Chen Gang wrote:
> >>> -static __inline unsigned int
> >>> +static inline uint8_t
> >>> get_BFEnd_X0(tilegx_bundle_bits num)
> >>
> >> Do not change these casts to uint8_t. It's unnecessary churn.
> >>
> >
> > For me, it is enough to return uint8_t, and the caller really treats it
> > as uint8_t. So for the function declaration, uint8_t is more precise
> > than unsigned int for return type.
>
> I don't want to argue about this anymore. Drop all the uint8_t and uint16_t.
>
>
> r~
[-- Attachment #2: Type: text/html, Size: 1613 bytes --]
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
2015-05-12 0:43 ` gchen gchen
@ 2015-05-12 10:56 ` Chen Gang
2015-05-12 11:08 ` Peter Maydell
0 siblings, 1 reply; 32+ messages in thread
From: Chen Gang @ 2015-05-12 10:56 UTC (permalink / raw)
To: rth, peter.maydell, afaerber, cmetcalf; +Cc: walt, riku.voipio, qemu-devel
Welcome any other members' ideas, suggestions or completions for it.
If one of another members also suggests to drop all uint8_t and uint16_t,
I shall drop them (more explanations for dropping them will be better).
Thanks.
On 05/12/2015 08:43 AM, gchen gchen wrote:
> For me, I still stick to uint8_t, since all callers and callee always
> treat it as uint8_t. It will make the code more clearer for readers.
>
>> Date: Mon, 11 May 2015 15:06:48 -0700
>> From: rth@twiddle.net
>> To: xili_gchen_5257@hotmail.com; peter.maydell@linaro.org;
> afaerber@suse.de; cmetcalf@ezchip.com
>> CC: riku.voipio@iki.fi; walt@tilera.com; qemu-devel@nongnu.org
>> Subject: Re: [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify
> it to fit qemu using
>>
>> On 05/11/2015 02:06 PM, Chen Gang wrote:
>> > On 5/12/15 00:01, Richard Henderson wrote:
>> >> On 05/10/2015 03:42 PM, Chen Gang wrote:
>> >>> -static __inline unsigned int
>> >>> +static inline uint8_t
>> >>> get_BFEnd_X0(tilegx_bundle_bits num)
>> >>
>> >> Do not change these casts to uint8_t. It's unnecessary churn.
>> >>
>> >
>> > For me, it is enough to return uint8_t, and the caller really treats it
>> > as uint8_t. So for the function declaration, uint8_t is more precise
>> > than unsigned int for return type.
>>
>> I don't want to argue about this anymore. Drop all the uint8_t and
> uint16_t.
Thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
2015-05-12 10:56 ` Chen Gang
@ 2015-05-12 11:08 ` Peter Maydell
2015-05-12 11:16 ` Chen Gang
0 siblings, 1 reply; 32+ messages in thread
From: Peter Maydell @ 2015-05-12 11:08 UTC (permalink / raw)
To: Chen Gang
Cc: Riku Voipio, QEMU Developers, Chris Metcalf, walt,
Andreas Färber, Richard Henderson
On 12 May 2015 at 11:56, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
> Welcome any other members' ideas, suggestions or completions for it.
>
> If one of another members also suggests to drop all uint8_t and uint16_t,
> I shall drop them (more explanations for dropping them will be better).
I agree with Richard on this one.
-- PMM
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
2015-05-12 11:08 ` Peter Maydell
@ 2015-05-12 11:16 ` Chen Gang
2015-05-19 2:47 ` Chen Gang
0 siblings, 1 reply; 32+ messages in thread
From: Chen Gang @ 2015-05-12 11:16 UTC (permalink / raw)
To: Peter Maydell
Cc: Riku Voipio, QEMU Developers, Chris Metcalf, walt,
Andreas Färber, Richard Henderson
On 05/12/2015 07:08 PM, Peter Maydell wrote:
> On 12 May 2015 at 11:56, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
>> Welcome any other members' ideas, suggestions or completions for it.
>>
>> If one of another members also suggests to drop all uint8_t and uint16_t,
>> I shall drop them (more explanations for dropping them will be better).
>
> I agree with Richard on this one.
>
OK, thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib
2015-05-11 16:55 ` Richard Henderson
2015-05-11 21:26 ` Chen Gang
@ 2015-05-14 14:56 ` Chen Gang
2015-05-14 15:08 ` Chen Gang
` (2 subsequent siblings)
4 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-14 14:56 UTC (permalink / raw)
To: Richard Henderson, Peter Maydell, Andreas Färber, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
On 5/12/15 00:55, Richard Henderson wrote:
> On 05/10/2015 03:45 PM, Chen Gang wrote:
>> > +static void gen_cmpltsi(struct DisasContext *dc,
>> > + uint8_t rdst, uint8_t rsrc, int8_t imm8)
>> > +{
>> > + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltsi r%d, r%d, %d\n",
>> > + rdst, rsrc, imm8);
>> > + tcg_gen_setcondi_i64(TCG_COND_LTU, dest_gr(dc, rdst), load_gr(dc, rsrc),
>> > + (int64_t)imm8);
>> > +}
It is another bug: need to use TCG_COND_LT instead of TCG_COND_LTU.
>> > +
>> > +static void gen_cmpltui(struct DisasContext *dc,
>> > + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
For imm8, its type needs to be int8_t.
>> > +{
>> > + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltui r%d, r%d, %d\n",
>> > + rdst, rsrc, imm8);
>> > + tcg_gen_setcondi_i64(TCG_COND_LTU,
>> > + dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
>> > +}
>> > +
>> > +static void gen_cmpeqi(struct DisasContext *dc,
>> > + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
For imm8, its type needs to be int8_t.
>> > +{
>> > + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpeqi r%d, r%d, %d\n", rdst, rsrc, imm8);
>> > + tcg_gen_setcondi_i64(TCG_COND_EQ,
>> > + dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
>> > +}
> Merge these.
>
Thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib
2015-05-11 16:55 ` Richard Henderson
2015-05-11 21:26 ` Chen Gang
2015-05-14 14:56 ` Chen Gang
@ 2015-05-14 15:08 ` Chen Gang
2015-05-14 16:05 ` Chen Gang
2015-05-29 19:29 ` Chen Gang
4 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-14 15:08 UTC (permalink / raw)
To: Richard Henderson, Peter Maydell, Andreas Färber, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
On 5/12/15 00:55, Richard Henderson wrote:
> On 05/10/2015 03:45 PM, Chen Gang wrote:
>> > +static void gen_cmpltsi(struct DisasContext *dc,
>> > + uint8_t rdst, uint8_t rsrc, int8_t imm8)
>> > +{
>> > + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltsi r%d, r%d, %d\n",
>> > + rdst, rsrc, imm8);
>> > + tcg_gen_setcondi_i64(TCG_COND_LTU, dest_gr(dc, rdst), load_gr(dc, rsrc),
>> > + (int64_t)imm8);
It is another bug (which the root cause to lead current _init_malloc to
assert). Need use TCG_COND_LT instead of TCG_COND_LTU.
>> > +}
>> > +
>> > +static void gen_cmpltui(struct DisasContext *dc,
>> > + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
Need to be int8_t for imm8.
>> > +{
>> > + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltui r%d, r%d, %d\n",
>> > + rdst, rsrc, imm8);
>> > + tcg_gen_setcondi_i64(TCG_COND_LTU,
>> > + dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
>> > +}
>> > +
>> > +static void gen_cmpeqi(struct DisasContext *dc,
>> > + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
Need to be int8_t for imm8.
>> > +{
>> > + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpeqi r%d, r%d, %d\n", rdst, rsrc, imm8);
>> > + tcg_gen_setcondi_i64(TCG_COND_EQ,
>> > + dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
>> > +}
> Merge these.
>
Next, I shall continue for printing "Hello world".
Thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib
2015-05-11 16:55 ` Richard Henderson
` (2 preceding siblings ...)
2015-05-14 15:08 ` Chen Gang
@ 2015-05-14 16:05 ` Chen Gang
2015-05-15 2:31 ` Chen Gang
2015-05-29 19:29 ` Chen Gang
4 siblings, 1 reply; 32+ messages in thread
From: Chen Gang @ 2015-05-14 16:05 UTC (permalink / raw)
To: Richard Henderson, Peter Maydell, Andreas Färber, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
On 5/12/15 00:55, Richard Henderson wrote:
> On 05/10/2015 03:45 PM, Chen Gang wrote:
>> > +static void gen_cmpltsi(struct DisasContext *dc,
>> > + uint8_t rdst, uint8_t rsrc, int8_t imm8)
>> > +{
>> > + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltsi r%d, r%d, %d\n",
>> > + rdst, rsrc, imm8);
>> > + tcg_gen_setcondi_i64(TCG_COND_LTU, dest_gr(dc, rdst), load_gr(dc, rsrc),
>> > + (int64_t)imm8);
It is another bug (which the root cause to lead current _init_malloc to
assert). Need use TCG_COND_LT instead of TCG_COND_LTU.
>> > +}
>> > +
>> > +static void gen_cmpltui(struct DisasContext *dc,
>> > + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
Need to be int8_t for imm8.
>> > +{
>> > + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltui r%d, r%d, %d\n",
>> > + rdst, rsrc, imm8);
>> > + tcg_gen_setcondi_i64(TCG_COND_LTU,
>> > + dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
>> > +}
>> > +
>> > +static void gen_cmpeqi(struct DisasContext *dc,
>> > + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
Need to be int8_t for imm8.
>> > +{
>> > + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpeqi r%d, r%d, %d\n", rdst, rsrc, imm8);
>> > + tcg_gen_setcondi_i64(TCG_COND_EQ,
>> > + dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
>> > +}
> Merge these.
>
Thank you again for your reviewing. Next, I shall continue for printing
"Hello world".
Thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib
2015-05-14 16:05 ` Chen Gang
@ 2015-05-15 2:31 ` Chen Gang
0 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-15 2:31 UTC (permalink / raw)
To: Richard Henderson, Peter Maydell, Andreas Färber, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
On 5/12/15 00:55, Richard Henderson wrote:
> On 05/10/2015 03:45 PM, Chen Gang wrote:
>>> +static void gen_cmpltsi(struct DisasContext *dc,
>>> + uint8_t rdst, uint8_t rsrc, int8_t imm8)
>>> +{
>>> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltsi r%d, r%d, %d\n",
>>> + rdst, rsrc, imm8);
>>> + tcg_gen_setcondi_i64(TCG_COND_LTU, dest_gr(dc, rdst), load_gr(dc, rsrc),
>>> + (int64_t)imm8);
It is another bug (which the root cause to lead current _init_malloc to
assert). Need use TCG_COND_LT instead of TCG_COND_LTU.
>>> +}
>>> +
>>> +static void gen_cmpltui(struct DisasContext *dc,
>>> + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
Need to be int8_t for imm8.
>>> +{
>>> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpltui r%d, r%d, %d\n",
>>> + rdst, rsrc, imm8);
>>> + tcg_gen_setcondi_i64(TCG_COND_LTU,
>>> + dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
>>> +}
>>> +
>>> +static void gen_cmpeqi(struct DisasContext *dc,
>>> + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
Need to be int8_t for imm8.
>>> +{
>>> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "cmpeqi r%d, r%d, %d\n", rdst, rsrc, imm8);
>>> + tcg_gen_setcondi_i64(TCG_COND_EQ,
>>> + dest_gr(dc, rdst), load_gr(dc, rsrc), imm8);
>>> +}
> Merge these.
>
Thank you again for your reviewing. Next, I shall continue to print
"Hello world".
Thanks.
-- Chen Gang Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
2015-05-12 11:16 ` Chen Gang
@ 2015-05-19 2:47 ` Chen Gang
2015-05-21 20:59 ` Chen Gang
0 siblings, 1 reply; 32+ messages in thread
From: Chen Gang @ 2015-05-19 2:47 UTC (permalink / raw)
To: Peter Maydell
Cc: Riku Voipio, QEMU Developers, Chris Metcalf, walt,
Andreas Färber, Richard Henderson
Hello All:
I also found another bug: I did not set the system call error number to
r1 register, which will cause new_heap() fail although mmap64 succeed.
Hope it is my last bug for printing "Hello world" executable binary.
Thanks.
On 05/12/2015 07:16 PM, Chen Gang wrote:
> On 05/12/2015 07:08 PM, Peter Maydell wrote:
>> On 12 May 2015 at 11:56, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
>>> Welcome any other members' ideas, suggestions or completions for it.
>>>
>>> If one of another members also suggests to drop all uint8_t and uint16_t,
>>> I shall drop them (more explanations for dropping them will be better).
>>
>> I agree with Richard on this one.
>>
>
> OK, thanks.
>
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
2015-05-19 2:47 ` Chen Gang
@ 2015-05-21 20:59 ` Chen Gang
2015-05-21 23:40 ` Chris Metcalf
0 siblings, 1 reply; 32+ messages in thread
From: Chen Gang @ 2015-05-21 20:59 UTC (permalink / raw)
To: Peter Maydell
Cc: Riku Voipio, QEMU Developers, Chris Metcalf, walt,
Andreas Färber, Richard Henderson
After fix additional 3 bugs (one for mnz, one for mz, one for v1cmpeqi),
at present, tilegx linux user can print "Hello World"! :-)
I shall reconstruct/prepare the code and send patch v11 for review
within this month.
Thanks.
On 5/19/15 10:47, Chen Gang wrote:
> Hello All:
>
> I also found another bug: I did not set the system call error number to
> r1 register, which will cause new_heap() fail although mmap64 succeed.
>
> Hope it is my last bug for printing "Hello world" executable binary.
>
> Thanks.
>
> On 05/12/2015 07:16 PM, Chen Gang wrote:
>> On 05/12/2015 07:08 PM, Peter Maydell wrote:
>>> On 12 May 2015 at 11:56, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
>>>> Welcome any other members' ideas, suggestions or completions for it.
>>>>
>>>> If one of another members also suggests to drop all uint8_t and uint16_t,
>>>> I shall drop them (more explanations for dropping them will be better).
>>>
>>> I agree with Richard on this one.
>>>
>>
>> OK, thanks.
>>
>
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
2015-05-21 20:59 ` Chen Gang
@ 2015-05-21 23:40 ` Chris Metcalf
2015-05-22 1:48 ` Chen Gang
0 siblings, 1 reply; 32+ messages in thread
From: Chris Metcalf @ 2015-05-21 23:40 UTC (permalink / raw)
To: Chen Gang
Cc: Peter Maydell, Riku Voipio, QEMU Developers, Walter Lee,
Andreas Färber, Richard Henderson
Congratulations!
> On May 21, 2015, at 4:58 PM, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
>
>
> After fix additional 3 bugs (one for mnz, one for mz, one for v1cmpeqi),
> at present, tilegx linux user can print "Hello World"! :-)
>
> I shall reconstruct/prepare the code and send patch v11 for review
> within this month.
>
> Thanks.
>
>> On 5/19/15 10:47, Chen Gang wrote:
>> Hello All:
>>
>> I also found another bug: I did not set the system call error number to
>> r1 register, which will cause new_heap() fail although mmap64 succeed.
>>
>> Hope it is my last bug for printing "Hello world" executable binary.
>>
>> Thanks.
>>
>>> On 05/12/2015 07:16 PM, Chen Gang wrote:
>>>> On 05/12/2015 07:08 PM, Peter Maydell wrote:
>>>>> On 12 May 2015 at 11:56, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
>>>>> Welcome any other members' ideas, suggestions or completions for it.
>>>>>
>>>>> If one of another members also suggests to drop all uint8_t and uint16_t,
>>>>> I shall drop them (more explanations for dropping them will be better).
>>>>
>>>> I agree with Richard on this one.
>>>>
>>>
>>> OK, thanks.
>>>
>>
>
> --
> Chen Gang
>
> Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
2015-05-21 23:40 ` Chris Metcalf
@ 2015-05-22 1:48 ` Chen Gang
2015-05-24 22:03 ` Chen Gang
0 siblings, 1 reply; 32+ messages in thread
From: Chen Gang @ 2015-05-22 1:48 UTC (permalink / raw)
To: Chris Metcalf
Cc: Peter Maydell, Riku Voipio, QEMU Developers, Walter Lee,
Andreas Färber, Richard Henderson
On 05/22/2015 07:40 AM, Chris Metcalf wrote:
> Congratulations!
>
Again, really thank all of you very much!! :-)
>> On May 21, 2015, at 4:58 PM, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
>>
>>
>> After fix additional 3 bugs (one for mnz, one for mz, one for v1cmpeqi),
>> at present, tilegx linux user can print "Hello World"! :-)
>>
>> I shall reconstruct/prepare the code and send patch v11 for review
>> within this month.
>>
>> Thanks.
>>
>>> On 5/19/15 10:47, Chen Gang wrote:
>>> Hello All:
>>>
>>> I also found another bug: I did not set the system call error number to
>>> r1 register, which will cause new_heap() fail although mmap64 succeed.
>>>
>>> Hope it is my last bug for printing "Hello world" executable binary.
>>>
>>> Thanks.
>>>
>>>> On 05/12/2015 07:16 PM, Chen Gang wrote:
>>>>> On 05/12/2015 07:08 PM, Peter Maydell wrote:
>>>>>> On 12 May 2015 at 11:56, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
>>>>>> Welcome any other members' ideas, suggestions or completions for it.
>>>>>>
>>>>>> If one of another members also suggests to drop all uint8_t and uint16_t,
>>>>>> I shall drop them (more explanations for dropping them will be better).
>>>>>
>>>>> I agree with Richard on this one.
>>>>>
>>>>
>>>> OK, thanks.
>>>>
>>>
>>
>> --
>> Chen Gang
>>
>> Open, share, and attitude like air, water, and life which God blessed
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
2015-05-22 1:48 ` Chen Gang
@ 2015-05-24 22:03 ` Chen Gang
2015-05-25 15:13 ` Chen Gang
0 siblings, 1 reply; 32+ messages in thread
From: Chen Gang @ 2015-05-24 22:03 UTC (permalink / raw)
To: Chris Metcalf
Cc: Peter Maydell, Riku Voipio, QEMU Developers, Walter Lee,
Andreas Färber, Richard Henderson
For "Hello world" with shared glibc, it needs to implement additional
instructions and fix one additional bug (it is about syscall_nr.h: need
stat64 and fstatat64).
I shall send patch v11 within this month. :-)
Thanks.
On 5/22/15 09:48, Chen Gang wrote:
> On 05/22/2015 07:40 AM, Chris Metcalf wrote:
>> Congratulations!
>>
>
> Again, really thank all of you very much!! :-)
>
>
>>> On May 21, 2015, at 4:58 PM, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
>>>
>>>
>>> After fix additional 3 bugs (one for mnz, one for mz, one for v1cmpeqi),
>>> at present, tilegx linux user can print "Hello World"! :-)
>>>
>>> I shall reconstruct/prepare the code and send patch v11 for review
>>> within this month.
>>>
>>> Thanks.
>>>
>>>> On 5/19/15 10:47, Chen Gang wrote:
>>>> Hello All:
>>>>
>>>> I also found another bug: I did not set the system call error number to
>>>> r1 register, which will cause new_heap() fail although mmap64 succeed.
>>>>
>>>> Hope it is my last bug for printing "Hello world" executable binary.
>>>>
>>>> Thanks.
>>>>
>>>>> On 05/12/2015 07:16 PM, Chen Gang wrote:
>>>>>> On 05/12/2015 07:08 PM, Peter Maydell wrote:
>>>>>>> On 12 May 2015 at 11:56, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
>>>>>>> Welcome any other members' ideas, suggestions or completions for it.
>>>>>>>
>>>>>>> If one of another members also suggests to drop all uint8_t and uint16_t,
>>>>>>> I shall drop them (more explanations for dropping them will be better).
>>>>>>
>>>>>> I agree with Richard on this one.
>>>>>>
>>>>>
>>>>> OK, thanks.
>>>>>
>>>>
>>>
>>> --
>>> Chen Gang
>>>
>>> Open, share, and attitude like air, water, and life which God blessed
>
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using
2015-05-24 22:03 ` Chen Gang
@ 2015-05-25 15:13 ` Chen Gang
0 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-25 15:13 UTC (permalink / raw)
To: Chris Metcalf
Cc: Peter Maydell, Riku Voipio, QEMU Developers, Walter Lee,
Andreas Färber, Richard Henderson
Also additional bug: when a block finishes with no branch insn (e.g. the
insns are too much to be in a block), we need to modify pc and exit_tb.
I found this bug when mark "-d all" for Hello world with shared glibc.
At present, I finished all "Hello world" related test cases which I can
find, and begin make patches for tilegx, next.
Welcome any ideas, suggestions and completions (e.g. for test cases). If
no additional reply, I shall send patches for tilegx within 4 days.
Thanks.
On 5/25/15 06:03, Chen Gang wrote:
>
> For "Hello world" with shared glibc, it needs to implement additional
> instructions and fix one additional bug (it is about syscall_nr.h: need
> stat64 and fstatat64).
>
> I shall send patch v11 within this month. :-)
>
> Thanks.
>
> On 5/22/15 09:48, Chen Gang wrote:
>> On 05/22/2015 07:40 AM, Chris Metcalf wrote:
>>> Congratulations!
>>>
>>
>> Again, really thank all of you very much!! :-)
>>
>>
>>>> On May 21, 2015, at 4:58 PM, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
>>>>
>>>>
>>>> After fix additional 3 bugs (one for mnz, one for mz, one for v1cmpeqi),
>>>> at present, tilegx linux user can print "Hello World"! :-)
>>>>
>>>> I shall reconstruct/prepare the code and send patch v11 for review
>>>> within this month.
>>>>
>>>> Thanks.
>>>>
>>>>> On 5/19/15 10:47, Chen Gang wrote:
>>>>> Hello All:
>>>>>
>>>>> I also found another bug: I did not set the system call error number to
>>>>> r1 register, which will cause new_heap() fail although mmap64 succeed.
>>>>>
>>>>> Hope it is my last bug for printing "Hello world" executable binary.
>>>>>
>>>>> Thanks.
>>>>>
>>>>>> On 05/12/2015 07:16 PM, Chen Gang wrote:
>>>>>>> On 05/12/2015 07:08 PM, Peter Maydell wrote:
>>>>>>>> On 12 May 2015 at 11:56, Chen Gang <xili_gchen_5257@hotmail.com> wrote:
>>>>>>>> Welcome any other members' ideas, suggestions or completions for it.
>>>>>>>>
>>>>>>>> If one of another members also suggests to drop all uint8_t and uint16_t,
>>>>>>>> I shall drop them (more explanations for dropping them will be better).
>>>>>>>
>>>>>>> I agree with Richard on this one.
>>>>>>>
>>>>>>
>>>>>> OK, thanks.
>>>>>>
>>>>>
>>>>
>>>> --
>>>> Chen Gang
>>>>
>>>> Open, share, and attitude like air, water, and life which God blessed
>>
>
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib
2015-05-11 21:26 ` Chen Gang
@ 2015-05-26 21:39 ` Chen Gang
2015-06-01 20:58 ` Chen Gang
0 siblings, 1 reply; 32+ messages in thread
From: Chen Gang @ 2015-05-26 21:39 UTC (permalink / raw)
To: Richard Henderson, Peter Maydell, Andreas Färber, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
On 5/12/15 05:26, Chen Gang wrote:
>>> >> +}
>>> >> +
>>> >> +/*
>>> >> + * Functional Description
>>> >> + *
>>> >> + * uint64_t output = 0;
>>> >> + * uint32_t counter;
>>> >> + * for (counter = 0; counter < (WORD_SIZE / 32); counter++)
>>> >> + * {
>>> >> + * bool asel = ((counter & 1) == 1);
>>> >> + * int in_sel = 0 + counter / 2;
>>> >> + * int32_t srca = get4Byte (rf[SrcA], in_sel);
>>> >> + * int32_t srcb = get4Byte (rf[SrcB], in_sel);
>>> >> + * output = set4Byte (output, counter, (asel ? srca : srcb));
>>> >> + * }
>>> >> + * rf[Dest] = output;
>>> >> +*/
>>> >> +
>>> >> +static void gen_v4int_l(struct DisasContext *dc,
>>> >> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
>>> >> +{
>>> >> + TCGv vdst = dest_gr(dc, rdst);
>>> >> + TCGv tmp = tcg_temp_new_i64();
>>> >> +
>>> >> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "v4int_l r%d, r%d, r%d\n",
>>> >> + rdst, rsrc, rsrcb);
>>> >> +
>>> >> + tcg_gen_andi_i64(vdst, load_gr(dc, rsrc), 0xffffffff);
>>> >> + tcg_gen_shli_i64(vdst, vdst, 8);
>>> >> + tcg_gen_andi_i64(tmp, load_gr(dc, rsrcb), 0xffffffff);
>>> >> + tcg_gen_or_i64(vdst, vdst, tmp);
>> >
>> > And herein is a bug, that I'd hope using the helper functions would avoid: you
>> > shift by 8 instead of 32. This function simplifies to
>> >
> OK, thank you very much.
>
>> > tcg_gen_deposit_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb),
>> > 32, 32);
>> >
Oh, it is:
tcg_gen_deposit_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb),
0, 32);
> OK, thanks.
>
Thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib
2015-05-11 16:55 ` Richard Henderson
` (3 preceding siblings ...)
2015-05-14 16:05 ` Chen Gang
@ 2015-05-29 19:29 ` Chen Gang
4 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-05-29 19:29 UTC (permalink / raw)
To: Richard Henderson, Peter Maydell, Andreas Färber, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
On 5/12/15 00:55, Richard Henderson wrote:
>> +static void gen_v1cmpeqi(struct DisasContext *dc,
>> > + uint8_t rdst, uint8_t rsrc, uint8_t imm8)
>> > +{
>> > + int count;
>> > + TCGv vdst = dest_gr(dc, rdst);
>> > + TCGv tmp = tcg_temp_new_i64();
>> > +
>> > + qemu_log_mask(CPU_LOG_TB_IN_ASM, "v1cmpeqi r%d, r%d, %d\n",
>> > + rdst, rsrc, imm8);
>> > +
>> > + tcg_gen_movi_i64(vdst, 0);
>> > +
>> > + for (count = 0; count < 8; count++) {
>> > + tcg_gen_shri_i64(tmp, load_gr(dc, rsrc), (8 - count - 1) * 8);
>> > + tcg_gen_andi_i64(tmp, tmp, 0xff);
>> > + tcg_gen_setcondi_i64(TCG_COND_EQ, tmp, tmp, imm8);
>> > + tcg_gen_or_i64(vdst, vdst, tmp);
>> > + tcg_gen_shli_i64(vdst, vdst, 8);
> For all of these vector instructions, I would encourage you to use helpers to
> extract and insert values. Extraction has little choice but to use a shift and
> a mask, as you use here. But insertion can use tcg_gen_deposit_i64. I think
> that is a lot easier to reason with than your construction here which
> sequentially shifts vdst.
>
> E.g.
>
> static inline void
> extract_v1(TCGv out, TCGv in, unsigned byte)
> {
> tcg_gen_shri_i64(out, in, byte * 8);
> tcg_gen_ext8u_i64(out, out);
> }
>
> static inline void
> insert_v1(TCGv out, TCGv in, unsigned byte)
> {
> tcg_gen_deposit_i64(out, out, in, byte * 8, 8);
> }
>
>
> This loop then becomes
>
> TCGv vsrc = load_gr(dc, src);
> for (count = 0; count < 8; ++count) {
> extract_v1(tmp, vsrc, count);
> tcg_gen_setcondi_i64(TCG_COND_EQ, tmp, tmp, imm8);
> insert_v1(vdst, tmp, count);
> }
>
It also needs "tcg_gen_movi_i64(vdst, 0);" or will generate assertion
`ts->val_type == TEMP_VAL_REG' in debug mode.
And I shall try to send patch within one day (sorry for a little late).
Thanks.
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib
2015-05-26 21:39 ` Chen Gang
@ 2015-06-01 20:58 ` Chen Gang
0 siblings, 0 replies; 32+ messages in thread
From: Chen Gang @ 2015-06-01 20:58 UTC (permalink / raw)
To: Richard Henderson, Peter Maydell, Andreas Färber, Chris Metcalf
Cc: walt, Riku Voipio, qemu-devel
On 5/27/15 05:39, Chen Gang wrote:
> On 5/12/15 05:26, Chen Gang wrote:
>>>>>> +}
>>>>>> +
>>>>>> +/*
>>>>>> + * Functional Description
>>>>>> + *
>>>>>> + * uint64_t output = 0;
>>>>>> + * uint32_t counter;
>>>>>> + * for (counter = 0; counter < (WORD_SIZE / 32); counter++)
>>>>>> + * {
>>>>>> + * bool asel = ((counter & 1) == 1);
>>>>>> + * int in_sel = 0 + counter / 2;
>>>>>> + * int32_t srca = get4Byte (rf[SrcA], in_sel);
>>>>>> + * int32_t srcb = get4Byte (rf[SrcB], in_sel);
>>>>>> + * output = set4Byte (output, counter, (asel ? srca : srcb));
>>>>>> + * }
>>>>>> + * rf[Dest] = output;
>>>>>> +*/
>>>>>> +
>>>>>> +static void gen_v4int_l(struct DisasContext *dc,
>>>>>> + uint8_t rdst, uint8_t rsrc, uint8_t rsrcb)
>>>>>> +{
>>>>>> + TCGv vdst = dest_gr(dc, rdst);
>>>>>> + TCGv tmp = tcg_temp_new_i64();
>>>>>> +
>>>>>> + qemu_log_mask(CPU_LOG_TB_IN_ASM, "v4int_l r%d, r%d, r%d\n",
>>>>>> + rdst, rsrc, rsrcb);
>>>>>> +
>>>>>> + tcg_gen_andi_i64(vdst, load_gr(dc, rsrc), 0xffffffff);
>>>>>> + tcg_gen_shli_i64(vdst, vdst, 8);
>>>>>> + tcg_gen_andi_i64(tmp, load_gr(dc, rsrcb), 0xffffffff);
>>>>>> + tcg_gen_or_i64(vdst, vdst, tmp);
>>>>
>>>> And herein is a bug, that I'd hope using the helper functions would avoid: you
>>>> shift by 8 instead of 32. This function simplifies to
>>>>
>> OK, thank you very much.
>>
>>>> tcg_gen_deposit_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb),
>>>> 32, 32);
>>>>
Oh, it should be:
tcg_gen_deposit_i64(vdst, load_gr(dc, rsrcb), load_gr(dc, rsrc),
32, 32);
>
> Oh, it is:
>
> tcg_gen_deposit_i64(vdst, load_gr(dc, rsrc), load_gr(dc, rsrcb),
> 0, 32);
>
>> OK, thanks.
>>
>
> Thanks.
>
--
Chen Gang
Open, share, and attitude like air, water, and life which God blessed
^ permalink raw reply [flat|nested] 32+ messages in thread
end of thread, other threads:[~2015-06-01 20:58 UTC | newest]
Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-10 22:36 [Qemu-devel] [PATCH 00/10 v10] tilegx: Firstly add tilegx target for linux-user Chen Gang
2015-05-10 22:38 ` [Qemu-devel] [PATCH 01/10 v10] linux-user: tilegx: Firstly add architecture related features Chen Gang
2015-05-10 22:39 ` [Qemu-devel] [PATCH 02/10 v10] linux-user: Support tilegx architecture in linux-user Chen Gang
2015-05-10 22:40 ` [Qemu-devel] [PATCH 03/10 v10] linux-user/syscall.c: Conditionalize syscalls which are not defined in tilegx Chen Gang
2015-05-10 22:41 ` [Qemu-devel] [PATCH 04/10 v10] target-tilegx: Add opcode basic implementation from Tilera Corporation Chen Gang
2015-05-10 22:43 ` [Qemu-devel] [PATCH 06/10 v10] target-tilegx: Add special register information " Chen Gang
2015-05-10 22:44 ` [Qemu-devel] [PATCH 07/10 v10] target-tilegx: Add cpu basic features for linux-user Chen Gang
2015-05-10 22:44 ` [Qemu-devel] [PATCH 08/10 v10] target-tilegx: Add helper " Chen Gang
2015-05-10 22:45 ` [Qemu-devel] [PATCH 09/10 v10] target-tilegx: Generate tcg instructions to execute to _init_malloc in glib Chen Gang
2015-05-11 16:55 ` Richard Henderson
2015-05-11 21:26 ` Chen Gang
2015-05-26 21:39 ` Chen Gang
2015-06-01 20:58 ` Chen Gang
2015-05-14 14:56 ` Chen Gang
2015-05-14 15:08 ` Chen Gang
2015-05-14 16:05 ` Chen Gang
2015-05-15 2:31 ` Chen Gang
2015-05-29 19:29 ` Chen Gang
2015-05-10 22:46 ` [Qemu-devel] [PATCH 10/10 v10] target-tilegx: Add TILE-Gx building files Chen Gang
[not found] ` <BLU437-SMTP59B35F884A72A991334DBB9DC0@phx.gbl>
2015-05-11 16:01 ` [Qemu-devel] [PATCH 05/10 v10] target-tilegx/opcode_tilegx.h: Modify it to fit qemu using Richard Henderson
2015-05-11 21:06 ` Chen Gang
2015-05-11 22:06 ` Richard Henderson
2015-05-12 0:43 ` gchen gchen
2015-05-12 10:56 ` Chen Gang
2015-05-12 11:08 ` Peter Maydell
2015-05-12 11:16 ` Chen Gang
2015-05-19 2:47 ` Chen Gang
2015-05-21 20:59 ` Chen Gang
2015-05-21 23:40 ` Chris Metcalf
2015-05-22 1:48 ` Chen Gang
2015-05-24 22:03 ` Chen Gang
2015-05-25 15:13 ` Chen Gang
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.