From: Aleksandar Markovic <aleksandar.markovic@rt-rk.com>
To: qemu-devel@nongnu.org
Cc: aurelien@aurel32.net, amarkovic@wavecomp.com
Subject: [Qemu-devel] [PATCH 26/26] target/mips: Clean up handling of CP0 register 31
Date: Thu, 22 Aug 2019 13:35:50 +0200
Message-ID: <1566473750-17743-27-git-send-email-aleksandar.markovic@rt-rk.com>
In-Reply-To: <1566473750-17743-1-git-send-email-aleksandar.markovic@rt-rk.com>
From: Aleksandar Markovic <amarkovic@wavecomp.com>
Clean up handling of CP0 register 31: replace the magic selector numbers
in gen_mfc0(), gen_mtc0(), gen_dmfc0(), and gen_dmtc0() with the named
constants CP0_REG31__DESAVE and CP0_REG31__KSCRATCH1..6, and move the
CP0_KScratch[] field in CPUMIPSState from the CP0 Register 4 section to
its proper place next to CP0_DESAVE in the CP0 Register 31 section.
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
---
target/mips/cpu.h | 2 +-
target/mips/translate.c | 56 ++++++++++++++++++++++++-------------------------
2 files changed, 29 insertions(+), 29 deletions(-)
diff --git a/target/mips/cpu.h b/target/mips/cpu.h
index 90d1373..070f5ea 100644
--- a/target/mips/cpu.h
+++ b/target/mips/cpu.h
@@ -610,7 +610,6 @@ struct CPUMIPSState {
* CP0 Register 4
*/
target_ulong CP0_Context;
- target_ulong CP0_KScratch[MIPS_KSCRATCH_NUM];
int32_t CP0_MemoryMapID;
/*
* CP0 Register 5
@@ -1021,6 +1020,7 @@ struct CPUMIPSState {
* CP0 Register 31
*/
int32_t CP0_DESAVE;
+ target_ulong CP0_KScratch[MIPS_KSCRATCH_NUM];
/* We waste some space so we can handle shadow registers like TCs. */
TCState tcs[MIPS_SHADOW_SET_MAX];
diff --git a/target/mips/translate.c b/target/mips/translate.c
index 808d046..ba4e28e 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -7579,17 +7579,17 @@ static void gen_mfc0(DisasContext *ctx, TCGv arg, int reg, int sel)
break;
case CP0_REGISTER_31:
switch (sel) {
- case 0:
+ case CP0_REG31__DESAVE:
/* EJTAG support */
gen_mfc0_load32(arg, offsetof(CPUMIPSState, CP0_DESAVE));
register_name = "DESAVE";
break;
- case 2:
- case 3:
- case 4:
- case 5:
- case 6:
- case 7:
+ case CP0_REG31__KSCRATCH1:
+ case CP0_REG31__KSCRATCH2:
+ case CP0_REG31__KSCRATCH3:
+ case CP0_REG31__KSCRATCH4:
+ case CP0_REG31__KSCRATCH5:
+ case CP0_REG31__KSCRATCH6:
CP0_CHECK(ctx->kscrexist & (1 << sel));
tcg_gen_ld_tl(arg, cpu_env,
offsetof(CPUMIPSState, CP0_KScratch[sel-2]));
@@ -8333,17 +8333,17 @@ static void gen_mtc0(DisasContext *ctx, TCGv arg, int reg, int sel)
break;
case CP0_REGISTER_31:
switch (sel) {
- case 0:
+ case CP0_REG31__DESAVE:
/* EJTAG support */
gen_mtc0_store32(arg, offsetof(CPUMIPSState, CP0_DESAVE));
register_name = "DESAVE";
break;
- case 2:
- case 3:
- case 4:
- case 5:
- case 6:
- case 7:
+ case CP0_REG31__KSCRATCH1:
+ case CP0_REG31__KSCRATCH2:
+ case CP0_REG31__KSCRATCH3:
+ case CP0_REG31__KSCRATCH4:
+ case CP0_REG31__KSCRATCH5:
+ case CP0_REG31__KSCRATCH6:
CP0_CHECK(ctx->kscrexist & (1 << sel));
tcg_gen_st_tl(arg, cpu_env,
offsetof(CPUMIPSState, CP0_KScratch[sel-2]));
@@ -9068,17 +9068,17 @@ static void gen_dmfc0(DisasContext *ctx, TCGv arg, int reg, int sel)
break;
case CP0_REGISTER_31:
switch (sel) {
- case 0:
+ case CP0_REG31__DESAVE:
/* EJTAG support */
gen_mfc0_load32(arg, offsetof(CPUMIPSState, CP0_DESAVE));
register_name = "DESAVE";
break;
- case 2:
- case 3:
- case 4:
- case 5:
- case 6:
- case 7:
+ case CP0_REG31__KSCRATCH1:
+ case CP0_REG31__KSCRATCH2:
+ case CP0_REG31__KSCRATCH3:
+ case CP0_REG31__KSCRATCH4:
+ case CP0_REG31__KSCRATCH5:
+ case CP0_REG31__KSCRATCH6:
CP0_CHECK(ctx->kscrexist & (1 << sel));
tcg_gen_ld_tl(arg, cpu_env,
offsetof(CPUMIPSState, CP0_KScratch[sel-2]));
@@ -9809,17 +9809,17 @@ static void gen_dmtc0(DisasContext *ctx, TCGv arg, int reg, int sel)
break;
case CP0_REGISTER_31:
switch (sel) {
- case 0:
+ case CP0_REG31__DESAVE:
/* EJTAG support */
gen_mtc0_store32(arg, offsetof(CPUMIPSState, CP0_DESAVE));
register_name = "DESAVE";
break;
- case 2:
- case 3:
- case 4:
- case 5:
- case 6:
- case 7:
+ case CP0_REG31__KSCRATCH1:
+ case CP0_REG31__KSCRATCH2:
+ case CP0_REG31__KSCRATCH3:
+ case CP0_REG31__KSCRATCH4:
+ case CP0_REG31__KSCRATCH5:
+ case CP0_REG31__KSCRATCH6:
CP0_CHECK(ctx->kscrexist & (1 << sel));
tcg_gen_st_tl(arg, cpu_env,
offsetof(CPUMIPSState, CP0_KScratch[sel - 2]));
--
2.7.4