* [PATCH v3 0/8] LoongArch: minor assembly clean-ups
@ 2022-07-26 15:57 WANG Xuerui
  2022-07-26 15:57 ` [PATCH v3 1/8] LoongArch: Remove several syntactic sugar macros for branches WANG Xuerui
                   ` (7 more replies)
  0 siblings, 8 replies; 9+ messages in thread
From: WANG Xuerui @ 2022-07-26 15:57 UTC (permalink / raw)
  To: Huacai Chen; +Cc: Jinyang He, loongarch, WANG Xuerui

From: WANG Xuerui <git@xen0n.name>

Hi,

Just several minor clean-ups to the few lines of LoongArch assembly
that have been irritating me recently. Most of them are of the "let's
not support prehistoric toolchains" kind; there are no functional
changes whatsoever.

Tested on a 3A5000 without problems.

v3:

- Reordered patches (suggested by Huacai)
- Migrated all non-definition uses of raw register names to ABI names,
  unraveling more chances of simplification as a result (suggested by
  Jinyang)
- Re-tabbed all assembly files after the cleanups

v2:

Migrated "BLT foo, zero" and "BGT foo, zero" too, suggested by Jinyang.

WANG Xuerui (8):
  LoongArch: Remove several syntactic sugar macros for branches
  LoongArch: Use ABI names of registers where appropriate
  LoongArch: Use the "jr" pseudo-instruction where applicable
  LoongArch: Use the "move" pseudo-instruction where applicable
  LoongArch: Simplify "BEQ/BNE foo, zero" with BEQZ/BNEZ
  LoongArch: Simplify "BLT foo, zero" with BLTZ
  LoongArch: Simplify "BGT foo, zero" with BGTZ
  LoongArch: Re-tab the assembly files

 arch/loongarch/include/asm/asmmacro.h    |  12 --
 arch/loongarch/include/asm/atomic.h      |  24 ++--
 arch/loongarch/include/asm/barrier.h     |   4 +-
 arch/loongarch/include/asm/cmpxchg.h     |   4 +-
 arch/loongarch/include/asm/futex.h       |   6 +-
 arch/loongarch/include/asm/loongson.h    |   4 +-
 arch/loongarch/include/asm/stacktrace.h  |  12 +-
 arch/loongarch/include/asm/thread_info.h |   4 +-
 arch/loongarch/include/asm/uaccess.h     |   2 +-
 arch/loongarch/kernel/entry.S            |   4 +-
 arch/loongarch/kernel/fpu.S              | 174 +++++++++++------------
 arch/loongarch/kernel/genex.S            |  12 +-
 arch/loongarch/kernel/head.S             |   8 +-
 arch/loongarch/kernel/switch.S           |   4 +-
 arch/loongarch/lib/clear_user.S          |   2 +-
 arch/loongarch/lib/copy_user.S           |   2 +-
 arch/loongarch/mm/page.S                 | 118 +++++++--------
 arch/loongarch/mm/tlbex.S                |  98 ++++++-------
 18 files changed, 241 insertions(+), 253 deletions(-)

-- 
2.35.1


* [PATCH v3 1/8] LoongArch: Remove several syntactic sugar macros for branches
  2022-07-26 15:57 [PATCH v3 0/8] LoongArch: minor assembly clean-ups WANG Xuerui
@ 2022-07-26 15:57 ` WANG Xuerui
  2022-07-26 15:57 ` [PATCH v3 2/8] LoongArch: Use ABI names of registers where appropriate WANG Xuerui
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: WANG Xuerui @ 2022-07-26 15:57 UTC (permalink / raw)
  To: Huacai Chen; +Cc: Jinyang He, loongarch, WANG Xuerui

From: WANG Xuerui <git@xen0n.name>

These syntactic sugar macros duplicate branch pseudo-instructions that
upstream binutils has supported from the beginning, so there is no need
to provide them locally.
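
As a purely illustrative example (register names and the label are made
up, not taken from this patch), a line such as

	bgt	t0, t1, 1f	# handled natively by binutils

now relies on the assembler's own pseudo-instruction handling, which
should expand it to the same "blt t1, t0, 1f" the removed local macro
used to emit.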

Signed-off-by: WANG Xuerui <git@xen0n.name>
---
 arch/loongarch/include/asm/asmmacro.h | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/arch/loongarch/include/asm/asmmacro.h b/arch/loongarch/include/asm/asmmacro.h
index a1a04083bd67..be037a40580d 100644
--- a/arch/loongarch/include/asm/asmmacro.h
+++ b/arch/loongarch/include/asm/asmmacro.h
@@ -274,16 +274,4 @@
 	nor	\dst, \src, zero
 .endm
 
-.macro bgt r0 r1 label
-	blt	\r1, \r0, \label
-.endm
-
-.macro bltz r0 label
-	blt	\r0, zero, \label
-.endm
-
-.macro bgez r0 label
-	bge	\r0, zero, \label
-.endm
-
 #endif /* _ASM_ASMMACRO_H */
-- 
2.35.1


* [PATCH v3 2/8] LoongArch: Use ABI names of registers where appropriate
  2022-07-26 15:57 [PATCH v3 0/8] LoongArch: minor assembly clean-ups WANG Xuerui
  2022-07-26 15:57 ` [PATCH v3 1/8] LoongArch: Remove several syntactic sugar macros for branches WANG Xuerui
@ 2022-07-26 15:57 ` WANG Xuerui
  2022-07-26 15:57 ` [PATCH v3 3/8] LoongArch: Use the "jr" pseudo-instruction where applicable WANG Xuerui
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: WANG Xuerui @ 2022-07-26 15:57 UTC (permalink / raw)
  To: Huacai Chen; +Cc: Jinyang He, loongarch, WANG Xuerui

From: WANG Xuerui <git@xen0n.name>

Some of the assembly in the LoongArch port seems to come from a
prehistoric time, when the assembler did not even support the ABI names
we have all come to know and love, so raw register numbers were used,
which hampers readability.

The usages were found with a regex match inside arch/loongarch, then
the non-definition occurrences were manually adjusted.
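
As an illustration (taken from the tlbex.S hunk below), the change is a
pure respelling at the source level:

	blt	t0, $r0, vmalloc_load	# raw register number
	blt	t0, zero, vmalloc_load	# same instruction, ABI name

so $r0/$r1/$r2/$r3 are now written as $zero/$ra/$tp/$sp (or their
unprefixed forms in .S files), with no change to the encoded
instructions.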

Signed-off-by: WANG Xuerui <git@xen0n.name>
---
 arch/loongarch/include/asm/barrier.h     |  4 +-
 arch/loongarch/include/asm/loongson.h    |  4 +-
 arch/loongarch/include/asm/stacktrace.h  | 12 ++--
 arch/loongarch/include/asm/thread_info.h |  4 +-
 arch/loongarch/include/asm/uaccess.h     |  2 +-
 arch/loongarch/mm/page.S                 |  4 +-
 arch/loongarch/mm/tlbex.S                | 80 ++++++++++++------------
 7 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/arch/loongarch/include/asm/barrier.h b/arch/loongarch/include/asm/barrier.h
index b6517eeeb141..cda977675854 100644
--- a/arch/loongarch/include/asm/barrier.h
+++ b/arch/loongarch/include/asm/barrier.h
@@ -48,9 +48,9 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
 	__asm__ __volatile__(
 		"sltu	%0, %1, %2\n\t"
 #if (__SIZEOF_LONG__ == 4)
-		"sub.w	%0, $r0, %0\n\t"
+		"sub.w	%0, $zero, %0\n\t"
 #elif (__SIZEOF_LONG__ == 8)
-		"sub.d	%0, $r0, %0\n\t"
+		"sub.d	%0, $zero, %0\n\t"
 #endif
 		: "=r" (mask)
 		: "r" (index), "r" (size)
diff --git a/arch/loongarch/include/asm/loongson.h b/arch/loongarch/include/asm/loongson.h
index 6a8038725ba7..8522afafc24e 100644
--- a/arch/loongarch/include/asm/loongson.h
+++ b/arch/loongarch/include/asm/loongson.h
@@ -58,7 +58,7 @@ static inline void xconf_writel(u32 val, volatile void __iomem *addr)
 {
 	asm volatile (
 	"	st.w	%[v], %[hw], 0	\n"
-	"	ld.b	$r0, %[hw], 0	\n"
+	"	ld.b	$zero, %[hw], 0	\n"
 	:
 	: [hw] "r" (addr), [v] "r" (val)
 	);
@@ -68,7 +68,7 @@ static inline void xconf_writeq(u64 val64, volatile void __iomem *addr)
 {
 	asm volatile (
 	"	st.d	%[v], %[hw], 0	\n"
-	"	ld.b	$r0, %[hw], 0	\n"
+	"	ld.b	$zero, %[hw], 0	\n"
 	:
 	: [hw] "r" (addr),  [v] "r" (val64)
 	);
diff --git a/arch/loongarch/include/asm/stacktrace.h b/arch/loongarch/include/asm/stacktrace.h
index 26483e396ad1..6b5c2a7aa706 100644
--- a/arch/loongarch/include/asm/stacktrace.h
+++ b/arch/loongarch/include/asm/stacktrace.h
@@ -23,13 +23,13 @@
 static __always_inline void prepare_frametrace(struct pt_regs *regs)
 {
 	__asm__ __volatile__(
-		/* Save $r1 */
+		/* Save $ra */
 		STORE_ONE_REG(1)
-		/* Use $r1 to save PC */
-		"pcaddi	$r1, 0\n\t"
-		STR_LONG_S " $r1, %0\n\t"
-		/* Restore $r1 */
-		STR_LONG_L " $r1, %1, "STR_LONGSIZE"\n\t"
+		/* Use $ra to save PC */
+		"pcaddi	$ra, 0\n\t"
+		STR_LONG_S " $ra, %0\n\t"
+		/* Restore $ra */
+		STR_LONG_L " $ra, %1, "STR_LONGSIZE"\n\t"
 		STORE_ONE_REG(2)
 		STORE_ONE_REG(3)
 		STORE_ONE_REG(4)
diff --git a/arch/loongarch/include/asm/thread_info.h b/arch/loongarch/include/asm/thread_info.h
index 99beb11c2fa8..b7dd9f19a5a9 100644
--- a/arch/loongarch/include/asm/thread_info.h
+++ b/arch/loongarch/include/asm/thread_info.h
@@ -44,14 +44,14 @@ struct thread_info {
 }
 
 /* How to get the thread information struct from C. */
-register struct thread_info *__current_thread_info __asm__("$r2");
+register struct thread_info *__current_thread_info __asm__("$tp");
 
 static inline struct thread_info *current_thread_info(void)
 {
 	return __current_thread_info;
 }
 
-register unsigned long current_stack_pointer __asm__("$r3");
+register unsigned long current_stack_pointer __asm__("$sp");
 
 #endif /* !__ASSEMBLY__ */
 
diff --git a/arch/loongarch/include/asm/uaccess.h b/arch/loongarch/include/asm/uaccess.h
index 217c6a3727b1..42da43211765 100644
--- a/arch/loongarch/include/asm/uaccess.h
+++ b/arch/loongarch/include/asm/uaccess.h
@@ -162,7 +162,7 @@ do {									\
 	"2:							\n"	\
 	"	.section .fixup,\"ax\"				\n"	\
 	"3:	li.w	%0, %3					\n"	\
-	"	or	%1, $r0, $r0				\n"	\
+	"	or	%1, $zero, $zero			\n"	\
 	"	b	2b					\n"	\
 	"	.previous					\n"	\
 	"	.section __ex_table,\"a\"			\n"	\
diff --git a/arch/loongarch/mm/page.S b/arch/loongarch/mm/page.S
index ddc78ab33c7b..270d509adbaa 100644
--- a/arch/loongarch/mm/page.S
+++ b/arch/loongarch/mm/page.S
@@ -32,7 +32,7 @@ SYM_FUNC_START(clear_page)
 	st.d     zero, a0, -8
 	bne      t0,   a0, 1b
 
-	jirl     $r0, ra, 0
+	jirl     zero, ra, 0
 SYM_FUNC_END(clear_page)
 EXPORT_SYMBOL(clear_page)
 
@@ -79,6 +79,6 @@ SYM_FUNC_START(copy_page)
 	st.d     t7, a0,  -8
 
 	bne      t8, a0, 1b
-	jirl     $r0, ra, 0
+	jirl     zero, ra, 0
 SYM_FUNC_END(copy_page)
 EXPORT_SYMBOL(copy_page)
diff --git a/arch/loongarch/mm/tlbex.S b/arch/loongarch/mm/tlbex.S
index 7eee40271577..9e98afe7a67f 100644
--- a/arch/loongarch/mm/tlbex.S
+++ b/arch/loongarch/mm/tlbex.S
@@ -47,7 +47,7 @@ SYM_FUNC_START(handle_tlb_load)
 	 * The vmalloc handling is not in the hotpath.
 	 */
 	csrrd	t0, LOONGARCH_CSR_BADV
-	blt	t0, $r0, vmalloc_load
+	blt	t0, zero, vmalloc_load
 	csrrd	t1, LOONGARCH_CSR_PGDL
 
 vmalloc_done_load:
@@ -80,7 +80,7 @@ vmalloc_done_load:
 	 * see if we need to jump to huge tlb processing.
 	 */
 	andi	t0, ra, _PAGE_HUGE
-	bne	t0, $r0, tlb_huge_update_load
+	bne	t0, zero, tlb_huge_update_load
 
 	csrrd	t0, LOONGARCH_CSR_BADV
 	srli.d	t0, t0, (PAGE_SHIFT + PTE_ORDER)
@@ -100,12 +100,12 @@ smp_pgtable_change_load:
 
 	srli.d	ra, t0, _PAGE_PRESENT_SHIFT
 	andi	ra, ra, 1
-	beq	ra, $r0, nopage_tlb_load
+	beq	ra, zero, nopage_tlb_load
 
 	ori	t0, t0, _PAGE_VALID
 #ifdef CONFIG_SMP
 	sc.d	t0, t1, 0
-	beq	t0, $r0, smp_pgtable_change_load
+	beq	t0, zero, smp_pgtable_change_load
 #else
 	st.d	t0, t1, 0
 #endif
@@ -139,23 +139,23 @@ tlb_huge_update_load:
 #endif
 	srli.d	ra, t0, _PAGE_PRESENT_SHIFT
 	andi	ra, ra, 1
-	beq	ra, $r0, nopage_tlb_load
+	beq	ra, zero, nopage_tlb_load
 	tlbsrch
 
 	ori	t0, t0, _PAGE_VALID
 #ifdef CONFIG_SMP
 	sc.d	t0, t1, 0
-	beq	t0, $r0, tlb_huge_update_load
+	beq	t0, zero, tlb_huge_update_load
 	ld.d	t0, t1, 0
 #else
 	st.d	t0, t1, 0
 #endif
-	addu16i.d	t1, $r0, -(CSR_TLBIDX_EHINV >> 16)
+	addu16i.d	t1, zero, -(CSR_TLBIDX_EHINV >> 16)
 	addi.d	ra, t1, 0
 	csrxchg	ra, t1, LOONGARCH_CSR_TLBIDX
 	tlbwr
 
-	csrxchg	$r0, t1, LOONGARCH_CSR_TLBIDX
+	csrxchg	zero, t1, LOONGARCH_CSR_TLBIDX
 
 	/*
 	 * A huge PTE describes an area the size of the
@@ -178,27 +178,27 @@ tlb_huge_update_load:
 	addi.d	t0, ra, 0
 
 	/* Convert to entrylo1 */
-	addi.d	t1, $r0, 1
+	addi.d	t1, zero, 1
 	slli.d	t1, t1, (HPAGE_SHIFT - 1)
 	add.d	t0, t0, t1
 	csrwr	t0, LOONGARCH_CSR_TLBELO1
 
 	/* Set huge page tlb entry size */
-	addu16i.d	t0, $r0, (CSR_TLBIDX_PS >> 16)
-	addu16i.d	t1, $r0, (PS_HUGE_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
+	addu16i.d	t0, zero, (CSR_TLBIDX_PS >> 16)
+	addu16i.d	t1, zero, (PS_HUGE_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
 	csrxchg		t1, t0, LOONGARCH_CSR_TLBIDX
 
 	tlbfill
 
-	addu16i.d	t0, $r0, (CSR_TLBIDX_PS >> 16)
-	addu16i.d	t1, $r0, (PS_DEFAULT_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
+	addu16i.d	t0, zero, (CSR_TLBIDX_PS >> 16)
+	addu16i.d	t1, zero, (PS_DEFAULT_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
 	csrxchg		t1, t0, LOONGARCH_CSR_TLBIDX
 
 nopage_tlb_load:
 	dbar	0
 	csrrd	ra, EXCEPTION_KS2
 	la.abs	t0, tlb_do_page_fault_0
-	jirl	$r0, t0, 0
+	jirl	zero, t0, 0
 SYM_FUNC_END(handle_tlb_load)
 
 SYM_FUNC_START(handle_tlb_store)
@@ -210,7 +210,7 @@ SYM_FUNC_START(handle_tlb_store)
 	 * The vmalloc handling is not in the hotpath.
 	 */
 	csrrd	t0, LOONGARCH_CSR_BADV
-	blt	t0, $r0, vmalloc_store
+	blt	t0, zero, vmalloc_store
 	csrrd	t1, LOONGARCH_CSR_PGDL
 
 vmalloc_done_store:
@@ -244,7 +244,7 @@ vmalloc_done_store:
 	 * see if we need to jump to huge tlb processing.
 	 */
 	andi	t0, ra, _PAGE_HUGE
-	bne	t0, $r0, tlb_huge_update_store
+	bne	t0, zero, tlb_huge_update_store
 
 	csrrd	t0, LOONGARCH_CSR_BADV
 	srli.d	t0, t0, (PAGE_SHIFT + PTE_ORDER)
@@ -265,12 +265,12 @@ smp_pgtable_change_store:
 	srli.d	ra, t0, _PAGE_PRESENT_SHIFT
 	andi	ra, ra, ((_PAGE_PRESENT | _PAGE_WRITE) >> _PAGE_PRESENT_SHIFT)
 	xori	ra, ra, ((_PAGE_PRESENT | _PAGE_WRITE) >> _PAGE_PRESENT_SHIFT)
-	bne	ra, $r0, nopage_tlb_store
+	bne	ra, zero, nopage_tlb_store
 
 	ori	t0, t0, (_PAGE_VALID | _PAGE_DIRTY | _PAGE_MODIFIED)
 #ifdef CONFIG_SMP
 	sc.d	t0, t1, 0
-	beq	t0, $r0, smp_pgtable_change_store
+	beq	t0, zero, smp_pgtable_change_store
 #else
 	st.d	t0, t1, 0
 #endif
@@ -306,24 +306,24 @@ tlb_huge_update_store:
 	srli.d	ra, t0, _PAGE_PRESENT_SHIFT
 	andi	ra, ra, ((_PAGE_PRESENT | _PAGE_WRITE) >> _PAGE_PRESENT_SHIFT)
 	xori	ra, ra, ((_PAGE_PRESENT | _PAGE_WRITE) >> _PAGE_PRESENT_SHIFT)
-	bne	ra, $r0, nopage_tlb_store
+	bne	ra, zero, nopage_tlb_store
 
 	tlbsrch
 	ori	t0, t0, (_PAGE_VALID | _PAGE_DIRTY | _PAGE_MODIFIED)
 
 #ifdef CONFIG_SMP
 	sc.d	t0, t1, 0
-	beq	t0, $r0, tlb_huge_update_store
+	beq	t0, zero, tlb_huge_update_store
 	ld.d	t0, t1, 0
 #else
 	st.d	t0, t1, 0
 #endif
-	addu16i.d	t1, $r0, -(CSR_TLBIDX_EHINV >> 16)
+	addu16i.d	t1, zero, -(CSR_TLBIDX_EHINV >> 16)
 	addi.d	ra, t1, 0
 	csrxchg	ra, t1, LOONGARCH_CSR_TLBIDX
 	tlbwr
 
-	csrxchg	$r0, t1, LOONGARCH_CSR_TLBIDX
+	csrxchg	zero, t1, LOONGARCH_CSR_TLBIDX
 	/*
 	 * A huge PTE describes an area the size of the
 	 * configured huge page size. This is twice the
@@ -345,28 +345,28 @@ tlb_huge_update_store:
 	addi.d	t0, ra, 0
 
 	/* Convert to entrylo1 */
-	addi.d	t1, $r0, 1
+	addi.d	t1, zero, 1
 	slli.d	t1, t1, (HPAGE_SHIFT - 1)
 	add.d	t0, t0, t1
 	csrwr	t0, LOONGARCH_CSR_TLBELO1
 
 	/* Set huge page tlb entry size */
-	addu16i.d	t0, $r0, (CSR_TLBIDX_PS >> 16)
-	addu16i.d	t1, $r0, (PS_HUGE_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
+	addu16i.d	t0, zero, (CSR_TLBIDX_PS >> 16)
+	addu16i.d	t1, zero, (PS_HUGE_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
 	csrxchg		t1, t0, LOONGARCH_CSR_TLBIDX
 
 	tlbfill
 
 	/* Reset default page size */
-	addu16i.d	t0, $r0, (CSR_TLBIDX_PS >> 16)
-	addu16i.d	t1, $r0, (PS_DEFAULT_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
+	addu16i.d	t0, zero, (CSR_TLBIDX_PS >> 16)
+	addu16i.d	t1, zero, (PS_DEFAULT_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
 	csrxchg		t1, t0, LOONGARCH_CSR_TLBIDX
 
 nopage_tlb_store:
 	dbar	0
 	csrrd	ra, EXCEPTION_KS2
 	la.abs	t0, tlb_do_page_fault_1
-	jirl	$r0, t0, 0
+	jirl	zero, t0, 0
 SYM_FUNC_END(handle_tlb_store)
 
 SYM_FUNC_START(handle_tlb_modify)
@@ -378,7 +378,7 @@ SYM_FUNC_START(handle_tlb_modify)
 	 * The vmalloc handling is not in the hotpath.
 	 */
 	csrrd	t0, LOONGARCH_CSR_BADV
-	blt	t0, $r0, vmalloc_modify
+	blt	t0, zero, vmalloc_modify
 	csrrd	t1, LOONGARCH_CSR_PGDL
 
 vmalloc_done_modify:
@@ -411,7 +411,7 @@ vmalloc_done_modify:
 	 * see if we need to jump to huge tlb processing.
 	 */
 	andi	t0, ra, _PAGE_HUGE
-	bne	t0, $r0, tlb_huge_update_modify
+	bne	t0, zero, tlb_huge_update_modify
 
 	csrrd	t0, LOONGARCH_CSR_BADV
 	srli.d	t0, t0, (PAGE_SHIFT + PTE_ORDER)
@@ -431,12 +431,12 @@ smp_pgtable_change_modify:
 
 	srli.d	ra, t0, _PAGE_WRITE_SHIFT
 	andi	ra, ra, 1
-	beq	ra, $r0, nopage_tlb_modify
+	beq	ra, zero, nopage_tlb_modify
 
 	ori	t0, t0, (_PAGE_VALID | _PAGE_DIRTY | _PAGE_MODIFIED)
 #ifdef CONFIG_SMP
 	sc.d	t0, t1, 0
-	beq	t0, $r0, smp_pgtable_change_modify
+	beq	t0, zero, smp_pgtable_change_modify
 #else
 	st.d	t0, t1, 0
 #endif
@@ -471,14 +471,14 @@ tlb_huge_update_modify:
 
 	srli.d	ra, t0, _PAGE_WRITE_SHIFT
 	andi	ra, ra, 1
-	beq	ra, $r0, nopage_tlb_modify
+	beq	ra, zero, nopage_tlb_modify
 
 	tlbsrch
 	ori	t0, t0, (_PAGE_VALID | _PAGE_DIRTY | _PAGE_MODIFIED)
 
 #ifdef CONFIG_SMP
 	sc.d	t0, t1, 0
-	beq	t0, $r0, tlb_huge_update_modify
+	beq	t0, zero, tlb_huge_update_modify
 	ld.d	t0, t1, 0
 #else
 	st.d	t0, t1, 0
@@ -504,28 +504,28 @@ tlb_huge_update_modify:
 	addi.d	t0, ra, 0
 
 	/* Convert to entrylo1 */
-	addi.d	t1, $r0, 1
+	addi.d	t1, zero, 1
 	slli.d	t1, t1, (HPAGE_SHIFT - 1)
 	add.d	t0, t0, t1
 	csrwr	t0, LOONGARCH_CSR_TLBELO1
 
 	/* Set huge page tlb entry size */
-	addu16i.d	t0, $r0, (CSR_TLBIDX_PS >> 16)
-	addu16i.d	t1, $r0, (PS_HUGE_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
+	addu16i.d	t0, zero, (CSR_TLBIDX_PS >> 16)
+	addu16i.d	t1, zero, (PS_HUGE_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
 	csrxchg	t1, t0, LOONGARCH_CSR_TLBIDX
 
 	tlbwr
 
 	/* Reset default page size */
-	addu16i.d	t0, $r0, (CSR_TLBIDX_PS >> 16)
-	addu16i.d	t1, $r0, (PS_DEFAULT_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
+	addu16i.d	t0, zero, (CSR_TLBIDX_PS >> 16)
+	addu16i.d	t1, zero, (PS_DEFAULT_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
 	csrxchg	t1, t0, LOONGARCH_CSR_TLBIDX
 
 nopage_tlb_modify:
 	dbar	0
 	csrrd	ra, EXCEPTION_KS2
 	la.abs	t0, tlb_do_page_fault_1
-	jirl	$r0, t0, 0
+	jirl	zero, t0, 0
 SYM_FUNC_END(handle_tlb_modify)
 
 SYM_FUNC_START(handle_tlb_refill)
-- 
2.35.1


* [PATCH v3 3/8] LoongArch: Use the "jr" pseudo-instruction where applicable
  2022-07-26 15:57 [PATCH v3 0/8] LoongArch: minor assembly clean-ups WANG Xuerui
  2022-07-26 15:57 ` [PATCH v3 1/8] LoongArch: Remove several syntactic sugar macros for branches WANG Xuerui
  2022-07-26 15:57 ` [PATCH v3 2/8] LoongArch: Use ABI names of registers where appropriate WANG Xuerui
@ 2022-07-26 15:57 ` WANG Xuerui
  2022-07-26 15:57 ` [PATCH v3 4/8] LoongArch: Use the "move" " WANG Xuerui
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: WANG Xuerui @ 2022-07-26 15:57 UTC (permalink / raw)
  To: Huacai Chen; +Cc: Jinyang He, loongarch, WANG Xuerui

From: WANG Xuerui <git@xen0n.name>

Some of the assembly code in the LoongArch port likely originated from
a time when the assembler did not support pseudo-instructions like
"move" or "jr", so the desugared form was used, and readability suffers
(to a minor degree) as a result.

As the upstream toolchain has supported these pseudo-instructions from
the beginning, migrate the few existing usages to them for better
readability.
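
For example (illustrative only; the actual lines are in the hunks
below), the return sequence

	jirl	zero, ra, 0

is now written as the equivalent pseudo-instruction

	jr	ra		# expands back to the same jirl encoding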

Signed-off-by: WANG Xuerui <git@xen0n.name>
---
 arch/loongarch/kernel/fpu.S   | 12 ++++++------
 arch/loongarch/kernel/genex.S |  4 ++--
 arch/loongarch/kernel/head.S  |  4 ++--
 arch/loongarch/mm/page.S      |  4 ++--
 arch/loongarch/mm/tlbex.S     |  6 +++---
 5 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/loongarch/kernel/fpu.S b/arch/loongarch/kernel/fpu.S
index a631a7137667..e14f096d40bd 100644
--- a/arch/loongarch/kernel/fpu.S
+++ b/arch/loongarch/kernel/fpu.S
@@ -153,7 +153,7 @@ SYM_FUNC_START(_save_fp)
 	fpu_save_csr	a0 t1
 	fpu_save_double a0 t1			# clobbers t1
 	fpu_save_cc	a0 t1 t2		# clobbers t1, t2
-	jirl zero, ra, 0
+	jr	ra
 SYM_FUNC_END(_save_fp)
 EXPORT_SYMBOL(_save_fp)
 
@@ -164,7 +164,7 @@ SYM_FUNC_START(_restore_fp)
 	fpu_restore_double a0 t1		# clobbers t1
 	fpu_restore_csr	a0 t1
 	fpu_restore_cc	a0 t1 t2		# clobbers t1, t2
-	jirl zero, ra, 0
+	jr	ra
 SYM_FUNC_END(_restore_fp)
 
 /*
@@ -216,7 +216,7 @@ SYM_FUNC_START(_init_fpu)
 	movgr2fr.d	$f30, t1
 	movgr2fr.d	$f31, t1
 
-	jirl zero, ra, 0
+	jr	ra
 SYM_FUNC_END(_init_fpu)
 
 /*
@@ -229,7 +229,7 @@ SYM_FUNC_START(_save_fp_context)
 	sc_save_fcsr a2 t1
 	sc_save_fp a0
 	li.w	a0, 0					# success
-	jirl zero, ra, 0
+	jr	ra
 SYM_FUNC_END(_save_fp_context)
 
 /*
@@ -242,10 +242,10 @@ SYM_FUNC_START(_restore_fp_context)
 	sc_restore_fcc a1 t1 t2
 	sc_restore_fcsr a2 t1
 	li.w	a0, 0					# success
-	jirl zero, ra, 0
+	jr	ra
 SYM_FUNC_END(_restore_fp_context)
 
 SYM_FUNC_START(fault)
 	li.w	a0, -EFAULT				# failure
-	jirl zero, ra, 0
+	jr	ra
 SYM_FUNC_END(fault)
diff --git a/arch/loongarch/kernel/genex.S b/arch/loongarch/kernel/genex.S
index 93496852b3cc..0df6d17dde23 100644
--- a/arch/loongarch/kernel/genex.S
+++ b/arch/loongarch/kernel/genex.S
@@ -28,7 +28,7 @@ SYM_FUNC_START(__arch_cpu_idle)
 	nop
 	idle	0
 	/* end of rollback region */
-1:	jirl	zero, ra, 0
+1:	jr	ra
 SYM_FUNC_END(__arch_cpu_idle)
 
 SYM_FUNC_START(handle_vint)
@@ -91,5 +91,5 @@ SYM_FUNC_END(except_vec_cex)
 
 SYM_FUNC_START(handle_sys)
 	la.abs	t0, handle_syscall
-	jirl    zero, t0, 0
+	jr	t0
 SYM_FUNC_END(handle_sys)
diff --git a/arch/loongarch/kernel/head.S b/arch/loongarch/kernel/head.S
index d01e62dd414f..e553c5fc17da 100644
--- a/arch/loongarch/kernel/head.S
+++ b/arch/loongarch/kernel/head.S
@@ -32,7 +32,7 @@ SYM_CODE_START(kernel_entry)			# kernel entry point
 	/* We might not get launched at the address the kernel is linked to,
 	   so we jump there.  */
 	la.abs		t0, 0f
-	jirl		zero, t0, 0
+	jr		t0
 0:
 	la		t0, __bss_start		# clear .bss
 	st.d		zero, t0, 0
@@ -86,7 +86,7 @@ SYM_CODE_START(smpboot_entry)
 	ld.d		tp, t0, CPU_BOOT_TINFO
 
 	la.abs	t0, 0f
-	jirl	zero, t0, 0
+	jr	t0
 0:
 	bl		start_secondary
 SYM_CODE_END(smpboot_entry)
diff --git a/arch/loongarch/mm/page.S b/arch/loongarch/mm/page.S
index 270d509adbaa..1e20dd5e3a4b 100644
--- a/arch/loongarch/mm/page.S
+++ b/arch/loongarch/mm/page.S
@@ -32,7 +32,7 @@ SYM_FUNC_START(clear_page)
 	st.d     zero, a0, -8
 	bne      t0,   a0, 1b
 
-	jirl     zero, ra, 0
+	jr       ra
 SYM_FUNC_END(clear_page)
 EXPORT_SYMBOL(clear_page)
 
@@ -79,6 +79,6 @@ SYM_FUNC_START(copy_page)
 	st.d     t7, a0,  -8
 
 	bne      t8, a0, 1b
-	jirl     zero, ra, 0
+	jr       ra
 SYM_FUNC_END(copy_page)
 EXPORT_SYMBOL(copy_page)
diff --git a/arch/loongarch/mm/tlbex.S b/arch/loongarch/mm/tlbex.S
index 9e98afe7a67f..f1234a9c311f 100644
--- a/arch/loongarch/mm/tlbex.S
+++ b/arch/loongarch/mm/tlbex.S
@@ -198,7 +198,7 @@ nopage_tlb_load:
 	dbar	0
 	csrrd	ra, EXCEPTION_KS2
 	la.abs	t0, tlb_do_page_fault_0
-	jirl	zero, t0, 0
+	jr	t0
 SYM_FUNC_END(handle_tlb_load)
 
 SYM_FUNC_START(handle_tlb_store)
@@ -366,7 +366,7 @@ nopage_tlb_store:
 	dbar	0
 	csrrd	ra, EXCEPTION_KS2
 	la.abs	t0, tlb_do_page_fault_1
-	jirl	zero, t0, 0
+	jr	t0
 SYM_FUNC_END(handle_tlb_store)
 
 SYM_FUNC_START(handle_tlb_modify)
@@ -525,7 +525,7 @@ nopage_tlb_modify:
 	dbar	0
 	csrrd	ra, EXCEPTION_KS2
 	la.abs	t0, tlb_do_page_fault_1
-	jirl	zero, t0, 0
+	jr	t0
 SYM_FUNC_END(handle_tlb_modify)
 
 SYM_FUNC_START(handle_tlb_refill)
-- 
2.35.1


* [PATCH v3 4/8] LoongArch: Use the "move" pseudo-instruction where applicable
  2022-07-26 15:57 [PATCH v3 0/8] LoongArch: minor assembly clean-ups WANG Xuerui
                   ` (2 preceding siblings ...)
  2022-07-26 15:57 ` [PATCH v3 3/8] LoongArch: Use the "jr" pseudo-instruction where applicable WANG Xuerui
@ 2022-07-26 15:57 ` WANG Xuerui
  2022-07-26 15:57 ` [PATCH v3 5/8] LoongArch: Simplify "BEQ/BNE foo, zero" with BEQZ/BNEZ WANG Xuerui
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: WANG Xuerui @ 2022-07-26 15:57 UTC (permalink / raw)
  To: Huacai Chen; +Cc: Jinyang He, loongarch, WANG Xuerui

From: WANG Xuerui <git@xen0n.name>

Some of the assembly code in the LoongArch port likely originated from
a time when the assembler did not support pseudo-instructions like
"move" or "jr", so the desugared form was used, and readability suffers
(to a minor degree) as a result.

As the upstream toolchain has supported these pseudo-instructions from
the beginning, migrate the few existing usages to them for better
readability.
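
For example (illustrative only; the actual lines are in the hunks
below), the register copy

	or	u0, zero, zero

is now written as the equivalent pseudo-instruction

	move	u0, zero	# same effect, clearer intent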

Signed-off-by: WANG Xuerui <git@xen0n.name>
---
 arch/loongarch/include/asm/atomic.h  | 8 ++++----
 arch/loongarch/include/asm/cmpxchg.h | 2 +-
 arch/loongarch/include/asm/futex.h   | 2 +-
 arch/loongarch/include/asm/uaccess.h | 2 +-
 arch/loongarch/kernel/head.S         | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/loongarch/include/asm/atomic.h b/arch/loongarch/include/asm/atomic.h
index 979367ad4e2c..a0a33ee793d6 100644
--- a/arch/loongarch/include/asm/atomic.h
+++ b/arch/loongarch/include/asm/atomic.h
@@ -157,7 +157,7 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
 		__asm__ __volatile__(
 		"1:	ll.w	%1, %2		# atomic_sub_if_positive\n"
 		"	addi.w	%0, %1, %3				\n"
-		"	or	%1, %0, $zero				\n"
+		"	move	%1, %0					\n"
 		"	blt	%0, $zero, 2f				\n"
 		"	sc.w	%1, %2					\n"
 		"	beq	$zero, %1, 1b				\n"
@@ -170,7 +170,7 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
 		__asm__ __volatile__(
 		"1:	ll.w	%1, %2		# atomic_sub_if_positive\n"
 		"	sub.w	%0, %1, %3				\n"
-		"	or	%1, %0, $zero				\n"
+		"	move	%1, %0					\n"
 		"	blt	%0, $zero, 2f				\n"
 		"	sc.w	%1, %2					\n"
 		"	beq	$zero, %1, 1b				\n"
@@ -320,7 +320,7 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
 		__asm__ __volatile__(
 		"1:	ll.d	%1, %2	# atomic64_sub_if_positive	\n"
 		"	addi.d	%0, %1, %3				\n"
-		"	or	%1, %0, $zero				\n"
+		"	move	%1, %0					\n"
 		"	blt	%0, $zero, 2f				\n"
 		"	sc.d	%1, %2					\n"
 		"	beq	%1, $zero, 1b				\n"
@@ -333,7 +333,7 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
 		__asm__ __volatile__(
 		"1:	ll.d	%1, %2	# atomic64_sub_if_positive	\n"
 		"	sub.d	%0, %1, %3				\n"
-		"	or	%1, %0, $zero				\n"
+		"	move	%1, %0					\n"
 		"	blt	%0, $zero, 2f				\n"
 		"	sc.d	%1, %2					\n"
 		"	beq	%1, $zero, 1b				\n"
diff --git a/arch/loongarch/include/asm/cmpxchg.h b/arch/loongarch/include/asm/cmpxchg.h
index 75b3a4478652..9e9939196471 100644
--- a/arch/loongarch/include/asm/cmpxchg.h
+++ b/arch/loongarch/include/asm/cmpxchg.h
@@ -55,7 +55,7 @@ static inline unsigned long __xchg(volatile void *ptr, unsigned long x,
 	__asm__ __volatile__(						\
 	"1:	" ld "	%0, %2		# __cmpxchg_asm \n"		\
 	"	bne	%0, %z3, 2f			\n"		\
-	"	or	$t0, %z4, $zero			\n"		\
+	"	move	$t0, %z4			\n"		\
 	"	" st "	$t0, %1				\n"		\
 	"	beq	$zero, $t0, 1b			\n"		\
 	"2:						\n"		\
diff --git a/arch/loongarch/include/asm/futex.h b/arch/loongarch/include/asm/futex.h
index 9de8231694ec..170ec9f97e58 100644
--- a/arch/loongarch/include/asm/futex.h
+++ b/arch/loongarch/include/asm/futex.h
@@ -82,7 +82,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr, u32 oldval, u32 newv
 	"# futex_atomic_cmpxchg_inatomic			\n"
 	"1:	ll.w	%1, %3					\n"
 	"	bne	%1, %z4, 3f				\n"
-	"	or	$t0, %z5, $zero				\n"
+	"	move	$t0, %z5				\n"
 	"2:	sc.w	$t0, %2					\n"
 	"	beq	$zero, $t0, 1b				\n"
 	"3:							\n"
diff --git a/arch/loongarch/include/asm/uaccess.h b/arch/loongarch/include/asm/uaccess.h
index 42da43211765..2b44edc604a2 100644
--- a/arch/loongarch/include/asm/uaccess.h
+++ b/arch/loongarch/include/asm/uaccess.h
@@ -162,7 +162,7 @@ do {									\
 	"2:							\n"	\
 	"	.section .fixup,\"ax\"				\n"	\
 	"3:	li.w	%0, %3					\n"	\
-	"	or	%1, $zero, $zero			\n"	\
+	"	move	%1, $zero				\n"	\
 	"	b	2b					\n"	\
 	"	.previous					\n"	\
 	"	.section __ex_table,\"a\"			\n"	\
diff --git a/arch/loongarch/kernel/head.S b/arch/loongarch/kernel/head.S
index e553c5fc17da..fd6a62f17161 100644
--- a/arch/loongarch/kernel/head.S
+++ b/arch/loongarch/kernel/head.S
@@ -50,7 +50,7 @@ SYM_CODE_START(kernel_entry)			# kernel entry point
 	/* KSave3 used for percpu base, initialized as 0 */
 	csrwr		zero, PERCPU_BASE_KS
 	/* GPR21 used for percpu base (runtime), initialized as 0 */
-	or		u0, zero, zero
+	move		u0, zero
 
 	la		tp, init_thread_union
 	/* Set the SP after an empty pt_regs.  */
-- 
2.35.1


* [PATCH v3 5/8] LoongArch: Simplify "BEQ/BNE foo, zero" with BEQZ/BNEZ
  2022-07-26 15:57 [PATCH v3 0/8] LoongArch: minor assembly clean-ups WANG Xuerui
                   ` (3 preceding siblings ...)
  2022-07-26 15:57 ` [PATCH v3 4/8] LoongArch: Use the "move" " WANG Xuerui
@ 2022-07-26 15:57 ` WANG Xuerui
  2022-07-26 15:57 ` [PATCH v3 6/8] LoongArch: Simplify "BLT foo, zero" with BLTZ WANG Xuerui
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: WANG Xuerui @ 2022-07-26 15:57 UTC (permalink / raw)
  To: Huacai Chen; +Cc: Jinyang He, loongarch, WANG Xuerui

From: WANG Xuerui <git@xen0n.name>

While B{EQ,NE}Z and B{EQ,NE} are different instructions, the vastly
expanded branch range of the former does not really matter in the few
cases touched here. Use B{EQ,NE}Z where possible, for shorter lines and
better consistency (e.g. some places used "BEQ foo, zero" while others
used "BEQ zero, foo").

Signed-off-by: WANG Xuerui <git@xen0n.name>
---
 arch/loongarch/include/asm/atomic.h  |  8 ++++----
 arch/loongarch/include/asm/cmpxchg.h |  2 +-
 arch/loongarch/include/asm/futex.h   |  4 ++--
 arch/loongarch/mm/tlbex.S            | 30 ++++++++++++++--------------
 4 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/loongarch/include/asm/atomic.h b/arch/loongarch/include/asm/atomic.h
index a0a33ee793d6..0869bec2c937 100644
--- a/arch/loongarch/include/asm/atomic.h
+++ b/arch/loongarch/include/asm/atomic.h
@@ -160,7 +160,7 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
 		"	move	%1, %0					\n"
 		"	blt	%0, $zero, 2f				\n"
 		"	sc.w	%1, %2					\n"
-		"	beq	$zero, %1, 1b				\n"
+		"	beqz	%1, 1b					\n"
 		"2:							\n"
 		__WEAK_LLSC_MB
 		: "=&r" (result), "=&r" (temp),
@@ -173,7 +173,7 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
 		"	move	%1, %0					\n"
 		"	blt	%0, $zero, 2f				\n"
 		"	sc.w	%1, %2					\n"
-		"	beq	$zero, %1, 1b				\n"
+		"	beqz	%1, 1b					\n"
 		"2:							\n"
 		__WEAK_LLSC_MB
 		: "=&r" (result), "=&r" (temp),
@@ -323,7 +323,7 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
 		"	move	%1, %0					\n"
 		"	blt	%0, $zero, 2f				\n"
 		"	sc.d	%1, %2					\n"
-		"	beq	%1, $zero, 1b				\n"
+		"	beqz	%1, 1b					\n"
 		"2:							\n"
 		__WEAK_LLSC_MB
 		: "=&r" (result), "=&r" (temp),
@@ -336,7 +336,7 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
 		"	move	%1, %0					\n"
 		"	blt	%0, $zero, 2f				\n"
 		"	sc.d	%1, %2					\n"
-		"	beq	%1, $zero, 1b				\n"
+		"	beqz	%1, 1b					\n"
 		"2:							\n"
 		__WEAK_LLSC_MB
 		: "=&r" (result), "=&r" (temp),
diff --git a/arch/loongarch/include/asm/cmpxchg.h b/arch/loongarch/include/asm/cmpxchg.h
index 9e9939196471..0a9b0fac1eee 100644
--- a/arch/loongarch/include/asm/cmpxchg.h
+++ b/arch/loongarch/include/asm/cmpxchg.h
@@ -57,7 +57,7 @@ static inline unsigned long __xchg(volatile void *ptr, unsigned long x,
 	"	bne	%0, %z3, 2f			\n"		\
 	"	move	$t0, %z4			\n"		\
 	"	" st "	$t0, %1				\n"		\
-	"	beq	$zero, $t0, 1b			\n"		\
+	"	beqz	$t0, 1b				\n"		\
 	"2:						\n"		\
 	__WEAK_LLSC_MB							\
 	: "=&r" (__ret), "=ZB"(*m)					\
diff --git a/arch/loongarch/include/asm/futex.h b/arch/loongarch/include/asm/futex.h
index 170ec9f97e58..837659335fb1 100644
--- a/arch/loongarch/include/asm/futex.h
+++ b/arch/loongarch/include/asm/futex.h
@@ -17,7 +17,7 @@
 	"1:	ll.w	%1, %4 # __futex_atomic_op\n"		\
 	"	" insn	"				\n"	\
 	"2:	sc.w	$t0, %2				\n"	\
-	"	beq	$t0, $zero, 1b			\n"	\
+	"	beqz	$t0, 1b				\n"	\
 	"3:						\n"	\
 	"	.section .fixup,\"ax\"			\n"	\
 	"4:	li.w	%0, %6				\n"	\
@@ -84,7 +84,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr, u32 oldval, u32 newv
 	"	bne	%1, %z4, 3f				\n"
 	"	move	$t0, %z5				\n"
 	"2:	sc.w	$t0, %2					\n"
-	"	beq	$zero, $t0, 1b				\n"
+	"	beqz	$t0, 1b					\n"
 	"3:							\n"
 	__WEAK_LLSC_MB
 	"	.section .fixup,\"ax\"				\n"
diff --git a/arch/loongarch/mm/tlbex.S b/arch/loongarch/mm/tlbex.S
index f1234a9c311f..4d16e27020e0 100644
--- a/arch/loongarch/mm/tlbex.S
+++ b/arch/loongarch/mm/tlbex.S
@@ -80,7 +80,7 @@ vmalloc_done_load:
 	 * see if we need to jump to huge tlb processing.
 	 */
 	andi	t0, ra, _PAGE_HUGE
-	bne	t0, zero, tlb_huge_update_load
+	bnez	t0, tlb_huge_update_load
 
 	csrrd	t0, LOONGARCH_CSR_BADV
 	srli.d	t0, t0, (PAGE_SHIFT + PTE_ORDER)
@@ -100,12 +100,12 @@ smp_pgtable_change_load:
 
 	srli.d	ra, t0, _PAGE_PRESENT_SHIFT
 	andi	ra, ra, 1
-	beq	ra, zero, nopage_tlb_load
+	beqz	ra, nopage_tlb_load
 
 	ori	t0, t0, _PAGE_VALID
 #ifdef CONFIG_SMP
 	sc.d	t0, t1, 0
-	beq	t0, zero, smp_pgtable_change_load
+	beqz	t0, smp_pgtable_change_load
 #else
 	st.d	t0, t1, 0
 #endif
@@ -139,13 +139,13 @@ tlb_huge_update_load:
 #endif
 	srli.d	ra, t0, _PAGE_PRESENT_SHIFT
 	andi	ra, ra, 1
-	beq	ra, zero, nopage_tlb_load
+	beqz	ra, nopage_tlb_load
 	tlbsrch
 
 	ori	t0, t0, _PAGE_VALID
 #ifdef CONFIG_SMP
 	sc.d	t0, t1, 0
-	beq	t0, zero, tlb_huge_update_load
+	beqz	t0, tlb_huge_update_load
 	ld.d	t0, t1, 0
 #else
 	st.d	t0, t1, 0
@@ -244,7 +244,7 @@ vmalloc_done_store:
 	 * see if we need to jump to huge tlb processing.
 	 */
 	andi	t0, ra, _PAGE_HUGE
-	bne	t0, zero, tlb_huge_update_store
+	bnez	t0, tlb_huge_update_store
 
 	csrrd	t0, LOONGARCH_CSR_BADV
 	srli.d	t0, t0, (PAGE_SHIFT + PTE_ORDER)
@@ -265,12 +265,12 @@ smp_pgtable_change_store:
 	srli.d	ra, t0, _PAGE_PRESENT_SHIFT
 	andi	ra, ra, ((_PAGE_PRESENT | _PAGE_WRITE) >> _PAGE_PRESENT_SHIFT)
 	xori	ra, ra, ((_PAGE_PRESENT | _PAGE_WRITE) >> _PAGE_PRESENT_SHIFT)
-	bne	ra, zero, nopage_tlb_store
+	bnez	ra, nopage_tlb_store
 
 	ori	t0, t0, (_PAGE_VALID | _PAGE_DIRTY | _PAGE_MODIFIED)
 #ifdef CONFIG_SMP
 	sc.d	t0, t1, 0
-	beq	t0, zero, smp_pgtable_change_store
+	beqz	t0, smp_pgtable_change_store
 #else
 	st.d	t0, t1, 0
 #endif
@@ -306,14 +306,14 @@ tlb_huge_update_store:
 	srli.d	ra, t0, _PAGE_PRESENT_SHIFT
 	andi	ra, ra, ((_PAGE_PRESENT | _PAGE_WRITE) >> _PAGE_PRESENT_SHIFT)
 	xori	ra, ra, ((_PAGE_PRESENT | _PAGE_WRITE) >> _PAGE_PRESENT_SHIFT)
-	bne	ra, zero, nopage_tlb_store
+	bnez	ra, nopage_tlb_store
 
 	tlbsrch
 	ori	t0, t0, (_PAGE_VALID | _PAGE_DIRTY | _PAGE_MODIFIED)
 
 #ifdef CONFIG_SMP
 	sc.d	t0, t1, 0
-	beq	t0, zero, tlb_huge_update_store
+	beqz	t0, tlb_huge_update_store
 	ld.d	t0, t1, 0
 #else
 	st.d	t0, t1, 0
@@ -411,7 +411,7 @@ vmalloc_done_modify:
 	 * see if we need to jump to huge tlb processing.
 	 */
 	andi	t0, ra, _PAGE_HUGE
-	bne	t0, zero, tlb_huge_update_modify
+	bnez	t0, tlb_huge_update_modify
 
 	csrrd	t0, LOONGARCH_CSR_BADV
 	srli.d	t0, t0, (PAGE_SHIFT + PTE_ORDER)
@@ -431,12 +431,12 @@ smp_pgtable_change_modify:
 
 	srli.d	ra, t0, _PAGE_WRITE_SHIFT
 	andi	ra, ra, 1
-	beq	ra, zero, nopage_tlb_modify
+	beqz	ra, nopage_tlb_modify
 
 	ori	t0, t0, (_PAGE_VALID | _PAGE_DIRTY | _PAGE_MODIFIED)
 #ifdef CONFIG_SMP
 	sc.d	t0, t1, 0
-	beq	t0, zero, smp_pgtable_change_modify
+	beqz	t0, smp_pgtable_change_modify
 #else
 	st.d	t0, t1, 0
 #endif
@@ -471,14 +471,14 @@ tlb_huge_update_modify:
 
 	srli.d	ra, t0, _PAGE_WRITE_SHIFT
 	andi	ra, ra, 1
-	beq	ra, zero, nopage_tlb_modify
+	beqz	ra, nopage_tlb_modify
 
 	tlbsrch
 	ori	t0, t0, (_PAGE_VALID | _PAGE_DIRTY | _PAGE_MODIFIED)
 
 #ifdef CONFIG_SMP
 	sc.d	t0, t1, 0
-	beq	t0, zero, tlb_huge_update_modify
+	beqz	t0, tlb_huge_update_modify
 	ld.d	t0, t1, 0
 #else
 	st.d	t0, t1, 0
-- 
2.35.1


* [PATCH v3 6/8] LoongArch: Simplify "BLT foo, zero" with BLTZ
  2022-07-26 15:57 [PATCH v3 0/8] LoongArch: minor assembly clean-ups WANG Xuerui
                   ` (4 preceding siblings ...)
  2022-07-26 15:57 ` [PATCH v3 5/8] LoongArch: Simplify "BEQ/BNE foo, zero" with BEQZ/BNEZ WANG Xuerui
@ 2022-07-26 15:57 ` WANG Xuerui
  2022-07-26 15:57 ` [PATCH v3 7/8] LoongArch: Simplify "BGT foo, zero" with BGTZ WANG Xuerui
  2022-07-26 15:57 ` [PATCH v3 8/8] LoongArch: Re-tab the assembly files WANG Xuerui
  7 siblings, 0 replies; 9+ messages in thread
From: WANG Xuerui @ 2022-07-26 15:57 UTC (permalink / raw)
  To: Huacai Chen; +Cc: Jinyang He, loongarch, WANG Xuerui

From: WANG Xuerui <git@xen0n.name>

Support for this syntactic sugar has been present in the upstream
binutils port from the beginning. Use it for shorter lines and better
consistency. The generated code should be identical.
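
For example (illustrative only; the actual lines are in the hunks
below), the sign test on the faulting address

	blt	t0, zero, vmalloc_load

becomes the single-operand form

	bltz	t0, vmalloc_load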

Signed-off-by: WANG Xuerui <git@xen0n.name>
---
 arch/loongarch/include/asm/atomic.h | 8 ++++----
 arch/loongarch/mm/tlbex.S           | 6 +++---
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/loongarch/include/asm/atomic.h b/arch/loongarch/include/asm/atomic.h
index 0869bec2c937..dc2ae4f22c8e 100644
--- a/arch/loongarch/include/asm/atomic.h
+++ b/arch/loongarch/include/asm/atomic.h
@@ -158,7 +158,7 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
 		"1:	ll.w	%1, %2		# atomic_sub_if_positive\n"
 		"	addi.w	%0, %1, %3				\n"
 		"	move	%1, %0					\n"
-		"	blt	%0, $zero, 2f				\n"
+		"	bltz	%0, 2f					\n"
 		"	sc.w	%1, %2					\n"
 		"	beqz	%1, 1b					\n"
 		"2:							\n"
@@ -171,7 +171,7 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
 		"1:	ll.w	%1, %2		# atomic_sub_if_positive\n"
 		"	sub.w	%0, %1, %3				\n"
 		"	move	%1, %0					\n"
-		"	blt	%0, $zero, 2f				\n"
+		"	bltz	%0, 2f					\n"
 		"	sc.w	%1, %2					\n"
 		"	beqz	%1, 1b					\n"
 		"2:							\n"
@@ -321,7 +321,7 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
 		"1:	ll.d	%1, %2	# atomic64_sub_if_positive	\n"
 		"	addi.d	%0, %1, %3				\n"
 		"	move	%1, %0					\n"
-		"	blt	%0, $zero, 2f				\n"
+		"	bltz	%0, 2f					\n"
 		"	sc.d	%1, %2					\n"
 		"	beqz	%1, 1b					\n"
 		"2:							\n"
@@ -334,7 +334,7 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
 		"1:	ll.d	%1, %2	# atomic64_sub_if_positive	\n"
 		"	sub.d	%0, %1, %3				\n"
 		"	move	%1, %0					\n"
-		"	blt	%0, $zero, 2f				\n"
+		"	bltz	%0, 2f					\n"
 		"	sc.d	%1, %2					\n"
 		"	beqz	%1, 1b					\n"
 		"2:							\n"
diff --git a/arch/loongarch/mm/tlbex.S b/arch/loongarch/mm/tlbex.S
index 4d16e27020e0..9ca1e3ff1ded 100644
--- a/arch/loongarch/mm/tlbex.S
+++ b/arch/loongarch/mm/tlbex.S
@@ -47,7 +47,7 @@ SYM_FUNC_START(handle_tlb_load)
 	 * The vmalloc handling is not in the hotpath.
 	 */
 	csrrd	t0, LOONGARCH_CSR_BADV
-	blt	t0, zero, vmalloc_load
+	bltz	t0, vmalloc_load
 	csrrd	t1, LOONGARCH_CSR_PGDL
 
 vmalloc_done_load:
@@ -210,7 +210,7 @@ SYM_FUNC_START(handle_tlb_store)
 	 * The vmalloc handling is not in the hotpath.
 	 */
 	csrrd	t0, LOONGARCH_CSR_BADV
-	blt	t0, zero, vmalloc_store
+	bltz	t0, vmalloc_store
 	csrrd	t1, LOONGARCH_CSR_PGDL
 
 vmalloc_done_store:
@@ -378,7 +378,7 @@ SYM_FUNC_START(handle_tlb_modify)
 	 * The vmalloc handling is not in the hotpath.
 	 */
 	csrrd	t0, LOONGARCH_CSR_BADV
-	blt	t0, zero, vmalloc_modify
+	bltz	t0, vmalloc_modify
 	csrrd	t1, LOONGARCH_CSR_PGDL
 
 vmalloc_done_modify:
-- 
2.35.1


* [PATCH v3 7/8] LoongArch: Simplify "BGT foo, zero" with BGTZ
  2022-07-26 15:57 [PATCH v3 0/8] LoongArch: minor assembly clean-ups WANG Xuerui
                   ` (5 preceding siblings ...)
  2022-07-26 15:57 ` [PATCH v3 6/8] LoongArch: Simplify "BLT foo, zero" with BLTZ WANG Xuerui
@ 2022-07-26 15:57 ` WANG Xuerui
  2022-07-26 15:57 ` [PATCH v3 8/8] LoongArch: Re-tab the assembly files WANG Xuerui
  7 siblings, 0 replies; 9+ messages in thread
From: WANG Xuerui @ 2022-07-26 15:57 UTC (permalink / raw)
  To: Huacai Chen; +Cc: Jinyang He, loongarch, WANG Xuerui

From: WANG Xuerui <git@xen0n.name>

Support for this syntactic sugar has been present in the upstream
binutils port from the beginning. Use it for shorter lines and better
consistency. The generated code should be identical.
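
For example (illustrative only; the actual lines are in the hunks
below), the loop condition in __clear_user

	bgt	a1, zero, 1b

becomes

	bgtz	a1, 1b

Both spellings are assembler sugar for the same underlying BLT, so the
generated code should indeed be identical, as noted above.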

Signed-off-by: WANG Xuerui <git@xen0n.name>
---
 arch/loongarch/lib/clear_user.S | 2 +-
 arch/loongarch/lib/copy_user.S  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/loongarch/lib/clear_user.S b/arch/loongarch/lib/clear_user.S
index 25d9be5fbb19..16ba2b8dd68a 100644
--- a/arch/loongarch/lib/clear_user.S
+++ b/arch/loongarch/lib/clear_user.S
@@ -32,7 +32,7 @@ SYM_FUNC_START(__clear_user)
 1:	st.b	zero, a0, 0
 	addi.d	a0, a0, 1
 	addi.d	a1, a1, -1
-	bgt	a1, zero, 1b
+	bgtz	a1, 1b
 
 2:	move	a0, a1
 	jr	ra
diff --git a/arch/loongarch/lib/copy_user.S b/arch/loongarch/lib/copy_user.S
index 9ae507f851b5..97d20327a69e 100644
--- a/arch/loongarch/lib/copy_user.S
+++ b/arch/loongarch/lib/copy_user.S
@@ -35,7 +35,7 @@ SYM_FUNC_START(__copy_user)
 	addi.d	a0, a0, 1
 	addi.d	a1, a1, 1
 	addi.d	a2, a2, -1
-	bgt	a2, zero, 1b
+	bgtz	a2, 1b
 
 3:	move	a0, a2
 	jr	ra
-- 
2.35.1


* [PATCH v3 8/8] LoongArch: Re-tab the assembly files
  2022-07-26 15:57 [PATCH v3 0/8] LoongArch: minor assembly clean-ups WANG Xuerui
                   ` (6 preceding siblings ...)
  2022-07-26 15:57 ` [PATCH v3 7/8] LoongArch: Simplify "BGT foo, zero" with BGTZ WANG Xuerui
@ 2022-07-26 15:57 ` WANG Xuerui
  7 siblings, 0 replies; 9+ messages in thread
From: WANG Xuerui @ 2022-07-26 15:57 UTC (permalink / raw)
  To: Huacai Chen; +Cc: Jinyang He, loongarch, WANG Xuerui

From: WANG Xuerui <git@xen0n.name>

Reflow the *.S files for better stylistic consistency, namely hard tabs
after the mnemonic, and vertical alignment of the first operand with
hard tabs. Tab width is obviously 8. Some pre-existing intra-block
vertical alignments are preserved.
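
For example (illustrative only; taken from the page.S hunk below), an
operand column previously aligned with spaces

	st.d     zero, a0, 0

becomes

	st.d	zero, a0, 0

with a hard tab after the mnemonic.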

Signed-off-by: WANG Xuerui <git@xen0n.name>
---
 arch/loongarch/kernel/entry.S  |   4 +-
 arch/loongarch/kernel/fpu.S    | 170 ++++++++++++++++-----------------
 arch/loongarch/kernel/genex.S  |   8 +-
 arch/loongarch/kernel/head.S   |   4 +-
 arch/loongarch/kernel/switch.S |   4 +-
 arch/loongarch/mm/page.S       | 118 +++++++++++------------
 arch/loongarch/mm/tlbex.S      |  18 ++--
 7 files changed, 163 insertions(+), 163 deletions(-)

diff --git a/arch/loongarch/kernel/entry.S b/arch/loongarch/kernel/entry.S
index d5b3dbcf5425..d53b631c9022 100644
--- a/arch/loongarch/kernel/entry.S
+++ b/arch/loongarch/kernel/entry.S
@@ -27,7 +27,7 @@ SYM_FUNC_START(handle_syscall)
 
 	addi.d	sp, sp, -PT_SIZE
 	cfi_st	t2, PT_R3
-	cfi_rel_offset  sp, PT_R3
+	cfi_rel_offset	sp, PT_R3
 	st.d	zero, sp, PT_R0
 	csrrd	t2, LOONGARCH_CSR_PRMD
 	st.d	t2, sp, PT_PRMD
@@ -50,7 +50,7 @@ SYM_FUNC_START(handle_syscall)
 	cfi_st	a7, PT_R11
 	csrrd	ra, LOONGARCH_CSR_ERA
 	st.d	ra, sp, PT_ERA
-	cfi_rel_offset ra, PT_ERA
+	cfi_rel_offset	ra, PT_ERA
 
 	cfi_st	tp, PT_R2
 	cfi_st	u0, PT_R21
diff --git a/arch/loongarch/kernel/fpu.S b/arch/loongarch/kernel/fpu.S
index e14f096d40bd..576b3370a296 100644
--- a/arch/loongarch/kernel/fpu.S
+++ b/arch/loongarch/kernel/fpu.S
@@ -27,78 +27,78 @@
 	.endm
 
 	.macro sc_save_fp base
-	EX	fst.d $f0,  \base, (0 * FPU_REG_WIDTH)
-	EX	fst.d $f1,  \base, (1 * FPU_REG_WIDTH)
-	EX	fst.d $f2,  \base, (2 * FPU_REG_WIDTH)
-	EX	fst.d $f3,  \base, (3 * FPU_REG_WIDTH)
-	EX	fst.d $f4,  \base, (4 * FPU_REG_WIDTH)
-	EX	fst.d $f5,  \base, (5 * FPU_REG_WIDTH)
-	EX	fst.d $f6,  \base, (6 * FPU_REG_WIDTH)
-	EX	fst.d $f7,  \base, (7 * FPU_REG_WIDTH)
-	EX	fst.d $f8,  \base, (8 * FPU_REG_WIDTH)
-	EX	fst.d $f9,  \base, (9 * FPU_REG_WIDTH)
-	EX	fst.d $f10, \base, (10 * FPU_REG_WIDTH)
-	EX	fst.d $f11, \base, (11 * FPU_REG_WIDTH)
-	EX	fst.d $f12, \base, (12 * FPU_REG_WIDTH)
-	EX	fst.d $f13, \base, (13 * FPU_REG_WIDTH)
-	EX	fst.d $f14, \base, (14 * FPU_REG_WIDTH)
-	EX	fst.d $f15, \base, (15 * FPU_REG_WIDTH)
-	EX	fst.d $f16, \base, (16 * FPU_REG_WIDTH)
-	EX	fst.d $f17, \base, (17 * FPU_REG_WIDTH)
-	EX	fst.d $f18, \base, (18 * FPU_REG_WIDTH)
-	EX	fst.d $f19, \base, (19 * FPU_REG_WIDTH)
-	EX	fst.d $f20, \base, (20 * FPU_REG_WIDTH)
-	EX	fst.d $f21, \base, (21 * FPU_REG_WIDTH)
-	EX	fst.d $f22, \base, (22 * FPU_REG_WIDTH)
-	EX	fst.d $f23, \base, (23 * FPU_REG_WIDTH)
-	EX	fst.d $f24, \base, (24 * FPU_REG_WIDTH)
-	EX	fst.d $f25, \base, (25 * FPU_REG_WIDTH)
-	EX	fst.d $f26, \base, (26 * FPU_REG_WIDTH)
-	EX	fst.d $f27, \base, (27 * FPU_REG_WIDTH)
-	EX	fst.d $f28, \base, (28 * FPU_REG_WIDTH)
-	EX	fst.d $f29, \base, (29 * FPU_REG_WIDTH)
-	EX	fst.d $f30, \base, (30 * FPU_REG_WIDTH)
-	EX	fst.d $f31, \base, (31 * FPU_REG_WIDTH)
+	EX	fst.d	$f0,  \base, (0 * FPU_REG_WIDTH)
+	EX	fst.d	$f1,  \base, (1 * FPU_REG_WIDTH)
+	EX	fst.d	$f2,  \base, (2 * FPU_REG_WIDTH)
+	EX	fst.d	$f3,  \base, (3 * FPU_REG_WIDTH)
+	EX	fst.d	$f4,  \base, (4 * FPU_REG_WIDTH)
+	EX	fst.d	$f5,  \base, (5 * FPU_REG_WIDTH)
+	EX	fst.d	$f6,  \base, (6 * FPU_REG_WIDTH)
+	EX	fst.d	$f7,  \base, (7 * FPU_REG_WIDTH)
+	EX	fst.d	$f8,  \base, (8 * FPU_REG_WIDTH)
+	EX	fst.d	$f9,  \base, (9 * FPU_REG_WIDTH)
+	EX	fst.d	$f10, \base, (10 * FPU_REG_WIDTH)
+	EX	fst.d	$f11, \base, (11 * FPU_REG_WIDTH)
+	EX	fst.d	$f12, \base, (12 * FPU_REG_WIDTH)
+	EX	fst.d	$f13, \base, (13 * FPU_REG_WIDTH)
+	EX	fst.d	$f14, \base, (14 * FPU_REG_WIDTH)
+	EX	fst.d	$f15, \base, (15 * FPU_REG_WIDTH)
+	EX	fst.d	$f16, \base, (16 * FPU_REG_WIDTH)
+	EX	fst.d	$f17, \base, (17 * FPU_REG_WIDTH)
+	EX	fst.d	$f18, \base, (18 * FPU_REG_WIDTH)
+	EX	fst.d	$f19, \base, (19 * FPU_REG_WIDTH)
+	EX	fst.d	$f20, \base, (20 * FPU_REG_WIDTH)
+	EX	fst.d	$f21, \base, (21 * FPU_REG_WIDTH)
+	EX	fst.d	$f22, \base, (22 * FPU_REG_WIDTH)
+	EX	fst.d	$f23, \base, (23 * FPU_REG_WIDTH)
+	EX	fst.d	$f24, \base, (24 * FPU_REG_WIDTH)
+	EX	fst.d	$f25, \base, (25 * FPU_REG_WIDTH)
+	EX	fst.d	$f26, \base, (26 * FPU_REG_WIDTH)
+	EX	fst.d	$f27, \base, (27 * FPU_REG_WIDTH)
+	EX	fst.d	$f28, \base, (28 * FPU_REG_WIDTH)
+	EX	fst.d	$f29, \base, (29 * FPU_REG_WIDTH)
+	EX	fst.d	$f30, \base, (30 * FPU_REG_WIDTH)
+	EX	fst.d	$f31, \base, (31 * FPU_REG_WIDTH)
 	.endm
 
 	.macro sc_restore_fp base
-	EX	fld.d $f0,  \base, (0 * FPU_REG_WIDTH)
-	EX	fld.d $f1,  \base, (1 * FPU_REG_WIDTH)
-	EX	fld.d $f2,  \base, (2 * FPU_REG_WIDTH)
-	EX	fld.d $f3,  \base, (3 * FPU_REG_WIDTH)
-	EX	fld.d $f4,  \base, (4 * FPU_REG_WIDTH)
-	EX	fld.d $f5,  \base, (5 * FPU_REG_WIDTH)
-	EX	fld.d $f6,  \base, (6 * FPU_REG_WIDTH)
-	EX	fld.d $f7,  \base, (7 * FPU_REG_WIDTH)
-	EX	fld.d $f8,  \base, (8 * FPU_REG_WIDTH)
-	EX	fld.d $f9,  \base, (9 * FPU_REG_WIDTH)
-	EX	fld.d $f10, \base, (10 * FPU_REG_WIDTH)
-	EX	fld.d $f11, \base, (11 * FPU_REG_WIDTH)
-	EX	fld.d $f12, \base, (12 * FPU_REG_WIDTH)
-	EX	fld.d $f13, \base, (13 * FPU_REG_WIDTH)
-	EX	fld.d $f14, \base, (14 * FPU_REG_WIDTH)
-	EX	fld.d $f15, \base, (15 * FPU_REG_WIDTH)
-	EX	fld.d $f16, \base, (16 * FPU_REG_WIDTH)
-	EX	fld.d $f17, \base, (17 * FPU_REG_WIDTH)
-	EX	fld.d $f18, \base, (18 * FPU_REG_WIDTH)
-	EX	fld.d $f19, \base, (19 * FPU_REG_WIDTH)
-	EX	fld.d $f20, \base, (20 * FPU_REG_WIDTH)
-	EX	fld.d $f21, \base, (21 * FPU_REG_WIDTH)
-	EX	fld.d $f22, \base, (22 * FPU_REG_WIDTH)
-	EX	fld.d $f23, \base, (23 * FPU_REG_WIDTH)
-	EX	fld.d $f24, \base, (24 * FPU_REG_WIDTH)
-	EX	fld.d $f25, \base, (25 * FPU_REG_WIDTH)
-	EX	fld.d $f26, \base, (26 * FPU_REG_WIDTH)
-	EX	fld.d $f27, \base, (27 * FPU_REG_WIDTH)
-	EX	fld.d $f28, \base, (28 * FPU_REG_WIDTH)
-	EX	fld.d $f29, \base, (29 * FPU_REG_WIDTH)
-	EX	fld.d $f30, \base, (30 * FPU_REG_WIDTH)
-	EX	fld.d $f31, \base, (31 * FPU_REG_WIDTH)
+	EX	fld.d	$f0,  \base, (0 * FPU_REG_WIDTH)
+	EX	fld.d	$f1,  \base, (1 * FPU_REG_WIDTH)
+	EX	fld.d	$f2,  \base, (2 * FPU_REG_WIDTH)
+	EX	fld.d	$f3,  \base, (3 * FPU_REG_WIDTH)
+	EX	fld.d	$f4,  \base, (4 * FPU_REG_WIDTH)
+	EX	fld.d	$f5,  \base, (5 * FPU_REG_WIDTH)
+	EX	fld.d	$f6,  \base, (6 * FPU_REG_WIDTH)
+	EX	fld.d	$f7,  \base, (7 * FPU_REG_WIDTH)
+	EX	fld.d	$f8,  \base, (8 * FPU_REG_WIDTH)
+	EX	fld.d	$f9,  \base, (9 * FPU_REG_WIDTH)
+	EX	fld.d	$f10, \base, (10 * FPU_REG_WIDTH)
+	EX	fld.d	$f11, \base, (11 * FPU_REG_WIDTH)
+	EX	fld.d	$f12, \base, (12 * FPU_REG_WIDTH)
+	EX	fld.d	$f13, \base, (13 * FPU_REG_WIDTH)
+	EX	fld.d	$f14, \base, (14 * FPU_REG_WIDTH)
+	EX	fld.d	$f15, \base, (15 * FPU_REG_WIDTH)
+	EX	fld.d	$f16, \base, (16 * FPU_REG_WIDTH)
+	EX	fld.d	$f17, \base, (17 * FPU_REG_WIDTH)
+	EX	fld.d	$f18, \base, (18 * FPU_REG_WIDTH)
+	EX	fld.d	$f19, \base, (19 * FPU_REG_WIDTH)
+	EX	fld.d	$f20, \base, (20 * FPU_REG_WIDTH)
+	EX	fld.d	$f21, \base, (21 * FPU_REG_WIDTH)
+	EX	fld.d	$f22, \base, (22 * FPU_REG_WIDTH)
+	EX	fld.d	$f23, \base, (23 * FPU_REG_WIDTH)
+	EX	fld.d	$f24, \base, (24 * FPU_REG_WIDTH)
+	EX	fld.d	$f25, \base, (25 * FPU_REG_WIDTH)
+	EX	fld.d	$f26, \base, (26 * FPU_REG_WIDTH)
+	EX	fld.d	$f27, \base, (27 * FPU_REG_WIDTH)
+	EX	fld.d	$f28, \base, (28 * FPU_REG_WIDTH)
+	EX	fld.d	$f29, \base, (29 * FPU_REG_WIDTH)
+	EX	fld.d	$f30, \base, (30 * FPU_REG_WIDTH)
+	EX	fld.d	$f31, \base, (31 * FPU_REG_WIDTH)
 	.endm
 
 	.macro sc_save_fcc base, tmp0, tmp1
 	movcf2gr	\tmp0, $fcc0
-	move	\tmp1, \tmp0
+	move		\tmp1, \tmp0
 	movcf2gr	\tmp0, $fcc1
 	bstrins.d	\tmp1, \tmp0, 15, 8
 	movcf2gr	\tmp0, $fcc2
@@ -113,11 +113,11 @@
 	bstrins.d	\tmp1, \tmp0, 55, 48
 	movcf2gr	\tmp0, $fcc7
 	bstrins.d	\tmp1, \tmp0, 63, 56
-	EX	st.d \tmp1, \base, 0
+	EX	st.d	\tmp1, \base, 0
 	.endm
 
 	.macro sc_restore_fcc base, tmp0, tmp1
-	EX	ld.d \tmp0, \base, 0
+	EX	ld.d	\tmp0, \base, 0
 	bstrpick.d	\tmp1, \tmp0, 7, 0
 	movgr2cf	$fcc0, \tmp1
 	bstrpick.d	\tmp1, \tmp0, 15, 8
@@ -138,11 +138,11 @@
 
 	.macro sc_save_fcsr base, tmp0
 	movfcsr2gr	\tmp0, fcsr0
-	EX	st.w \tmp0, \base, 0
+	EX	st.w	\tmp0, \base, 0
 	.endm
 
 	.macro sc_restore_fcsr base, tmp0
-	EX	ld.w \tmp0, \base, 0
+	EX	ld.w	\tmp0, \base, 0
 	movgr2fcsr	fcsr0, \tmp0
 	.endm
 
@@ -151,9 +151,9 @@
  */
 SYM_FUNC_START(_save_fp)
 	fpu_save_csr	a0 t1
-	fpu_save_double a0 t1			# clobbers t1
+	fpu_save_double	a0 t1			# clobbers t1
 	fpu_save_cc	a0 t1 t2		# clobbers t1, t2
-	jr	ra
+	jr		ra
 SYM_FUNC_END(_save_fp)
 EXPORT_SYMBOL(_save_fp)
 
@@ -161,10 +161,10 @@ EXPORT_SYMBOL(_save_fp)
  * Restore a thread's fp context.
  */
 SYM_FUNC_START(_restore_fp)
-	fpu_restore_double a0 t1		# clobbers t1
-	fpu_restore_csr	a0 t1
-	fpu_restore_cc	a0 t1 t2		# clobbers t1, t2
-	jr	ra
+	fpu_restore_double	a0 t1		# clobbers t1
+	fpu_restore_csr		a0 t1
+	fpu_restore_cc		a0 t1 t2	# clobbers t1, t2
+	jr			ra
 SYM_FUNC_END(_restore_fp)
 
 /*
@@ -225,11 +225,11 @@ SYM_FUNC_END(_init_fpu)
  * a2: fcsr
  */
 SYM_FUNC_START(_save_fp_context)
-	sc_save_fcc a1 t1 t2
-	sc_save_fcsr a2 t1
-	sc_save_fp a0
-	li.w	a0, 0					# success
-	jr	ra
+	sc_save_fcc	a1 t1 t2
+	sc_save_fcsr	a2 t1
+	sc_save_fp	a0
+	li.w		a0, 0				# success
+	jr		ra
 SYM_FUNC_END(_save_fp_context)
 
 /*
@@ -238,11 +238,11 @@ SYM_FUNC_END(_save_fp_context)
  * a2: fcsr
  */
 SYM_FUNC_START(_restore_fp_context)
-	sc_restore_fp a0
-	sc_restore_fcc a1 t1 t2
-	sc_restore_fcsr a2 t1
-	li.w	a0, 0					# success
-	jr	ra
+	sc_restore_fp	a0
+	sc_restore_fcc	a1 t1 t2
+	sc_restore_fcsr	a2 t1
+	li.w		a0, 0				# success
+	jr		ra
 SYM_FUNC_END(_restore_fp_context)
 
 SYM_FUNC_START(fault)
diff --git a/arch/loongarch/kernel/genex.S b/arch/loongarch/kernel/genex.S
index 0df6d17dde23..75e5be807a0d 100644
--- a/arch/loongarch/kernel/genex.S
+++ b/arch/loongarch/kernel/genex.S
@@ -35,16 +35,16 @@ SYM_FUNC_START(handle_vint)
 	BACKUP_T0T1
 	SAVE_ALL
 	la.abs	t1, __arch_cpu_idle
-	LONG_L  t0, sp, PT_ERA
+	LONG_L	t0, sp, PT_ERA
 	/* 32 byte rollback region */
 	ori	t0, t0, 0x1f
 	xori	t0, t0, 0x1f
 	bne	t0, t1, 1f
-	LONG_S  t0, sp, PT_ERA
+	LONG_S	t0, sp, PT_ERA
 1:	move	a0, sp
 	move	a1, sp
 	la.abs	t0, do_vint
-	jirl    ra, t0, 0
+	jirl	ra, t0, 0
 	RESTORE_ALL_AND_RET
 SYM_FUNC_END(handle_vint)
 
@@ -72,7 +72,7 @@ SYM_FUNC_END(except_vec_cex)
 	build_prep_\prep
 	move	a0, sp
 	la.abs	t0, do_\handler
-	jirl    ra, t0, 0
+	jirl	ra, t0, 0
 	RESTORE_ALL_AND_RET
 	SYM_FUNC_END(handle_\exception)
 	.endm
diff --git a/arch/loongarch/kernel/head.S b/arch/loongarch/kernel/head.S
index fd6a62f17161..7062cdf0e33e 100644
--- a/arch/loongarch/kernel/head.S
+++ b/arch/loongarch/kernel/head.S
@@ -85,8 +85,8 @@ SYM_CODE_START(smpboot_entry)
 	ld.d		sp, t0, CPU_BOOT_STACK
 	ld.d		tp, t0, CPU_BOOT_TINFO
 
-	la.abs	t0, 0f
-	jr	t0
+	la.abs		t0, 0f
+	jr		t0
 0:
 	bl		start_secondary
 SYM_CODE_END(smpboot_entry)
diff --git a/arch/loongarch/kernel/switch.S b/arch/loongarch/kernel/switch.S
index 53e2fa8e580e..37e84ac8ffc2 100644
--- a/arch/loongarch/kernel/switch.S
+++ b/arch/loongarch/kernel/switch.S
@@ -24,8 +24,8 @@ SYM_FUNC_START(__switch_to)
 	move	tp, a2
 	cpu_restore_nonscratch a1
 
-	li.w	t0, _THREAD_SIZE - 32
-	PTR_ADD	t0, t0, tp
+	li.w		t0, _THREAD_SIZE - 32
+	PTR_ADD		t0, t0, tp
 	set_saved_sp	t0, t1, t2
 
 	ldptr.d	t1, a1, THREAD_CSRPRMD
diff --git a/arch/loongarch/mm/page.S b/arch/loongarch/mm/page.S
index 1e20dd5e3a4b..4c874a7af0ad 100644
--- a/arch/loongarch/mm/page.S
+++ b/arch/loongarch/mm/page.S
@@ -10,75 +10,75 @@
 
 	.align 5
 SYM_FUNC_START(clear_page)
-	lu12i.w  t0, 1 << (PAGE_SHIFT - 12)
-	add.d    t0, t0, a0
+	lu12i.w	t0, 1 << (PAGE_SHIFT - 12)
+	add.d	t0, t0, a0
 1:
-	st.d     zero, a0, 0
-	st.d     zero, a0, 8
-	st.d     zero, a0, 16
-	st.d     zero, a0, 24
-	st.d     zero, a0, 32
-	st.d     zero, a0, 40
-	st.d     zero, a0, 48
-	st.d     zero, a0, 56
-	addi.d   a0,   a0, 128
-	st.d     zero, a0, -64
-	st.d     zero, a0, -56
-	st.d     zero, a0, -48
-	st.d     zero, a0, -40
-	st.d     zero, a0, -32
-	st.d     zero, a0, -24
-	st.d     zero, a0, -16
-	st.d     zero, a0, -8
-	bne      t0,   a0, 1b
+	st.d	zero, a0, 0
+	st.d	zero, a0, 8
+	st.d	zero, a0, 16
+	st.d	zero, a0, 24
+	st.d	zero, a0, 32
+	st.d	zero, a0, 40
+	st.d	zero, a0, 48
+	st.d	zero, a0, 56
+	addi.d	a0,   a0, 128
+	st.d	zero, a0, -64
+	st.d	zero, a0, -56
+	st.d	zero, a0, -48
+	st.d	zero, a0, -40
+	st.d	zero, a0, -32
+	st.d	zero, a0, -24
+	st.d	zero, a0, -16
+	st.d	zero, a0, -8
+	bne	t0,   a0, 1b
 
-	jr       ra
+	jr	ra
 SYM_FUNC_END(clear_page)
 EXPORT_SYMBOL(clear_page)
 
 .align 5
 SYM_FUNC_START(copy_page)
-	lu12i.w  t8, 1 << (PAGE_SHIFT - 12)
-	add.d    t8, t8, a0
+	lu12i.w	t8, 1 << (PAGE_SHIFT - 12)
+	add.d	t8, t8, a0
 1:
-	ld.d     t0, a1,  0
-	ld.d     t1, a1,  8
-	ld.d     t2, a1,  16
-	ld.d     t3, a1,  24
-	ld.d     t4, a1,  32
-	ld.d     t5, a1,  40
-	ld.d     t6, a1,  48
-	ld.d     t7, a1,  56
+	ld.d	t0, a1, 0
+	ld.d	t1, a1, 8
+	ld.d	t2, a1, 16
+	ld.d	t3, a1, 24
+	ld.d	t4, a1, 32
+	ld.d	t5, a1, 40
+	ld.d	t6, a1, 48
+	ld.d	t7, a1, 56
 
-	st.d     t0, a0,  0
-	st.d     t1, a0,  8
-	ld.d     t0, a1,  64
-	ld.d     t1, a1,  72
-	st.d     t2, a0,  16
-	st.d     t3, a0,  24
-	ld.d     t2, a1,  80
-	ld.d     t3, a1,  88
-	st.d     t4, a0,  32
-	st.d     t5, a0,  40
-	ld.d     t4, a1,  96
-	ld.d     t5, a1,  104
-	st.d     t6, a0,  48
-	st.d     t7, a0,  56
-	ld.d     t6, a1,  112
-	ld.d     t7, a1,  120
-	addi.d   a0, a0,  128
-	addi.d   a1, a1,  128
+	st.d	t0, a0, 0
+	st.d	t1, a0, 8
+	ld.d	t0, a1, 64
+	ld.d	t1, a1, 72
+	st.d	t2, a0, 16
+	st.d	t3, a0, 24
+	ld.d	t2, a1, 80
+	ld.d	t3, a1, 88
+	st.d	t4, a0, 32
+	st.d	t5, a0, 40
+	ld.d	t4, a1, 96
+	ld.d	t5, a1, 104
+	st.d	t6, a0, 48
+	st.d	t7, a0, 56
+	ld.d	t6, a1, 112
+	ld.d	t7, a1, 120
+	addi.d	a0, a0, 128
+	addi.d	a1, a1, 128
 
-	st.d     t0, a0,  -64
-	st.d     t1, a0,  -56
-	st.d     t2, a0,  -48
-	st.d     t3, a0,  -40
-	st.d     t4, a0,  -32
-	st.d     t5, a0,  -24
-	st.d     t6, a0,  -16
-	st.d     t7, a0,  -8
+	st.d	t0, a0, -64
+	st.d	t1, a0, -56
+	st.d	t2, a0, -48
+	st.d	t3, a0, -40
+	st.d	t4, a0, -32
+	st.d	t5, a0, -24
+	st.d	t6, a0, -16
+	st.d	t7, a0, -8
 
-	bne      t8, a0, 1b
-	jr       ra
+	bne	t8, a0, 1b
+	jr	ra
 SYM_FUNC_END(copy_page)
 EXPORT_SYMBOL(copy_page)
diff --git a/arch/loongarch/mm/tlbex.S b/arch/loongarch/mm/tlbex.S
index 9ca1e3ff1ded..de19fa2d7f0d 100644
--- a/arch/loongarch/mm/tlbex.S
+++ b/arch/loongarch/mm/tlbex.S
@@ -18,7 +18,7 @@
 	REG_S	a2, sp, PT_BVADDR
 	li.w	a1, \write
 	la.abs	t0, do_page_fault
-	jirl    ra, t0, 0
+	jirl	ra, t0, 0
 	RESTORE_ALL_AND_RET
 	SYM_FUNC_END(tlb_do_page_fault_\write)
 	.endm
@@ -34,7 +34,7 @@ SYM_FUNC_START(handle_tlb_protect)
 	csrrd	a2, LOONGARCH_CSR_BADV
 	REG_S	a2, sp, PT_BVADDR
 	la.abs	t0, do_page_fault
-	jirl    ra, t0, 0
+	jirl	ra, t0, 0
 	RESTORE_ALL_AND_RET
 SYM_FUNC_END(handle_tlb_protect)
 
@@ -151,8 +151,8 @@ tlb_huge_update_load:
 	st.d	t0, t1, 0
 #endif
 	addu16i.d	t1, zero, -(CSR_TLBIDX_EHINV >> 16)
-	addi.d	ra, t1, 0
-	csrxchg	ra, t1, LOONGARCH_CSR_TLBIDX
+	addi.d		ra, t1, 0
+	csrxchg		ra, t1, LOONGARCH_CSR_TLBIDX
 	tlbwr
 
 	csrxchg	zero, t1, LOONGARCH_CSR_TLBIDX
@@ -319,8 +319,8 @@ tlb_huge_update_store:
 	st.d	t0, t1, 0
 #endif
 	addu16i.d	t1, zero, -(CSR_TLBIDX_EHINV >> 16)
-	addi.d	ra, t1, 0
-	csrxchg	ra, t1, LOONGARCH_CSR_TLBIDX
+	addi.d		ra, t1, 0
+	csrxchg		ra, t1, LOONGARCH_CSR_TLBIDX
 	tlbwr
 
 	csrxchg	zero, t1, LOONGARCH_CSR_TLBIDX
@@ -454,7 +454,7 @@ leave_modify:
 	ertn
 #ifdef CONFIG_64BIT
 vmalloc_modify:
-	la.abs  t1, swapper_pg_dir
+	la.abs	t1, swapper_pg_dir
 	b	vmalloc_done_modify
 #endif
 
@@ -512,14 +512,14 @@ tlb_huge_update_modify:
 	/* Set huge page tlb entry size */
 	addu16i.d	t0, zero, (CSR_TLBIDX_PS >> 16)
 	addu16i.d	t1, zero, (PS_HUGE_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
-	csrxchg	t1, t0, LOONGARCH_CSR_TLBIDX
+	csrxchg		t1, t0, LOONGARCH_CSR_TLBIDX
 
 	tlbwr
 
 	/* Reset default page size */
 	addu16i.d	t0, zero, (CSR_TLBIDX_PS >> 16)
 	addu16i.d	t1, zero, (PS_DEFAULT_SIZE << (CSR_TLBIDX_PS_SHIFT - 16))
-	csrxchg	t1, t0, LOONGARCH_CSR_TLBIDX
+	csrxchg		t1, t0, LOONGARCH_CSR_TLBIDX
 
 nopage_tlb_modify:
 	dbar	0
-- 
2.35.1

