* [PATCH V4]: minor cleanup and improvement
@ 2021-03-09 8:02 Huang Pei
2021-03-09 8:02 ` [PATCH 1/2] MIPS: clean up CONFIG_MIPS_PGD_C0_CONTEXT handling Huang Pei
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Huang Pei @ 2021-03-09 8:02 UTC (permalink / raw)
To: Thomas Bogendoerfer, ambrosehua
Cc: Bibo Mao, linux-mips, linux-arch, linux-mm, Jiaxun Yang,
Paul Burton, Li Xuefeng, Yang Tiezhu, Gao Juxin, Huacai Chen,
Jinyang He
[PATCH 1/2] V4 vs V3:
+. fix stupid "<<" vs ">>" error, and cast CAC_BASE to u64
+. casting to s64 causes run-time uasm error
+. test running on 3A1000 and 3B1500, OK
+. test building loongson1c_defconfig, OK
[PATCH 2/2] V4 vs V3:
no change
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH 1/2] MIPS: clean up CONFIG_MIPS_PGD_C0_CONTEXT handling
2021-03-09 8:02 [PATCH V4]: minor cleanup and improvement Huang Pei
@ 2021-03-09 8:02 ` Huang Pei
2021-03-12 10:24 ` Thomas Bogendoerfer
2021-03-09 8:02 ` [PATCH 2/2] MIPS: loongson64: alloc pglist_data at run time Huang Pei
2021-03-14 22:31 ` Maciej W. Rozycki
2 siblings, 1 reply; 11+ messages in thread
From: Huang Pei @ 2021-03-09 8:02 UTC (permalink / raw)
To: Thomas Bogendoerfer, ambrosehua
Cc: Bibo Mao, linux-mips, linux-arch, linux-mm, Jiaxun Yang,
Paul Burton, Li Xuefeng, Yang Tiezhu, Gao Juxin, Huacai Chen,
Jinyang He
+. LOONGSON64 uses 0x98xx_xxxx_xxxx_xxxx as xkphys cached
+. let CONFIG_MIPS_PGD_C0_CONTEXT depend on 64BIT
+. cast CAC_BASE to u64 to silence a warning on MIPS32
CP0 Context has enough room for wrapping pgd into its 41-bit PTEBase field.
+. For XKPHYS, the trick is that the pgd is 4kB aligned and PABITS <= 48,
so only 48 - 12 + 5 (for bit[63:59]) = 41 bits need saving, i.e.:
bit[63:59] | 0000 0000 000 | bit[47:12] | 0000 0000 0000
+. for CKSEG0, only 29 - 12 = 17 bits need saving
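The ori immediate the patch computes can be checked in plain user-space C. This is only an illustrative sketch: the two CAC_BASE values below are hardcoded assumptions (in the kernel they come from the platform's spaces.h headers), not part of the patch itself.

```c
#include <stdint.h>

/* Hypothetical local constants: xkphys-cached base for a generic
 * MIPS64 kernel (prefix 0xa8, the old hardcoded case) and for
 * Loongson64 (prefix 0x98). */
#define CAC_BASE_GENERIC  0xa800000000000000ULL
#define CAC_BASE_LOONGSON 0x9800000000000000ULL

/* The ori immediate used by the refill handler: bit[63:59] of
 * CAC_BASE shifted down into bit[10:6], ready for the later
 * drotr by 11 that rotates them back to the top. */
static inline uint64_t xkphys_ori_imm(uint64_t cac_base)
{
    return cac_base >> 53;
}
```

For the generic prefix this reproduces the old hardcoded 0x540 (binary 1 0 1 0 1 in bits 10:6); on Loongson64 it yields 0x4c0 instead, which is why the hardcoded constant had to go.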
Signed-off-by: Huang Pei <huangpei@loongson.cn>
---
arch/mips/Kconfig | 3 ++-
arch/mips/mm/tlbex.c | 10 +++++-----
2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 2000bb2b0220..5741dae35b74 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2142,7 +2142,8 @@ config CPU_SUPPORTS_HUGEPAGES
depends on !(32BIT && (ARCH_PHYS_ADDR_T_64BIT || EVA))
config MIPS_PGD_C0_CONTEXT
bool
- default y if 64BIT && (CPU_MIPSR2 || CPU_MIPSR6) && !CPU_XLP
+ depends on 64BIT
+ default y if (CPU_MIPSR2 || CPU_MIPSR6) && !CPU_XLP
#
# Set to y for ptrace access to watch registers.
diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index a7521b8f7658..591cfa0fca02 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -848,8 +848,8 @@ void build_get_pmde64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
/* Clear lower 23 bits of context. */
uasm_i_dins(p, ptr, 0, 0, 23);
- /* 1 0 1 0 1 << 6 xkphys cached */
- uasm_i_ori(p, ptr, ptr, 0x540);
+ /* insert bit[63:59] of CAC_BASE into bit[11:6] of ptr */
+ uasm_i_ori(p, ptr, ptr, ((u64)(CAC_BASE) >> 53));
uasm_i_drotr(p, ptr, ptr, 11);
#elif defined(CONFIG_SMP)
UASM_i_CPUID_MFC0(p, ptr, SMP_CPUID_REG);
@@ -1164,8 +1164,9 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
if (pgd_reg == -1) {
vmalloc_branch_delay_filled = 1;
- /* 1 0 1 0 1 << 6 xkphys cached */
- uasm_i_ori(p, ptr, ptr, 0x540);
+ /* insert bit[63:59] of CAC_BASE into bit[11:6] of ptr */
+ uasm_i_ori(p, ptr, ptr, ((u64)(CAC_BASE) >> 53));
+
uasm_i_drotr(p, ptr, ptr, 11);
}
@@ -1292,7 +1293,6 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
return rv;
}
-
/*
* For a 64-bit kernel, we are using the 64-bit XTLB refill exception
* because EXL == 0. If we wrap, we can also use the 32 instruction
--
2.17.1
* [PATCH 2/2] MIPS: loongson64: alloc pglist_data at run time
2021-03-09 8:02 [PATCH V4]: minor cleanup and improvement Huang Pei
2021-03-09 8:02 ` [PATCH 1/2] MIPS: clean up CONFIG_MIPS_PGD_C0_CONTEXT handling Huang Pei
@ 2021-03-09 8:02 ` Huang Pei
2021-03-12 10:27 ` Thomas Bogendoerfer
2021-03-14 22:31 ` Maciej W. Rozycki
2 siblings, 1 reply; 11+ messages in thread
From: Huang Pei @ 2021-03-09 8:02 UTC (permalink / raw)
To: Thomas Bogendoerfer, ambrosehua
Cc: Bibo Mao, linux-mips, linux-arch, linux-mm, Jiaxun Yang,
Paul Burton, Li Xuefeng, Yang Tiezhu, Gao Juxin, Huacai Chen,
Jinyang He
Loongson64 statically allocates the array of pglist_data, which lives on
node 0, so CPUs on nodes other than node 0 need remote accesses to reach
their pglist_data and zone information.
Delay the pglist_data allocation until run time and make it NUMA-aware.
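The per-node, cache-aligned allocation the patch introduces can be sketched in user-space C. This is an assumption-laden model, not kernel code: aligned_alloc() stands in for memblock_phys_alloc_try_nid() (which in the kernel directs the allocation at the node's own memory), and pg_data_t is trimmed to two fields.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define MAX_NUMNODES    4
#define SMP_CACHE_BYTES 64

/* trimmed stand-in for the kernel's pg_data_t */
typedef struct {
    unsigned long node_start_pfn;
    unsigned long node_spanned_pages;
} pg_data_t;

static pg_data_t *node_data[MAX_NUMNODES];

/* Models node_mem_init(): one cache-aligned, zeroed pg_data_t per
 * node, allocated at run time instead of from a static array. */
static void node_mem_init(unsigned int node)
{
    /* roundup(sizeof(pg_data_t), SMP_CACHE_BYTES) */
    size_t nd_size = (sizeof(pg_data_t) + SMP_CACHE_BYTES - 1)
                     & ~(size_t)(SMP_CACHE_BYTES - 1);
    pg_data_t *nd = aligned_alloc(SMP_CACHE_BYTES, nd_size);

    if (!nd) {
        fprintf(stderr, "Cannot allocate %zu bytes for node %u data\n",
                nd_size, node);
        exit(1);
    }
    memset(nd, 0, sizeof(*nd));
    node_data[node] = nd;
}
```

Each node ends up with its own distinct, cache-line-aligned descriptor, which is the point of the change: no node has to reach across the interconnect just to read its own pglist_data.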
Signed-off-by: Huang Pei <huangpei@loongson.cn>
---
arch/mips/loongson64/numa.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/arch/mips/loongson64/numa.c b/arch/mips/loongson64/numa.c
index cf9459f79f9b..afafd367cb38 100644
--- a/arch/mips/loongson64/numa.c
+++ b/arch/mips/loongson64/numa.c
@@ -26,7 +26,6 @@
#include <asm/wbflush.h>
#include <boot_param.h>
-static struct pglist_data prealloc__node_data[MAX_NUMNODES];
unsigned char __node_distances[MAX_NUMNODES][MAX_NUMNODES];
EXPORT_SYMBOL(__node_distances);
struct pglist_data *__node_data[MAX_NUMNODES];
@@ -151,8 +150,12 @@ static void __init szmem(unsigned int node)
static void __init node_mem_init(unsigned int node)
{
+ struct pglist_data *nd;
unsigned long node_addrspace_offset;
unsigned long start_pfn, end_pfn;
+ unsigned long nd_pa;
+ int tnid;
+ const size_t nd_size = roundup(sizeof(pg_data_t), SMP_CACHE_BYTES);
node_addrspace_offset = nid_to_addrbase(node);
pr_info("Node%d's addrspace_offset is 0x%lx\n",
@@ -162,8 +165,16 @@ static void __init node_mem_init(unsigned int node)
pr_info("Node%d: start_pfn=0x%lx, end_pfn=0x%lx\n",
node, start_pfn, end_pfn);
- __node_data[node] = prealloc__node_data + node;
-
+ nd_pa = memblock_phys_alloc_try_nid(nd_size, SMP_CACHE_BYTES, node);
+ if (!nd_pa)
+ panic("Cannot allocate %zu bytes for node %d data\n",
+ nd_size, node);
+ nd = __va(nd_pa);
+ memset(nd, 0, sizeof(struct pglist_data));
+ tnid = early_pfn_to_nid(nd_pa >> PAGE_SHIFT);
+ if (tnid != node)
+ pr_info("NODE_DATA(%d) on node %d\n", node, tnid);
+ __node_data[node] = nd;
NODE_DATA(node)->node_start_pfn = start_pfn;
NODE_DATA(node)->node_spanned_pages = end_pfn - start_pfn;
--
2.17.1
* Re: [PATCH 1/2] MIPS: clean up CONFIG_MIPS_PGD_C0_CONTEXT handling
2021-03-09 8:02 ` [PATCH 1/2] MIPS: clean up CONFIG_MIPS_PGD_C0_CONTEXT handling Huang Pei
@ 2021-03-12 10:24 ` Thomas Bogendoerfer
2021-03-13 0:41 ` Huang Pei
2021-03-13 1:18 ` Huang Pei
0 siblings, 2 replies; 11+ messages in thread
From: Thomas Bogendoerfer @ 2021-03-12 10:24 UTC (permalink / raw)
To: Huang Pei
Cc: ambrosehua, Bibo Mao, linux-mips, linux-arch, linux-mm,
Jiaxun Yang, Paul Burton, Li Xuefeng, Yang Tiezhu, Gao Juxin,
Huacai Chen, Jinyang He
On Tue, Mar 09, 2021 at 04:02:09PM +0800, Huang Pei wrote:
> +. LOONGSON64 use 0x98xx_xxxx_xxxx_xxxx as xphys cached
>
> +. let CONFIG_MIPS_PGD_C0_CONTEXT depend on 64bit
>
> +. cast CAC_BASE into u64 to silence warning on MIPS32
>
> CP0 Context has enough room for wraping pgd into its 41-bit PTEBase field.
>
> +. For XPHYS, the trick is that pgd is 4kB aligned, and the PABITS <= 48,
> only save 48 - 12 + 5(for bit[63:59]) = 41 bits, aka. :
>
> bit[63:59] | 0000 0000 000 | bit[47:12] | 0000 0000 0000
>
> +. for CKSEG0, only save 29 - 12 = 17 bits
you are explaining what you are doing, but not why you are doing it.
So why are you doing this?
> #
> # Set to y for ptrace access to watch registers.
> diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
> index a7521b8f7658..591cfa0fca02 100644
> --- a/arch/mips/mm/tlbex.c
> +++ b/arch/mips/mm/tlbex.c
> @@ -848,8 +848,8 @@ void build_get_pmde64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
> /* Clear lower 23 bits of context. */
> uasm_i_dins(p, ptr, 0, 0, 23);
>
> - /* 1 0 1 0 1 << 6 xkphys cached */
> - uasm_i_ori(p, ptr, ptr, 0x540);
> + /* insert bit[63:59] of CAC_BASE into bit[11:6] of ptr */
> + uasm_i_ori(p, ptr, ptr, ((u64)(CAC_BASE) >> 53));
you want to use bits 63..59, but you are picking bits 63..53 with this. While
bits 58..53 are probably 0, wouldn't it also make sense to mask them out?
> uasm_i_drotr(p, ptr, ptr, 11);
> #elif defined(CONFIG_SMP)
> UASM_i_CPUID_MFC0(p, ptr, SMP_CPUID_REG);
> @@ -1164,8 +1164,9 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
>
> if (pgd_reg == -1) {
> vmalloc_branch_delay_filled = 1;
> - /* 1 0 1 0 1 << 6 xkphys cached */
> - uasm_i_ori(p, ptr, ptr, 0x540);
> + /* insert bit[63:59] of CAC_BASE into bit[11:6] of ptr */
> + uasm_i_ori(p, ptr, ptr, ((u64)(CAC_BASE) >> 53));
> +
> uasm_i_drotr(p, ptr, ptr, 11);
> }
>
> @@ -1292,7 +1293,6 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
>
> return rv;
> }
> -
> /*
why are you removing this empty line? I'd prefer that it stays there...
> * For a 64-bit kernel, we are using the 64-bit XTLB refill exception
> * because EXL == 0. If we wrap, we can also use the 32 instruction
> --
> 2.17.1
--
Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
good idea. [ RFC1925, 2.3 ]
* Re: [PATCH 2/2] MIPS: loongson64: alloc pglist_data at run time
2021-03-09 8:02 ` [PATCH 2/2] MIPS: loongson64: alloc pglist_data at run time Huang Pei
@ 2021-03-12 10:27 ` Thomas Bogendoerfer
0 siblings, 0 replies; 11+ messages in thread
From: Thomas Bogendoerfer @ 2021-03-12 10:27 UTC (permalink / raw)
To: Huang Pei
Cc: ambrosehua, Bibo Mao, linux-mips, linux-arch, linux-mm,
Jiaxun Yang, Paul Burton, Li Xuefeng, Yang Tiezhu, Gao Juxin,
Huacai Chen, Jinyang He
On Tue, Mar 09, 2021 at 04:02:10PM +0800, Huang Pei wrote:
> Loongson64 allocates arrays of pglist_data statically and is located
> at Node 0, and cpu from Nodes other than 0 need remote access to
> pglist_data and zone info.
>
> Delay pglist_data allocation till run time, and make it NUMA-aware
>
> Signed-off-by: Huang Pei <huangpei@loongson.cn>
> ---
> arch/mips/loongson64/numa.c | 17 ++++++++++++++---
> 1 file changed, 14 insertions(+), 3 deletions(-)
applied to mips-next.
This patch looks independent from the first one in this series (that's
why I've applied it). So please post such patches as single patches
and not as a series.
Thomas.
--
Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
good idea. [ RFC1925, 2.3 ]
* Re: [PATCH 1/2] MIPS: clean up CONFIG_MIPS_PGD_C0_CONTEXT handling
2021-03-12 10:24 ` Thomas Bogendoerfer
@ 2021-03-13 0:41 ` Huang Pei
2021-03-13 1:18 ` Huang Pei
1 sibling, 0 replies; 11+ messages in thread
From: Huang Pei @ 2021-03-13 0:41 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: ambrosehua, Bibo Mao, linux-mips, linux-arch, linux-mm,
Jiaxun Yang, Paul Burton, Li Xuefeng, Yang Tiezhu, Gao Juxin,
Huacai Chen, Jinyang He
Hi,
On Fri, Mar 12, 2021 at 11:24:10AM +0100, Thomas Bogendoerfer wrote:
> On Tue, Mar 09, 2021 at 04:02:09PM +0800, Huang Pei wrote:
> > +. LOONGSON64 use 0x98xx_xxxx_xxxx_xxxx as xphys cached
> >
> > +. let CONFIG_MIPS_PGD_C0_CONTEXT depend on 64bit
> >
> > +. cast CAC_BASE into u64 to silence warning on MIPS32
> >
> > CP0 Context has enough room for wraping pgd into its 41-bit PTEBase field.
> >
> > +. For XPHYS, the trick is that pgd is 4kB aligned, and the PABITS <= 48,
> > only save 48 - 12 + 5(for bit[63:59]) = 41 bits, aka. :
> >
> > bit[63:59] | 0000 0000 000 | bit[47:12] | 0000 0000 0000
> >
> > +. for CKSEG0, only save 29 - 12 = 17 bits
>
> you are explaining what you are doing, but not why you are doing this.
> So why are you doing this ?
>
LOONGSON64 uses 0x98xx_xxxx_xxxx_xxxx as xkphys cached, instead of
0xa8xx_xxxx_xxxx_xxxx;
> > #
> > # Set to y for ptrace access to watch registers.
> > diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
> > index a7521b8f7658..591cfa0fca02 100644
> > --- a/arch/mips/mm/tlbex.c
> > +++ b/arch/mips/mm/tlbex.c
> > @@ -848,8 +848,8 @@ void build_get_pmde64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
> > /* Clear lower 23 bits of context. */
> > uasm_i_dins(p, ptr, 0, 0, 23);
> >
> > - /* 1 0 1 0 1 << 6 xkphys cached */
> > - uasm_i_ori(p, ptr, ptr, 0x540);
> > + /* insert bit[63:59] of CAC_BASE into bit[11:6] of ptr */
> > + uasm_i_ori(p, ptr, ptr, ((u64)(CAC_BASE) >> 53));
>
> you want to use bits 63..59 but picking bits 63..53 with this. While
> bits 58..53 are probably 0, wouldn't it make also sense to mask them out ?
In CP0 Context, the xkphys address is wrapped as:
bit[47:12] (36 bits) | bit[63:59] (5 bits) | badv2 (19 bits) | 0 (4 bits)
bit[58:53] lands in badv2, which is not used. Whether it was an xkphys
cached or a CKSEG0 address that got wrapped into CP0 Context, it is
extracted as xkphys cached by prefixing it with bit[63:59].
>
> > uasm_i_drotr(p, ptr, ptr, 11);
> > #elif defined(CONFIG_SMP)
> > UASM_i_CPUID_MFC0(p, ptr, SMP_CPUID_REG);
> > @@ -1164,8 +1164,9 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
> >
> > if (pgd_reg == -1) {
> > vmalloc_branch_delay_filled = 1;
> > - /* 1 0 1 0 1 << 6 xkphys cached */
> > - uasm_i_ori(p, ptr, ptr, 0x540);
> > + /* insert bit[63:59] of CAC_BASE into bit[11:6] of ptr */
> > + uasm_i_ori(p, ptr, ptr, ((u64)(CAC_BASE) >> 53));
> > +
> > uasm_i_drotr(p, ptr, ptr, 11);
> > }
> >
> > @@ -1292,7 +1293,6 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
> >
> > return rv;
> > }
> > -
> > /*
>
> why are you removing this empty line ? I'd prefer that it stays there...
>
> > * For a 64-bit kernel, we are using the 64-bit XTLB refill exception
> > * because EXL == 0. If we wrap, we can also use the 32 instruction
OK, I will resend V5
> > --
> > 2.17.1
>
> --
> Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
> good idea. [ RFC1925, 2.3 ]
* Re: [PATCH 1/2] MIPS: clean up CONFIG_MIPS_PGD_C0_CONTEXT handling
2021-03-12 10:24 ` Thomas Bogendoerfer
2021-03-13 0:41 ` Huang Pei
@ 2021-03-13 1:18 ` Huang Pei
1 sibling, 0 replies; 11+ messages in thread
From: Huang Pei @ 2021-03-13 1:18 UTC (permalink / raw)
To: Thomas Bogendoerfer
Cc: ambrosehua, Bibo Mao, linux-mips, linux-arch, linux-mm,
Jiaxun Yang, Paul Burton, Li Xuefeng, Yang Tiezhu, Gao Juxin,
Huacai Chen, Jinyang He
Hi, my calculation was wrong, but the result is right.
Here is the new one:
CP0 Context has enough room for wrapping pgd into its 41-bit PTEBase field.
+. For XKPHYS, the trick is that the pgd is 4kB aligned and PABITS <= 53,
so only 53 - 12 = 41 bits need saving:
bit[63:59] | 0000 00 | bit[52:12] | 0000 0000 0000
+. for CKSEG0, only 29 - 12 = 17 bits need saving
So, when switching pgd, only bit[52:12] or bit[28:12] is saved into CP0
Context's bit[63:23]; see the following asm generated at run time (a0 holds
the pgd):
.set push
.set noreorder
tlbmiss_handler_setup_pgd:
dsra a2, a0, 29
move a3, a0
dins a0, zero, 29, 35
daddiu a2, a2, 4
movn a0, a3, a2
dsll a0, a0, 11
jr ra
dmtc0 a0, CP0_CONTEXT
.set pop
When the pgd is used during page walking:
dmfc0 k0, CP0_CONTEXT
dins k0, k0, 0, 23 //zero badv2
ori k0, k0, (CAC_BASE >> 53) //*prefix* with bit[63:59]
drotr k0, k0, 11 // kick it at right position
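The wrap/unwrap pair above can be modelled in user-space C to convince yourself it round-trips. A sketch only: CAC_BASE is hardcoded to the Loongson64 value as an assumption, and each statement mirrors one of the generated instructions.

```c
#include <stdint.h>

#define CAC_BASE 0x9800000000000000ULL  /* Loongson64 xkphys cached (assumed) */

/* Model of tlbmiss_handler_setup_pgd: pack the pgd pointer into the
 * 41-bit PTEBase field, i.e. Context bits [63:23]. */
static uint64_t wrap_pgd(uint64_t pgd)
{
    int64_t  a2 = (int64_t)pgd >> 29;        /* dsra   a2, a0, 29        */
    uint64_t a3 = pgd;                       /* move   a3, a0            */
    uint64_t a0 = pgd & ((1ULL << 29) - 1);  /* dins   a0, zero, 29, 35  */
    a2 += 4;                                 /* daddiu a2, a2, 4         */
    if (a2 != 0)                             /* movn   a0, a3, a2        */
        a0 = a3;                             /* xkphys: keep full address */
    return a0 << 11;                         /* dsll   a0, a0, 11        */
}

/* Model of the refill-handler sequence: recover a cached-xkphys
 * pointer to the pgd from the CP0 Context value. */
static uint64_t unwrap_pgd(uint64_t ctx)
{
    uint64_t k0 = ctx & ~((1ULL << 23) - 1); /* dins: zero badv2          */
    k0 |= CAC_BASE >> 53;                    /* ori: CAC_BASE[63:59] into k0[10:6] */
    return (k0 >> 11) | (k0 << (64 - 11));   /* drotr k0, k0, 11          */
}
```

An xkphys cached pgd comes back unchanged, and a CKSEG0 pgd comes back as the xkphys cached alias of the same physical address, which is exactly the "extracted as xkphys cached" behaviour described above.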
On Fri, Mar 12, 2021 at 11:24:10AM +0100, Thomas Bogendoerfer wrote:
> On Tue, Mar 09, 2021 at 04:02:09PM +0800, Huang Pei wrote:
> > +. LOONGSON64 use 0x98xx_xxxx_xxxx_xxxx as xphys cached
> >
> > +. let CONFIG_MIPS_PGD_C0_CONTEXT depend on 64bit
> >
> > +. cast CAC_BASE into u64 to silence warning on MIPS32
> >
> > CP0 Context has enough room for wraping pgd into its 41-bit PTEBase field.
> >
> > +. For XPHYS, the trick is that pgd is 4kB aligned, and the PABITS <= 48,
> > only save 48 - 12 + 5(for bit[63:59]) = 41 bits, aka. :
> >
> > bit[63:59] | 0000 0000 000 | bit[47:12] | 0000 0000 0000
> >
> > +. for CKSEG0, only save 29 - 12 = 17 bits
>
> you are explaining what you are doing, but not why you are doing this.
> So why are you doing this ?
>
> > #
> > # Set to y for ptrace access to watch registers.
> > diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
> > index a7521b8f7658..591cfa0fca02 100644
> > --- a/arch/mips/mm/tlbex.c
> > +++ b/arch/mips/mm/tlbex.c
> > @@ -848,8 +848,8 @@ void build_get_pmde64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
> > /* Clear lower 23 bits of context. */
> > uasm_i_dins(p, ptr, 0, 0, 23);
> >
> > - /* 1 0 1 0 1 << 6 xkphys cached */
> > - uasm_i_ori(p, ptr, ptr, 0x540);
> > + /* insert bit[63:59] of CAC_BASE into bit[11:6] of ptr */
> > + uasm_i_ori(p, ptr, ptr, ((u64)(CAC_BASE) >> 53));
>
> you want to use bits 63..59 but picking bits 63..53 with this. While
> bits 58..53 are probably 0, wouldn't it make also sense to mask them out ?
>
> > uasm_i_drotr(p, ptr, ptr, 11);
> > #elif defined(CONFIG_SMP)
> > UASM_i_CPUID_MFC0(p, ptr, SMP_CPUID_REG);
> > @@ -1164,8 +1164,9 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
> >
> > if (pgd_reg == -1) {
> > vmalloc_branch_delay_filled = 1;
> > - /* 1 0 1 0 1 << 6 xkphys cached */
> > - uasm_i_ori(p, ptr, ptr, 0x540);
> > + /* insert bit[63:59] of CAC_BASE into bit[11:6] of ptr */
> > + uasm_i_ori(p, ptr, ptr, ((u64)(CAC_BASE) >> 53));
> > +
> > uasm_i_drotr(p, ptr, ptr, 11);
> > }
> >
> > @@ -1292,7 +1293,6 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
> >
> > return rv;
> > }
> > -
> > /*
>
> why are you removing this empty line ? I'd prefer that it stays there...
>
> > * For a 64-bit kernel, we are using the 64-bit XTLB refill exception
> > * because EXL == 0. If we wrap, we can also use the 32 instruction
> > --
> > 2.17.1
>
> --
> Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
> good idea. [ RFC1925, 2.3 ]
* Re: [PATCH V4]: minor cleanup and improvement
2021-03-09 8:02 [PATCH V4]: minor cleanup and improvement Huang Pei
@ 2021-03-14 22:31 ` Maciej W. Rozycki
2021-03-09 8:02 ` [PATCH 2/2] MIPS: loongson64: alloc pglist_data at run time Huang Pei
2021-03-14 22:31 ` Maciej W. Rozycki
2 siblings, 0 replies; 11+ messages in thread
From: Maciej W. Rozycki @ 2021-03-14 22:31 UTC (permalink / raw)
To: Huang Pei
Cc: Thomas Bogendoerfer, ambrosehua, Bibo Mao, linux-mips,
linux-arch, linux-mm, Jiaxun Yang, Paul Burton, Li Xuefeng,
Yang Tiezhu, Gao Juxin, Huacai Chen, Jinyang He
On Tue, 9 Mar 2021, Huang Pei wrote:
> [PATCH 1/2] V4 vs V3:
It will help if you don't change the subject proper of the cover letter
with every iteration of a patch series, as in that case mail user agents
(at least the sane ones) will group all iterations together in the thread
sorting mode. With the subject changed every time the link is lost and
submissions are scattered all over the mail folder.
Maciej
* Re: [PATCH V4]: minor cleanup and improvement
2021-03-14 22:31 ` Maciej W. Rozycki
@ 2021-03-16 12:55 ` Huang Pei
-1 siblings, 0 replies; 11+ messages in thread
From: Huang Pei @ 2021-03-16 12:55 UTC (permalink / raw)
To: Maciej W. Rozycki
Cc: Thomas Bogendoerfer, ambrosehua, Bibo Mao, linux-mips,
linux-arch, linux-mm, Jiaxun Yang, Paul Burton, Li Xuefeng,
Yang Tiezhu, Gao Juxin, Huacai Chen, Jinyang He
hi,
On Sun, Mar 14, 2021 at 11:31:49PM +0100, Maciej W. Rozycki wrote:
> On Tue, 9 Mar 2021, Huang Pei wrote:
>
> > [PATCH 1/2] V4 vs V3:
>
> It will help if you don't change the subject proper of the cover letter
> with every iteration of a patch series, as in that case mail user agents
> (at least the sane ones) will group all iterations together in the thread
> sorting mode. With the subject changed every time the link is lost and
> submissions are scattered all over the mail folder.
>
> Maciej
I got it, thank you
* [PATCH 1/2] MIPS: clean up CONFIG_MIPS_PGD_C0_CONTEXT handling
2021-03-09 1:54 [PATCH V3]: minor cleanup on TLB and MM Huang Pei
@ 2021-03-09 1:54 ` Huang Pei
0 siblings, 0 replies; 11+ messages in thread
From: Huang Pei @ 2021-03-09 1:54 UTC (permalink / raw)
To: Thomas Bogendoerfer, ambrosehua
Cc: Bibo Mao, linux-mips, linux-arch, linux-mm, Jiaxun Yang,
Paul Burton, Li Xuefeng, Yang Tiezhu, Gao Juxin, Huacai Chen,
Jinyang He
+. LOONGSON64 uses 0x98xx_xxxx_xxxx_xxxx as xkphys cached
+. let CONFIG_MIPS_PGD_C0_CONTEXT depend on 64BIT
CP0 Context has enough room for wrapping pgd into its 41-bit PTEBase field.
+. For XKPHYS, the trick is that the pgd is 4kB aligned and PABITS <= 48,
so only 48 - 12 + 5 (for bit[63:59]) = 41 bits need saving, i.e.:
bit[63:59] | 0000 0000 000 | bit[47:12] | 0000 0000 0000
+. for CKSEG0, only 29 - 12 = 17 bits need saving
Signed-off-by: Huang Pei <huangpei@loongson.cn>
---
arch/mips/Kconfig | 3 ++-
arch/mips/mm/tlbex.c | 10 +++++-----
2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 2000bb2b0220..5741dae35b74 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2142,7 +2142,8 @@ config CPU_SUPPORTS_HUGEPAGES
depends on !(32BIT && (ARCH_PHYS_ADDR_T_64BIT || EVA))
config MIPS_PGD_C0_CONTEXT
bool
- default y if 64BIT && (CPU_MIPSR2 || CPU_MIPSR6) && !CPU_XLP
+ depends on 64BIT
+ default y if (CPU_MIPSR2 || CPU_MIPSR6) && !CPU_XLP
#
# Set to y for ptrace access to watch registers.
diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index a7521b8f7658..e775f7adf279 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -848,8 +848,8 @@ void build_get_pmde64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
/* Clear lower 23 bits of context. */
uasm_i_dins(p, ptr, 0, 0, 23);
- /* 1 0 1 0 1 << 6 xkphys cached */
- uasm_i_ori(p, ptr, ptr, 0x540);
+ /* insert bit[63:59] of CAC_BASE into bit[11:6] of ptr */
+ uasm_i_ori(p, ptr, ptr, ((s64)(CAC_BASE) << 53));
uasm_i_drotr(p, ptr, ptr, 11);
#elif defined(CONFIG_SMP)
UASM_i_CPUID_MFC0(p, ptr, SMP_CPUID_REG);
@@ -1164,8 +1164,9 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
if (pgd_reg == -1) {
vmalloc_branch_delay_filled = 1;
- /* 1 0 1 0 1 << 6 xkphys cached */
- uasm_i_ori(p, ptr, ptr, 0x540);
+ /* insert bit[63:59] of CAC_BASE into bit[11:6] of ptr */
+ uasm_i_ori(p, ptr, ptr, ((s64)(CAC_BASE) << 53));
+
uasm_i_drotr(p, ptr, ptr, 11);
}
@@ -1292,7 +1293,6 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l,
return rv;
}
-
/*
* For a 64-bit kernel, we are using the 64-bit XTLB refill exception
* because EXL == 0. If we wrap, we can also use the 32 instruction
--
2.17.1