* [PATCH V2 0/4] csky: Use generic asid function from arm
@ 2019-06-21  9:39 guoren
  2019-06-21  9:39 ` [PATCH V2 1/4] csky: Revert mmu ASID mechanism guoren
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: guoren @ 2019-06-21  9:39 UTC (permalink / raw)
  To: julien.grall, arnd, linux-kernel; +Cc: linux-csky, Guo Ren

From: Guo Ren <ren_guo@c-sky.com>

The first C-SKY ASID implementation was taken from mips and its SMP
performance is poor. Arm's asid allocator is a better fit for us: we have
tested it on all of our CPUs (610/807/810/860) and it noticeably reduces
tlb flushes. We'll continue stress testing before this is merged into
master.

Changes for V2:
 - Add back the set/clear of mm_cpumask.
 - Update the commit messages to explain the asid allocator.

Guo Ren (4):
  csky: Revert mmu ASID mechanism
  csky: Add new asid lib code from arm
  csky: Use generic asid algorithm to implement switch_mm
  csky: Improve tlb operation with help of asid

 arch/arm64/lib/asid.c               |   9 +-
 arch/csky/abiv1/inc/abi/ckmmu.h     |   6 +
 arch/csky/abiv2/inc/abi/ckmmu.h     |  10 ++
 arch/csky/include/asm/asid.h        |  78 ++++++++++++
 arch/csky/include/asm/mmu.h         |   2 +-
 arch/csky/include/asm/mmu_context.h | 114 ++---------------
 arch/csky/include/asm/pgtable.h     |   2 -
 arch/csky/kernel/smp.c              |   2 -
 arch/csky/mm/Makefile               |   2 +
 arch/csky/mm/asid.c                 | 188 ++++++++++++++++++++++++++++
 arch/csky/mm/context.c              |  46 +++++++
 arch/csky/mm/init.c                 |   2 -
 arch/csky/mm/tlb.c                  | 238 ++++++++++++++----------------------
 13 files changed, 444 insertions(+), 255 deletions(-)
 create mode 100644 arch/csky/include/asm/asid.h
 create mode 100644 arch/csky/mm/asid.c
 create mode 100644 arch/csky/mm/context.c

-- 
2.7.4



* [PATCH V2 1/4] csky: Revert mmu ASID mechanism
  2019-06-21  9:39 [PATCH V2 0/4] csky: Use generic asid function from arm guoren
@ 2019-06-21  9:39 ` guoren
  2019-06-21  9:39 ` [PATCH V2 2/4] csky: Add new asid lib code from arm guoren
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: guoren @ 2019-06-21  9:39 UTC (permalink / raw)
  To: julien.grall, arnd, linux-kernel; +Cc: linux-csky, Guo Ren, Arnd Bergmann

From: Guo Ren <ren_guo@c-sky.com>

The current C-SKY ASID mechanism comes from mips and it doesn't work well
on multi-core systems. A per-core ASID scheme does not suit C-SKY SMP tlb
maintenance operations: tlbi.vas, for example, requires all processors to
share the same asid, because it broadcasts and invalidates the tlb entries
tagged with that asid on every core. With per-core ASIDs the same mm could
hold different asids on different cores, so such a broadcast would miss
some of its entries.

This patch prepares for the new ASID mechanism.

Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Cc: Arnd Bergmann <arnd@arnd.de>
---
 arch/csky/include/asm/mmu.h         |   1 -
 arch/csky/include/asm/mmu_context.h | 112 ++-------------------
 arch/csky/include/asm/pgtable.h     |   2 -
 arch/csky/kernel/smp.c              |   2 -
 arch/csky/mm/init.c                 |   2 -
 arch/csky/mm/tlb.c                  | 190 ++----------------------------------
 6 files changed, 14 insertions(+), 295 deletions(-)

diff --git a/arch/csky/include/asm/mmu.h b/arch/csky/include/asm/mmu.h
index cb34467..06f509a 100644
--- a/arch/csky/include/asm/mmu.h
+++ b/arch/csky/include/asm/mmu.h
@@ -5,7 +5,6 @@
 #define __ASM_CSKY_MMU_H
 
 typedef struct {
-	unsigned long asid[NR_CPUS];
 	void *vdso;
 } mm_context_t;
 
diff --git a/arch/csky/include/asm/mmu_context.h b/arch/csky/include/asm/mmu_context.h
index 734db3a1..86dde48 100644
--- a/arch/csky/include/asm/mmu_context.h
+++ b/arch/csky/include/asm/mmu_context.h
@@ -16,122 +16,24 @@
 
 #define TLBMISS_HANDLER_SETUP_PGD(pgd) \
 	setup_pgd(__pa(pgd), false)
+
 #define TLBMISS_HANDLER_SETUP_PGD_KERNEL(pgd) \
 	setup_pgd(__pa(pgd), true)
 
-#define cpu_context(cpu, mm)	((mm)->context.asid[cpu])
-#define cpu_asid(cpu, mm)	(cpu_context((cpu), (mm)) & ASID_MASK)
-#define asid_cache(cpu)		(cpu_data[cpu].asid_cache)
-
-#define ASID_FIRST_VERSION	(1 << CONFIG_CPU_ASID_BITS)
-#define ASID_INC		0x1
-#define ASID_MASK		(ASID_FIRST_VERSION - 1)
-#define ASID_VERSION_MASK	~ASID_MASK
+#define init_new_context(tsk,mm)	0
+#define activate_mm(prev,next)		switch_mm(prev, next, current)
 
 #define destroy_context(mm)		do {} while (0)
 #define enter_lazy_tlb(mm, tsk)		do {} while (0)
 #define deactivate_mm(tsk, mm)		do {} while (0)
 
-/*
- *  All unused by hardware upper bits will be considered
- *  as a software asid extension.
- */
-static inline void
-get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
-{
-	unsigned long asid = asid_cache(cpu);
-
-	asid += ASID_INC;
-	if (!(asid & ASID_MASK)) {
-		flush_tlb_all();	/* start new asid cycle */
-		if (!asid)		/* fix version if needed */
-			asid = ASID_FIRST_VERSION;
-	}
-	cpu_context(cpu, mm) = asid_cache(cpu) = asid;
-}
-
-/*
- * Initialize the context related info for a new mm_struct
- * instance.
- */
-static inline int
-init_new_context(struct task_struct *tsk, struct mm_struct *mm)
-{
-	int i;
-
-	for_each_online_cpu(i)
-		cpu_context(i, mm) = 0;
-	return 0;
-}
-
-static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
-			struct task_struct *tsk)
-{
-	unsigned int cpu = smp_processor_id();
-	unsigned long flags;
-
-	local_irq_save(flags);
-	/* Check if our ASID is of an older version and thus invalid */
-	if ((cpu_context(cpu, next) ^ asid_cache(cpu)) & ASID_VERSION_MASK)
-		get_new_mmu_context(next, cpu);
-	write_mmu_entryhi(cpu_asid(cpu, next));
-	TLBMISS_HANDLER_SETUP_PGD(next->pgd);
-
-	/*
-	 * Mark current->active_mm as not "active" anymore.
-	 * We don't want to mislead possible IPI tlb flush routines.
-	 */
-	cpumask_clear_cpu(cpu, mm_cpumask(prev));
-	cpumask_set_cpu(cpu, mm_cpumask(next));
-
-	local_irq_restore(flags);
-}
-
-/*
- * After we have set current->mm to a new value, this activates
- * the context for the new mm so we see the new mappings.
- */
 static inline void
-activate_mm(struct mm_struct *prev, struct mm_struct *next)
+switch_mm(struct mm_struct *prev, struct mm_struct *next,
+	  struct task_struct *tsk)
 {
-	unsigned long flags;
-	int cpu = smp_processor_id();
-
-	local_irq_save(flags);
+	if (prev != next)
+		tlb_invalid_all();
 
-	/* Unconditionally get a new ASID.  */
-	get_new_mmu_context(next, cpu);
-
-	write_mmu_entryhi(cpu_asid(cpu, next));
 	TLBMISS_HANDLER_SETUP_PGD(next->pgd);
-
-	/* mark mmu ownership change */
-	cpumask_clear_cpu(cpu, mm_cpumask(prev));
-	cpumask_set_cpu(cpu, mm_cpumask(next));
-
-	local_irq_restore(flags);
 }
-
-/*
- * If mm is currently active_mm, we can't really drop it. Instead,
- * we will get a new one for it.
- */
-static inline void
-drop_mmu_context(struct mm_struct *mm, unsigned int cpu)
-{
-	unsigned long flags;
-
-	local_irq_save(flags);
-
-	if (cpumask_test_cpu(cpu, mm_cpumask(mm)))  {
-		get_new_mmu_context(mm, cpu);
-		write_mmu_entryhi(cpu_asid(cpu, mm));
-	} else {
-		/* will get a new context next time */
-		cpu_context(cpu, mm) = 0;
-	}
-
-	local_irq_restore(flags);
-}
-
 #endif /* __ASM_CSKY_MMU_CONTEXT_H */
diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index dcea277..c429a6f 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -290,8 +290,6 @@ static inline pte_t *pte_offset(pmd_t *dir, unsigned long address)
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern void paging_init(void);
 
-extern void show_jtlb_table(void);
-
 void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
 		      pte_t *pte);
 
diff --git a/arch/csky/kernel/smp.c b/arch/csky/kernel/smp.c
index b07a534..b753d38 100644
--- a/arch/csky/kernel/smp.c
+++ b/arch/csky/kernel/smp.c
@@ -212,8 +212,6 @@ void csky_start_secondary(void)
 	TLBMISS_HANDLER_SETUP_PGD(swapper_pg_dir);
 	TLBMISS_HANDLER_SETUP_PGD_KERNEL(swapper_pg_dir);
 
-	asid_cache(smp_processor_id()) = ASID_FIRST_VERSION;
-
 #ifdef CONFIG_CPU_HAS_FPU
 	init_fpu();
 #endif
diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c
index 66e5970..eb0dc9e 100644
--- a/arch/csky/mm/init.c
+++ b/arch/csky/mm/init.c
@@ -114,8 +114,6 @@ void __init pre_mmu_init(void)
 	TLBMISS_HANDLER_SETUP_PGD(swapper_pg_dir);
 	TLBMISS_HANDLER_SETUP_PGD_KERNEL(swapper_pg_dir);
 
-	asid_cache(smp_processor_id()) = ASID_FIRST_VERSION;
-
 	/* Setup page mask to 4k */
 	write_mmu_pagemask(0);
 }
diff --git a/arch/csky/mm/tlb.c b/arch/csky/mm/tlb.c
index 08b8394..efae81c 100644
--- a/arch/csky/mm/tlb.c
+++ b/arch/csky/mm/tlb.c
@@ -10,8 +10,6 @@
 #include <asm/pgtable.h>
 #include <asm/setup.h>
 
-#define CSKY_TLB_SIZE CONFIG_CPU_TLB_SIZE
-
 void flush_tlb_all(void)
 {
 	tlb_invalid_all();
@@ -19,201 +17,27 @@ void flush_tlb_all(void)
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	int cpu = smp_processor_id();
-
-	if (cpu_context(cpu, mm) != 0)
-		drop_mmu_context(mm, cpu);
-
 	tlb_invalid_all();
 }
 
-#define restore_asid_inv_utlb(oldpid, newpid) \
-do { \
-	if ((oldpid & ASID_MASK) == newpid) \
-		write_mmu_entryhi(oldpid + 1); \
-	write_mmu_entryhi(oldpid); \
-} while (0)
-
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
-			   unsigned long end)
+			unsigned long end)
 {
-	struct mm_struct *mm = vma->vm_mm;
-	int cpu = smp_processor_id();
-
-	if (cpu_context(cpu, mm) != 0) {
-		unsigned long size, flags;
-		int newpid = cpu_asid(cpu, mm);
-
-		local_irq_save(flags);
-		size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
-		size = (size + 1) >> 1;
-		if (size <= CSKY_TLB_SIZE/2) {
-			start &= (PAGE_MASK << 1);
-			end += ((PAGE_SIZE << 1) - 1);
-			end &= (PAGE_MASK << 1);
-#ifdef CONFIG_CPU_HAS_TLBI
-			while (start < end) {
-				asm volatile("tlbi.vaas %0"
-					     ::"r"(start | newpid));
-				start += (PAGE_SIZE << 1);
-			}
-			sync_is();
-#else
-			{
-			int oldpid = read_mmu_entryhi();
-
-			while (start < end) {
-				int idx;
-
-				write_mmu_entryhi(start | newpid);
-				start += (PAGE_SIZE << 1);
-				tlb_probe();
-				idx = read_mmu_index();
-				if (idx >= 0)
-					tlb_invalid_indexed();
-			}
-			restore_asid_inv_utlb(oldpid, newpid);
-			}
-#endif
-		} else {
-			drop_mmu_context(mm, cpu);
-		}
-		local_irq_restore(flags);
-	}
+	tlb_invalid_all();
 }
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
-	unsigned long size, flags;
-
-	local_irq_save(flags);
-	size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
-	if (size <= CSKY_TLB_SIZE) {
-		start &= (PAGE_MASK << 1);
-		end += ((PAGE_SIZE << 1) - 1);
-		end &= (PAGE_MASK << 1);
-#ifdef CONFIG_CPU_HAS_TLBI
-		while (start < end) {
-			asm volatile("tlbi.vaas %0"::"r"(start));
-			start += (PAGE_SIZE << 1);
-		}
-		sync_is();
-#else
-		{
-		int oldpid = read_mmu_entryhi();
-
-		while (start < end) {
-			int idx;
-
-			write_mmu_entryhi(start);
-			start += (PAGE_SIZE << 1);
-			tlb_probe();
-			idx = read_mmu_index();
-			if (idx >= 0)
-				tlb_invalid_indexed();
-		}
-		restore_asid_inv_utlb(oldpid, 0);
-		}
-#endif
-	} else {
-		flush_tlb_all();
-	}
-
-	local_irq_restore(flags);
+	tlb_invalid_all();
 }
 
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	int cpu = smp_processor_id();
-	int newpid = cpu_asid(cpu, vma->vm_mm);
-
-	if (!vma || cpu_context(cpu, vma->vm_mm) != 0) {
-		page &= (PAGE_MASK << 1);
-
-#ifdef CONFIG_CPU_HAS_TLBI
-		asm volatile("tlbi.vaas %0"::"r"(page | newpid));
-		sync_is();
-#else
-		{
-		int oldpid, idx;
-		unsigned long flags;
-
-		local_irq_save(flags);
-		oldpid = read_mmu_entryhi();
-		write_mmu_entryhi(page | newpid);
-		tlb_probe();
-		idx = read_mmu_index();
-		if (idx >= 0)
-			tlb_invalid_indexed();
-
-		restore_asid_inv_utlb(oldpid, newpid);
-		local_irq_restore(flags);
-		}
-#endif
-	}
+	tlb_invalid_all();
 }
 
-/*
- * Remove one kernel space TLB entry.  This entry is assumed to be marked
- * global so we don't do the ASID thing.
- */
-void flush_tlb_one(unsigned long page)
+void flush_tlb_one(unsigned long addr)
 {
-	int oldpid;
-
-	oldpid = read_mmu_entryhi();
-	page &= (PAGE_MASK << 1);
-
-#ifdef CONFIG_CPU_HAS_TLBI
-	page = page | (oldpid & 0xfff);
-	asm volatile("tlbi.vaas %0"::"r"(page));
-	sync_is();
-#else
-	{
-	int idx;
-	unsigned long flags;
-
-	page = page | (oldpid & 0xff);
-
-	local_irq_save(flags);
-	write_mmu_entryhi(page);
-	tlb_probe();
-	idx = read_mmu_index();
-	if (idx >= 0)
-		tlb_invalid_indexed();
-	restore_asid_inv_utlb(oldpid, oldpid);
-	local_irq_restore(flags);
-	}
-#endif
+	tlb_invalid_all();
 }
 EXPORT_SYMBOL(flush_tlb_one);
-
-/* show current 32 jtlbs */
-void show_jtlb_table(void)
-{
-	unsigned long flags;
-	int entryhi, entrylo0, entrylo1;
-	int entry;
-	int oldpid;
-
-	local_irq_save(flags);
-	entry = 0;
-	pr_info("\n\n\n");
-
-	oldpid = read_mmu_entryhi();
-	while (entry < CSKY_TLB_SIZE) {
-		write_mmu_index(entry);
-		tlb_read();
-		entryhi = read_mmu_entryhi();
-		entrylo0 = read_mmu_entrylo0();
-		entrylo0 = entrylo0;
-		entrylo1 = read_mmu_entrylo1();
-		entrylo1 = entrylo1;
-		pr_info("jtlb[%d]:	entryhi - 0x%x;	entrylo0 - 0x%x;"
-			"	entrylo1 - 0x%x\n",
-			entry, entryhi, entrylo0, entrylo1);
-		entry++;
-	}
-	write_mmu_entryhi(oldpid);
-	local_irq_restore(flags);
-}
-- 
2.7.4



* [PATCH V2 2/4] csky: Add new asid lib code from arm
  2019-06-21  9:39 [PATCH V2 0/4] csky: Use generic asid function from arm guoren
  2019-06-21  9:39 ` [PATCH V2 1/4] csky: Revert mmu ASID mechanism guoren
@ 2019-06-21  9:39 ` guoren
  2019-06-21 10:10   ` Julien Grall
  2019-06-21  9:39 ` [PATCH V2 3/4] csky: Use generic asid algorithm to implement switch_mm guoren
  2019-06-21  9:39 ` [PATCH V2 4/4] csky: Improve tlb operation with help of asid guoren
  3 siblings, 1 reply; 7+ messages in thread
From: guoren @ 2019-06-21  9:39 UTC (permalink / raw)
  To: julien.grall, arnd, linux-kernel; +Cc: linux-csky, Guo Ren, Arnd Bergmann

From: Guo Ren <ren_guo@c-sky.com>

This patch only contains the asid helper code taken from arm; the next
patch will start using it.

The asid allocator uses a five-level check to reduce the cost of
switch_mm (a paraphrased sketch of how the levels map onto the code
follows the list below):

 1. Check whether the asid version is current (the common case).
 2. Check reserved_asid, which is set at rollover time in flush_context();
    the key point is that it keeps the asid at the same bit position under
    the current asid version rather than the incoming one.
 3. Check whether the asid's bitmap position is still free; if so it can
    be set and reused directly.
 4. find_next_zero_bit() (a small performance cost).
 5. flush_context() (the worst case, which also bumps the current asid
    version).

The checks are performed level by level, and each level costs more than
the previous one. The reserved_asid and bitmap mechanisms avoid
unnecessary calls to find_next_zero_bit().

The atomic 64-bit asid is also suitable for 32-bit systems and does not
cost much in the first, second and third level checks.
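
Roughly, the five levels map onto the new code like this. This is a
paraphrased sketch, not part of the patch: it reuses the static helpers
from asid.c below, elides the lock, the mm_cpumask handling and the
active_asid cmpxchg fast path, and exists only to annotate the levels.

static u64 sketch_pick_asid(struct asid_info *info, atomic64_t *pasid)
{
        u64 asid = atomic64_read(pasid);
        u64 generation = atomic64_read(&info->generation);

        /* Level 1: version still matches the current generation. */
        if (!((asid ^ generation) >> info->bits))
                return asid;

        if (asid != 0) {
                u64 newasid = generation | (asid & ~ASID_MASK(info));

                /* Level 2: asid was parked in reserved_asid() at rollover. */
                if (check_update_reserved_asid(info, asid, newasid))
                        return newasid;

                /* Level 3: our old bitmap slot is still free, reuse it. */
                if (!__test_and_set_bit(asid2idx(info, asid), info->map))
                        return newasid;
        }

        /* Level 4: scan the bitmap for any free asid. */
        asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
        if (asid == NUM_CTXT_ASIDS(info)) {
                /* Level 5: out of asids, bump the generation and flush. */
                generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info),
                                                         &info->generation);
                flush_context(info);
                asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
        }
        __set_bit(asid, info->map);
        return idx2asid(info, asid) | generation;
}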

Compared with arm32, arm64 removed the set/clear of mm_cpumask. That
seems to have no side effect on current arm64 systems, but from a software
point of view it is wrong. Although csky doesn't strictly need it either,
we add it back for csky.

asid_per_ctxt is not useful for csky; it reserves the lowest asid bits
for other purposes (trust zone, perhaps), so we simply keep it in the
csky copy.
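
As a purely illustrative aside (none of the values below appear in the
patch), this small stand-alone userspace program reproduces the index
arithmetic for bits = 12 with asid_per_ctxt = 1 (csky) versus 2 (arm64):

#include <stdio.h>

int main(void)
{
        unsigned int bits = 12;
        unsigned long hw_mask = (1UL << bits) - 1; /* low bits, i.e. ~ASID_MASK(info) */

        /* csky: asid_per_ctxt = 1 -> ctxt_shift = 0 -> 4096 usable contexts */
        unsigned int ctxt_shift = 0;
        unsigned long num_ctxt = (1UL << bits) >> ctxt_shift;

        /* arm64 passes asid_per_ctxt = 2 -> ctxt_shift = 1 -> 2048 contexts,
         * leaving the lowest asid bit reserved for other uses. */

        unsigned long asid = 0x1005;                          /* generation bit + hw asid 5 */
        unsigned long idx  = (asid & hw_mask) >> ctxt_shift;  /* asid2idx -> 5 */
        unsigned long back = (idx << ctxt_shift) & hw_mask;   /* idx2asid -> 5 */

        printf("contexts=%lu idx=%lu idx2asid=%#lx\n", num_ctxt, idx, back);
        return 0;
}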

This code could probably be used by other architectures as well, so it is
worth moving the asid code to a generic location in the future.

Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Cc: Arnd Bergmann <arnd@arnd.de>
Cc: Julien Grall <julien.grall@arm.com>
---
 arch/arm64/lib/asid.c        |   9 ++-
 arch/csky/include/asm/asid.h |  78 ++++++++++++++++++
 arch/csky/mm/Makefile        |   1 +
 arch/csky/mm/asid.c          | 188 +++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 273 insertions(+), 3 deletions(-)
 create mode 100644 arch/csky/include/asm/asid.h
 create mode 100644 arch/csky/mm/asid.c

diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c
index 72b71bf..bdd6915 100644
--- a/arch/arm64/lib/asid.c
+++ b/arch/arm64/lib/asid.c
@@ -75,7 +75,8 @@ static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
 	return hit;
 }
 
-static u64 new_context(struct asid_info *info, atomic64_t *pasid)
+static u64 new_context(struct asid_info *info, atomic64_t *pasid,
+		       struct mm_struct *mm)
 {
 	static u32 cur_idx = 1;
 	u64 asid = atomic64_read(pasid);
@@ -121,6 +122,7 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid)
 set_asid:
 	__set_bit(asid, info->map);
 	cur_idx = asid;
+	cpumask_clear(mm_cpumask(mm));
 	return idx2asid(info, asid) | generation;
 }
 
@@ -132,7 +134,7 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid)
  * @cpu: current CPU ID. Must have been acquired through get_cpu()
  */
 void asid_new_context(struct asid_info *info, atomic64_t *pasid,
-		      unsigned int cpu)
+		      unsigned int cpu, struct mm_struct *mm)
 {
 	unsigned long flags;
 	u64 asid;
@@ -141,7 +143,7 @@ void asid_new_context(struct asid_info *info, atomic64_t *pasid,
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(pasid);
 	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
-		asid = new_context(info, pasid);
+		asid = new_context(info, pasid, mm);
 		atomic64_set(pasid, asid);
 	}
 
@@ -149,6 +151,7 @@ void asid_new_context(struct asid_info *info, atomic64_t *pasid,
 		info->flush_cpu_ctxt_cb();
 
 	atomic64_set(&active_asid(info, cpu), asid);
+	cpumask_set_cpu(cpu, mm_cpumask(mm));
 	raw_spin_unlock_irqrestore(&info->lock, flags);
 }
 
diff --git a/arch/csky/include/asm/asid.h b/arch/csky/include/asm/asid.h
new file mode 100644
index 0000000..ac08b0f
--- /dev/null
+++ b/arch/csky/include/asm/asid.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ASM_ASID_H
+#define __ASM_ASM_ASID_H
+
+#include <linux/atomic.h>
+#include <linux/compiler.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/spinlock.h>
+
+struct asid_info
+{
+	atomic64_t	generation;
+	unsigned long	*map;
+	atomic64_t __percpu	*active;
+	u64 __percpu		*reserved;
+	u32			bits;
+	/* Lock protecting the structure */
+	raw_spinlock_t		lock;
+	/* Which CPU requires context flush on next call */
+	cpumask_t		flush_pending;
+	/* Number of ASID allocated by context (shift value) */
+	unsigned int		ctxt_shift;
+	/* Callback to locally flush the context. */
+	void			(*flush_cpu_ctxt_cb)(void);
+};
+
+#define NUM_ASIDS(info)			(1UL << ((info)->bits))
+#define NUM_CTXT_ASIDS(info)		(NUM_ASIDS(info) >> (info)->ctxt_shift)
+
+#define active_asid(info, cpu)	*per_cpu_ptr((info)->active, cpu)
+
+void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+		      unsigned int cpu, struct mm_struct *mm);
+
+/*
+ * Check the ASID is still valid for the context. If not generate a new ASID.
+ *
+ * @pasid: Pointer to the current ASID batch
+ * @cpu: current CPU ID. Must have been acquired through get_cpu()
+ */
+static inline void asid_check_context(struct asid_info *info,
+				      atomic64_t *pasid, unsigned int cpu,
+				      struct mm_struct *mm)
+{
+	u64 asid, old_active_asid;
+
+	asid = atomic64_read(pasid);
+
+	/*
+	 * The memory ordering here is subtle.
+	 * If our active_asid is non-zero and the ASID matches the current
+	 * generation, then we update the active_asid entry with a relaxed
+	 * cmpxchg. Racing with a concurrent rollover means that either:
+	 *
+	 * - We get a zero back from the cmpxchg and end up waiting on the
+	 *   lock. Taking the lock synchronises with the rollover and so
+	 *   we are forced to see the updated generation.
+	 *
+	 * - We get a valid ASID back from the cmpxchg, which means the
+	 *   relaxed xchg in flush_context will treat us as reserved
+	 *   because atomic RmWs are totally ordered for a given location.
+	 */
+	old_active_asid = atomic64_read(&active_asid(info, cpu));
+	if (old_active_asid &&
+	    !((asid ^ atomic64_read(&info->generation)) >> info->bits) &&
+	    atomic64_cmpxchg_relaxed(&active_asid(info, cpu),
+				     old_active_asid, asid))
+		return;
+
+	asid_new_context(info, pasid, cpu, mm);
+}
+
+int asid_allocator_init(struct asid_info *info,
+			u32 bits, unsigned int asid_per_ctxt,
+			void (*flush_cpu_ctxt_cb)(void));
+
+#endif
diff --git a/arch/csky/mm/Makefile b/arch/csky/mm/Makefile
index c870eb3..897368f 100644
--- a/arch/csky/mm/Makefile
+++ b/arch/csky/mm/Makefile
@@ -11,3 +11,4 @@ obj-y +=			init.o
 obj-y +=			ioremap.o
 obj-y +=			syscache.o
 obj-y +=			tlb.o
+obj-y +=			asid.o
diff --git a/arch/csky/mm/asid.c b/arch/csky/mm/asid.c
new file mode 100644
index 0000000..4c68ade
--- /dev/null
+++ b/arch/csky/mm/asid.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Generic ASID allocator.
+ *
+ * Based on arch/arm/mm/context.c
+ *
+ * Copyright (C) 2002-2003 Deep Blue Solutions Ltd, all rights reserved.
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#include <linux/slab.h>
+
+#include <asm/asid.h>
+
+#define reserved_asid(info, cpu) *per_cpu_ptr((info)->reserved, cpu)
+
+#define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
+#define ASID_FIRST_VERSION(info)	(1UL << ((info)->bits))
+
+#define asid2idx(info, asid)		(((asid) & ~ASID_MASK(info)) >> (info)->ctxt_shift)
+#define idx2asid(info, idx)		(((idx) << (info)->ctxt_shift) & ~ASID_MASK(info))
+
+static void flush_context(struct asid_info *info)
+{
+	int i;
+	u64 asid;
+
+	/* Update the list of reserved ASIDs and the ASID bitmap. */
+	bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info));
+
+	for_each_possible_cpu(i) {
+		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
+		/*
+		 * If this CPU has already been through a
+		 * rollover, but hasn't run another task in
+		 * the meantime, we must preserve its reserved
+		 * ASID, as this is the only trace we have of
+		 * the process it is still running.
+		 */
+		if (asid == 0)
+			asid = reserved_asid(info, i);
+		__set_bit(asid2idx(info, asid), info->map);
+		reserved_asid(info, i) = asid;
+	}
+
+	/*
+	 * Queue a TLB invalidation for each CPU to perform on next
+	 * context-switch
+	 */
+	cpumask_setall(&info->flush_pending);
+}
+
+static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
+				       u64 newasid)
+{
+	int cpu;
+	bool hit = false;
+
+	/*
+	 * Iterate over the set of reserved ASIDs looking for a match.
+	 * If we find one, then we can update our mm to use newasid
+	 * (i.e. the same ASID in the current generation) but we can't
+	 * exit the loop early, since we need to ensure that all copies
+	 * of the old ASID are updated to reflect the mm. Failure to do
+	 * so could result in us missing the reserved ASID in a future
+	 * generation.
+	 */
+	for_each_possible_cpu(cpu) {
+		if (reserved_asid(info, cpu) == asid) {
+			hit = true;
+			reserved_asid(info, cpu) = newasid;
+		}
+	}
+
+	return hit;
+}
+
+static u64 new_context(struct asid_info *info, atomic64_t *pasid,
+		       struct mm_struct *mm)
+{
+	static u32 cur_idx = 1;
+	u64 asid = atomic64_read(pasid);
+	u64 generation = atomic64_read(&info->generation);
+
+	if (asid != 0) {
+		u64 newasid = generation | (asid & ~ASID_MASK(info));
+
+		/*
+		 * If our current ASID was active during a rollover, we
+		 * can continue to use it and this was just a false alarm.
+		 */
+		if (check_update_reserved_asid(info, asid, newasid))
+			return newasid;
+
+		/*
+		 * We had a valid ASID in a previous life, so try to re-use
+		 * it if possible.
+		 */
+		if (!__test_and_set_bit(asid2idx(info, asid), info->map))
+			return newasid;
+	}
+
+	/*
+	 * Allocate a free ASID. If we can't find one, take a note of the
+	 * currently active ASIDs and mark the TLBs as requiring flushes.  We
+	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
+	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
+	 * pairs.
+	 */
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), cur_idx);
+	if (asid != NUM_CTXT_ASIDS(info))
+		goto set_asid;
+
+	/* We're out of ASIDs, so increment the global generation count */
+	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info),
+						 &info->generation);
+	flush_context(info);
+
+	/* We have more ASIDs than CPUs, so this will always succeed */
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
+
+set_asid:
+	__set_bit(asid, info->map);
+	cur_idx = asid;
+	cpumask_clear(mm_cpumask(mm));
+	return idx2asid(info, asid) | generation;
+}
+
+/*
+ * Generate a new ASID for the context.
+ *
+ * @pasid: Pointer to the current ASID batch allocated. It will be updated
+ * with the new ASID batch.
+ * @cpu: current CPU ID. Must have been acquired through get_cpu()
+ */
+void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+		      unsigned int cpu, struct mm_struct *mm)
+{
+	unsigned long flags;
+	u64 asid;
+
+	raw_spin_lock_irqsave(&info->lock, flags);
+	/* Check that our ASID belongs to the current generation. */
+	asid = atomic64_read(pasid);
+	if ((asid ^ atomic64_read(&info->generation)) >> info->bits) {
+		asid = new_context(info, pasid, mm);
+		atomic64_set(pasid, asid);
+	}
+
+	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
+		info->flush_cpu_ctxt_cb();
+
+	atomic64_set(&active_asid(info, cpu), asid);
+	cpumask_set_cpu(cpu, mm_cpumask(mm));
+	raw_spin_unlock_irqrestore(&info->lock, flags);
+}
+
+/*
+ * Initialize the ASID allocator
+ *
+ * @info: Pointer to the asid allocator structure
+ * @bits: Number of ASIDs available
+ * @asid_per_ctxt: Number of ASIDs to allocate per-context. ASIDs are
+ * allocated contiguously for a given context. This value should be a power of
+ * 2.
+ */
+int asid_allocator_init(struct asid_info *info,
+			u32 bits, unsigned int asid_per_ctxt,
+			void (*flush_cpu_ctxt_cb)(void))
+{
+	info->bits = bits;
+	info->ctxt_shift = ilog2(asid_per_ctxt);
+	info->flush_cpu_ctxt_cb = flush_cpu_ctxt_cb;
+	/*
+	 * Expect allocation after rollover to fail if we don't have at least
+	 * one more ASID than CPUs. ASID #0 is always reserved.
+	 */
+	WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
+	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
+	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
+			    sizeof(*info->map), GFP_KERNEL);
+	if (!info->map)
+		return -ENOMEM;
+
+	raw_spin_lock_init(&info->lock);
+
+	return 0;
+}
-- 
2.7.4



* [PATCH V2 3/4] csky: Use generic asid algorithm to implement switch_mm
  2019-06-21  9:39 [PATCH V2 0/4] csky: Use generic asid function from arm guoren
  2019-06-21  9:39 ` [PATCH V2 1/4] csky: Revert mmu ASID mechanism guoren
  2019-06-21  9:39 ` [PATCH V2 2/4] csky: Add new asid lib code from arm guoren
@ 2019-06-21  9:39 ` guoren
  2019-06-21  9:39 ` [PATCH V2 4/4] csky: Improve tlb operation with help of asid guoren
  3 siblings, 0 replies; 7+ messages in thread
From: guoren @ 2019-06-21  9:39 UTC (permalink / raw)
  To: julien.grall, arnd, linux-kernel; +Cc: linux-csky, Guo Ren

From: Guo Ren <ren_guo@c-sky.com>

Use the generic asid/vmid algorithm to implement the csky switch_mm
function. The algorithm comes from arm and works with SMP systems. It
helps reduce the tlb flushes done by switch_mm on task/vm switches.

Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Cc: Arnd Bergmann <arnd@arndb.de>
---
 arch/csky/abiv1/inc/abi/ckmmu.h     |  6 +++++
 arch/csky/abiv2/inc/abi/ckmmu.h     | 10 ++++++++
 arch/csky/include/asm/mmu.h         |  1 +
 arch/csky/include/asm/mmu_context.h | 12 ++++++++--
 arch/csky/mm/Makefile               |  1 +
 arch/csky/mm/context.c              | 46 +++++++++++++++++++++++++++++++++++++
 6 files changed, 74 insertions(+), 2 deletions(-)
 create mode 100644 arch/csky/mm/context.c

diff --git a/arch/csky/abiv1/inc/abi/ckmmu.h b/arch/csky/abiv1/inc/abi/ckmmu.h
index 81f3771..ba8eb58 100644
--- a/arch/csky/abiv1/inc/abi/ckmmu.h
+++ b/arch/csky/abiv1/inc/abi/ckmmu.h
@@ -78,6 +78,12 @@ static inline void tlb_invalid_all(void)
 	cpwcr("cpcr8", 0x04000000);
 }
 
+
+static inline void local_tlb_invalid_all(void)
+{
+	tlb_invalid_all();
+}
+
 static inline void tlb_invalid_indexed(void)
 {
 	cpwcr("cpcr8", 0x02000000);
diff --git a/arch/csky/abiv2/inc/abi/ckmmu.h b/arch/csky/abiv2/inc/abi/ckmmu.h
index e4480e6..73ded7c 100644
--- a/arch/csky/abiv2/inc/abi/ckmmu.h
+++ b/arch/csky/abiv2/inc/abi/ckmmu.h
@@ -85,6 +85,16 @@ static inline void tlb_invalid_all(void)
 #endif
 }
 
+static inline void local_tlb_invalid_all(void)
+{
+#ifdef CONFIG_CPU_HAS_TLBI
+	asm volatile("tlbi.all\n":::"memory");
+	sync_is();
+#else
+	tlb_invalid_all();
+#endif
+}
+
 static inline void tlb_invalid_indexed(void)
 {
 	mtcr("cr<8, 15>", 0x02000000);
diff --git a/arch/csky/include/asm/mmu.h b/arch/csky/include/asm/mmu.h
index 06f509a..b382a14 100644
--- a/arch/csky/include/asm/mmu.h
+++ b/arch/csky/include/asm/mmu.h
@@ -5,6 +5,7 @@
 #define __ASM_CSKY_MMU_H
 
 typedef struct {
+	atomic64_t	asid;
 	void *vdso;
 } mm_context_t;
 
diff --git a/arch/csky/include/asm/mmu_context.h b/arch/csky/include/asm/mmu_context.h
index 86dde48..0285b0a 100644
--- a/arch/csky/include/asm/mmu_context.h
+++ b/arch/csky/include/asm/mmu_context.h
@@ -20,20 +20,28 @@
 #define TLBMISS_HANDLER_SETUP_PGD_KERNEL(pgd) \
 	setup_pgd(__pa(pgd), true)
 
-#define init_new_context(tsk,mm)	0
+#define ASID_MASK		((1 << CONFIG_CPU_ASID_BITS) - 1)
+#define cpu_asid(mm)		(atomic64_read(&mm->context.asid) & ASID_MASK)
+
+#define init_new_context(tsk,mm)	({ atomic64_set(&(mm)->context.asid, 0); 0; })
 #define activate_mm(prev,next)		switch_mm(prev, next, current)
 
 #define destroy_context(mm)		do {} while (0)
 #define enter_lazy_tlb(mm, tsk)		do {} while (0)
 #define deactivate_mm(tsk, mm)		do {} while (0)
 
+void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
+
 static inline void
 switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	  struct task_struct *tsk)
 {
+	unsigned int cpu = smp_processor_id();
+
 	if (prev != next)
-		tlb_invalid_all();
+		check_and_switch_context(next, cpu);
 
 	TLBMISS_HANDLER_SETUP_PGD(next->pgd);
+	write_mmu_entryhi(next->context.asid.counter);
 }
 #endif /* __ASM_CSKY_MMU_CONTEXT_H */
diff --git a/arch/csky/mm/Makefile b/arch/csky/mm/Makefile
index 897368f..34cb5b1 100644
--- a/arch/csky/mm/Makefile
+++ b/arch/csky/mm/Makefile
@@ -12,3 +12,4 @@ obj-y +=			ioremap.o
 obj-y +=			syscache.o
 obj-y +=			tlb.o
 obj-y +=			asid.o
+obj-y +=			context.o
diff --git a/arch/csky/mm/context.c b/arch/csky/mm/context.c
new file mode 100644
index 0000000..0d95bdd
--- /dev/null
+++ b/arch/csky/mm/context.c
@@ -0,0 +1,46 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
+
+#include <linux/bitops.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
+
+#include <asm/asid.h>
+#include <asm/mmu_context.h>
+#include <asm/smp.h>
+#include <asm/tlbflush.h>
+
+static DEFINE_PER_CPU(atomic64_t, active_asids);
+static DEFINE_PER_CPU(u64, reserved_asids);
+
+struct asid_info asid_info;
+
+void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
+{
+	asid_check_context(&asid_info, &mm->context.asid, cpu, mm);
+}
+
+static void asid_flush_cpu_ctxt(void)
+{
+	local_tlb_invalid_all();
+}
+
+static int asids_init(void)
+{
+	BUG_ON(((1 << CONFIG_CPU_ASID_BITS) - 1) <= num_possible_cpus());
+
+	if (asid_allocator_init(&asid_info, CONFIG_CPU_ASID_BITS, 1,
+				asid_flush_cpu_ctxt))
+		panic("Unable to initialize ASID allocator for %lu ASIDs\n",
+		      NUM_ASIDS(&asid_info));
+
+	asid_info.active = &active_asids;
+	asid_info.reserved = &reserved_asids;
+
+	pr_info("ASID allocator initialised with %lu entries\n",
+		NUM_CTXT_ASIDS(&asid_info));
+
+	return 0;
+}
+early_initcall(asids_init);
-- 
2.7.4



* [PATCH V2 4/4] csky: Improve tlb operation with help of asid
  2019-06-21  9:39 [PATCH V2 0/4] csky: Use generic asid function from arm guoren
                   ` (2 preceding siblings ...)
  2019-06-21  9:39 ` [PATCH V2 3/4] csky: Use generic asid algorithm to implement switch_mm guoren
@ 2019-06-21  9:39 ` guoren
  3 siblings, 0 replies; 7+ messages in thread
From: guoren @ 2019-06-21  9:39 UTC (permalink / raw)
  To: julien.grall, arnd, linux-kernel; +Cc: linux-csky, Guo Ren

From: Guo Ren <ren_guo@c-sky.com>

There are two generations of tlb maintenance instructions for C-SKY.
The first generation works through mcr registers and needs more help from
software; the second generation uses dedicated instructions, e.g.:
 tlbi.va, tlbi.vas, tlbi.alls

We implemented the following functions:

 - flush_tlb_range (a range of entries)
 - flush_tlb_page (one entry)

 The functions above take the asid from vma->vm_mm to invalidate tlb
 entries, and on the newest generation of csky cpus they can use the
 tlbi.vas instruction.

 - flush_tlb_kernel_range
 - flush_tlb_one

 The functions above don't care about the asid; they invalidate tlb
 entries by vpn only, and on the newest generation of csky cpus they can
 use the tlbi.vaas instruction. A short sketch contrasting the two forms
 follows.
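
The sketch below is illustrative only and not part of the patch: it
assumes CONFIG_CPU_HAS_TLBI, cpu_asid() and sync_is() are the helpers
already in the tree, and SKETCH_TLB_ENTRY_SIZE_MASK mirrors the
TLB_ENTRY_SIZE_MASK definition added to tlb.c below.

#include <linux/mm_types.h>
#include <asm/mmu_context.h>

#define SKETCH_TLB_ENTRY_SIZE_MASK	(PAGE_MASK << 1)

/* asid-tagged: drops only entries of this mm (flush_tlb_page/range) */
static inline void sketch_flush_user_entry(struct mm_struct *mm,
                                           unsigned long addr)
{
        addr &= SKETCH_TLB_ENTRY_SIZE_MASK;
        asm volatile("tlbi.vas %0"::"r"(addr | cpu_asid(mm)));
        sync_is();
}

/* vpn-only: drops the entry whatever its asid (flush_tlb_one/kernel_range) */
static inline void sketch_flush_kernel_entry(unsigned long addr)
{
        addr &= SKETCH_TLB_ENTRY_SIZE_MASK;
        asm volatile("tlbi.vaas %0"::"r"(addr));
        sync_is();
}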

Signed-off-by: Guo Ren <ren_guo@c-sky.com>
Cc: Arnd Bergmann <arnd@arndb.de>
---
 arch/csky/mm/tlb.c | 136 +++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 132 insertions(+), 4 deletions(-)

diff --git a/arch/csky/mm/tlb.c b/arch/csky/mm/tlb.c
index efae81c..eb3ba6c 100644
--- a/arch/csky/mm/tlb.c
+++ b/arch/csky/mm/tlb.c
@@ -10,6 +10,13 @@
 #include <asm/pgtable.h>
 #include <asm/setup.h>
 
+/*
+ * One C-SKY MMU TLB entry contains two PFN/page entries, i.e.:
+ * 1VPN -> 2PFN
+ */
+#define TLB_ENTRY_SIZE		(PAGE_SIZE * 2)
+#define TLB_ENTRY_SIZE_MASK	(PAGE_MASK << 1)
+
 void flush_tlb_all(void)
 {
 	tlb_invalid_all();
@@ -17,27 +24,148 @@ void flush_tlb_all(void)
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
+#ifdef CONFIG_CPU_HAS_TLBI
+	asm volatile("tlbi.asids %0"::"r"(cpu_asid(mm)));
+#else
 	tlb_invalid_all();
+#endif
 }
 
+/*
+ * The MMU operation registers can only invalidate tlb entries in the jtlb,
+ * so we need to change the asid field to invalidate the I-utlb & D-utlb.
+ */
+#ifndef CONFIG_CPU_HAS_TLBI
+#define restore_asid_inv_utlb(oldpid, newpid) \
+do { \
+	if (oldpid == newpid) \
+		write_mmu_entryhi(oldpid + 1); \
+	write_mmu_entryhi(oldpid); \
+} while (0)
+#endif
+
 void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 			unsigned long end)
 {
-	tlb_invalid_all();
+	unsigned long newpid = cpu_asid(vma->vm_mm);
+
+	start &= TLB_ENTRY_SIZE_MASK;
+	end   += TLB_ENTRY_SIZE - 1;
+	end   &= TLB_ENTRY_SIZE_MASK;
+
+#ifdef CONFIG_CPU_HAS_TLBI
+	while (start < end) {
+		asm volatile("tlbi.vas %0"::"r"(start | newpid));
+		start += 2*PAGE_SIZE;
+	}
+	sync_is();
+#else
+	{
+	unsigned long flags, oldpid;
+
+	local_irq_save(flags);
+	oldpid = read_mmu_entryhi() & ASID_MASK;
+	while (start < end) {
+		int idx;
+
+		write_mmu_entryhi(start | newpid);
+		start += 2*PAGE_SIZE;
+		tlb_probe();
+		idx = read_mmu_index();
+		if (idx >= 0)
+			tlb_invalid_indexed();
+	}
+	restore_asid_inv_utlb(oldpid, newpid);
+	local_irq_restore(flags);
+	}
+#endif
 }
 
 void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
-	tlb_invalid_all();
+	start &= TLB_ENTRY_SIZE_MASK;
+	end   += TLB_ENTRY_SIZE - 1;
+	end   &= TLB_ENTRY_SIZE_MASK;
+
+#ifdef CONFIG_CPU_HAS_TLBI
+	while (start < end) {
+		asm volatile("tlbi.vaas %0"::"r"(start));
+		start += 2*PAGE_SIZE;
+	}
+	sync_is();
+#else
+	{
+	unsigned long flags, oldpid;
+
+	local_irq_save(flags);
+	oldpid = read_mmu_entryhi() & ASID_MASK;
+	while (start < end) {
+		int idx;
+
+		write_mmu_entryhi(start | oldpid);
+		start += 2*PAGE_SIZE;
+		tlb_probe();
+		idx = read_mmu_index();
+		if (idx >= 0)
+			tlb_invalid_indexed();
+	}
+	restore_asid_inv_utlb(oldpid, oldpid);
+	local_irq_restore(flags);
+	}
+#endif
 }
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 {
-	tlb_invalid_all();
+	int newpid = cpu_asid(vma->vm_mm);
+
+	addr &= TLB_ENTRY_SIZE_MASK;
+
+#ifdef CONFIG_CPU_HAS_TLBI
+	asm volatile("tlbi.vas %0"::"r"(addr | newpid));
+	sync_is();
+#else
+	{
+	int oldpid, idx;
+	unsigned long flags;
+
+	local_irq_save(flags);
+	oldpid = read_mmu_entryhi() & ASID_MASK;
+	write_mmu_entryhi(addr | newpid);
+	tlb_probe();
+	idx = read_mmu_index();
+	if (idx >= 0)
+		tlb_invalid_indexed();
+
+	restore_asid_inv_utlb(oldpid, newpid);
+	local_irq_restore(flags);
+	}
+#endif
 }
 
 void flush_tlb_one(unsigned long addr)
 {
-	tlb_invalid_all();
+	addr &= TLB_ENTRY_SIZE_MASK;
+
+#ifdef CONFIG_CPU_HAS_TLBI
+	asm volatile("tlbi.vaas %0"::"r"(addr));
+	sync_is();
+#else
+	{
+	int oldpid, idx;
+	unsigned long flags;
+
+	local_irq_save(flags);
+	oldpid = read_mmu_entryhi() & ASID_MASK;
+	write_mmu_entryhi(addr | oldpid);
+	tlb_probe();
+	idx = read_mmu_index();
+	if (idx >= 0)
+		tlb_invalid_indexed();
+
+	restore_asid_inv_utlb(oldpid, oldpid);
+	local_irq_restore(flags);
+	}
+#endif
 }
 EXPORT_SYMBOL(flush_tlb_one);
-- 
2.7.4



* Re: [PATCH V2 2/4] csky: Add new asid lib code from arm
  2019-06-21  9:39 ` [PATCH V2 2/4] csky: Add new asid lib code from arm guoren
@ 2019-06-21 10:10   ` Julien Grall
  2019-06-21 11:05     ` Guo Ren
  0 siblings, 1 reply; 7+ messages in thread
From: Julien Grall @ 2019-06-21 10:10 UTC (permalink / raw)
  To: guoren, arnd, linux-kernel
  Cc: linux-csky, Guo Ren, Arnd Bergmann, linux-arm-kernel,
	Catalin Marinas, Will Deacon

Hi,

On 21/06/2019 10:39, guoren@kernel.org wrote:
> Signed-off-by: Guo Ren <ren_guo@c-sky.com>
> Cc: Arnd Bergmann <arnd@arnd.de>
> Cc: Julien Grall <julien.grall@arm.com>
> ---
>   arch/arm64/lib/asid.c        |   9 ++-

This change should be in a separate e-mail with the Arm64 maintainers in CC.

But you seem to have a copy of the allocator in csky now. So why do you need to 
modify arm64/lib/asid.c here?

Cheers,

-- 
Julien Grall


* Re: [PATCH V2 2/4] csky: Add new asid lib code from arm
  2019-06-21 10:10   ` Julien Grall
@ 2019-06-21 11:05     ` Guo Ren
  0 siblings, 0 replies; 7+ messages in thread
From: Guo Ren @ 2019-06-21 11:05 UTC (permalink / raw)
  To: Julien Grall
  Cc: Arnd Bergmann, linux-kernel, linux-csky, Guo Ren, Arnd Bergmann,
	linux-arm-kernel, Catalin Marinas, Will Deacon

Sorry, I forgot to delete the arm64 hunk. It was a mistake; the arm64
file should not be changed.

On Fri, Jun 21, 2019 at 6:10 PM Julien Grall <julien.grall@arm.com> wrote:
>
> Hi,
>
> On 21/06/2019 10:39, guoren@kernel.org wrote:
> > Signed-off-by: Guo Ren <ren_guo@c-sky.com>
> > Cc: Arnd Bergmann <arnd@arnd.de>
> > Cc: Julien Grall <julien.grall@arm.com>
> > ---
> >   arch/arm64/lib/asid.c        |   9 ++-
>
> This change should be in a separate e-mail with the Arm64 maintainers in CC.
>
> But you seem to have a copy of the allocator in csky now. So why do you need to
> modify arm64/lib/asid.c here?
>
> Cheers,
>
> --
> Julien Grall



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

