kvmarm.lists.cs.columbia.edu archive mirror
* [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one
@ 2021-04-14 11:22 Shameer Kolothum
  2021-04-14 11:22 ` [PATCH v4 01/16] arm64/mm: Introduce asid_info structure and move asid_generation/asid_map to it Shameer Kolothum
                   ` (16 more replies)
  0 siblings, 17 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:22 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

Hi,

This is an attempt to revive the series originally posted by
Julien Grall[1]. The main motivation for working on this now is
the requirement to have Pinned KVM VMIDs; the RFC discussion on
that requirement basically suggested[2] having a common/better
VMID allocator for KVM, which this series provides.
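
For reviewers' orientation, the state that the series gradually collects
into struct asid_info looks roughly as below. This is only a sketch
assembled from the diffs in this thread (the pinned-ASID fields come from
the new v4 patches); it is not a verbatim copy of the eventual lib_asid.h.

struct asid_info {
	atomic64_t		generation;	/* current ASID generation/version */
	unsigned long		*map;		/* bitmap of in-use ASIDs */
	unsigned int		map_idx;	/* where to resume the bitmap scan */
	atomic64_t __percpu	*active;	/* ASID currently active on each CPU */
	u64 __percpu		*reserved;	/* ASIDs preserved across a rollover */
	u32			bits;		/* number of ASID bits */
	raw_spinlock_t		lock;		/* protects the slow path */
	cpumask_t		flush_pending;	/* CPUs that must flush on next switch */
	/* Pinned ASID support (new in v4) */
	unsigned long		*pinned_map;
	unsigned long		max_pinned_asids;
	unsigned long		nr_pinned_asids;
};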
 
Major Changes from v3:

-Changes related to Pinned ASID support.
-Changes to take care of KPTI-related bits reservation.
-Dropped support for 32-bit KVM.
-Rebased to 5.12-rc7.

Individual patches have change history for any major changes
from v3.

Tests were performed on a HiSilicon D06 platform and so far no
regressions have been observed.

For ASID allocation:

Avg. of 10 runs (hackbench -s 512 -l 200 -g 300 -f 25 -P):
5.12-rc7:     Time: 18.8119
5.12-rc7+v4:  Time: 18.459

~1.8% improvement.

For KVM VMIDs:

The measurement was made with maxcpus set to 8 and with the
number of VMIDs limited to 4 bits. The test involves running
40 guests with 2 vCPUs concurrently. Each guest then executes
hackbench 5 times before exiting.

The performance differences between the current algorithm and the
new one are (avg. of 10 runs):
    - 1.9% fewer exits from the guest
    - 0.7% faster

For the complete series, please see:
 https://github.com/hisilicon/kernel-dev/tree/private-v5.12-rc7-asid-v4

Please take a look and let me know your feedback.

Thanks,
Shameer
[1].https://patchwork.kernel.org/project/linux-arm-kernel/cover/20190724162534.7390-1-julien.grall@arm.com/
[2].https://lore.kernel.org/linux-arm-kernel/20210222155338.26132-6-shameerali.kolothum.thodi@huawei.com/T/#mff3129997739e2747172f4a2e81fd66be91ffea4
--------
From V3:
--------
Hi all,

This patch series moves the ASID allocator out into a separate file in order
to re-use it for VMID allocation. The benefits are:
    - CPUs are not forced to exit on a roll-over.
    - Context invalidation is now per-CPU rather than
      broadcast.

There is no performance regression on the fast path for ASID allocation.
Actually, on the hackbench measurement (300 hackbench) it was 0.7% faster.

The measurement was made on a Seattle-based SoC (8 CPUs), with the
number of VMIDs limited to 4 bits. The test involves running 40
guests with 2 vCPUs concurrently. Each guest then executes hackbench
5 times before exiting.

The performance differences (on 5.1-rc1) between the current algorithm and
the new one are:
    - 2.5% fewer exits from the guest
    - 22.4% more flushes, although they are now local rather than broadcast
    - 0.11% faster (just for the record)

The ASID allocator rework to make it generic has been divided into multiple
patches to make the review easier.

A branch with the patches, based on 5.3-rc1, can be found at:

http://xenbits.xen.org/gitweb/?p=people/julieng/linux-arm.git;a=shortlog;h=refs/heads/vmid-rework/v3

For all the changes, see each individual patch.

Best regards,

Julien Grall (13):
  arm64/mm: Introduce asid_info structure and move
    asid_generation/asid_map to it
  arm64/mm: Move active_asids and reserved_asids to asid_info
  arm64/mm: Move bits to asid_info
  arm64/mm: Move the variable lock and tlb_flush_pending to asid_info
  arm64/mm: Remove dependency on MM in new_context
  arm64/mm: Introduce NUM_CTXT_ASIDS
  arm64/mm: Split asid_inits in 2 parts
  arm64/mm: Split the function check_and_switch_context in 3 parts
  arm64/mm: Introduce a callback to flush the local context
  arm64: Move the ASID allocator code in a separate file
  arm64/lib: Add an helper to free memory allocated by the ASID
    allocator
  arch/arm64: Introduce a capability to tell whether 16-bit VMID is
    available
  kvm/arm: Align the VMID allocation with the arm64 ASID one

Shameer Kolothum (3):
  arm64/mm: Move Pinned ASID related variables to asid_info
  arm64/mm: Split the arm64_mm_context_get/put
  arm64/mm: Introduce a callback to set reserved bits

 arch/arm64/include/asm/cpucaps.h   |   3 +-
 arch/arm64/include/asm/kvm_asm.h   |   4 +-
 arch/arm64/include/asm/kvm_host.h  |   5 +-
 arch/arm64/include/asm/kvm_mmu.h   |   7 +-
 arch/arm64/include/asm/lib_asid.h  |  87 +++++++++
 arch/arm64/kernel/cpufeature.c     |   9 +
 arch/arm64/kvm/arm.c               | 124 +++++--------
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |   6 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c      |  10 +-
 arch/arm64/kvm/hyp/vhe/tlb.c       |  10 +-
 arch/arm64/kvm/mmu.c               |   1 -
 arch/arm64/lib/Makefile            |   2 +
 arch/arm64/lib/asid.c              | 264 +++++++++++++++++++++++++++
 arch/arm64/mm/context.c            | 283 ++++-------------------------
 14 files changed, 469 insertions(+), 346 deletions(-)
 create mode 100644 arch/arm64/include/asm/lib_asid.h
 create mode 100644 arch/arm64/lib/asid.c

-- 
2.17.1


* [PATCH v4 01/16] arm64/mm: Introduce asid_info structure and move asid_generation/asid_map to it
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
@ 2021-04-14 11:22 ` Shameer Kolothum
  2021-04-14 11:22 ` [PATCH v4 02/16] arm64/mm: Move active_asids and reserved_asids to asid_info Shameer Kolothum
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:22 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

From: Julien Grall <julien.grall@arm.com>

In an attempt to make the ASID allocator generic, create a new structure
asid_info to store all the information necessary for the allocator.

For now, move the variables asid_generation, asid_map, cur_idx to the
new structure asid_info. Follow-up patches will move more variables.

Note that, to avoid more renaming afterwards, a local variable 'info' has
been created and is a pointer to the ASID allocator structure.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
v3-->v4:
  Move cur_idx into asid_info.
---
 arch/arm64/mm/context.c | 71 +++++++++++++++++++++++------------------
 1 file changed, 40 insertions(+), 31 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 001737a8f309..783f8bdb91ee 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -20,8 +20,12 @@
 static u32 asid_bits;
 static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
 
-static atomic64_t asid_generation;
-static unsigned long *asid_map;
+static struct asid_info
+{
+	atomic64_t	generation;
+	unsigned long	*map;
+	unsigned int	map_idx;
+} asid_info;
 
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
@@ -88,26 +92,26 @@ static void set_kpti_asid_bits(unsigned long *map)
 	memset(map, 0xaa, len);
 }
 
-static void set_reserved_asid_bits(void)
+static void set_reserved_asid_bits(struct asid_info *info)
 {
 	if (pinned_asid_map)
-		bitmap_copy(asid_map, pinned_asid_map, NUM_USER_ASIDS);
+		bitmap_copy(info->map, pinned_asid_map, NUM_USER_ASIDS);
 	else if (arm64_kernel_unmapped_at_el0())
-		set_kpti_asid_bits(asid_map);
+		set_kpti_asid_bits(info->map);
 	else
-		bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
+		bitmap_clear(info->map, 0, NUM_USER_ASIDS);
 }
 
-#define asid_gen_match(asid) \
-	(!(((asid) ^ atomic64_read(&asid_generation)) >> asid_bits))
+#define asid_gen_match(asid, info) \
+	(!(((asid) ^ atomic64_read(&(info)->generation)) >> asid_bits))
 
-static void flush_context(void)
+static void flush_context(struct asid_info *info)
 {
 	int i;
 	u64 asid;
 
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
-	set_reserved_asid_bits();
+	set_reserved_asid_bits(info);
 
 	for_each_possible_cpu(i) {
 		asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0);
@@ -120,7 +124,7 @@ static void flush_context(void)
 		 */
 		if (asid == 0)
 			asid = per_cpu(reserved_asids, i);
-		__set_bit(asid2idx(asid), asid_map);
+		__set_bit(asid2idx(asid), info->map);
 		per_cpu(reserved_asids, i) = asid;
 	}
 
@@ -155,11 +159,10 @@ static bool check_update_reserved_asid(u64 asid, u64 newasid)
 	return hit;
 }
 
-static u64 new_context(struct mm_struct *mm)
+static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 {
-	static u32 cur_idx = 1;
 	u64 asid = atomic64_read(&mm->context.id);
-	u64 generation = atomic64_read(&asid_generation);
+	u64 generation = atomic64_read(&info->generation);
 
 	if (asid != 0) {
 		u64 newasid = generation | (asid & ~ASID_MASK);
@@ -183,7 +186,7 @@ static u64 new_context(struct mm_struct *mm)
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
-		if (!__test_and_set_bit(asid2idx(asid), asid_map))
+		if (!__test_and_set_bit(asid2idx(asid), info->map))
 			return newasid;
 	}
 
@@ -194,21 +197,21 @@ static u64 new_context(struct mm_struct *mm)
 	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
 	 * pairs.
 	 */
-	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
+	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, info->map_idx);
 	if (asid != NUM_USER_ASIDS)
 		goto set_asid;
 
 	/* We're out of ASIDs, so increment the global generation count */
 	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION,
-						 &asid_generation);
-	flush_context();
+						 &info->generation);
+	flush_context(info);
 
 	/* We have more ASIDs than CPUs, so this will always succeed */
-	asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
+	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, 1);
 
 set_asid:
-	__set_bit(asid, asid_map);
-	cur_idx = asid;
+	__set_bit(asid, info->map);
+	info->map_idx = asid;
 	return idx2asid(asid) | generation;
 }
 
@@ -217,6 +220,7 @@ void check_and_switch_context(struct mm_struct *mm)
 	unsigned long flags;
 	unsigned int cpu;
 	u64 asid, old_active_asid;
+	struct asid_info *info = &asid_info;
 
 	if (system_supports_cnp())
 		cpu_set_reserved_ttbr0();
@@ -238,7 +242,7 @@ void check_and_switch_context(struct mm_struct *mm)
 	 *   because atomic RmWs are totally ordered for a given location.
 	 */
 	old_active_asid = atomic64_read(this_cpu_ptr(&active_asids));
-	if (old_active_asid && asid_gen_match(asid) &&
+	if (old_active_asid && asid_gen_match(asid, info) &&
 	    atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_asids),
 				     old_active_asid, asid))
 		goto switch_mm_fastpath;
@@ -246,8 +250,8 @@ void check_and_switch_context(struct mm_struct *mm)
 	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
-	if (!asid_gen_match(asid)) {
-		asid = new_context(mm);
+	if (!asid_gen_match(asid, info)) {
+		asid = new_context(info, mm);
 		atomic64_set(&mm->context.id, asid);
 	}
 
@@ -274,6 +278,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 {
 	unsigned long flags;
 	u64 asid;
+	struct asid_info *info = &asid_info;
 
 	if (!pinned_asid_map)
 		return 0;
@@ -290,12 +295,12 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 		goto out_unlock;
 	}
 
-	if (!asid_gen_match(asid)) {
+	if (!asid_gen_match(asid, info)) {
 		/*
 		 * We went through one or more rollover since that ASID was
 		 * used. Ensure that it is still valid, or generate a new one.
 		 */
-		asid = new_context(mm);
+		asid = new_context(info, mm);
 		atomic64_set(&mm->context.id, asid);
 	}
 
@@ -400,14 +405,18 @@ arch_initcall(asids_update_limit);
 
 static int asids_init(void)
 {
+	struct asid_info *info = &asid_info;
+
 	asid_bits = get_cpu_asid_bits();
-	atomic64_set(&asid_generation, ASID_FIRST_VERSION);
-	asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*asid_map),
-			   GFP_KERNEL);
-	if (!asid_map)
+	atomic64_set(&info->generation, ASID_FIRST_VERSION);
+	info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*info->map),
+			    GFP_KERNEL);
+	if (!info->map)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
 		      NUM_USER_ASIDS);
 
+	info->map_idx = 1;
+
 	pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS),
 				  sizeof(*pinned_asid_map), GFP_KERNEL);
 	nr_pinned_asids = 0;
@@ -418,7 +427,7 @@ static int asids_init(void)
 	 * and reserve kernel ASID's from beginning.
 	 */
 	if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
-		set_kpti_asid_bits(asid_map);
+		set_kpti_asid_bits(info->map);
 	return 0;
 }
 early_initcall(asids_init);
-- 
2.17.1


* [PATCH v4 02/16] arm64/mm: Move active_asids and reserved_asids to asid_info
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
  2021-04-14 11:22 ` [PATCH v4 01/16] arm64/mm: Introduce asid_info structure and move asid_generation/asid_map to it Shameer Kolothum
@ 2021-04-14 11:22 ` Shameer Kolothum
  2021-04-14 11:22 ` [PATCH v4 03/16] arm64/mm: Move bits " Shameer Kolothum
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:22 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

From: Julien Grall <julien.grall@arm.com>

The variables active_asids and reserved_asids hold information for a
given ASID allocator. So move them to the structure asid_info.

At the same time, introduce wrappers to access the active and reserved
ASIDs to make the code clearer.


Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
v3-->v4
  Keep the this_cpu_ptr in the fast path. See commit c4885bbb3afe ("arm64/mm:
  save memory access in check_and_switch_context() fast switch path")

---
 arch/arm64/mm/context.c | 32 ++++++++++++++++++++------------
 1 file changed, 20 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 783f8bdb91ee..42e011094571 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -25,8 +25,13 @@ static struct asid_info
 	atomic64_t	generation;
 	unsigned long	*map;
 	unsigned int	map_idx;
+	atomic64_t __percpu	*active;
+	u64 __percpu		*reserved;
 } asid_info;
 
+#define active_asid(info, cpu)	 (*per_cpu_ptr((info)->active, cpu))
+#define reserved_asid(info, cpu) (*per_cpu_ptr((info)->reserved, cpu))
+
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 static cpumask_t tlb_flush_pending;
@@ -114,7 +119,7 @@ static void flush_context(struct asid_info *info)
 	set_reserved_asid_bits(info);
 
 	for_each_possible_cpu(i) {
-		asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0);
+		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
 		/*
 		 * If this CPU has already been through a
 		 * rollover, but hasn't run another task in
@@ -123,9 +128,9 @@ static void flush_context(struct asid_info *info)
 		 * the process it is still running.
 		 */
 		if (asid == 0)
-			asid = per_cpu(reserved_asids, i);
+			asid = reserved_asid(info, i);
 		__set_bit(asid2idx(asid), info->map);
-		per_cpu(reserved_asids, i) = asid;
+		reserved_asid(info, i) = asid;
 	}
 
 	/*
@@ -135,7 +140,8 @@ static void flush_context(struct asid_info *info)
 	cpumask_setall(&tlb_flush_pending);
 }
 
-static bool check_update_reserved_asid(u64 asid, u64 newasid)
+static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
+				       u64 newasid)
 {
 	int cpu;
 	bool hit = false;
@@ -150,9 +156,9 @@ static bool check_update_reserved_asid(u64 asid, u64 newasid)
 	 * generation.
 	 */
 	for_each_possible_cpu(cpu) {
-		if (per_cpu(reserved_asids, cpu) == asid) {
+		if (reserved_asid(info, cpu) == asid) {
 			hit = true;
-			per_cpu(reserved_asids, cpu) = newasid;
+			reserved_asid(info, cpu) = newasid;
 		}
 	}
 
@@ -171,7 +177,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 		 * If our current ASID was active during a rollover, we
 		 * can continue to use it and this was just a false alarm.
 		 */
-		if (check_update_reserved_asid(asid, newasid))
+		if (check_update_reserved_asid(info, asid, newasid))
 			return newasid;
 
 		/*
@@ -229,8 +235,8 @@ void check_and_switch_context(struct mm_struct *mm)
 
 	/*
 	 * The memory ordering here is subtle.
-	 * If our active_asids is non-zero and the ASID matches the current
-	 * generation, then we update the active_asids entry with a relaxed
+	 * If our active_asid is non-zero and the ASID matches the current
+	 * generation, then we update the active_asid entry with a relaxed
 	 * cmpxchg. Racing with a concurrent rollover means that either:
 	 *
 	 * - We get a zero back from the cmpxchg and end up waiting on the
@@ -241,9 +247,9 @@ void check_and_switch_context(struct mm_struct *mm)
 	 *   relaxed xchg in flush_context will treat us as reserved
 	 *   because atomic RmWs are totally ordered for a given location.
 	 */
-	old_active_asid = atomic64_read(this_cpu_ptr(&active_asids));
+	old_active_asid = atomic64_read(this_cpu_ptr(info->active));
 	if (old_active_asid && asid_gen_match(asid, info) &&
-	    atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_asids),
+	    atomic64_cmpxchg_relaxed(this_cpu_ptr(info->active),
 				     old_active_asid, asid))
 		goto switch_mm_fastpath;
 
@@ -259,7 +265,7 @@ void check_and_switch_context(struct mm_struct *mm)
 	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
 		local_flush_tlb_all();
 
-	atomic64_set(this_cpu_ptr(&active_asids), asid);
+	atomic64_set(&active_asid(info, cpu), asid);
 	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
 switch_mm_fastpath:
@@ -416,6 +422,8 @@ static int asids_init(void)
 		      NUM_USER_ASIDS);
 
 	info->map_idx = 1;
+	info->active = &active_asids;
+	info->reserved = &reserved_asids;
 
 	pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS),
 				  sizeof(*pinned_asid_map), GFP_KERNEL);
-- 
2.17.1


* [PATCH v4 03/16] arm64/mm: Move bits to asid_info
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
  2021-04-14 11:22 ` [PATCH v4 01/16] arm64/mm: Introduce asid_info structure and move asid_generation/asid_map to it Shameer Kolothum
  2021-04-14 11:22 ` [PATCH v4 02/16] arm64/mm: Move active_asids and reserved_asids to asid_info Shameer Kolothum
@ 2021-04-14 11:22 ` Shameer Kolothum
  2021-04-14 11:23 ` [PATCH v4 04/16] arm64/mm: Move the variable lock and tlb_flush_pending " Shameer Kolothum
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:22 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

From: Julien Grall <julien.grall@arm.com>

The variable bits holds information for a given ASID allocator. So move
it to the asid_info structure.

Because most of the macros were relying on bits, they now take an
extra parameter that is a pointer to the asid_info structure.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
 arch/arm64/mm/context.c | 70 +++++++++++++++++++++--------------------
 1 file changed, 36 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 42e011094571..1fd40a42955c 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -17,7 +17,6 @@
 #include <asm/smp.h>
 #include <asm/tlbflush.h>
 
-static u32 asid_bits;
 static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
 
 static struct asid_info
@@ -27,6 +26,7 @@ static struct asid_info
 	unsigned int	map_idx;
 	atomic64_t __percpu	*active;
 	u64 __percpu		*reserved;
+	u32			bits;
 } asid_info;
 
 #define active_asid(info, cpu)	 (*per_cpu_ptr((info)->active, cpu))
@@ -40,12 +40,12 @@ static unsigned long max_pinned_asids;
 static unsigned long nr_pinned_asids;
 static unsigned long *pinned_asid_map;
 
-#define ASID_MASK		(~GENMASK(asid_bits - 1, 0))
-#define ASID_FIRST_VERSION	(1UL << asid_bits)
+#define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
+#define ASID_FIRST_VERSION(info)	(1UL << (info)->bits)
 
-#define NUM_USER_ASIDS		ASID_FIRST_VERSION
-#define asid2idx(asid)		((asid) & ~ASID_MASK)
-#define idx2asid(idx)		asid2idx(idx)
+#define NUM_USER_ASIDS(info)		ASID_FIRST_VERSION(info)
+#define asid2idx(info, asid)		((asid) & ~ASID_MASK(info))
+#define idx2asid(info, idx)		asid2idx(info, idx)
 
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
@@ -74,20 +74,20 @@ void verify_cpu_asid_bits(void)
 {
 	u32 asid = get_cpu_asid_bits();
 
-	if (asid < asid_bits) {
+	if (asid < asid_info.bits) {
 		/*
 		 * We cannot decrease the ASID size at runtime, so panic if we support
 		 * fewer ASID bits than the boot CPU.
 		 */
 		pr_crit("CPU%d: smaller ASID size(%u) than boot CPU (%u)\n",
-				smp_processor_id(), asid, asid_bits);
+				smp_processor_id(), asid, asid_info.bits);
 		cpu_panic_kernel();
 	}
 }
 
-static void set_kpti_asid_bits(unsigned long *map)
+static void set_kpti_asid_bits(struct asid_info *info, unsigned long *map)
 {
-	unsigned int len = BITS_TO_LONGS(NUM_USER_ASIDS) * sizeof(unsigned long);
+	unsigned int len = BITS_TO_LONGS(NUM_USER_ASIDS(info)) * sizeof(unsigned long);
 	/*
 	 * In case of KPTI kernel/user ASIDs are allocated in
 	 * pairs, the bottom bit distinguishes the two: if it
@@ -100,15 +100,15 @@ static void set_kpti_asid_bits(unsigned long *map)
 static void set_reserved_asid_bits(struct asid_info *info)
 {
 	if (pinned_asid_map)
-		bitmap_copy(info->map, pinned_asid_map, NUM_USER_ASIDS);
+		bitmap_copy(info->map, pinned_asid_map, NUM_USER_ASIDS(info));
 	else if (arm64_kernel_unmapped_at_el0())
-		set_kpti_asid_bits(info->map);
+		set_kpti_asid_bits(info, info->map);
 	else
-		bitmap_clear(info->map, 0, NUM_USER_ASIDS);
+		bitmap_clear(info->map, 0, NUM_USER_ASIDS(info));
 }
 
 #define asid_gen_match(asid, info) \
-	(!(((asid) ^ atomic64_read(&(info)->generation)) >> asid_bits))
+	(!(((asid) ^ atomic64_read(&(info)->generation)) >> info->bits))
 
 static void flush_context(struct asid_info *info)
 {
@@ -129,7 +129,7 @@ static void flush_context(struct asid_info *info)
 		 */
 		if (asid == 0)
 			asid = reserved_asid(info, i);
-		__set_bit(asid2idx(asid), info->map);
+		__set_bit(asid2idx(info, asid), info->map);
 		reserved_asid(info, i) = asid;
 	}
 
@@ -171,7 +171,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 	u64 generation = atomic64_read(&info->generation);
 
 	if (asid != 0) {
-		u64 newasid = generation | (asid & ~ASID_MASK);
+		u64 newasid = generation | (asid & ~ASID_MASK(info));
 
 		/*
 		 * If our current ASID was active during a rollover, we
@@ -192,7 +192,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 		 * We had a valid ASID in a previous life, so try to re-use
 		 * it if possible.
 		 */
-		if (!__test_and_set_bit(asid2idx(asid), info->map))
+		if (!__test_and_set_bit(asid2idx(info, asid), info->map))
 			return newasid;
 	}
 
@@ -203,22 +203,22 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
 	 * pairs.
 	 */
-	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, info->map_idx);
-	if (asid != NUM_USER_ASIDS)
+	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), info->map_idx);
+	if (asid != NUM_USER_ASIDS(info))
 		goto set_asid;
 
 	/* We're out of ASIDs, so increment the global generation count */
-	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION,
+	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info),
 						 &info->generation);
 	flush_context(info);
 
 	/* We have more ASIDs than CPUs, so this will always succeed */
-	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS, 1);
+	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), 1);
 
 set_asid:
 	__set_bit(asid, info->map);
 	info->map_idx = asid;
-	return idx2asid(asid) | generation;
+	return idx2asid(info, asid) | generation;
 }
 
 void check_and_switch_context(struct mm_struct *mm)
@@ -311,13 +311,13 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 	}
 
 	nr_pinned_asids++;
-	__set_bit(asid2idx(asid), pinned_asid_map);
+	__set_bit(asid2idx(info, asid), pinned_asid_map);
 	refcount_set(&mm->context.pinned, 1);
 
 out_unlock:
 	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
-	asid &= ~ASID_MASK;
+	asid &= ~ASID_MASK(info);
 
 	/* Set the equivalent of USER_ASID_BIT */
 	if (asid && arm64_kernel_unmapped_at_el0())
@@ -330,6 +330,7 @@ EXPORT_SYMBOL_GPL(arm64_mm_context_get);
 void arm64_mm_context_put(struct mm_struct *mm)
 {
 	unsigned long flags;
+	struct asid_info *info = &asid_info;
 	u64 asid = atomic64_read(&mm->context.id);
 
 	if (!pinned_asid_map)
@@ -338,7 +339,7 @@ void arm64_mm_context_put(struct mm_struct *mm)
 	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
 
 	if (refcount_dec_and_test(&mm->context.pinned)) {
-		__clear_bit(asid2idx(asid), pinned_asid_map);
+		__clear_bit(asid2idx(info, asid), pinned_asid_map);
 		nr_pinned_asids--;
 	}
 
@@ -384,12 +385,13 @@ void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm)
 
 static int asids_update_limit(void)
 {
-	unsigned long num_available_asids = NUM_USER_ASIDS;
+	struct asid_info *info = &asid_info;
+	unsigned long num_available_asids = NUM_USER_ASIDS(info);
 
 	if (arm64_kernel_unmapped_at_el0()) {
 		num_available_asids /= 2;
 		if (pinned_asid_map)
-			set_kpti_asid_bits(pinned_asid_map);
+			set_kpti_asid_bits(info, pinned_asid_map);
 	}
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
@@ -413,19 +415,19 @@ static int asids_init(void)
 {
 	struct asid_info *info = &asid_info;
 
-	asid_bits = get_cpu_asid_bits();
-	atomic64_set(&info->generation, ASID_FIRST_VERSION);
-	info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS), sizeof(*info->map),
-			    GFP_KERNEL);
+	info->bits = get_cpu_asid_bits();
+	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
+	info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)),
+			    sizeof(*info->map), GFP_KERNEL);
 	if (!info->map)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
-		      NUM_USER_ASIDS);
+		      NUM_USER_ASIDS(info));
 
 	info->map_idx = 1;
 	info->active = &active_asids;
 	info->reserved = &reserved_asids;
 
-	pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS),
+	pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)),
 				  sizeof(*pinned_asid_map), GFP_KERNEL);
 	nr_pinned_asids = 0;
 
@@ -435,7 +437,7 @@ static int asids_init(void)
 	 * and reserve kernel ASID's from beginning.
 	 */
 	if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
-		set_kpti_asid_bits(info->map);
+		set_kpti_asid_bits(info, info->map);
 	return 0;
 }
 early_initcall(asids_init);
-- 
2.17.1


* [PATCH v4 04/16] arm64/mm: Move the variable lock and tlb_flush_pending to asid_info
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (2 preceding siblings ...)
  2021-04-14 11:22 ` [PATCH v4 03/16] arm64/mm: Move bits " Shameer Kolothum
@ 2021-04-14 11:23 ` Shameer Kolothum
  2021-04-14 11:23 ` [PATCH v4 05/16] arm64/mm: Remove dependency on MM in new_context Shameer Kolothum
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:23 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

From: Julien Grall <julien.grall@arm.com>

The variables lock and tlb_flush_pending hold information for a given
ASID allocator. So move them to the asid_info structure.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
 arch/arm64/mm/context.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 1fd40a42955c..139ebc161acb 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -17,8 +17,6 @@
 #include <asm/smp.h>
 #include <asm/tlbflush.h>
 
-static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
-
 static struct asid_info
 {
 	atomic64_t	generation;
@@ -27,6 +25,9 @@ static struct asid_info
 	atomic64_t __percpu	*active;
 	u64 __percpu		*reserved;
 	u32			bits;
+	raw_spinlock_t		lock;
+	/* Which CPU requires context flush on next call */
+	cpumask_t		flush_pending;
 } asid_info;
 
 #define active_asid(info, cpu)	 (*per_cpu_ptr((info)->active, cpu))
@@ -34,7 +35,6 @@ static struct asid_info
 
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
-static cpumask_t tlb_flush_pending;
 
 static unsigned long max_pinned_asids;
 static unsigned long nr_pinned_asids;
@@ -137,7 +137,7 @@ static void flush_context(struct asid_info *info)
 	 * Queue a TLB invalidation for each CPU to perform on next
 	 * context-switch
 	 */
-	cpumask_setall(&tlb_flush_pending);
+	cpumask_setall(&info->flush_pending);
 }
 
 static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
@@ -253,7 +253,7 @@ void check_and_switch_context(struct mm_struct *mm)
 				     old_active_asid, asid))
 		goto switch_mm_fastpath;
 
-	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
+	raw_spin_lock_irqsave(&info->lock, flags);
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
 	if (!asid_gen_match(asid, info)) {
@@ -262,11 +262,11 @@ void check_and_switch_context(struct mm_struct *mm)
 	}
 
 	cpu = smp_processor_id();
-	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
+	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
 		local_flush_tlb_all();
 
 	atomic64_set(&active_asid(info, cpu), asid);
-	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
+	raw_spin_unlock_irqrestore(&info->lock, flags);
 
 switch_mm_fastpath:
 
@@ -289,7 +289,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 	if (!pinned_asid_map)
 		return 0;
 
-	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
+	raw_spin_lock_irqsave(&info->lock, flags);
 
 	asid = atomic64_read(&mm->context.id);
 
@@ -315,7 +315,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 	refcount_set(&mm->context.pinned, 1);
 
 out_unlock:
-	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
+	raw_spin_unlock_irqrestore(&info->lock, flags);
 
 	asid &= ~ASID_MASK(info);
 
@@ -336,14 +336,14 @@ void arm64_mm_context_put(struct mm_struct *mm)
 	if (!pinned_asid_map)
 		return;
 
-	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
+	raw_spin_lock_irqsave(&info->lock, flags);
 
 	if (refcount_dec_and_test(&mm->context.pinned)) {
 		__clear_bit(asid2idx(info, asid), pinned_asid_map);
 		nr_pinned_asids--;
 	}
 
-	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
+	raw_spin_unlock_irqrestore(&info->lock, flags);
 }
 EXPORT_SYMBOL_GPL(arm64_mm_context_put);
 
@@ -426,6 +426,7 @@ static int asids_init(void)
 	info->map_idx = 1;
 	info->active = &active_asids;
 	info->reserved = &reserved_asids;
+	raw_spin_lock_init(&info->lock);
 
 	pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)),
 				  sizeof(*pinned_asid_map), GFP_KERNEL);
-- 
2.17.1


* [PATCH v4 05/16] arm64/mm: Remove dependency on MM in new_context
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (3 preceding siblings ...)
  2021-04-14 11:23 ` [PATCH v4 04/16] arm64/mm: Move the variable lock and tlb_flush_pending " Shameer Kolothum
@ 2021-04-14 11:23 ` Shameer Kolothum
  2021-04-14 11:23 ` [PATCH v4 06/16] arm64/mm: Introduce NUM_CTXT_ASIDS Shameer Kolothum
                   ` (11 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:23 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

From: Julien Grall <julien.grall@arm.com>

The function new_context will be part of a generic ASID allocator. At
the moment, the MM structure is used to fetch the ASID and the pinned
refcount.

To remove the dependency on MM, just pass pointers to the current ASID
and the pinned refcount instead. Also please note that 'pinned' may be
NULL if the user doesn't require pinned ASID support.
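
For illustration only (not part of this patch): once new_context() no
longer takes an mm, a user without pinning support could call it along
these lines. The example_new_id() name is made up for the example;
the locking follows the existing callers.

/* Hypothetical caller without pinned-ASID support (pinned == NULL). */
static u64 example_new_id(struct asid_info *info, atomic64_t *pasid)
{
	unsigned long flags;
	u64 newid;

	raw_spin_lock_irqsave(&info->lock, flags);
	/* No refcount is passed, so the pinned handling is simply skipped. */
	newid = new_context(info, pasid, NULL);
	atomic64_set(pasid, newid);
	raw_spin_unlock_irqrestore(&info->lock, flags);

	return newid;
}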


Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
v3-->v4:
  Changes related to Pinned ASID refcount.

---
 arch/arm64/mm/context.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 139ebc161acb..628304e0d3b1 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -165,9 +165,10 @@ static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
 	return hit;
 }
 
-static u64 new_context(struct asid_info *info, struct mm_struct *mm)
+static u64 new_context(struct asid_info *info, atomic64_t *pasid,
+		       refcount_t *pinned)
 {
-	u64 asid = atomic64_read(&mm->context.id);
+	u64 asid = atomic64_read(pasid);
 	u64 generation = atomic64_read(&info->generation);
 
 	if (asid != 0) {
@@ -185,7 +186,7 @@ static u64 new_context(struct asid_info *info, struct mm_struct *mm)
 		 * takes priority, because even if it is also pinned, we need to
 		 * update the generation into the reserved_asids.
 		 */
-		if (refcount_read(&mm->context.pinned))
+		if (pinned && refcount_read(pinned))
 			return newasid;
 
 		/*
@@ -257,7 +258,7 @@ void check_and_switch_context(struct mm_struct *mm)
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
 	if (!asid_gen_match(asid, info)) {
-		asid = new_context(info, mm);
+		asid = new_context(info, &mm->context.id, &mm->context.pinned);
 		atomic64_set(&mm->context.id, asid);
 	}
 
@@ -306,7 +307,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 		 * We went through one or more rollover since that ASID was
 		 * used. Ensure that it is still valid, or generate a new one.
 		 */
-		asid = new_context(info, mm);
+		asid = new_context(info, &mm->context.id, &mm->context.pinned);
 		atomic64_set(&mm->context.id, asid);
 	}
 
-- 
2.17.1


* [PATCH v4 06/16] arm64/mm: Introduce NUM_CTXT_ASIDS
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (4 preceding siblings ...)
  2021-04-14 11:23 ` [PATCH v4 05/16] arm64/mm: Remove dependency on MM in new_context Shameer Kolothum
@ 2021-04-14 11:23 ` Shameer Kolothum
  2021-04-14 11:23 ` [PATCH v4 07/16] arm64/mm: Move Pinned ASID related variables to asid_info Shameer Kolothum
                   ` (10 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:23 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

From: Julien Grall <julien.grall@arm.com>

At the moment ASID_FIRST_VERSION is used to know the number of ASIDs
supported. As we are going to move the ASID allocator to a separate file,
it would be better to use a different name for external users.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
v3-->v4
 -Dropped patch #6, but retained the name NUM_CTXT_ASIDS.

---
 arch/arm64/mm/context.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 628304e0d3b1..0f11d7c7f6a3 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -41,9 +41,9 @@ static unsigned long nr_pinned_asids;
 static unsigned long *pinned_asid_map;
 
 #define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
-#define ASID_FIRST_VERSION(info)	(1UL << (info)->bits)
+#define NUM_CTXT_ASIDS(info)		(1UL << ((info)->bits))
+#define ASID_FIRST_VERSION(info)        NUM_CTXT_ASIDS(info)
 
-#define NUM_USER_ASIDS(info)		ASID_FIRST_VERSION(info)
 #define asid2idx(info, asid)		((asid) & ~ASID_MASK(info))
 #define idx2asid(info, idx)		asid2idx(info, idx)
 
@@ -87,7 +87,7 @@ void verify_cpu_asid_bits(void)
 
 static void set_kpti_asid_bits(struct asid_info *info, unsigned long *map)
 {
-	unsigned int len = BITS_TO_LONGS(NUM_USER_ASIDS(info)) * sizeof(unsigned long);
+	unsigned int len = BITS_TO_LONGS(NUM_CTXT_ASIDS(info)) * sizeof(unsigned long);
 	/*
 	 * In case of KPTI kernel/user ASIDs are allocated in
 	 * pairs, the bottom bit distinguishes the two: if it
@@ -100,11 +100,11 @@ static void set_kpti_asid_bits(struct asid_info *info, unsigned long *map)
 static void set_reserved_asid_bits(struct asid_info *info)
 {
 	if (pinned_asid_map)
-		bitmap_copy(info->map, pinned_asid_map, NUM_USER_ASIDS(info));
+		bitmap_copy(info->map, pinned_asid_map, NUM_CTXT_ASIDS(info));
 	else if (arm64_kernel_unmapped_at_el0())
 		set_kpti_asid_bits(info, info->map);
 	else
-		bitmap_clear(info->map, 0, NUM_USER_ASIDS(info));
+		bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info));
 }
 
 #define asid_gen_match(asid, info) \
@@ -204,8 +204,8 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid,
 	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
 	 * pairs.
 	 */
-	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), info->map_idx);
-	if (asid != NUM_USER_ASIDS(info))
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), info->map_idx);
+	if (asid != NUM_CTXT_ASIDS(info))
 		goto set_asid;
 
 	/* We're out of ASIDs, so increment the global generation count */
@@ -214,7 +214,7 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid,
 	flush_context(info);
 
 	/* We have more ASIDs than CPUs, so this will always succeed */
-	asid = find_next_zero_bit(info->map, NUM_USER_ASIDS(info), 1);
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
 
 set_asid:
 	__set_bit(asid, info->map);
@@ -387,7 +387,7 @@ void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm)
 static int asids_update_limit(void)
 {
 	struct asid_info *info = &asid_info;
-	unsigned long num_available_asids = NUM_USER_ASIDS(info);
+	unsigned long num_available_asids = NUM_CTXT_ASIDS(info);
 
 	if (arm64_kernel_unmapped_at_el0()) {
 		num_available_asids /= 2;
@@ -418,18 +418,18 @@ static int asids_init(void)
 
 	info->bits = get_cpu_asid_bits();
 	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
-	info->map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)),
+	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
 			    sizeof(*info->map), GFP_KERNEL);
 	if (!info->map)
 		panic("Failed to allocate bitmap for %lu ASIDs\n",
-		      NUM_USER_ASIDS(info));
+		      NUM_CTXT_ASIDS(info));
 
 	info->map_idx = 1;
 	info->active = &active_asids;
 	info->reserved = &reserved_asids;
 	raw_spin_lock_init(&info->lock);
 
-	pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_USER_ASIDS(info)),
+	pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
 				  sizeof(*pinned_asid_map), GFP_KERNEL);
 	nr_pinned_asids = 0;
 
-- 
2.17.1


* [PATCH v4 07/16] arm64/mm: Move Pinned ASID related variables to asid_info
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (5 preceding siblings ...)
  2021-04-14 11:23 ` [PATCH v4 06/16] arm64/mm: Introduce NUM_CTXT_ASIDS Shameer Kolothum
@ 2021-04-14 11:23 ` Shameer Kolothum
  2021-04-14 11:23 ` [PATCH v4 08/16] arm64/mm: Split asid_inits in 2 parts Shameer Kolothum
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:23 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

The Pinned ASID variables hold information for a given ASID
allocator. So move them to the structure asid_info.

Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
 arch/arm64/mm/context.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 0f11d7c7f6a3..8af54e06f5bc 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -28,6 +28,10 @@ static struct asid_info
 	raw_spinlock_t		lock;
 	/* Which CPU requires context flush on next call */
 	cpumask_t		flush_pending;
+	/* Pinned ASIDs info */
+	unsigned long		*pinned_map;
+	unsigned long		max_pinned_asids;
+	unsigned long		nr_pinned_asids;
 } asid_info;
 
 #define active_asid(info, cpu)	 (*per_cpu_ptr((info)->active, cpu))
@@ -36,10 +40,6 @@ static struct asid_info
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 
-static unsigned long max_pinned_asids;
-static unsigned long nr_pinned_asids;
-static unsigned long *pinned_asid_map;
-
 #define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
 #define NUM_CTXT_ASIDS(info)		(1UL << ((info)->bits))
 #define ASID_FIRST_VERSION(info)        NUM_CTXT_ASIDS(info)
@@ -99,8 +99,8 @@ static void set_kpti_asid_bits(struct asid_info *info, unsigned long *map)
 
 static void set_reserved_asid_bits(struct asid_info *info)
 {
-	if (pinned_asid_map)
-		bitmap_copy(info->map, pinned_asid_map, NUM_CTXT_ASIDS(info));
+	if (info->pinned_map)
+		bitmap_copy(info->map, info->pinned_map, NUM_CTXT_ASIDS(info));
 	else if (arm64_kernel_unmapped_at_el0())
 		set_kpti_asid_bits(info, info->map);
 	else
@@ -287,7 +287,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 	u64 asid;
 	struct asid_info *info = &asid_info;
 
-	if (!pinned_asid_map)
+	if (!info->pinned_map)
 		return 0;
 
 	raw_spin_lock_irqsave(&info->lock, flags);
@@ -297,7 +297,7 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 	if (refcount_inc_not_zero(&mm->context.pinned))
 		goto out_unlock;
 
-	if (nr_pinned_asids >= max_pinned_asids) {
+	if (info->nr_pinned_asids >= info->max_pinned_asids) {
 		asid = 0;
 		goto out_unlock;
 	}
@@ -311,8 +311,8 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 		atomic64_set(&mm->context.id, asid);
 	}
 
-	nr_pinned_asids++;
-	__set_bit(asid2idx(info, asid), pinned_asid_map);
+	info->nr_pinned_asids++;
+	__set_bit(asid2idx(info, asid), info->pinned_map);
 	refcount_set(&mm->context.pinned, 1);
 
 out_unlock:
@@ -334,14 +334,14 @@ void arm64_mm_context_put(struct mm_struct *mm)
 	struct asid_info *info = &asid_info;
 	u64 asid = atomic64_read(&mm->context.id);
 
-	if (!pinned_asid_map)
+	if (!info->pinned_map)
 		return;
 
 	raw_spin_lock_irqsave(&info->lock, flags);
 
 	if (refcount_dec_and_test(&mm->context.pinned)) {
-		__clear_bit(asid2idx(info, asid), pinned_asid_map);
-		nr_pinned_asids--;
+		__clear_bit(asid2idx(info, asid), info->pinned_map);
+		info->nr_pinned_asids--;
 	}
 
 	raw_spin_unlock_irqrestore(&info->lock, flags);
@@ -391,8 +391,8 @@ static int asids_update_limit(void)
 
 	if (arm64_kernel_unmapped_at_el0()) {
 		num_available_asids /= 2;
-		if (pinned_asid_map)
-			set_kpti_asid_bits(info, pinned_asid_map);
+		if (info->pinned_map)
+			set_kpti_asid_bits(info, info->pinned_map);
 	}
 	/*
 	 * Expect allocation after rollover to fail if we don't have at least
@@ -407,7 +407,7 @@ static int asids_update_limit(void)
 	 * even if all CPUs have a reserved ASID and the maximum number of ASIDs
 	 * are pinned, there still is at least one empty slot in the ASID map.
 	 */
-	max_pinned_asids = num_available_asids - num_possible_cpus() - 2;
+	info->max_pinned_asids = num_available_asids - num_possible_cpus() - 2;
 	return 0;
 }
 arch_initcall(asids_update_limit);
@@ -429,9 +429,9 @@ static int asids_init(void)
 	info->reserved = &reserved_asids;
 	raw_spin_lock_init(&info->lock);
 
-	pinned_asid_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
-				  sizeof(*pinned_asid_map), GFP_KERNEL);
-	nr_pinned_asids = 0;
+	info->pinned_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
+				   sizeof(*info->pinned_map), GFP_KERNEL);
+	info->nr_pinned_asids = 0;
 
 	/*
 	 * We cannot call set_reserved_asid_bits() here because CPU
-- 
2.17.1


* [PATCH v4 08/16] arm64/mm: Split asid_inits in 2 parts
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (6 preceding siblings ...)
  2021-04-14 11:23 ` [PATCH v4 07/16] arm64/mm: Move Pinned ASID related variables to asid_info Shameer Kolothum
@ 2021-04-14 11:23 ` Shameer Kolothum
  2021-04-14 11:23 ` [PATCH v4 09/16] arm64/mm: Split the function check_and_switch_context in 3 parts Shameer Kolothum
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:23 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

From: Julien Grall <julien.grall@arm.com>

Move the common initialization of the ASID allocator out into a separate
function.
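
For illustration only (not part of this patch): with the initialization
factored out, a second allocator instance, e.g. a future KVM VMID
allocator, could be brought up along these lines. The vmid_info/vmids_init
names and the 16-bit figure are assumptions for the example.

static struct asid_info vmid_info;

static int vmids_init(void)
{
	/* e.g. 16 VMID bits, no pinned-ASID support for this user */
	if (asid_allocator_init(&vmid_info, 16, false))
		return -ENOMEM;

	return 0;
}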

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
v3-->v4
  -dropped asid_per_ctxt and added pinned asid map init.
---
 arch/arm64/mm/context.c | 44 +++++++++++++++++++++++++++++++----------
 1 file changed, 34 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 8af54e06f5bc..041c3c5e0216 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -412,26 +412,50 @@ static int asids_update_limit(void)
 }
 arch_initcall(asids_update_limit);
 
-static int asids_init(void)
+/*
+ * Initialize the ASID allocator
+ *
+ * @info: Pointer to the asid allocator structure
+ * @bits: Number of ASIDs available
+ * @pinned: Support for Pinned ASIDs
+ */
+static int asid_allocator_init(struct asid_info *info, u32 bits, bool pinned)
 {
-	struct asid_info *info = &asid_info;
+	info->bits = bits;
 
-	info->bits = get_cpu_asid_bits();
+	/*
+	 * Expect allocation after rollover to fail if we don't have at least
+	 * one more ASID than CPUs. ASID #0 is always reserved.
+	 */
+	WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
 	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
 	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
 			    sizeof(*info->map), GFP_KERNEL);
 	if (!info->map)
-		panic("Failed to allocate bitmap for %lu ASIDs\n",
-		      NUM_CTXT_ASIDS(info));
+		return -ENOMEM;
 
 	info->map_idx = 1;
-	info->active = &active_asids;
-	info->reserved = &reserved_asids;
 	raw_spin_lock_init(&info->lock);
 
-	info->pinned_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
-				   sizeof(*info->pinned_map), GFP_KERNEL);
-	info->nr_pinned_asids = 0;
+	if (pinned) {
+		info->pinned_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
+					   sizeof(*info->pinned_map), GFP_KERNEL);
+		info->nr_pinned_asids = 0;
+	}
+
+	return 0;
+}
+
+static int asids_init(void)
+{
+	struct asid_info *info = &asid_info;
+
+	if (asid_allocator_init(info, get_cpu_asid_bits(), true))
+		panic("Unable to initialize ASID allocator for %lu ASIDs\n",
+		      NUM_CTXT_ASIDS(info));
+
+	info->active = &active_asids;
+	info->reserved = &reserved_asids;
 
 	/*
 	 * We cannot call set_reserved_asid_bits() here because CPU
-- 
2.17.1


* [PATCH v4 09/16] arm64/mm: Split the function check_and_switch_context in 3 parts
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (7 preceding siblings ...)
  2021-04-14 11:23 ` [PATCH v4 08/16] arm64/mm: Split asid_inits in 2 parts Shameer Kolothum
@ 2021-04-14 11:23 ` Shameer Kolothum
  2021-04-14 11:23 ` [PATCH v4 10/16] arm64/mm: Split the arm64_mm_context_get/put Shameer Kolothum
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:23 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

From: Julien Grall <julien.grall@arm.com>

The function check_and_switch_context is used to:
    1) Check whether the ASID is still valid
    2) Generate a new one if it is not valid
    3) Switch the context

While the latter is specific to the MM subsystem, the rest could be part
of the generic ASID allocator.

After this patch, the function is split into 3 parts, which correspond
to the following functions:
    1) asid_check_context: Check if the ASID is still valid
    2) asid_new_context: Generate a new ASID for the context
    3) check_and_switch_context: Call 1) and 2) and switch the context

1) and 2) have not been merged into a single function because we want to
avoid adding a branch when the ASID is still valid. This will matter
when the code is moved to a separate file later on, as 1) will reside
in the header as a static inline function.
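
To make the last point concrete, once the allocator moves to its own
files the split could end up looking roughly as below (the lib_asid.h
name comes from the diffstat in the cover letter; treat this as a sketch
rather than the final interface):

/*
 * lib_asid.h (sketch): the check stays inline in the header, so the
 * common case (ASID still valid) avoids an out-of-line call, and keeping
 * asid_new_context() separate avoids adding an extra branch on that path.
 */
static inline void asid_check_context(struct asid_info *info,
				      atomic64_t *pasid, refcount_t *pinned)
{
	u64 asid = atomic64_read(pasid);
	u64 old_active = atomic64_read(this_cpu_ptr(info->active));

	if (old_active && asid_gen_match(asid, info) &&
	    atomic64_cmpxchg_relaxed(this_cpu_ptr(info->active),
				     old_active, asid))
		return;

	/* Slow path: allocate a new ASID, kept out of line in asid.c. */
	asid_new_context(info, pasid, pinned);
}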

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
v3 comment:
    Will wants to avoid adding a branch when the ASID is still valid. So
    1) and 2) are in separate functions. The former will move to a new
    header and be made static inline.
---
 arch/arm64/mm/context.c | 70 ++++++++++++++++++++++++++++-------------
 1 file changed, 48 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 041c3c5e0216..40ef013c90c3 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -222,17 +222,49 @@ static u64 new_context(struct asid_info *info, atomic64_t *pasid,
 	return idx2asid(info, asid) | generation;
 }
 
-void check_and_switch_context(struct mm_struct *mm)
+/*
+ * Generate a new ASID for the context.
+ *
+ * @pasid: Pointer to the current ASID batch allocated. It will be updated
+ * with the new ASID batch.
+ * @pinned: refcount if asid is pinned.
+ * Caller needs to make sure preempt is disabled before calling this function.
+ */
+static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+			     refcount_t *pinned)
 {
 	unsigned long flags;
-	unsigned int cpu;
-	u64 asid, old_active_asid;
-	struct asid_info *info = &asid_info;
+	u64 asid;
+	unsigned int cpu = smp_processor_id();
 
-	if (system_supports_cnp())
-		cpu_set_reserved_ttbr0();
+	raw_spin_lock_irqsave(&info->lock, flags);
+	/* Check that our ASID belongs to the current generation. */
+	asid = atomic64_read(pasid);
+	if (!asid_gen_match(asid, info)) {
+		asid = new_context(info, pasid, pinned);
+		atomic64_set(pasid, asid);
+	}
 
-	asid = atomic64_read(&mm->context.id);
+	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
+		local_flush_tlb_all();
+
+	atomic64_set(&active_asid(info, cpu), asid);
+	raw_spin_unlock_irqrestore(&info->lock, flags);
+}
+
+/*
+ * Check the ASID is still valid for the context. If not generate a new ASID.
+ *
+ * @pasid: Pointer to the current ASID batch
+ * @pinned: refcount if asid is pinned
+ * Caller needs to make sure preempt is disabled before calling this function.
+ */
+static void asid_check_context(struct asid_info *info, atomic64_t *pasid,
+			       refcount_t *pinned)
+{
+	u64 asid, old_active_asid;
+
+	asid = atomic64_read(pasid);
 
 	/*
 	 * The memory ordering here is subtle.
@@ -252,24 +284,18 @@ void check_and_switch_context(struct mm_struct *mm)
 	if (old_active_asid && asid_gen_match(asid, info) &&
 	    atomic64_cmpxchg_relaxed(this_cpu_ptr(info->active),
 				     old_active_asid, asid))
-		goto switch_mm_fastpath;
-
-	raw_spin_lock_irqsave(&info->lock, flags);
-	/* Check that our ASID belongs to the current generation. */
-	asid = atomic64_read(&mm->context.id);
-	if (!asid_gen_match(asid, info)) {
-		asid = new_context(info, &mm->context.id, &mm->context.pinned);
-		atomic64_set(&mm->context.id, asid);
-	}
+		return;
 
-	cpu = smp_processor_id();
-	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
-		local_flush_tlb_all();
+	asid_new_context(info, pasid, pinned);
+}
 
-	atomic64_set(&active_asid(info, cpu), asid);
-	raw_spin_unlock_irqrestore(&info->lock, flags);
+void check_and_switch_context(struct mm_struct *mm)
+{
+	if (system_supports_cnp())
+		cpu_set_reserved_ttbr0();
 
-switch_mm_fastpath:
+	asid_check_context(&asid_info, &mm->context.id,
+			   &mm->context.pinned);
 
 	arm64_apply_bp_hardening();
 
-- 
2.17.1


* [PATCH v4 10/16] arm64/mm: Split the arm64_mm_context_get/put
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (8 preceding siblings ...)
  2021-04-14 11:23 ` [PATCH v4 09/16] arm64/mm: Split the function check_and_switch_context in 3 parts Shameer Kolothum
@ 2021-04-14 11:23 ` Shameer Kolothum
  2021-04-14 11:23 ` [PATCH v4 11/16] arm64/mm: Introduce a callback to flush the local context Shameer Kolothum
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:23 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

Keep only the mm-specific part in arm64_mm_context_get/put
and move the rest to generic functions.

Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
 arch/arm64/mm/context.c | 53 +++++++++++++++++++++++++++--------------
 1 file changed, 35 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 40ef013c90c3..901472a57b5d 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -307,20 +307,21 @@ void check_and_switch_context(struct mm_struct *mm)
 		cpu_switch_mm(mm->pgd, mm);
 }
 
-unsigned long arm64_mm_context_get(struct mm_struct *mm)
+static unsigned long asid_context_pinned_get(struct asid_info *info,
+					     atomic64_t *pasid,
+					     refcount_t *pinned)
 {
 	unsigned long flags;
 	u64 asid;
-	struct asid_info *info = &asid_info;
 
 	if (!info->pinned_map)
 		return 0;
 
 	raw_spin_lock_irqsave(&info->lock, flags);
 
-	asid = atomic64_read(&mm->context.id);
+	asid = atomic64_read(pasid);
 
-	if (refcount_inc_not_zero(&mm->context.pinned))
+	if (refcount_inc_not_zero(pinned))
 		goto out_unlock;
 
 	if (info->nr_pinned_asids >= info->max_pinned_asids) {
@@ -333,45 +334,61 @@ unsigned long arm64_mm_context_get(struct mm_struct *mm)
 		 * We went through one or more rollover since that ASID was
 		 * used. Ensure that it is still valid, or generate a new one.
 		 */
-		asid = new_context(info, &mm->context.id, &mm->context.pinned);
-		atomic64_set(&mm->context.id, asid);
+		asid = new_context(info, pasid, pinned);
+		atomic64_set(pasid, asid);
 	}
 
 	info->nr_pinned_asids++;
 	__set_bit(asid2idx(info, asid), info->pinned_map);
-	refcount_set(&mm->context.pinned, 1);
+	refcount_set(pinned, 1);
 
 out_unlock:
 	raw_spin_unlock_irqrestore(&info->lock, flags);
-
 	asid &= ~ASID_MASK(info);
-
-	/* Set the equivalent of USER_ASID_BIT */
-	if (asid && arm64_kernel_unmapped_at_el0())
-		asid |= 1;
-
 	return asid;
 }
-EXPORT_SYMBOL_GPL(arm64_mm_context_get);
 
-void arm64_mm_context_put(struct mm_struct *mm)
+static void asid_context_pinned_put(struct asid_info *info, atomic64_t *pasid,
+				    refcount_t *pinned)
 {
 	unsigned long flags;
-	struct asid_info *info = &asid_info;
-	u64 asid = atomic64_read(&mm->context.id);
+	u64 asid = atomic64_read(pasid);
 
 	if (!info->pinned_map)
 		return;
 
 	raw_spin_lock_irqsave(&info->lock, flags);
 
-	if (refcount_dec_and_test(&mm->context.pinned)) {
+	if (refcount_dec_and_test(pinned)) {
 		__clear_bit(asid2idx(info, asid), info->pinned_map);
 		info->nr_pinned_asids--;
 	}
 
 	raw_spin_unlock_irqrestore(&info->lock, flags);
 }
+
+unsigned long arm64_mm_context_get(struct mm_struct *mm)
+{
+	u64 asid;
+	struct asid_info *info = &asid_info;
+
+	asid = asid_context_pinned_get(info, &mm->context.id,
+				       &mm->context.pinned);
+
+	/* Set the equivalent of USER_ASID_BIT */
+	if (asid && arm64_kernel_unmapped_at_el0())
+		asid |= 1;
+
+	return asid;
+}
+EXPORT_SYMBOL_GPL(arm64_mm_context_get);
+
+void arm64_mm_context_put(struct mm_struct *mm)
+{
+	struct asid_info *info = &asid_info;
+
+	asid_context_pinned_put(info, &mm->context.id, &mm->context.pinned);
+}
 EXPORT_SYMBOL_GPL(arm64_mm_context_put);
 
 /* Errata workaround post TTBRx_EL1 update. */
-- 
2.17.1

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v4 11/16] arm64/mm: Introduce a callback to flush the local context
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (9 preceding siblings ...)
  2021-04-14 11:23 ` [PATCH v4 10/16] arm64/mm: Split the arm64_mm_context_get/put Shameer Kolothum
@ 2021-04-14 11:23 ` Shameer Kolothum
  2021-04-14 11:23 ` [PATCH v4 12/16] arm64/mm: Introduce a callback to set reserved bits Shameer Kolothum
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:23 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

From: Julien Grall <julien.grall@arm.com>

Flushing the local context will vary depending on the actual user
of the ASID allocator. Introduce a new callback to flush the local
context and move the local TLB flush call into it.
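
To illustrate why this needs to be a callback, here is a condensed
sketch of the two implementations this series ends up with (the mm one
below in this patch, the KVM VMID one in the last patch of the series):

    /* arm64/mm user: flush the local TLBs. */
    static void asid_flush_cpu_ctxt(void)
    {
        local_flush_tlb_all();
    }

    /* KVM VMID user: flush this CPU's guest context at EL2. */
    static void vmid_flush_cpu_ctxt(void)
    {
        kvm_call_hyp(__kvm_tlb_flush_local_all);
    }

    /* Each user registers its own callback: */
    info->flush_cpu_ctxt_cb = asid_flush_cpu_ctxt;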

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
 arch/arm64/mm/context.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 901472a57b5d..ee446f7535a3 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -32,6 +32,8 @@ static struct asid_info
 	unsigned long		*pinned_map;
 	unsigned long		max_pinned_asids;
 	unsigned long		nr_pinned_asids;
+	/* Callback to locally flush the context. */
+	void			(*flush_cpu_ctxt_cb)(void);
 } asid_info;
 
 #define active_asid(info, cpu)	 (*per_cpu_ptr((info)->active, cpu))
@@ -245,8 +247,9 @@ static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
 		atomic64_set(pasid, asid);
 	}
 
-	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending))
-		local_flush_tlb_all();
+	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending) &&
+	    info->flush_cpu_ctxt_cb)
+		info->flush_cpu_ctxt_cb();
 
 	atomic64_set(&active_asid(info, cpu), asid);
 	raw_spin_unlock_irqrestore(&info->lock, flags);
@@ -427,6 +430,11 @@ void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm)
 	post_ttbr_update_workaround();
 }
 
+static void asid_flush_cpu_ctxt(void)
+{
+	local_flush_tlb_all();
+}
+
 static int asids_update_limit(void)
 {
 	struct asid_info *info = &asid_info;
@@ -499,6 +507,7 @@ static int asids_init(void)
 
 	info->active = &active_asids;
 	info->reserved = &reserved_asids;
+	info->flush_cpu_ctxt_cb = asid_flush_cpu_ctxt;
 
 	/*
 	 * We cannot call set_reserved_asid_bits() here because CPU
-- 
2.17.1

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v4 12/16] arm64/mm: Introduce a callback to set reserved bits
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (10 preceding siblings ...)
  2021-04-14 11:23 ` [PATCH v4 11/16] arm64/mm: Introduce a callback to flush the local context Shameer Kolothum
@ 2021-04-14 11:23 ` Shameer Kolothum
  2021-04-14 11:23 ` [PATCH v4 13/16] arm64: Move the ASID allocator code in a separate file Shameer Kolothum
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:23 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

Setting the reserved ASID bits will vary depending on the actual
user of the ASID allocator, so introduce a new callback for it.
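
As with the flush callback, a condensed sketch of the two users this
series ends up with: the mm code keeps the KPTI-aware helper, while the
KVM VMID allocator (last patch) simply clears the whole bitmap:

    /* arm64/mm user (this patch): */
    info->set_reserved_bits = set_reserved_asid_bits;

    /* KVM VMID user (last patch): no reserved VMIDs to preserve. */
    static void vmid_set_reserved_bits(struct asid_info *info)
    {
        bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info));
    }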

Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
 arch/arm64/mm/context.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index ee446f7535a3..e9049d14f54a 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -34,6 +34,8 @@ static struct asid_info
 	unsigned long		nr_pinned_asids;
 	/* Callback to locally flush the context. */
 	void			(*flush_cpu_ctxt_cb)(void);
+	/* Callback to set the list of reserved ASIDs */
+	void			(*set_reserved_bits)(struct asid_info *info);
 } asid_info;
 
 #define active_asid(info, cpu)	 (*per_cpu_ptr((info)->active, cpu))
@@ -118,7 +120,8 @@ static void flush_context(struct asid_info *info)
 	u64 asid;
 
 	/* Update the list of reserved ASIDs and the ASID bitmap. */
-	set_reserved_asid_bits(info);
+	if (info->set_reserved_bits)
+		info->set_reserved_bits(info);
 
 	for_each_possible_cpu(i) {
 		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
@@ -508,6 +511,7 @@ static int asids_init(void)
 	info->active = &active_asids;
 	info->reserved = &reserved_asids;
 	info->flush_cpu_ctxt_cb = asid_flush_cpu_ctxt;
+	info->set_reserved_bits = set_reserved_asid_bits;
 
 	/*
 	 * We cannot call set_reserved_asid_bits() here because CPU
-- 
2.17.1

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v4 13/16] arm64: Move the ASID allocator code in a separate file
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (11 preceding siblings ...)
  2021-04-14 11:23 ` [PATCH v4 12/16] arm64/mm: Introduce a callback to set reserved bits Shameer Kolothum
@ 2021-04-14 11:23 ` Shameer Kolothum
  2021-04-14 11:23 ` [PATCH v4 14/16] arm64/lib: Add an helper to free memory allocated by the ASID allocator Shameer Kolothum
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:23 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

From: Julien Grall <julien.grall@arm.com>

We will want to re-use the ASID allocator in a separate context (e.g.
allocating VMIDs), so move the code to a new file.

The function asid_check_context has been moved to the header as a static
inline function because we want to avoid adding a branch when checking
whether the ASID is still valid.
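
For reference, a minimal usage sketch of the resulting library for a
hypothetical new user. Everything prefixed with "my_", as well as the
ctx object and the 8-bit ID space below, is made up for illustration:

    #include <asm/lib_asid.h>
    #include <asm/tlbflush.h>

    static DEFINE_PER_CPU(atomic64_t, my_active_ids);
    static DEFINE_PER_CPU(u64, my_reserved_ids);
    static struct asid_info my_info;

    static void my_flush_cpu_ctxt(void)
    {
        local_flush_tlb_all();
    }

    static int my_allocator_init(void)
    {
        int err = asid_allocator_init(&my_info, 8, false);

        if (err)
            return err;

        my_info.active = &my_active_ids;
        my_info.reserved = &my_reserved_ids;
        my_info.flush_cpu_ctxt_cb = my_flush_cpu_ctxt;
        return 0;
    }

    /* On every switch, with preemption disabled: */
    asid_check_context(&my_info, &ctx->id, NULL);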

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
 arch/arm64/include/asm/lib_asid.h |  85 ++++++++
 arch/arm64/lib/Makefile           |   2 +
 arch/arm64/lib/asid.c             | 258 +++++++++++++++++++++++++
 arch/arm64/mm/context.c           | 310 +-----------------------------
 4 files changed, 347 insertions(+), 308 deletions(-)
 create mode 100644 arch/arm64/include/asm/lib_asid.h
 create mode 100644 arch/arm64/lib/asid.c

diff --git a/arch/arm64/include/asm/lib_asid.h b/arch/arm64/include/asm/lib_asid.h
new file mode 100644
index 000000000000..acae8d243d17
--- /dev/null
+++ b/arch/arm64/include/asm/lib_asid.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ASM_LIB_ASID_H
+#define __ASM_ASM_LIB_ASID_H
+
+#include <linux/atomic.h>
+#include <linux/compiler.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/spinlock.h>
+
+struct asid_info {
+	atomic64_t	generation;
+	unsigned long	*map;
+	unsigned int	map_idx;
+	atomic64_t __percpu	*active;
+	u64 __percpu		*reserved;
+	u32			bits;
+	raw_spinlock_t		lock;
+	/* Which CPU requires context flush on next call */
+	cpumask_t		flush_pending;
+	/* Pinned ASIDs info */
+	unsigned long		*pinned_map;
+	unsigned long		max_pinned_asids;
+	unsigned long		nr_pinned_asids;
+	/* Callback to locally flush the context. */
+	void			(*flush_cpu_ctxt_cb)(void);
+	/* Callback to set the list of reserved ASIDs */
+	void			(*set_reserved_bits)(struct asid_info *info);
+};
+
+#define NUM_CTXT_ASIDS(info)		(1UL << ((info)->bits))
+
+#define active_asid(info, cpu)	 (*per_cpu_ptr((info)->active, cpu))
+#define asid_gen_match(asid, info) \
+	(!(((asid) ^ atomic64_read(&(info)->generation)) >> info->bits))
+
+void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+		      refcount_t *pinned, unsigned int cpu);
+
+/*
+ * Check the ASID is still valid for the context. If not generate a new ASID.
+ *
+ * @pasid: Pointer to the current ASID batch
+ * @pinned: refcount if asid is pinned
+ */
+static inline void asid_check_context(struct asid_info *info, atomic64_t *pasid,
+				      refcount_t *pinned)
+{
+	unsigned int cpu;
+	u64 asid, old_active_asid;
+
+	asid = atomic64_read(pasid);
+
+	/*
+	 * The memory ordering here is subtle.
+	 * If our active_asid is non-zero and the ASID matches the current
+	 * generation, then we update the active_asid entry with a relaxed
+	 * cmpxchg. Racing with a concurrent rollover means that either:
+	 *
+	 * - We get a zero back from the cmpxchg and end up waiting on the
+	 *   lock. Taking the lock synchronises with the rollover and so
+	 *   we are forced to see the updated generation.
+	 *
+	 * - We get a valid ASID back from the cmpxchg, which means the
+	 *   relaxed xchg in flush_context will treat us as reserved
+	 *   because atomic RmWs are totally ordered for a given location.
+	 */
+	old_active_asid = atomic64_read(this_cpu_ptr(info->active));
+	if (old_active_asid && asid_gen_match(asid, info) &&
+	    atomic64_cmpxchg_relaxed(this_cpu_ptr(info->active),
+				     old_active_asid, asid))
+		return;
+
+	cpu = smp_processor_id();
+	asid_new_context(info, pasid, pinned, cpu);
+}
+
+unsigned long asid_context_pinned_get(struct asid_info *info,
+				      atomic64_t *pasid,
+				      refcount_t *pinned);
+void asid_context_pinned_put(struct asid_info *info, atomic64_t *pasid,
+			     refcount_t *pinned);
+int asid_allocator_init(struct asid_info *info, u32 bits, bool pinned);
+
+#endif
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index d31e1169d9b8..d42c66ce0460 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -5,6 +5,8 @@ lib-y		:= clear_user.o delay.o copy_from_user.o		\
 		   memset.o memcmp.o strcmp.o strncmp.o strlen.o	\
 		   strnlen.o strchr.o strrchr.o tishift.o
 
+lib-y		+= asid.o
+
 ifeq ($(CONFIG_KERNEL_MODE_NEON), y)
 obj-$(CONFIG_XOR_BLOCKS)	+= xor-neon.o
 CFLAGS_REMOVE_xor-neon.o	+= -mgeneral-regs-only
diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c
new file mode 100644
index 000000000000..286285616f65
--- /dev/null
+++ b/arch/arm64/lib/asid.c
@@ -0,0 +1,258 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Generic ASID allocator.
+ *
+ * Based on arch/arm/mm/context.c
+ *
+ * Copyright (C) 2002-2003 Deep Blue Solutions Ltd, all rights reserved.
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#include <linux/slab.h>
+
+#include <asm/lib_asid.h>
+
+#define reserved_asid(info, cpu) (*per_cpu_ptr((info)->reserved, cpu))
+
+#define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
+#define ASID_FIRST_VERSION(info)        NUM_CTXT_ASIDS(info)
+
+#define asid2idx(info, asid)		((asid) & ~ASID_MASK(info))
+#define idx2asid(info, idx)		asid2idx(info, idx)
+
+static void flush_context(struct asid_info *info)
+{
+	int i;
+	u64 asid;
+
+	/* Update the list of reserved ASIDs and the ASID bitmap. */
+	if (info->set_reserved_bits)
+		info->set_reserved_bits(info);
+
+	for_each_possible_cpu(i) {
+		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
+		/*
+		 * If this CPU has already been through a
+		 * rollover, but hasn't run another task in
+		 * the meantime, we must preserve its reserved
+		 * ASID, as this is the only trace we have of
+		 * the process it is still running.
+		 */
+		if (asid == 0)
+			asid = reserved_asid(info, i);
+		__set_bit(asid2idx(info, asid), info->map);
+		reserved_asid(info, i) = asid;
+	}
+
+	/*
+	 * Queue a TLB invalidation for each CPU to perform on next
+	 * context-switch
+	 */
+	cpumask_setall(&info->flush_pending);
+}
+
+static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
+				       u64 newasid)
+{
+	int cpu;
+	bool hit = false;
+
+	/*
+	 * Iterate over the set of reserved ASIDs looking for a match.
+	 * If we find one, then we can update our mm to use newasid
+	 * (i.e. the same ASID in the current generation) but we can't
+	 * exit the loop early, since we need to ensure that all copies
+	 * of the old ASID are updated to reflect the mm. Failure to do
+	 * so could result in us missing the reserved ASID in a future
+	 * generation.
+	 */
+	for_each_possible_cpu(cpu) {
+		if (reserved_asid(info, cpu) == asid) {
+			hit = true;
+			reserved_asid(info, cpu) = newasid;
+		}
+	}
+
+	return hit;
+}
+
+static u64 new_context(struct asid_info *info, atomic64_t *pasid,
+		       refcount_t *pinned)
+{
+	u64 asid = atomic64_read(pasid);
+	u64 generation = atomic64_read(&info->generation);
+
+	if (asid != 0) {
+		u64 newasid = generation | (asid & ~ASID_MASK(info));
+
+		/*
+		 * If our current ASID was active during a rollover, we
+		 * can continue to use it and this was just a false alarm.
+		 */
+		if (check_update_reserved_asid(info, asid, newasid))
+			return newasid;
+
+		/*
+		 * If it is pinned, we can keep using it. Note that reserved
+		 * takes priority, because even if it is also pinned, we need to
+		 * update the generation into the reserved_asids.
+		 */
+		if (pinned && refcount_read(pinned))
+			return newasid;
+
+		/*
+		 * We had a valid ASID in a previous life, so try to re-use
+		 * it if possible.
+		 */
+		if (!__test_and_set_bit(asid2idx(info, asid), info->map))
+			return newasid;
+	}
+
+	/*
+	 * Allocate a free ASID. If we can't find one, take a note of the
+	 * currently active ASIDs and mark the TLBs as requiring flushes.  We
+	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
+	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
+	 * pairs.
+	 */
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), info->map_idx);
+	if (asid != NUM_CTXT_ASIDS(info))
+		goto set_asid;
+
+	/* We're out of ASIDs, so increment the global generation count */
+	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info),
+						 &info->generation);
+	flush_context(info);
+
+	/* We have more ASIDs than CPUs, so this will always succeed */
+	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
+
+set_asid:
+	__set_bit(asid, info->map);
+	info->map_idx = asid;
+	return idx2asid(info, asid) | generation;
+}
+
+/*
+ * Generate a new ASID for the context.
+ *
+ * @pasid: Pointer to the current ASID batch allocated. It will be updated
+ * with the new ASID batch.
+ * @pinned: refcount if asid is pinned
+ * @cpu: current CPU ID. Must have been acquired through get_cpu()
+ */
+void asid_new_context(struct asid_info *info, atomic64_t *pasid,
+		      refcount_t *pinned, unsigned int cpu)
+{
+	unsigned long flags;
+	u64 asid;
+
+	raw_spin_lock_irqsave(&info->lock, flags);
+	/* Check that our ASID belongs to the current generation. */
+	asid = atomic64_read(pasid);
+	if (!asid_gen_match(asid, info)) {
+		asid = new_context(info, pasid, pinned);
+		atomic64_set(pasid, asid);
+	}
+
+	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending) &&
+	    info->flush_cpu_ctxt_cb)
+		info->flush_cpu_ctxt_cb();
+
+	atomic64_set(&active_asid(info, cpu), asid);
+	raw_spin_unlock_irqrestore(&info->lock, flags);
+}
+
+unsigned long asid_context_pinned_get(struct asid_info *info,
+				      atomic64_t *pasid,
+				      refcount_t *pinned)
+{
+	unsigned long flags;
+	u64 asid;
+
+	if (!info->pinned_map)
+		return 0;
+
+	raw_spin_lock_irqsave(&info->lock, flags);
+
+	asid = atomic64_read(pasid);
+
+	if (refcount_inc_not_zero(pinned))
+		goto out_unlock;
+
+	if (info->nr_pinned_asids >= info->max_pinned_asids) {
+		asid = 0;
+		goto out_unlock;
+	}
+
+	if (!asid_gen_match(asid, info)) {
+		/*
+		 * We went through one or more rollover since that ASID was
+		 * used. Ensure that it is still valid, or generate a new one.
+		 */
+		asid = new_context(info, pasid, pinned);
+		atomic64_set(pasid, asid);
+	}
+
+	info->nr_pinned_asids++;
+	__set_bit(asid2idx(info, asid), info->pinned_map);
+	refcount_set(pinned, 1);
+
+out_unlock:
+	raw_spin_unlock_irqrestore(&info->lock, flags);
+	asid &= ~ASID_MASK(info);
+	return asid;
+}
+
+void asid_context_pinned_put(struct asid_info *info, atomic64_t *pasid,
+			     refcount_t *pinned)
+{
+	unsigned long flags;
+	u64 asid = atomic64_read(pasid);
+
+	if (!info->pinned_map)
+		return;
+
+	raw_spin_lock_irqsave(&info->lock, flags);
+
+	if (refcount_dec_and_test(pinned)) {
+		__clear_bit(asid2idx(info, asid), info->pinned_map);
+		info->nr_pinned_asids--;
+	}
+
+	raw_spin_unlock_irqrestore(&info->lock, flags);
+}
+
+/*
+ * Initialize the ASID allocator
+ *
+ * @info: Pointer to the asid allocator structure
+ * @bits: Number of ASIDs available
+ * @pinned: Support for Pinned ASIDs
+ */
+int asid_allocator_init(struct asid_info *info, u32 bits, bool pinned)
+{
+	info->bits = bits;
+
+	/*
+	 * Expect allocation after rollover to fail if we don't have at least
+	 * one more ASID than CPUs. ASID #0 is always reserved.
+	 */
+	WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
+	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
+	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
+			    sizeof(*info->map), GFP_KERNEL);
+	if (!info->map)
+		return -ENOMEM;
+
+	info->map_idx = 1;
+	raw_spin_lock_init(&info->lock);
+
+	if (pinned) {
+		info->pinned_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
+					   sizeof(*info->pinned_map), GFP_KERNEL);
+		info->nr_pinned_asids = 0;
+	}
+
+	return 0;
+}
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index e9049d14f54a..f44e08981841 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -13,43 +13,15 @@
 #include <linux/mm.h>
 
 #include <asm/cpufeature.h>
+#include <asm/lib_asid.h>
 #include <asm/mmu_context.h>
 #include <asm/smp.h>
 #include <asm/tlbflush.h>
 
-static struct asid_info
-{
-	atomic64_t	generation;
-	unsigned long	*map;
-	unsigned int	map_idx;
-	atomic64_t __percpu	*active;
-	u64 __percpu		*reserved;
-	u32			bits;
-	raw_spinlock_t		lock;
-	/* Which CPU requires context flush on next call */
-	cpumask_t		flush_pending;
-	/* Pinned ASIDs info */
-	unsigned long		*pinned_map;
-	unsigned long		max_pinned_asids;
-	unsigned long		nr_pinned_asids;
-	/* Callback to locally flush the context. */
-	void			(*flush_cpu_ctxt_cb)(void);
-	/* Callback to set the list of reserved ASIDs */
-	void			(*set_reserved_bits)(struct asid_info *info);
-} asid_info;
-
-#define active_asid(info, cpu)	 (*per_cpu_ptr((info)->active, cpu))
-#define reserved_asid(info, cpu) (*per_cpu_ptr((info)->reserved, cpu))
-
 static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 
-#define ASID_MASK(info)			(~GENMASK((info)->bits - 1, 0))
-#define NUM_CTXT_ASIDS(info)		(1UL << ((info)->bits))
-#define ASID_FIRST_VERSION(info)        NUM_CTXT_ASIDS(info)
-
-#define asid2idx(info, asid)		((asid) & ~ASID_MASK(info))
-#define idx2asid(info, idx)		asid2idx(info, idx)
+static struct asid_info asid_info;
 
 /* Get the ASIDBits supported by the current CPU */
 static u32 get_cpu_asid_bits(void)
@@ -111,190 +83,6 @@ static void set_reserved_asid_bits(struct asid_info *info)
 		bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info));
 }
 
-#define asid_gen_match(asid, info) \
-	(!(((asid) ^ atomic64_read(&(info)->generation)) >> info->bits))
-
-static void flush_context(struct asid_info *info)
-{
-	int i;
-	u64 asid;
-
-	/* Update the list of reserved ASIDs and the ASID bitmap. */
-	if (info->set_reserved_bits)
-		info->set_reserved_bits(info);
-
-	for_each_possible_cpu(i) {
-		asid = atomic64_xchg_relaxed(&active_asid(info, i), 0);
-		/*
-		 * If this CPU has already been through a
-		 * rollover, but hasn't run another task in
-		 * the meantime, we must preserve its reserved
-		 * ASID, as this is the only trace we have of
-		 * the process it is still running.
-		 */
-		if (asid == 0)
-			asid = reserved_asid(info, i);
-		__set_bit(asid2idx(info, asid), info->map);
-		reserved_asid(info, i) = asid;
-	}
-
-	/*
-	 * Queue a TLB invalidation for each CPU to perform on next
-	 * context-switch
-	 */
-	cpumask_setall(&info->flush_pending);
-}
-
-static bool check_update_reserved_asid(struct asid_info *info, u64 asid,
-				       u64 newasid)
-{
-	int cpu;
-	bool hit = false;
-
-	/*
-	 * Iterate over the set of reserved ASIDs looking for a match.
-	 * If we find one, then we can update our mm to use newasid
-	 * (i.e. the same ASID in the current generation) but we can't
-	 * exit the loop early, since we need to ensure that all copies
-	 * of the old ASID are updated to reflect the mm. Failure to do
-	 * so could result in us missing the reserved ASID in a future
-	 * generation.
-	 */
-	for_each_possible_cpu(cpu) {
-		if (reserved_asid(info, cpu) == asid) {
-			hit = true;
-			reserved_asid(info, cpu) = newasid;
-		}
-	}
-
-	return hit;
-}
-
-static u64 new_context(struct asid_info *info, atomic64_t *pasid,
-		       refcount_t *pinned)
-{
-	u64 asid = atomic64_read(pasid);
-	u64 generation = atomic64_read(&info->generation);
-
-	if (asid != 0) {
-		u64 newasid = generation | (asid & ~ASID_MASK(info));
-
-		/*
-		 * If our current ASID was active during a rollover, we
-		 * can continue to use it and this was just a false alarm.
-		 */
-		if (check_update_reserved_asid(info, asid, newasid))
-			return newasid;
-
-		/*
-		 * If it is pinned, we can keep using it. Note that reserved
-		 * takes priority, because even if it is also pinned, we need to
-		 * update the generation into the reserved_asids.
-		 */
-		if (pinned && refcount_read(pinned))
-			return newasid;
-
-		/*
-		 * We had a valid ASID in a previous life, so try to re-use
-		 * it if possible.
-		 */
-		if (!__test_and_set_bit(asid2idx(info, asid), info->map))
-			return newasid;
-	}
-
-	/*
-	 * Allocate a free ASID. If we can't find one, take a note of the
-	 * currently active ASIDs and mark the TLBs as requiring flushes.  We
-	 * always count from ASID #2 (index 1), as we use ASID #0 when setting
-	 * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
-	 * pairs.
-	 */
-	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), info->map_idx);
-	if (asid != NUM_CTXT_ASIDS(info))
-		goto set_asid;
-
-	/* We're out of ASIDs, so increment the global generation count */
-	generation = atomic64_add_return_relaxed(ASID_FIRST_VERSION(info),
-						 &info->generation);
-	flush_context(info);
-
-	/* We have more ASIDs than CPUs, so this will always succeed */
-	asid = find_next_zero_bit(info->map, NUM_CTXT_ASIDS(info), 1);
-
-set_asid:
-	__set_bit(asid, info->map);
-	info->map_idx = asid;
-	return idx2asid(info, asid) | generation;
-}
-
-/*
- * Generate a new ASID for the context.
- *
- * @pasid: Pointer to the current ASID batch allocated. It will be updated
- * with the new ASID batch.
- * @pinned: refcount if asid is pinned.
- * Caller needs to make sure preempt is disabled before calling this function.
- */
-static void asid_new_context(struct asid_info *info, atomic64_t *pasid,
-			     refcount_t *pinned)
-{
-	unsigned long flags;
-	u64 asid;
-	unsigned int cpu = smp_processor_id();
-
-	raw_spin_lock_irqsave(&info->lock, flags);
-	/* Check that our ASID belongs to the current generation. */
-	asid = atomic64_read(pasid);
-	if (!asid_gen_match(asid, info)) {
-		asid = new_context(info, pasid, pinned);
-		atomic64_set(pasid, asid);
-	}
-
-	if (cpumask_test_and_clear_cpu(cpu, &info->flush_pending) &&
-	    info->flush_cpu_ctxt_cb)
-		info->flush_cpu_ctxt_cb();
-
-	atomic64_set(&active_asid(info, cpu), asid);
-	raw_spin_unlock_irqrestore(&info->lock, flags);
-}
-
-/*
- * Check the ASID is still valid for the context. If not generate a new ASID.
- *
- * @pasid: Pointer to the current ASID batch
- * @pinned: refcount if asid is pinned
- * Caller needs to make sure preempt is disabled before calling this function.
- */
-static void asid_check_context(struct asid_info *info, atomic64_t *pasid,
-			       refcount_t *pinned)
-{
-	u64 asid, old_active_asid;
-
-	asid = atomic64_read(pasid);
-
-	/*
-	 * The memory ordering here is subtle.
-	 * If our active_asid is non-zero and the ASID matches the current
-	 * generation, then we update the active_asid entry with a relaxed
-	 * cmpxchg. Racing with a concurrent rollover means that either:
-	 *
-	 * - We get a zero back from the cmpxchg and end up waiting on the
-	 *   lock. Taking the lock synchronises with the rollover and so
-	 *   we are forced to see the updated generation.
-	 *
-	 * - We get a valid ASID back from the cmpxchg, which means the
-	 *   relaxed xchg in flush_context will treat us as reserved
-	 *   because atomic RmWs are totally ordered for a given location.
-	 */
-	old_active_asid = atomic64_read(this_cpu_ptr(info->active));
-	if (old_active_asid && asid_gen_match(asid, info) &&
-	    atomic64_cmpxchg_relaxed(this_cpu_ptr(info->active),
-				     old_active_asid, asid))
-		return;
-
-	asid_new_context(info, pasid, pinned);
-}
-
 void check_and_switch_context(struct mm_struct *mm)
 {
 	if (system_supports_cnp())
@@ -313,66 +101,6 @@ void check_and_switch_context(struct mm_struct *mm)
 		cpu_switch_mm(mm->pgd, mm);
 }
 
-static unsigned long asid_context_pinned_get(struct asid_info *info,
-					     atomic64_t *pasid,
-					     refcount_t *pinned)
-{
-	unsigned long flags;
-	u64 asid;
-
-	if (!info->pinned_map)
-		return 0;
-
-	raw_spin_lock_irqsave(&info->lock, flags);
-
-	asid = atomic64_read(pasid);
-
-	if (refcount_inc_not_zero(pinned))
-		goto out_unlock;
-
-	if (info->nr_pinned_asids >= info->max_pinned_asids) {
-		asid = 0;
-		goto out_unlock;
-	}
-
-	if (!asid_gen_match(asid, info)) {
-		/*
-		 * We went through one or more rollover since that ASID was
-		 * used. Ensure that it is still valid, or generate a new one.
-		 */
-		asid = new_context(info, pasid, pinned);
-		atomic64_set(pasid, asid);
-	}
-
-	info->nr_pinned_asids++;
-	__set_bit(asid2idx(info, asid), info->pinned_map);
-	refcount_set(pinned, 1);
-
-out_unlock:
-	raw_spin_unlock_irqrestore(&info->lock, flags);
-	asid &= ~ASID_MASK(info);
-	return asid;
-}
-
-static void asid_context_pinned_put(struct asid_info *info, atomic64_t *pasid,
-				    refcount_t *pinned)
-{
-	unsigned long flags;
-	u64 asid = atomic64_read(pasid);
-
-	if (!info->pinned_map)
-		return;
-
-	raw_spin_lock_irqsave(&info->lock, flags);
-
-	if (refcount_dec_and_test(pinned)) {
-		__clear_bit(asid2idx(info, asid), info->pinned_map);
-		info->nr_pinned_asids--;
-	}
-
-	raw_spin_unlock_irqrestore(&info->lock, flags);
-}
-
 unsigned long arm64_mm_context_get(struct mm_struct *mm)
 {
 	u64 asid;
@@ -466,40 +194,6 @@ static int asids_update_limit(void)
 }
 arch_initcall(asids_update_limit);
 
-/*
- * Initialize the ASID allocator
- *
- * @info: Pointer to the asid allocator structure
- * @bits: Number of ASIDs available
- * @pinned: Support for Pinned ASIDs
- */
-static int asid_allocator_init(struct asid_info *info, u32 bits, bool pinned)
-{
-	info->bits = bits;
-
-	/*
-	 * Expect allocation after rollover to fail if we don't have at least
-	 * one more ASID than CPUs. ASID #0 is always reserved.
-	 */
-	WARN_ON(NUM_CTXT_ASIDS(info) - 1 <= num_possible_cpus());
-	atomic64_set(&info->generation, ASID_FIRST_VERSION(info));
-	info->map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
-			    sizeof(*info->map), GFP_KERNEL);
-	if (!info->map)
-		return -ENOMEM;
-
-	info->map_idx = 1;
-	raw_spin_lock_init(&info->lock);
-
-	if (pinned) {
-		info->pinned_map = kcalloc(BITS_TO_LONGS(NUM_CTXT_ASIDS(info)),
-					   sizeof(*info->pinned_map), GFP_KERNEL);
-		info->nr_pinned_asids = 0;
-	}
-
-	return 0;
-}
-
 static int asids_init(void)
 {
 	struct asid_info *info = &asid_info;
-- 
2.17.1

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v4 14/16] arm64/lib: Add an helper to free memory allocated by the ASID allocator
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (12 preceding siblings ...)
  2021-04-14 11:23 ` [PATCH v4 13/16] arm64: Move the ASID allocator code in a separate file Shameer Kolothum
@ 2021-04-14 11:23 ` Shameer Kolothum
  2021-04-14 11:23 ` [PATCH v4 15/16] arch/arm64: Introduce a capability to tell whether 16-bit VMID is available Shameer Kolothum
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:23 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

From: Julien Grall <julien.grall@arm.com>

Some users of the ASID allocator (e.g. the VMID allocator) may need to
free resources if the initialization fails, so introduce a function
that frees any memory allocated by the ASID allocator.
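
A condensed sketch of the intended pairing, as wired up for the KVM
VMID allocator later in this series:

    err = asid_allocator_init(&vmid_info, kvm_get_vmid_bits(), false);
    if (err)
        return err;
    ...
    /* On a later failure, or at teardown, release the bitmaps: */
    asid_allocator_free(&vmid_info);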

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
 arch/arm64/include/asm/lib_asid.h | 2 ++
 arch/arm64/lib/asid.c             | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/arch/arm64/include/asm/lib_asid.h b/arch/arm64/include/asm/lib_asid.h
index acae8d243d17..4dbc0a3f19a6 100644
--- a/arch/arm64/include/asm/lib_asid.h
+++ b/arch/arm64/include/asm/lib_asid.h
@@ -82,4 +82,6 @@ void asid_context_pinned_put(struct asid_info *info, atomic64_t *pasid,
 			     refcount_t *pinned);
 int asid_allocator_init(struct asid_info *info, u32 bits, bool pinned);
 
+void asid_allocator_free(struct asid_info *info);
+
 #endif
diff --git a/arch/arm64/lib/asid.c b/arch/arm64/lib/asid.c
index 286285616f65..7bd031f9516a 100644
--- a/arch/arm64/lib/asid.c
+++ b/arch/arm64/lib/asid.c
@@ -256,3 +256,9 @@ int asid_allocator_init(struct asid_info *info, u32 bits, bool pinned)
 
 	return 0;
 }
+
+void asid_allocator_free(struct asid_info *info)
+{
+	kfree(info->map);
+	kfree(info->pinned_map);
+}
-- 
2.17.1

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v4 15/16] arch/arm64: Introduce a capability to tell whether 16-bit VMID is available
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (13 preceding siblings ...)
  2021-04-14 11:23 ` [PATCH v4 14/16] arm64/lib: Add an helper to free memory allocated by the ASID allocator Shameer Kolothum
@ 2021-04-14 11:23 ` Shameer Kolothum
  2021-04-14 11:23 ` [PATCH v4 16/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
  2021-04-22 16:08 ` [PATCH v4 00/16] " Will Deacon
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:23 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

From: Julien Grall <julien.grall@arm.com>

At the moment, the function kvm_get_vmid_bits() looks up the sanitized
value of ID_AA64MMFR1_EL1 and extracts the number of VMID bits
supported.

This is fine as the function is mainly used during VMID roll-over. A new
use in a follow-up patch will require the function to be called at every
context switch, so we want the function to be more efficient.

A new capability is introduced to tell whether 16-bit VMID is
available.

Signed-off-by: Julien Grall <julien.grall@arm.com>
---
 arch/arm64/include/asm/cpucaps.h | 3 ++-
 arch/arm64/include/asm/kvm_mmu.h | 4 +---
 arch/arm64/kernel/cpufeature.c   | 9 +++++++++
 3 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index c40f2490cd7b..acb92da5c254 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -67,7 +67,8 @@
 #define ARM64_HAS_LDAPR				59
 #define ARM64_KVM_PROTECTED_MODE		60
 #define ARM64_WORKAROUND_NVIDIA_CARMEL_CNP	61
+#define ARM64_HAS_16BIT_VMID			62
 
-#define ARM64_NCAPS				62
+#define ARM64_NCAPS				63
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 90873851f677..c3080966ef83 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -213,9 +213,7 @@ void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled);
 
 static inline unsigned int kvm_get_vmid_bits(void)
 {
-	int reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
-
-	return get_vmid_bits(reg);
+	return cpus_have_const_cap(ARM64_HAS_16BIT_VMID) ? 16 : 8;
 }
 
 /*
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e5281e1c8f1d..ff956fb2f712 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2203,6 +2203,15 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		.min_field_value = 1,
 	},
+	{
+		.capability = ARM64_HAS_16BIT_VMID,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.sys_reg = SYS_ID_AA64MMFR1_EL1,
+		.field_pos = ID_AA64MMFR1_VMIDBITS_SHIFT,
+		.sign = FTR_UNSIGNED,
+		.min_field_value = ID_AA64MMFR1_VMIDBITS_16,
+		.matches = has_cpuid_feature,
+	},
 	{},
 };
 
-- 
2.17.1

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v4 16/16] kvm/arm: Align the VMID allocation with the arm64 ASID one
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (14 preceding siblings ...)
  2021-04-14 11:23 ` [PATCH v4 15/16] arch/arm64: Introduce a capability to tell whether 16-bit VMID is available Shameer Kolothum
@ 2021-04-14 11:23 ` Shameer Kolothum
  2021-04-22 16:08 ` [PATCH v4 00/16] " Will Deacon
  16 siblings, 0 replies; 19+ messages in thread
From: Shameer Kolothum @ 2021-04-14 11:23 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, linux-kernel
  Cc: jean-philippe, julien, maz, linuxarm, catalin.marinas, will

From: Julien Grall <julien.grall@arm.com>

At the moment, the VMID algorithm will send an SGI to all the CPUs to
force an exit and then broadcast a full TLB flush and I-Cache
invalidation.

This patch re-uses the new ASID allocator. The benefits are:
    - CPUs are not forced to exit at roll-over. Instead the VMID will be
    marked reserved and the context will be flushed at next exit. This
    will reduce the IPI traffic.
    - Context invalidation is now per-CPU rather than broadcasted.
    - Catalin has a formal model of the ASID allocator.

With the new algorithm, the code is adapted as follows (a condensed
sketch of the resulting flow is given after this list):
    - The function __kvm_flush_vm_context() has been renamed to
    __kvm_tlb_flush_local_all() and now only flushes the current CPU
    context.
    - The call to update_vmid() is done with preemption disabled, as the
    new algorithm requires storing information per-CPU.
    - The TLBs associated with EL1 are flushed when booting a CPU, to
    deal with stale information. This was previously done on the
    allocation of the first VMID of a new generation.
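
Condensed from the diff below, the resulting one-off setup and the
per-entry path look like this:

    /* One-off setup, in init_common_resources(): */
    err = asid_allocator_init(&vmid_info, kvm_get_vmid_bits(), false);
    if (err)
        return err;
    vmid_info.active = &active_vmids;
    vmid_info.reserved = &reserved_vmids;
    vmid_info.flush_cpu_ctxt_cb = vmid_flush_cpu_ctxt;
    vmid_info.set_reserved_bits = vmid_set_reserved_bits;

    /* Per guest entry, with preemption disabled: */
    static void update_vmid(struct kvm_vmid *vmid)
    {
        asid_check_context(&vmid_info, &vmid->id, NULL);
    }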


Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
---
Test Results:

v4:
The measurement was made on a HiSilicon D06 platform with maxcpus set
to 8 and with the number of VMID limited to 4-bit. The test involves
running concurrently 40 guests with 2 vCPUs. Each guest will then
execute hackbench 5 times before exiting.

The performance difference between the current algo and the new one are
(avg. of 10 runs):
   - 1.9% less entry/exit from guest
   - 0.7% faster

v3:
The measurement was made on a Seattle based SoC (8 CPUs), with the
number of VMID limited to 4-bit. The test involves running concurrently 40
guests with 2 vCPUs. Each guest will then execute hackbench 5 times
before exiting.

The performance difference between the current algo and the new one are:
    - 2.5% less exit from the guest
    - 22.4% more flush, although they are now local rather than
    broadcasted
    - 0.11% faster (just for the record)

---
 arch/arm64/include/asm/kvm_asm.h   |   4 +-
 arch/arm64/include/asm/kvm_host.h  |   5 +-
 arch/arm64/include/asm/kvm_mmu.h   |   3 +-
 arch/arm64/kvm/arm.c               | 124 +++++++++++------------------
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |   6 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c      |  10 +--
 arch/arm64/kvm/hyp/vhe/tlb.c       |  10 +--
 arch/arm64/kvm/mmu.c               |   1 -
 8 files changed, 65 insertions(+), 98 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index a7ab84f781f7..29697c5ab2c2 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -44,7 +44,7 @@
 
 #define __KVM_HOST_SMCCC_FUNC___kvm_hyp_init			0
 #define __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run			1
-#define __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context		2
+#define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_local_all		2
 #define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_ipa		3
 #define __KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid		4
 #define __KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context		5
@@ -182,7 +182,7 @@ DECLARE_KVM_NVHE_SYM(__per_cpu_end);
 DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
 #define __bp_harden_hyp_vecs	CHOOSE_HYP_SYM(__bp_harden_hyp_vecs)
 
-extern void __kvm_flush_vm_context(void);
+extern void __kvm_tlb_flush_local_all(void);
 extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
 				     int level);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3d10e6527f7d..5309216e4a94 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -70,9 +70,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
 void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu);
 
 struct kvm_vmid {
-	/* The VMID generation used for the virt. memory system */
-	u64    vmid_gen;
-	u32    vmid;
+	atomic64_t id;
 };
 
 struct kvm_s2_mmu {
@@ -631,7 +629,6 @@ void kvm_arm_resume_guest(struct kvm *kvm);
 		ret;							\
 	})
 
-void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
 int handle_exit(struct kvm_vcpu *vcpu, int exception_index);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index c3080966ef83..43e83df87e3a 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -252,7 +252,8 @@ static __always_inline u64 kvm_get_vttbr(struct kvm_s2_mmu *mmu)
 	u64 cnp = system_supports_cnp() ? VTTBR_CNP_BIT : 0;
 
 	baddr = mmu->pgd_phys;
-	vmid_field = (u64)vmid->vmid << VTTBR_VMID_SHIFT;
+	vmid_field = atomic64_read(&vmid->id) << VTTBR_VMID_SHIFT;
+	vmid_field &= VTTBR_VMID_MASK(kvm_get_vmid_bits());
 	return kvm_phys_to_vttbr(baddr) | vmid_field | cnp;
 }
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 7f06ba76698d..c63242db2d42 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -31,6 +31,7 @@
 #include <asm/tlbflush.h>
 #include <asm/cacheflush.h>
 #include <asm/cpufeature.h>
+#include <asm/lib_asid.h>
 #include <asm/virt.h>
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
@@ -55,10 +56,10 @@ static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
 unsigned long kvm_arm_hyp_percpu_base[NR_CPUS];
 DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 
-/* The VMID used in the VTTBR */
-static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
-static u32 kvm_next_vmid;
-static DEFINE_SPINLOCK(kvm_vmid_lock);
+static DEFINE_PER_CPU(atomic64_t, active_vmids);
+static DEFINE_PER_CPU(u64, reserved_vmids);
+
+static struct asid_info vmid_info;
 
 static bool vgic_present;
 
@@ -486,85 +487,22 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
 	return vcpu_mode_priv(vcpu);
 }
 
-/* Just ensure a guest exit from a particular CPU */
-static void exit_vm_noop(void *info)
-{
-}
-
-void force_vm_exit(const cpumask_t *mask)
+static void vmid_flush_cpu_ctxt(void)
 {
-	preempt_disable();
-	smp_call_function_many(mask, exit_vm_noop, NULL, true);
-	preempt_enable();
+	kvm_call_hyp(__kvm_tlb_flush_local_all);
 }
 
-/**
- * need_new_vmid_gen - check that the VMID is still valid
- * @vmid: The VMID to check
- *
- * return true if there is a new generation of VMIDs being used
- *
- * The hardware supports a limited set of values with the value zero reserved
- * for the host, so we check if an assigned value belongs to a previous
- * generation, which requires us to assign a new value. If we're the first to
- * use a VMID for the new generation, we must flush necessary caches and TLBs
- * on all CPUs.
- */
-static bool need_new_vmid_gen(struct kvm_vmid *vmid)
+static void vmid_set_reserved_bits(struct asid_info *info)
 {
-	u64 current_vmid_gen = atomic64_read(&kvm_vmid_gen);
-	smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */
-	return unlikely(READ_ONCE(vmid->vmid_gen) != current_vmid_gen);
+	bitmap_clear(info->map, 0, NUM_CTXT_ASIDS(info));
 }
-
 /**
  * update_vmid - Update the vmid with a valid VMID for the current generation
  * @vmid: The stage-2 VMID information struct
  */
 static void update_vmid(struct kvm_vmid *vmid)
 {
-	if (!need_new_vmid_gen(vmid))
-		return;
-
-	spin_lock(&kvm_vmid_lock);
-
-	/*
-	 * We need to re-check the vmid_gen here to ensure that if another vcpu
-	 * already allocated a valid vmid for this vm, then this vcpu should
-	 * use the same vmid.
-	 */
-	if (!need_new_vmid_gen(vmid)) {
-		spin_unlock(&kvm_vmid_lock);
-		return;
-	}
-
-	/* First user of a new VMID generation? */
-	if (unlikely(kvm_next_vmid == 0)) {
-		atomic64_inc(&kvm_vmid_gen);
-		kvm_next_vmid = 1;
-
-		/*
-		 * On SMP we know no other CPUs can use this CPU's or each
-		 * other's VMID after force_vm_exit returns since the
-		 * kvm_vmid_lock blocks them from reentry to the guest.
-		 */
-		force_vm_exit(cpu_all_mask);
-		/*
-		 * Now broadcast TLB + ICACHE invalidation over the inner
-		 * shareable domain to make sure all data structures are
-		 * clean.
-		 */
-		kvm_call_hyp(__kvm_flush_vm_context);
-	}
-
-	vmid->vmid = kvm_next_vmid;
-	kvm_next_vmid++;
-	kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;
-
-	smp_wmb();
-	WRITE_ONCE(vmid->vmid_gen, atomic64_read(&kvm_vmid_gen));
-
-	spin_unlock(&kvm_vmid_lock);
+	asid_check_context(&vmid_info, &vmid->id, NULL);
 }
 
 static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
@@ -728,8 +666,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		 */
 		cond_resched();
 
-		update_vmid(&vcpu->arch.hw_mmu->vmid);
-
 		check_vcpu_requests(vcpu);
 
 		/*
@@ -739,6 +675,15 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		 */
 		preempt_disable();
 
+		/*
+		 * The ASID/VMID allocator only tracks active VMIDs per
+		 * physical CPU, and therefore the VMID allocated may not be
+		 * preserved on VMID roll-over if the task was preempted,
+		 * making a thread's VMID inactive. So we need to call
+		 * update_vmid() in a non-preemptible context.
+		 */
+		update_vmid(&vcpu->arch.hw_mmu->vmid);
+
 		kvm_pmu_flush_hwstate(vcpu);
 
 		local_irq_disable();
@@ -777,8 +722,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		 */
 		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
 
-		if (ret <= 0 || need_new_vmid_gen(&vcpu->arch.hw_mmu->vmid) ||
-		    kvm_request_pending(vcpu)) {
+		if (ret <= 0 || kvm_request_pending(vcpu)) {
 			vcpu->mode = OUTSIDE_GUEST_MODE;
 			isb(); /* Ensure work in x_flush_hwstate is committed */
 			kvm_pmu_sync_hwstate(vcpu);
@@ -1460,6 +1404,8 @@ static void cpu_hyp_reset(void)
 {
 	if (!is_kernel_in_hyp_mode())
 		__hyp_reset_vectors();
+
+	kvm_call_hyp(__kvm_tlb_flush_local_all);
 }
 
 /*
@@ -1635,9 +1581,32 @@ static bool init_psci_relay(void)
 
 static int init_common_resources(void)
 {
+	struct asid_info *info = &vmid_info;
+	int err;
+
+	/*
+	 * Initialize the ASID allocator telling it to allocate a single
+	 * VMID per VM.
+	 */
+	err = asid_allocator_init(info, kvm_get_vmid_bits(), false);
+	if (err) {
+		kvm_err("Failed to initialize VMID allocator.\n");
+		return err;
+	}
+
+	info->active = &active_vmids;
+	info->reserved = &reserved_vmids;
+	info->flush_cpu_ctxt_cb = vmid_flush_cpu_ctxt;
+	info->set_reserved_bits = vmid_set_reserved_bits;
+
 	return kvm_set_ipa_limit();
 }
 
+static void free_common_resources(void)
+{
+	asid_allocator_free(&vmid_info);
+}
+
 static int init_subsystems(void)
 {
 	int err = 0;
@@ -1918,7 +1887,7 @@ int kvm_arch_init(void *opaque)
 
 	err = kvm_arm_init_sve();
 	if (err)
-		return err;
+		goto out_err;
 
 	if (!in_hyp_mode) {
 		err = init_hyp_mode();
@@ -1952,6 +1921,7 @@ int kvm_arch_init(void *opaque)
 	if (!in_hyp_mode)
 		teardown_hyp_mode();
 out_err:
+	free_common_resources();
 	return err;
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 936328207bde..62027448d534 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -25,9 +25,9 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) =  __kvm_vcpu_run(kern_hyp_va(vcpu));
 }
 
-static void handle___kvm_flush_vm_context(struct kvm_cpu_context *host_ctxt)
+static void handle___kvm_tlb_flush_local_all(struct kvm_cpu_context *host_ctxt)
 {
-	__kvm_flush_vm_context();
+	__kvm_tlb_flush_local_all();
 }
 
 static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
@@ -112,7 +112,7 @@ typedef void (*hcall_t)(struct kvm_cpu_context *);
 
 static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__kvm_vcpu_run),
-	HANDLE_FUNC(__kvm_flush_vm_context),
+	HANDLE_FUNC(__kvm_tlb_flush_local_all),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid_ipa),
 	HANDLE_FUNC(__kvm_tlb_flush_vmid),
 	HANDLE_FUNC(__kvm_flush_cpu_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index 229b06748c20..3f1fc5125e9e 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -138,10 +138,10 @@ void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
 	__tlb_switch_to_host(&cxt);
 }
 
-void __kvm_flush_vm_context(void)
+void __kvm_tlb_flush_local_all(void)
 {
-	dsb(ishst);
-	__tlbi(alle1is);
+	dsb(nshst);
+	__tlbi(alle1);
 
 	/*
 	 * VIPT and PIPT caches are not affected by VMID, so no maintenance
@@ -153,7 +153,7 @@ void __kvm_flush_vm_context(void)
 	 *
 	 */
 	if (icache_is_vpipt())
-		asm volatile("ic ialluis");
+		asm volatile("ic iallu" : : );
 
-	dsb(ish);
+	dsb(nsh);
 }
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 66f17349f0c3..89f229e77b7d 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -142,10 +142,10 @@ void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
 	__tlb_switch_to_host(&cxt);
 }
 
-void __kvm_flush_vm_context(void)
+void __kvm_tlb_flush_local_all(void)
 {
-	dsb(ishst);
-	__tlbi(alle1is);
+	dsb(nshst);
+	__tlbi(alle1);
 
 	/*
 	 * VIPT and PIPT caches are not affected by VMID, so no maintenance
@@ -157,7 +157,7 @@ void __kvm_flush_vm_context(void)
 	 *
 	 */
 	if (icache_is_vpipt())
-		asm volatile("ic ialluis");
+		asm volatile("ic iallu" : : );
 
-	dsb(ish);
+	dsb(nsh);
 }
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 8711894db8c2..4933fc9a13fb 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -390,7 +390,6 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
 	mmu->kvm = kvm;
 	mmu->pgt = pgt;
 	mmu->pgd_phys = __pa(pgt->pgd);
-	mmu->vmid.vmid_gen = 0;
 	return 0;
 
 out_destroy_pgtable:
-- 
2.17.1

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one
  2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
                   ` (15 preceding siblings ...)
  2021-04-14 11:23 ` [PATCH v4 16/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
@ 2021-04-22 16:08 ` Will Deacon
  2021-04-23  8:31   ` Shameerali Kolothum Thodi
  16 siblings, 1 reply; 19+ messages in thread
From: Will Deacon @ 2021-04-22 16:08 UTC (permalink / raw)
  To: Shameer Kolothum
  Cc: jean-philippe, julien, maz, linux-kernel, linuxarm,
	catalin.marinas, kvmarm, linux-arm-kernel

On Wed, Apr 14, 2021 at 12:22:56PM +0100, Shameer Kolothum wrote:
> Hi,
> 
> This is an attempt to revive this series originally posted by
> Julien Grall[1]. The main motive to work on this now is because
> of the requirement to have Pinned KVM VMIDs and the RFC discussion
> for the same basically suggested[2] to have a common/better vmid
> allocator for KVM which this series provides.
>  
> Major Changes from v3:
> 
> -Changes related to Pinned ASID support.
> -Changes to take care KPTI related bits reservation.
> -Dropped support for 32 bit KVM.
> -Rebase to 5.12-rc7
> 
> Individual patches have change history for any major changes
> from v3.
> 
> Tests were performed on a HiSilicon D06 platform and so far not observed
> any regressions.
> 
> For ASID allocation,
> 
> Avg of 10 runs(hackbench -s 512 -l 200 -g 300 -f 25 -P),
> 5.12-rc7: Time:18.8119
> 5.12-rc7+v4: Time: 18.459
> 
> ~1.8% improvement.
> 
> For KVM VMID,
> 
> The measurement was made with maxcpus set to 8 and with the
> number of VMID limited to 4-bit. The test involves running
> concurrently 40 guests with 2 vCPUs. Each guest will then
> execute hackbench 5 times before exiting.
> 
> The performance difference between the current algo and the
> new one are(ag. of 10 runs):
>     - 1.9% less exit from the guest
>     - 0.7% faster
> 
> For complete series, please see,
>  https://github.com/hisilicon/kernel-dev/tree/private-v5.12-rc7-asid-v4
> 
> Please take a look and let me know your feedback.

Although I think aligning the two algorithms makes sense, I'm not completely
sold on the need to abstract all this into a library and whether the
additional indirection is justified.

It would be great to compare this approach with one where portions of the
code are duplicated into a separate VMID allocator. Have you tried that to
see what it looks like? Doesn't need to be a proper patch set, but comparing
the end result might help to evaluate the proposal here.

Will
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 19+ messages in thread

* RE: [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one
  2021-04-22 16:08 ` [PATCH v4 00/16] " Will Deacon
@ 2021-04-23  8:31   ` Shameerali Kolothum Thodi
  0 siblings, 0 replies; 19+ messages in thread
From: Shameerali Kolothum Thodi @ 2021-04-23  8:31 UTC (permalink / raw)
  To: Will Deacon
  Cc: jean-philippe, julien, maz, linux-kernel, Linuxarm,
	catalin.marinas, kvmarm, linux-arm-kernel



> -----Original Message-----
> From: Will Deacon [mailto:will@kernel.org]
> Sent: 22 April 2021 18:09
> To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
> Cc: linux-arm-kernel@lists.infradead.org; kvmarm@lists.cs.columbia.edu;
> linux-kernel@vger.kernel.org; maz@kernel.org; catalin.marinas@arm.com;
> james.morse@arm.com; julien.thierry.kdev@gmail.com;
> suzuki.poulose@arm.com; jean-philippe@linaro.org; julien@xen.org; Linuxarm
> <linuxarm@huawei.com>
> Subject: Re: [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the
> arm64 ASID one
> 
> On Wed, Apr 14, 2021 at 12:22:56PM +0100, Shameer Kolothum wrote:
> > Hi,
> >
> > This is an attempt to revive this series originally posted by
> > Julien Grall[1]. The main motive to work on this now is because
> > of the requirement to have Pinned KVM VMIDs and the RFC discussion
> > for the same basically suggested[2] to have a common/better vmid
> > allocator for KVM which this series provides.
> >
> > Major Changes from v3:
> >
> > -Changes related to Pinned ASID support.
> > -Changes to take care KPTI related bits reservation.
> > -Dropped support for 32 bit KVM.
> > -Rebase to 5.12-rc7
> >
> > Individual patches have change history for any major changes
> > from v3.
> >
> > Tests were performed on a HiSilicon D06 platform and so far not observed
> > any regressions.
> >
> > For ASID allocation,
> >
> > Avg of 10 runs(hackbench -s 512 -l 200 -g 300 -f 25 -P),
> > 5.12-rc7: Time:18.8119
> > 5.12-rc7+v4: Time: 18.459
> >
> > ~1.8% improvement.
> >
> > For KVM VMID,
> >
> > The measurement was made with maxcpus set to 8 and with the
> > number of VMID limited to 4-bit. The test involves running
> > concurrently 40 guests with 2 vCPUs. Each guest will then
> > execute hackbench 5 times before exiting.
> >
> > The performance difference between the current algo and the
> > new one are(ag. of 10 runs):
> >     - 1.9% less exit from the guest
> >     - 0.7% faster
> >
> > For complete series, please see,
> >  https://github.com/hisilicon/kernel-dev/tree/private-v5.12-rc7-asid-v4
> >
> > Please take a look and let me know your feedback.
> 
> Although I think aligning the two algorithms makes sense, I'm not completely
> sold on the need to abstract all this into a library and whether the
> additional indirection is justified.
> 
> It would be great to compare this approach with one where portions of the
> code are duplicated into a separate VMID allocator. Have you tried that to
> see what it looks like? Doesn't need to be a proper patch set, but comparing
> the end result might help to evaluate the proposal here.

Ok. I will give it a go and get back.

Thanks,
Shameer
	
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2021-04-23  8:31 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-14 11:22 [PATCH v4 00/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
2021-04-14 11:22 ` [PATCH v4 01/16] arm64/mm: Introduce asid_info structure and move asid_generation/asid_map to it Shameer Kolothum
2021-04-14 11:22 ` [PATCH v4 02/16] arm64/mm: Move active_asids and reserved_asids to asid_info Shameer Kolothum
2021-04-14 11:22 ` [PATCH v4 03/16] arm64/mm: Move bits " Shameer Kolothum
2021-04-14 11:23 ` [PATCH v4 04/16] arm64/mm: Move the variable lock and tlb_flush_pending " Shameer Kolothum
2021-04-14 11:23 ` [PATCH v4 05/16] arm64/mm: Remove dependency on MM in new_context Shameer Kolothum
2021-04-14 11:23 ` [PATCH v4 06/16] arm64/mm: Introduce NUM_CTXT_ASIDS Shameer Kolothum
2021-04-14 11:23 ` [PATCH v4 07/16] arm64/mm: Move Pinned ASID related variables to asid_info Shameer Kolothum
2021-04-14 11:23 ` [PATCH v4 08/16] arm64/mm: Split asid_inits in 2 parts Shameer Kolothum
2021-04-14 11:23 ` [PATCH v4 09/16] arm64/mm: Split the function check_and_switch_context in 3 parts Shameer Kolothum
2021-04-14 11:23 ` [PATCH v4 10/16] arm64/mm: Split the arm64_mm_context_get/put Shameer Kolothum
2021-04-14 11:23 ` [PATCH v4 11/16] arm64/mm: Introduce a callback to flush the local context Shameer Kolothum
2021-04-14 11:23 ` [PATCH v4 12/16] arm64/mm: Introduce a callback to set reserved bits Shameer Kolothum
2021-04-14 11:23 ` [PATCH v4 13/16] arm64: Move the ASID allocator code in a separate file Shameer Kolothum
2021-04-14 11:23 ` [PATCH v4 14/16] arm64/lib: Add an helper to free memory allocated by the ASID allocator Shameer Kolothum
2021-04-14 11:23 ` [PATCH v4 15/16] arch/arm64: Introduce a capability to tell whether 16-bit VMID is available Shameer Kolothum
2021-04-14 11:23 ` [PATCH v4 16/16] kvm/arm: Align the VMID allocation with the arm64 ASID one Shameer Kolothum
2021-04-22 16:08 ` [PATCH v4 00/16] " Will Deacon
2021-04-23  8:31   ` Shameerali Kolothum Thodi
