* [PATCH v5 0/3] Support Common Not Private translations
@ 2018-06-19 10:18 Vladimir Murzin
  2018-06-19 10:18 ` [PATCH v5 1/3] arm64: mm: " Vladimir Murzin
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Vladimir Murzin @ 2018-06-19 10:18 UTC (permalink / raw)
  To: linux-arm-kernel

Common Not Private (CNP) translations are a feature of the ARMv8.2
extension which allows translation table entries to be shared between
different PEs in the same inner shareable domain, so the hardware can
use this fact to optimise the caching of such entries in the TLB.

This patch set is an attempt to bring CNP support into Linux. It was
tested on a v8.2 Fast Model by examining traces and checking that
TTBRx_ELy and VTTBR_EL2 have the CnP bit set where appropriate.

Changelog:

	 v4 -> v5
	    - cleanups and comment/commit improvements (per James)
	    - VTTBR_CNP_BIT instead of a hard-coded value (per Catalin)
	    - rebased on 4.18-rc1

	 v3 -> v4
	    - rebased on 4.17-rc4 for real

	 v2 -> v3
	    - do not enable CNP if we are the crash kernel (per James)
	    - default to "no"
	    - rebased on 4.17-rc4

	 v1 -> v2
	    - handle cpuidle case (per James)
	    - use lm_alias with swapper_pg_dir (per James)
	    - rule out ARM64_SW_TTBR0_PAN case (per Catalin)
	    - s/BUG_ON/WARN_ON/ (per Catalin)
	    - comment and commit message updates (per Catalin)
	    - TTBR_CNP_BIT moved to asm/pgtable-hwdef.h (per Catalin)
	    - has_useable_cnp() simplified (per Julien)

	RFC -> v1
            - dropped RFC tag
            - rebased on 4.14-rc4

Thanks!

Vladimir Murzin (3):
  arm64: mm: Support Common Not Private translations
  arm64: KVM: Enable Common Not Private translations
  arm64: Introduce command line parameter to disable CNP

 Documentation/admin-guide/kernel-parameters.txt |  4 +++
 arch/arm/include/asm/kvm_arm.h                  |  1 +
 arch/arm/include/asm/kvm_mmu.h                  |  5 +++
 arch/arm64/Kconfig                              | 13 ++++++++
 arch/arm64/include/asm/cpucaps.h                |  3 +-
 arch/arm64/include/asm/cpufeature.h             |  6 ++++
 arch/arm64/include/asm/kvm_arm.h                |  1 +
 arch/arm64/include/asm/kvm_mmu.h                |  5 +++
 arch/arm64/include/asm/mmu_context.h            | 12 +++++++
 arch/arm64/include/asm/pgtable-hwdef.h          |  2 ++
 arch/arm64/kernel/cpufeature.c                  | 44 +++++++++++++++++++++++++
 arch/arm64/kernel/suspend.c                     |  4 +++
 arch/arm64/kvm/hyp-init.S                       |  3 ++
 arch/arm64/mm/context.c                         |  3 ++
 arch/arm64/mm/proc.S                            |  6 ++++
 virt/kvm/arm/arm.c                              |  4 +--
 16 files changed, 113 insertions(+), 3 deletions(-)

-- 
2.0.0

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v5 1/3] arm64: mm: Support Common Not Private translations
  2018-06-19 10:18 [PATCH v5 0/3] Support Common Not Private translations Vladimir Murzin
@ 2018-06-19 10:18 ` Vladimir Murzin
  2018-07-27 11:35   ` Catalin Marinas
  2018-07-30 16:24   ` Suzuki K Poulose
  2018-06-19 10:18 ` [PATCH v5 2/3] arm64: KVM: Enable " Vladimir Murzin
  2018-06-19 10:18 ` [PATCH v5 3/3] arm64: Introduce command line parameter to disable CNP Vladimir Murzin
  2 siblings, 2 replies; 16+ messages in thread
From: Vladimir Murzin @ 2018-06-19 10:18 UTC (permalink / raw)
  To: linux-arm-kernel

Common Not Private (CNP) is a feature of the ARMv8.2 extension which
allows translation table entries to be shared between different PEs in
the same inner shareable domain, so the hardware can use this fact to
optimise the caching of such entries in the TLB.

CNP occupies one bit in TTBRx_ELy and VTTBR_EL2 and advertises to the
hardware that the translation table entries pointed to by this TTBR
are the same as those of every PE in the same inner shareable domain
for which the equivalent TTBR also has the CNP bit set. If the CNP bit
is set but the TTBR does not point at the same translation table
entries for a given ASID and VMID, the system is mis-configured and
the results of translations are UNPREDICTABLE.
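
As an illustration of the bit usage (not from this patch; the CnP bit
at bit[0] follows the description above, and the 16-bit ASID field at
bits 63:48 is an assumption for the example), composing a TTBR value
looks roughly like:

#include <stdint.h>

#define TTBR_CNP_BIT	(1ULL << 0)

/*
 * Compose a TTBR value: translation table base address plus ASID, with
 * CnP advertising that the pointed-to tables are shared within the
 * inner shareable domain.
 */
static inline uint64_t make_ttbr(uint64_t baddr, uint64_t asid, int cnp)
{
	return baddr | (asid << 48) | (cnp ? TTBR_CNP_BIT : 0);
}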

For the kernel we postpone setting CNP until all CPUs are up and rely
on the cpufeature framework to 1) patch the code which is sensitive to
CNP and 2) update TTBR1_EL1 with the CNP bit set. TTBR1_EL1 can be
reprogrammed as a result of hibernation or cpuidle (via __enable_mmu);
for these two cases we restore the CNP bit via __cpu_suspend_exit().

There are a few cases where we need to take care of changes in TTBR0_EL1:
  - a switch to the idmap
  - software emulated PAN

We rule out the latter via Kconfig options and for the former we make
sure that CNP is set for non-zero ASIDs only.

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
---
 arch/arm64/Kconfig                     | 13 +++++++++++++
 arch/arm64/include/asm/cpucaps.h       |  3 ++-
 arch/arm64/include/asm/cpufeature.h    |  6 ++++++
 arch/arm64/include/asm/mmu_context.h   | 12 ++++++++++++
 arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
 arch/arm64/kernel/cpufeature.c         | 35 ++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/suspend.c            |  4 ++++
 arch/arm64/mm/context.c                |  3 +++
 arch/arm64/mm/proc.S                   |  6 ++++++
 9 files changed, 83 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 42c090c..70a4ad3 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1124,6 +1124,19 @@ config ARM64_RAS_EXTN
 	  and access the new registers if the system supports the extension.
 	  Platform RAS features may additionally depend on firmware support.
 
+config ARM64_CNP
+	bool "Enable support for Common Not Private (CNP) translations"
+	depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN
+	help
+	  Common Not Private (CNP) allows translation table entries to
+	  be shared between different PEs in the same inner shareable
+	  domain, so the hardware can use this fact to optimise the
+	  caching of such entries in the TLB.
+
+	  Selecting this option allows the CNP feature to be detected
+	  at runtime, and does not affect PEs that do not implement
+	  this feature.
+
 endmenu
 
 config ARM64_SVE
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 8a699c7..7219654 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -49,7 +49,8 @@
 #define ARM64_HAS_CACHE_DIC			28
 #define ARM64_HW_DBM				29
 #define ARM64_SSBD				30
+#define ARM64_HAS_CNP				31
 
-#define ARM64_NCAPS				31
+#define ARM64_NCAPS				32
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 1717ba1..b5e2440 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -508,6 +508,12 @@ static inline bool system_supports_sve(void)
 		cpus_have_const_cap(ARM64_SVE);
 }
 
+static inline bool system_supports_cnp(void)
+{
+	return IS_ENABLED(CONFIG_ARM64_CNP) &&
+		cpus_have_const_cap(ARM64_HAS_CNP);
+}
+
 #define ARM64_SSBD_UNKNOWN		-1
 #define ARM64_SSBD_FORCE_DISABLE	0
 #define ARM64_SSBD_KERNEL		1
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 39ec0b8..c506fb7 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -149,6 +149,18 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
 
 	phys_addr_t pgd_phys = virt_to_phys(pgdp);
 
+	if (system_supports_cnp() && !WARN_ON(pgdp != lm_alias(swapper_pg_dir))) {
+		/*
+		 * cpu_replace_ttbr1() is used when there's a boot CPU
+		 * up (i.e. cpufeature framework is not up yet) and
+		 * latter only when we enable CNP via cpufeature's
+		 * enable() callback.
+		 * Also we rely on the cpu_hwcap bit being set before
+		 * calling the enable() function.
+		 */
+		pgd_phys |= TTBR_CNP_BIT;
+	}
+
 	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
 
 	cpu_install_idmap();
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index fd208ea..1d7d8da 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -211,6 +211,8 @@
 #define PHYS_MASK_SHIFT		(CONFIG_ARM64_PA_BITS)
 #define PHYS_MASK		((UL(1) << PHYS_MASK_SHIFT) - 1)
 
+#define TTBR_CNP_BIT		(UL(1) << 0)
+
 /*
  * TCR flags.
  */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d2856b1..e0129f2 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -20,6 +20,7 @@
 
 #include <linux/bsearch.h>
 #include <linux/cpumask.h>
+#include <linux/crash_dump.h>
 #include <linux/sort.h>
 #include <linux/stop_machine.h>
 #include <linux/types.h>
@@ -117,6 +118,7 @@ EXPORT_SYMBOL(cpu_hwcap_keys);
 static bool __maybe_unused
 cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused);
 
+static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
 
 /*
  * NOTE: Any changes to the visibility of features should be kept in
@@ -858,6 +860,21 @@ static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
 	return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_DIC_SHIFT);
 }
 
+static bool __maybe_unused
+has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
+{
+#ifdef CONFIG_CRASH_DUMP
+	/*
+	 * Kdump isn't guaranteed to power-off all secondary CPUs, CNP
+	 * may share TLB entries with a CPU stuck in the crashed
+	 * kernel.
+	 */
+	 if (elfcorehdr_size)
+		return false;
+#endif
+	return has_cpuid_feature(entry, scope);
+}
+
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
@@ -1202,6 +1219,19 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.cpu_enable = cpu_enable_hw_dbm,
 	},
 #endif
+#ifdef CONFIG_ARM64_CNP
+	{
+		.desc = "Common not Private translations",
+		.capability = ARM64_HAS_CNP,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_useable_cnp,
+		.sys_reg = SYS_ID_AA64MMFR2_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64MMFR2_CNP_SHIFT,
+		.min_field_value = 1,
+		.cpu_enable = cpu_enable_cnp,
+	},
+#endif
 	{},
 };
 
@@ -1638,6 +1668,11 @@ cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
 	return (cpus_have_const_cap(ARM64_HAS_PAN) && !cpus_have_const_cap(ARM64_HAS_UAO));
 }
 
+static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
+{
+	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
+}
+
 /*
  * We emulate only the following system register space.
  * Op0 = 0x3, CRn = 0x0, Op1 = 0x0, CRm = [0, 4 - 7]
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index 70c2833..9405d1b 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -48,6 +48,10 @@ void notrace __cpu_suspend_exit(void)
 	 */
 	cpu_uninstall_idmap();
 
+	/* Restore CnP bit in TTBR1_EL1 */
+	if (system_supports_cnp())
+		cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
+
 	/*
 	 * PSTATE was not saved over suspend/resume, re-enable any detected
 	 * features that might not have been set correctly.
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index c127f94..a65af49 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -196,6 +196,9 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 	unsigned long flags;
 	u64 asid, old_active_asid;
 
+	if (system_supports_cnp())
+		cpu_set_reserved_ttbr0();
+
 	asid = atomic64_read(&mm->context.id);
 
 	/*
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 5f9a73a..d46d0ca 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -160,6 +160,12 @@ ENTRY(cpu_do_switch_mm)
 	mrs	x2, ttbr1_el1
 	mmid	x1, x1				// get mm->context.id
 	phys_to_ttbr x3, x0
+
+alternative_if ARM64_HAS_CNP
+	cbz     x1, 1f                          // skip CNP for reserved ASID
+	orr     x3, x3, #TTBR_CNP_BIT
+1:
+alternative_else_nop_endif
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	bfi	x3, x1, #48, #16		// set the ASID field in TTBR0
 #endif
-- 
2.0.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v5 2/3] arm64: KVM: Enable Common Not Private translations
  2018-06-19 10:18 [PATCH v5 0/3] Support Common Not Private translations Vladimir Murzin
  2018-06-19 10:18 ` [PATCH v5 1/3] arm64: mm: " Vladimir Murzin
@ 2018-06-19 10:18 ` Vladimir Murzin
  2018-07-27 11:41   ` Catalin Marinas
  2018-06-19 10:18 ` [PATCH v5 3/3] arm64: Introduce command line parameter to disable CNP Vladimir Murzin
  2 siblings, 1 reply; 16+ messages in thread
From: Vladimir Murzin @ 2018-06-19 10:18 UTC (permalink / raw)
  To: linux-arm-kernel

We rely on the cpufeature framework to detect and enable CNP, so for KVM
we need to patch hyp to set the CNP bit just before TTBR0_EL2 gets written.

For the guest we encode the CNP bit while building the vttbr, so we don't
need to bother with it in the world switch.

Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
---
 arch/arm/include/asm/kvm_arm.h   | 1 +
 arch/arm/include/asm/kvm_mmu.h   | 5 +++++
 arch/arm64/include/asm/kvm_arm.h | 1 +
 arch/arm64/include/asm/kvm_mmu.h | 5 +++++
 arch/arm64/kvm/hyp-init.S        | 3 +++
 virt/kvm/arm/arm.c               | 4 ++--
 6 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_arm.h b/arch/arm/include/asm/kvm_arm.h
index 3ab8b37..2d43dca 100644
--- a/arch/arm/include/asm/kvm_arm.h
+++ b/arch/arm/include/asm/kvm_arm.h
@@ -161,6 +161,7 @@
 #else
 #define VTTBR_X		(5 - KVM_T0SZ)
 #endif
+#define VTTBR_CNP_BIT     _AC(1, UL)
 #define VTTBR_BADDR_MASK  (((_AC(1, ULL) << (40 - VTTBR_X)) - 1) << VTTBR_X)
 #define VTTBR_VMID_SHIFT  _AC(48, ULL)
 #define VTTBR_VMID_MASK(size)	(_AT(u64, (1 << size) - 1) << VTTBR_VMID_SHIFT)
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 8553d68..98d1753 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -363,6 +363,11 @@ static inline int hyp_map_aux_data(void)
 
 #define kvm_phys_to_vttbr(addr)		(addr)
 
+static inline bool kvm_cpu_has_cnp(void)
+{
+	return false;
+}
+
 #endif	/* !__ASSEMBLY__ */
 
 #endif /* __ARM_KVM_MMU_H__ */
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 6dd285e..0b5a849 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -174,6 +174,7 @@
 #define VTCR_EL2_FLAGS			(VTCR_EL2_COMMON_BITS | VTCR_EL2_TGRAN_FLAGS)
 #define VTTBR_X				(VTTBR_X_TGRAN_MAGIC - VTCR_EL2_T0SZ_IPA)
 
+#define VTTBR_CNP_BIT     (UL(1))
 #define VTTBR_BADDR_MASK  (((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_X)
 #define VTTBR_VMID_SHIFT  (UL(48))
 #define VTTBR_VMID_MASK(size) (_AT(u64, (1 << size) - 1) << VTTBR_VMID_SHIFT)
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index fb9a712..b66b13f 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -498,5 +498,10 @@ static inline int hyp_map_aux_data(void)
 
 #define kvm_phys_to_vttbr(addr)		phys_to_ttbr(addr)
 
+static inline bool kvm_cpu_has_cnp(void)
+{
+	return system_supports_cnp();
+}
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ARM64_KVM_MMU_H__ */
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 6fd91b3..94998d5 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -64,6 +64,9 @@ __do_hyp_init:
 	b.lo	__kvm_handle_stub_hvc
 
 	phys_to_ttbr x4, x0
+alternative_if ARM64_HAS_CNP
+	orr	x4, x4, #TTBR_CNP_BIT
+alternative_else_nop_endif
 	msr	ttbr0_el2, x4
 
 	mrs	x4, tcr_el1
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 04e554c..221b127 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -490,7 +490,7 @@ static bool need_new_vmid_gen(struct kvm *kvm)
 static void update_vttbr(struct kvm *kvm)
 {
 	phys_addr_t pgd_phys;
-	u64 vmid;
+	u64 vmid, cnp = kvm_cpu_has_cnp() ? VTTBR_CNP_BIT : 0;
 	bool new_gen;
 
 	read_lock(&kvm_vmid_lock);
@@ -540,7 +540,7 @@ static void update_vttbr(struct kvm *kvm)
 	pgd_phys = virt_to_phys(kvm->arch.pgd);
 	BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
 	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
-	kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid;
+	kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid | cnp;
 
 	write_unlock(&kvm_vmid_lock);
 }
-- 
2.0.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v5 3/3] arm64: Introduce command line parameter to disable CNP
  2018-06-19 10:18 [PATCH v5 0/3] Support Common Not Private translations Vladimir Murzin
  2018-06-19 10:18 ` [PATCH v5 1/3] arm64: mm: " Vladimir Murzin
  2018-06-19 10:18 ` [PATCH v5 2/3] arm64: KVM: Enable " Vladimir Murzin
@ 2018-06-19 10:18 ` Vladimir Murzin
  2018-07-27 11:43   ` Catalin Marinas
  2 siblings, 1 reply; 16+ messages in thread
From: Vladimir Murzin @ 2018-06-19 10:18 UTC (permalink / raw)
  To: linux-arm-kernel

There are cases when activating the Common Not Private (CNP) feature
might not be desirable; this patch allows CNP to be forcefully disabled
even if it is supported by the hardware.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  4 ++++
 arch/arm64/kernel/cpufeature.c                  | 11 ++++++++++-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index efc7aa7..4abc485 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2648,6 +2648,10 @@
 
 	noclflush	[BUGS=X86] Don't use the CLFLUSH instruction
 
+	nocnp		[ARM64]
+			Disable CNP (Common not Private translations)
+			even if it is supported by processor.
+
 	nodelayacct	[KNL] Disable per-task delay accounting
 
 	nodsp		[SH] Disable hardware DSP at boot time.
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e0129f2..270fb3b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -860,6 +860,15 @@ static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
 	return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_DIC_SHIFT);
 }
 
+static bool nocnp;
+
+static int __init early_nocnp(char *p)
+{
+	nocnp = true;
+	return 0;
+}
+early_param("nocnp", early_nocnp);
+
 static bool __maybe_unused
 has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 {
@@ -872,7 +881,7 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 	 if (elfcorehdr_size)
 		return false;
 #endif
-	return has_cpuid_feature(entry, scope);
+	return has_cpuid_feature(entry, scope) && !nocnp;
 }
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
-- 
2.0.0

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v5 1/3] arm64: mm: Support Common Not Private translations
  2018-06-19 10:18 ` [PATCH v5 1/3] arm64: mm: " Vladimir Murzin
@ 2018-07-27 11:35   ` Catalin Marinas
  2018-07-30 10:08     ` Vladimir Murzin
  2018-07-30 16:24   ` Suzuki K Poulose
  1 sibling, 1 reply; 16+ messages in thread
From: Catalin Marinas @ 2018-07-27 11:35 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Vladimir,

On Tue, Jun 19, 2018 at 11:18:20AM +0100, Vladimir Murzin wrote:
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index 39ec0b8..c506fb7 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -149,6 +149,18 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
>  
>  	phys_addr_t pgd_phys = virt_to_phys(pgdp);
>  
> +	if (system_supports_cnp() && !WARN_ON(pgdp != lm_alias(swapper_pg_dir))) {
> +		/*
> +		 * cpu_replace_ttbr1() is used when there's a boot CPU
> +		 * up (i.e. cpufeature framework is not up yet) and
> +		 * latter only when we enable CNP via cpufeature's
> +		 * enable() callback.
> +		 * Also we rely on the cpu_hwcap bit being set before
> +		 * calling the enable() function.
> +		 */
> +		pgd_phys |= TTBR_CNP_BIT;
> +	}
> +
>  	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);

So the above code sets the TTBR_CNP_BIT (bit 0) in pgd_phys and calls
the idmap_cpu_replace_ttbr1() with this value. Looking at the latter, it
performs a phys_to_ttbr transformation of pgd_phys which masks out the
bottom 2 bits when CONFIG_ARM64_PA_BITS_52 is enabled. I think we need
to tweak TTBR_BADDR_MASK_52 to start from bit 0.
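
As a standalone illustration of that masking (not kernel code; the
bits [47:2] definition of TTBR_BADDR_MASK_52 and the sample address
are assumptions for the example):

#include <stdio.h>
#include <stdint.h>

#define TTBR_BADDR_MASK_52	((((uint64_t)1 << 46) - 1) << 2)
#define TTBR_CNP_BIT		((uint64_t)1 << 0)

/* mirrors the 52-bit PA flavour of phys_to_ttbr() discussed here */
static uint64_t phys_to_ttbr_52(uint64_t addr)
{
	return (addr | (addr >> 46)) & TTBR_BADDR_MASK_52;
}

int main(void)
{
	uint64_t pgd_phys = 0x81234000ULL | TTBR_CNP_BIT;
	uint64_t ttbr = phys_to_ttbr_52(pgd_phys);

	/* the final AND clears bit 0, so CnP set on pgd_phys is lost */
	printf("ttbr = %#llx, cnp = %llu\n",
	       (unsigned long long)ttbr,
	       (unsigned long long)(ttbr & TTBR_CNP_BIT));
	return 0;
}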

(cc'ing Kristina as she added this code, in case there is any issue with
extending the mask)

-- 
Catalin

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v5 2/3] arm64: KVM: Enable Common Not Private translations
  2018-06-19 10:18 ` [PATCH v5 2/3] arm64: KVM: Enable " Vladimir Murzin
@ 2018-07-27 11:41   ` Catalin Marinas
  2018-07-27 12:02     ` Marc Zyngier
  0 siblings, 1 reply; 16+ messages in thread
From: Catalin Marinas @ 2018-07-27 11:41 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jun 19, 2018 at 11:18:21AM +0100, Vladimir Murzin wrote:
> We rely on cpufeature framework to detect and enable CNP so for KVM we
> need to patch hyp to set CNP bit just before TTBR0_EL2 gets written.
> 
> For the guest we encode CNP bit while building vttbr, so we don't need
> to bother with that in a world switch.
> 
> Reviewed-by: James Morse <james.morse@arm.com>
> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

(but you should get an ack from the kvm maintainers)

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v5 3/3] arm64: Introduce command line parameter to disable CNP
  2018-06-19 10:18 ` [PATCH v5 3/3] arm64: Introduce command line parameter to disable CNP Vladimir Murzin
@ 2018-07-27 11:43   ` Catalin Marinas
  0 siblings, 0 replies; 16+ messages in thread
From: Catalin Marinas @ 2018-07-27 11:43 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jun 19, 2018 at 11:18:22AM +0100, Vladimir Murzin wrote:
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -2648,6 +2648,10 @@
>  
>  	noclflush	[BUGS=X86] Don't use the CLFLUSH instruction
>  
> +	nocnp		[ARM64]
> +			Disable CNP (Common not Private translations)
> +			even if it is supported by processor.
> +
>  	nodelayacct	[KNL] Disable per-task delay accounting
>  
>  	nodsp		[SH] Disable hardware DSP at boot time.

I'll leave the decision to Will here. As I commented on a previous
version, I'm not particularly fond of new command line options.

-- 
Catalin

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v5 2/3] arm64: KVM: Enable Common Not Private translations
  2018-07-27 11:41   ` Catalin Marinas
@ 2018-07-27 12:02     ` Marc Zyngier
  0 siblings, 0 replies; 16+ messages in thread
From: Marc Zyngier @ 2018-07-27 12:02 UTC (permalink / raw)
  To: linux-arm-kernel

On 27/07/18 12:41, Catalin Marinas wrote:
> On Tue, Jun 19, 2018 at 11:18:21AM +0100, Vladimir Murzin wrote:
>> We rely on cpufeature framework to detect and enable CNP so for KVM we
>> need to patch hyp to set CNP bit just before TTBR0_EL2 gets written.
>>
>> For the guest we encode CNP bit while building vttbr, so we don't need
>> to bother with that in a world switch.
>>
>> Reviewed-by: James Morse <james.morse@arm.com>
>> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
> 
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
> 
> (but you should get an ack from the kvm maintainers)

This is absolutely fine by me:

Acked-by: Marc Zyngier <marc.zyngier@arm.com>

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v5 1/3] arm64: mm: Support Common Not Private translations
  2018-07-27 11:35   ` Catalin Marinas
@ 2018-07-30 10:08     ` Vladimir Murzin
  2018-07-30 15:42       ` Catalin Marinas
  0 siblings, 1 reply; 16+ messages in thread
From: Vladimir Murzin @ 2018-07-30 10:08 UTC (permalink / raw)
  To: linux-arm-kernel

On 27/07/18 12:35, Catalin Marinas wrote:
> Hi Vladimir,
> 
> On Tue, Jun 19, 2018 at 11:18:20AM +0100, Vladimir Murzin wrote:
>> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
>> index 39ec0b8..c506fb7 100644
>> --- a/arch/arm64/include/asm/mmu_context.h
>> +++ b/arch/arm64/include/asm/mmu_context.h
>> @@ -149,6 +149,18 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
>>  
>>  	phys_addr_t pgd_phys = virt_to_phys(pgdp);
>>  
>> +	if (system_supports_cnp() && !WARN_ON(pgdp != lm_alias(swapper_pg_dir))) {
>> +		/*
>> +		 * cpu_replace_ttbr1() is used when there's a boot CPU
>> +		 * up (i.e. cpufeature framework is not up yet) and
>> +		 * latter only when we enable CNP via cpufeature's
>> +		 * enable() callback.
>> +		 * Also we rely on the cpu_hwcap bit being set before
>> +		 * calling the enable() function.
>> +		 */
>> +		pgd_phys |= TTBR_CNP_BIT;
>> +	}
>> +
>>  	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
> 
> So the above code sets the TTBR_CNP_BIT (bit 0) in pgd_phys and calls
> the idmap_cpu_replace_ttbr1() with this value. Looking at the latter, it
> performs a phys_to_ttbr transformation of pgd_phys which masks out the
> bottom 2 bits when CONFIG_ARM64_PA_BITS_52 is enabled. I think we need
> to tweak TTBR_BADDR_MASK_52 to start from bit 0.
> 
> (cc'ing Kristina as she added this code, in case there is any issue with
> extending the mask)
> 

Something like below?

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 0bcc98d..e0b4b2f 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -524,11 +524,10 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * 	ttbr:	returns the TTBR value
  */
 	.macro	phys_to_ttbr, ttbr, phys
-#ifdef CONFIG_ARM64_PA_BITS_52
-	orr	\ttbr, \phys, \phys, lsr #46
-	and	\ttbr, \ttbr, #TTBR_BADDR_MASK_52
-#else
 	mov	\ttbr, \phys
+#ifdef CONFIG_ARM64_PA_BITS_52
+	ubfx	\ttbr, \ttbr, #48, #4
+	orr	\ttbr, \phys, \ttbr, lsl #2
 #endif
 	.endm
 
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 1bdeca8..1b9d0e9 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -770,7 +770,7 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
 #define kc_offset_to_vaddr(o)	((o) | VA_START)
 
 #ifdef CONFIG_ARM64_PA_BITS_52
-#define phys_to_ttbr(addr)	(((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52)
+#define phys_to_ttbr(addr)	((addr) | (((addr) >> 46) & TTBR_BADDR_MASK_52))
 #else
 #define phys_to_ttbr(addr)	(addr)
 #endif


Vladimir

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v5 1/3] arm64: mm: Support Common Not Private translations
  2018-07-30 10:08     ` Vladimir Murzin
@ 2018-07-30 15:42       ` Catalin Marinas
  2018-07-30 16:29         ` Robin Murphy
  0 siblings, 1 reply; 16+ messages in thread
From: Catalin Marinas @ 2018-07-30 15:42 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jul 30, 2018 at 11:08:27AM +0100, Vladimir Murzin wrote:
> On 27/07/18 12:35, Catalin Marinas wrote:
> > On Tue, Jun 19, 2018 at 11:18:20AM +0100, Vladimir Murzin wrote:
> >> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> >> index 39ec0b8..c506fb7 100644
> >> --- a/arch/arm64/include/asm/mmu_context.h
> >> +++ b/arch/arm64/include/asm/mmu_context.h
> >> @@ -149,6 +149,18 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
> >>  
> >>  	phys_addr_t pgd_phys = virt_to_phys(pgdp);
> >>  
> >> +	if (system_supports_cnp() && !WARN_ON(pgdp != lm_alias(swapper_pg_dir))) {
> >> +		/*
> >> +		 * cpu_replace_ttbr1() is used when there's a boot CPU
> >> +		 * up (i.e. cpufeature framework is not up yet) and
> >> +		 * latter only when we enable CNP via cpufeature's
> >> +		 * enable() callback.
> >> +		 * Also we rely on the cpu_hwcap bit being set before
> >> +		 * calling the enable() function.
> >> +		 */
> >> +		pgd_phys |= TTBR_CNP_BIT;
> >> +	}
> >> +
> >>  	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
> > 
> > So the above code sets the TTBR_CNP_BIT (bit 0) in pgd_phys and calls
> > the idmap_cpu_replace_ttbr1() with this value. Looking at the latter, it
> > performs a phys_to_ttbr transformation of pgd_phys which masks out the
> > bottom 2 bits when CONFIG_ARM64_PA_BITS_52 is enabled. I think we need
> > to tweak TTBR_BADDR_MASK_52 to start from bit 0.
> 
> Something like bellow?
> 
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 0bcc98d..e0b4b2f 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -524,11 +524,10 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
>   * 	ttbr:	returns the TTBR value
>   */
>  	.macro	phys_to_ttbr, ttbr, phys
> -#ifdef CONFIG_ARM64_PA_BITS_52
> -	orr	\ttbr, \phys, \phys, lsr #46
> -	and	\ttbr, \ttbr, #TTBR_BADDR_MASK_52
> -#else
>  	mov	\ttbr, \phys
> +#ifdef CONFIG_ARM64_PA_BITS_52
> +	ubfx	\ttbr, \ttbr, #48, #4
> +	orr	\ttbr, \phys, \ttbr, lsl #2
>  #endif
>  	.endm

This would do, I don't have a better idea on how to write it. But I'd
like a comment to say that this is moving bits 51:48 of the address to
bits 5:2 of TTBR_ELx.

> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 1bdeca8..1b9d0e9 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -770,7 +770,7 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
>  #define kc_offset_to_vaddr(o)	((o) | VA_START)
>  
>  #ifdef CONFIG_ARM64_PA_BITS_52
> -#define phys_to_ttbr(addr)	(((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52)
> +#define phys_to_ttbr(addr)	((addr) | (((addr) >> 46) & TTBR_BADDR_MASK_52))

With these changes:

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

Please repost the updated patches and cc Will as he's handling the
upcoming merge window.

Thanks.

-- 
Catalin

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v5 1/3] arm64: mm: Support Common Not Private translations
  2018-06-19 10:18 ` [PATCH v5 1/3] arm64: mm: " Vladimir Murzin
  2018-07-27 11:35   ` Catalin Marinas
@ 2018-07-30 16:24   ` Suzuki K Poulose
  2018-07-31 10:18     ` Vladimir Murzin
  1 sibling, 1 reply; 16+ messages in thread
From: Suzuki K Poulose @ 2018-07-30 16:24 UTC (permalink / raw)
  To: linux-arm-kernel

On 06/19/2018 11:18 AM, Vladimir Murzin wrote:
> Common Not Private (CNP) is a feature of ARMv8.2 extension which
> allows translation table entries to be shared between different PEs in
> the same inner shareable domain, so the hardware can use this fact to
> optimise the caching of such entries in the TLB.
> 
> CNP occupies one bit in TTBRx_ELy and VTTBR_EL2, which advertises to
> the hardware that the translation table entries pointed to by this
> TTBR are the same as every PE in the same inner shareable domain for
> which the equivalent TTBR also has CNP bit set. In case CNP bit is set
> but TTBR does not point at the same translation table entries for a
> given ASID and VMID, then the system is mis-configured, so the results
> of translations are UNPREDICTABLE.
> 
> For kernel we postpone setting CNP till all cpus are up and rely on
> cpufeature framework to 1) patch the code which is sensitive to CNP
> and 2) update TTBR1_EL1 with CNP bit set. TTBR1_EL1 can be
> reprogrammed as result of hibernation or cpuidle (via __enable_mmu).
> For these two cases we restore CNP bit via __cpu_suspend_exit().
> 
> There are a few cases we need to care of changes in TTBR0_EL1:
>    - a switch to idmap
>    - software emulated PAN
> 
> we rule out latter via Kconfig options and for the former we make
> sure that CNP is set for non-zero ASIDs only.
> 
> Reviewed-by: James Morse <james.morse@arm.com>
> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
> ---
>   arch/arm64/Kconfig                     | 13 +++++++++++++
>   arch/arm64/include/asm/cpucaps.h       |  3 ++-
>   arch/arm64/include/asm/cpufeature.h    |  6 ++++++
>   arch/arm64/include/asm/mmu_context.h   | 12 ++++++++++++
>   arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
>   arch/arm64/kernel/cpufeature.c         | 35 ++++++++++++++++++++++++++++++++++
>   arch/arm64/kernel/suspend.c            |  4 ++++
>   arch/arm64/mm/context.c                |  3 +++
>   arch/arm64/mm/proc.S                   |  6 ++++++
>   9 files changed, 83 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 42c090c..70a4ad3 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1124,6 +1124,19 @@ config ARM64_RAS_EXTN
>   	  and access the new registers if the system supports the extension.
>   	  Platform RAS features may additionally depend on firmware support.
>   
> +config ARM64_CNP
> +	bool "Enable support for Common Not Private (CNP) translations"
> +	depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN
> +	help
> +	  Common Not Private (CNP) allows translation table entries to
> +	  be shared between different PEs in the same inner shareable
> +	  domain, so the hardware can use this fact to optimise the
> +	  caching of such entries in the TLB.
> +
> +	  Selecting this option allows the CNP feature to be detected
> +	  at runtime, and does not affect PEs that do not implement
> +	  this feature.
> +
>   endmenu
>   
>   config ARM64_SVE
> diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
> index 8a699c7..7219654 100644
> --- a/arch/arm64/include/asm/cpucaps.h
> +++ b/arch/arm64/include/asm/cpucaps.h
> @@ -49,7 +49,8 @@
>   #define ARM64_HAS_CACHE_DIC			28
>   #define ARM64_HW_DBM				29
>   #define ARM64_SSBD				30
> +#define ARM64_HAS_CNP				31
>   
> -#define ARM64_NCAPS				31
> +#define ARM64_NCAPS				32
>   
>   #endif /* __ASM_CPUCAPS_H */
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 1717ba1..b5e2440 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -508,6 +508,12 @@ static inline bool system_supports_sve(void)
>   		cpus_have_const_cap(ARM64_SVE);
>   }
>   
> +static inline bool system_supports_cnp(void)
> +{
> +	return IS_ENABLED(CONFIG_ARM64_CNP) &&
> +		cpus_have_const_cap(ARM64_HAS_CNP);
> +}
> +
>   #define ARM64_SSBD_UNKNOWN		-1
>   #define ARM64_SSBD_FORCE_DISABLE	0
>   #define ARM64_SSBD_KERNEL		1
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index 39ec0b8..c506fb7 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -149,6 +149,18 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
>   
>   	phys_addr_t pgd_phys = virt_to_phys(pgdp);
>   
> +	if (system_supports_cnp() && !WARN_ON(pgdp != lm_alias(swapper_pg_dir))) {
> +		/*
> +		 * cpu_replace_ttbr1() is used when there's a boot CPU
> +		 * up (i.e. cpufeature framework is not up yet) and
> +		 * latter only when we enable CNP via cpufeature's
> +		 * enable() callback.
> +		 * Also we rely on the cpu_hwcap bit being set before
> +		 * calling the enable() function.
> +		 */
> +		pgd_phys |= TTBR_CNP_BIT;
> +	}
> +
>   	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
>   
>   	cpu_install_idmap();
> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index fd208ea..1d7d8da 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -211,6 +211,8 @@
>   #define PHYS_MASK_SHIFT		(CONFIG_ARM64_PA_BITS)
>   #define PHYS_MASK		((UL(1) << PHYS_MASK_SHIFT) - 1)
>   
> +#define TTBR_CNP_BIT		(UL(1) << 0)
> +
>   /*
>    * TCR flags.
>    */
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index d2856b1..e0129f2 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -20,6 +20,7 @@
>   
>   #include <linux/bsearch.h>
>   #include <linux/cpumask.h>
> +#include <linux/crash_dump.h>
>   #include <linux/sort.h>
>   #include <linux/stop_machine.h>
>   #include <linux/types.h>
> @@ -117,6 +118,7 @@ EXPORT_SYMBOL(cpu_hwcap_keys);
>   static bool __maybe_unused
>   cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused);
>   
> +static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
>   
>   /*
>    * NOTE: Any changes to the visibility of features should be kept in
> @@ -858,6 +860,21 @@ static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
>   	return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_DIC_SHIFT);
>   }
>   
> +static bool __maybe_unused
> +has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
> +{
> +#ifdef CONFIG_CRASH_DUMP
> +	/*
> +	 * Kdump isn't guaranteed to power-off all secondary CPUs, CNP
> +	 * may share TLB entries with a CPU stuck in the crashed
> +	 * kernel.
> +	 */
> +	 if (elfcorehdr_size)
> +		return false;
> +#endif

Vladimir,

If you are respinning with the changes mentioned by Catalin, could we
use is_kdump_kernel() instead of relying on the variable?
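
Something along these lines, perhaps (untested sketch, reusing the
<linux/crash_dump.h> include this patch already adds):

static bool __maybe_unused
has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
{
	/*
	 * Kdump isn't guaranteed to power-off all secondary CPUs, CNP
	 * may share TLB entries with a CPU stuck in the crashed
	 * kernel.
	 */
	if (is_kdump_kernel())
		return false;

	return has_cpuid_feature(entry, scope);
}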

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

Suzuki

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v5 1/3] arm64: mm: Support Common Not Private translations
  2018-07-30 15:42       ` Catalin Marinas
@ 2018-07-30 16:29         ` Robin Murphy
  2018-07-30 17:03           ` Catalin Marinas
  0 siblings, 1 reply; 16+ messages in thread
From: Robin Murphy @ 2018-07-30 16:29 UTC (permalink / raw)
  To: linux-arm-kernel

On 30/07/18 16:42, Catalin Marinas wrote:
> On Mon, Jul 30, 2018 at 11:08:27AM +0100, Vladimir Murzin wrote:
>> On 27/07/18 12:35, Catalin Marinas wrote:
>>> On Tue, Jun 19, 2018 at 11:18:20AM +0100, Vladimir Murzin wrote:
>>>> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
>>>> index 39ec0b8..c506fb7 100644
>>>> --- a/arch/arm64/include/asm/mmu_context.h
>>>> +++ b/arch/arm64/include/asm/mmu_context.h
>>>> @@ -149,6 +149,18 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
>>>>   
>>>>   	phys_addr_t pgd_phys = virt_to_phys(pgdp);
>>>>   
>>>> +	if (system_supports_cnp() && !WARN_ON(pgdp != lm_alias(swapper_pg_dir))) {
>>>> +		/*
>>>> +		 * cpu_replace_ttbr1() is used when there's a boot CPU
>>>> +		 * up (i.e. cpufeature framework is not up yet) and
>>>> +		 * latter only when we enable CNP via cpufeature's
>>>> +		 * enable() callback.
>>>> +		 * Also we rely on the cpu_hwcap bit being set before
>>>> +		 * calling the enable() function.
>>>> +		 */
>>>> +		pgd_phys |= TTBR_CNP_BIT;
>>>> +	}
>>>> +
>>>>   	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
>>>
>>> So the above code sets the TTBR_CNP_BIT (bit 0) in pgd_phys and calls
>>> the idmap_cpu_replace_ttbr1() with this value. Looking at the latter, it
>>> performs a phys_to_ttbr transformation of pgd_phys which masks out the
>>> bottom 2 bits when CONFIG_ARM64_PA_BITS_52 is enabled. I think we need
>>> to tweak TTBR_BADDR_MASK_52 to start from bit 0.
>>
>> Something like bellow?
>>
>> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
>> index 0bcc98d..e0b4b2f 100644
>> --- a/arch/arm64/include/asm/assembler.h
>> +++ b/arch/arm64/include/asm/assembler.h
>> @@ -524,11 +524,10 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
>>    * 	ttbr:	returns the TTBR value
>>    */
>>   	.macro	phys_to_ttbr, ttbr, phys
>> -#ifdef CONFIG_ARM64_PA_BITS_52
>> -	orr	\ttbr, \phys, \phys, lsr #46
>> -	and	\ttbr, \ttbr, #TTBR_BADDR_MASK_52
>> -#else
>>   	mov	\ttbr, \phys
>> +#ifdef CONFIG_ARM64_PA_BITS_52
>> +	ubfx	\ttbr, \ttbr, #48, #4
>> +	orr	\ttbr, \phys, \ttbr, lsl #2
>>   #endif
>>   	.endm
> 
> This would do, I don't have a better idea on how to write it. But I'd
> like a comment to say that this is moving bits 51:48 of the address to
> bits 5:2 of TTBR_ELx.

The diff above is *copying* 51:48 into 5:2; it doesn't *move* it like 
the existing code does. That's pretty crucial, because without the BADDR 
masking operation it's going to leave the upper address bits in the 
ASID/VMID field, which looks like a recipe for disaster if the reserved 
TTBR1 happens to be at a high enough address (at least cpu_switch_mm 
might just be OK by virtue of using BFI rather than ORR for the ASID).
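
As a quick arithmetic check of that point (illustrative values only,
assuming a physical address with bits 51:48 set):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t phys = 0x000f000081234000ULL;	/* bits 51:48 = 0xf */

	/* what the proposed ubfx/orr sequence computes: bits 51:48 are
	 * copied into bits 5:2 but also left where they were */
	uint64_t ttbr = phys | (((phys >> 48) & 0xf) << 2);

	/* the leftover high bits land in the ASID/VMID field (63:48) */
	printf("ttbr[51:48] = %#llx\n",
	       (unsigned long long)((ttbr >> 48) & 0xf));
	return 0;
}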

>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index 1bdeca8..1b9d0e9 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -770,7 +770,7 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
>>   #define kc_offset_to_vaddr(o)	((o) | VA_START)
>>   
>>   #ifdef CONFIG_ARM64_PA_BITS_52
>> -#define phys_to_ttbr(addr)	(((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52)
>> +#define phys_to_ttbr(addr)	((addr) | (((addr) >> 46) & TTBR_BADDR_MASK_52))

Ditto - by the look of things, this definitely stands to corrupt the 
VMID in update_vttbr(). If we really have no choice but to smuggle the 
CnP value in something which is logically an address, then I think we'd 
need to handle it more like this:

#define TTBR_CNP (1)
#define phys_to_ttbr(addr) \
((((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52) | (addr & TTBR_CNP))

Although in patch #2 we're only applying CnP *after* the BADDR 
conversion, so is changing this one even necessary? If I've understood 
the intent correctly, all that might be needed is something like the 
below (untested, of course).

Robin.

----->8-----
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 0bcc98dbba56..5b0f8f80238f 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -527,6 +527,7 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  #ifdef CONFIG_ARM64_PA_BITS_52
  	orr	\ttbr, \phys, \phys, lsr #46
  	and	\ttbr, \ttbr, #TTBR_BADDR_MASK_52
+	bfi	\ttbr, \phys, #0, #1	// CnP
  #else
  	mov	\ttbr, \phys
  #endif

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v5 1/3] arm64: mm: Support Common Not Private translations
  2018-07-30 16:29         ` Robin Murphy
@ 2018-07-30 17:03           ` Catalin Marinas
  2018-07-31 10:17             ` Vladimir Murzin
  0 siblings, 1 reply; 16+ messages in thread
From: Catalin Marinas @ 2018-07-30 17:03 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jul 30, 2018 at 05:29:35PM +0100, Robin Murphy wrote:
> On 30/07/18 16:42, Catalin Marinas wrote:
> > On Mon, Jul 30, 2018 at 11:08:27AM +0100, Vladimir Murzin wrote:
> > > On 27/07/18 12:35, Catalin Marinas wrote:
> > > > On Tue, Jun 19, 2018 at 11:18:20AM +0100, Vladimir Murzin wrote:
> > > > > diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> > > > > index 39ec0b8..c506fb7 100644
> > > > > --- a/arch/arm64/include/asm/mmu_context.h
> > > > > +++ b/arch/arm64/include/asm/mmu_context.h
> > > > > @@ -149,6 +149,18 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
> > > > >   	phys_addr_t pgd_phys = virt_to_phys(pgdp);
> > > > > +	if (system_supports_cnp() && !WARN_ON(pgdp != lm_alias(swapper_pg_dir))) {
> > > > > +		/*
> > > > > +		 * cpu_replace_ttbr1() is used when there's a boot CPU
> > > > > +		 * up (i.e. cpufeature framework is not up yet) and
> > > > > +		 * latter only when we enable CNP via cpufeature's
> > > > > +		 * enable() callback.
> > > > > +		 * Also we rely on the cpu_hwcap bit being set before
> > > > > +		 * calling the enable() function.
> > > > > +		 */
> > > > > +		pgd_phys |= TTBR_CNP_BIT;
> > > > > +	}
> > > > > +
> > > > >   	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
> > > > 
> > > > So the above code sets the TTBR_CNP_BIT (bit 0) in pgd_phys and calls
> > > > the idmap_cpu_replace_ttbr1() with this value. Looking at the latter, it
> > > > performs a phys_to_ttbr transformation of pgd_phys which masks out the
> > > > bottom 2 bits when CONFIG_ARM64_PA_BITS_52 is enabled. I think we need
> > > > to tweak TTBR_BADDR_MASK_52 to start from bit 0.
> > > 
> > > Something like bellow?
> > > 
> > > diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> > > index 0bcc98d..e0b4b2f 100644
> > > --- a/arch/arm64/include/asm/assembler.h
> > > +++ b/arch/arm64/include/asm/assembler.h
> > > @@ -524,11 +524,10 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
> > >    * 	ttbr:	returns the TTBR value
> > >    */
> > >   	.macro	phys_to_ttbr, ttbr, phys
> > > -#ifdef CONFIG_ARM64_PA_BITS_52
> > > -	orr	\ttbr, \phys, \phys, lsr #46
> > > -	and	\ttbr, \ttbr, #TTBR_BADDR_MASK_52
> > > -#else
> > >   	mov	\ttbr, \phys
> > > +#ifdef CONFIG_ARM64_PA_BITS_52
> > > +	ubfx	\ttbr, \ttbr, #48, #4
> > > +	orr	\ttbr, \phys, \ttbr, lsl #2
> > >   #endif
> > >   	.endm
> > 
> > This would do, I don't have a better idea on how to write it. But I'd
> > like a comment to say that this is moving bits 51:48 of the address to
> > bits 5:2 of TTBR_ELx.
> 
> The diff above is *copying* 51:48 into 5:2; it doesn't *move* it like the
> existing code does. That's pretty crucial, because without the BADDR masking
> operation it's going to leave the upper address bits in the ASID/VMID field,
> which looks like a recipe for disaster if the reserved TTBR1 happens to be
> at a high enough address (at least cpu_switch_mm might just be OK by virtue
> of using BFI rather than ORR for the ASID).

Oh, very good point.

> > > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > > index 1bdeca8..1b9d0e9 100644
> > > --- a/arch/arm64/include/asm/pgtable.h
> > > +++ b/arch/arm64/include/asm/pgtable.h
> > > @@ -770,7 +770,7 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
> > >   #define kc_offset_to_vaddr(o)	((o) | VA_START)
> > >   #ifdef CONFIG_ARM64_PA_BITS_52
> > > -#define phys_to_ttbr(addr)	(((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52)
> > > +#define phys_to_ttbr(addr)	((addr) | (((addr) >> 46) & TTBR_BADDR_MASK_52))
> 
> Ditto - by the look of things, this definitely stands to corrupt the VMID in
> update_vttbr(). If we really have no choice but to smuggle the CnP value in
> something which is logically an address, then I think we'd need to handle it
> more like this:
> 
> #define TTBR_CNP (1)
> #define phys_to_ttbr(addr) \
> ((((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52) | (addr & TTBR_CNP))
> 
> Although in patch #2 we're only applying CnP *after* the BADDR conversion,
> so is changing this one even necessary? If I've understood the intent
> correctly, all that might be needed is something like the below (untested,
> of course).

Or we could keep phys_to_ttbr() as it is for both the asm and C and
apply the CNP bit afterwards (as you've already noticed in patch 2). For
cpu_replace_ttbr1(), we'd also need to change idmap_cpu_replace_ttbr1 to
actually take the TTBR1_EL1 value directly so that it doesn't have to
invoke phys_to_ttbr().

-- 
Catalin

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v5 1/3] arm64: mm: Support Common Not Private translations
  2018-07-30 17:03           ` Catalin Marinas
@ 2018-07-31 10:17             ` Vladimir Murzin
  2018-07-31 11:29               ` Catalin Marinas
  0 siblings, 1 reply; 16+ messages in thread
From: Vladimir Murzin @ 2018-07-31 10:17 UTC (permalink / raw)
  To: linux-arm-kernel

On 30/07/18 18:03, Catalin Marinas wrote:
> On Mon, Jul 30, 2018 at 05:29:35PM +0100, Robin Murphy wrote:
>> On 30/07/18 16:42, Catalin Marinas wrote:
>>> On Mon, Jul 30, 2018 at 11:08:27AM +0100, Vladimir Murzin wrote:
>>>> On 27/07/18 12:35, Catalin Marinas wrote:
>>>>> On Tue, Jun 19, 2018 at 11:18:20AM +0100, Vladimir Murzin wrote:
>>>>>> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
>>>>>> index 39ec0b8..c506fb7 100644
>>>>>> --- a/arch/arm64/include/asm/mmu_context.h
>>>>>> +++ b/arch/arm64/include/asm/mmu_context.h
>>>>>> @@ -149,6 +149,18 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
>>>>>>   	phys_addr_t pgd_phys = virt_to_phys(pgdp);
>>>>>> +	if (system_supports_cnp() && !WARN_ON(pgdp != lm_alias(swapper_pg_dir))) {
>>>>>> +		/*
>>>>>> +		 * cpu_replace_ttbr1() is used when there's a boot CPU
>>>>>> +		 * up (i.e. cpufeature framework is not up yet) and
>>>>>> +		 * latter only when we enable CNP via cpufeature's
>>>>>> +		 * enable() callback.
>>>>>> +		 * Also we rely on the cpu_hwcap bit being set before
>>>>>> +		 * calling the enable() function.
>>>>>> +		 */
>>>>>> +		pgd_phys |= TTBR_CNP_BIT;
>>>>>> +	}
>>>>>> +
>>>>>>   	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
>>>>>
>>>>> So the above code sets the TTBR_CNP_BIT (bit 0) in pgd_phys and calls
>>>>> the idmap_cpu_replace_ttbr1() with this value. Looking at the latter, it
>>>>> performs a phys_to_ttbr transformation of pgd_phys which masks out the
>>>>> bottom 2 bits when CONFIG_ARM64_PA_BITS_52 is enabled. I think we need
>>>>> to tweak TTBR_BADDR_MASK_52 to start from bit 0.
>>>>
>>>> Something like bellow?
>>>>
>>>> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
>>>> index 0bcc98d..e0b4b2f 100644
>>>> --- a/arch/arm64/include/asm/assembler.h
>>>> +++ b/arch/arm64/include/asm/assembler.h
>>>> @@ -524,11 +524,10 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
>>>>    * 	ttbr:	returns the TTBR value
>>>>    */
>>>>   	.macro	phys_to_ttbr, ttbr, phys
>>>> -#ifdef CONFIG_ARM64_PA_BITS_52
>>>> -	orr	\ttbr, \phys, \phys, lsr #46
>>>> -	and	\ttbr, \ttbr, #TTBR_BADDR_MASK_52
>>>> -#else
>>>>   	mov	\ttbr, \phys
>>>> +#ifdef CONFIG_ARM64_PA_BITS_52
>>>> +	ubfx	\ttbr, \ttbr, #48, #4
>>>> +	orr	\ttbr, \phys, \ttbr, lsl #2
>>>>   #endif
>>>>   	.endm
>>>
>>> This would do, I don't have a better idea on how to write it. But I'd
>>> like a comment to say that this is moving bits 51:48 of the address to
>>> bits 5:2 of TTBR_ELx.
>>
>> The diff above is *copying* 51:48 into 5:2; it doesn't *move* it like the
>> existing code does. That's pretty crucial, because without the BADDR masking
>> operation it's going to leave the upper address bits in the ASID/VMID field,
>> which looks like a recipe for disaster if the reserved TTBR1 happens to be
>> at a high enough address (at least cpu_switch_mm might just be OK by virtue
>> of using BFI rather than ORR for the ASID).
> 
> Oh, very good point.
> 
>>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>>> index 1bdeca8..1b9d0e9 100644
>>>> --- a/arch/arm64/include/asm/pgtable.h
>>>> +++ b/arch/arm64/include/asm/pgtable.h
>>>> @@ -770,7 +770,7 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
>>>>   #define kc_offset_to_vaddr(o)	((o) | VA_START)
>>>>   #ifdef CONFIG_ARM64_PA_BITS_52
>>>> -#define phys_to_ttbr(addr)	(((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52)
>>>> +#define phys_to_ttbr(addr)	((addr) | (((addr) >> 46) & TTBR_BADDR_MASK_52))
>>
>> Ditto - by the look of things, this definitely stands to corrupt the VMID in
>> update_vttbr(). If we really have no choice but to smuggle the CnP value in
>> something which is logically an address, then I think we'd need to handle it
>> more like this:
>>
>> #define TTBR_CNP (1)
>> #define phys_to_ttbr(addr) \
>> ((((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52) | (addr & TTBR_CNP))
>>
>> Although in patch #2 we're only applying CnP *after* the BADDR conversion,
>> so is changing this one even necessary? If I've understood the intent
>> correctly, all that might be needed is something like the below (untested,
>> of course).
> 
> Or we could keep phys_to_ttbr() as it is for both the asm and C and
> apply the CNP bit afterwards (as you've already noticed in patch 2). For
> cpu_replace_ttbr1(), we'd also need to change idmap_cpu_replace_ttbr1 to
> actually take the TTBR1_EL1 value directly so that it doesn't have to
> invoke phys_to_ttbr().
> 

Does this represent your idea correctly?

diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index c506fb7..b6ba2f0 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -147,7 +147,8 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
 	extern ttbr_replace_func idmap_cpu_replace_ttbr1;
 	ttbr_replace_func *replace_phys;
 
-	phys_addr_t pgd_phys = virt_to_phys(pgdp);
+	/* phys_to_ttbr() zeros lower 2 bits of ttbr with 52-bit PA */
+	phys_addr_t ttbr1 = phys_to_ttbr(virt_to_phys(pgdp));
 
 	if (system_supports_cnp() && !WARN_ON(pgdp != lm_alias(swapper_pg_dir))) {
 		/*
@@ -158,13 +159,13 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
 		 * Also we rely on the cpu_hwcap bit being set before
 		 * calling the enable() function.
 		 */
-		pgd_phys |= TTBR_CNP_BIT;
+		ttbr1 |= TTBR_CNP_BIT;
 	}
 
 	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
 
 	cpu_install_idmap();
-	replace_phys(pgd_phys);
+	replace_phys(ttbr1);
 	cpu_uninstall_idmap();
 }
 
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index bde3a3b..2c75b0b 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -190,7 +190,7 @@ ENDPROC(cpu_do_switch_mm)
 .endm
 
 /*
- * void idmap_cpu_replace_ttbr1(phys_addr_t new_pgd)
+ * void idmap_cpu_replace_ttbr1(phys_addr_t ttbr1)
  *
  * This is the low-level counterpart to cpu_replace_ttbr1, and should not be
  * called by anything else. It can only be executed from a TTBR0 mapping.
@@ -200,8 +200,7 @@ ENTRY(idmap_cpu_replace_ttbr1)
 
 	__idmap_cpu_set_reserved_ttbr1 x1, x3
 
-	phys_to_ttbr x3, x0
-	msr	ttbr1_el1, x3
+	msr	ttbr1_el1, x0
 	isb
 
 	restore_daif x2

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH v5 1/3] arm64: mm: Support Common Not Private translations
  2018-07-30 16:24   ` Suzuki K Poulose
@ 2018-07-31 10:18     ` Vladimir Murzin
  0 siblings, 0 replies; 16+ messages in thread
From: Vladimir Murzin @ 2018-07-31 10:18 UTC (permalink / raw)
  To: linux-arm-kernel

On 30/07/18 17:24, Suzuki K Poulose wrote:
> On 06/19/2018 11:18 AM, Vladimir Murzin wrote:
>> Common Not Private (CNP) is a feature of ARMv8.2 extension which
>> allows translation table entries to be shared between different PEs in
>> the same inner shareable domain, so the hardware can use this fact to
>> optimise the caching of such entries in the TLB.
>>
>> CNP occupies one bit in TTBRx_ELy and VTTBR_EL2, which advertises to
>> the hardware that the translation table entries pointed to by this
>> TTBR are the same as every PE in the same inner shareable domain for
>> which the equivalent TTBR also has CNP bit set. In case CNP bit is set
>> but TTBR does not point at the same translation table entries for a
>> given ASID and VMID, then the system is mis-configured, so the results
>> of translations are UNPREDICTABLE.
>>
>> For kernel we postpone setting CNP till all cpus are up and rely on
>> cpufeature framework to 1) patch the code which is sensitive to CNP
>> and 2) update TTBR1_EL1 with CNP bit set. TTBR1_EL1 can be
>> reprogrammed as result of hibernation or cpuidle (via __enable_mmu).
>> For these two cases we restore CNP bit via __cpu_suspend_exit().
>>
>> There are a few cases we need to care of changes in TTBR0_EL1:
>>    - a switch to idmap
>>    - software emulated PAN
>>
>> we rule out latter via Kconfig options and for the former we make
>> sure that CNP is set for non-zero ASIDs only.
>>
>> Reviewed-by: James Morse <james.morse@arm.com>
>> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
>> ---
>>   arch/arm64/Kconfig                     | 13 +++++++++++++
>>   arch/arm64/include/asm/cpucaps.h       |  3 ++-
>>   arch/arm64/include/asm/cpufeature.h    |  6 ++++++
>>   arch/arm64/include/asm/mmu_context.h   | 12 ++++++++++++
>>   arch/arm64/include/asm/pgtable-hwdef.h |  2 ++
>>   arch/arm64/kernel/cpufeature.c         | 35 ++++++++++++++++++++++++++++++++++
>>   arch/arm64/kernel/suspend.c            |  4 ++++
>>   arch/arm64/mm/context.c                |  3 +++
>>   arch/arm64/mm/proc.S                   |  6 ++++++
>>   9 files changed, 83 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 42c090c..70a4ad3 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -1124,6 +1124,19 @@ config ARM64_RAS_EXTN
>>   	  and access the new registers if the system supports the extension.
>>   	  Platform RAS features may additionally depend on firmware support.
>>   
>> +config ARM64_CNP
>> +	bool "Enable support for Common Not Private (CNP) translations"
>> +	depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN
>> +	help
>> +	  Common Not Private (CNP) allows translation table entries to
>> +	  be shared between different PEs in the same inner shareable
>> +	  domain, so the hardware can use this fact to optimise the
>> +	  caching of such entries in the TLB.
>> +
>> +	  Selecting this option allows the CNP feature to be detected
>> +	  at runtime, and does not affect PEs that do not implement
>> +	  this feature.
>> +
>>   endmenu
>>   
>>   config ARM64_SVE
>> diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
>> index 8a699c7..7219654 100644
>> --- a/arch/arm64/include/asm/cpucaps.h
>> +++ b/arch/arm64/include/asm/cpucaps.h
>> @@ -49,7 +49,8 @@
>>  #define ARM64_HAS_CACHE_DIC		28
>>  #define ARM64_HW_DBM			29
>>  #define ARM64_SSBD			30
>> +#define ARM64_HAS_CNP			31
>> 
>> -#define ARM64_NCAPS			31
>> +#define ARM64_NCAPS			32
>> 
>>  #endif /* __ASM_CPUCAPS_H */
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index 1717ba1..b5e2440 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -508,6 +508,12 @@ static inline bool system_supports_sve(void)
>>  		cpus_have_const_cap(ARM64_SVE);
>>  }
>> 
>> +static inline bool system_supports_cnp(void)
>> +{
>> +	return IS_ENABLED(CONFIG_ARM64_CNP) &&
>> +		cpus_have_const_cap(ARM64_HAS_CNP);
>> +}
>> +
>>  #define ARM64_SSBD_UNKNOWN		-1
>>  #define ARM64_SSBD_FORCE_DISABLE	0
>>  #define ARM64_SSBD_KERNEL		1
>> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
>> index 39ec0b8..c506fb7 100644
>> --- a/arch/arm64/include/asm/mmu_context.h
>> +++ b/arch/arm64/include/asm/mmu_context.h
>> @@ -149,6 +149,18 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
>> 
>>  	phys_addr_t pgd_phys = virt_to_phys(pgdp);
>> 
>> +	if (system_supports_cnp() && !WARN_ON(pgdp != lm_alias(swapper_pg_dir))) {
>> +		/*
>> +		 * cpu_replace_ttbr1() is used when there's a boot CPU
>> +		 * up (i.e. cpufeature framework is not up yet) and
>> +		 * latter only when we enable CNP via cpufeature's
>> +		 * enable() callback.
>> +		 * Also we rely on the cpu_hwcap bit being set before
>> +		 * calling the enable() function.
>> +		 */
>> +		pgd_phys |= TTBR_CNP_BIT;
>> +	}
>> +
>>  	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
>> 
>>  	cpu_install_idmap();
>> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
>> index fd208ea..1d7d8da 100644
>> --- a/arch/arm64/include/asm/pgtable-hwdef.h
>> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
>> @@ -211,6 +211,8 @@
>>  #define PHYS_MASK_SHIFT		(CONFIG_ARM64_PA_BITS)
>>  #define PHYS_MASK		((UL(1) << PHYS_MASK_SHIFT) - 1)
>> 
>> +#define TTBR_CNP_BIT		(UL(1) << 0)
>> +
>>  /*
>>   * TCR flags.
>>   */
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index d2856b1..e0129f2 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -20,6 +20,7 @@
>> 
>>  #include <linux/bsearch.h>
>>  #include <linux/cpumask.h>
>> +#include <linux/crash_dump.h>
>>  #include <linux/sort.h>
>>  #include <linux/stop_machine.h>
>>  #include <linux/types.h>
>> @@ -117,6 +118,7 @@ EXPORT_SYMBOL(cpu_hwcap_keys);
>>  static bool __maybe_unused
>>  cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused);
>> 
>> +static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
>> 
>>  /*
>>   * NOTE: Any changes to the visibility of features should be kept in
>> @@ -858,6 +860,21 @@ static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
>>  	return read_sanitised_ftr_reg(SYS_CTR_EL0) & BIT(CTR_DIC_SHIFT);
>>  }
>> 
>> +static bool __maybe_unused
>> +has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
>> +{
>> +#ifdef CONFIG_CRASH_DUMP
>> +	/*
>> +	 * Kdump isn't guaranteed to power-off all secondary CPUs, CNP
>> +	 * may share TLB entries with a CPU stuck in the crashed
>> +	 * kernel.
>> +	 */
>> +	if (elfcorehdr_size)
>> +		return false;
>> +#endif
> 
> Vladimir,
> 
> If you are respinning with the changes mentioned by Catalin, could we use is_kdump_kernel() instead of relying on the variable?

Sure, it looks much better! Thanks for the suggestion!

> 
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>

Cheers
Vladimir

> 
> Suzuki
> 
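
For reference, a sketch of what the respin suggested above might look like;
this is not the posted v6 patch. is_kdump_kernel() comes from
<linux/crash_dump.h>, which the patch already includes, and the final
has_cpuid_feature() fallthrough is an assumption based on how other arm64
capabilities are matched:

static bool __maybe_unused
has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
{
	/*
	 * Kdump isn't guaranteed to power-off all secondary CPUs, CNP
	 * may share TLB entries with a CPU stuck in the crashed
	 * kernel.
	 */
	if (is_kdump_kernel())
		return false;

	return has_cpuid_feature(entry, scope);
}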

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v5 1/3] arm64: mm: Support Common Not Private translations
  2018-07-31 10:17             ` Vladimir Murzin
@ 2018-07-31 11:29               ` Catalin Marinas
  0 siblings, 0 replies; 16+ messages in thread
From: Catalin Marinas @ 2018-07-31 11:29 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jul 31, 2018 at 11:17:30AM +0100, Vladimir Murzin wrote:
> Does it represent your idea correctly?
> 
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index c506fb7..b6ba2f0 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -147,7 +147,8 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
>  	extern ttbr_replace_func idmap_cpu_replace_ttbr1;
>  	ttbr_replace_func *replace_phys;
>  
> -	phys_addr_t pgd_phys = virt_to_phys(pgdp);
> +	/* phys_to_ttbr() zeros lower 2 bits of ttbr with 52-bit PA */
> +	phys_addr_t ttbr1 = phys_to_ttbr(virt_to_phys(pgdp));
>  
>  	if (system_supports_cnp() && !WARN_ON(pgdp != lm_alias(swapper_pg_dir))) {
>  		/*
> @@ -158,13 +159,13 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
>  		 * Also we rely on the cpu_hwcap bit being set before
>  		 * calling the enable() function.
>  		 */
> -		pgd_phys |= TTBR_CNP_BIT;
> +		ttbr1 |= TTBR_CNP_BIT;
>  	}
>  
>  	replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
>  
>  	cpu_install_idmap();
> -	replace_phys(pgd_phys);
> +	replace_phys(ttbr1);
>  	cpu_uninstall_idmap();
>  }
>  
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index bde3a3b..2c75b0b 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -190,7 +190,7 @@ ENDPROC(cpu_do_switch_mm)
>  .endm
>  
>  /*
> - * void idmap_cpu_replace_ttbr1(phys_addr_t new_pgd)
> + * void idmap_cpu_replace_ttbr1(phys_addr_t ttbr1)
>   *
>   * This is the low-level counterpart to cpu_replace_ttbr1, and should not be
>   * called by anything else. It can only be executed from a TTBR0 mapping.
> @@ -200,8 +200,7 @@ ENTRY(idmap_cpu_replace_ttbr1)
>  
>  	__idmap_cpu_set_reserved_ttbr1 x1, x3
>  
> -	phys_to_ttbr x3, x0
> -	msr	ttbr1_el1, x3
> +	msr	ttbr1_el1, x0
>  	isb
>  
>  	restore_daif x2

It looks fine. Thanks.

-- 
Catalin
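
For anyone following the 52-bit PA detail above: the point of moving
phys_to_ttbr() into the C caller is ordering. With CONFIG_ARM64_PA_BITS_52
the conversion masks the value down to the BADDR field, so a CnP bit ORed
in beforehand would be stripped. A rough stand-alone sketch follows; the
mask and shift approximate the kernel's 52-bit PA handling from memory and
are illustrative, not quoted:

#include <stdint.h>

#define TTBR_CNP_BIT		(1ULL << 0)
#define TTBR_BADDR_MASK_52	(((1ULL << 46) - 1) << 2)	/* TTBR bits [47:2] */

/* Approximation of phys_to_ttbr() for 52-bit PAs: PA bits [51:48] are
 * folded into TTBR bits [5:2]; everything below bit 2 is cleared,
 * including a previously set CnP bit. */
static uint64_t phys_to_ttbr_52(uint64_t phys)
{
	return (phys | (phys >> 46)) & TTBR_BADDR_MASK_52;
}

static uint64_t make_ttbr1(uint64_t pgd_phys)
{
	/* phys_to_ttbr_52(pgd_phys | TTBR_CNP_BIT) would lose the bit;
	 * convert first, then set CnP, as the diff above does. */
	return phys_to_ttbr_52(pgd_phys) | TTBR_CNP_BIT;
}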

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2018-07-31 11:29 UTC | newest]

Thread overview: 16+ messages
2018-06-19 10:18 [PATCH v5 0/3] Support Common Not Private translations Vladimir Murzin
2018-06-19 10:18 ` [PATCH v5 1/3] arm64: mm: " Vladimir Murzin
2018-07-27 11:35   ` Catalin Marinas
2018-07-30 10:08     ` Vladimir Murzin
2018-07-30 15:42       ` Catalin Marinas
2018-07-30 16:29         ` Robin Murphy
2018-07-30 17:03           ` Catalin Marinas
2018-07-31 10:17             ` Vladimir Murzin
2018-07-31 11:29               ` Catalin Marinas
2018-07-30 16:24   ` Suzuki K Poulose
2018-07-31 10:18     ` Vladimir Murzin
2018-06-19 10:18 ` [PATCH v5 2/3] arm64: KVM: Enable " Vladimir Murzin
2018-07-27 11:41   ` Catalin Marinas
2018-07-27 12:02     ` Marc Zyngier
2018-06-19 10:18 ` [PATCH v5 3/3] arm64: Introduce command line parameter to disable CNP Vladimir Murzin
2018-07-27 11:43   ` Catalin Marinas
