* [PATCH v5 0/6] arm64: Initial support for CVADP
@ 2019-04-09  9:52 ` Andrew Murray
  0 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-09  9:52 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Szabolcs Nagy, dave.martin, linux-arm-kernel, Mark Rutland,
	Phil Blundell, libc-alpha, linux-api, Suzuki K Poulose

ARMv8.5 introduces the DC CVADP instruction, which cleans the data cache to
the Point of Deep Persistence. This series makes the instruction
available to userspace and advertises the presence of this CPU feature.

At present, when CONFIG_ARM64_PMEM is enabled and the CVAP feature
(ARMv8.2) is present, the CVAP instruction is used (from memcpy_flushcache
and arch_wb_cache_pmem). No changes have been made to use CVADP in
these or similar functions.

As we have moved beyond 32 capabilities, we now begin using AT_HWCAP2
for userspace.
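
For illustration, userspace can then test for the new capability along the
following lines (a minimal sketch only; it assumes the HWCAP2_DCPODP define
added later in this series sits at bit 0 of AT_HWCAP2, and relies on
getauxval() returning 0 when AT_HWCAP2 is absent on older kernels):

#include <stdbool.h>
#include <sys/auxv.h>

#ifndef HWCAP2_DCPODP
#define HWCAP2_DCPODP	(1 << 0)	/* assumed value, per this series */
#endif

static bool cpu_has_dc_cvadp(void)
{
	/* getauxval() yields 0 for AT_HWCAP2 on kernels without this series */
	return getauxval(AT_HWCAP2) & HWCAP2_DCPODP;
}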

Tested as follows:

$ dmesg | grep "Deep"
[    0.166496] CPU features: detected: Data cache clean to Point of Deep Persistence

$ LD_SHOW_AUXV=1 sleep 2>&1 | grep AT_HWCAP
AT_HWCAP:        ef91ff87
AT_HWCAP2:       0x1

Changes since v4:

 - Rebased onto v5.1-rc4

 - Add missing .sign field to cpufeature

 - Include <linux/log2.h> where needed

Changes since v3:

 - Rebased onto v5.1-rc3

 - Add macros for mapping uapi HWCAPs to KERNEL_HWCAPs

 - Squash AT_HWCAP2 documentation into AT_HWCAP2 patch

 - Reorder series to allow HWCAP changes to be taken without
   CVADP (if needed)

 - Additional comments, changes to comments and commit
   messages

Changes since v2:

 - Rebased onto v5.1-rc2

 - Renamed cpu_{have,set}_feature_name to cpu_{have,set}_named_feature

 - Add additional comments and update kernel Documentation

Changes since v1:

 - Rebased onto v5.0-rc7

 - Introduced cpu_{have,set}_feature_name to eliminate use of
   KERNEL_HWCAP prefix

 - Hard coded MAX_CPU_FEATURES and added a WARN_ON

 - Minor comment and tab/spacing changes

 - Use elf_hwcap for all 64 caps in the kernel instead of
   a new elf_hwcap2


Andrew Murray (6):
  arm64: HWCAP: add support for AT_HWCAP2
  arm64: HWCAP: encapsulate elf_hwcap
  arm64: Handle trapped DC CVADP
  arm64: Expose DC CVADP to userspace
  arm64: add CVADP support to the cache maintenance helper
  arm64: Advertise ARM64_HAS_DCPODP cpu feature

 Documentation/arm64/elf_hwcaps.txt       |  18 +++-
 arch/arm64/crypto/aes-ce-ccm-glue.c      |   2 +-
 arch/arm64/crypto/aes-neonbs-glue.c      |   2 +-
 arch/arm64/crypto/chacha-neon-glue.c     |   2 +-
 arch/arm64/crypto/crct10dif-ce-glue.c    |   4 +-
 arch/arm64/crypto/ghash-ce-glue.c        |   8 +-
 arch/arm64/crypto/nhpoly1305-neon-glue.c |   2 +-
 arch/arm64/crypto/sha256-glue.c          |   4 +-
 arch/arm64/include/asm/assembler.h       |   4 +
 arch/arm64/include/asm/cpucaps.h         |   3 +-
 arch/arm64/include/asm/cpufeature.h      |  21 ++---
 arch/arm64/include/asm/esr.h             |   3 +-
 arch/arm64/include/asm/hwcap.h           |  54 ++++++++++-
 arch/arm64/include/uapi/asm/hwcap.h      |   7 +-
 arch/arm64/kernel/cpufeature.c           | 110 +++++++++++++++--------
 arch/arm64/kernel/cpuinfo.c              |   3 +-
 arch/arm64/kernel/fpsimd.c               |   4 +-
 arch/arm64/kernel/traps.c                |   3 +
 drivers/clocksource/arm_arch_timer.c     |   8 ++
 19 files changed, 191 insertions(+), 71 deletions(-)

-- 
2.21.0

* [PATCH v5 1/6] arm64: HWCAP: add support for AT_HWCAP2
  2019-04-09  9:52 ` Andrew Murray
@ 2019-04-09  9:52   ` Andrew Murray
  -1 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-09  9:52 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Szabolcs Nagy, dave.martin, linux-arm-kernel, Mark Rutland,
	Phil Blundell, libc-alpha, linux-api, Suzuki K Poulose

As we will exhaust the first 32 bits of AT_HWCAP, let's start
exposing AT_HWCAP2 to userspace to give us up to 64 caps.

Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we
prefer to expand into AT_HWCAP2 in order to provide a consistent
view to userspace between ILP32 and LP64. However, internal to the
kernel we prefer to continue to use the full space of elf_hwcap.

To reduce complexity and allow for future expansion, we now
represent hwcaps in the kernel as ordinals and use a
KERNEL_HWCAP_ prefix. This allows us to support automatic feature-based
module loading for all our hwcaps.

We introduce cpu_set_feature to set hwcaps, which complements the
existing cpu_have_feature helper. These helpers allow us to clean
up existing direct uses of elf_hwcap and reduce any future effort
required to move beyond 64 caps.

For convenience we also introduce cpu_{have,set}_named_feature, which
makes use of the cpu_feature macro to allow providing a hwcap name
without a {KERNEL_}HWCAP_ prefix.
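
As an illustrative sketch (not taken verbatim from this patch), a typical
caller that previously tested elf_hwcap directly would now read:

	/* before: open-coded bitmask test against elf_hwcap */
	if (!(elf_hwcap & HWCAP_PMULL))
		return -ENODEV;

	/* after: named helper, i.e. cpu_have_feature(KERNEL_HWCAP_PMULL) */
	if (!cpu_have_named_feature(PMULL))
		return -ENODEV;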

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
---
 Documentation/arm64/elf_hwcaps.txt       | 14 +++--
 arch/arm64/crypto/aes-ce-ccm-glue.c      |  2 +-
 arch/arm64/crypto/aes-neonbs-glue.c      |  2 +-
 arch/arm64/crypto/chacha-neon-glue.c     |  2 +-
 arch/arm64/crypto/crct10dif-ce-glue.c    |  4 +-
 arch/arm64/crypto/ghash-ce-glue.c        |  8 +--
 arch/arm64/crypto/nhpoly1305-neon-glue.c |  2 +-
 arch/arm64/crypto/sha256-glue.c          |  4 +-
 arch/arm64/include/asm/cpufeature.h      | 22 ++++----
 arch/arm64/include/asm/hwcap.h           | 52 ++++++++++++++++++-
 arch/arm64/include/uapi/asm/hwcap.h      |  2 +-
 arch/arm64/kernel/cpufeature.c           | 66 ++++++++++++------------
 arch/arm64/kernel/cpuinfo.c              |  2 +-
 arch/arm64/kernel/fpsimd.c               |  4 +-
 drivers/clocksource/arm_arch_timer.c     |  8 +++
 15 files changed, 131 insertions(+), 63 deletions(-)

diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
index 13d6691b37be..c04f8e87bab8 100644
--- a/Documentation/arm64/elf_hwcaps.txt
+++ b/Documentation/arm64/elf_hwcaps.txt
@@ -13,9 +13,9 @@ architected discovery mechanism available to userspace code at EL0. The
 kernel exposes the presence of these features to userspace through a set
 of flags called hwcaps, exposed in the auxilliary vector.
 
-Userspace software can test for features by acquiring the AT_HWCAP entry
-of the auxilliary vector, and testing whether the relevant flags are
-set, e.g.
+Userspace software can test for features by acquiring the AT_HWCAP or
+AT_HWCAP2 entry of the auxiliary vector, and testing whether the relevant
+flags are set, e.g.
 
 bool floating_point_is_present(void)
 {
@@ -194,3 +194,11 @@ HWCAP_PACG
     Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
     ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
     Documentation/arm64/pointer-authentication.txt.
+
+
+4. Unused AT_HWCAP bits
+-----------------------
+
+Each AT_HWCAP and AT_HWCAP2 entry provides for up to 32 hwcaps contained
+in bits [31:0]. For interoperation with userspace we guarantee that bits
+62 and 63 of AT_HWCAP will always be returned as 0.
diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
index 5fc6f51908fd..036ea77f83bc 100644
--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
+++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
@@ -372,7 +372,7 @@ static struct aead_alg ccm_aes_alg = {
 
 static int __init aes_mod_init(void)
 {
-	if (!(elf_hwcap & HWCAP_AES))
+	if (!cpu_have_named_feature(AES))
 		return -ENODEV;
 	return crypto_register_aead(&ccm_aes_alg);
 }
diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c
index e7a95a566462..bf1b321ff4c1 100644
--- a/arch/arm64/crypto/aes-neonbs-glue.c
+++ b/arch/arm64/crypto/aes-neonbs-glue.c
@@ -440,7 +440,7 @@ static int __init aes_init(void)
 	int err;
 	int i;
 
-	if (!(elf_hwcap & HWCAP_ASIMD))
+	if (!cpu_have_named_feature(ASIMD))
 		return -ENODEV;
 
 	err = crypto_register_skciphers(aes_algs, ARRAY_SIZE(aes_algs));
diff --git a/arch/arm64/crypto/chacha-neon-glue.c b/arch/arm64/crypto/chacha-neon-glue.c
index bece1d85bd81..cb054f51c917 100644
--- a/arch/arm64/crypto/chacha-neon-glue.c
+++ b/arch/arm64/crypto/chacha-neon-glue.c
@@ -173,7 +173,7 @@ static struct skcipher_alg algs[] = {
 
 static int __init chacha_simd_mod_init(void)
 {
-	if (!(elf_hwcap & HWCAP_ASIMD))
+	if (!cpu_have_named_feature(ASIMD))
 		return -ENODEV;
 
 	return crypto_register_skciphers(algs, ARRAY_SIZE(algs));
diff --git a/arch/arm64/crypto/crct10dif-ce-glue.c b/arch/arm64/crypto/crct10dif-ce-glue.c
index dd325829ee44..e81d5bd555c0 100644
--- a/arch/arm64/crypto/crct10dif-ce-glue.c
+++ b/arch/arm64/crypto/crct10dif-ce-glue.c
@@ -101,7 +101,7 @@ static struct shash_alg crc_t10dif_alg[] = {{
 
 static int __init crc_t10dif_mod_init(void)
 {
-	if (elf_hwcap & HWCAP_PMULL)
+	if (cpu_have_named_feature(PMULL))
 		return crypto_register_shashes(crc_t10dif_alg,
 					       ARRAY_SIZE(crc_t10dif_alg));
 	else
@@ -111,7 +111,7 @@ static int __init crc_t10dif_mod_init(void)
 
 static void __exit crc_t10dif_mod_exit(void)
 {
-	if (elf_hwcap & HWCAP_PMULL)
+	if (cpu_have_named_feature(PMULL))
 		crypto_unregister_shashes(crc_t10dif_alg,
 					  ARRAY_SIZE(crc_t10dif_alg));
 	else
diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
index 791ad422c427..4e69bb78ea89 100644
--- a/arch/arm64/crypto/ghash-ce-glue.c
+++ b/arch/arm64/crypto/ghash-ce-glue.c
@@ -704,10 +704,10 @@ static int __init ghash_ce_mod_init(void)
 {
 	int ret;
 
-	if (!(elf_hwcap & HWCAP_ASIMD))
+	if (!cpu_have_named_feature(ASIMD))
 		return -ENODEV;
 
-	if (elf_hwcap & HWCAP_PMULL)
+	if (cpu_have_named_feature(PMULL))
 		ret = crypto_register_shashes(ghash_alg,
 					      ARRAY_SIZE(ghash_alg));
 	else
@@ -717,7 +717,7 @@ static int __init ghash_ce_mod_init(void)
 	if (ret)
 		return ret;
 
-	if (elf_hwcap & HWCAP_PMULL) {
+	if (cpu_have_named_feature(PMULL)) {
 		ret = crypto_register_aead(&gcm_aes_alg);
 		if (ret)
 			crypto_unregister_shashes(ghash_alg,
@@ -728,7 +728,7 @@ static int __init ghash_ce_mod_init(void)
 
 static void __exit ghash_ce_mod_exit(void)
 {
-	if (elf_hwcap & HWCAP_PMULL)
+	if (cpu_have_named_feature(PMULL))
 		crypto_unregister_shashes(ghash_alg, ARRAY_SIZE(ghash_alg));
 	else
 		crypto_unregister_shash(ghash_alg);
diff --git a/arch/arm64/crypto/nhpoly1305-neon-glue.c b/arch/arm64/crypto/nhpoly1305-neon-glue.c
index 22cc32ac9448..38a589044b6c 100644
--- a/arch/arm64/crypto/nhpoly1305-neon-glue.c
+++ b/arch/arm64/crypto/nhpoly1305-neon-glue.c
@@ -56,7 +56,7 @@ static struct shash_alg nhpoly1305_alg = {
 
 static int __init nhpoly1305_mod_init(void)
 {
-	if (!(elf_hwcap & HWCAP_ASIMD))
+	if (!cpu_have_named_feature(ASIMD))
 		return -ENODEV;
 
 	return crypto_register_shash(&nhpoly1305_alg);
diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
index 4aedeaefd61f..0cccdb9cc2c0 100644
--- a/arch/arm64/crypto/sha256-glue.c
+++ b/arch/arm64/crypto/sha256-glue.c
@@ -173,7 +173,7 @@ static int __init sha256_mod_init(void)
 	if (ret)
 		return ret;
 
-	if (elf_hwcap & HWCAP_ASIMD) {
+	if (cpu_have_named_feature(ASIMD)) {
 		ret = crypto_register_shashes(neon_algs, ARRAY_SIZE(neon_algs));
 		if (ret)
 			crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
@@ -183,7 +183,7 @@ static int __init sha256_mod_init(void)
 
 static void __exit sha256_mod_fini(void)
 {
-	if (elf_hwcap & HWCAP_ASIMD)
+	if (cpu_have_named_feature(ASIMD))
 		crypto_unregister_shashes(neon_algs, ARRAY_SIZE(neon_algs));
 	crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
 }
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index e505e1fbd2b9..347c17046668 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -14,15 +14,8 @@
 #include <asm/hwcap.h>
 #include <asm/sysreg.h>
 
-/*
- * In the arm64 world (as in the ARM world), elf_hwcap is used both internally
- * in the kernel and for user space to keep track of which optional features
- * are supported by the current system. So let's map feature 'x' to HWCAP_x.
- * Note that HWCAP_x constants are bit fields so we need to take the log.
- */
-
-#define MAX_CPU_FEATURES	(8 * sizeof(elf_hwcap))
-#define cpu_feature(x)		ilog2(HWCAP_ ## x)
+#define MAX_CPU_FEATURES	64
+#define cpu_feature(x)		KERNEL_HWCAP_ ## x
 
 #ifndef __ASSEMBLY__
 
@@ -400,10 +393,19 @@ extern DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
 
 bool this_cpu_has_cap(unsigned int cap);
 
+static inline void cpu_set_feature(unsigned int num)
+{
+	WARN_ON(num >= MAX_CPU_FEATURES);
+	elf_hwcap |= BIT(num);
+}
+#define cpu_set_named_feature(name) cpu_set_feature(cpu_feature(name))
+
 static inline bool cpu_have_feature(unsigned int num)
 {
-	return elf_hwcap & (1UL << num);
+	WARN_ON(num >= MAX_CPU_FEATURES);
+	return elf_hwcap & BIT(num);
 }
+#define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))
 
 /* System capability check for constant caps */
 static inline bool __cpus_have_const_cap(int num)
diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
index 400b80b49595..beb53a00eb16 100644
--- a/arch/arm64/include/asm/hwcap.h
+++ b/arch/arm64/include/asm/hwcap.h
@@ -40,11 +40,61 @@
 #define COMPAT_HWCAP2_CRC32	(1 << 4)
 
 #ifndef __ASSEMBLY__
+#include <linux/kernel.h>
+#include <linux/log2.h>
+
+/*
+ * For userspace we represent hwcaps as a collection of HWCAP{,2}_x bitfields
+ * as described in uapi/asm/hwcap.h. For the kernel we represent hwcaps as
+ * natural numbers (in a single range of size MAX_CPU_FEATURES) defined here
+ * with prefix KERNEL_HWCAP_ mapped to their HWCAP{,2}_x counterpart.
+ *
+ * Hwcaps should be set and tested within the kernel via the
+ * cpu_{set,have}_named_feature(feature) where feature is the unique suffix
+ * of KERNEL_HWCAP_{feature}.
+ */
+#define __khwcap_feature(x)		ilog2(HWCAP_ ## x)
+#define KERNEL_HWCAP_FP			__khwcap_feature(FP)
+#define KERNEL_HWCAP_ASIMD		__khwcap_feature(ASIMD)
+#define KERNEL_HWCAP_EVTSTRM		__khwcap_feature(EVTSTRM)
+#define KERNEL_HWCAP_AES		__khwcap_feature(AES)
+#define KERNEL_HWCAP_PMULL		__khwcap_feature(PMULL)
+#define KERNEL_HWCAP_SHA1		__khwcap_feature(SHA1)
+#define KERNEL_HWCAP_SHA2		__khwcap_feature(SHA2)
+#define KERNEL_HWCAP_CRC32		__khwcap_feature(CRC32)
+#define KERNEL_HWCAP_ATOMICS		__khwcap_feature(ATOMICS)
+#define KERNEL_HWCAP_FPHP		__khwcap_feature(FPHP)
+#define KERNEL_HWCAP_ASIMDHP		__khwcap_feature(ASIMDHP)
+#define KERNEL_HWCAP_CPUID		__khwcap_feature(CPUID)
+#define KERNEL_HWCAP_ASIMDRDM		__khwcap_feature(ASIMDRDM)
+#define KERNEL_HWCAP_JSCVT		__khwcap_feature(JSCVT)
+#define KERNEL_HWCAP_FCMA		__khwcap_feature(FCMA)
+#define KERNEL_HWCAP_LRCPC		__khwcap_feature(LRCPC)
+#define KERNEL_HWCAP_DCPOP		__khwcap_feature(DCPOP)
+#define KERNEL_HWCAP_SHA3		__khwcap_feature(SHA3)
+#define KERNEL_HWCAP_SM3		__khwcap_feature(SM3)
+#define KERNEL_HWCAP_SM4		__khwcap_feature(SM4)
+#define KERNEL_HWCAP_ASIMDDP		__khwcap_feature(ASIMDDP)
+#define KERNEL_HWCAP_SHA512		__khwcap_feature(SHA512)
+#define KERNEL_HWCAP_SVE		__khwcap_feature(SVE)
+#define KERNEL_HWCAP_ASIMDFHM		__khwcap_feature(ASIMDFHM)
+#define KERNEL_HWCAP_DIT		__khwcap_feature(DIT)
+#define KERNEL_HWCAP_USCAT		__khwcap_feature(USCAT)
+#define KERNEL_HWCAP_ILRCPC		__khwcap_feature(ILRCPC)
+#define KERNEL_HWCAP_FLAGM		__khwcap_feature(FLAGM)
+#define KERNEL_HWCAP_SSBS		__khwcap_feature(SSBS)
+#define KERNEL_HWCAP_SB			__khwcap_feature(SB)
+#define KERNEL_HWCAP_PACA		__khwcap_feature(PACA)
+#define KERNEL_HWCAP_PACG		__khwcap_feature(PACG)
+
+#define __khwcap2_feature(x)		(ilog2(HWCAP2_ ## x) + 32)
+
 /*
  * This yields a mask that user programs can use to figure out what
  * instruction set this cpu supports.
  */
-#define ELF_HWCAP		(elf_hwcap)
+#define ELF_HWCAP		lower_32_bits(elf_hwcap)
+#define ELF_HWCAP2		upper_32_bits(elf_hwcap)
 
 #ifdef CONFIG_COMPAT
 #define COMPAT_ELF_HWCAP	(compat_elf_hwcap)
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 5f0750c2199c..453b45af80b7 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -18,7 +18,7 @@
 #define _UAPI__ASM_HWCAP_H
 
 /*
- * HWCAP flags - for elf_hwcap (in kernel) and AT_HWCAP
+ * HWCAP flags - for AT_HWCAP
  */
 #define HWCAP_FP		(1 << 0)
 #define HWCAP_ASIMD		(1 << 1)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4061de10cea6..986ceeacd19f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1571,39 +1571,39 @@ static const struct arm64_cpu_capabilities ptr_auth_hwcap_gen_matches[] = {
 #endif
 
 static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_PMULL),
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_AES),
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA1_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA1),
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA2),
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_SHA512),
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_CRC32_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_CRC32),
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_ATOMICS_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ATOMICS),
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_RDM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDRDM),
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA3),
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM3),
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM4),
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDDP),
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDFHM),
-	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FLAGM),
-	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_FP),
-	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_FPHP),
-	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_ASIMD),
-	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_ASIMDHP),
-	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_DIT),
-	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_DCPOP),
-	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_JSCVT),
-	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FCMA),
-	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_LRCPC),
-	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ILRCPC),
-	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_SB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SB),
-	HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_USCAT),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_PMULL),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_AES),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA1_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA1),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA2),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_SHA512),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_CRC32_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_CRC32),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_ATOMICS_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_ATOMICS),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_RDM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDRDM),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SHA3),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM3_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SM3),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SM4),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDDP),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDFHM),
+	HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FLAGM),
+	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_FP),
+	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FPHP),
+	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, KERNEL_HWCAP_ASIMD),
+	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDHP),
+	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DIT),
+	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DCPOP),
+	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_JSCVT),
+	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FCMA),
+	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_LRCPC),
+	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_ILRCPC),
+	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_SB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_SB),
+	HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_USCAT),
 #ifdef CONFIG_ARM64_SVE
-	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
+	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, KERNEL_HWCAP_SVE),
 #endif
-	HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, HWCAP_SSBS),
+	HWCAP_CAP(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_SSBS_SHIFT, FTR_UNSIGNED, ID_AA64PFR1_SSBS_PSTATE_INSNS, CAP_HWCAP, KERNEL_HWCAP_SSBS),
 #ifdef CONFIG_ARM64_PTR_AUTH
-	HWCAP_MULTI_CAP(ptr_auth_hwcap_addr_matches, CAP_HWCAP, HWCAP_PACA),
-	HWCAP_MULTI_CAP(ptr_auth_hwcap_gen_matches, CAP_HWCAP, HWCAP_PACG),
+	HWCAP_MULTI_CAP(ptr_auth_hwcap_addr_matches, CAP_HWCAP, KERNEL_HWCAP_PACA),
+	HWCAP_MULTI_CAP(ptr_auth_hwcap_gen_matches, CAP_HWCAP, KERNEL_HWCAP_PACG),
 #endif
 	{},
 };
@@ -1623,7 +1623,7 @@ static void __init cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap)
 {
 	switch (cap->hwcap_type) {
 	case CAP_HWCAP:
-		elf_hwcap |= cap->hwcap;
+		cpu_set_feature(cap->hwcap);
 		break;
 #ifdef CONFIG_COMPAT
 	case CAP_COMPAT_HWCAP:
@@ -1646,7 +1646,7 @@ static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap)
 
 	switch (cap->hwcap_type) {
 	case CAP_HWCAP:
-		rc = (elf_hwcap & cap->hwcap) != 0;
+		rc = cpu_have_feature(cap->hwcap);
 		break;
 #ifdef CONFIG_COMPAT
 	case CAP_COMPAT_HWCAP:
@@ -1667,7 +1667,7 @@ static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap)
 static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
 {
 	/* We support emulation of accesses to CPU ID feature registers */
-	elf_hwcap |= HWCAP_CPUID;
+	cpu_set_named_feature(CPUID);
 	for (; hwcaps->matches; hwcaps++)
 		if (hwcaps->matches(hwcaps, cpucap_default_scope(hwcaps)))
 			cap_set_elf_hwcap(hwcaps);
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index ca0685f33900..810db95f293f 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -167,7 +167,7 @@ static int c_show(struct seq_file *m, void *v)
 #endif /* CONFIG_COMPAT */
 		} else {
 			for (j = 0; hwcap_str[j]; j++)
-				if (elf_hwcap & (1 << j))
+				if (cpu_have_feature(j))
 					seq_printf(m, " %s", hwcap_str[j]);
 		}
 		seq_puts(m, "\n");
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 5ebe73b69961..735cf1f8b109 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1258,14 +1258,14 @@ static inline void fpsimd_hotplug_init(void) { }
  */
 static int __init fpsimd_init(void)
 {
-	if (elf_hwcap & HWCAP_FP) {
+	if (cpu_have_named_feature(FP)) {
 		fpsimd_pm_init();
 		fpsimd_hotplug_init();
 	} else {
 		pr_notice("Floating-point is not implemented\n");
 	}
 
-	if (!(elf_hwcap & HWCAP_ASIMD))
+	if (!cpu_have_named_feature(ASIMD))
 		pr_notice("Advanced SIMD is not implemented\n");
 
 	return sve_sysctl_init();
diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
index aa4ec53281ce..6cc8aff83805 100644
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -833,7 +833,11 @@ static void arch_timer_evtstrm_enable(int divider)
 	cntkctl |= (divider << ARCH_TIMER_EVT_TRIGGER_SHIFT)
 			| ARCH_TIMER_VIRT_EVT_EN;
 	arch_timer_set_cntkctl(cntkctl);
+#ifdef CONFIG_ARM64
+	cpu_set_named_feature(EVTSTRM);
+#else
 	elf_hwcap |= HWCAP_EVTSTRM;
+#endif
 #ifdef CONFIG_COMPAT
 	compat_elf_hwcap |= COMPAT_HWCAP_EVTSTRM;
 #endif
@@ -1055,7 +1059,11 @@ static int arch_timer_cpu_pm_notify(struct notifier_block *self,
 	} else if (action == CPU_PM_ENTER_FAILED || action == CPU_PM_EXIT) {
 		arch_timer_set_cntkctl(__this_cpu_read(saved_cntkctl));
 
+#ifdef CONFIG_ARM64
+		if (cpu_have_named_feature(EVTSTRM))
+#else
 		if (elf_hwcap & HWCAP_EVTSTRM)
+#endif
 			cpumask_set_cpu(smp_processor_id(), &evtstrm_available);
 	}
 	return NOTIFY_OK;
-- 
2.21.0

* [PATCH v5 2/6] arm64: HWCAP: encapsulate elf_hwcap
  2019-04-09  9:52 ` Andrew Murray
@ 2019-04-09  9:52   ` Andrew Murray
  -1 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-09  9:52 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Szabolcs Nagy, dave.martin, linux-arm-kernel, Mark Rutland,
	Phil Blundell, libc-alpha, linux-api, Suzuki K Poulose,
	Dave Martin

The introduction of AT_HWCAP2 added accessors which ensure that
hwcap features are set and tested appropriately.

Let's now mandate access to elf_hwcap via these accessors by making
elf_hwcap static within cpufeature.c.
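
As a sketch of the resulting usage (illustrative only), code outside
cpufeature.c is expected to go through the exported helpers rather than
the now-static variable:

	/* no longer possible outside cpufeature.c: elf_hwcap |= HWCAP_EVTSTRM; */
	cpu_set_named_feature(EVTSTRM);

	/* ELF_HWCAP/ELF_HWCAP2 now resolve to these accessors */
	unsigned long hwcap  = cpu_get_elf_hwcap();
	unsigned long hwcap2 = cpu_get_elf_hwcap2();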

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 15 ++++---------
 arch/arm64/include/asm/hwcap.h      |  7 +++---
 arch/arm64/kernel/cpufeature.c      | 33 +++++++++++++++++++++++++++--
 3 files changed, 38 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 347c17046668..a3f028f82def 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -392,19 +392,12 @@ extern DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
 	for_each_set_bit(cap, cpu_hwcaps, ARM64_NCAPS)
 
 bool this_cpu_has_cap(unsigned int cap);
+void cpu_set_feature(unsigned int num);
+bool cpu_have_feature(unsigned int num);
+unsigned long cpu_get_elf_hwcap(void);
+unsigned long cpu_get_elf_hwcap2(void);
 
-static inline void cpu_set_feature(unsigned int num)
-{
-	WARN_ON(num >= MAX_CPU_FEATURES);
-	elf_hwcap |= BIT(num);
-}
 #define cpu_set_named_feature(name) cpu_set_feature(cpu_feature(name))
-
-static inline bool cpu_have_feature(unsigned int num)
-{
-	WARN_ON(num >= MAX_CPU_FEATURES);
-	return elf_hwcap & BIT(num);
-}
 #define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))
 
 /* System capability check for constant caps */
diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
index beb53a00eb16..d0d4e163c50b 100644
--- a/arch/arm64/include/asm/hwcap.h
+++ b/arch/arm64/include/asm/hwcap.h
@@ -17,6 +17,7 @@
 #define __ASM_HWCAP_H
 
 #include <uapi/asm/hwcap.h>
+#include <asm/cpufeature.h>
 
 #define COMPAT_HWCAP_HALF	(1 << 1)
 #define COMPAT_HWCAP_THUMB	(1 << 2)
@@ -40,7 +41,6 @@
 #define COMPAT_HWCAP2_CRC32	(1 << 4)
 
 #ifndef __ASSEMBLY__
-#include <linux/kernel.h>
 #include <linux/log2.h>
 
 /*
@@ -93,8 +93,8 @@
  * This yields a mask that user programs can use to figure out what
  * instruction set this cpu supports.
  */
-#define ELF_HWCAP		lower_32_bits(elf_hwcap)
-#define ELF_HWCAP2		upper_32_bits(elf_hwcap)
+#define ELF_HWCAP		cpu_get_elf_hwcap()
+#define ELF_HWCAP2		cpu_get_elf_hwcap2()
 
 #ifdef CONFIG_COMPAT
 #define COMPAT_ELF_HWCAP	(compat_elf_hwcap)
@@ -110,6 +110,5 @@ enum {
 #endif
 };
 
-extern unsigned long elf_hwcap;
 #endif
 #endif
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 986ceeacd19f..a655d1bb1186 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -35,8 +35,8 @@
 #include <asm/traps.h>
 #include <asm/virt.h>
 
-unsigned long elf_hwcap __read_mostly;
-EXPORT_SYMBOL_GPL(elf_hwcap);
+/* Kernel representation of AT_HWCAP and AT_HWCAP2 */
+static unsigned long elf_hwcap __read_mostly;
 
 #ifdef CONFIG_COMPAT
 #define COMPAT_ELF_HWCAP_DEFAULT	\
@@ -1947,6 +1947,35 @@ bool this_cpu_has_cap(unsigned int n)
 	return false;
 }
 
+void cpu_set_feature(unsigned int num)
+{
+	WARN_ON(num >= MAX_CPU_FEATURES);
+	elf_hwcap |= BIT(num);
+}
+EXPORT_SYMBOL_GPL(cpu_set_feature);
+
+bool cpu_have_feature(unsigned int num)
+{
+	WARN_ON(num >= MAX_CPU_FEATURES);
+	return elf_hwcap & BIT(num);
+}
+EXPORT_SYMBOL_GPL(cpu_have_feature);
+
+unsigned long cpu_get_elf_hwcap(void)
+{
+	/*
+	 * We currently only populate the first 32 bits of AT_HWCAP. Please
+	 * note that for userspace compatibility we guarantee that bits 62
+	 * and 63 will always be returned as 0.
+	 */
+	return lower_32_bits(elf_hwcap);
+}
+
+unsigned long cpu_get_elf_hwcap2(void)
+{
+	return upper_32_bits(elf_hwcap);
+}
+
 static void __init setup_system_capabilities(void)
 {
 	/*
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v5 2/6] arm64: HWCAP: encapsulate elf_hwcap
@ 2019-04-09  9:52   ` Andrew Murray
  0 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-09  9:52 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Mark Rutland, libc-alpha, Suzuki K Poulose, Szabolcs Nagy,
	linux-api, Phil Blundell, Dave Martin, linux-arm-kernel

The introduction of AT_HWCAP2 added accessors which ensure that
hwcap features are set and tested appropriately.

Let's now mandate access to elf_hwcap via these accessors by making
elf_hwcap static within cpufeature.c.
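
For illustration, a minimal sketch of how an in-kernel caller is expected to
use the accessors rather than poking elf_hwcap directly (the callers below
are hypothetical; the helpers are the ones introduced earlier in this series):

        #include <asm/cpufeature.h>
        #include <asm/hwcap.h>

        static void example_set_cap(void)
        {
                /* KERNEL_HWCAP_* ordinals index bits of the kernel's elf_hwcap */
                cpu_set_feature(KERNEL_HWCAP_DCPOP);
        }

        static bool example_have_fp(void)
        {
                /* cpu_feature(FP) expands to KERNEL_HWCAP_FP */
                return cpu_have_named_feature(FP);
        }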

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 15 ++++---------
 arch/arm64/include/asm/hwcap.h      |  7 +++---
 arch/arm64/kernel/cpufeature.c      | 33 +++++++++++++++++++++++++++--
 3 files changed, 38 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 347c17046668..a3f028f82def 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -392,19 +392,12 @@ extern DECLARE_BITMAP(boot_capabilities, ARM64_NPATCHABLE);
 	for_each_set_bit(cap, cpu_hwcaps, ARM64_NCAPS)
 
 bool this_cpu_has_cap(unsigned int cap);
+void cpu_set_feature(unsigned int num);
+bool cpu_have_feature(unsigned int num);
+unsigned long cpu_get_elf_hwcap(void);
+unsigned long cpu_get_elf_hwcap2(void);
 
-static inline void cpu_set_feature(unsigned int num)
-{
-	WARN_ON(num >= MAX_CPU_FEATURES);
-	elf_hwcap |= BIT(num);
-}
 #define cpu_set_named_feature(name) cpu_set_feature(cpu_feature(name))
-
-static inline bool cpu_have_feature(unsigned int num)
-{
-	WARN_ON(num >= MAX_CPU_FEATURES);
-	return elf_hwcap & BIT(num);
-}
 #define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))
 
 /* System capability check for constant caps */
diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
index beb53a00eb16..d0d4e163c50b 100644
--- a/arch/arm64/include/asm/hwcap.h
+++ b/arch/arm64/include/asm/hwcap.h
@@ -17,6 +17,7 @@
 #define __ASM_HWCAP_H
 
 #include <uapi/asm/hwcap.h>
+#include <asm/cpufeature.h>
 
 #define COMPAT_HWCAP_HALF	(1 << 1)
 #define COMPAT_HWCAP_THUMB	(1 << 2)
@@ -40,7 +41,6 @@
 #define COMPAT_HWCAP2_CRC32	(1 << 4)
 
 #ifndef __ASSEMBLY__
-#include <linux/kernel.h>
 #include <linux/log2.h>
 
 /*
@@ -93,8 +93,8 @@
  * This yields a mask that user programs can use to figure out what
  * instruction set this cpu supports.
  */
-#define ELF_HWCAP		lower_32_bits(elf_hwcap)
-#define ELF_HWCAP2		upper_32_bits(elf_hwcap)
+#define ELF_HWCAP		cpu_get_elf_hwcap()
+#define ELF_HWCAP2		cpu_get_elf_hwcap2()
 
 #ifdef CONFIG_COMPAT
 #define COMPAT_ELF_HWCAP	(compat_elf_hwcap)
@@ -110,6 +110,5 @@ enum {
 #endif
 };
 
-extern unsigned long elf_hwcap;
 #endif
 #endif
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 986ceeacd19f..a655d1bb1186 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -35,8 +35,8 @@
 #include <asm/traps.h>
 #include <asm/virt.h>
 
-unsigned long elf_hwcap __read_mostly;
-EXPORT_SYMBOL_GPL(elf_hwcap);
+/* Kernel representation of AT_HWCAP and AT_HWCAP2 */
+static unsigned long elf_hwcap __read_mostly;
 
 #ifdef CONFIG_COMPAT
 #define COMPAT_ELF_HWCAP_DEFAULT	\
@@ -1947,6 +1947,35 @@ bool this_cpu_has_cap(unsigned int n)
 	return false;
 }
 
+void cpu_set_feature(unsigned int num)
+{
+	WARN_ON(num >= MAX_CPU_FEATURES);
+	elf_hwcap |= BIT(num);
+}
+EXPORT_SYMBOL_GPL(cpu_set_feature);
+
+bool cpu_have_feature(unsigned int num)
+{
+	WARN_ON(num >= MAX_CPU_FEATURES);
+	return elf_hwcap & BIT(num);
+}
+EXPORT_SYMBOL_GPL(cpu_have_feature);
+
+unsigned long cpu_get_elf_hwcap(void)
+{
+	/*
+	 * We currently only populate the first 32 bits of AT_HWCAP. Please
+	 * note that for userspace compatibility we guarantee that bits 62
+	 * and 63 will always be returned as 0.
+	 */
+	return lower_32_bits(elf_hwcap);
+}
+
+unsigned long cpu_get_elf_hwcap2(void)
+{
+	return upper_32_bits(elf_hwcap);
+}
+
 static void __init setup_system_capabilities(void)
 {
 	/*
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v5 3/6] arm64: Handle trapped DC CVADP
  2019-04-09  9:52 ` Andrew Murray
@ 2019-04-09  9:52   ` Andrew Murray
  -1 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-09  9:52 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Szabolcs Nagy, dave.martin, linux-arm-kernel, Mark Rutland,
	Phil Blundell, libc-alpha, linux-api, Suzuki K Poulose,
	Dave Martin

The ARMv8.5 DC CVADP instruction may be trapped to EL1 via
SCTLR_EL1.UCI, so let's provide a handler for it.

Just like the CVAP instruction, we use a 'sys' instruction instead of
the 'dc' alias to avoid build issues with older toolchains.
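
For illustration only, this is roughly what issuing the operation from C via
the raw encoding looks like (the helper name is invented here; op1=3, CRn=c7,
CRm=c13, op2=1 is the encoding handled below):

        static inline void dc_cvadp(void *addr)
        {
                /* equivalent to "dc cvadp, <addr>" on assemblers that know the alias */
                asm volatile("sys 3, c7, c13, 1, %0" : : "r" (addr) : "memory");
        }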

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
---
 arch/arm64/include/asm/esr.h | 3 ++-
 arch/arm64/kernel/traps.c    | 3 +++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 52233f00d53d..07d5c026a0b3 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -198,9 +198,10 @@
 /*
  * User space cache operations have the following sysreg encoding
  * in System instructions.
- * op0=1, op1=3, op2=1, crn=7, crm={ 5, 10, 11, 12, 14 }, WRITE (L=0)
+ * op0=1, op1=3, op2=1, crn=7, crm={ 5, 10, 11, 12, 13, 14 }, WRITE (L=0)
  */
 #define ESR_ELx_SYS64_ISS_CRM_DC_CIVAC	14
+#define ESR_ELx_SYS64_ISS_CRM_DC_CVADP	13
 #define ESR_ELx_SYS64_ISS_CRM_DC_CVAP	12
 #define ESR_ELx_SYS64_ISS_CRM_DC_CVAU	11
 #define ESR_ELx_SYS64_ISS_CRM_DC_CVAC	10
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 8ad119c3f665..f66e1ddbe4a7 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -459,6 +459,9 @@ static void user_cache_maint_handler(unsigned int esr, struct pt_regs *regs)
 	case ESR_ELx_SYS64_ISS_CRM_DC_CVAC:	/* DC CVAC, gets promoted */
 		__user_cache_maint("dc civac", address, ret);
 		break;
+	case ESR_ELx_SYS64_ISS_CRM_DC_CVADP:	/* DC CVADP */
+		__user_cache_maint("sys 3, c7, c13, 1", address, ret);
+		break;
 	case ESR_ELx_SYS64_ISS_CRM_DC_CVAP:	/* DC CVAP */
 		__user_cache_maint("sys 3, c7, c12, 1", address, ret);
 		break;
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v5 3/6] arm64: Handle trapped DC CVADP
@ 2019-04-09  9:52   ` Andrew Murray
  0 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-09  9:52 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Mark Rutland, libc-alpha, Suzuki K Poulose, Szabolcs Nagy,
	linux-api, Phil Blundell, Dave Martin, linux-arm-kernel

The ARMv8.5 DC CVADP instruction may be trapped to EL1 via
SCTLR_EL1.UCI, so let's provide a handler for it.

Just like the CVAP instruction, we use a 'sys' instruction instead of
the 'dc' alias to avoid build issues with older toolchains.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
---
 arch/arm64/include/asm/esr.h | 3 ++-
 arch/arm64/kernel/traps.c    | 3 +++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 52233f00d53d..07d5c026a0b3 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -198,9 +198,10 @@
 /*
  * User space cache operations have the following sysreg encoding
  * in System instructions.
- * op0=1, op1=3, op2=1, crn=7, crm={ 5, 10, 11, 12, 14 }, WRITE (L=0)
+ * op0=1, op1=3, op2=1, crn=7, crm={ 5, 10, 11, 12, 13, 14 }, WRITE (L=0)
  */
 #define ESR_ELx_SYS64_ISS_CRM_DC_CIVAC	14
+#define ESR_ELx_SYS64_ISS_CRM_DC_CVADP	13
 #define ESR_ELx_SYS64_ISS_CRM_DC_CVAP	12
 #define ESR_ELx_SYS64_ISS_CRM_DC_CVAU	11
 #define ESR_ELx_SYS64_ISS_CRM_DC_CVAC	10
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 8ad119c3f665..f66e1ddbe4a7 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -459,6 +459,9 @@ static void user_cache_maint_handler(unsigned int esr, struct pt_regs *regs)
 	case ESR_ELx_SYS64_ISS_CRM_DC_CVAC:	/* DC CVAC, gets promoted */
 		__user_cache_maint("dc civac", address, ret);
 		break;
+	case ESR_ELx_SYS64_ISS_CRM_DC_CVADP:	/* DC CVADP */
+		__user_cache_maint("sys 3, c7, c13, 1", address, ret);
+		break;
 	case ESR_ELx_SYS64_ISS_CRM_DC_CVAP:	/* DC CVAP */
 		__user_cache_maint("sys 3, c7, c12, 1", address, ret);
 		break;
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v5 4/6] arm64: Expose DC CVADP to userspace
  2019-04-09  9:52 ` Andrew Murray
@ 2019-04-09  9:52   ` Andrew Murray
  -1 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-09  9:52 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Szabolcs Nagy, dave.martin, linux-arm-kernel, Mark Rutland,
	Phil Blundell, libc-alpha, linux-api, Suzuki K Poulose,
	Dave Martin

ARMv8.5 builds upon the ARMv8.2 DC CVAP instruction by introducing a DC
CVADP instruction which cleans the data cache to the point of deep
persistence. Let's expose this support via the arm64 ELF hwcaps.
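
A minimal sketch of the corresponding userspace check (assuming a libc that
provides getauxval() and defines AT_HWCAP2; HWCAP2_DCPODP is defined locally
in case the installed uapi headers predate this series):

        #include <stdbool.h>
        #include <sys/auxv.h>

        #ifndef HWCAP2_DCPODP
        #define HWCAP2_DCPODP (1 << 0)
        #endif

        static bool cpu_has_dc_cvadp(void)
        {
                return getauxval(AT_HWCAP2) & HWCAP2_DCPODP;
        }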

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
---
 Documentation/arm64/elf_hwcaps.txt  | 4 ++++
 arch/arm64/include/asm/hwcap.h      | 1 +
 arch/arm64/include/uapi/asm/hwcap.h | 5 +++++
 arch/arm64/kernel/cpufeature.c      | 1 +
 arch/arm64/kernel/cpuinfo.c         | 1 +
 5 files changed, 12 insertions(+)

diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
index c04f8e87bab8..7b591c1dcb53 100644
--- a/Documentation/arm64/elf_hwcaps.txt
+++ b/Documentation/arm64/elf_hwcaps.txt
@@ -135,6 +135,10 @@ HWCAP_DCPOP
 
     Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0001.
 
+HWCAP2_DCPODP
+
+    Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0010.
+
 HWCAP_SHA3
 
     Functionality implied by ID_AA64ISAR0_EL1.SHA3 == 0b0001.
diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
index d0d4e163c50b..3a31a9ee9e95 100644
--- a/arch/arm64/include/asm/hwcap.h
+++ b/arch/arm64/include/asm/hwcap.h
@@ -88,6 +88,7 @@
 #define KERNEL_HWCAP_PACG		__khwcap_feature(PACG)
 
 #define __khwcap2_feature(x)		(ilog2(HWCAP2_ ## x) + 32)
+#define KERNEL_HWCAP_DCPODP		__khwcap2_feature(DCPODP)
 
 /*
  * This yields a mask that user programs can use to figure out what
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 453b45af80b7..d64af3913a9e 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -53,4 +53,9 @@
 #define HWCAP_PACA		(1 << 30)
 #define HWCAP_PACG		(1UL << 31)
 
+/*
+ * HWCAP2 flags - for AT_HWCAP2
+ */
+#define HWCAP2_DCPODP		(1 << 0)
+
 #endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a655d1bb1186..f8b682a3a9f4 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1591,6 +1591,7 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDHP),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DIT),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DCPOP),
+	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_DCPODP),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_JSCVT),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FCMA),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_LRCPC),
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 810db95f293f..093ca53ce1d1 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -85,6 +85,7 @@ static const char *const hwcap_str[] = {
 	"sb",
 	"paca",
 	"pacg",
+	"dcpodp",
 	NULL
 };
 
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v5 4/6] arm64: Expose DC CVADP to userspace
@ 2019-04-09  9:52   ` Andrew Murray
  0 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-09  9:52 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Mark Rutland, libc-alpha, Suzuki K Poulose, Szabolcs Nagy,
	linux-api, Phil Blundell, Dave Martin, linux-arm-kernel

ARMv8.5 builds upon the ARMv8.2 DC CVAP instruction by introducing a DC
CVADP instruction which cleans the data cache to the point of deep
persistence. Let's expose this support via the arm64 ELF hwcaps.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
---
 Documentation/arm64/elf_hwcaps.txt  | 4 ++++
 arch/arm64/include/asm/hwcap.h      | 1 +
 arch/arm64/include/uapi/asm/hwcap.h | 5 +++++
 arch/arm64/kernel/cpufeature.c      | 1 +
 arch/arm64/kernel/cpuinfo.c         | 1 +
 5 files changed, 12 insertions(+)

diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
index c04f8e87bab8..7b591c1dcb53 100644
--- a/Documentation/arm64/elf_hwcaps.txt
+++ b/Documentation/arm64/elf_hwcaps.txt
@@ -135,6 +135,10 @@ HWCAP_DCPOP
 
     Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0001.
 
+HWCAP2_DCPODP
+
+    Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0010.
+
 HWCAP_SHA3
 
     Functionality implied by ID_AA64ISAR0_EL1.SHA3 == 0b0001.
diff --git a/arch/arm64/include/asm/hwcap.h b/arch/arm64/include/asm/hwcap.h
index d0d4e163c50b..3a31a9ee9e95 100644
--- a/arch/arm64/include/asm/hwcap.h
+++ b/arch/arm64/include/asm/hwcap.h
@@ -88,6 +88,7 @@
 #define KERNEL_HWCAP_PACG		__khwcap_feature(PACG)
 
 #define __khwcap2_feature(x)		(ilog2(HWCAP2_ ## x) + 32)
+#define KERNEL_HWCAP_DCPODP		__khwcap2_feature(DCPODP)
 
 /*
  * This yields a mask that user programs can use to figure out what
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index 453b45af80b7..d64af3913a9e 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -53,4 +53,9 @@
 #define HWCAP_PACA		(1 << 30)
 #define HWCAP_PACG		(1UL << 31)
 
+/*
+ * HWCAP2 flags - for AT_HWCAP2
+ */
+#define HWCAP2_DCPODP		(1 << 0)
+
 #endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a655d1bb1186..f8b682a3a9f4 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1591,6 +1591,7 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_ASIMDHP),
 	HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DIT),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_DCPOP),
+	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, KERNEL_HWCAP_DCPODP),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_JSCVT),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_FCMA),
 	HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, KERNEL_HWCAP_LRCPC),
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 810db95f293f..093ca53ce1d1 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -85,6 +85,7 @@ static const char *const hwcap_str[] = {
 	"sb",
 	"paca",
 	"pacg",
+	"dcpodp",
 	NULL
 };
 
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v5 5/6] arm64: add CVADP support to the cache maintenance helper
  2019-04-09  9:52 ` Andrew Murray
@ 2019-04-09  9:52   ` Andrew Murray
  -1 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-09  9:52 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Szabolcs Nagy, dave.martin, linux-arm-kernel, Mark Rutland,
	Phil Blundell, libc-alpha, linux-api, Suzuki K Poulose,
	Dave Martin

Allow users of dcache_by_line_op to specify cvadp as an op.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
---
 arch/arm64/include/asm/assembler.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index c5308d01e228..d50caf0e6b64 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -407,10 +407,14 @@ alternative_endif
 	.ifc	\op, cvap
 	sys	3, c7, c12, 1, \kaddr	// dc cvap
 	.else
+	.ifc	\op, cvadp
+	sys	3, c7, c13, 1, \kaddr	// dc cvadp
+	.else
 	dc	\op, \kaddr
 	.endif
 	.endif
 	.endif
+	.endif
 	add	\kaddr, \kaddr, \tmp1
 	cmp	\kaddr, \size
 	b.lo	9998b
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v5 5/6] arm64: add CVADP support to the cache maintenance helper
@ 2019-04-09  9:52   ` Andrew Murray
  0 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-09  9:52 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Mark Rutland, libc-alpha, Suzuki K Poulose, Szabolcs Nagy,
	linux-api, Phil Blundell, Dave Martin, linux-arm-kernel

Allow users of dcache_by_line_op to specify cvadp as an op.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
---
 arch/arm64/include/asm/assembler.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index c5308d01e228..d50caf0e6b64 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -407,10 +407,14 @@ alternative_endif
 	.ifc	\op, cvap
 	sys	3, c7, c12, 1, \kaddr	// dc cvap
 	.else
+	.ifc	\op, cvadp
+	sys	3, c7, c13, 1, \kaddr	// dc cvadp
+	.else
 	dc	\op, \kaddr
 	.endif
 	.endif
 	.endif
+	.endif
 	add	\kaddr, \kaddr, \tmp1
 	cmp	\kaddr, \size
 	b.lo	9998b
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v5 6/6] arm64: Advertise ARM64_HAS_DCPODP cpu feature
  2019-04-09  9:52 ` Andrew Murray
@ 2019-04-09  9:52   ` Andrew Murray
  -1 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-09  9:52 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Szabolcs Nagy, dave.martin, linux-arm-kernel, Mark Rutland,
	Phil Blundell, libc-alpha, linux-api, Suzuki K Poulose,
	Dave Martin

Advertise ARM64_HAS_DCPODP when both DC CVAP and DC CVADP are supported.

Even though we don't use this feature now, we provide it for consistency
with DCPOP and anticipate it being used in the future.
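
Should such a user appear, a hedged sketch of how kernel code could key off
the new capability (the caller is hypothetical):

        #include <asm/cpufeature.h>

        static bool dcpodp_supported(void)
        {
                /* true only once the system-wide capability has been established */
                return cpus_have_const_cap(ARM64_HAS_DCPODP);
        }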

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpucaps.h |  3 ++-
 arch/arm64/kernel/cpufeature.c   | 10 ++++++++++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index f6a76e43f39e..defdc67d9ab4 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -61,7 +61,8 @@
 #define ARM64_HAS_GENERIC_AUTH_ARCH		40
 #define ARM64_HAS_GENERIC_AUTH_IMP_DEF		41
 #define ARM64_HAS_IRQ_PRIO_MASKING		42
+#define ARM64_HAS_DCPODP			43
 
-#define ARM64_NCAPS				43
+#define ARM64_NCAPS				44
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index f8b682a3a9f4..9d18e45311fd 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1340,6 +1340,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_DPB_SHIFT,
 		.min_field_value = 1,
 	},
+	{
+		.desc = "Data cache clean to Point of Deep Persistence",
+		.capability = ARM64_HAS_DCPODP,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64ISAR1_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64ISAR1_DPB_SHIFT,
+		.min_field_value = 2,
+	},
 #endif
 #ifdef CONFIG_ARM64_SVE
 	{
-- 
2.21.0

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH v5 6/6] arm64: Advertise ARM64_HAS_DCPODP cpu feature
@ 2019-04-09  9:52   ` Andrew Murray
  0 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-09  9:52 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Mark Rutland, libc-alpha, Suzuki K Poulose, Szabolcs Nagy,
	linux-api, Phil Blundell, Dave Martin, linux-arm-kernel

Advertise ARM64_HAS_DCPODP when both DC CVAP and DC CVADP are supported.

Even though we don't use this feature now, we provide it for consistency
with DCPOP and anticipate it being used in the future.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/cpucaps.h |  3 ++-
 arch/arm64/kernel/cpufeature.c   | 10 ++++++++++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index f6a76e43f39e..defdc67d9ab4 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -61,7 +61,8 @@
 #define ARM64_HAS_GENERIC_AUTH_ARCH		40
 #define ARM64_HAS_GENERIC_AUTH_IMP_DEF		41
 #define ARM64_HAS_IRQ_PRIO_MASKING		42
+#define ARM64_HAS_DCPODP			43
 
-#define ARM64_NCAPS				43
+#define ARM64_NCAPS				44
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index f8b682a3a9f4..9d18e45311fd 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1340,6 +1340,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = ID_AA64ISAR1_DPB_SHIFT,
 		.min_field_value = 1,
 	},
+	{
+		.desc = "Data cache clean to Point of Deep Persistence",
+		.capability = ARM64_HAS_DCPODP,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_cpuid_feature,
+		.sys_reg = SYS_ID_AA64ISAR1_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64ISAR1_DPB_SHIFT,
+		.min_field_value = 2,
+	},
 #endif
 #ifdef CONFIG_ARM64_SVE
 	{
-- 
2.21.0



^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH v5 1/6] arm64: HWCAP: add support for AT_HWCAP2
  2019-04-09  9:52   ` Andrew Murray
@ 2019-04-16 13:51     ` Will Deacon
  -1 siblings, 0 replies; 20+ messages in thread
From: Will Deacon @ 2019-04-16 13:51 UTC (permalink / raw)
  To: Andrew Murray
  Cc: Catalin Marinas, Szabolcs Nagy, dave.martin, linux-arm-kernel,
	Mark Rutland, Phil Blundell, libc-alpha, linux-api,
	Suzuki K Poulose

On Tue, Apr 09, 2019 at 10:52:40AM +0100, Andrew Murray wrote:
> As we will exhaust the first 32 bits of AT_HWCAP let's start
> exposing AT_HWCAP2 to userspace to give us up to 64 caps.
> 
> Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we
> prefer to expand into AT_HWCAP2 in order to provide a consistent
> view to userspace between ILP32 and LP64. However internal to the
> kernel we prefer to continue to use the full space of elf_hwcap.
> 
> To reduce complexity and allow for future expansion, we now
> represent hwcaps in the kernel as ordinals and use a
> KERNEL_HWCAP_ prefix. This allows us to support automatic feature
> based module loading for all our hwcaps.
> 
> We introduce cpu_set_feature to set hwcaps which complements the
> existing cpu_have_feature helper. These helpers allow us to clean
> up existing direct uses of elf_hwcap and reduce any future effort
> required to move beyond 64 caps.
> 
> For convenience we also introduce cpu_{have,set}_named_feature which
> makes use of the cpu_feature macro to allow providing a hwcap name
> without a {KERNEL_}HWCAP_ prefix.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---
>  Documentation/arm64/elf_hwcaps.txt       | 14 +++--
>  arch/arm64/crypto/aes-ce-ccm-glue.c      |  2 +-
>  arch/arm64/crypto/aes-neonbs-glue.c      |  2 +-
>  arch/arm64/crypto/chacha-neon-glue.c     |  2 +-
>  arch/arm64/crypto/crct10dif-ce-glue.c    |  4 +-
>  arch/arm64/crypto/ghash-ce-glue.c        |  8 +--
>  arch/arm64/crypto/nhpoly1305-neon-glue.c |  2 +-
>  arch/arm64/crypto/sha256-glue.c          |  4 +-
>  arch/arm64/include/asm/cpufeature.h      | 22 ++++----
>  arch/arm64/include/asm/hwcap.h           | 52 ++++++++++++++++++-
>  arch/arm64/include/uapi/asm/hwcap.h      |  2 +-
>  arch/arm64/kernel/cpufeature.c           | 66 ++++++++++++------------
>  arch/arm64/kernel/cpuinfo.c              |  2 +-
>  arch/arm64/kernel/fpsimd.c               |  4 +-
>  drivers/clocksource/arm_arch_timer.c     |  8 +++
>  15 files changed, 131 insertions(+), 63 deletions(-)
> 
> diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
> index 13d6691b37be..c04f8e87bab8 100644
> --- a/Documentation/arm64/elf_hwcaps.txt
> +++ b/Documentation/arm64/elf_hwcaps.txt
> @@ -13,9 +13,9 @@ architected discovery mechanism available to userspace code at EL0. The
>  kernel exposes the presence of these features to userspace through a set
>  of flags called hwcaps, exposed in the auxilliary vector.
>  
> -Userspace software can test for features by acquiring the AT_HWCAP entry
> -of the auxilliary vector, and testing whether the relevant flags are
> -set, e.g.
> +Userspace software can test for features by acquiring the AT_HWCAP or
> +AT_HWCAP2 entry of the auxiliary vector, and testing whether the relevant
> +flags are set, e.g.
>  
>  bool floating_point_is_present(void)
>  {
> @@ -194,3 +194,11 @@ HWCAP_PACG
>      Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
>      ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
>      Documentation/arm64/pointer-authentication.txt.
> +
> +
> +4. Unused AT_HWCAP bits
> +-----------------------
> +
> +Each AT_HWCAP and AT_HWCAP2 entry provides for up to 32 hwcaps contained
> +in bits [31:0]. For interoperation with userspace we guarantee that bits
> +62 and 63 of AT_HWCAP will always be returned as 0.

I'm a little nervous about the first sentence here, since it could be
taken to mean that we will never allocate 61:32. Mind if I drop it?

> diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
> index aa4ec53281ce..6cc8aff83805 100644
> --- a/drivers/clocksource/arm_arch_timer.c
> +++ b/drivers/clocksource/arm_arch_timer.c
> @@ -833,7 +833,11 @@ static void arch_timer_evtstrm_enable(int divider)
>  	cntkctl |= (divider << ARCH_TIMER_EVT_TRIGGER_SHIFT)
>  			| ARCH_TIMER_VIRT_EVT_EN;
>  	arch_timer_set_cntkctl(cntkctl);
> +#ifdef CONFIG_ARM64
> +	cpu_set_named_feature(EVTSTRM);
> +#else
>  	elf_hwcap |= HWCAP_EVTSTRM;
> +#endif
>  #ifdef CONFIG_COMPAT
>  	compat_elf_hwcap |= COMPAT_HWCAP_EVTSTRM;
>  #endif
> @@ -1055,7 +1059,11 @@ static int arch_timer_cpu_pm_notify(struct notifier_block *self,
>  	} else if (action == CPU_PM_ENTER_FAILED || action == CPU_PM_EXIT) {
>  		arch_timer_set_cntkctl(__this_cpu_read(saved_cntkctl));
>  
> +#ifdef CONFIG_ARM64
> +		if (cpu_have_named_feature(EVTSTRM))
> +#else
>  		if (elf_hwcap & HWCAP_EVTSTRM)
> +#endif

I think this is an indication that the abstraction isn't quite right and
should probably be done in an arch helper via asm/arch_timer.h. However,
that can be done as a separate patch later on.

Will

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v5 1/6] arm64: HWCAP: add support for AT_HWCAP2
@ 2019-04-16 13:51     ` Will Deacon
  0 siblings, 0 replies; 20+ messages in thread
From: Will Deacon @ 2019-04-16 13:51 UTC (permalink / raw)
  To: Andrew Murray
  Cc: Mark Rutland, libc-alpha, Suzuki K Poulose, Szabolcs Nagy,
	Catalin Marinas, Phil Blundell, linux-api, dave.martin,
	linux-arm-kernel

On Tue, Apr 09, 2019 at 10:52:40AM +0100, Andrew Murray wrote:
> As we will exhaust the first 32 bits of AT_HWCAP let's start
> exposing AT_HWCAP2 to userspace to give us up to 64 caps.
> 
> Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we
> prefer to expand into AT_HWCAP2 in order to provide a consistent
> view to userspace between ILP32 and LP64. However internal to the
> kernel we prefer to continue to use the full space of elf_hwcap.
> 
> To reduce complexity and allow for future expansion, we now
> represent hwcaps in the kernel as ordinals and use a
> KERNEL_HWCAP_ prefix. This allows us to support automatic feature
> based module loading for all our hwcaps.
> 
> We introduce cpu_set_feature to set hwcaps which complements the
> existing cpu_have_feature helper. These helpers allow us to clean
> up existing direct uses of elf_hwcap and reduce any future effort
> required to move beyond 64 caps.
> 
> For convenience we also introduce cpu_{have,set}_named_feature which
> makes use of the cpu_feature macro to allow providing a hwcap name
> without a {KERNEL_}HWCAP_ prefix.
> 
> Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> ---
>  Documentation/arm64/elf_hwcaps.txt       | 14 +++--
>  arch/arm64/crypto/aes-ce-ccm-glue.c      |  2 +-
>  arch/arm64/crypto/aes-neonbs-glue.c      |  2 +-
>  arch/arm64/crypto/chacha-neon-glue.c     |  2 +-
>  arch/arm64/crypto/crct10dif-ce-glue.c    |  4 +-
>  arch/arm64/crypto/ghash-ce-glue.c        |  8 +--
>  arch/arm64/crypto/nhpoly1305-neon-glue.c |  2 +-
>  arch/arm64/crypto/sha256-glue.c          |  4 +-
>  arch/arm64/include/asm/cpufeature.h      | 22 ++++----
>  arch/arm64/include/asm/hwcap.h           | 52 ++++++++++++++++++-
>  arch/arm64/include/uapi/asm/hwcap.h      |  2 +-
>  arch/arm64/kernel/cpufeature.c           | 66 ++++++++++++------------
>  arch/arm64/kernel/cpuinfo.c              |  2 +-
>  arch/arm64/kernel/fpsimd.c               |  4 +-
>  drivers/clocksource/arm_arch_timer.c     |  8 +++
>  15 files changed, 131 insertions(+), 63 deletions(-)
> 
> diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
> index 13d6691b37be..c04f8e87bab8 100644
> --- a/Documentation/arm64/elf_hwcaps.txt
> +++ b/Documentation/arm64/elf_hwcaps.txt
> @@ -13,9 +13,9 @@ architected discovery mechanism available to userspace code at EL0. The
>  kernel exposes the presence of these features to userspace through a set
>  of flags called hwcaps, exposed in the auxilliary vector.
>  
> -Userspace software can test for features by acquiring the AT_HWCAP entry
> -of the auxilliary vector, and testing whether the relevant flags are
> -set, e.g.
> +Userspace software can test for features by acquiring the AT_HWCAP or
> +AT_HWCAP2 entry of the auxiliary vector, and testing whether the relevant
> +flags are set, e.g.
>  
>  bool floating_point_is_present(void)
>  {
> @@ -194,3 +194,11 @@ HWCAP_PACG
>      Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
>      ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
>      Documentation/arm64/pointer-authentication.txt.
> +
> +
> +4. Unused AT_HWCAP bits
> +-----------------------
> +
> +Each AT_HWCAP and AT_HWCAP2 entry provides for up to 32 hwcaps contained
> +in bits [31:0]. For interoperation with userspace we guarantee that bits
> +62 and 63 of AT_HWCAP will always be returned as 0.

I'm a little nervous about the first sentence here, since it could be
taken to mean that we will never allocate 61:32. Mind if I drop it?

> diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
> index aa4ec53281ce..6cc8aff83805 100644
> --- a/drivers/clocksource/arm_arch_timer.c
> +++ b/drivers/clocksource/arm_arch_timer.c
> @@ -833,7 +833,11 @@ static void arch_timer_evtstrm_enable(int divider)
>  	cntkctl |= (divider << ARCH_TIMER_EVT_TRIGGER_SHIFT)
>  			| ARCH_TIMER_VIRT_EVT_EN;
>  	arch_timer_set_cntkctl(cntkctl);
> +#ifdef CONFIG_ARM64
> +	cpu_set_named_feature(EVTSTRM);
> +#else
>  	elf_hwcap |= HWCAP_EVTSTRM;
> +#endif
>  #ifdef CONFIG_COMPAT
>  	compat_elf_hwcap |= COMPAT_HWCAP_EVTSTRM;
>  #endif
> @@ -1055,7 +1059,11 @@ static int arch_timer_cpu_pm_notify(struct notifier_block *self,
>  	} else if (action == CPU_PM_ENTER_FAILED || action == CPU_PM_EXIT) {
>  		arch_timer_set_cntkctl(__this_cpu_read(saved_cntkctl));
>  
> +#ifdef CONFIG_ARM64
> +		if (cpu_have_named_feature(EVTSTRM))
> +#else
>  		if (elf_hwcap & HWCAP_EVTSTRM)
> +#endif

I think this is an indication that the abstraction isn't quite right and
should probably be done in an arch helper via asm/arch_timer.h. However,
that can be done as a separate patch later on.

Will


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v5 1/6] arm64: HWCAP: add support for AT_HWCAP2
  2019-04-16 13:51     ` Will Deacon
@ 2019-04-16 16:30       ` Dave Martin
  -1 siblings, 0 replies; 20+ messages in thread
From: Dave Martin @ 2019-04-16 16:30 UTC (permalink / raw)
  To: Will Deacon
  Cc: Andrew Murray, Catalin Marinas, Szabolcs Nagy, linux-arm-kernel,
	Mark Rutland, Phil Blundell, libc-alpha, linux-api,
	Suzuki K Poulose

On Tue, Apr 16, 2019 at 02:51:57PM +0100, Will Deacon wrote:
> On Tue, Apr 09, 2019 at 10:52:40AM +0100, Andrew Murray wrote:
> > As we will exhaust the first 32 bits of AT_HWCAP let's start
> > exposing AT_HWCAP2 to userspace to give us up to 64 caps.
> > 
> > Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we
> > prefer to expand into AT_HWCAP2 in order to provide a consistent
> > view to userspace between ILP32 and LP64. However internal to the
> > kernel we prefer to continue to use the full space of elf_hwcap.
> > 
> > To reduce complexity and allow for future expansion, we now
> > represent hwcaps in the kernel as ordinals and use a
> > KERNEL_HWCAP_ prefix. This allows us to support automatic feature
> > based module loading for all our hwcaps.
> > 
> > We introduce cpu_set_feature to set hwcaps which complements the
> > existing cpu_have_feature helper. These helpers allow us to clean
> > up existing direct uses of elf_hwcap and reduce any future effort
> > required to move beyond 64 caps.
> > 
> > For convenience we also introduce cpu_{have,set}_named_feature which
> > makes use of the cpu_feature macro to allow providing a hwcap name
> > without a {KERNEL_}HWCAP_ prefix.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > ---
> >  Documentation/arm64/elf_hwcaps.txt       | 14 +++--
> >  arch/arm64/crypto/aes-ce-ccm-glue.c      |  2 +-
> >  arch/arm64/crypto/aes-neonbs-glue.c      |  2 +-
> >  arch/arm64/crypto/chacha-neon-glue.c     |  2 +-
> >  arch/arm64/crypto/crct10dif-ce-glue.c    |  4 +-
> >  arch/arm64/crypto/ghash-ce-glue.c        |  8 +--
> >  arch/arm64/crypto/nhpoly1305-neon-glue.c |  2 +-
> >  arch/arm64/crypto/sha256-glue.c          |  4 +-
> >  arch/arm64/include/asm/cpufeature.h      | 22 ++++----
> >  arch/arm64/include/asm/hwcap.h           | 52 ++++++++++++++++++-
> >  arch/arm64/include/uapi/asm/hwcap.h      |  2 +-
> >  arch/arm64/kernel/cpufeature.c           | 66 ++++++++++++------------
> >  arch/arm64/kernel/cpuinfo.c              |  2 +-
> >  arch/arm64/kernel/fpsimd.c               |  4 +-
> >  drivers/clocksource/arm_arch_timer.c     |  8 +++
> >  15 files changed, 131 insertions(+), 63 deletions(-)
> > 
> > diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
> > index 13d6691b37be..c04f8e87bab8 100644
> > --- a/Documentation/arm64/elf_hwcaps.txt
> > +++ b/Documentation/arm64/elf_hwcaps.txt
> > @@ -13,9 +13,9 @@ architected discovery mechanism available to userspace code at EL0. The
> >  kernel exposes the presence of these features to userspace through a set
> >  of flags called hwcaps, exposed in the auxilliary vector.
> >  
> > -Userspace software can test for features by acquiring the AT_HWCAP entry
> > -of the auxilliary vector, and testing whether the relevant flags are
> > -set, e.g.
> > +Userspace software can test for features by acquiring the AT_HWCAP or
> > +AT_HWCAP2 entry of the auxiliary vector, and testing whether the relevant
> > +flags are set, e.g.
> >  
> >  bool floating_point_is_present(void)
> >  {
> > @@ -194,3 +194,11 @@ HWCAP_PACG
> >      Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
> >      ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
> >      Documentation/arm64/pointer-authentication.txt.
> > +
> > +
> > +4. Unused AT_HWCAP bits
> > +-----------------------
> > +
> > +Each AT_HWCAP and AT_HWCAP2 entry provides for up to 32 hwcaps contained
> > +in bits [31:0]. For interoperation with userspace we guarantee that bits
> > +62 and 63 of AT_HWCAP will always be returned as 0.
> 
> I'm a little nervous about the first sentence here, since it could be
> taken to mean that we will never allocate 61:32. Mind if I drop it?

Ack: I don't think we want to say explicitly that we will never use
those bits, apart from AT_HWCAP[63:62] for which there are specific
reasons.

(For now of course, we won't use them.)

> > diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
> > index aa4ec53281ce..6cc8aff83805 100644
> > --- a/drivers/clocksource/arm_arch_timer.c
> > +++ b/drivers/clocksource/arm_arch_timer.c
> > @@ -833,7 +833,11 @@ static void arch_timer_evtstrm_enable(int divider)
> >  	cntkctl |= (divider << ARCH_TIMER_EVT_TRIGGER_SHIFT)
> >  			| ARCH_TIMER_VIRT_EVT_EN;
> >  	arch_timer_set_cntkctl(cntkctl);
> > +#ifdef CONFIG_ARM64
> > +	cpu_set_named_feature(EVTSTRM);
> > +#else
> >  	elf_hwcap |= HWCAP_EVTSTRM;
> > +#endif
> >  #ifdef CONFIG_COMPAT
> >  	compat_elf_hwcap |= COMPAT_HWCAP_EVTSTRM;
> >  #endif
> > @@ -1055,7 +1059,11 @@ static int arch_timer_cpu_pm_notify(struct notifier_block *self,
> >  	} else if (action == CPU_PM_ENTER_FAILED || action == CPU_PM_EXIT) {
> >  		arch_timer_set_cntkctl(__this_cpu_read(saved_cntkctl));
> >  
> > +#ifdef CONFIG_ARM64
> > +		if (cpu_have_named_feature(EVTSTRM))
> > +#else
> >  		if (elf_hwcap & HWCAP_EVTSTRM)
> > +#endif
> 
> I think this is an indication that the abstraction isn't quite right and
> should probably be done in an arch helper via asm/arch_timer.h. However,
> that can be done as a separate patch later on.

It probably does make sense to add an arch-specific helper for that.

Given that we don't want to encourage this kind of poking about in
elf_hwcap, it might make sense to have a single-purpose helper just for
checking this flag.
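
Something along these lines, perhaps (purely a sketch; the helper names are
invented here, and the arm64 versions would sit in asm/arch_timer.h on top of
the accessors from this series):

        static inline void arch_timer_set_evtstrm_feature(void)
        {
                cpu_set_named_feature(EVTSTRM);
        }

        static inline bool arch_timer_have_evtstrm_feature(void)
        {
                return cpu_have_named_feature(EVTSTRM);
        }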

Cheers
---Dave

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v5 1/6] arm64: HWCAP: add support for AT_HWCAP2
@ 2019-04-16 16:30       ` Dave Martin
  0 siblings, 0 replies; 20+ messages in thread
From: Dave Martin @ 2019-04-16 16:30 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, libc-alpha, Suzuki K Poulose, Szabolcs Nagy,
	Catalin Marinas, Phil Blundell, linux-api, Andrew Murray,
	linux-arm-kernel

On Tue, Apr 16, 2019 at 02:51:57PM +0100, Will Deacon wrote:
> On Tue, Apr 09, 2019 at 10:52:40AM +0100, Andrew Murray wrote:
> > As we will exhaust the first 32 bits of AT_HWCAP let's start
> > exposing AT_HWCAP2 to userspace to give us up to 64 caps.
> > 
> > Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we
> > prefer to expand into AT_HWCAP2 in order to provide a consistent
> > view to userspace between ILP32 and LP64. However internal to the
> > kernel we prefer to continue to use the full space of elf_hwcap.
> > 
> > To reduce complexity and allow for future expansion, we now
> > represent hwcaps in the kernel as ordinals and use a
> > KERNEL_HWCAP_ prefix. This allows us to support automatic feature
> > based module loading for all our hwcaps.
> > 
> > We introduce cpu_set_feature to set hwcaps which complements the
> > existing cpu_have_feature helper. These helpers allow us to clean
> > up existing direct uses of elf_hwcap and reduce any future effort
> > required to move beyond 64 caps.
> > 
> > For convenience we also introduce cpu_{have,set}_named_feature which
> > makes use of the cpu_feature macro to allow providing a hwcap name
> > without a {KERNEL_}HWCAP_ prefix.
> > 
> > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > ---
> >  Documentation/arm64/elf_hwcaps.txt       | 14 +++--
> >  arch/arm64/crypto/aes-ce-ccm-glue.c      |  2 +-
> >  arch/arm64/crypto/aes-neonbs-glue.c      |  2 +-
> >  arch/arm64/crypto/chacha-neon-glue.c     |  2 +-
> >  arch/arm64/crypto/crct10dif-ce-glue.c    |  4 +-
> >  arch/arm64/crypto/ghash-ce-glue.c        |  8 +--
> >  arch/arm64/crypto/nhpoly1305-neon-glue.c |  2 +-
> >  arch/arm64/crypto/sha256-glue.c          |  4 +-
> >  arch/arm64/include/asm/cpufeature.h      | 22 ++++----
> >  arch/arm64/include/asm/hwcap.h           | 52 ++++++++++++++++++-
> >  arch/arm64/include/uapi/asm/hwcap.h      |  2 +-
> >  arch/arm64/kernel/cpufeature.c           | 66 ++++++++++++------------
> >  arch/arm64/kernel/cpuinfo.c              |  2 +-
> >  arch/arm64/kernel/fpsimd.c               |  4 +-
> >  drivers/clocksource/arm_arch_timer.c     |  8 +++
> >  15 files changed, 131 insertions(+), 63 deletions(-)
> > 
> > diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
> > index 13d6691b37be..c04f8e87bab8 100644
> > --- a/Documentation/arm64/elf_hwcaps.txt
> > +++ b/Documentation/arm64/elf_hwcaps.txt
> > @@ -13,9 +13,9 @@ architected discovery mechanism available to userspace code at EL0. The
> >  kernel exposes the presence of these features to userspace through a set
> >  of flags called hwcaps, exposed in the auxilliary vector.
> >  
> > -Userspace software can test for features by acquiring the AT_HWCAP entry
> > -of the auxilliary vector, and testing whether the relevant flags are
> > -set, e.g.
> > +Userspace software can test for features by acquiring the AT_HWCAP or
> > +AT_HWCAP2 entry of the auxiliary vector, and testing whether the relevant
> > +flags are set, e.g.
> >  
> >  bool floating_point_is_present(void)
> >  {
> > @@ -194,3 +194,11 @@ HWCAP_PACG
> >      Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
> >      ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
> >      Documentation/arm64/pointer-authentication.txt.
> > +
> > +
> > +4. Unused AT_HWCAP bits
> > +-----------------------
> > +
> > +Each AT_HWCAP and AT_HWCAP2 entry provides for up to 32 hwcaps contained
> > +in bits [31:0]. For interoperation with userspace we guarantee that bits
> > +62 and 63 of AT_HWCAP will always be returned as 0.
> 
> I'm a little nervous about the first sentence here, since it could be
> taken to mean that we will never allocate 61:32. Mind if I drop it?

Ack: I don't think we want to say explicitly that we will never use
those bits, apart from AT_HWCAP[63:62] for which there are specific
reasons.

(For now of course, we won't use them.)

> > diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
> > index aa4ec53281ce..6cc8aff83805 100644
> > --- a/drivers/clocksource/arm_arch_timer.c
> > +++ b/drivers/clocksource/arm_arch_timer.c
> > @@ -833,7 +833,11 @@ static void arch_timer_evtstrm_enable(int divider)
> >  	cntkctl |= (divider << ARCH_TIMER_EVT_TRIGGER_SHIFT)
> >  			| ARCH_TIMER_VIRT_EVT_EN;
> >  	arch_timer_set_cntkctl(cntkctl);
> > +#ifdef CONFIG_ARM64
> > +	cpu_set_named_feature(EVTSTRM);
> > +#else
> >  	elf_hwcap |= HWCAP_EVTSTRM;
> > +#endif
> >  #ifdef CONFIG_COMPAT
> >  	compat_elf_hwcap |= COMPAT_HWCAP_EVTSTRM;
> >  #endif
> > @@ -1055,7 +1059,11 @@ static int arch_timer_cpu_pm_notify(struct notifier_block *self,
> >  	} else if (action == CPU_PM_ENTER_FAILED || action == CPU_PM_EXIT) {
> >  		arch_timer_set_cntkctl(__this_cpu_read(saved_cntkctl));
> >  
> > +#ifdef CONFIG_ARM64
> > +		if (cpu_have_named_feature(EVTSTRM))
> > +#else
> >  		if (elf_hwcap & HWCAP_EVTSTRM)
> > +#endif
> 
> I think this is an indication that the abstraction isn't quite right and
> should probably be done in an arch helper via asm/arch_timer.h. However,
> that can be done as a separate patch later on.

It probably does make sense to add an arch-specific helper for that.

Given that we don't want to encourage this kind of poking about in
elf_hwcap, it might make sense to have a single-purpose helper just for
checking this flag.

Cheers
---Dave


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v5 1/6] arm64: HWCAP: add support for AT_HWCAP2
  2019-04-16 16:30       ` Dave Martin
@ 2019-04-30 11:09         ` Andrew Murray
  -1 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-30 11:09 UTC (permalink / raw)
  To: Dave Martin
  Cc: Will Deacon, Catalin Marinas, Szabolcs Nagy, linux-arm-kernel,
	Mark Rutland, Phil Blundell, libc-alpha, linux-api,
	Suzuki K Poulose

On Tue, Apr 16, 2019 at 05:30:41PM +0100, Dave Martin wrote:
> On Tue, Apr 16, 2019 at 02:51:57PM +0100, Will Deacon wrote:
> > On Tue, Apr 09, 2019 at 10:52:40AM +0100, Andrew Murray wrote:
> > > As we will exhaust the first 32 bits of AT_HWCAP let's start
> > > exposing AT_HWCAP2 to userspace to give us up to 64 caps.
> > > 
> > > Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we
> > > prefer to expand into AT_HWCAP2 in order to provide a consistent
> > > view to userspace between ILP32 and LP64. However internal to the
> > > kernel we prefer to continue to use the full space of elf_hwcap.
> > > 
> > > To reduce complexity and allow for future expansion, we now
> > > represent hwcaps in the kernel as ordinals and use a
> > > KERNEL_HWCAP_ prefix. This allows us to support automatic feature
> > > based module loading for all our hwcaps.
> > > 
> > > We introduce cpu_set_feature to set hwcaps which complements the
> > > existing cpu_have_feature helper. These helpers allow us to clean
> > > up existing direct uses of elf_hwcap and reduce any future effort
> > > required to move beyond 64 caps.
> > > 
> > > For convenience we also introduce cpu_{have,set}_named_feature which
> > > makes use of the cpu_feature macro to allow providing a hwcap name
> > > without a {KERNEL_}HWCAP_ prefix.
> > > 
> > > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > > ---
> > >  Documentation/arm64/elf_hwcaps.txt       | 14 +++--
> > >  arch/arm64/crypto/aes-ce-ccm-glue.c      |  2 +-
> > >  arch/arm64/crypto/aes-neonbs-glue.c      |  2 +-
> > >  arch/arm64/crypto/chacha-neon-glue.c     |  2 +-
> > >  arch/arm64/crypto/crct10dif-ce-glue.c    |  4 +-
> > >  arch/arm64/crypto/ghash-ce-glue.c        |  8 +--
> > >  arch/arm64/crypto/nhpoly1305-neon-glue.c |  2 +-
> > >  arch/arm64/crypto/sha256-glue.c          |  4 +-
> > >  arch/arm64/include/asm/cpufeature.h      | 22 ++++----
> > >  arch/arm64/include/asm/hwcap.h           | 52 ++++++++++++++++++-
> > >  arch/arm64/include/uapi/asm/hwcap.h      |  2 +-
> > >  arch/arm64/kernel/cpufeature.c           | 66 ++++++++++++------------
> > >  arch/arm64/kernel/cpuinfo.c              |  2 +-
> > >  arch/arm64/kernel/fpsimd.c               |  4 +-
> > >  drivers/clocksource/arm_arch_timer.c     |  8 +++
> > >  15 files changed, 131 insertions(+), 63 deletions(-)
> > > 
> > > diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
> > > index 13d6691b37be..c04f8e87bab8 100644
> > > --- a/Documentation/arm64/elf_hwcaps.txt
> > > +++ b/Documentation/arm64/elf_hwcaps.txt
> > > @@ -13,9 +13,9 @@ architected discovery mechanism available to userspace code at EL0. The
> > >  kernel exposes the presence of these features to userspace through a set
> > >  of flags called hwcaps, exposed in the auxilliary vector.
> > >  
> > > -Userspace software can test for features by acquiring the AT_HWCAP entry
> > > -of the auxilliary vector, and testing whether the relevant flags are
> > > -set, e.g.
> > > +Userspace software can test for features by acquiring the AT_HWCAP or
> > > +AT_HWCAP2 entry of the auxiliary vector, and testing whether the relevant
> > > +flags are set, e.g.
> > >  
> > >  bool floating_point_is_present(void)
> > >  {
> > > @@ -194,3 +194,11 @@ HWCAP_PACG
> > >      Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
> > >      ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
> > >      Documentation/arm64/pointer-authentication.txt.
> > > +
> > > +
> > > +4. Unused AT_HWCAP bits
> > > +-----------------------
> > > +
> > > +Each AT_HWCAP and AT_HWCAP2 entry provides for up to 32 hwcaps contained
> > > +in bits [31:0]. For interoperation with userspace we guarantee that bits
> > > +62 and 63 of AT_HWCAP will always be returned as 0.
> > 
> > I'm a little nervous about the first sentence here, since it could be
> > taken to mean that we will never allocate 61:32. Mind if I drop it?
> 
> Ack: I don't think we want to say explicitly that we will never use
> those bits, apart from AT_HWCAP[63:62] for which there are specific
> reasons.

No problem with me. Thanks for making this change (and const_ilog2).

> 
> (For now of course, we won't use them.)
> 
> > > diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
> > > index aa4ec53281ce..6cc8aff83805 100644
> > > --- a/drivers/clocksource/arm_arch_timer.c
> > > +++ b/drivers/clocksource/arm_arch_timer.c
> > > @@ -833,7 +833,11 @@ static void arch_timer_evtstrm_enable(int divider)
> > >  	cntkctl |= (divider << ARCH_TIMER_EVT_TRIGGER_SHIFT)
> > >  			| ARCH_TIMER_VIRT_EVT_EN;
> > >  	arch_timer_set_cntkctl(cntkctl);
> > > +#ifdef CONFIG_ARM64
> > > +	cpu_set_named_feature(EVTSTRM);
> > > +#else
> > >  	elf_hwcap |= HWCAP_EVTSTRM;
> > > +#endif
> > >  #ifdef CONFIG_COMPAT
> > >  	compat_elf_hwcap |= COMPAT_HWCAP_EVTSTRM;
> > >  #endif
> > > @@ -1055,7 +1059,11 @@ static int arch_timer_cpu_pm_notify(struct notifier_block *self,
> > >  	} else if (action == CPU_PM_ENTER_FAILED || action == CPU_PM_EXIT) {
> > >  		arch_timer_set_cntkctl(__this_cpu_read(saved_cntkctl));
> > >  
> > > +#ifdef CONFIG_ARM64
> > > +		if (cpu_have_named_feature(EVTSTRM))
> > > +#else
> > >  		if (elf_hwcap & HWCAP_EVTSTRM)
> > > +#endif
> > 
> > I think this is an indication that the abstraction isn't quite right and
> > should probably be done in an arch helper via asm/arch_timer.h. However,
> > that can be done as a separate patch later on.
> 
> It probably does make sense to add an arch-specific helper for that.
> 
> Given that we don't want to encourage this kind of poking about in
> elf_hwcap, it might make sense to have a single-purpose helper just for
> checking this flag.

I'll propose something in a separate patch.

Thanks,

Andrew Murray

> 
> Cheers
> ---Dave

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v5 1/6] arm64: HWCAP: add support for AT_HWCAP2
@ 2019-04-30 11:09         ` Andrew Murray
  0 siblings, 0 replies; 20+ messages in thread
From: Andrew Murray @ 2019-04-30 11:09 UTC (permalink / raw)
  To: Dave Martin
  Cc: Mark Rutland, libc-alpha, Suzuki K Poulose, Szabolcs Nagy,
	Catalin Marinas, Will Deacon, Phil Blundell, linux-api,
	linux-arm-kernel

On Tue, Apr 16, 2019 at 05:30:41PM +0100, Dave Martin wrote:
> On Tue, Apr 16, 2019 at 02:51:57PM +0100, Will Deacon wrote:
> > On Tue, Apr 09, 2019 at 10:52:40AM +0100, Andrew Murray wrote:
> > > As we will exhaust the first 32 bits of AT_HWCAP let's start
> > > exposing AT_HWCAP2 to userspace to give us up to 64 caps.
> > > 
> > > Whilst it's possible to use the remaining 32 bits of AT_HWCAP, we
> > > prefer to expand into AT_HWCAP2 in order to provide a consistent
> > > view to userspace between ILP32 and LP64. However internal to the
> > > kernel we prefer to continue to use the full space of elf_hwcap.
> > > 
> > > To reduce complexity and allow for future expansion, we now
> > > represent hwcaps in the kernel as ordinals and use a
> > > KERNEL_HWCAP_ prefix. This allows us to support automatic feature
> > > based module loading for all our hwcaps.
> > > 
> > > We introduce cpu_set_feature to set hwcaps which complements the
> > > existing cpu_have_feature helper. These helpers allow us to clean
> > > up existing direct uses of elf_hwcap and reduce any future effort
> > > required to move beyond 64 caps.
> > > 
> > > For convenience we also introduce cpu_{have,set}_named_feature which
> > > makes use of the cpu_feature macro to allow providing a hwcap name
> > > without a {KERNEL_}HWCAP_ prefix.
> > > 
> > > Signed-off-by: Andrew Murray <andrew.murray@arm.com>
> > > ---
> > >  Documentation/arm64/elf_hwcaps.txt       | 14 +++--
> > >  arch/arm64/crypto/aes-ce-ccm-glue.c      |  2 +-
> > >  arch/arm64/crypto/aes-neonbs-glue.c      |  2 +-
> > >  arch/arm64/crypto/chacha-neon-glue.c     |  2 +-
> > >  arch/arm64/crypto/crct10dif-ce-glue.c    |  4 +-
> > >  arch/arm64/crypto/ghash-ce-glue.c        |  8 +--
> > >  arch/arm64/crypto/nhpoly1305-neon-glue.c |  2 +-
> > >  arch/arm64/crypto/sha256-glue.c          |  4 +-
> > >  arch/arm64/include/asm/cpufeature.h      | 22 ++++----
> > >  arch/arm64/include/asm/hwcap.h           | 52 ++++++++++++++++++-
> > >  arch/arm64/include/uapi/asm/hwcap.h      |  2 +-
> > >  arch/arm64/kernel/cpufeature.c           | 66 ++++++++++++------------
> > >  arch/arm64/kernel/cpuinfo.c              |  2 +-
> > >  arch/arm64/kernel/fpsimd.c               |  4 +-
> > >  drivers/clocksource/arm_arch_timer.c     |  8 +++
> > >  15 files changed, 131 insertions(+), 63 deletions(-)
> > > 
> > > diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
> > > index 13d6691b37be..c04f8e87bab8 100644
> > > --- a/Documentation/arm64/elf_hwcaps.txt
> > > +++ b/Documentation/arm64/elf_hwcaps.txt
> > > @@ -13,9 +13,9 @@ architected discovery mechanism available to userspace code at EL0. The
> > >  kernel exposes the presence of these features to userspace through a set
> > >  of flags called hwcaps, exposed in the auxilliary vector.
> > >  
> > > -Userspace software can test for features by acquiring the AT_HWCAP entry
> > > -of the auxilliary vector, and testing whether the relevant flags are
> > > -set, e.g.
> > > +Userspace software can test for features by acquiring the AT_HWCAP or
> > > +AT_HWCAP2 entry of the auxiliary vector, and testing whether the relevant
> > > +flags are set, e.g.
> > >  
> > >  bool floating_point_is_present(void)
> > >  {
> > > @@ -194,3 +194,11 @@ HWCAP_PACG
> > >      Functionality implied by ID_AA64ISAR1_EL1.GPA == 0b0001 or
> > >      ID_AA64ISAR1_EL1.GPI == 0b0001, as described by
> > >      Documentation/arm64/pointer-authentication.txt.
> > > +
> > > +
> > > +4. Unused AT_HWCAP bits
> > > +-----------------------
> > > +
> > > +Each AT_HWCAP and AT_HWCAP2 entry provides for up to 32 hwcaps contained
> > > +in bits [31:0]. For interoperation with userspace we guarantee that bits
> > > +62 and 63 of AT_HWCAP will always be returned as 0.
> > 
> > I'm a little nervous about the first sentence here, since it could be
> > taken to mean that we will never allocate 61:32. Mind if I drop it?
> 
> Ack: I don't think we want to say explicitly that we will never use
> those bits, apart from AT_HWCAP[63:62] for which there are specific
> reasons.

No problem with me. Thanks for making this change (and const_ilog2).

> 
> (For now of course, we won't use them.)
> 
> > > diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
> > > index aa4ec53281ce..6cc8aff83805 100644
> > > --- a/drivers/clocksource/arm_arch_timer.c
> > > +++ b/drivers/clocksource/arm_arch_timer.c
> > > @@ -833,7 +833,11 @@ static void arch_timer_evtstrm_enable(int divider)
> > >  	cntkctl |= (divider << ARCH_TIMER_EVT_TRIGGER_SHIFT)
> > >  			| ARCH_TIMER_VIRT_EVT_EN;
> > >  	arch_timer_set_cntkctl(cntkctl);
> > > +#ifdef CONFIG_ARM64
> > > +	cpu_set_named_feature(EVTSTRM);
> > > +#else
> > >  	elf_hwcap |= HWCAP_EVTSTRM;
> > > +#endif
> > >  #ifdef CONFIG_COMPAT
> > >  	compat_elf_hwcap |= COMPAT_HWCAP_EVTSTRM;
> > >  #endif
> > > @@ -1055,7 +1059,11 @@ static int arch_timer_cpu_pm_notify(struct notifier_block *self,
> > >  	} else if (action == CPU_PM_ENTER_FAILED || action == CPU_PM_EXIT) {
> > >  		arch_timer_set_cntkctl(__this_cpu_read(saved_cntkctl));
> > >  
> > > +#ifdef CONFIG_ARM64
> > > +		if (cpu_have_named_feature(EVTSTRM))
> > > +#else
> > >  		if (elf_hwcap & HWCAP_EVTSTRM)
> > > +#endif
> > 
> > I think this is an indication that the abstraction isn't quite right and
> > should probably be done in an arch helper via asm/arch_timer.h. However,
> > that can be done as a separate patch later on.
> 
> It probably does make sense to add an arch-specific helper for that.
> 
> Given that we don't want to encourage this kind of poking about in
> elf_hwcap, it might make sense to have a single-purpose helper just for
> checking this flag.

I'll propose something in a separate patch.

Thanks,

Andrew Murray

> 
> Cheers
> ---Dave


^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2019-04-30 11:10 UTC | newest]

Thread overview: 20+ messages
2019-04-09  9:52 [PATCH v5 0/6] arm64: Initial support for CVADP Andrew Murray
2019-04-09  9:52 ` Andrew Murray
2019-04-09  9:52 ` [PATCH v5 1/6] arm64: HWCAP: add support for AT_HWCAP2 Andrew Murray
2019-04-09  9:52   ` Andrew Murray
2019-04-16 13:51   ` Will Deacon
2019-04-16 13:51     ` Will Deacon
2019-04-16 16:30     ` Dave Martin
2019-04-16 16:30       ` Dave Martin
2019-04-30 11:09       ` Andrew Murray
2019-04-30 11:09         ` Andrew Murray
2019-04-09  9:52 ` [PATCH v5 2/6] arm64: HWCAP: encapsulate elf_hwcap Andrew Murray
2019-04-09  9:52   ` Andrew Murray
2019-04-09  9:52 ` [PATCH v5 3/6] arm64: Handle trapped DC CVADP Andrew Murray
2019-04-09  9:52   ` Andrew Murray
2019-04-09  9:52 ` [PATCH v5 4/6] arm64: Expose DC CVADP to userspace Andrew Murray
2019-04-09  9:52   ` Andrew Murray
2019-04-09  9:52 ` [PATCH v5 5/6] arm64: add CVADP support to the cache maintenance helper Andrew Murray
2019-04-09  9:52   ` Andrew Murray
2019-04-09  9:52 ` [PATCH v5 6/6] arm64: Advertise ARM64_HAS_DCPODP cpu feature Andrew Murray
2019-04-09  9:52   ` Andrew Murray
