xen-devel.lists.xenproject.org archive mirror
* [RFC PATCH 0/4] xen/arm: Sanitize cpuinfo
@ 2021-06-29 17:08 Bertrand Marquis
  2021-06-29 17:08 ` [RFC PATCH 1/4] xen/arm: Import ID registers definitions from Linux Bertrand Marquis
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Bertrand Marquis @ 2021-06-29 17:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Volodymyr Babchuk,
	Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Wei Liu

On the Arm architecture we might have heterogeneous platforms with
different types of cores. As a guest can potentially run on any of those
cores, we have to present it CPU features which are compatible with all
cores and discard the features which are only available on some cores.

As the features can be fairly complex, deducing from 2 different values
of a feature field what the acceptable minimal value should be can be
complex (and sometimes impossible).

This RFC is a first attempt at a solution, to start a discussion and get
input from the community.

To reduce the implementation effort in Xen, this series imports the
structures and filtering system used by Linux in order to build a
cpuinfo containing the best values compatible with all cores on the
platform.

The series starts by importing the necessary code and structures from
Linux and then uses them to sanitize the boot cpuinfo.
Finally it simplifies some Xen code which was doing the same in p2m,
and allows heterogeneous platforms to be used on arm64.
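
As an illustration of the idea (a minimal standalone sketch, not code
from this series; the register values are hypothetical), sanitizing a
"lower value is safe" ID register field boils down to taking the
field-wise minimum across cores:

/* Illustrative sketch only: LOWER_SAFE sanitization of one 4-bit
 * ID register field, here ID_AA64ISAR0_EL1.AES. */
#include <stdint.h>
#include <stdio.h>

static unsigned int field(uint64_t reg, unsigned int shift)
{
    return (reg >> shift) & 0xf; /* ID register fields are 4 bits wide */
}

int main(void)
{
    /* Hypothetical per-core values: AES = 2 (PMULL) vs AES = 1 (AES only) */
    uint64_t big_core = 0x20, little_core = 0x10;
    unsigned int shift = 4; /* ID_AA64ISAR0_AES_SHIFT */
    unsigned int b = field(big_core, shift), l = field(little_core, shift);

    /* LOWER_SAFE: the smaller value is the one every core can honour. */
    printf("sanitized AES field: %u\n", b < l ? b : l); /* prints 1 */
    return 0;
}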


Bertrand Marquis (4):
  xen/arm: Import ID registers definitions from Linux
  xen/arm: Import ID features sanitize from linux
  xen/arm: Sanitize cpuinfo ID registers fields
  xen/arm: Use sanitize values for p2m

 xen/arch/arm/arm64/Makefile         |   1 +
 xen/arch/arm/arm64/cpusanitize.c    | 628 ++++++++++++++++++++++++++++
 xen/arch/arm/p2m.c                  |  30 +-
 xen/arch/arm/smpboot.c              |   5 +-
 xen/include/asm-arm/arm64/sysregs.h | 312 ++++++++++++++
 xen/include/asm-arm/cpufeature.h    |   9 +
 xen/include/xen/lib.h               |   1 +
 7 files changed, 965 insertions(+), 21 deletions(-)
 create mode 100644 xen/arch/arm/arm64/cpusanitize.c

-- 
2.17.1




* [RFC PATCH 1/4] xen/arm: Import ID registers definitions from Linux
  2021-06-29 17:08 [RFC PATCH 0/4] xen/arm: Sanitize cpuinfo Bertrand Marquis
@ 2021-06-29 17:08 ` Bertrand Marquis
  2021-06-29 17:08 ` [RFC PATCH 2/4] xen/arm: Import ID features sanitize from linux Bertrand Marquis
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 16+ messages in thread
From: Bertrand Marquis @ 2021-06-29 17:08 UTC (permalink / raw)
  To: xen-devel; +Cc: Stefano Stabellini, Julien Grall, Volodymyr Babchuk

Import some ID register definitions from the Linux sysreg header to get
the required shift definitions for all ID register fields.

Those are required to reuse the cpufeature sanitization system from the
Linux kernel.
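
For illustration (a sketch, not part of this patch), a shift definition
is used to extract one 4-bit feature field from an ID register value,
e.g. the SVE field of ID_AA64PFR0_EL1:

/* Sketch only: extract the SVE field using the imported shift. */
#include <stdint.h>

#define ID_AA64PFR0_SVE_SHIFT 32 /* as imported in this patch */

static inline unsigned int pfr0_sve(uint64_t pfr0)
{
    return (pfr0 >> ID_AA64PFR0_SVE_SHIFT) & 0xf; /* fields are 4 bits */
}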

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/include/asm-arm/arm64/sysregs.h | 312 ++++++++++++++++++++++++++++
 1 file changed, 312 insertions(+)

diff --git a/xen/include/asm-arm/arm64/sysregs.h b/xen/include/asm-arm/arm64/sysregs.h
index 077fd95fb7..555f1e08b2 100644
--- a/xen/include/asm-arm/arm64/sysregs.h
+++ b/xen/include/asm-arm/arm64/sysregs.h
@@ -85,6 +85,318 @@
 #define ID_DFR1_EL1                 S3_0_C0_C3_5
 #endif
 
+/* ID registers (imported from arm64/include/asm/sysreg.h in Linux) */
+
+/* id_aa64isar0 */
+#define ID_AA64ISAR0_RNDR_SHIFT      60
+#define ID_AA64ISAR0_TLB_SHIFT       56
+#define ID_AA64ISAR0_TS_SHIFT        52
+#define ID_AA64ISAR0_FHM_SHIFT       48
+#define ID_AA64ISAR0_DP_SHIFT        44
+#define ID_AA64ISAR0_SM4_SHIFT       40
+#define ID_AA64ISAR0_SM3_SHIFT       36
+#define ID_AA64ISAR0_SHA3_SHIFT      32
+#define ID_AA64ISAR0_RDM_SHIFT       28
+#define ID_AA64ISAR0_ATOMICS_SHIFT   20
+#define ID_AA64ISAR0_CRC32_SHIFT     16
+#define ID_AA64ISAR0_SHA2_SHIFT      12
+#define ID_AA64ISAR0_SHA1_SHIFT      8
+#define ID_AA64ISAR0_AES_SHIFT       4
+
+#define ID_AA64ISAR0_TLB_RANGE_NI    0x0
+#define ID_AA64ISAR0_TLB_RANGE       0x2
+
+/* id_aa64isar1 */
+#define ID_AA64ISAR1_I8MM_SHIFT      52
+#define ID_AA64ISAR1_DGH_SHIFT       48
+#define ID_AA64ISAR1_BF16_SHIFT      44
+#define ID_AA64ISAR1_SPECRES_SHIFT   40
+#define ID_AA64ISAR1_SB_SHIFT        36
+#define ID_AA64ISAR1_FRINTTS_SHIFT   32
+#define ID_AA64ISAR1_GPI_SHIFT       28
+#define ID_AA64ISAR1_GPA_SHIFT       24
+#define ID_AA64ISAR1_LRCPC_SHIFT     20
+#define ID_AA64ISAR1_FCMA_SHIFT      16
+#define ID_AA64ISAR1_JSCVT_SHIFT     12
+#define ID_AA64ISAR1_API_SHIFT       8
+#define ID_AA64ISAR1_APA_SHIFT       4
+#define ID_AA64ISAR1_DPB_SHIFT       0
+
+#define ID_AA64ISAR1_APA_NI                     0x0
+#define ID_AA64ISAR1_APA_ARCHITECTED            0x1
+#define ID_AA64ISAR1_APA_ARCH_EPAC              0x2
+#define ID_AA64ISAR1_APA_ARCH_EPAC2             0x3
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC        0x4
+#define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB    0x5
+#define ID_AA64ISAR1_API_NI                     0x0
+#define ID_AA64ISAR1_API_IMP_DEF                0x1
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC           0x2
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2          0x3
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC     0x4
+#define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB 0x5
+#define ID_AA64ISAR1_GPA_NI                     0x0
+#define ID_AA64ISAR1_GPA_ARCHITECTED            0x1
+#define ID_AA64ISAR1_GPI_NI                     0x0
+#define ID_AA64ISAR1_GPI_IMP_DEF                0x1
+
+/* id_aa64pfr0 */
+#define ID_AA64PFR0_CSV3_SHIFT       60
+#define ID_AA64PFR0_CSV2_SHIFT       56
+#define ID_AA64PFR0_DIT_SHIFT        48
+#define ID_AA64PFR0_AMU_SHIFT        44
+#define ID_AA64PFR0_MPAM_SHIFT       40
+#define ID_AA64PFR0_SEL2_SHIFT       36
+#define ID_AA64PFR0_SVE_SHIFT        32
+#define ID_AA64PFR0_RAS_SHIFT        28
+#define ID_AA64PFR0_GIC_SHIFT        24
+#define ID_AA64PFR0_ASIMD_SHIFT      20
+#define ID_AA64PFR0_FP_SHIFT         16
+#define ID_AA64PFR0_EL3_SHIFT        12
+#define ID_AA64PFR0_EL2_SHIFT        8
+#define ID_AA64PFR0_EL1_SHIFT        4
+#define ID_AA64PFR0_EL0_SHIFT        0
+
+#define ID_AA64PFR0_AMU              0x1
+#define ID_AA64PFR0_SVE              0x1
+#define ID_AA64PFR0_RAS_V1           0x1
+#define ID_AA64PFR0_FP_NI            0xf
+#define ID_AA64PFR0_FP_SUPPORTED     0x0
+#define ID_AA64PFR0_ASIMD_NI         0xf
+#define ID_AA64PFR0_ASIMD_SUPPORTED  0x0
+#define ID_AA64PFR0_EL1_64BIT_ONLY   0x1
+#define ID_AA64PFR0_EL1_32BIT_64BIT  0x2
+#define ID_AA64PFR0_EL0_64BIT_ONLY   0x1
+#define ID_AA64PFR0_EL0_32BIT_64BIT  0x2
+
+/* id_aa64pfr1 */
+#define ID_AA64PFR1_MPAMFRAC_SHIFT   16
+#define ID_AA64PFR1_RASFRAC_SHIFT    12
+#define ID_AA64PFR1_MTE_SHIFT        8
+#define ID_AA64PFR1_SSBS_SHIFT       4
+#define ID_AA64PFR1_BT_SHIFT         0
+
+#define ID_AA64PFR1_SSBS_PSTATE_NI    0
+#define ID_AA64PFR1_SSBS_PSTATE_ONLY  1
+#define ID_AA64PFR1_SSBS_PSTATE_INSNS 2
+#define ID_AA64PFR1_BT_BTI            0x1
+
+#define ID_AA64PFR1_MTE_NI           0x0
+#define ID_AA64PFR1_MTE_EL0          0x1
+#define ID_AA64PFR1_MTE              0x2
+
+/* id_aa64zfr0 */
+#define ID_AA64ZFR0_F64MM_SHIFT      56
+#define ID_AA64ZFR0_F32MM_SHIFT      52
+#define ID_AA64ZFR0_I8MM_SHIFT       44
+#define ID_AA64ZFR0_SM4_SHIFT        40
+#define ID_AA64ZFR0_SHA3_SHIFT       32
+#define ID_AA64ZFR0_BF16_SHIFT       20
+#define ID_AA64ZFR0_BITPERM_SHIFT    16
+#define ID_AA64ZFR0_AES_SHIFT        4
+#define ID_AA64ZFR0_SVEVER_SHIFT     0
+
+#define ID_AA64ZFR0_F64MM            0x1
+#define ID_AA64ZFR0_F32MM            0x1
+#define ID_AA64ZFR0_I8MM             0x1
+#define ID_AA64ZFR0_BF16             0x1
+#define ID_AA64ZFR0_SM4              0x1
+#define ID_AA64ZFR0_SHA3             0x1
+#define ID_AA64ZFR0_BITPERM          0x1
+#define ID_AA64ZFR0_AES              0x1
+#define ID_AA64ZFR0_AES_PMULL        0x2
+#define ID_AA64ZFR0_SVEVER_SVE2      0x1
+
+/* id_aa64mmfr0 */
+#define ID_AA64MMFR0_ECV_SHIFT       60
+#define ID_AA64MMFR0_FGT_SHIFT       56
+#define ID_AA64MMFR0_EXS_SHIFT       44
+#define ID_AA64MMFR0_TGRAN4_2_SHIFT  40
+#define ID_AA64MMFR0_TGRAN64_2_SHIFT 36
+#define ID_AA64MMFR0_TGRAN16_2_SHIFT 32
+#define ID_AA64MMFR0_TGRAN4_SHIFT    28
+#define ID_AA64MMFR0_TGRAN64_SHIFT   24
+#define ID_AA64MMFR0_TGRAN16_SHIFT   20
+#define ID_AA64MMFR0_BIGENDEL0_SHIFT 16
+#define ID_AA64MMFR0_SNSMEM_SHIFT    12
+#define ID_AA64MMFR0_BIGENDEL_SHIFT  8
+#define ID_AA64MMFR0_ASID_SHIFT      4
+#define ID_AA64MMFR0_PARANGE_SHIFT   0
+
+#define ID_AA64MMFR0_TGRAN4_NI         0xf
+#define ID_AA64MMFR0_TGRAN4_SUPPORTED  0x0
+#define ID_AA64MMFR0_TGRAN64_NI        0xf
+#define ID_AA64MMFR0_TGRAN64_SUPPORTED 0x0
+#define ID_AA64MMFR0_TGRAN16_NI        0x0
+#define ID_AA64MMFR0_TGRAN16_SUPPORTED 0x1
+#define ID_AA64MMFR0_PARANGE_48        0x5
+#define ID_AA64MMFR0_PARANGE_52        0x6
+
+/* id_aa64mmfr1 */
+#define ID_AA64MMFR1_ETS_SHIFT       36
+#define ID_AA64MMFR1_TWED_SHIFT      32
+#define ID_AA64MMFR1_XNX_SHIFT       28
+#define ID_AA64MMFR1_SPECSEI_SHIFT   24
+#define ID_AA64MMFR1_PAN_SHIFT       20
+#define ID_AA64MMFR1_LOR_SHIFT       16
+#define ID_AA64MMFR1_HPD_SHIFT       12
+#define ID_AA64MMFR1_VHE_SHIFT       8
+#define ID_AA64MMFR1_VMIDBITS_SHIFT  4
+#define ID_AA64MMFR1_HADBS_SHIFT     0
+
+#define ID_AA64MMFR1_VMIDBITS_8      0
+#define ID_AA64MMFR1_VMIDBITS_16     2
+
+/* id_aa64mmfr2 */
+#define ID_AA64MMFR2_E0PD_SHIFT      60
+#define ID_AA64MMFR2_EVT_SHIFT       56
+#define ID_AA64MMFR2_BBM_SHIFT       52
+#define ID_AA64MMFR2_TTL_SHIFT       48
+#define ID_AA64MMFR2_FWB_SHIFT       40
+#define ID_AA64MMFR2_IDS_SHIFT       36
+#define ID_AA64MMFR2_AT_SHIFT        32
+#define ID_AA64MMFR2_ST_SHIFT        28
+#define ID_AA64MMFR2_NV_SHIFT        24
+#define ID_AA64MMFR2_CCIDX_SHIFT     20
+#define ID_AA64MMFR2_LVA_SHIFT       16
+#define ID_AA64MMFR2_IESB_SHIFT      12
+#define ID_AA64MMFR2_LSM_SHIFT       8
+#define ID_AA64MMFR2_UAO_SHIFT       4
+#define ID_AA64MMFR2_CNP_SHIFT       0
+
+/* id_aa64dfr0 */
+#define ID_AA64DFR0_DOUBLELOCK_SHIFT 36
+#define ID_AA64DFR0_PMSVER_SHIFT     32
+#define ID_AA64DFR0_CTX_CMPS_SHIFT   28
+#define ID_AA64DFR0_WRPS_SHIFT       20
+#define ID_AA64DFR0_BRPS_SHIFT       12
+#define ID_AA64DFR0_PMUVER_SHIFT     8
+#define ID_AA64DFR0_TRACEVER_SHIFT   4
+#define ID_AA64DFR0_DEBUGVER_SHIFT   0
+
+#define ID_AA64DFR0_PMUVER_8_0       0x1
+#define ID_AA64DFR0_PMUVER_8_1       0x4
+#define ID_AA64DFR0_PMUVER_8_4       0x5
+#define ID_AA64DFR0_PMUVER_8_5       0x6
+#define ID_AA64DFR0_PMUVER_IMP_DEF   0xf
+
+#define ID_DFR0_PERFMON_SHIFT        24
+
+#define ID_DFR0_PERFMON_8_1          0x4
+
+#define ID_ISAR4_SWP_FRAC_SHIFT        28
+#define ID_ISAR4_PSR_M_SHIFT           24
+#define ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT 20
+#define ID_ISAR4_BARRIER_SHIFT         16
+#define ID_ISAR4_SMC_SHIFT             12
+#define ID_ISAR4_WRITEBACK_SHIFT       8
+#define ID_ISAR4_WITHSHIFTS_SHIFT      4
+#define ID_ISAR4_UNPRIV_SHIFT          0
+
+#define ID_DFR1_MTPMU_SHIFT          0
+
+#define ID_ISAR0_DIVIDE_SHIFT        24
+#define ID_ISAR0_DEBUG_SHIFT         20
+#define ID_ISAR0_COPROC_SHIFT        16
+#define ID_ISAR0_CMPBRANCH_SHIFT     12
+#define ID_ISAR0_BITFIELD_SHIFT      8
+#define ID_ISAR0_BITCOUNT_SHIFT      4
+#define ID_ISAR0_SWAP_SHIFT          0
+
+#define ID_ISAR5_RDM_SHIFT           24
+#define ID_ISAR5_CRC32_SHIFT         16
+#define ID_ISAR5_SHA2_SHIFT          12
+#define ID_ISAR5_SHA1_SHIFT          8
+#define ID_ISAR5_AES_SHIFT           4
+#define ID_ISAR5_SEVL_SHIFT          0
+
+#define ID_ISAR6_I8MM_SHIFT          24
+#define ID_ISAR6_BF16_SHIFT          20
+#define ID_ISAR6_SPECRES_SHIFT       16
+#define ID_ISAR6_SB_SHIFT            12
+#define ID_ISAR6_FHM_SHIFT           8
+#define ID_ISAR6_DP_SHIFT            4
+#define ID_ISAR6_JSCVT_SHIFT         0
+
+#define ID_MMFR0_INNERSHR_SHIFT      28
+#define ID_MMFR0_FCSE_SHIFT          24
+#define ID_MMFR0_AUXREG_SHIFT        20
+#define ID_MMFR0_TCM_SHIFT           16
+#define ID_MMFR0_SHARELVL_SHIFT      12
+#define ID_MMFR0_OUTERSHR_SHIFT      8
+#define ID_MMFR0_PMSA_SHIFT          4
+#define ID_MMFR0_VMSA_SHIFT          0
+
+#define ID_MMFR4_EVT_SHIFT           28
+#define ID_MMFR4_CCIDX_SHIFT         24
+#define ID_MMFR4_LSM_SHIFT           20
+#define ID_MMFR4_HPDS_SHIFT          16
+#define ID_MMFR4_CNP_SHIFT           12
+#define ID_MMFR4_XNX_SHIFT           8
+#define ID_MMFR4_AC2_SHIFT           4
+#define ID_MMFR4_SPECSEI_SHIFT       0
+
+#define ID_MMFR5_ETS_SHIFT           0
+
+#define ID_PFR0_DIT_SHIFT            24
+#define ID_PFR0_CSV2_SHIFT           16
+#define ID_PFR0_STATE3_SHIFT         12
+#define ID_PFR0_STATE2_SHIFT         8
+#define ID_PFR0_STATE1_SHIFT         4
+#define ID_PFR0_STATE0_SHIFT         0
+
+#define ID_DFR0_PERFMON_SHIFT        24
+#define ID_DFR0_MPROFDBG_SHIFT       20
+#define ID_DFR0_MMAPTRC_SHIFT        16
+#define ID_DFR0_COPTRC_SHIFT         12
+#define ID_DFR0_MMAPDBG_SHIFT        8
+#define ID_DFR0_COPSDBG_SHIFT        4
+#define ID_DFR0_COPDBG_SHIFT         0
+
+#define ID_PFR2_SSBS_SHIFT           4
+#define ID_PFR2_CSV3_SHIFT           0
+
+#define MVFR0_FPROUND_SHIFT          28
+#define MVFR0_FPSHVEC_SHIFT          24
+#define MVFR0_FPSQRT_SHIFT           20
+#define MVFR0_FPDIVIDE_SHIFT         16
+#define MVFR0_FPTRAP_SHIFT           12
+#define MVFR0_FPDP_SHIFT             8
+#define MVFR0_FPSP_SHIFT             4
+#define MVFR0_SIMD_SHIFT             0
+
+#define MVFR1_SIMDFMAC_SHIFT         28
+#define MVFR1_FPHP_SHIFT             24
+#define MVFR1_SIMDHP_SHIFT           20
+#define MVFR1_SIMDSP_SHIFT           16
+#define MVFR1_SIMDINT_SHIFT          12
+#define MVFR1_SIMDLS_SHIFT           8
+#define MVFR1_FPDNAN_SHIFT           4
+#define MVFR1_FPFTZ_SHIFT            0
+
+#define ID_PFR1_GIC_SHIFT            28
+#define ID_PFR1_VIRT_FRAC_SHIFT      24
+#define ID_PFR1_SEC_FRAC_SHIFT       20
+#define ID_PFR1_GENTIMER_SHIFT       16
+#define ID_PFR1_VIRTUALIZATION_SHIFT 12
+#define ID_PFR1_MPROGMOD_SHIFT       8
+#define ID_PFR1_SECURITY_SHIFT       4
+#define ID_PFR1_PROGMOD_SHIFT        0
+
+#define MVFR2_FPMISC_SHIFT           4
+#define MVFR2_SIMDMISC_SHIFT         0
+
+#define DCZID_DZP_SHIFT              4
+#define DCZID_BS_SHIFT               0
+
+/*
+ * The ZCR_ELx_LEN_* definitions intentionally include bits [8:4] which
+ * are reserved by the SVE architecture for future expansion of the LEN
+ * field, with compatible semantics.
+ */
+#define ZCR_ELx_LEN_SHIFT            0
+#define ZCR_ELx_LEN_SIZE             9
+#define ZCR_ELx_LEN_MASK             0x1ff
+
 /* Access to system registers */
 
 #define READ_SYSREG32(name) ((uint32_t)READ_SYSREG64(name))
-- 
2.17.1




* [RFC PATCH 2/4] xen/arm: Import ID features sanitize from linux
  2021-06-29 17:08 [RFC PATCH 0/4] xen/arm: Sanitize cpuinfo Bertrand Marquis
  2021-06-29 17:08 ` [RFC PATCH 1/4] xen/arm: Import ID registers definitions from Linux Bertrand Marquis
@ 2021-06-29 17:08 ` Bertrand Marquis
  2021-07-12  9:36   ` Julien Grall
  2021-06-29 17:08 ` [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields Bertrand Marquis
  2021-06-29 17:08 ` [RFC PATCH 4/4] xen/arm: Use sanitize values for p2m Bertrand Marquis
  3 siblings, 1 reply; 16+ messages in thread
From: Bertrand Marquis @ 2021-06-29 17:08 UTC (permalink / raw)
  To: xen-devel; +Cc: Stefano Stabellini, Julien Grall, Volodymyr Babchuk

Import the structures declared in the Linux file
arch/arm64/kernel/cpufeature.c together with the required types.
The current code has been imported from Linux 5.13-rc5 (commit ID
cd1245d75ce93b8fd206f4b34eb58bcfe156d5e9).

Those structures will be used to restrict the CPU features exposed to
the ones available on all cores of a system, even on a heterogeneous
platform (for example big.LITTLE).

For each feature field of all ID registers, those structures define what
the safest value is and whether different values are allowed on
different cores.

This patch introduces Linux code without any changes to it.
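
To illustrate how the imported tables are meant to be consumed (a sketch
following the semantics documented in the imported code, not code from
this patch), two per-core values of a feature field resolve to a single
safe value depending on the entry's ftr_type:

/* Sketch: resolve two per-core field values into one safe value. */
#include <stdint.h>

enum ftr_type {
    FTR_EXACT,               /* use a predefined safe value */
    FTR_LOWER_SAFE,          /* smaller value is safe */
    FTR_HIGHER_SAFE,         /* bigger value is safe */
    FTR_HIGHER_OR_ZERO_SAFE, /* bigger value is safe, but 0 is biggest */
};

static int64_t safe_value(enum ftr_type type, int64_t safe_val,
                          int64_t new, int64_t cur)
{
    switch ( type )
    {
    case FTR_EXACT:
        return safe_val;
    case FTR_LOWER_SAFE:
        return new < cur ? new : cur;
    case FTR_HIGHER_OR_ZERO_SAFE:
        if ( !new || !cur )
            return 0;
        /* fall through */
    case FTR_HIGHER_SAFE:
        return new > cur ? new : cur;
    }
    return safe_val;
}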

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/arm64/Makefile      |   1 +
 xen/arch/arm/arm64/cpusanitize.c | 494 +++++++++++++++++++++++++++++++
 2 files changed, 495 insertions(+)
 create mode 100644 xen/arch/arm/arm64/cpusanitize.c

diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
index 40642ff574..c626990185 100644
--- a/xen/arch/arm/arm64/Makefile
+++ b/xen/arch/arm/arm64/Makefile
@@ -1,6 +1,7 @@
 obj-y += lib/
 
 obj-y += cache.o
+obj-y += cpusanitize.o
 obj-$(CONFIG_HARDEN_BRANCH_PREDICTOR) += bpi.o
 obj-$(CONFIG_EARLY_PRINTK) += debug.o
 obj-y += domctl.o
diff --git a/xen/arch/arm/arm64/cpusanitize.c b/xen/arch/arm/arm64/cpusanitize.c
new file mode 100644
index 0000000000..4cc8378c14
--- /dev/null
+++ b/xen/arch/arm/arm64/cpusanitize.c
@@ -0,0 +1,494 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Sanitize CPU feature definitions
+ *
+ * Copyright (C) 2021 Arm Ltd.
+ * based on code from the Linux kernel, which is:
+ *  Copyright (C) 2015 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/types.h>
+#include <asm/sysregs.h>
+#include <asm/cpufeature.h>
+
+/*
+ * CPU feature register tracking
+ *
+ * The safe value of a CPUID feature field is dependent on the implications
+ * of the values assigned to it by the architecture. Based on the relationship
+ * between the values, the features are classified into 3 types - LOWER_SAFE,
+ * HIGHER_SAFE and EXACT.
+ *
+ * The lowest value of all the CPUs is chosen for LOWER_SAFE and highest
+ * for HIGHER_SAFE. It is expected that all CPUs have the same value for
+ * a field when EXACT is specified, failing which, the safe value specified
+ * in the table is chosen.
+ */
+
+enum ftr_type {
+    FTR_EXACT,               /* Use a predefined safe value */
+    FTR_LOWER_SAFE,          /* Smaller value is safe */
+    FTR_HIGHER_SAFE,         /* Bigger value is safe */
+    FTR_HIGHER_OR_ZERO_SAFE, /* Bigger value is safe, but 0 is biggest */
+};
+
+#define FTR_STRICT      true    /* SANITY check strict matching required */
+#define FTR_NONSTRICT   false   /* SANITY check ignored */
+
+#define FTR_SIGNED      true    /* Value should be treated as signed */
+#define FTR_UNSIGNED    false   /* Value should be treated as unsigned */
+
+#define FTR_VISIBLE true    /* Feature visible to the user space */
+#define FTR_HIDDEN  false   /* Feature is hidden from the user */
+
+#define FTR_VISIBLE_IF_IS_ENABLED(config)       \
+    (IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)
+
+struct arm64_ftr_bits {
+    bool    sign;   /* Value is signed ? */
+    bool    visible;
+    bool    strict; /* CPU Sanity check: strict matching required ? */
+    enum ftr_type   type;
+    u8      shift;
+    u8      width;
+    s64     safe_val; /* safe value for FTR_EXACT features */
+};
+
+/*
+ * NOTE: The following structures are imported directly from Linux kernel and
+ * should be kept in sync.
+ * The current version has been imported from arch/arm64/kernel/cpufeature.c
+ *  from kernel version 5.13-rc5
+ */
+
+#define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
+	{						\
+		.sign = SIGNED,				\
+		.visible = VISIBLE,			\
+		.strict = STRICT,			\
+		.type = TYPE,				\
+		.shift = SHIFT,				\
+		.width = WIDTH,				\
+		.safe_val = SAFE_VAL,			\
+	}
+
+/* Define a feature with unsigned values */
+#define ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
+	__ARM64_FTR_BITS(FTR_UNSIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL)
+
+/* Define a feature with a signed value */
+#define S_ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
+	__ARM64_FTR_BITS(FTR_SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL)
+
+#define ARM64_FTR_END					\
+	{						\
+		.width = 0,				\
+	}
+
+static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
+
+static bool __system_matches_cap(unsigned int n);
+
+/*
+ * NOTE: Any changes to the visibility of features should be kept in
+ * sync with the documentation of the CPU feature register ABI.
+ */
+static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RNDR_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TLB_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA1_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_AES_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_I8MM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DGH_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_BF16_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_SPECRES_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_SB_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FRINTTS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPI_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_DIT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_AMU_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_MPAM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SEL2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+				   FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
+	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
+	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MPAMFRAC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_RASFRAC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_MTE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MTE_SHIFT, 4, ID_AA64PFR1_MTE_NI),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_BTI),
+				    FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_BT_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_F64MM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_F32MM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_I8MM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SM4_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SHA3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BF16_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BITPERM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_AES_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ECV_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_FGT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EXS_SHIFT, 4, 0),
+	/*
+	 * Page size not being supported at Stage-2 is not fatal. You
+	 * just give up KVM if PAGE_SIZE isn't supported there. Go fix
+	 * your favourite nesting hypervisor.
+	 *
+	 * There is a small corner case where the hypervisor explicitly
+	 * advertises a given granule size at Stage-2 (value 2) on some
+	 * vCPUs, and uses the fallback to Stage-1 (value 0) for other
+	 * vCPUs. Although this is not forbidden by the architecture, it
+	 * indicates that the hypervisor is being silly (or buggy).
+	 *
+	 * We make no effort to cope with this and pretend that if these
+	 * fields are inconsistent across vCPUs, then it isn't worth
+	 * trying to bring KVM up.
+	 */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN4_2_SHIFT, 4, 1),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN64_2_SHIFT, 4, 1),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN16_2_SHIFT, 4, 1),
+	/*
+	 * We already refuse to boot CPUs that don't support our configured
+	 * page size, so we can only detect mismatches for a page size other
+	 * than the one we're currently using. Unfortunately, SoCs like this
+	 * exist in the wild so, even though we don't like it, we'll have to go
+	 * along with it and treat them as non-strict.
+	 */
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI),
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI),
+
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0),
+	/* Linux shouldn't care about secure memory */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
+	/*
+	 * Differing PARange is fine as long as all peripherals and memory are mapped
+	 * within the minimum PARange of all CPUs
+	 */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_PARANGE_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_ETS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_TWED_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_XNX_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64MMFR1_SPECSEI_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_LOR_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HPD_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VMIDBITS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EVT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_BBM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_TTL_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IDS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_ST_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_NV_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CCIDX_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LSM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CNP_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_ctr[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DIC_SHIFT, 1, 1),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IDC_SHIFT, 1, 1),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_CWG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_ERG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1),
+	/*
+	 * Linux can handle differing I-cache policies. Userspace JITs will
+	 * make use of *minLine.
+	 * If we have differing I-cache policies, report it as the weakest - VIPT.
+	 */
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, CTR_L1IP_SHIFT, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static struct arm64_ftr_override __ro_after_init no_override = { };
+
+struct arm64_ftr_reg arm64_ftr_reg_ctrel0 = {
+	.name		= "SYS_CTR_EL0",
+	.ftr_bits	= ftr_ctr,
+	.override	= &no_override,
+};
+
+static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_INNERSHR_SHIFT, 4, 0xf),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_FCSE_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_MMFR0_AUXREG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_TCM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_SHARELVL_SHIFT, 4, 0),
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_OUTERSHR_SHIFT, 4, 0xf),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_PMSA_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_VMSA_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_DOUBLELOCK_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_PMSVER_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_CTX_CMPS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_WRPS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_BRPS_SHIFT, 4, 0),
+	/*
+	 * We can instantiate multiple PMU instances with different levels
+	 * of support.
+	 */
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64DFR0_PMUVER_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_mvfr2[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_FPMISC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_SIMDMISC_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_dczid[] = {
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, DCZID_DZP_SHIFT, 1, 1),
+	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, DCZID_BS_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_isar0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DIVIDE_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DEBUG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_COPROC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_CMPBRANCH_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_BITFIELD_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_BITCOUNT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_SWAP_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_isar5[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_mmfr4[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_EVT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_CCIDX_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_LSM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_HPDS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_CNP_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_XNX_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_AC2_SHIFT, 4, 0),
+
+	/*
+	 * SpecSEI = 1 indicates that the PE might generate an SError on an
+	 * external abort on speculative read. It is safe to assume that an
+	 * SError might be generated than it will not be. Hence it has been
+	 * classified as FTR_HIGHER_SAFE.
+	 */
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_MMFR4_SPECSEI_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_isar4[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SWP_FRAC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_PSR_M_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_BARRIER_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SMC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_WRITEBACK_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_WITHSHIFTS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_UNPRIV_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_mmfr5[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR5_ETS_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_isar6[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_I8MM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_BF16_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_SPECRES_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_SB_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_FHM_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_DP_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_JSCVT_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_pfr0[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_DIT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR0_CSV2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE3_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE2_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE1_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE0_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_pfr1[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_GIC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_VIRT_FRAC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_SEC_FRAC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_GENTIMER_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_VIRTUALIZATION_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_MPROGMOD_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_SECURITY_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_PROGMOD_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_pfr2[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_SSBS_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_CSV3_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_dfr0[] = {
+	/* [31:28] TraceFilt */
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_PERFMON_SHIFT, 4, 0xf),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MPROFDBG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MMAPTRC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPTRC_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MMAPDBG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPSDBG_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPDBG_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_id_dfr1[] = {
+	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR1_MTPMU_SHIFT, 4, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_zcr[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
+		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
+	ARM64_FTR_END,
+};
+
+/*
+ * Common ftr bits for a 32bit register with all hidden, strict
+ * attributes, with 4bit feature fields and a default safe value of
+ * 0. Covers the following 32bit registers:
+ * id_isar[1-4], id_mmfr[1-3], id_pfr1, mvfr[0-1]
+ */
+static const struct arm64_ftr_bits ftr_generic_32bits[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 28, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 24, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 20, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 12, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 8, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0),
+	ARM64_FTR_END,
+};
+
+/* Table for a single 32bit feature value */
+static const struct arm64_ftr_bits ftr_single32[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 32, 0),
+	ARM64_FTR_END,
+};
+
+static const struct arm64_ftr_bits ftr_raz[] = {
+	ARM64_FTR_END,
+};
+
+/*
+ * End of imported linux structures
+ */
+
-- 
2.17.1




* [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields
  2021-06-29 17:08 [RFC PATCH 0/4] xen/arm: Sanitize cpuinfo Bertrand Marquis
  2021-06-29 17:08 ` [RFC PATCH 1/4] xen/arm: Import ID registers definitions from Linux Bertrand Marquis
  2021-06-29 17:08 ` [RFC PATCH 2/4] xen/arm: Import ID features sanitize from linux Bertrand Marquis
@ 2021-06-29 17:08 ` Bertrand Marquis
  2021-07-12 10:16   ` Julien Grall
  2021-06-29 17:08 ` [RFC PATCH 4/4] xen/arm: Use sanitize values for p2m Bertrand Marquis
  3 siblings, 1 reply; 16+ messages in thread
From: Bertrand Marquis @ 2021-06-29 17:08 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Julien Grall, Volodymyr Babchuk,
	Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Wei Liu

Define a sanitize_cpu function, to be called on secondary cores, to
sanitize the cpuinfo structure of the boot CPU against the values of
the current core.

The safest value is taken when possible, and the system is marked
tainted if we encounter values which are incompatible with each other.

Call the sanitize_cpu function on all secondary cores, and remove the
code disabling secondary cores that are not the same as the boot core,
as we are now able to handle this use case.

This is only supported on arm64, so sanitize_cpu is an empty static
inline on arm32.

This patch also removes the code imported from Linux that will not be
used by Xen.
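
For illustration (not part of the patch body itself), each
SANITIZE_*_REG invocation below expands to a sanitize_reg() call folding
one 64-bit ID register of the current core into boot_cpu_data, e.g.:

/* SANITIZE_ID_REG(isa64, 0, aa64isar0) expands to: */
sanitize_reg(&boot_cpu_data.isa64.bits[0], new->isa64.bits[0],
             "aa64isar0", ftr_id_aa64isar0);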

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/arm64/cpusanitize.c | 236 ++++++++++++++++++++++++-------
 xen/arch/arm/smpboot.c           |   5 +-
 xen/include/asm-arm/cpufeature.h |   9 ++
 xen/include/xen/lib.h            |   1 +
 4 files changed, 199 insertions(+), 52 deletions(-)

diff --git a/xen/arch/arm/arm64/cpusanitize.c b/xen/arch/arm/arm64/cpusanitize.c
index 4cc8378c14..744006ca7c 100644
--- a/xen/arch/arm/arm64/cpusanitize.c
+++ b/xen/arch/arm/arm64/cpusanitize.c
@@ -97,10 +97,6 @@ struct arm64_ftr_bits {
 		.width = 0,				\
 	}
 
-static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
-
-static bool __system_matches_cap(unsigned int n);
-
 /*
  * NOTE: Any changes to the visibility of features should be kept in
  * sync with the documentation of the CPU feature register ABI.
@@ -277,31 +273,6 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
 	ARM64_FTR_END,
 };
 
-static const struct arm64_ftr_bits ftr_ctr[] = {
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DIC_SHIFT, 1, 1),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IDC_SHIFT, 1, 1),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_CWG_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_ERG_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1),
-	/*
-	 * Linux can handle differing I-cache policies. Userspace JITs will
-	 * make use of *minLine.
-	 * If we have differing I-cache policies, report it as the weakest - VIPT.
-	 */
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, CTR_L1IP_SHIFT, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
-	ARM64_FTR_END,
-};
-
-static struct arm64_ftr_override __ro_after_init no_override = { };
-
-struct arm64_ftr_reg arm64_ftr_reg_ctrel0 = {
-	.name		= "SYS_CTR_EL0",
-	.ftr_bits	= ftr_ctr,
-	.override	= &no_override,
-};
-
 static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_INNERSHR_SHIFT, 4, 0xf),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_FCSE_SHIFT, 4, 0),
@@ -335,12 +306,6 @@ static const struct arm64_ftr_bits ftr_mvfr2[] = {
 	ARM64_FTR_END,
 };
 
-static const struct arm64_ftr_bits ftr_dczid[] = {
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, DCZID_DZP_SHIFT, 1, 1),
-	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, DCZID_BS_SHIFT, 4, 0),
-	ARM64_FTR_END,
-};
-
 static const struct arm64_ftr_bits ftr_id_isar0[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DIVIDE_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DEBUG_SHIFT, 4, 0),
@@ -454,12 +419,6 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
 	ARM64_FTR_END,
 };
 
-static const struct arm64_ftr_bits ftr_zcr[] = {
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
-		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
-	ARM64_FTR_END,
-};
-
 /*
  * Common ftr bits for a 32bit register with all hidden, strict
  * attributes, with 4bit feature fields and a default safe value of
@@ -478,17 +437,192 @@ static const struct arm64_ftr_bits ftr_generic_32bits[] = {
 	ARM64_FTR_END,
 };
 
-/* Table for a single 32bit feature value */
-static const struct arm64_ftr_bits ftr_single32[] = {
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 32, 0),
-	ARM64_FTR_END,
-};
-
-static const struct arm64_ftr_bits ftr_raz[] = {
-	ARM64_FTR_END,
-};
-
 /*
  * End of imported linux structures
  */
 
+static inline int __attribute_const__
+cpuid_feature_extract_signed_field_width(u64 features, int field, int width)
+{
+    return (s64)(features << (64 - width - field)) >> (64 - width);
+}
+
+static inline int __attribute_const__
+cpuid_feature_extract_signed_field(u64 features, int field)
+{
+    return cpuid_feature_extract_signed_field_width(features, field, 4);
+}
+
+static inline unsigned int __attribute_const__
+cpuid_feature_extract_unsigned_field_width(u64 features, int field, int width)
+{
+    return (u64)(features << (64 - width - field)) >> (64 - width);
+}
+
+static inline unsigned int __attribute_const__
+cpuid_feature_extract_unsigned_field(u64 features, int field)
+{
+    return cpuid_feature_extract_unsigned_field_width(features, field, 4);
+}
+
+static inline u64 arm64_ftr_mask(const struct arm64_ftr_bits *ftrp)
+{
+    return (u64)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift);
+}
+
+static inline int __attribute_const__
+cpuid_feature_extract_field_width(u64 features, int field, int width,
+                                  bool sign)
+{
+    return (sign) ?
+        cpuid_feature_extract_signed_field_width(features, field, width) :
+        cpuid_feature_extract_unsigned_field_width(features, field, width);
+}
+
+static inline int __attribute_const__
+cpuid_feature_extract_field(u64 features, int field, bool sign)
+{
+    return cpuid_feature_extract_field_width(features, field, 4, sign);
+}
+
+static inline s64 arm64_ftr_value(const struct arm64_ftr_bits *ftrp, u64 val)
+{
+    return (s64)cpuid_feature_extract_field_width(val, ftrp->shift,
+                                                  ftrp->width, ftrp->sign);
+}
+
+static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new,
+                                s64 cur)
+{
+    s64 ret = 0;
+
+    switch ( ftrp->type ) {
+    case FTR_EXACT:
+        ret = ftrp->safe_val;
+        break;
+    case FTR_LOWER_SAFE:
+        ret = new < cur ? new : cur;
+        break;
+    case FTR_HIGHER_OR_ZERO_SAFE:
+        if ( !cur || !new )
+            break;
+        fallthrough;
+    case FTR_HIGHER_SAFE:
+        ret = new > cur ? new : cur;
+        break;
+    default:
+        BUG();
+    }
+
+    return ret;
+}
+
+static void sanitize_reg(u64 *cur_reg, u64 new_reg, const char *reg_name,
+                        const struct arm64_ftr_bits *ftrp)
+{
+    int taint = 0;
+    u64 old_reg = *cur_reg;
+
+    for ( ; ftrp->width != 0; ftrp++ )
+    {
+        u64 mask;
+        s64 cur_field = arm64_ftr_value(ftrp, *cur_reg);
+        s64 new_field = arm64_ftr_value(ftrp, new_reg);
+
+        if ( cur_field == new_field )
+            continue;
+
+        if ( ftrp->strict )
+            taint = 1;
+
+        mask = arm64_ftr_mask(ftrp);
+
+        *cur_reg &= ~mask;
+        *cur_reg |= (arm64_ftr_safe_value(ftrp, new_field, cur_field)
+                    << ftrp->shift) & mask;
+    }
+
+    if ( old_reg != new_reg )
+        printk(XENLOG_DEBUG "SANITY DIF: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
+               reg_name, old_reg, new_reg);
+    if ( old_reg != *cur_reg )
+        printk(XENLOG_DEBUG "SANITY FIX: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
+               reg_name, old_reg, *cur_reg);
+
+    if ( taint )
+    {
+        printk(XENLOG_WARNING "SANITY CHECK: Unexpected variation in %s.\n",
+                reg_name);
+        add_taint(TAINT_CPU_OUT_OF_SPEC);
+    }
+}
+
+
+/*
+ * This function should be called on secondary cores to sanitize the boot cpu
+ * cpuinfo.
+ */
+void sanitize_cpu(const struct cpuinfo_arm *new)
+{
+
+#define SANITIZE_REG(field, num, reg)  \
+    sanitize_reg(&boot_cpu_data.field.bits[num], new->field.bits[num], \
+                 #reg, ftr_##reg)
+
+#define SANITIZE_ID_REG(field, num, reg)  \
+    sanitize_reg(&boot_cpu_data.field.bits[num], new->field.bits[num], \
+                 #reg, ftr_id_##reg)
+
+#define SANITIZE_GENERIC_REG(field, num, reg)  \
+    sanitize_reg(&boot_cpu_data.field.bits[num], new->field.bits[num], \
+                 #reg, ftr_generic_32bits)
+
+    SANITIZE_ID_REG(pfr64, 0, aa64pfr0);
+    SANITIZE_ID_REG(pfr64, 1, aa64pfr1);
+
+    SANITIZE_ID_REG(dbg64, 0, aa64dfr0);
+
+    /* aux64 x2 */
+
+    SANITIZE_ID_REG(mm64, 0, aa64mmfr0);
+    SANITIZE_ID_REG(mm64, 1, aa64mmfr1);
+    SANITIZE_ID_REG(mm64, 2, aa64mmfr2);
+
+    SANITIZE_ID_REG(isa64, 0, aa64isar0);
+    SANITIZE_ID_REG(isa64, 1, aa64isar1);
+
+    SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
+
+    if ( cpu_feature64_has_el0_32(&boot_cpu_data) )
+    {
+        SANITIZE_ID_REG(pfr32, 0, pfr0);
+        SANITIZE_ID_REG(pfr32, 1, pfr1);
+        SANITIZE_ID_REG(pfr32, 2, pfr2);
+
+        SANITIZE_ID_REG(dbg32, 0, dfr0);
+        SANITIZE_ID_REG(dbg32, 1, dfr1);
+
+        /* aux32 x1 */
+
+        SANITIZE_ID_REG(mm32, 0, mmfr0);
+        SANITIZE_GENERIC_REG(mm32, 1, mmfr1);
+        SANITIZE_GENERIC_REG(mm32, 2, mmfr2);
+        SANITIZE_GENERIC_REG(mm32, 3, mmfr3);
+        SANITIZE_ID_REG(mm32, 4, mmfr4);
+        SANITIZE_ID_REG(mm32, 5, mmfr5);
+
+        SANITIZE_ID_REG(isa32, 0, isar0);
+        SANITIZE_GENERIC_REG(isa32, 1, isar1);
+        SANITIZE_GENERIC_REG(isa32, 2, isar2);
+        SANITIZE_GENERIC_REG(isa32, 3, isar3);
+        SANITIZE_ID_REG(isa32, 4, isar4);
+        SANITIZE_ID_REG(isa32, 5, isar5);
+        SANITIZE_ID_REG(isa32, 6, isar6);
+
+        SANITIZE_GENERIC_REG(mvfr, 0, mvfr0);
+        SANITIZE_GENERIC_REG(mvfr, 1, mvfr1);
+#ifndef MVFR2_MAYBE_UNDEFINED
+        SANITIZE_REG(mvfr, 2, mvfr2);
+#endif
+    }
+}
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index a1ee3146ef..8fdf5e70d3 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -307,12 +307,14 @@ void start_secondary(void)
     set_processor_id(cpuid);
 
     identify_cpu(&current_cpu_data);
+    sanitize_cpu(&current_cpu_data);
     processor_setup();
 
     init_traps();
 
+#ifndef CONFIG_ARM_64
     /*
-     * Currently Xen assumes the platform has only one kind of CPUs.
+     * Currently Xen assumes the platform has only one kind of CPU on arm32.
      * This assumption does not hold on big.LITTLE platform and may
      * result to instability and insecure platform (unless cpu affinity
      * is manually specified for all domains). Better to park them for
@@ -328,6 +330,7 @@ void start_secondary(void)
                boot_cpu_data.midr.bits);
         stop_cpu();
     }
+#endif
 
     if ( dcache_line_bytes != read_dcache_line_bytes() )
     {
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index ba48db3eac..ad34e55cc8 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -330,6 +330,15 @@ extern struct cpuinfo_arm boot_cpu_data;
 
 extern void identify_cpu(struct cpuinfo_arm *);
 
+#ifdef CONFIG_ARM_64
+extern void sanitize_cpu(const struct cpuinfo_arm *);
+#else
+static inline void sanitize_cpu(const struct cpuinfo_arm *cpuinfo)
+{
+    /* Not supported on arm32 */
+}
+#endif
+
 extern struct cpuinfo_arm cpu_data[];
 #define current_cpu_data cpu_data[smp_processor_id()]
 
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 1198c7c0b2..c6987973bf 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -192,6 +192,7 @@ uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c);
 #define TAINT_ERROR_INJECT              (1u << 2)
 #define TAINT_HVM_FEP                   (1u << 3)
 #define TAINT_MACHINE_UNSECURE          (1u << 4)
+#define TAINT_CPU_OUT_OF_SPEC           (1u << 5)
 extern unsigned int tainted;
 #define TAINT_STRING_MAX_LEN            20
 extern char *print_tainted(char *str);
-- 
2.17.1




* [RFC PATCH 4/4] xen/arm: Use sanitize values for p2m
  2021-06-29 17:08 [RFC PATCH 0/4] xen/arm: Sanitize cpuinfo Bertrand Marquis
                   ` (2 preceding siblings ...)
  2021-06-29 17:08 ` [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields Bertrand Marquis
@ 2021-06-29 17:08 ` Bertrand Marquis
  3 siblings, 0 replies; 16+ messages in thread
From: Bertrand Marquis @ 2021-06-29 17:08 UTC (permalink / raw)
  To: xen-devel; +Cc: Stefano Stabellini, Julien Grall, Volodymyr Babchuk

Replace the code in p2m that was trying to find a sane value for the
supported VMID size and the PA range to use. We now use the boot
cpuinfo, as its values are sanitized during boot, so those parameters
already hold the safest possible value for the system.

On arm32, secondary cores that differ from the boot core are parked, so
those checks were not needed there anyway.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/p2m.c | 30 ++++++++++--------------------
 1 file changed, 10 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d414c4feb9..2536a86a13 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -2030,31 +2030,21 @@ void __init setup_virt_paging(void)
         [7] = { 0 }  /* Invalid */
     };
 
-    unsigned int i, cpu;
+    unsigned int i;
     unsigned int pa_range = 0x10; /* Larger than any possible value */
-    bool vmid_8_bit = false;
-
-    for_each_online_cpu ( cpu )
-    {
-        const struct cpuinfo_arm *info = &cpu_data[cpu];
 
-        /*
-         * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
-         * with IPA bits == PA bits, compare against "pabits".
-         */
-        if ( pa_range_info[info->mm64.pa_range].pabits < p2m_ipa_bits )
-            p2m_ipa_bits = pa_range_info[info->mm64.pa_range].pabits;
-
-        /* Set a flag if the current cpu does not support 16 bit VMIDs. */
-        if ( info->mm64.vmid_bits != MM64_VMID_16_BITS_SUPPORT )
-            vmid_8_bit = true;
-    }
+    /*
+     * Restrict "p2m_ipa_bits" if needed. As P2M table is always configured
+     * with IPA bits == PA bits, compare against "pabits".
+     */
+    if ( pa_range_info[boot_cpu_data.mm64.pa_range].pabits < p2m_ipa_bits )
+        p2m_ipa_bits = pa_range_info[boot_cpu_data.mm64.pa_range].pabits;
 
     /*
-     * If the flag is not set then it means all CPUs support 16-bit
-     * VMIDs.
+     * cpuinfo sanitization makes sure we support 16-bit VMIDs only if
+     * all cores support them.
      */
-    if ( !vmid_8_bit )
+    if ( boot_cpu_data.mm64.vmid_bits == MM64_VMID_16_BITS_SUPPORT )
         max_vmid = MAX_VMID_16_BIT;
 
     /* Choose suitable "pa_range" according to the resulted "p2m_ipa_bits". */
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 2/4] xen/arm: Import ID features sanitize from linux
  2021-06-29 17:08 ` [RFC PATCH 2/4] xen/arm: Import ID features sanitize from linux Bertrand Marquis
@ 2021-07-12  9:36   ` Julien Grall
  2021-07-12 10:50     ` Bertrand Marquis
  0 siblings, 1 reply; 16+ messages in thread
From: Julien Grall @ 2021-07-12  9:36 UTC (permalink / raw)
  To: Bertrand Marquis, xen-devel; +Cc: Stefano Stabellini, Volodymyr Babchuk



On 29/06/2021 18:08, Bertrand Marquis wrote:
> Import structures declared in Linux file arch/arm64/kernel/cpufeature.c
> and import the required types.
> Current code has been imported from Linux 5.13-rc5 (Commit ID
> cd1245d75ce93b8fd206f4b34eb58bcfe156d5e9)
> 
> Those structures will be used to sanitize the cpu features available to
> the ones available on all cores of a system even if we are on a
> heterogeneous platform (for example a big.LITTLE).
> 
> For each feature field of all ID registers, those structures define what
> the safest value is and whether we can allow different values on
> different cores.
> 
> This patch is introducing Linux code without any changes to it.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
>   xen/arch/arm/arm64/Makefile      |   1 +
>   xen/arch/arm/arm64/cpusanitize.c | 494 +++++++++++++++++++++++++++++++
>   2 files changed, 495 insertions(+)
>   create mode 100644 xen/arch/arm/arm64/cpusanitize.c
> 
> diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
> index 40642ff574..c626990185 100644
> --- a/xen/arch/arm/arm64/Makefile
> +++ b/xen/arch/arm/arm64/Makefile
> @@ -1,6 +1,7 @@
>   obj-y += lib/
>   
>   obj-y += cache.o
> +obj-y += cpusanitize.o

Looking at the code, I don't think cpusanitize.c would build after 
this patch. To allow bisection, this line would need to move to the 
patch where the file can build.

>   obj-$(CONFIG_HARDEN_BRANCH_PREDICTOR) += bpi.o
>   obj-$(CONFIG_EARLY_PRINTK) += debug.o
>   obj-y += domctl.o
> diff --git a/xen/arch/arm/arm64/cpusanitize.c b/xen/arch/arm/arm64/cpusanitize.c
> new file mode 100644
> index 0000000000..4cc8378c14
> --- /dev/null
> +++ b/xen/arch/arm/arm64/cpusanitize.c

Any reason to not stick with the Linux name?

> @@ -0,0 +1,494 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Sanitize CPU feature definitions
> + *
> + * Copyright (C) 2021 Arm Ltd.
> + * based on code from the Linux kernel, which is:
> + *  Copyright (C) 2015 ARM Ltd.

Linux has a large comment explaining the goal of the file. I think it 
is worth keeping it for Xen.

> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.

This is redundant with the SPDX tag above. Please get rid of one of them.

> + */
> +
> +#include <xen/types.h>
> +#include <asm/sysregs.h>
> +#include <asm/cpufeature.h>
> +
> +/*
> + * CPU feature register tracking
> + *
> + * The safe value of a CPUID feature field is dependent on the implications
> + * of the values assigned to it by the architecture. Based on the relationship
> + * between the values, the features are classified into 3 types - LOWER_SAFE,
> + * HIGHER_SAFE and EXACT.
> + *
> + * The lowest value of all the CPUs is chosen for LOWER_SAFE and highest
> + * for HIGHER_SAFE. It is expected that all CPUs have the same value for
> + * a field when EXACT is specified, failing which, the safe value specified
> + * in the table is chosen.
> + */
> +
> +enum ftr_type {
> +    FTR_EXACT,               /* Use a predefined safe value */
> +    FTR_LOWER_SAFE,          /* Smaller value is safe */
> +    FTR_HIGHER_SAFE,         /* Bigger value is safe */
> +    FTR_HIGHER_OR_ZERO_SAFE, /* Bigger value is safe, but 0 is biggest */
> +};
Please use the Linux coding style to stay consistent with the rest of 
the file. However, unless there is a reason to do otherwise, I would 
prefer if the definitions were in a separate header, like Linux does.

> +
> +#define FTR_STRICT      true    /* SANITY check strict matching required */
> +#define FTR_NONSTRICT   false   /* SANITY check ignored */
> +
> +#define FTR_SIGNED      true    /* Value should be treated as signed */
> +#define FTR_UNSIGNED    false   /* Value should be treated as unsigned */
> +
> +#define FTR_VISIBLE true    /* Feature visible to the user space */
> +#define FTR_HIDDEN  false   /* Feature is hidden from the user */
> +
> +#define FTR_VISIBLE_IF_IS_ENABLED(config)       \
> +    (IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)
> +
> +struct arm64_ftr_bits {
> +    bool    sign;   /* Value is signed ? */
> +    bool    visible;
> +    bool    strict; /* CPU Sanity check: strict matching required ? */
> +    enum ftr_type   type;
> +    u8      shift;
> +    u8      width;
> +    s64     safe_val; /* safe value for FTR_EXACT features */
> +};
> +
> +/*
> + * NOTE: The following structures are imported directly from Linux kernel and
> + * should be kept in sync.
> + * The current version has been imported from arch/arm64/kernel/cpufeature.c
> + *  from kernel version 5.13-rc5
> + */

It feels a bit odd to add this comment in the middle of the definition. 
It would be better to move it close to the copyright.

> +
> +#define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
> +	{						\
> +		.sign = SIGNED,				\
> +		.visible = VISIBLE,			\
> +		.strict = STRICT,			\
> +		.type = TYPE,				\
> +		.shift = SHIFT,				\
> +		.width = WIDTH,				\
> +		.safe_val = SAFE_VAL,			\
> +	}
> +
> +/* Define a feature with unsigned values */
> +#define ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
> +	__ARM64_FTR_BITS(FTR_UNSIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL)
> +
> +/* Define a feature with a signed value */
> +#define S_ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
> +	__ARM64_FTR_BITS(FTR_SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL)
> +
> +#define ARM64_FTR_END					\
> +	{						\
> +		.width = 0,				\
> +	}
> +
> +static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);

This function is not defined in the code you import.

> +
> +static bool __system_matches_cap(unsigned int n);

Ditto.

> +
> +/*
> + * NOTE: Any changes to the visibility of features should be kept in
> + * sync with the documentation of the CPU feature register ABI.
> + */
> +static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RNDR_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TLB_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TS_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA3_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA1_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_AES_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_I8MM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DGH_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_BF16_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_SPECRES_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_SB_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FRINTTS_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPI_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPA_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_DIT_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_AMU_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_MPAM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SEL2_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
> +				   FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
> +	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
> +	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MPAMFRAC_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_RASFRAC_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_MTE),
> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MTE_SHIFT, 4, ID_AA64PFR1_MTE_NI),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_BTI),
> +				    FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_BT_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_F64MM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_F32MM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_I8MM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SM4_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SHA3_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BF16_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BITPERM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_AES_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ECV_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_FGT_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EXS_SHIFT, 4, 0),
> +	/*
> +	 * Page size not being supported at Stage-2 is not fatal. You
> +	 * just give up KVM if PAGE_SIZE isn't supported there. Go fix
> +	 * your favourite nesting hypervisor.
> +	 *
> +	 * There is a small corner case where the hypervisor explicitly
> +	 * advertises a given granule size at Stage-2 (value 2) on some
> +	 * vCPUs, and uses the fallback to Stage-1 (value 0) for other
> +	 * vCPUs. Although this is not forbidden by the architecture, it
> +	 * indicates that the hypervisor is being silly (or buggy).
> +	 *
> +	 * We make no effort to cope with this and pretend that if these
> +	 * fields are inconsistent across vCPUs, then it isn't worth
> +	 * trying to bring KVM up.
> +	 */
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN4_2_SHIFT, 4, 1),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN64_2_SHIFT, 4, 1),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN16_2_SHIFT, 4, 1),
> +	/*
> +	 * We already refuse to boot CPUs that don't support our configured
> +	 * page size, so we can only detect mismatches for a page size other
> +	 * than the one we're currently using. Unfortunately, SoCs like this
> +	 * exist in the wild so, even though we don't like it, we'll have to go
> +	 * along with it and treat them as non-strict.
> +	 */
> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI),
> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI),
> +
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0),
> +	/* Linux shouldn't care about secure memory */
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
> +	/*
> +	 * Differing PARange is fine as long as all peripherals and memory are mapped
> +	 * within the minimum PARange of all CPUs
> +	 */
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_PARANGE_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_ETS_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_TWED_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_XNX_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64MMFR1_SPECSEI_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_LOR_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HPD_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VMIDBITS_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EVT_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_BBM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_TTL_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IDS_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_ST_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_NV_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CCIDX_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LSM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CNP_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_ctr[] = {
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DIC_SHIFT, 1, 1),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IDC_SHIFT, 1, 1),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_CWG_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_ERG_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1),
> +	/*
> +	 * Linux can handle differing I-cache policies. Userspace JITs will
> +	 * make use of *minLine.
> +	 * If we have differing I-cache policies, report it as the weakest - VIPT.
> +	 */
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, CTR_L1IP_SHIFT, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static struct arm64_ftr_override __ro_after_init no_override = { };
> +
> +struct arm64_ftr_reg arm64_ftr_reg_ctrel0 = {
> +	.name		= "SYS_CTR_EL0",
> +	.ftr_bits	= ftr_ctr,
> +	.override	= &no_override,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_INNERSHR_SHIFT, 4, 0xf),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_FCSE_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_MMFR0_AUXREG_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_TCM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_SHARELVL_SHIFT, 4, 0),
> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_OUTERSHR_SHIFT, 4, 0xf),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_PMSA_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_VMSA_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_DOUBLELOCK_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_PMSVER_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_CTX_CMPS_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_WRPS_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_BRPS_SHIFT, 4, 0),
> +	/*
> +	 * We can instantiate multiple PMU instances with different levels
> +	 * of support.
> +	 */
> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64DFR0_PMUVER_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_mvfr2[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_FPMISC_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_SIMDMISC_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_dczid[] = {
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, DCZID_DZP_SHIFT, 1, 1),
> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, DCZID_BS_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_isar0[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DIVIDE_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DEBUG_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_COPROC_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_CMPBRANCH_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_BITFIELD_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_BITCOUNT_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_SWAP_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_isar5[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_mmfr4[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_EVT_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_CCIDX_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_LSM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_HPDS_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_CNP_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_XNX_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_AC2_SHIFT, 4, 0),
> +
> +	/*
> +	 * SpecSEI = 1 indicates that the PE might generate an SError on an
> +	 * external abort on speculative read. It is safe to assume that an
> +	 * SError might be generated than it will not be. Hence it has been
> +	 * classified as FTR_HIGHER_SAFE.
> +	 */
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_MMFR4_SPECSEI_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_isar4[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SWP_FRAC_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_PSR_M_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_BARRIER_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SMC_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_WRITEBACK_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_WITHSHIFTS_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_UNPRIV_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_mmfr5[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR5_ETS_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_isar6[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_I8MM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_BF16_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_SPECRES_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_SB_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_FHM_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_DP_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_JSCVT_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_pfr0[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_DIT_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR0_CSV2_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE3_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE2_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE1_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE0_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_pfr1[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_GIC_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_VIRT_FRAC_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_SEC_FRAC_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_GENTIMER_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_VIRTUALIZATION_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_MPROGMOD_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_SECURITY_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_PROGMOD_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_pfr2[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_SSBS_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_CSV3_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_dfr0[] = {
> +	/* [31:28] TraceFilt */
> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_PERFMON_SHIFT, 4, 0xf),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MPROFDBG_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MMAPTRC_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPTRC_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MMAPDBG_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPSDBG_SHIFT, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPDBG_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_id_dfr1[] = {
> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR1_MTPMU_SHIFT, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_zcr[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
> +		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
> +	ARM64_FTR_END,
> +};
> +
> +/*
> + * Common ftr bits for a 32bit register with all hidden, strict
> + * attributes, with 4bit feature fields and a default safe value of
> + * 0. Covers the following 32bit registers:
> + * id_isar[1-4], id_mmfr[1-3], id_pfr1, mvfr[0-1]
> + */
> +static const struct arm64_ftr_bits ftr_generic_32bits[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 28, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 24, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 20, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 12, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 8, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0),
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0),
> +	ARM64_FTR_END,
> +};
> +
> +/* Table for a single 32bit feature value */
> +static const struct arm64_ftr_bits ftr_single32[] = {
> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 32, 0),
> +	ARM64_FTR_END,
> +};
> +
> +static const struct arm64_ftr_bits ftr_raz[] = {
> +	ARM64_FTR_END,
> +};
> +
> +/*
> + * End of imported linux structures
> + */
> +
> 

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields
  2021-06-29 17:08 ` [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields Bertrand Marquis
@ 2021-07-12 10:16   ` Julien Grall
  2021-07-12 11:03     ` Bertrand Marquis
  2021-07-16 17:14     ` Bertrand Marquis
  0 siblings, 2 replies; 16+ messages in thread
From: Julien Grall @ 2021-07-12 10:16 UTC (permalink / raw)
  To: Bertrand Marquis, xen-devel
  Cc: Stefano Stabellini, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Ian Jackson, Jan Beulich, Wei Liu

Hi Bertrand,

On 29/06/2021 18:08, Bertrand Marquis wrote:
> Define a sanitize_cpu function to be called on secondary cores to
> sanitize the cpuinfo structure from the boot CPU.
> 
> The safest value is taken when possible and the system is marked tainted
> if we encounter values which are incompatible with each other.
> 
> Call the sanitize_cpu function on all secondary cores and remove the
> code disabling secondary cores if they are not the same as the boot
> core, as we are now able to handle this use case.
> 
> This is only supported on arm64, so sanitize_cpu is an empty static
> inline on arm32.
> 
> This patch also removes the code imported from Linux that will not be
> used by Xen.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
>   xen/arch/arm/arm64/cpusanitize.c | 236 ++++++++++++++++++++++++-------
>   xen/arch/arm/smpboot.c           |   5 +-
>   xen/include/asm-arm/cpufeature.h |   9 ++
>   xen/include/xen/lib.h            |   1 +
>   4 files changed, 199 insertions(+), 52 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/cpusanitize.c b/xen/arch/arm/arm64/cpusanitize.c
> index 4cc8378c14..744006ca7c 100644
> --- a/xen/arch/arm/arm64/cpusanitize.c
> +++ b/xen/arch/arm/arm64/cpusanitize.c
> @@ -97,10 +97,6 @@ struct arm64_ftr_bits {
>   		.width = 0,				\
>   	}
>   
> -static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
> -
> -static bool __system_matches_cap(unsigned int n);
> -
>   /*
>    * NOTE: Any changes to the visibility of features should be kept in
>    * sync with the documentation of the CPU feature register ABI.
> @@ -277,31 +273,6 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
>   	ARM64_FTR_END,
>   };
>   
> -static const struct arm64_ftr_bits ftr_ctr[] = {
> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DIC_SHIFT, 1, 1),
> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IDC_SHIFT, 1, 1),
> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_CWG_SHIFT, 4, 0),
> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_ERG_SHIFT, 4, 0),
> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1),
> -	/*
> -	 * Linux can handle differing I-cache policies. Userspace JITs will
> -	 * make use of *minLine.
> -	 * If we have differing I-cache policies, report it as the weakest - VIPT.
> -	 */
> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, CTR_L1IP_SHIFT, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
> -	ARM64_FTR_END,
> -};

I don't think this should be dropped. Xen will need to know the 
safest cacheline size that can be used for cache maintenance instructions.
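
Something along these lines is what I have in mind (just a sketch: the 
sanitized "ctr" field below is made up, Xen's cpuinfo does not carry 
CTR_EL0 today):

    /*
     * Sketch: derive the safe D-cache line size from a sanitized copy
     * of CTR_EL0. CTR_EL0.DminLine (bits [19:16]) is log2 of the number
     * of 4-byte words in the smallest D-cache line of all cores.
     */
    static inline size_t safe_dcache_line_bytes(void)
    {
        unsigned int dminline = (boot_cpu_data.ctr >> 16) & 0xf;

        return (size_t)4 << dminline;
    }
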

> -
> -static struct arm64_ftr_override __ro_after_init no_override = { };
> -
> -struct arm64_ftr_reg arm64_ftr_reg_ctrel0 = {
> -	.name		= "SYS_CTR_EL0",
> -	.ftr_bits	= ftr_ctr,
> -	.override	= &no_override,
> -};
> -
>   static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
>   	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_INNERSHR_SHIFT, 4, 0xf),
>   	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_FCSE_SHIFT, 4, 0),
> @@ -335,12 +306,6 @@ static const struct arm64_ftr_bits ftr_mvfr2[] = {
>   	ARM64_FTR_END,
>   };
>   
> -static const struct arm64_ftr_bits ftr_dczid[] = {
> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, DCZID_DZP_SHIFT, 1, 1),
> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, DCZID_BS_SHIFT, 4, 0),
> -	ARM64_FTR_END,
> -};

Why is this dropped?
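
FWIW, DCZID_EL0.BS is what bounds the DC ZVA block size, so if we ever 
rely on DC ZVA (e.g. for scrubbing) the sanitized minimum would matter. 
A sketch, taking a hypothetical sanitized dczid value as argument:

    /* DCZID_EL0.BS (bits [3:0]) is log2 of the block size in 4-byte words. */
    static size_t zva_block_bytes(uint64_t dczid)
    {
        return (size_t)4 << (dczid & 0xf);
    }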

> -
>   static const struct arm64_ftr_bits ftr_id_isar0[] = {
>   	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DIVIDE_SHIFT, 4, 0),
>   	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DEBUG_SHIFT, 4, 0),
> @@ -454,12 +419,6 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
>   	ARM64_FTR_END,
>   };
>   
> -static const struct arm64_ftr_bits ftr_zcr[] = {
> -	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
> -		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
> -	ARM64_FTR_END,
> -};

At some point we will support SVE in Xen. So any reason we would not 
want to keep the code?

> -
>   /*
>    * Common ftr bits for a 32bit register with all hidden, strict
>    * attributes, with 4bit feature fields and a default safe value of
> @@ -478,17 +437,192 @@ static const struct arm64_ftr_bits ftr_generic_32bits[] = {
>   	ARM64_FTR_END,
>   };
>   
> -/* Table for a single 32bit feature value */
> -static const struct arm64_ftr_bits ftr_single32[] = {
> -	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 32, 0),
> -	ARM64_FTR_END,
> -};
> -
> -static const struct arm64_ftr_bits ftr_raz[] = {
> -	ARM64_FTR_END,
> -};
> -
>   /*
>    * End of imported linux structures
>    */
>   
> +static inline int __attribute_const__
> +cpuid_feature_extract_signed_field_width(u64 features, int field, int width)
> +{
> +    return (s64)(features << (64 - width - field)) >> (64 - width);
> +}

Please avoid mixing Xen and Linux coding styles in the same file.

Also, this function and some others below seem to have been taken as-is 
from Linux, so this should at least be mentioned in the commit message.

I would also consider importing them in a separate patch (or maybe in 
patch #2) so it is clear this is not "new" code.
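
For reference, the double shift is the usual sign-extension idiom: move 
the field up to bit 63, then arithmetic-shift it back down. A worked 
example, with values made up for illustration:

    uint64_t features = 0xf0;  /* the 4-bit field at [7:4] holds 0xf */
    int v = cpuid_feature_extract_signed_field_width(features, 4, 4);
    /*
     * features << (64 - 4 - 4) = 0xf000000000000000, and shifting that
     * back down by (64 - 4) as a signed value gives -1. So v == -1,
     * which is how a "not implemented" field (e.g. ID_AA64PFR0_FP_NI)
     * reads when treated as signed.
     */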

> +
> +static inline int __attribute_const__
> +cpuid_feature_extract_signed_field(u64 features, int field)
> +{
> +    return cpuid_feature_extract_signed_field_width(features, field, 4);
> +}
> +
> +static inline unsigned int __attribute_const__
> +cpuid_feature_extract_unsigned_field_width(u64 features, int field, int width)
> +{
> +    return (u64)(features << (64 - width - field)) >> (64 - width);
> +}
> +
> +static inline unsigned int __attribute_const__
> +cpuid_feature_extract_unsigned_field(u64 features, int field)
> +{
> +    return cpuid_feature_extract_unsigned_field_width(features, field, 4);
> +}
> +
> +static inline u64 arm64_ftr_mask(const struct arm64_ftr_bits *ftrp)
> +{
> +    return (u64)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift);
> +}
> +
> +static inline int __attribute_const__
> +cpuid_feature_extract_field_width(u64 features, int field, int width,
> +                                  bool sign)
> +{
> +    return (sign) ?
> +        cpuid_feature_extract_signed_field_width(features, field, width) :
> +        cpuid_feature_extract_unsigned_field_width(features, field, width);
> +}
> +
> +static inline int __attribute_const__
> +cpuid_feature_extract_field(u64 features, int field, bool sign)
> +{
> +    return cpuid_feature_extract_field_width(features, field, 4, sign);
> +}
> +
> +static inline s64 arm64_ftr_value(const struct arm64_ftr_bits *ftrp, u64 val)
> +{
> +    return (s64)cpuid_feature_extract_field_width(val, ftrp->shift,
> +                                                  ftrp->width, ftrp->sign);
> +}
> +
> +static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new,
> +                                s64 cur)
> +{
> +    s64 ret = 0;
> +
> +    switch ( ftrp->type ) {
> +    case FTR_EXACT:
> +        ret = ftrp->safe_val;
> +        break;
> +    case FTR_LOWER_SAFE:
> +        ret = new < cur ? new : cur;
> +        break;
> +    case FTR_HIGHER_OR_ZERO_SAFE:
> +        if (!cur || !new)
> +            break;
> +        fallthrough;
> +    case FTR_HIGHER_SAFE:
> +        ret = new > cur ? new : cur;

We have a macro max() defined in kernel.h.
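
i.e. this could simply be (new and cur are both s64, so the type check 
in max() is happy):

    ret = max(new, cur);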

> +        break;
> +    default:
> +        BUG();
> +    }
> +
> +    return ret;
> +}
> +
> +static void sanitize_reg(u64 *cur_reg, u64 new_reg, const char *reg_name,
> +                        const struct arm64_ftr_bits *ftrp)
> +{
> +    int taint = 0;
> +    u64 old_reg = *cur_reg;
> +
> +    for ( ; ftrp->width !=0 ; ftrp++ )
> +    {
> +        u64 mask;
> +        s64 cur_field = arm64_ftr_value(ftrp, *cur_reg);
> +        s64 new_field = arm64_ftr_value(ftrp, new_reg);
> +
> +        if ( cur_field == new_field )
> +            continue;
> +
> +        if ( ftrp->strict )
> +            taint = 1;
> +
> +        mask = arm64_ftr_mask(ftrp);
> +
> +        *cur_reg &= ~mask;
> +        *cur_reg |= (arm64_ftr_safe_value(ftrp, new_field, cur_field)
> +                    << ftrp->shift) & mask;
> +    }
> +
> +    if ( old_reg != new_reg )
> +        printk(XENLOG_DEBUG "SANITY DIF: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
> +               reg_name, old_reg, new_reg);
> +    if ( old_reg != *cur_reg )
> +        printk(XENLOG_DEBUG "SANITY FIX: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
> +               reg_name, old_reg, *cur_reg);
> +
> +    if ( taint )
> +    {
> +        printk(XENLOG_WARNING "SANITY CHECK: Unexpected variation in %s.\n",
> +                reg_name);
> +        add_taint(TAINT_CPU_OUT_OF_SPEC);
> +    }
> +}
> +
> +
> +/*
> + * This function should be called on secondary cores to sanitize the boot cpu
> + * cpuinfo.

Can we rename boot_cpu_data to system_cpuinfo (or something similar)? 
This would make it clear that these are not only the features of the 
boot CPU.

> + */
> +void sanitize_cpu(const struct cpuinfo_arm *new)

How about naming it "update_system_features()"?

> +{
> +
> +#define SANITIZE_REG(field, num, reg)  \
> +    sanitize_reg(&boot_cpu_data.field.bits[num], new->field.bits[num], \
> +                 #reg, ftr_##reg)
> +
> +#define SANITIZE_ID_REG(field, num, reg)  \
> +    sanitize_reg(&boot_cpu_data.field.bits[num], new->field.bits[num], \
> +                 #reg, ftr_id_##reg)
> +
> +#define SANITIZE_GENERIC_REG(field, num, reg)  \
> +    sanitize_reg(&boot_cpu_data.field.bits[num], new->field.bits[num], \
> +                 #reg, ftr_generic_32bits)
> +
> +    SANITIZE_ID_REG(pfr64, 0, aa64pfr0);
> +    SANITIZE_ID_REG(pfr64, 1, aa64pfr1);
> +
> +    SANITIZE_ID_REG(dbg64, 0, aa64dfr0);
> +
> +    /* aux64 x2 */
> +
> +    SANITIZE_ID_REG(mm64, 0, aa64mmfr0);
> +    SANITIZE_ID_REG(mm64, 1, aa64mmfr1);
> +    SANITIZE_ID_REG(mm64, 2, aa64mmfr2);
> +
> +    SANITIZE_ID_REG(isa64, 0, aa64isar0);
> +    SANITIZE_ID_REG(isa64, 1, aa64isar1);
> +
> +    SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
> +
> +    if ( cpu_feature64_has_el0_32(&boot_cpu_data) )
> +    {
> +        SANITIZE_ID_REG(pfr32, 0, pfr0);
> +        SANITIZE_ID_REG(pfr32, 1, pfr1);
> +        SANITIZE_ID_REG(pfr32, 2, pfr2);
> +
> +        SANITIZE_ID_REG(dbg32, 0, dfr0);
> +        SANITIZE_ID_REG(dbg32, 1, dfr1);
> +
> +        /* aux32 x1 */

What does this comment mean?

> +
> +        SANITIZE_ID_REG(mm32, 0, mmfr0);
> +        SANITIZE_GENERIC_REG(mm32, 1, mmfr1);
> +        SANITIZE_GENERIC_REG(mm32, 2, mmfr2);
> +        SANITIZE_GENERIC_REG(mm32, 3, mmfr3);
> +        SANITIZE_ID_REG(mm32, 4, mmfr4);
> +        SANITIZE_ID_REG(mm32, 5, mmfr5);
> +
> +        SANITIZE_ID_REG(isa32, 0, isar0);
> +        SANITIZE_GENERIC_REG(isa32, 1, isar1);
> +        SANITIZE_GENERIC_REG(isa32, 2, isar2);
> +        SANITIZE_GENERIC_REG(isa32, 3, isar3);
> +        SANITIZE_ID_REG(isa32, 4, isar4);
> +        SANITIZE_ID_REG(isa32, 5, isar5);
> +        SANITIZE_ID_REG(isa32, 6, isar6);
> +
> +        SANITIZE_GENERIC_REG(mvfr, 0, mvfr0);
> +        SANITIZE_GENERIC_REG(mvfr, 1, mvfr1);
> +#ifndef MVFR2_MAYBE_UNDEFINED
> +        SANITIZE_REG(mvfr, 2, mvfr2);
> +#endif
> +    }
> +}
> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> index a1ee3146ef..8fdf5e70d3 100644
> --- a/xen/arch/arm/smpboot.c
> +++ b/xen/arch/arm/smpboot.c
> @@ -307,12 +307,14 @@ void start_secondary(void)
>       set_processor_id(cpuid);
>   
>       identify_cpu(&current_cpu_data);
> +    sanitize_cpu(&current_cpu_data);
>       processor_setup();
>   
>       init_traps();
>   
> +#ifndef CONFIG_ARM_64
>       /*
> -     * Currently Xen assumes the platform has only one kind of CPUs.
> +     * Currently Xen assumes the platform has only one kind of CPU on ARM32.
>        * This assumption does not hold on big.LITTLE platform and may
>        * result to instability and insecure platform (unless cpu affinity
>        * is manually specified for all domains). Better to park them for
> @@ -328,6 +330,7 @@ void start_secondary(void)
>                  boot_cpu_data.midr.bits);
>           stop_cpu();
>       }
> +#endif
>   
>       if ( dcache_line_bytes != read_dcache_line_bytes() )
>       {

Any plan to drop this check?

> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index ba48db3eac..ad34e55cc8 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -330,6 +330,15 @@ extern struct cpuinfo_arm boot_cpu_data;
>   
>   extern void identify_cpu(struct cpuinfo_arm *);
>   
> +#ifdef CONFIG_ARM_64
> +extern void sanitize_cpu(const struct cpuinfo_arm *);
> +#else
> +static inline void sanitize_cpu(const struct cpuinfo_arm *cpuinfo)
> +{
> +    /* Not supported on arm32 */
> +}
> +#endif
> +
>   extern struct cpuinfo_arm cpu_data[];
>   #define current_cpu_data cpu_data[smp_processor_id()]
>   
> diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
> index 1198c7c0b2..c6987973bf 100644
> --- a/xen/include/xen/lib.h
> +++ b/xen/include/xen/lib.h
> @@ -192,6 +192,7 @@ uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c);
>   #define TAINT_ERROR_INJECT              (1u << 2)
>   #define TAINT_HVM_FEP                   (1u << 3)
>   #define TAINT_MACHINE_UNSECURE          (1u << 4)
> +#define TAINT_CPU_OUT_OF_SPEC           (1u << 5)

You want to also update at least print_tainted().
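
A sketch of what I mean (not the actual implementation; the 'S' letter 
mirrors Linux's out-of-spec taint and is only a suggestion):

    /* Make the new taint visible in the "Tainted:" string. */
    char *print_tainted(char *str)
    {
        snprintf(str, TAINT_STRING_MAX_LEN, "Tainted: %c",
                 tainted & TAINT_CPU_OUT_OF_SPEC ? 'S' : ' ');
        return str;
    }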

>   extern unsigned int tainted;
>   #define TAINT_STRING_MAX_LEN            20
>   extern char *print_tainted(char *str);
> 

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 2/4] xen/arm: Import ID features sanitize from linux
  2021-07-12  9:36   ` Julien Grall
@ 2021-07-12 10:50     ` Bertrand Marquis
  2021-07-12 19:03       ` Julien Grall
  0 siblings, 1 reply; 16+ messages in thread
From: Bertrand Marquis @ 2021-07-12 10:50 UTC (permalink / raw)
  To: Julien Grall; +Cc: xen-devel, Stefano Stabellini, Volodymyr Babchuk

Hi Julien,


> On 12 Jul 2021, at 10:36, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 29/06/2021 18:08, Bertrand Marquis wrote:
>> Import structures declared in Linux file arch/arm64/kernel/cpufeature.c
>> and import the required types.
>> Current code has been imported from Linux 5.13-rc5 (Commit ID
>> cd1245d75ce93b8fd206f4b34eb58bcfe156d5e9)
>> Those structures will be used to sanitize the cpu features available to
>> the ones available on all cores of a system even if we are on a
>> heterogeneous platform (for example a big.LITTLE).
>> For each feature field of all ID registers, those structures define what
>> the safest value is and whether we can allow different values on
>> different cores.
>> This patch is introducing Linux code without any changes to it.
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>>  xen/arch/arm/arm64/Makefile      |   1 +
>>  xen/arch/arm/arm64/cpusanitize.c | 494 +++++++++++++++++++++++++++++++
>>  2 files changed, 495 insertions(+)
>>  create mode 100644 xen/arch/arm/arm64/cpusanitize.c
>> diff --git a/xen/arch/arm/arm64/Makefile b/xen/arch/arm/arm64/Makefile
>> index 40642ff574..c626990185 100644
>> --- a/xen/arch/arm/arm64/Makefile
>> +++ b/xen/arch/arm/arm64/Makefile
>> @@ -1,6 +1,7 @@
>>  obj-y += lib/
>>    obj-y += cache.o
>> +obj-y += cpusanitize.o
> 
> Looking at the code, I don't think cpusanitize.c would build after this patch. To allow bisection, this line would need to move to the patch where the file can build.

You are right. I will move the Makefile change to the next patch.

> 
>>  obj-$(CONFIG_HARDEN_BRANCH_PREDICTOR) += bpi.o
>>  obj-$(CONFIG_EARLY_PRINTK) += debug.o
>>  obj-y += domctl.o
>> diff --git a/xen/arch/arm/arm64/cpusanitize.c b/xen/arch/arm/arm64/cpusanitize.c
>> new file mode 100644
>> index 0000000000..4cc8378c14
>> --- /dev/null
>> +++ b/xen/arch/arm/arm64/cpusanitize.c
> 
> Any reason to not stick with the Linux name?

The file in Linux is named cpufeature.c and we already have a cpufeature.c in arch/arm, so I wanted to prevent a clash.
As this one is in the arm64 subdirectory anyway, the clash does not really exist, so I will rename the file to cpufeature.c.

> 
>> @@ -0,0 +1,494 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/*
>> + * Sanitize CPU feature definitions
>> + *
>> + * Copyright (C) 2021 Arm Ltd.
>> + * based on code from the Linux kernel, which is:
>> + *  Copyright (C) 2015 ARM Ltd.
> 
> Linux has a large comment explaining the goal of the file. I think it is worth keeping it for Xen.

Ok

> 
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> 
> This is redundant with the SPDX tag above. Please get rid of one of them.

Ok, I will keep just the SPDX tag and the copyrights.


> 
>> + */
>> +
>> +#include <xen/types.h>
>> +#include <asm/sysregs.h>
>> +#include <asm/cpufeature.h>
>> +
>> +/*
>> + * CPU feature register tracking
>> + *
>> + * The safe value of a CPUID feature field is dependent on the implications
>> + * of the values assigned to it by the architecture. Based on the relationship
>> + * between the values, the features are classified into 3 types - LOWER_SAFE,
>> + * HIGHER_SAFE and EXACT.
>> + *
>> + * The lowest value of all the CPUs is chosen for LOWER_SAFE and highest
>> + * for HIGHER_SAFE. It is expected that all CPUs have the same value for
>> + * a field when EXACT is specified, failing which, the safe value specified
>> + * in the table is chosen.
>> + */
>> +
>> +enum ftr_type {
>> +    FTR_EXACT,               /* Use a predefined safe value */
>> +    FTR_LOWER_SAFE,          /* Smaller value is safe */
>> +    FTR_HIGHER_SAFE,         /* Bigger value is safe */
>> +    FTR_HIGHER_OR_ZERO_SAFE, /* Bigger value is safe, but 0 is biggest */
>> +};
> Please use the Linux coding style to stay consistent with the rest of the file. However, unless there is a reason to do otherwise, I would prefer if the definitions were in a separate header, like Linux does.

Ok, I will apply the Linux coding style to the whole file.

> 
>> +
>> +#define FTR_STRICT      true    /* SANITY check strict matching required */
>> +#define FTR_NONSTRICT   false   /* SANITY check ignored */
>> +
>> +#define FTR_SIGNED      true    /* Value should be treated as signed */
>> +#define FTR_UNSIGNED    false   /* Value should be treated as unsigned */
>> +
>> +#define FTR_VISIBLE true    /* Feature visible to the user space */
>> +#define FTR_HIDDEN  false   /* Feature is hidden from the user */
>> +
>> +#define FTR_VISIBLE_IF_IS_ENABLED(config)       \
>> +    (IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)
>> +
>> +struct arm64_ftr_bits {
>> +    bool    sign;   /* Value is signed ? */
>> +    bool    visible;
>> +    bool    strict; /* CPU Sanity check: strict matching required ? */
>> +    enum ftr_type   type;
>> +    u8      shift;
>> +    u8      width;
>> +    s64     safe_val; /* safe value for FTR_EXACT features */
>> +};
>> +
>> +/*
>> + * NOTE: The following structures are imported directly from Linux kernel and
>> + * should be kept in sync.
>> + * The current version has been imported from arch/arm64/kernel/cpufeature.c
>> + *  from kernel version 5.13-rc5
>> + */
> 
> It feels a bit odd to add this comment in the middle of the definition. It would be better to move it close to the copyright.

Ok.

> 
>> +
>> +#define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
>> +	{						\
>> +		.sign = SIGNED,				\
>> +		.visible = VISIBLE,			\
>> +		.strict = STRICT,			\
>> +		.type = TYPE,				\
>> +		.shift = SHIFT,				\
>> +		.width = WIDTH,				\
>> +		.safe_val = SAFE_VAL,			\
>> +	}
>> +
>> +/* Define a feature with unsigned values */
>> +#define ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
>> +	__ARM64_FTR_BITS(FTR_UNSIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL)
>> +
>> +/* Define a feature with a signed value */
>> +#define S_ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
>> +	__ARM64_FTR_BITS(FTR_SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL)
>> +
>> +#define ARM64_FTR_END					\
>> +	{						\
>> +		.width = 0,				\
>> +	}
>> +
>> +static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
> 
> This function is not defined in the code you import.

I imported the block I am interested in from Linux and I am filtering it
in the next patch, where I remove those function prototypes.

This was to allow easier updates of the code.

Should I filter directly when importing the Linux code then?

> 
>> +
>> +static bool __system_matches_cap(unsigned int n);
> 
> Ditto.

Cheers
Bertrand

> 
>> +
>> +/*
>> + * NOTE: Any changes to the visibility of features should be kept in
>> + * sync with the documentation of the CPU feature register ABI.
>> + */
>> +static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RNDR_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TLB_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TS_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA3_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA1_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_AES_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_I8MM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DGH_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_BF16_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_SPECRES_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_SB_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FRINTTS_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPI_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_GPA_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
>> +		       FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_DIT_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_AMU_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_MPAM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SEL2_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
>> +				   FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
>> +	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
>> +	S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MPAMFRAC_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_RASFRAC_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_MTE),
>> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_MTE_SHIFT, 4, ID_AA64PFR1_MTE_NI),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR1_SSBS_SHIFT, 4, ID_AA64PFR1_SSBS_PSTATE_NI),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_BTI),
>> +				    FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_BT_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
>> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_F64MM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
>> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_F32MM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
>> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_I8MM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
>> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SM4_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
>> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SHA3_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
>> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BF16_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
>> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_BITPERM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
>> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_AES_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
>> +		       FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_SVEVER_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ECV_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_FGT_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EXS_SHIFT, 4, 0),
>> +	/*
>> +	 * Page size not being supported at Stage-2 is not fatal. You
>> +	 * just give up KVM if PAGE_SIZE isn't supported there. Go fix
>> +	 * your favourite nesting hypervisor.
>> +	 *
>> +	 * There is a small corner case where the hypervisor explicitly
>> +	 * advertises a given granule size at Stage-2 (value 2) on some
>> +	 * vCPUs, and uses the fallback to Stage-1 (value 0) for other
>> +	 * vCPUs. Although this is not forbidden by the architecture, it
>> +	 * indicates that the hypervisor is being silly (or buggy).
>> +	 *
>> +	 * We make no effort to cope with this and pretend that if these
>> +	 * fields are inconsistent across vCPUs, then it isn't worth
>> +	 * trying to bring KVM up.
>> +	 */
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN4_2_SHIFT, 4, 1),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN64_2_SHIFT, 4, 1),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_TGRAN16_2_SHIFT, 4, 1),
>> +	/*
>> +	 * We already refuse to boot CPUs that don't support our configured
>> +	 * page size, so we can only detect mismatches for a page size other
>> +	 * than the one we're currently using. Unfortunately, SoCs like this
>> +	 * exist in the wild so, even though we don't like it, we'll have to go
>> +	 * along with it and treat them as non-strict.
>> +	 */
>> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, 4, ID_AA64MMFR0_TGRAN4_NI),
>> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN64_SHIFT, 4, ID_AA64MMFR0_TGRAN64_NI),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN16_SHIFT, 4, ID_AA64MMFR0_TGRAN16_NI),
>> +
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL0_SHIFT, 4, 0),
>> +	/* Linux shouldn't care about secure memory */
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
>> +	/*
>> +	 * Differing PARange is fine as long as all peripherals and memory are mapped
>> +	 * within the minimum PARange of all CPUs
>> +	 */
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_PARANGE_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_ETS_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_TWED_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_XNX_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64MMFR1_SPECSEI_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_LOR_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HPD_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VMIDBITS_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_EVT_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_BBM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_TTL_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IDS_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_ST_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_NV_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CCIDX_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LSM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CNP_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_ctr[] = {
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DIC_SHIFT, 1, 1),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IDC_SHIFT, 1, 1),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_CWG_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_ERG_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1),
>> +	/*
>> +	 * Linux can handle differing I-cache policies. Userspace JITs will
>> +	 * make use of *minLine.
>> +	 * If we have differing I-cache policies, report it as the weakest - VIPT.
>> +	 */
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, CTR_L1IP_SHIFT, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static struct arm64_ftr_override __ro_after_init no_override = { };
>> +
>> +struct arm64_ftr_reg arm64_ftr_reg_ctrel0 = {
>> +	.name		= "SYS_CTR_EL0",
>> +	.ftr_bits	= ftr_ctr,
>> +	.override	= &no_override,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
>> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_INNERSHR_SHIFT, 4, 0xf),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_FCSE_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_MMFR0_AUXREG_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_TCM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_SHARELVL_SHIFT, 4, 0),
>> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_OUTERSHR_SHIFT, 4, 0xf),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_PMSA_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_VMSA_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_aa64dfr0[] = {
>> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_DOUBLELOCK_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64DFR0_PMSVER_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_CTX_CMPS_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_WRPS_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64DFR0_BRPS_SHIFT, 4, 0),
>> +	/*
>> +	 * We can instantiate multiple PMU instances with different levels
>> +	 * of support.
>> +	 */
>> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64DFR0_PMUVER_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_mvfr2[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_FPMISC_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MVFR2_SIMDMISC_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_dczid[] = {
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, DCZID_DZP_SHIFT, 1, 1),
>> +	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, DCZID_BS_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_isar0[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DIVIDE_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DEBUG_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_COPROC_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_CMPBRANCH_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_BITFIELD_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_BITCOUNT_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_SWAP_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_isar5[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_mmfr4[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_EVT_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_CCIDX_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_LSM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_HPDS_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_CNP_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_XNX_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR4_AC2_SHIFT, 4, 0),
>> +
>> +	/*
>> +	 * SpecSEI = 1 indicates that the PE might generate an SError on an
>> +	 * external abort on speculative read. It is safer to assume that an
>> +	 * SError might be generated than that it will not be. Hence it has been
>> +	 * classified as FTR_HIGHER_SAFE.
>> +	 */
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_MMFR4_SPECSEI_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_isar4[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SWP_FRAC_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_PSR_M_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_BARRIER_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_SMC_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_WRITEBACK_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_WITHSHIFTS_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR4_UNPRIV_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_mmfr5[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR5_ETS_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_isar6[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_I8MM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_BF16_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_SPECRES_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_SB_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_FHM_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_DP_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR6_JSCVT_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_pfr0[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_DIT_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR0_CSV2_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE3_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE2_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE1_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR0_STATE0_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_pfr1[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_GIC_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_VIRT_FRAC_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_SEC_FRAC_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_GENTIMER_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_VIRTUALIZATION_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_MPROGMOD_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_SECURITY_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_PFR1_PROGMOD_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_pfr2[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_SSBS_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_PFR2_CSV3_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_dfr0[] = {
>> +	/* [31:28] TraceFilt */
>> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_PERFMON_SHIFT, 4, 0xf),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MPROFDBG_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MMAPTRC_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPTRC_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_MMAPDBG_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPSDBG_SHIFT, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR0_COPDBG_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_id_dfr1[] = {
>> +	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_DFR1_MTPMU_SHIFT, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_zcr[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
>> +		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
>> +	ARM64_FTR_END,
>> +};
>> +
>> +/*
>> + * Common ftr bits for a 32bit register with all hidden, strict
>> + * attributes, with 4bit feature fields and a default safe value of
>> + * 0. Covers the following 32bit registers:
>> + * id_isar[1-4], id_mmfr[1-3], id_pfr1, mvfr[0-1]
>> + */
>> +static const struct arm64_ftr_bits ftr_generic_32bits[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 28, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 24, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 20, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 12, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 8, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 4, 4, 0),
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 0, 4, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +/* Table for a single 32bit feature value */
>> +static const struct arm64_ftr_bits ftr_single32[] = {
>> +	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 32, 0),
>> +	ARM64_FTR_END,
>> +};
>> +
>> +static const struct arm64_ftr_bits ftr_raz[] = {
>> +	ARM64_FTR_END,
>> +};
>> +
>> +/*
>> + * End of imported linux structures
>> + */
>> +
> 
> -- 
> Julien Grall



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields
  2021-07-12 10:16   ` Julien Grall
@ 2021-07-12 11:03     ` Bertrand Marquis
  2021-07-12 11:13       ` Julien Grall
  2021-07-16 17:14     ` Bertrand Marquis
  1 sibling, 1 reply; 16+ messages in thread
From: Bertrand Marquis @ 2021-07-12 11:03 UTC (permalink / raw)
  To: Julien Grall
  Cc: Xen-devel, Stefano Stabellini, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Ian Jackson, Jan Beulich, Wei Liu

Hi Julien,

> On 12 Jul 2021, at 11:16, Julien Grall <julien@xen.org> wrote:
> 
> Hi Bertrand,
> 
> On 29/06/2021 18:08, Bertrand Marquis wrote:
>> Define a sanitize_cpu function to be called on secondary cores to
>> sanitize the cpuinfo structure from the boot CPU.
>> The safest value is taken when possible and the system is marked tainted
>> if we encounter values which are incompatible with each other.
>> Call the sanitize_cpu function on all secondary cores and remove the
>> code disabling secondary cores if they are not the same as the boot core
>> as we are now able to handle this use case.
>> This is only supported on arm64 so cpu_sanitize is an empty static
>> inline on arm32.
>> This patch also removes the code imported from Linux that will not be
>> used by Xen.
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>>  xen/arch/arm/arm64/cpusanitize.c | 236 ++++++++++++++++++++++++-------
>>  xen/arch/arm/smpboot.c           |   5 +-
>>  xen/include/asm-arm/cpufeature.h |   9 ++
>>  xen/include/xen/lib.h            |   1 +
>>  4 files changed, 199 insertions(+), 52 deletions(-)
>> diff --git a/xen/arch/arm/arm64/cpusanitize.c b/xen/arch/arm/arm64/cpusanitize.c
>> index 4cc8378c14..744006ca7c 100644
>> --- a/xen/arch/arm/arm64/cpusanitize.c
>> +++ b/xen/arch/arm/arm64/cpusanitize.c
>> @@ -97,10 +97,6 @@ struct arm64_ftr_bits {
>>  		.width = 0,				\
>>  	}
>>  -static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
>> -
>> -static bool __system_matches_cap(unsigned int n);
>> -
>>  /*
>>   * NOTE: Any changes to the visibility of features should be kept in
>>   * sync with the documentation of the CPU feature register ABI.
>> @@ -277,31 +273,6 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
>>  	ARM64_FTR_END,
>>  };
>>  -static const struct arm64_ftr_bits ftr_ctr[] = {
>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DIC_SHIFT, 1, 1),
>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IDC_SHIFT, 1, 1),
>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_CWG_SHIFT, 4, 0),
>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_ERG_SHIFT, 4, 0),
>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1),
>> -	/*
>> -	 * Linux can handle differing I-cache policies. Userspace JITs will
>> -	 * make use of *minLine.
>> -	 * If we have differing I-cache policies, report it as the weakest - VIPT.
>> -	 */
>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, CTR_L1IP_SHIFT, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
>> -	ARM64_FTR_END,
>> -};
> 
> I don't think this should be dropped. Xen will need to know the safest cacheline size that can be used for cache maintenance instructions.

I will surround those entries by #if 0 instead
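
To illustrate the follow-up I have in mind: once ftr_ctr is sanitized, the
safe DminLine value could be used to derive the data cache line size. A
rough sketch, assuming the sanitized CTR value ends up in a system-wide
cpuinfo (the function name is illustrative):

    /* CTR_EL0.DminLine is log2 of the number of words (4 bytes each). */
    static inline unsigned int sanitized_dcache_line_bytes(uint64_t ctr)
    {
        unsigned int dminline =
            cpuid_feature_extract_unsigned_field_width(ctr,
                                                       CTR_DMINLINE_SHIFT, 4);

        return 4U << dminline;
    }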

> 
>> -
>> -static struct arm64_ftr_override __ro_after_init no_override = { };
>> -
>> -struct arm64_ftr_reg arm64_ftr_reg_ctrel0 = {
>> -	.name		= "SYS_CTR_EL0",
>> -	.ftr_bits	= ftr_ctr,
>> -	.override	= &no_override,
>> -};
>> -
>>  static const struct arm64_ftr_bits ftr_id_mmfr0[] = {
>>  	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_INNERSHR_SHIFT, 4, 0xf),
>>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_MMFR0_FCSE_SHIFT, 4, 0),
>> @@ -335,12 +306,6 @@ static const struct arm64_ftr_bits ftr_mvfr2[] = {
>>  	ARM64_FTR_END,
>>  };
>>  -static const struct arm64_ftr_bits ftr_dczid[] = {
>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, DCZID_DZP_SHIFT, 1, 1),
>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, DCZID_BS_SHIFT, 4, 0),
>> -	ARM64_FTR_END,
>> -};
> 
> Why is this dropped?

Same, I will keep that with #if 0.

> 
>> -
>>  static const struct arm64_ftr_bits ftr_id_isar0[] = {
>>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DIVIDE_SHIFT, 4, 0),
>>  	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR0_DEBUG_SHIFT, 4, 0),
>> @@ -454,12 +419,6 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
>>  	ARM64_FTR_END,
>>  };
>>  -static const struct arm64_ftr_bits ftr_zcr[] = {
>> -	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE,
>> -		ZCR_ELx_LEN_SHIFT, ZCR_ELx_LEN_SIZE, 0),	/* LEN */
>> -	ARM64_FTR_END,
>> -};
> 
> At some point we will support SVE in Xen. So any reason we would not want to keep the code?

Same, I will keep that with #if 0.

> 
>> -
>>  /*
>>   * Common ftr bits for a 32bit register with all hidden, strict
>>   * attributes, with 4bit feature fields and a default safe value of
>> @@ -478,17 +437,192 @@ static const struct arm64_ftr_bits ftr_generic_32bits[] = {
>>  	ARM64_FTR_END,
>>  };
>>  -/* Table for a single 32bit feature value */
>> -static const struct arm64_ftr_bits ftr_single32[] = {
>> -	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, 0, 32, 0),
>> -	ARM64_FTR_END,
>> -};
>> -
>> -static const struct arm64_ftr_bits ftr_raz[] = {
>> -	ARM64_FTR_END,
>> -};
>> -
>>  /*
>>   * End of imported linux structures
>>   */
>>  +static inline int __attribute_const__
>> +cpuid_feature_extract_signed_field_width(u64 features, int field, int width)
>> +{
>> +    return (s64)(features << (64 - width - field)) >> (64 - width);
>> +}
> 
> Please avoid to mix Xen and Linux coding style in the same file.
> 
> Also, this function and some others below seems to have been taken as-is from Linux. So this should at least be mentionned in the commit message.
> 
> I would also consider to import them in a separate patch (or maybe in patch#2) so it is clear this is not "new" code.

I will move those into patch 2 and keep the Linux code.

> 
>> +
>> +static inline int __attribute_const__
>> +cpuid_feature_extract_signed_field(u64 features, int field)
>> +{
>> +    return cpuid_feature_extract_signed_field_width(features, field, 4);
>> +}
>> +
>> +static inline unsigned int __attribute_const__
>> +cpuid_feature_extract_unsigned_field_width(u64 features, int field, int width)
>> +{
>> +    return (u64)(features << (64 - width - field)) >> (64 - width);
>> +}
>> +
>> +static inline unsigned int __attribute_const__
>> +cpuid_feature_extract_unsigned_field(u64 features, int field)
>> +{
>> +    return cpuid_feature_extract_unsigned_field_width(features, field, 4);
>> +}
>> +
>> +static inline u64 arm64_ftr_mask(const struct arm64_ftr_bits *ftrp)
>> +{
>> +    return (u64)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift);
>> +}
>> +
>> +static inline int __attribute_const__
>> +cpuid_feature_extract_field_width(u64 features, int field, int width,
>> +                                  bool sign)
>> +{
>> +    return (sign) ?
>> +        cpuid_feature_extract_signed_field_width(features, field, width) :
>> +        cpuid_feature_extract_unsigned_field_width(features, field, width);
>> +}
>> +
>> +static inline int __attribute_const__
>> +cpuid_feature_extract_field(u64 features, int field, bool sign)
>> +{
>> +    return cpuid_feature_extract_field_width(features, field, 4, sign);
>> +}
>> +
>> +static inline s64 arm64_ftr_value(const struct arm64_ftr_bits *ftrp, u64 val)
>> +{
>> +    return (s64)cpuid_feature_extract_field_width(val, ftrp->shift,
>> +                                                  ftrp->width, ftrp->sign);
>> +}
>> +
>> +static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new,
>> +                                s64 cur)
>> +{
>> +    s64 ret = 0;
>> +
>> +    switch ( ftrp->type ) {
>> +    case FTR_EXACT:
>> +        ret = ftrp->safe_val;
>> +        break;
>> +    case FTR_LOWER_SAFE:
>> +        ret = new < cur ? new : cur;
>> +        break;
>> +    case FTR_HIGHER_OR_ZERO_SAFE:
>> +        if (!cur || !new)
>> +            break;
>> +        fallthrough;
>> +    case FTR_HIGHER_SAFE:
>> +        ret = new > cur ? new : cur;
> 
> We have a macro max() defined in kernel.h.

Right, I will use that instead.
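
Something like this for the safe value selection then (a sketch of the
same switch using the min/max macros from kernel.h):

    case FTR_LOWER_SAFE:
        ret = min(new, cur);
        break;
    case FTR_HIGHER_OR_ZERO_SAFE:
        if ( !cur || !new )
            break;
        fallthrough;
    case FTR_HIGHER_SAFE:
        ret = max(new, cur);
        break;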

> 
>> +        break;
>> +    default:
>> +        BUG();
>> +    }
>> +
>> +    return ret;
>> +}
>> +
>> +static void sanitize_reg(u64 *cur_reg, u64 new_reg, const char *reg_name,
>> +                        const struct arm64_ftr_bits *ftrp)
>> +{
>> +    int taint = 0;
>> +    u64 old_reg = *cur_reg;
>> +
>> +    for ( ; ftrp->width !=0 ; ftrp++ )
>> +    {
>> +        u64 mask;
>> +        s64 cur_field = arm64_ftr_value(ftrp, *cur_reg);
>> +        s64 new_field = arm64_ftr_value(ftrp, new_reg);
>> +
>> +        if ( cur_field == new_field )
>> +            continue;
>> +
>> +        if ( ftrp->strict )
>> +            taint = 1;
>> +
>> +        mask = arm64_ftr_mask(ftrp);
>> +
>> +        *cur_reg &= ~mask;
>> +        *cur_reg |= (arm64_ftr_safe_value(ftrp, new_field, cur_field)
>> +                    << ftrp->shift) & mask;
>> +    }
>> +
>> +    if ( old_reg != new_reg )
>> +        printk(XENLOG_DEBUG "SANITY DIF: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
>> +               reg_name, old_reg, new_reg);
>> +    if ( old_reg != *cur_reg )
>> +        printk(XENLOG_DEBUG "SANITY FIX: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
>> +               reg_name, old_reg, *cur_reg);
>> +
>> +    if ( taint )
>> +    {
>> +        printk(XENLOG_WARNING "SANITY CHECK: Unexpected variation in %s.\n",
>> +                reg_name);
>> +        add_taint(TAINT_CPU_OUT_OF_SPEC);
>> +    }
>> +}
>> +
>> +
>> +/*
>> + * This function should be called on secondary cores to sanitize the boot cpu
>> + * cpuinfo.
> 
> Can we rename boot_cpu_data to system_cpuinfo (or something similar)? This would make it clear that these are not only the features of the boot CPU.

Ok I will do that.

> 
>> + */
>> +void sanitize_cpu(const struct cpuinfo_arm *new)
> 
> How about naming it to "update_system_features()"?

Ok

> 
>> +{
>> +
>> +#define SANITIZE_REG(field, num, reg)  \
>> +    sanitize_reg(&boot_cpu_data.field.bits[num], new->field.bits[num], \
>> +                 #reg, ftr_##reg)
>> +
>> +#define SANITIZE_ID_REG(field, num, reg)  \
>> +    sanitize_reg(&boot_cpu_data.field.bits[num], new->field.bits[num], \
>> +                 #reg, ftr_id_##reg)
>> +
>> +#define SANITIZE_GENERIC_REG(field, num, reg)  \
>> +    sanitize_reg(&boot_cpu_data.field.bits[num], new->field.bits[num], \
>> +                 #reg, ftr_generic_32bits)
>> +
>> +    SANITIZE_ID_REG(pfr64, 0, aa64pfr0);
>> +    SANITIZE_ID_REG(pfr64, 1, aa64pfr1);
>> +
>> +    SANITIZE_ID_REG(dbg64, 0, aa64dfr0);
>> +
>> +    /* aux64 x2 */
>> +
>> +    SANITIZE_ID_REG(mm64, 0, aa64mmfr0);
>> +    SANITIZE_ID_REG(mm64, 1, aa64mmfr1);
>> +    SANITIZE_ID_REG(mm64, 2, aa64mmfr2);
>> +
>> +    SANITIZE_ID_REG(isa64, 0, aa64isar0);
>> +    SANITIZE_ID_REG(isa64, 1, aa64isar1);
>> +
>> +    SANITIZE_ID_REG(zfr64, 0, aa64zfr0);
>> +
>> +    if ( cpu_feature64_has_el0_32(&boot_cpu_data) )
>> +    {
>> +        SANITIZE_ID_REG(pfr32, 0, pfr0);
>> +        SANITIZE_ID_REG(pfr32, 1, pfr1);
>> +        SANITIZE_ID_REG(pfr32, 2, pfr2);
>> +
>> +        SANITIZE_ID_REG(dbg32, 0, dfr0);
>> +        SANITIZE_ID_REG(dbg32, 1, dfr1);
>> +
>> +        /* aux32 x1 */
> 
> What does this comment mean?

A leftover from development that I should turn into a proper comment.
It was there because I am not sanitizing one aux32 register.
The same goes for the earlier comment about aux64.

I will remove them.

> 
>> +
>> +        SANITIZE_ID_REG(mm32, 0, mmfr0);
>> +        SANITIZE_GENERIC_REG(mm32, 1, mmfr1);
>> +        SANITIZE_GENERIC_REG(mm32, 2, mmfr2);
>> +        SANITIZE_GENERIC_REG(mm32, 3, mmfr3);
>> +        SANITIZE_ID_REG(mm32, 4, mmfr4);
>> +        SANITIZE_ID_REG(mm32, 5, mmfr5);
>> +
>> +        SANITIZE_ID_REG(isa32, 0, isar0);
>> +        SANITIZE_GENERIC_REG(isa32, 1, isar1);
>> +        SANITIZE_GENERIC_REG(isa32, 2, isar2);
>> +        SANITIZE_GENERIC_REG(isa32, 3, isar3);
>> +        SANITIZE_ID_REG(isa32, 4, isar4);
>> +        SANITIZE_ID_REG(isa32, 5, isar5);
>> +        SANITIZE_ID_REG(isa32, 6, isar6);
>> +
>> +        SANITIZE_GENERIC_REG(mvfr, 0, mvfr0);
>> +        SANITIZE_GENERIC_REG(mvfr, 1, mvfr1);
>> +#ifndef MVFR2_MAYBE_UNDEFINED
>> +        SANITIZE_REG(mvfr, 2, mvfr2);
>> +#endif
>> +    }
>> +}
>> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
>> index a1ee3146ef..8fdf5e70d3 100644
>> --- a/xen/arch/arm/smpboot.c
>> +++ b/xen/arch/arm/smpboot.c
>> @@ -307,12 +307,14 @@ void start_secondary(void)
>>      set_processor_id(cpuid);
>>        identify_cpu(&current_cpu_data);
>> +    sanitize_cpu(&current_cpu_data);
>>      processor_setup();
>>        init_traps();
>>  +#ifndef CONFIG_ARM_64
>>      /*
>> -     * Currently Xen assumes the platform has only one kind of CPUs.
>> +     * Currently Xen assumes the platform has only one kind of CPUs on ARM32.
>>       * This assumption does not hold on big.LITTLE platform and may
>>       * result to instability and insecure platform (unless cpu affinity
>>       * is manually specified for all domains). Better to park them for
>> @@ -328,6 +330,7 @@ void start_secondary(void)
>>                 boot_cpu_data.midr.bits);
>>          stop_cpu();
>>      }
>> +#endif
>>        if ( dcache_line_bytes != read_dcache_line_bytes() )
>>      {
> 
> Any plan to drop this check?

Yes, this should be done as a next patch in the series once we have
validated the way to go for the base.

> 
>> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
>> index ba48db3eac..ad34e55cc8 100644
>> --- a/xen/include/asm-arm/cpufeature.h
>> +++ b/xen/include/asm-arm/cpufeature.h
>> @@ -330,6 +330,15 @@ extern struct cpuinfo_arm boot_cpu_data;
>>    extern void identify_cpu(struct cpuinfo_arm *);
>>  +#ifdef CONFIG_ARM_64
>> +extern void sanitize_cpu(const struct cpuinfo_arm *);
>> +#else
>> +static inline void sanitize_cpu(const struct cpuinfo_arm *cpuinfo)
>> +{
>> +    /* Not supported on arm32 */
>> +}
>> +#endif
>> +
>>  extern struct cpuinfo_arm cpu_data[];
>>  #define current_cpu_data cpu_data[smp_processor_id()]
>>  diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
>> index 1198c7c0b2..c6987973bf 100644
>> --- a/xen/include/xen/lib.h
>> +++ b/xen/include/xen/lib.h
>> @@ -192,6 +192,7 @@ uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c);
>>  #define TAINT_ERROR_INJECT              (1u << 2)
>>  #define TAINT_HVM_FEP                   (1u << 3)
>>  #define TAINT_MACHINE_UNSECURE          (1u << 4)
>> +#define TAINT_CPU_OUT_OF_SPEC           (1u << 5)
> 
> You want to also update at least print_tainted().

Oh yes I will fix that in the next version.
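
For reference, I expect the change to be as small as one extra flag
character. A sketch (the exact set and order of the existing flag letters
in print_tainted() may differ from what I show here):

    char *print_tainted(char *str)
    {
        if ( tainted )
            snprintf(str, TAINT_STRING_MAX_LEN, "Tainted: %c%c%c%c%c",
                     tainted & TAINT_MACHINE_UNSECURE ? 'U' : ' ',
                     tainted & TAINT_MACHINE_CHECK ? 'M' : ' ',
                     tainted & TAINT_SYNC_CONSOLE ? 'C' : ' ',
                     tainted & TAINT_ERROR_INJECT ? 'E' : ' ',
                     tainted & TAINT_CPU_OUT_OF_SPEC ? 'S' : ' ');
        else
            snprintf(str, TAINT_STRING_MAX_LEN, "Not tainted");

        return str;
    }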

Cheers
Bertrand

> 
>>  extern unsigned int tainted;
>>  #define TAINT_STRING_MAX_LEN            20
>>  extern char *print_tainted(char *str);
> 
> -- 
> Julien Grall



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields
  2021-07-12 11:03     ` Bertrand Marquis
@ 2021-07-12 11:13       ` Julien Grall
  2021-07-12 16:29         ` Bertrand Marquis
  0 siblings, 1 reply; 16+ messages in thread
From: Julien Grall @ 2021-07-12 11:13 UTC (permalink / raw)
  To: Bertrand Marquis
  Cc: Xen-devel, Stefano Stabellini, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Ian Jackson, Jan Beulich, Wei Liu



On 12/07/2021 12:03, Bertrand Marquis wrote:
> Hi Julien,

Hi Bertrand,

>> On 12 Jul 2021, at 11:16, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Bertrand,
>>
>> On 29/06/2021 18:08, Bertrand Marquis wrote:
>>> Define a sanitize_cpu function to be called on secondary cores to
>>> sanitize the cpuinfo structure from the boot CPU.
>>> The safest value is taken when possible and the system is marked tainted
>>> if we encounter values which are incompatible with each other.
>>> Call the sanitize_cpu function on all secondary cores and remove the
>>> code disabling secondary cores if they are not the same as the boot core
>>> as we are now able to handle this use case.
>>> This is only supported on arm64 so cpu_sanitize is an empty static
>>> inline on arm32.
>>> This patch also removes the code imported from Linux that will not be
>>> used by Xen.
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> ---
>>>   xen/arch/arm/arm64/cpusanitize.c | 236 ++++++++++++++++++++++++-------
>>>   xen/arch/arm/smpboot.c           |   5 +-
>>>   xen/include/asm-arm/cpufeature.h |   9 ++
>>>   xen/include/xen/lib.h            |   1 +
>>>   4 files changed, 199 insertions(+), 52 deletions(-)
>>> diff --git a/xen/arch/arm/arm64/cpusanitize.c b/xen/arch/arm/arm64/cpusanitize.c
>>> index 4cc8378c14..744006ca7c 100644
>>> --- a/xen/arch/arm/arm64/cpusanitize.c
>>> +++ b/xen/arch/arm/arm64/cpusanitize.c
>>> @@ -97,10 +97,6 @@ struct arm64_ftr_bits {
>>>   		.width = 0,				\
>>>   	}
>>>   -static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
>>> -
>>> -static bool __system_matches_cap(unsigned int n);
>>> -
>>>   /*
>>>    * NOTE: Any changes to the visibility of features should be kept in
>>>    * sync with the documentation of the CPU feature register ABI.
>>> @@ -277,31 +273,6 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
>>>   	ARM64_FTR_END,
>>>   };
>>>   -static const struct arm64_ftr_bits ftr_ctr[] = {
>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DIC_SHIFT, 1, 1),
>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IDC_SHIFT, 1, 1),
>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_CWG_SHIFT, 4, 0),
>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_ERG_SHIFT, 4, 0),
>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1),
>>> -	/*
>>> -	 * Linux can handle differing I-cache policies. Userspace JITs will
>>> -	 * make use of *minLine.
>>> -	 * If we have differing I-cache policies, report it as the weakest - VIPT.
>>> -	 */
>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, CTR_L1IP_SHIFT, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
>>> -	ARM64_FTR_END,
>>> -};
>>
>> I don't think this should be dropped. Xen will need to know the safest cacheline size that can be used for cache maintenance instructions.
> 
> I will surround those entries by #if 0 instead

But why can't this just be sanitized like the other registers? If this is
just a matter of a missing structure in Xen, then we should add one.

The same goes for the other 2 below.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields
  2021-07-12 11:13       ` Julien Grall
@ 2021-07-12 16:29         ` Bertrand Marquis
  2021-07-12 18:53           ` Julien Grall
  0 siblings, 1 reply; 16+ messages in thread
From: Bertrand Marquis @ 2021-07-12 16:29 UTC (permalink / raw)
  To: Julien Grall
  Cc: Xen-devel, Stefano Stabellini, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Ian Jackson, Jan Beulich, Wei Liu

Hi Julien,

> On 12 Jul 2021, at 12:13, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 12/07/2021 12:03, Bertrand Marquis wrote:
>> Hi Julien,
> 
> Hi Bertrand,
> 
>>> On 12 Jul 2021, at 11:16, Julien Grall <julien@xen.org> wrote:
>>> 
>>> Hi Bertrand,
>>> 
>>> On 29/06/2021 18:08, Bertrand Marquis wrote:
>>>> Define a sanitize_cpu function to be called on secondary cores to
>>>> sanitize the cpuinfo structure from the boot CPU.
>>>> The safest value is taken when possible and the system is marked tainted
>>>> if we encounter values which are incompatible with each other.
>>>> Call the sanitize_cpu function on all secondary cores and remove the
>>>> code disabling secondary cores if they are not the same as the boot core
>>>> as we are now able to handle this use case.
>>>> This is only supported on arm64 so cpu_sanitize is an empty static
>>>> inline on arm32.
>>>> This patch also removes the code imported from Linux that will not be
>>>> used by Xen.
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>> ---
>>>>  xen/arch/arm/arm64/cpusanitize.c | 236 ++++++++++++++++++++++++-------
>>>>  xen/arch/arm/smpboot.c           |   5 +-
>>>>  xen/include/asm-arm/cpufeature.h |   9 ++
>>>>  xen/include/xen/lib.h            |   1 +
>>>>  4 files changed, 199 insertions(+), 52 deletions(-)
>>>> diff --git a/xen/arch/arm/arm64/cpusanitize.c b/xen/arch/arm/arm64/cpusanitize.c
>>>> index 4cc8378c14..744006ca7c 100644
>>>> --- a/xen/arch/arm/arm64/cpusanitize.c
>>>> +++ b/xen/arch/arm/arm64/cpusanitize.c
>>>> @@ -97,10 +97,6 @@ struct arm64_ftr_bits {
>>>>  		.width = 0,				\
>>>>  	}
>>>>  -static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
>>>> -
>>>> -static bool __system_matches_cap(unsigned int n);
>>>> -
>>>>  /*
>>>>   * NOTE: Any changes to the visibility of features should be kept in
>>>>   * sync with the documentation of the CPU feature register ABI.
>>>> @@ -277,31 +273,6 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
>>>>  	ARM64_FTR_END,
>>>>  };
>>>>  -static const struct arm64_ftr_bits ftr_ctr[] = {
>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DIC_SHIFT, 1, 1),
>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IDC_SHIFT, 1, 1),
>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_CWG_SHIFT, 4, 0),
>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_ERG_SHIFT, 4, 0),
>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1),
>>>> -	/*
>>>> -	 * Linux can handle differing I-cache policies. Userspace JITs will
>>>> -	 * make use of *minLine.
>>>> -	 * If we have differing I-cache policies, report it as the weakest - VIPT.
>>>> -	 */
>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, CTR_L1IP_SHIFT, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
>>>> -	ARM64_FTR_END,
>>>> -};
>>> 
>>> I don't think this should be dropped. Xen will need to know the safest cacheline size that can be used for cache maintenance instructions.
>> I will surround those entries by #if 0 instead
> 
> But why can't this just be sanitized like the other registers? If this is just a matter of a missing structure in Xen, then we should add one.
> 
> The same goes for the other 2 below.

The point of the RFC was to validate the approach and put the base in place.

Those require changes outside of the cpuinfo and are on my todo list too.
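
To give a rough idea of the direction (only a sketch, and the names are
illustrative): the missing piece is mostly a slot for the register in the
cpuinfo plus one more sanitize call, e.g.:

    /* In struct cpuinfo_arm: storage for CTR_EL0 */
    uint64_t ctr;

    /* In the sanitize function called on secondary cores */
    sanitize_reg(&system_cpuinfo.ctr, new->ctr, "CTR_EL0", ftr_ctr);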

Regards
Bertrand

> 
> Cheers,
> 
> -- 
> Julien Grall



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields
  2021-07-12 16:29         ` Bertrand Marquis
@ 2021-07-12 18:53           ` Julien Grall
  0 siblings, 0 replies; 16+ messages in thread
From: Julien Grall @ 2021-07-12 18:53 UTC (permalink / raw)
  To: Bertrand Marquis
  Cc: Xen-devel, Stefano Stabellini, Volodymyr Babchuk, Andrew Cooper,
	George Dunlap, Ian Jackson, Jan Beulich, Wei Liu

Hi Bertrand,

On 12/07/2021 17:29, Bertrand Marquis wrote:
>> On 12 Jul 2021, at 12:13, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 12/07/2021 12:03, Bertrand Marquis wrote:
>>> Hi Julien,
>>
>> Hi Bertrand,
>>
>>>> On 12 Jul 2021, at 11:16, Julien Grall <julien@xen.org> wrote:
>>>>
>>>> Hi Bertrand,
>>>>
>>>> On 29/06/2021 18:08, Bertrand Marquis wrote:
>>>>> Define a sanitize_cpu function to be called on secondary cores to
>>>>> sanitize the cpuinfo structure from the boot CPU.
>>>>> The safest value is taken when possible and the system is marked tainted
>>>>> if we encounter values which are incompatible with each other.
>>>>> Call the sanitize_cpu function on all secondary cores and remove the
>>>>> code disabling secondary cores if they are not the same as the boot core
>>>>> as we are now able to handle this use case.
>>>>> This is only supported on arm64 so cpu_sanitize is an empty static
>>>>> inline on arm32.
>>>>> This patch also removes the code imported from Linux that will not be
>>>>> used by Xen.
>>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>> ---
>>>>>   xen/arch/arm/arm64/cpusanitize.c | 236 ++++++++++++++++++++++++-------
>>>>>   xen/arch/arm/smpboot.c           |   5 +-
>>>>>   xen/include/asm-arm/cpufeature.h |   9 ++
>>>>>   xen/include/xen/lib.h            |   1 +
>>>>>   4 files changed, 199 insertions(+), 52 deletions(-)
>>>>> diff --git a/xen/arch/arm/arm64/cpusanitize.c b/xen/arch/arm/arm64/cpusanitize.c
>>>>> index 4cc8378c14..744006ca7c 100644
>>>>> --- a/xen/arch/arm/arm64/cpusanitize.c
>>>>> +++ b/xen/arch/arm/arm64/cpusanitize.c
>>>>> @@ -97,10 +97,6 @@ struct arm64_ftr_bits {
>>>>>   		.width = 0,				\
>>>>>   	}
>>>>>   -static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
>>>>> -
>>>>> -static bool __system_matches_cap(unsigned int n);
>>>>> -
>>>>>   /*
>>>>>    * NOTE: Any changes to the visibility of features should be kept in
>>>>>    * sync with the documentation of the CPU feature register ABI.
>>>>> @@ -277,31 +273,6 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
>>>>>   	ARM64_FTR_END,
>>>>>   };
>>>>>   -static const struct arm64_ftr_bits ftr_ctr[] = {
>>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
>>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DIC_SHIFT, 1, 1),
>>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IDC_SHIFT, 1, 1),
>>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_CWG_SHIFT, 4, 0),
>>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_ERG_SHIFT, 4, 0),
>>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1),
>>>>> -	/*
>>>>> -	 * Linux can handle differing I-cache policies. Userspace JITs will
>>>>> -	 * make use of *minLine.
>>>>> -	 * If we have differing I-cache policies, report it as the weakest - VIPT.
>>>>> -	 */
>>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, CTR_L1IP_SHIFT, 2, ICACHE_POLICY_VIPT),	/* L1Ip */
>>>>> -	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IMINLINE_SHIFT, 4, 0),
>>>>> -	ARM64_FTR_END,
>>>>> -};
>>>>
>>>> I don't think this should be dropped. Xen will need to know the safest cacheline size that can be used for cache maintenance instructions.
>>> I will surround those entries by #if 0 instead
>>
>> But why can't this just be sanitized like the other registers? If this is just a matter of a missing structure in Xen, then we should add one.
>>
>> The same goes for the other 2 below.
> 
> The point of the RFC was to validate the approach and put the base in place.

Right... I was commenting on your suggestion to switch to #if 0 for the 
next version. I assume this will be a non-RFC and...

> Those require changes outside of the cpuinfo and are on my todo list too
... it wasn't clear to me that the change on the cpuinfo was on your 
TODO list for the next version.

So I prefer to ask before you spend time working on a change that I may 
disagree with ;).

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 2/4] xen/arm: Import ID features sanitize from linux
  2021-07-12 10:50     ` Bertrand Marquis
@ 2021-07-12 19:03       ` Julien Grall
  0 siblings, 0 replies; 16+ messages in thread
From: Julien Grall @ 2021-07-12 19:03 UTC (permalink / raw)
  To: Bertrand Marquis; +Cc: xen-devel, Stefano Stabellini, Volodymyr Babchuk

Hi Bertrand,

On 12/07/2021 11:50, Bertrand Marquis wrote:
>>> +#define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
>>> +	{						\
>>> +		.sign = SIGNED,				\
>>> +		.visible = VISIBLE,			\
>>> +		.strict = STRICT,			\
>>> +		.type = TYPE,				\
>>> +		.shift = SHIFT,				\
>>> +		.width = WIDTH,				\
>>> +		.safe_val = SAFE_VAL,			\
>>> +	}
>>> +
>>> +/* Define a feature with unsigned values */
>>> +#define ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
>>> +	__ARM64_FTR_BITS(FTR_UNSIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL)
>>> +
>>> +/* Define a feature with a signed value */
>>> +#define S_ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
>>> +	__ARM64_FTR_BITS(FTR_SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL)
>>> +
>>> +#define ARM64_FTR_END					\
>>> +	{						\
>>> +		.width = 0,				\
>>> +	}
>>> +
>>> +static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);
>>
>> This function is not defined in the code you import.
> 
> I imported the block I am interested in from Linux, and I am filtering it in the
> next patch, where I remove those function prototypes.
I find it a bit confusing because most of the code imported makes sense 
except the two prototypes. At the same time...

> 
> This was to allow easier update of the code.

... I agree with this because if we need a resync of this patch, we may 
inadvertently re-introduce the prototype. So...

> 
> Should I filter directly when importing the Linux code then?

... I will leave that up to you.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields
  2021-07-12 10:16   ` Julien Grall
  2021-07-12 11:03     ` Bertrand Marquis
@ 2021-07-16 17:14     ` Bertrand Marquis
  2021-07-19  8:58       ` Julien Grall
  1 sibling, 1 reply; 16+ messages in thread
From: Bertrand Marquis @ 2021-07-16 17:14 UTC (permalink / raw)
  To: Julien Grall; +Cc: Xen-devel, Stefano Stabellini

Hi Julien

[…]
>> 
>> +
>> +    if ( old_reg != new_reg )
>> +        printk(XENLOG_DEBUG "SANITY DIF: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
>> +               reg_name, old_reg, new_reg);
>> +    if ( old_reg != *cur_reg )
>> +        printk(XENLOG_DEBUG "SANITY FIX: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
>> +               reg_name, old_reg, *cur_reg);
>> +
>> +    if ( taint )
>> +    {
>> +        printk(XENLOG_WARNING "SANITY CHECK: Unexpected variation in %s.\n",
>> +                reg_name);
>> +        add_taint(TAINT_CPU_OUT_OF_SPEC);
>> +    }
>> +}
>> +
>> +
>> +/*
>> + * This function should be called on secondary cores to sanitize the boot cpu
>> + * cpuinfo.
> 
> Can we rename boot_cpu_data to system_cpuinfo (or something similar)? This would make it clear that these are not only the features of the boot CPU.

While looking at this request, I checked a bit how boot_cpu_data and cpu_data overall are used:
- boot_cpu_data is only used in setup.c, by boot_cpu_features macros, in smpboot to retrieve the bootcpu midr, in p2m and by cpufeatures
- cpu_data[] is used in smpboot, in errata handling to test for csv2, and in vcpreg to access the midr

So we have a bunch of cpuinfo structures as global variables, but most of them are not really used, or did I miss something?

So I am wondering if we should not reduce a bit the amount of global data and:
- introduce a global system_cpuinfo
- remove cpu_data[] and use a temporary structure on the stack of the booting CPU
- read midr directly in vcpreg
- use boot_cpu_data in errata for csv2
- use system_cpuinfo in p2m

It could even be possible to remove boot_cpu_data, put it on the stack of the boot cpu, and only use system_cpuinfo, but I did not investigate this.
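
For the vcpreg item above I mean something like the following (a sketch,
assuming arm64):

    const register_t midr = READ_SYSREG(MIDR_EL1);

instead of going through cpu_data[].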

Cheers
Bertrand


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields
  2021-07-16 17:14     ` Bertrand Marquis
@ 2021-07-19  8:58       ` Julien Grall
  2021-08-03  9:57         ` Bertrand Marquis
  0 siblings, 1 reply; 16+ messages in thread
From: Julien Grall @ 2021-07-19  8:58 UTC (permalink / raw)
  To: Bertrand Marquis; +Cc: Xen-devel, Stefano Stabellini



On 16/07/2021 18:14, Bertrand Marquis wrote:
> Hi Julien

Hi Bertrand,

> […]
>>>
>>> +
>>> +    if ( old_reg != new_reg )
>>> +        printk(XENLOG_DEBUG "SANITY DIF: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
>>> +               reg_name, old_reg, new_reg);
>>> +    if ( old_reg != *cur_reg )
>>> +        printk(XENLOG_DEBUG "SANITY FIX: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
>>> +               reg_name, old_reg, *cur_reg);
>>> +
>>> +    if ( taint )
>>> +    {
>>> +        printk(XENLOG_WARNING "SANITY CHECK: Unexpected variation in %s.\n",
>>> +                reg_name);
>>> +        add_taint(TAINT_CPU_OUT_OF_SPEC);
>>> +    }
>>> +}
>>> +
>>> +
>>> +/*
>>> + * This function should be called on secondary cores to sanitize the boot cpu
>>> + * cpuinfo.
>>
>> Can we rename boot_cpu_data to system_cpuinfo (or something similar)? This would make it clear that these are not only the features of the boot CPU.
> 
> While looking at this request, I checked a bit how boot_cpu_data and cpu_data overall are used:
> - boot_cpu_data is only used in setup.c, by boot_cpu_features macros, in smpboot to retrieve the bootcpu midr, in p2m and by cpufeatures
> - cpu_data[] is used in smpboot, in errata handling to test for csv2, and in vcpreg to access the midr
> 
> So we have a bunch of cpuinfo structures as global variables, but most of them are not really used, or did I miss something?

While I agree this is not useful today, the idea is that we can easily
find out what features each processor supports. This could be useful if
we wanted to expose big.LITTLE to the guest.

For instance, imagine you have a system where 32-bit EL1 is supported only
on some processors. With a global approach, we would say "32-bit EL1 is
not supported". That would prevent a user from using the system to its
full advantage.

Note that I am not asking to implement such things today... This is more 
to show that we will likely want to keep the per-CPU info around.

The system_cpuinfo could be used for system-wide decisions in Xen (e.g.
P2M size, cacheline size...) while the per-CPU info could be used to
enable features only supported by a couple of CPUs.

> 
> So I am wondering if we should not reduce a bit the amount of global data and:

How much are we talking?

> - introduce a global system_cpuinfo
> - remove cpu_data[] and use a temporary structure on the stack of the booting CPU
> - read midr directly in vcpreg
> - use boot_cpu_data in errata for csv2

This would not be quite the same. You may have a system where not all
processors have ID_AA64PFR0_EL1.CSV2 set, yet we want to avoid setting
the hardening vector on processors with the bit set to reduce the
overhead.

Cheers,

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields
  2021-07-19  8:58       ` Julien Grall
@ 2021-08-03  9:57         ` Bertrand Marquis
  0 siblings, 0 replies; 16+ messages in thread
From: Bertrand Marquis @ 2021-08-03  9:57 UTC (permalink / raw)
  To: Julien Grall; +Cc: Xen-devel, Stefano Stabellini

Hi Julien,

> On 19 Jul 2021, at 09:58, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 16/07/2021 18:14, Bertrand Marquis wrote:
>> Hi Julien
> 
> Hi Bertrand,
> 
>> […]
>>>> 
>>>> +
>>>> +    if ( old_reg != new_reg )
>>>> +        printk(XENLOG_DEBUG "SANITY DIF: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
>>>> +               reg_name, old_reg, new_reg);
>>>> +    if ( old_reg != *cur_reg )
>>>> +        printk(XENLOG_DEBUG "SANITY FIX: %s 0x%"PRIx64" -> 0x%"PRIx64"\n",
>>>> +               reg_name, old_reg, *cur_reg);
>>>> +
>>>> +    if ( taint )
>>>> +    {
>>>> +        printk(XENLOG_WARNING "SANITY CHECK: Unexpected variation in %s.\n",
>>>> +                reg_name);
>>>> +        add_taint(TAINT_CPU_OUT_OF_SPEC);
>>>> +    }
>>>> +}
>>>> +
>>>> +
>>>> +/*
>>>> + * This function should be called on secondary cores to sanitize the boot cpu
>>>> + * cpuinfo.
>>> 
>>> Can we rename boot_cpu_data to system_cpuinfo (or something similar)? This would make it clear that these are not only the features of the boot CPU.
>> While looking at this request, I checked a bit how boot_cpu_data and cpu_data are used overall:
>> - boot_cpu_data is only used in setup.c, by the boot_cpu_features macros, in smpboot to retrieve the boot cpu midr, in p2m and by cpufeatures
>> - cpu_data[] is used in smpboot, in errata handling to test for csv2, and in vcpreg to access the midr
>> So we have a bunch of cpuinfo structures as global variables, but most of them are not really used. Or did I miss something?
> 
> While I agree this is not useful today, the idea is that we can easily find out what features each processor supports. This could be useful if we wanted to expose big.LITTLE to the guest.
> 
> For instance, imagine a system where only some of the processors support 32-bit EL1. With a global approach, we would have to say "32-bit EL1 is not supported". That would prevent a user from using the system to its full advantage.
> 
> Note that I am not asking you to implement such things today... This is more to show that we will likely want to keep the per-CPU info around.
> 
> The system_cpuinfo could be used for system-wide decisions in Xen (e.g. P2M size, cacheline size, ...) while the per-CPU info could be used to enable features only available on a couple of CPUs.

I understand the potential need (even though assigning guests to CPUs depending on the features they need would be complex to achieve).

> 
>> So I am wondering if we should not reduce the amount of global data a bit, and:
> 
> How much are we talking?

We are declaring a cpuinfo structure for all possible cores in smpboot:
struct cpuinfo_arm cpu_data[NR_CPUS];
The default value for NR_CPUS is 128, so that makes around 9k of data.

This is not that much, I agree.
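
(The figure is easy to double-check; purely for illustration, the exact 
sizes depend on the struct cpuinfo_arm layout in cpufeature.h:)

    /* Print the static footprint of cpu_data[]. */
    printk("cpu_data[]: %zu bytes (%zu per entry, NR_CPUS=%u)\n",
           sizeof(cpu_data), sizeof(struct cpuinfo_arm), NR_CPUS);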


> 
>> - introduce a global system_cpuinfo
>> - remove cpu_data[] and use a temporary structure on the stack of the booting cpu
>> - read midr directly in vcpreg
>> - use boot_cpu_data in errata for csv2
> 
> This would not be quite the same. You may have a system where not all processors have ID_AA64PFR0_EL1.CSV2 set, yet we want to avoid installing the hardening vector on the processors that do have the bit set, to reduce the overhead.

Agreed, and with the actual size reduction being quite small, this does not make sense.

Thanks a lot for the feedback, I will go on with the system_cpuinfo rename and keep the rest as is.

Cheers
Bertrand

> 
> Cheers,
> 
> -- 
> Julien Grall


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2021-08-03  9:58 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-29 17:08 [RFC PATCH 0/4] xen/arm: Sanitize cpuinfo Bertrand Marquis
2021-06-29 17:08 ` [RFC PATCH 1/4] xen/arm: Import ID registers definitions from Linux Bertrand Marquis
2021-06-29 17:08 ` [RFC PATCH 2/4] xen/arm: Import ID features sanitize from linux Bertrand Marquis
2021-07-12  9:36   ` Julien Grall
2021-07-12 10:50     ` Bertrand Marquis
2021-07-12 19:03       ` Julien Grall
2021-06-29 17:08 ` [RFC PATCH 3/4] xen/arm: Sanitize cpuinfo ID registers fields Bertrand Marquis
2021-07-12 10:16   ` Julien Grall
2021-07-12 11:03     ` Bertrand Marquis
2021-07-12 11:13       ` Julien Grall
2021-07-12 16:29         ` Bertrand Marquis
2021-07-12 18:53           ` Julien Grall
2021-07-16 17:14     ` Bertrand Marquis
2021-07-19  8:58       ` Julien Grall
2021-08-03  9:57         ` Bertrand Marquis
2021-06-29 17:08 ` [RFC PATCH 4/4] xen/arm: Use sanitize values for p2m Bertrand Marquis
