* [PATCH v7 00/38] KVM: arm64: Make CPU ID registers writable by userspace
From: Reiji Watanabe @ 2022-04-19  6:55 UTC
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

In KVM/arm64, the values of a guest's ID registers are mostly the same
as the host's, except for bits for features that KVM doesn't support
and for opt-in features that userspace didn't configure.  Userspace
can use KVM_SET_ONE_REG to set an ID register value, but the call
fails if userspace attempts to actually modify the value.

This patch series adds support for userspace to modify the values of
ID registers (as long as KVM can support the features indicated in
the registers) so that userspace has more control over configuring
and unconfiguring features for guests.  We need this because we would
like to expose a uniform set/level of features for a group of guests on
systems with different ARM CPUs.  Since some features are not binary
in nature (e.g. the ID_AA64DFR0_EL1.BRPs field indicates the number of
breakpoints minus 1), using KVM_ARM_VCPU_INIT to control such features
is inconvenient.  This will be supported only for AArch64 EL1 guests,
at least for now.

The patch series covers both VHE and non-VHE, except for protected VMs,
which have a different way of configuring ID registers based on their
different requirements [1].
There was an earlier patch series that tried to achieve the same
thing [2].  A few snippets of code in this series were inspired by or
came from [2].

The initial value of the ID registers for a vCPU will be the host's
value with bits cleared for unsupported features and for opt-in
features that were not configured. So, the initial value userspace can
see (via KVM_GET_ONE_REG) is the upper limit that can be set for the
register.  Any request to change the value in a way that conflicts
with the opt-in features' configuration will fail (e.g. if
KVM_ARM_VCPU_PMU_V3 is configured by KVM_ARM_VCPU_INIT,
ID_AA64DFR0_EL1.PMUVER cannot be set to zero; otherwise, the initial
value of ID_AA64DFR0_EL1.PMUVER will be zero and cannot be changed
from zero).
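
As a minimal sketch (hypothetical, not part of this series' patches)
of the interface described above, userspace could clear
ID_AA64DFR0_EL1.PMUVER for a vCPU as follows, assuming 'vcpu_fd' is an
open vCPU fd and with error handling omitted:

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* Sketch only: 'vcpu_fd' is assumed to be an open KVM vCPU fd. */
  static void clear_pmuver(int vcpu_fd)
  {
          __u64 val;
          struct kvm_one_reg reg = {
                  /* ID_AA64DFR0_EL1 is Op0=3, Op1=0, CRn=0, CRm=5, Op2=0 */
                  .id   = ARM64_SYS_REG(3, 0, 0, 5, 0),
                  .addr = (__u64)&val,
          };

          ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg); /* reads the upper limit */
          val &= ~(0xfUL << 8);                  /* PMUVER is bits [11:8] */
          ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg); /* requests the new value */
  }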

When a guest tries to use a CPU feature that is not exposed to it,
trapping the access (to emulate a real CPU's behavior) is generally
desirable (when it is possible with no or few side effects).  Later
patches in the series add code for this, though only features that
can be trapped independently are trapped by this series.

The highest numbered breakpoints must be context-aware breakpoints
(as specified by the Arm ARM).  If userspace decreases the number of
non-context-aware breakpoints for the guest (e.g. lowers
ID_AA64DFR0.BRPs while keeping ID_AA64DFR0.CTX_CMPs the same), simply
exposing the lowest numbered breakpoints is problematic because it
narrows the guest's context-aware breakpoints.  For example, with
BRPs == 5 and CTX_CMPs == 1, breakpoints #4 and #5 are context aware;
lowering BRPs to 3 would make #2 and #3 the guest's context-aware
breakpoints, even though those are ordinary breakpoints on the host.
In this case, KVM will always trap and emulate breakpoint/watchpoint
register accesses.

This series adds kunit tests for new functions in sys_regs.c (except for
trivial ones), and these tests are enabled with a new configuration
option 'CONFIG_KVM_KUNIT_TEST'.

The series is based on 5.18-rc3.

Patch 01 introduces arm64_check_features(), which will validate
ID registers based on given arm64_ftr_bits[].

Patch 02 introduces id_regs[] to kvm_arch to save values of ID
registers.

Patches 03-04 introduce the structure id_reg_desc to manage the ID
register specific control for the guest.

Patch 05 prohibits modifying values of ID regs for 32bit EL1 guests.

Patches 06-11 introduce id_reg_desc for ID_AA64PFR0_EL1, ID_AA64PFR1_EL1,
ID_AA64ISAR0_EL1, ID_AA64ISAR1_EL1, ID_AA64ISAR2_EL1 and ID_AA64MMFR0_EL1
to make them configurable.

Patches 12-14 take care of the emulation of dbgbcr/dbgbvr/dbgwcr when
the number of non-context-aware breakpoints is reduced for the guest.

Patches 15-19 introduce id_reg_desc for the remaining ID registers to
make them configurable.

Patch 20 switches to using id_reg_desc_table[] for ID registers
instead of sys_reg_descs[].

Patch 21 introduces consistency checking of feature fractional
fields of ID registers at the first KVM_RUN.

Patch 22 introduces a new capability, KVM_CAP_ARM_ID_REG_CONFIGURABLE,
to indicate that ID registers are configurable by userspace.
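
A hypothetical probe for this capability (the constant exists only
with this series applied; 'kvm_fd' is an assumed open fd for
/dev/kvm):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  static int id_regs_configurable(int kvm_fd)
  {
          /* KVM_CAP_ARM_ID_REG_CONFIGURABLE is only defined with this series */
          /* > 0 means ID registers can be modified via KVM_SET_ONE_REG */
          return ioctl(kvm_fd, KVM_CHECK_EXTENSION,
                       KVM_CAP_ARM_ID_REG_CONFIGURABLE) > 0;
  }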

Patch 23 introduces kunit test cases for the sys_regs.c changes.

Patches 24-25 change how vcpu->arch.cptr_el2/mdcr_el2 are used:
certain bits of cptr_el2/mdcr_el2 are tracked in the vcpu->arch
fields, and those fields are used when setting the registers for the
guest.  Later patches update the vcpu->arch fields based on the
features available to the guest.

Patch 26 introduces struct feature_config_ctrl and some helper
functions to enable trapping of features that are disabled for a guest.

Patches 27-31 add feature_config_ctrl for CPU features, which are
used to program configuration registers to trap each of the features.

Patch 32 adds kunit test cases for changes in patches 26-31.

Patch 33 adds a couple of helpers for selftests to extract a field
from an ID register.
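
The helpers added by patch 33 are not reproduced here, but an assumed
equivalent of extracting a 4-bit unsigned ID register field would look
like this (e.g. id_reg_field(dfr0, 8) for ID_AA64DFR0_EL1.PMUVER):

  /* Assumed equivalent for illustration; not the actual patch-33 helper */
  static inline __u64 id_reg_field(__u64 reg, unsigned int shift)
  {
          return (reg >> shift) & 0xf;  /* most ID fields are 4 bits wide */
  }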

Patch 34 adds a selftest to validate reading/writing ID registers.

Patches 35-38 add test cases for dbgbcr/dbgbvr/dbgwcr emulation
to the debug-exceptions test.

v7:
  - Add emulation of dbgbcr/dbgbvr/dbgwcr when the number of non-context
    aware breakpoints is reduced for the guest.
  - Add an array of arm64_ftr_bits to id_reg_desc so that KVM could
    have its own validation policy of ID registers.
  - Don't support configurable ID registers for 32bit EL1 guests.
  - Change the ID register validation function in cpufeature.c to
    accept arm64_ftr_bits as an argument.
  - Don't allocate buffers in kvm_arch for ID registers with CRm==0.
    [Oliver]
  - Change set_default_id_regs() not to walk the entire sys_reg_descs[]
    in the patch-2. [Oliver]
  - Add id_reg_desc for ID_AA64ISAR0_EL1

v6: https://lore.kernel.org/all/20220311044811.1980336-1-reijiw@google.com/
  - Remove all ID register entries from sys_reg_descs[], and use
    id_reg_desc_table[] for ID registers instead. [Oliver]
  - Remove sys_reg field from id_reg_info, add reg_desc (sys_reg_desc)
    field to id_reg_info, and rename 'id_reg_info' to 'id_reg_desc'.
  - Merge the following three patches of v5 into one patch "KVM: arm64:
    Make ID_AA64DFR0_EL1/ID_DFR0_EL1 writable" to accept userspace's
    request to set ID_AA64DFR0_EL1.PMUVER/ID_DFR0_EL1.PERFMON to 0xf
    (they are set to 0 though) even after applying the patch-10. [Oliver]
     patch-10 KVM: arm64: Hide IMPLEMENTATION DEFINED PMU support for the guest
     patch-11 KVM: arm64: Make ID_AA64DFR0_EL1 writable
     patch-12 KVM: arm64: Make ID_DFR0_EL1 writable

v5: https://lore.kernel.org/all/20220214065746.1230608-1-reijiw@google.com/
  - Change the return value of kcalloc failure of init_arm64_ftr_bits_kvm
    to -ENOMEM from ENOMEM. [Fuad]
  - Call init_arm64_ftr_bits_kvm from init_cpu_features(). [Ricardo, Fuad]
  - Move is_id_reg() in arch/arm64/kvm/sys_regs.c. [Fuad]
  - Remove frac_ftr_check from feature_frac [Fuad]
  - Rename kvm_id_regs_consistency_check() [Fuad]
  - Add feature_config_ctrl for ID_AA64DFR0_TRACEVER [Fuad]
  - Move changes for kvm_set_id_reg_feature and __modify_kvm_id_reg from
    patch-4 to patch-3. [Fuad]
  - Comment additions and fixes [Fuad]
  - Rename arm64_check_features() [Ricardo]
  - Run arm64_check_features_kvm() for the default guest value [Ricardo]
  - Add ID_AA64MMFR1_EL1.HAFDBS validation
  - Cosmetic fixes

v4: https://lore.kernel.org/all/20220106042708.2869332-1-reijiw@google.com/
  - Make ID registers storage per VM instead of per vCPU. [Marc]
  - Implement arm64_check_features() in arch/arm64/kernel/cpufeature.c
    by using existing codes in the file. [Marc]
  - Use a configuration function to enable traps for disabled
    features. [Marc]
  - Document ID registers become immutable after the first KVM_RUN [Eric]
  - Update ID_AA64PFR0.GIC at the point where a GICv3 is created. [Marc]
  - Get TGranX's bit position by subtracting 12 from TGranX_2's bit
    position. [Eric]
  - Don't validate AArch32 ID registers when the system doesn't support
    32bit EL0. [Eric]
  - Add/fix comments for patches. [Eric]
  - Bug fixes/improvements to the selftest. [Eric]
  - Added .kunitconfig for arm64 KUnit tests

v3: https://lore.kernel.org/all/20211117064359.2362060-1-reijiw@google.com/
  - Remove ID register consistency checking across vCPUs. [Oliver]
  - Change KVM_CAP_ARM_ID_REG_WRITABLE to
    KVM_CAP_ARM_ID_REG_CONFIGURABLE. [Oliver]
  - Add KUnit testing for ID register validation and trap initialization.
  - Change read_id_reg() to take care of ID_AA64PFR0_EL1.GIC.
  - Add a helper of read_id_reg() (__read_id_reg()) and use the helper
    instead of directly using __vcpu_sys_reg().
  - Change not to run kvm_id_regs_consistency_check() and
    kvm_vcpu_init_traps() for protected VMs.
  - Update the selftest to remove test cases for ID register consistency
    checking across vCPUs and to add test cases for ID_AA64PFR0_EL1.GIC.

v2: https://lore.kernel.org/all/20211103062520.1445832-1-reijiw@google.com/
  - Remove unnecessary line breaks. [Andrew]
  - Use @params for comments. [Andrew]
  - Move arm64_check_features to arch/arm64/kvm/sys_regs.c and
    change it into a KVM-specific feature check function. [Andrew]
  - Remove unnecessary raz handling from __set_id_reg. [Andrew]
  - Remove sys_val field from the initial id_reg_info and add it
    in the later patch. [Andrew]
  - Call id_reg->init() from id_reg_info_init(). [Andrew]
  - Fix cpuid_feature_cap_perfmon_field() to convert 0xf to 0x0
    (and use it in the following patches).
  - Change kvm_vcpu_first_run_init to set has_run_once to false
    when kvm_id_regs_consistency_check() fails.
  - Add a patch to introduce id_reg_info for ID_AA64MMFR0_EL1,
    which requires special validity checking for TGran*_2 fields.
  - Add patches to introduce id_reg_info for ID_DFR1_EL1 and
    ID_MMFR0_EL1, which are required due to arm64_check_features
    implementation change.
  - Add a new argument, which is a pointer to id_reg_info, for
    id_reg_info's validate().

v1: https://lore.kernel.org/all/20211012043535.500493-1-reijiw@google.com/

[1] https://lore.kernel.org/all/20211010145636.1950948-1-tabba@google.com/
[2] https://lore.kernel.org/all/20201102033422.657391-1-liangpeng10@huawei.com/
[3] https://lore.kernel.org/all/20220127161759.53553-2-alexandru.elisei@arm.com/

Reiji Watanabe (38):
  KVM: arm64: Introduce a validation function for an ID register
  KVM: arm64: Save ID registers' sanitized value per guest
  KVM: arm64: Introduce struct id_reg_desc
  KVM: arm64: Generate id_reg_desc's ftr_bits at KVM init when needed
  KVM: arm64: Prohibit modifying values of ID regs for 32bit EL1 guests
  KVM: arm64: Make ID_AA64PFR0_EL1 writable
  KVM: arm64: Make ID_AA64PFR1_EL1 writable
  KVM: arm64: Make ID_AA64ISAR0_EL1 writable
  KVM: arm64: Make ID_AA64ISAR1_EL1 writable
  KVM: arm64: Make ID_AA64ISAR2_EL1 writable
  KVM: arm64: Make ID_AA64MMFR0_EL1 writable
  KVM: arm64: Add a KVM flag indicating emulating debug regs access is
    needed
  KVM: arm64: Emulate dbgbcr/dbgbvr accesses
  KVM: arm64: Emulate dbgwcr accesses
  KVM: arm64: Make ID_AA64DFR0_EL1/ID_DFR0_EL1 writable
  KVM: arm64: Make ID_DFR1_EL1 writable
  KVM: arm64: Make ID_MMFR0_EL1 writable
  KVM: arm64: Make MVFR1_EL1 writable
  KVM: arm64: Add remaining ID registers to id_reg_desc_table
  KVM: arm64: Use id_reg_desc_table for ID registers
  KVM: arm64: Add consistency checking for frac fields of ID registers
  KVM: arm64: Introduce KVM_CAP_ARM_ID_REG_CONFIGURABLE capability
  KVM: arm64: Add kunit test for ID register validation
  KVM: arm64: Use vcpu->arch cptr_el2 to track value of cptr_el2 for VHE
  KVM: arm64: Use vcpu->arch.mdcr_el2 to track value of mdcr_el2
  KVM: arm64: Introduce framework to trap disabled features
  KVM: arm64: Trap disabled features of ID_AA64PFR0_EL1
  KVM: arm64: Trap disabled features of ID_AA64PFR1_EL1
  KVM: arm64: Trap disabled features of ID_AA64DFR0_EL1
  KVM: arm64: Trap disabled features of ID_AA64MMFR1_EL1
  KVM: arm64: Trap disabled features of ID_AA64ISAR1_EL1
  KVM: arm64: Add kunit test for trap initialization
  KVM: arm64: selftests: Add helpers to extract a field of ID registers
  KVM: arm64: selftests: Introduce id_reg_test
  KVM: arm64: selftests: Test linked breakpoint and watchpoint
  KVM: arm64: selftests: Test breakpoint/watchpoint register access
  KVM: arm64: selftests: Test with every breakpoint/watchpoint
  KVM: arm64: selftests: Test breakpoint/watchpoint changing
    ID_AA64DFR0_EL1

 Documentation/virt/kvm/api.rst                |   16 +
 arch/arm64/include/asm/cpufeature.h           |    3 +-
 arch/arm64/include/asm/kvm_arm.h              |   32 +
 arch/arm64/include/asm/kvm_host.h             |   17 +
 arch/arm64/include/asm/sysreg.h               |   14 +-
 arch/arm64/kernel/cpufeature.c                |   52 +
 arch/arm64/kvm/.kunitconfig                   |    4 +
 arch/arm64/kvm/Kconfig                        |   11 +
 arch/arm64/kvm/arm.c                          |   24 +-
 arch/arm64/kvm/debug.c                        |   20 +-
 arch/arm64/kvm/hyp/vhe/switch.c               |   14 +-
 arch/arm64/kvm/sys_regs.c                     | 2363 +++++++++++++++--
 arch/arm64/kvm/sys_regs_test.c                | 1287 +++++++++
 arch/arm64/kvm/vgic/vgic-init.c               |    9 +
 include/uapi/linux/kvm.h                      |    1 +
 tools/arch/arm64/include/asm/sysreg.h         |    1 +
 tools/testing/selftests/kvm/Makefile          |    1 +
 .../selftests/kvm/aarch64/debug-exceptions.c  |  649 ++++-
 .../selftests/kvm/aarch64/id_reg_test.c       | 1297 +++++++++
 .../selftests/kvm/include/aarch64/processor.h |    5 +
 .../selftests/kvm/lib/aarch64/processor.c     |   27 +
 21 files changed, 5549 insertions(+), 298 deletions(-)
 create mode 100644 arch/arm64/kvm/.kunitconfig
 create mode 100644 arch/arm64/kvm/sys_regs_test.c
 create mode 100644 tools/testing/selftests/kvm/aarch64/id_reg_test.c


base-commit: b3fa05b7ec851be680eb51b20ddda0b195b7cdb8
-- 
2.36.0.rc0.470.gd361397f0d-goog


* [PATCH v7 01/38] KVM: arm64: Introduce a validation function for an ID register
From: Reiji Watanabe @ 2022-04-19  6:55 UTC
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Introduce arm64_check_features(), which does basic validity checking
of an ID register value against the register's limit value, which is
generally the host's sanitized value.

This function will be used by the following patches to check if an ID
register value that userspace tries to set for a guest can be supported
on the host.
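
For illustration only (not part of this patch): a setter added by a
later patch might use it roughly as below, where 'ftr_bits' is a
zero-width-terminated arm64_ftr_bits array for the register being
written and 'val' is the value proposed by userspace:

	/* Illustrative caller; reject 'val' if it exceeds the host limit */
	int ret = arm64_check_features(ftr_bits, val,
				       read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1));
	if (ret)
		return ret;	/* -E2BIG */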

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/cpufeature.h |  1 +
 arch/arm64/kernel/cpufeature.c      | 52 +++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c62e7e5e2f0c..7a009d4e18a6 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -634,6 +634,7 @@ void check_local_cpu_capabilities(void);
 
 u64 read_sanitised_ftr_reg(u32 id);
 u64 __read_sysreg_by_encoding(u32 sys_id);
+int arm64_check_features(const struct arm64_ftr_bits *ftrp, u64 val, u64 limit);
 
 static inline bool cpu_supports_mixed_endian_el0(void)
 {
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d72c4b4d389c..dbbc69745f22 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3239,3 +3239,55 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
 		return sprintf(buf, "Vulnerable\n");
 	}
 }
+
+/**
+ * arm64_check_features() - Check if a feature register value constitutes
+ * a subset of features indicated by @limit.
+ *
+ * @ftrp: Pointer to an array of arm64_ftr_bits. It must be terminated by
+ * an item whose width field is zero.
+ * @val: The feature register value to check
+ * @limit: The limit value of the feature register
+ *
+ * This function will check if each feature field of @val is the "safe" value
+ * against @limit based on @ftrp[], each of which specifies the target field
+ * (shift, width), whether or not the field is for a signed value (sign),
+ * how the field is determined to be "safe" (type), and the safe value
+ * (safe_val) when type == FTR_EXACT (safe_val won't be used by this
+ * function when type != FTR_EXACT). Any other fields in arm64_ftr_bits
+ * won't be used by this function. If a field value in @val is the same
+ * as the one in @limit, it is always considered the safe value regardless
+ * of the type. For register fields that are not in @ftrp[], only the value
+ * in @limit is considered the safe value.
+ *
+ * Return: 0 if all the fields are safe. Otherwise, return negative errno.
+ */
+int arm64_check_features(const struct arm64_ftr_bits *ftrp, u64 val, u64 limit)
+{
+	u64 mask = 0;
+
+	for (; ftrp->width; ftrp++) {
+		s64 f_val, f_lim, safe_val;
+
+		f_val = arm64_ftr_value(ftrp, val);
+		f_lim = arm64_ftr_value(ftrp, limit);
+		mask |= arm64_ftr_mask(ftrp);
+
+		if (f_val == f_lim)
+			safe_val = f_val;
+		else
+			safe_val = arm64_ftr_safe_value(ftrp, f_val, f_lim);
+
+		if (safe_val != f_val)
+			return -E2BIG;
+	}
+
+	/*
+	 * For fields that are not indicated in ftrp, values in limit are the
+	 * safe values.
+	 */
+	if ((val & ~mask) != (limit & ~mask))
+		return -E2BIG;
+
+	return 0;
+}
-- 
2.36.0.rc0.470.gd361397f0d-goog


* [PATCH v7 02/38] KVM: arm64: Save ID registers' sanitized value per guest
From: Reiji Watanabe @ 2022-04-19  6:55 UTC
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Introduce id_regs[] in kvm_arch as storage for the guest's ID
registers, and save the ID registers' sanitized values in the array
at KVM_CREATE_VM.  Use the saved values when ID registers are read by
the guest or userspace (via KVM_GET_ONE_REG).
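
As a worked example (illustration only) of the IDREG_IDX() mapping in
the hunk below: ID_AA64PFR0_EL1 has (Op0, Op1, CRn, CRm, Op2) =
(3, 0, 0, 4, 0), so its slot in id_regs[] is

	IDREG_IDX(SYS_ID_AA64PFR0_EL1) == ((4 - 1) << 3) | 0 == 24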

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/kvm_host.h | 11 +++++
 arch/arm64/kvm/arm.c              |  1 +
 arch/arm64/kvm/sys_regs.c         | 81 +++++++++++++++++++++++++------
 3 files changed, 78 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 94a27a7520f4..fc836df84748 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -150,6 +150,15 @@ struct kvm_arch {
 
 	u8 pfr0_csv2;
 	u8 pfr0_csv3;
+
+	/*
+	 * Save ID registers for the guest in id_regs[].
+	 * (Op0, Op1, CRn, CRm, Op2) of the ID registers to be saved in it
+	 * is (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
+	 */
+#define KVM_ARM_ID_REG_MAX_NUM	56
+#define IDREG_IDX(id)		(((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id))
+	u64 id_regs[KVM_ARM_ID_REG_MAX_NUM];
 };
 
 struct kvm_vcpu_fault_info {
@@ -775,6 +784,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 				struct kvm_arm_copy_mte_tags *copy_tags);
 
+void set_default_id_regs(struct kvm *kvm);
+
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 523bc934fe2f..04312f7ee0da 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -156,6 +156,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	kvm->arch.max_vcpus = kvm_arm_default_max_vcpus();
 
 	set_default_spectre(kvm);
+	set_default_id_regs(kvm);
 
 	return ret;
 out_free_stage2_pgd:
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7b45c040cc27..5b813a0b7b1c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -33,6 +33,8 @@
 
 #include "trace.h"
 
+static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id);
+
 /*
  * All of this file is extremely similar to the ARM coproc.c, but the
  * types are different. My gut feeling is that it should be pretty
@@ -277,7 +279,7 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
 			  struct sys_reg_params *p,
 			  const struct sys_reg_desc *r)
 {
-	u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+	u64 val = read_id_reg_with_encoding(vcpu, SYS_ID_AA64MMFR1_EL1);
 	u32 sr = reg_to_encoding(r);
 
 	if (!(val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))) {
@@ -1102,17 +1104,20 @@ static bool access_arch_timer(struct kvm_vcpu *vcpu,
 	return true;
 }
 
-/* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(const struct kvm_vcpu *vcpu,
-		struct sys_reg_desc const *r, bool raz)
+/*
+ * Return true if the register's (Op0, Op1, CRn, CRm, Op2) is
+ * (3, 0, 0, crm, op2), where 1<=crm<8, 0<=op2<8.
+ */
+static bool is_id_reg(u32 id)
 {
-	u32 id = reg_to_encoding(r);
-	u64 val;
-
-	if (raz)
-		return 0;
+	return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
+		sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
+		sys_reg_CRm(id) < 8);
+}
 
-	val = read_sanitised_ftr_reg(id);
+static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id)
+{
+	u64 val = vcpu->kvm->arch.id_regs[IDREG_IDX(id)];
 
 	switch (id) {
 	case SYS_ID_AA64PFR0_EL1:
@@ -1167,6 +1172,14 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 	return val;
 }
 
+static u64 read_id_reg(const struct kvm_vcpu *vcpu,
+		       struct sys_reg_desc const *r, bool raz)
+{
+	u32 id = reg_to_encoding(r);
+
+	return raz ? 0 : read_id_reg_with_encoding(vcpu, id);
+}
+
 static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
 				  const struct sys_reg_desc *r)
 {
@@ -1267,9 +1280,8 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 /*
  * cpufeature ID register user accessors
  *
- * For now, these registers are immutable for userspace, so no values
- * are stored, and for set_id_reg() we don't allow the effective value
- * to be changed.
+ * For now, these registers are immutable for userspace, so for set_id_reg()
+ * we don't allow the effective value to be changed.
  */
 static int __get_id_reg(const struct kvm_vcpu *vcpu,
 			const struct sys_reg_desc *rd, void __user *uaddr,
@@ -1882,8 +1894,8 @@ static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
 	if (p->is_write) {
 		return ignore_write(vcpu, p);
 	} else {
-		u64 dfr = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
-		u64 pfr = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+		u64 dfr = read_id_reg_with_encoding(vcpu, SYS_ID_AA64DFR0_EL1);
+		u64 pfr = read_id_reg_with_encoding(vcpu, SYS_ID_AA64PFR0_EL1);
 		u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr, ID_AA64PFR0_EL3_SHIFT);
 
 		p->regval = ((((dfr >> ID_AA64DFR0_WRPS_SHIFT) & 0xf) << 28) |
@@ -2895,3 +2907,42 @@ void kvm_sys_reg_table_init(void)
 	/* Clear all higher bits. */
 	cache_levels &= (1 << (i*3))-1;
 }
+
+/*
+ * Set the guest's ID registers that are defined in sys_reg_descs[]
+ * with ID_SANITISED() to the host's sanitized value.
+ */
+void set_default_id_regs(struct kvm *kvm)
+{
+	int i;
+	u32 id;
+	const struct sys_reg_desc *rd;
+	u64 val;
+	struct sys_reg_params params = {
+		Op0(sys_reg_Op0(SYS_ID_PFR0_EL1)),
+		Op1(sys_reg_Op1(SYS_ID_PFR0_EL1)),
+		CRn(sys_reg_CRn(SYS_ID_PFR0_EL1)),
+		CRm(sys_reg_CRm(SYS_ID_PFR0_EL1)),
+		Op2(sys_reg_Op2(SYS_ID_PFR0_EL1)),
+	};
+
+	/*
+	 * Find the first ID register entry (ID_PFR0_EL1) in the
+	 * sys_reg_descs table, and walk through only the ID register
+	 * entries in the table.
+	 */
+	rd = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	for (i = 0; i < KVM_ARM_ID_REG_MAX_NUM; i++, rd++) {
+		id = reg_to_encoding(rd);
+		if (WARN_ON_ONCE(!is_id_reg(id)))
+			/* Shouldn't happen */
+			continue;
+
+		if (rd->access != access_id_reg)
+			/* Hidden or reserved ID register */
+			continue;
+
+		val = read_sanitised_ftr_reg(id);
+		kvm->arch.id_regs[IDREG_IDX(id)] = val;
+	}
+}
-- 
2.36.0.rc0.470.gd361397f0d-goog
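
As a quick sanity check of the IDREG_IDX() scheme described in the
comment above, the following stand-alone sketch mirrors the index
computation with crm and op2 passed in directly (the kernel extracts
them from the full sysreg encoding via sys_reg_CRm()/sys_reg_Op2()):

#include <stdio.h>

/*
 * Mirror of IDREG_IDX() for ID registers encoded as (3, 0, 0, crm, op2):
 * eight Op2 slots per CRm value, with CRm running from 1 to 7.
 */
static int idreg_idx(unsigned int crm, unsigned int op2)
{
	return ((crm - 1) << 3) | op2;
}

int main(void)
{
	/* ID_AA64PFR0_EL1 is (3, 0, 0, 4, 0) */
	printf("ID_AA64PFR0_EL1  -> %d\n", idreg_idx(4, 0));	/* 24 */
	/* ID_AA64MMFR1_EL1 is (3, 0, 0, 7, 1) */
	printf("ID_AA64MMFR1_EL1 -> %d\n", idreg_idx(7, 1));	/* 49 */
	/* The last slot, (3, 0, 0, 7, 7), is KVM_ARM_ID_REG_MAX_NUM - 1 */
	printf("last slot        -> %d\n", idreg_idx(7, 7));	/* 55 */
	return 0;
}

Seven CRm values times eight Op2 slots gives the 56 entries that
KVM_ARM_ID_REG_MAX_NUM reserves for id_regs[].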



* [PATCH v7 03/38] KVM: arm64: Introduce struct id_reg_desc
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

This patch lays the groundwork to make ID registers writable.

Introduce struct id_reg_desc for an ID register to manage the
register-specific control of its value for the guest, and provide a
set of functions commonly used for ID registers to make them writable.
Use the id_reg_desc to do register-specific initialization, validation
of the ID register, etc.  The id_reg_desc has a reg_desc field (struct
sys_reg_desc), which will be used instead of sys_reg_desc in
sys_reg_descs[] for ID registers in the following patches (and then
the entries in sys_reg_descs[] will be removed).

At present, changing an ID register from userspace is allowed only
if the ID register has an id_reg_desc, but that will be changed
by the following patches.

No ID register has an id_reg_desc yet; the following patches will
add one for each ID register currently in sys_reg_descs[].

kvm_set_id_reg_feature(), which is introduced in this patch,
is going to be used by a following patch outside of sys_regs.c
when an ID register field needs to be updated.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/kvm_host.h |   1 +
 arch/arm64/include/asm/sysreg.h   |   3 +-
 arch/arm64/kvm/sys_regs.c         | 313 ++++++++++++++++++++++++++++--
 3 files changed, 300 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index fc836df84748..a43fddd58e68 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -785,6 +785,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 				struct kvm_arm_copy_mte_tags *copy_tags);
 
 void set_default_id_regs(struct kvm *kvm);
+int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval);
 
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index fbf5f8bb9055..3d860108661b 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -1234,9 +1234,10 @@
 #define ICH_VTR_TDS_MASK	(1 << ICH_VTR_TDS_SHIFT)
 
 #define ARM64_FEATURE_FIELD_BITS	4
+#define ARM64_FEATURE_FIELD_MASK	GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0)
 
 /* Create a mask for the feature bits of the specified feature. */
-#define ARM64_FEATURE_MASK(x)	(GENMASK_ULL(x##_SHIFT + ARM64_FEATURE_FIELD_BITS - 1, x##_SHIFT))
+#define ARM64_FEATURE_MASK(x)	(ARM64_FEATURE_FIELD_MASK << x##_SHIFT)
 
 #ifdef __ASSEMBLY__
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 5b813a0b7b1c..30adc19e4619 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -34,6 +34,7 @@
 #include "trace.h"
 
 static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id);
+static inline struct id_reg_desc *get_id_reg_desc(u32 id);
 
 /*
  * All of this file is extremely similar to the ARM coproc.c, but the
@@ -269,6 +270,112 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 		return read_zero(vcpu, p);
 }
 
+/*
+ * Number of entries in id_reg_desc's ftr_bits[] (the number of 4-bit fields
+ * in a 64-bit register, plus one terminator entry).
+ */
+#define	FTR_FIELDS_NUM	17
+
+struct id_reg_desc {
+	const struct sys_reg_desc	reg_desc;
+
+	/*
+	 * Limit value of the register for a vcpu. The value is the sanitized
+	 * system value with bits set/cleared for unsupported features for the
+	 * guest.
+	 */
+	u64	vcpu_limit_val;
+
+	/* Fields that are not validated by arm64_check_features. */
+	u64	ignore_mask;
+
+	/* An optional initialization function of the id_reg_desc */
+	void (*init)(struct id_reg_desc *id_reg);
+
+	/*
+	 * This is an optional ID register specific validation function. When
+	 * userspace tries to set the ID register, arm64_check_features()
+	 * will check if the requested value indicates any features that cannot
+	 * be supported by KVM on the host.  But some ID register fields need
+	 * special checking, and this function can be used for such fields.
+	 * e.g. When SVE is configured for a vCPU by KVM_ARM_VCPU_INIT,
+	 * ID_AA64PFR0_EL1.SVE shouldn't be set to 0 for the vCPU.
+	 * The validation function for ID_AA64PFR0_EL1 can be used to check
+	 * that the field is consistent with the SVE configuration.
+	 */
+	int (*validate)(struct kvm_vcpu *vcpu, const struct id_reg_desc *id_reg,
+			u64 val);
+
+	/*
+	 * Return a bitmask of the vCPU's ID register fields that are not
+	 * synced with the saved (per VM) ID register value, which usually
+	 * indicates opt-in CPU features that are not configured for the vCPU.
+	 * ID registers are saved per VM, but some opt-in CPU features can
+	 * be configured per vCPU.  The saved (per VM) value for such a
+	 * feature is the value for vCPUs that have the feature (and zero
+	 * for vCPUs that don't).
+	 * The return value of this function is used to handle such fields
+	 * for per-vCPU ID register read/write requests against the saved
+	 * per-VM ID register.  See the comment for __write_id_reg().
+	 */
+	u64 (*vcpu_mask)(const struct kvm_vcpu *vcpu,
+			 const struct id_reg_desc *id_reg);
+
+	/*
+	 * Used to validate the ID register values with arm64_check_features().
+	 * The array must be terminated by an item whose width field is
+	 * zero, as that is expected by arm64_check_features().
+	 */
+	struct arm64_ftr_bits	ftr_bits[FTR_FIELDS_NUM];
+};
+
+static void id_reg_desc_init(struct id_reg_desc *id_reg)
+{
+	u32 id = reg_to_encoding(&id_reg->reg_desc);
+	u64 val = read_sanitised_ftr_reg(id);
+
+	id_reg->vcpu_limit_val = val;
+	if (id_reg->init)
+		id_reg->init(id_reg);
+
+	/*
+	 * id_reg->init() might update id_reg->vcpu_limit_val.
+	 * Make sure that id_reg->vcpu_limit_val, which will be the default
+	 * register value for guests, is a safe value to use for guests
+	 * on the host.
+	 */
+	WARN_ON_ONCE(arm64_check_features(id_reg->ftr_bits,
+					  id_reg->vcpu_limit_val, val));
+}
+
+static int validate_id_reg(struct kvm_vcpu *vcpu,
+			   const struct id_reg_desc *id_reg, u64 val)
+{
+	u64 limit, tmp_val;
+	int err;
+
+	limit = id_reg->vcpu_limit_val;
+
+	/*
+	 * Replace the fields that are indicated in ignore_mask with the
+	 * corresponding values from the limit, so that arm64_check_features()
+	 * doesn't check those fields in @val.
+	 */
+	tmp_val = val & ~id_reg->ignore_mask;
+	tmp_val |= (limit & id_reg->ignore_mask);
+
+	/* Check if the value indicates any feature that is not in the limit. */
+	err = arm64_check_features(id_reg->ftr_bits, tmp_val, limit);
+	if (err)
+		return err;
+
+	if (id_reg->validate)
+		/* Run the ID register specific validity check. */
+		err = id_reg->validate(vcpu, id_reg, val);
+
+	return err;
+}
+
 /*
  * ARMv8.1 mandates at least a trivial LORegion implementation, where all the
  * RW registers are RES0 (which we can implement as RAZ/WI). On an ARMv8.0
@@ -1115,10 +1222,107 @@ static bool is_id_reg(u32 id)
 		sys_reg_CRm(id) < 8);
 }
 
+static u64 read_kvm_id_reg(struct kvm *kvm, u32 id)
+{
+	return kvm->arch.id_regs[IDREG_IDX(id)];
+}
+
+static int __modify_kvm_id_reg(struct kvm *kvm, u32 id, u64 val,
+			     u64 preserve_mask)
+{
+	u64 old, new;
+
+	lockdep_assert_held(&kvm->lock);
+
+	old = kvm->arch.id_regs[IDREG_IDX(id)];
+
+	/* Preserve the old value at the bit positions set in preserve_mask */
+	new = old & preserve_mask;
+	new |= (val & ~preserve_mask);
+
+	/* Don't allow the ID register value to be modified after KVM_RUN on any vCPU */
+	if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags) &&
+	    new != old)
+		return -EBUSY;
+
+	WRITE_ONCE(kvm->arch.id_regs[IDREG_IDX(id)], new);
+
+	return 0;
+}
+
+static int modify_kvm_id_reg(struct kvm *kvm, u32 id, u64 val,
+			     u64 preserve_mask)
+{
+	int ret;
+
+	mutex_lock(&kvm->lock);
+	ret = __modify_kvm_id_reg(kvm, id, val, preserve_mask);
+	mutex_unlock(&kvm->lock);
+
+	return ret;
+}
+
+static int write_kvm_id_reg(struct kvm *kvm, u32 id, u64 val)
+{
+	return modify_kvm_id_reg(kvm, id, val, 0);
+}
+
+/*
+ * KVM basically forces all vCPUs of the guest to have a uniform value for
+ * each ID register (i.e. KVM_SET_ONE_REG for a vCPU affects all
+ * the vCPUs of the guest), and the id_regs[] of kvm_arch holds the
+ * guest's ID register values.  However, there is an exception for
+ * ID register fields corresponding to CPU features that can be
+ * configured per vCPU by KVM_ARM_VCPU_INIT, etc. (e.g. PMUv3, SVE).
+ * For such fields, all vCPUs that have the feature will have a non-zero
+ * uniform value, which can be updated by userspace, while vCPUs that
+ * don't have the feature will read the fields as zero.
+ * The values that id_regs[] holds are for vCPUs that have such features.
+ * So, to get the ID register value for a vCPU without those features,
+ * the corresponding fields in id_regs[] need to be cleared.
+ * A bitmask of those fields is provided by id_reg_desc's vcpu_mask(), and
+ * __write_id_reg() and __read_id_reg() take care of those fields using
+ * the bitmask.
+ */
+static int __write_id_reg(struct kvm_vcpu *vcpu,
+			  struct id_reg_desc *id_reg, u64 val)
+{
+	u64 mask = 0;
+	u32 id = reg_to_encoding(&id_reg->reg_desc);
+
+	if (id_reg->vcpu_mask)
+		mask = id_reg->vcpu_mask(vcpu, id_reg);
+
+	/*
+	 * Update the ID register for the guest with @val, except for fields
+	 * that are set in the mask, which indicates fields for opt-in
+	 * features that are not configured for the vCPU.
+	 */
+	return modify_kvm_id_reg(vcpu->kvm, id, val, mask);
+}
+
+static u64 __read_id_reg(const struct kvm_vcpu *vcpu,
+			 const struct id_reg_desc *id_reg)
+{
+	u32 id = reg_to_encoding(&id_reg->reg_desc);
+	u64 val = read_kvm_id_reg(vcpu->kvm, id);
+
+	if (id_reg->vcpu_mask)
+		/* Clear fields for opt-in features that are not configured. */
+		val &= ~(id_reg->vcpu_mask(vcpu, id_reg));
+
+	return val;
+}
+
 static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id)
 {
-	u64 val = vcpu->kvm->arch.id_regs[IDREG_IDX(id)];
+	u64 val;
+	const struct id_reg_desc *id_reg = get_id_reg_desc(id);
+
+	if (id_reg)
+		return __read_id_reg(vcpu, id_reg);
 
+	val = read_kvm_id_reg(vcpu->kvm, id);
 	switch (id) {
 	case SYS_ID_AA64PFR0_EL1:
 		if (!vcpu_has_sve(vcpu))
@@ -1175,9 +1379,7 @@ static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id)
 static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 		       struct sys_reg_desc const *r, bool raz)
 {
-	u32 id = reg_to_encoding(r);
-
-	return raz ? 0 : read_id_reg_with_encoding(vcpu, id);
+	return raz ? 0 : read_id_reg_with_encoding(vcpu, reg_to_encoding(r));
 }
 
 static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
@@ -1277,12 +1479,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
-/*
- * cpufeature ID register user accessors
- *
- * For now, these registers are immutable for userspace, so for set_id_reg()
- * we don't allow the effective value to be changed.
- */
+/* cpufeature ID register user accessors */
 static int __get_id_reg(const struct kvm_vcpu *vcpu,
 			const struct sys_reg_desc *rd, void __user *uaddr,
 			bool raz)
@@ -1293,11 +1490,32 @@ static int __get_id_reg(const struct kvm_vcpu *vcpu,
 	return reg_to_user(uaddr, &val, id);
 }
 
-static int __set_id_reg(const struct kvm_vcpu *vcpu,
+/*
+ * Check if the given id indicates an AArch32 ID register encoding.
+ */
+static bool is_aarch32_id_reg(u32 id)
+{
+	u32 crm, op2;
+
+	if (!is_id_reg(id))
+		return false;
+
+	crm = sys_reg_CRm(id);
+	op2 = sys_reg_Op2(id);
+	if (crm == 1 || crm == 2 || (crm == 3 && (op2 != 3 && op2 != 7)))
+		/* AArch32 ID register */
+		return true;
+
+	return false;
+}
+
+static int __set_id_reg(struct kvm_vcpu *vcpu,
 			const struct sys_reg_desc *rd, void __user *uaddr,
 			bool raz)
 {
 	const u64 id = sys_reg_to_index(rd);
+	u32 encoding = reg_to_encoding(rd);
+	struct id_reg_desc *id_reg;
 	int err;
 	u64 val;
 
@@ -1305,11 +1523,33 @@ static int __set_id_reg(const struct kvm_vcpu *vcpu,
 	if (err)
 		return err;
 
-	/* This is what we mean by invariant: you can't change it. */
-	if (val != read_id_reg(vcpu, rd, raz))
+	if (val == read_id_reg(vcpu, rd, raz))
+		/* The value is the same as the current value. Nothing to do. */
+		return 0;
+
+	/* Don't allow the register's value to be modified if the register is RAZ. */
+	if (raz)
 		return -EINVAL;
 
-	return 0;
+	/*
+	 * Don't allow the register's value to be modified if the register
+	 * has no id_reg_desc.
+	 */
+	id_reg = get_id_reg_desc(encoding);
+	if (!id_reg)
+		return -EINVAL;
+
+	/*
+	 * Skip the validation of AArch32 ID registers if the system doesn't
+	 * support 32bit EL0 (their values are UNKNOWN).
+	 */
+	if (system_supports_32bit_el0() || !is_aarch32_id_reg(encoding)) {
+		err = validate_id_reg(vcpu, id_reg, val);
+		if (err)
+			return err;
+	}
+
+	return __write_id_reg(vcpu, id_reg, val);
 }
 
 static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
@@ -2872,6 +3112,8 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 	return write_demux_regids(uindices);
 }
 
+static void id_reg_desc_init_all(void);
+
 void kvm_sys_reg_table_init(void)
 {
 	unsigned int i;
@@ -2906,6 +3148,43 @@ void kvm_sys_reg_table_init(void)
 			break;
 	/* Clear all higher bits. */
 	cache_levels &= (1 << (i*3))-1;
+
+	id_reg_desc_init_all();
+}
+
+/*
+ * Update the ID register's field with @fval for the guest.
+ * The caller is expected to hold the kvm->lock.
+ * This will not fail unless a vCPU in the guest has already run.
+ */
+int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval)
+{
+	u64 val = ((u64)fval & ARM64_FEATURE_FIELD_MASK) << field_shift;
+	u64 preserve_mask = ~(ARM64_FEATURE_FIELD_MASK << field_shift);
+
+	return __modify_kvm_id_reg(kvm, id, val, preserve_mask);
+}
+
+/* A table for ID registers' information. */
+static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {};
+
+static inline struct id_reg_desc *get_id_reg_desc(u32 id)
+{
+	return id_reg_desc_table[IDREG_IDX(id)];
+}
+
+static void id_reg_desc_init_all(void)
+{
+	int i;
+	struct id_reg_desc *id_reg;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_desc_table); i++) {
+		id_reg = (struct id_reg_desc *)id_reg_desc_table[i];
+		if (!id_reg)
+			continue;
+
+		id_reg_desc_init(id_reg);
+	}
 }
 
 /*
@@ -2918,6 +3197,7 @@ void set_default_id_regs(struct kvm *kvm)
 	u32 id;
 	const struct sys_reg_desc *rd;
 	u64 val;
+	struct id_reg_desc *idr;
 	struct sys_reg_params params = {
 		Op0(sys_reg_Op0(SYS_ID_PFR0_EL1)),
 		Op1(sys_reg_Op1(SYS_ID_PFR0_EL1)),
@@ -2942,7 +3222,8 @@ void set_default_id_regs(struct kvm *kvm)
 			/* Hidden or reserved ID register */
 			continue;
 
-		val = read_sanitised_ftr_reg(id);
-		kvm->arch.id_regs[IDREG_IDX(id)] = val;
+		idr = get_id_reg_desc(id);
+		val = idr ? idr->vcpu_limit_val : read_sanitised_ftr_reg(id);
+		WARN_ON_ONCE(write_kvm_id_reg(kvm, id, val));
 	}
 }
-- 
2.36.0.rc0.470.gd361397f0d-goog
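
The preserve_mask merge in __modify_kvm_id_reg(), and the way
kvm_set_id_reg_feature() builds its arguments, are easy to check in
isolation. Below is a stand-alone sketch with the per-VM bookkeeping
stubbed out as a plain struct; fake_vm, has_ran_once, and the example
field position are illustrative stand-ins, not kernel definitions:

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the per-VM state: one register slot plus the has-run flag. */
struct fake_vm {
	uint64_t id_reg;
	bool has_ran_once;
};

/*
 * Mirror of the merge in __modify_kvm_id_reg(): keep the bits selected by
 * preserve_mask from the old value, take the rest from val, and refuse any
 * change once a vCPU has run.
 */
static int modify_id_reg(struct fake_vm *vm, uint64_t val, uint64_t preserve_mask)
{
	uint64_t old = vm->id_reg;
	uint64_t new = (old & preserve_mask) | (val & ~preserve_mask);

	if (vm->has_ran_once && new != old)
		return -EBUSY;

	vm->id_reg = new;
	return 0;
}

/* Mirror of kvm_set_id_reg_feature(): update one 4-bit field. */
static int set_field(struct fake_vm *vm, unsigned int shift, uint8_t fval)
{
	uint64_t val = ((uint64_t)fval & 0xf) << shift;
	uint64_t preserve_mask = ~(0xfULL << shift);

	return modify_id_reg(vm, val, preserve_mask);
}

int main(void)
{
	struct fake_vm vm = { .id_reg = 0x1122334455667788ULL };

	/* Change only the field at bits [11:8]: 0x...7788 -> 0x...7288 */
	set_field(&vm, 8, 0x2);
	printf("after set_field: %#llx\n", (unsigned long long)vm.id_reg);

	/* Once a vCPU has run, any change is refused with -EBUSY (-16). */
	vm.has_ran_once = true;
	printf("late update: %d\n", set_field(&vm, 8, 0x0));
	return 0;
}

This mirrors the -EBUSY behavior in the patch: after a vCPU has run,
writing back the same value still succeeds, but any actual change to
an ID register is refused.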


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 03/38] KVM: arm64: Introduce struct id_reg_desc
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, Will Deacon, Peter Shier, Paolo Bonzini, linux-arm-kernel

This patch lays the groundwork to make ID registers writable.

Introduce struct id_reg_desc for an ID register to manage the
register specific control of its value for the guest, and provide set
of functions commonly used for ID registers to make them writable.
Use the id_reg_desc to do register specific initialization, validation
of the ID register, etc.  The id_reg_desc has reg_desc field (struct
sys_reg_desc), which will be used instead of sys_reg_desc in
sys_reg_descs[] for ID registers in the following patches (and then
the entries in sys_reg_descs[] will be removed).

At present, changing an ID register from userspace is allowed only
if the ID register has the id_reg_desc, but that will be changed
by the following patches.

No ID register has the id_reg_desc yet, and the following patches
will add them for all the ID registers currently in sys_reg_descs[].

kvm_set_id_reg_feature(), which is introduced in this patch,
is going to be used by the following patch outside from sys_regs.c
when an ID register field needs to be updated.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/kvm_host.h |   1 +
 arch/arm64/include/asm/sysreg.h   |   3 +-
 arch/arm64/kvm/sys_regs.c         | 313 ++++++++++++++++++++++++++++--
 3 files changed, 300 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index fc836df84748..a43fddd58e68 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -785,6 +785,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 				struct kvm_arm_copy_mte_tags *copy_tags);
 
 void set_default_id_regs(struct kvm *kvm);
+int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval);
 
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index fbf5f8bb9055..3d860108661b 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -1234,9 +1234,10 @@
 #define ICH_VTR_TDS_MASK	(1 << ICH_VTR_TDS_SHIFT)
 
 #define ARM64_FEATURE_FIELD_BITS	4
+#define ARM64_FEATURE_FIELD_MASK	GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0)
 
 /* Create a mask for the feature bits of the specified feature. */
-#define ARM64_FEATURE_MASK(x)	(GENMASK_ULL(x##_SHIFT + ARM64_FEATURE_FIELD_BITS - 1, x##_SHIFT))
+#define ARM64_FEATURE_MASK(x)	(ARM64_FEATURE_FIELD_MASK << x##_SHIFT)
 
 #ifdef __ASSEMBLY__
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 5b813a0b7b1c..30adc19e4619 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -34,6 +34,7 @@
 #include "trace.h"
 
 static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id);
+static inline struct id_reg_desc *get_id_reg_desc(u32 id);
 
 /*
  * All of this file is extremely similar to the ARM coproc.c, but the
@@ -269,6 +270,112 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 		return read_zero(vcpu, p);
 }
 
+/*
+ * Number of entries in id_reg_desc's ftr_bits[] (Number of 4 bits fields
+ * in 64 bit register + 1 entry for a terminator entry).
+ */
+#define	FTR_FIELDS_NUM	17
+
+struct id_reg_desc {
+	const struct sys_reg_desc	reg_desc;
+
+	/*
+	 * Limit value of the register for a vcpu. The value is the sanitized
+	 * system value with bits set/cleared for unsupported features for the
+	 * guest.
+	 */
+	u64	vcpu_limit_val;
+
+	/* Fields that are not validated by arm64_check_features. */
+	u64	ignore_mask;
+
+	/* An optional initialization function of the id_reg_desc */
+	void (*init)(struct id_reg_desc *id_reg);
+
+	/*
+	 * This is an optional ID register specific validation function. When
+	 * userspace tries to set the ID register, arm64_check_features()
+	 * will check if the requested value indicates any features that cannot
+	 * be supported by KVM on the host.  But, some ID register fields need
+	 * a special checking, and this function can be used for such fields.
+	 * e.g. When SVE is configured for a vCPU by KVM_ARM_VCPU_INIT,
+	 * ID_AA64PFR0_EL1.SVE shouldn't be set to 0 for the vCPU.
+	 * The validation function for ID_AA64PFR0_EL1 could be used to check
+	 * the field is consistent with SVE configuration.
+	 */
+	int (*validate)(struct kvm_vcpu *vcpu, const struct id_reg_desc *id_reg,
+			u64 val);
+
+	/*
+	 * Return a bitmask of the vCPU's ID register fields that are not
+	 * synced with saved (per VM) ID register value, which usually
+	 * indicates opt-in CPU features that are not configured for the vCPU.
+	 * ID registers are saved per VM, but some opt-in CPU features can
+	 * be configured per vCPU.  The saved (per VM) values for such
+	 * features are for vCPUs with the features (and zero for
+	 * vCPUs without the features).
+	 * Return value of this function is used to handle such fields
+	 * for per vCPU ID register read/write request with saved per VM
+	 * ID register.  See the __write_id_reg's comment for more detail.
+	 */
+	u64 (*vcpu_mask)(const struct kvm_vcpu *vcpu,
+			 const struct id_reg_desc *id_reg);
+
+	/*
+	 * Used to validate the ID register values with arm64_check_features().
+	 * The last item in the array must be terminated by an item whose
+	 * width field is zero as that is expected by arm64_check_features().
+	 */
+	struct arm64_ftr_bits	ftr_bits[FTR_FIELDS_NUM];
+};
+
+static void id_reg_desc_init(struct id_reg_desc *id_reg)
+{
+	u32 id = reg_to_encoding(&id_reg->reg_desc);
+	u64 val = read_sanitised_ftr_reg(id);
+
+	id_reg->vcpu_limit_val = val;
+	if (id_reg->init)
+		id_reg->init(id_reg);
+
+	/*
+	 * id_reg->init() might update id_reg->vcpu_limit_val.
+	 * Make sure that id_reg->vcpu_limit_val, which will be the default
+	 * register value for guests, is a safe value to use for guests
+	 * on the host.
+	 */
+	WARN_ON_ONCE(arm64_check_features(id_reg->ftr_bits,
+					  id_reg->vcpu_limit_val, val));
+}
+
+static int validate_id_reg(struct kvm_vcpu *vcpu,
+			   const struct id_reg_desc *id_reg, u64 val)
+{
+	u64 limit, tmp_val;
+	int err;
+
+	limit = id_reg->vcpu_limit_val;
+
+	/*
+	 * Replace the fields that are indicated in ignore_mask with
+	 * the value in the limit to not have arm64_check_features()
+	 * check the field in @val.
+	 */
+	tmp_val = val & ~id_reg->ignore_mask;
+	tmp_val |= (limit & id_reg->ignore_mask);
+
+	/* Check if the value indicates any feature that is not in the limit. */
+	err = arm64_check_features(id_reg->ftr_bits, tmp_val, limit);
+	if (err)
+		return err;
+
+	if (id_reg && id_reg->validate)
+		/* Run the ID register specific validity check. */
+		err = id_reg->validate(vcpu, id_reg, val);
+
+	return err;
+}
+
 /*
  * ARMv8.1 mandates at least a trivial LORegion implementation, where all the
  * RW registers are RES0 (which we can implement as RAZ/WI). On an ARMv8.0
@@ -1115,10 +1222,107 @@ static bool is_id_reg(u32 id)
 		sys_reg_CRm(id) < 8);
 }
 
+static u64 read_kvm_id_reg(struct kvm *kvm, u32 id)
+{
+	return kvm->arch.id_regs[IDREG_IDX(id)];
+}
+
+static int __modify_kvm_id_reg(struct kvm *kvm, u32 id, u64 val,
+			     u64 preserve_mask)
+{
+	u64 old, new;
+
+	lockdep_assert_held(&kvm->lock);
+
+	old = kvm->arch.id_regs[IDREG_IDX(id)];
+
+	/* Preserve the value at the bit position set in preserve_mask */
+	new = old & preserve_mask;
+	new |= (val & ~preserve_mask);
+
+	/* Don't allow to modify ID register value after KVM_RUN on any vCPUs */
+	if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags) &&
+	    new != old)
+		return -EBUSY;
+
+	WRITE_ONCE(kvm->arch.id_regs[IDREG_IDX(id)], new);
+
+	return 0;
+}
+
+static int modify_kvm_id_reg(struct kvm *kvm, u32 id, u64 val,
+			     u64 preserve_mask)
+{
+	int ret;
+
+	mutex_lock(&kvm->lock);
+	ret = __modify_kvm_id_reg(kvm, id, val, preserve_mask);
+	mutex_unlock(&kvm->lock);
+
+	return ret;
+}
+
+static int write_kvm_id_reg(struct kvm *kvm, u32 id, u64 val)
+{
+	return modify_kvm_id_reg(kvm, id, val, 0);
+}
+
+/*
+ * KVM basically forces all vCPUs of the guest to have a uniform value for
+ * each ID register (it means KVM_SET_ONE_REG for a vCPU affects all
+ * the vCPUs of the guest), and the id_regs[] of kvm_arch holds values
+ * of ID registers for the guest.  However, there is an exception for
+ * ID register fields corresponding to CPU features that can be
+ * configured per vCPU by KVM_ARM_VCPU_INIT, or etc (e.g. PMUv3, SVE, etc).
+ * For such fields, all vCPUs that have the feature will have a non-zero
+ * uniform value, which can be updated by userspace, but the vCPUs that
+ * don't have the feature will have zero for the fields.
+ * Values that @id_regs holds are for vCPUs that have such features.  So,
+ * to get the ID register value for a vCPU that doesn't have those features,
+ * the corresponding fields in id_regs[] needs to be cleared.
+ * A bitmask of the fields are provided by id_reg_desc's vcpu_mask(), and
+ * __write_id_reg() and __read_id_reg() take care of those fields using
+ * the bitmask.
+ */
+static int __write_id_reg(struct kvm_vcpu *vcpu,
+			  struct id_reg_desc *id_reg, u64 val)
+{
+	u64 mask = 0;
+	u32 id = reg_to_encoding(&id_reg->reg_desc);
+
+	if (id_reg->vcpu_mask)
+		mask = id_reg->vcpu_mask(vcpu, id_reg);
+
+	/*
+	 * Update the ID register for the guest with @val, except for fields
+	 * that are set in the mask, which indicates fields for opt-in
+	 * features that are not configured for the vCPU.
+	 */
+	return modify_kvm_id_reg(vcpu->kvm, id, val, mask);
+}
+
+static u64 __read_id_reg(const struct kvm_vcpu *vcpu,
+			 const struct id_reg_desc *id_reg)
+{
+	u32 id = reg_to_encoding(&id_reg->reg_desc);
+	u64 val = read_kvm_id_reg(vcpu->kvm, id);
+
+	if (id_reg && id_reg->vcpu_mask)
+		/* Clear fields for opt-in features that are not configured. */
+		val &= ~(id_reg->vcpu_mask(vcpu, id_reg));
+
+	return val;
+}
+
 static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id)
 {
-	u64 val = vcpu->kvm->arch.id_regs[IDREG_IDX(id)];
+	u64 val;
+	const struct id_reg_desc *id_reg = get_id_reg_desc(id);
+
+	if (id_reg)
+		return __read_id_reg(vcpu, id_reg);
 
+	val = read_kvm_id_reg(vcpu->kvm, id);
 	switch (id) {
 	case SYS_ID_AA64PFR0_EL1:
 		if (!vcpu_has_sve(vcpu))
@@ -1175,9 +1379,7 @@ static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id)
 static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 		       struct sys_reg_desc const *r, bool raz)
 {
-	u32 id = reg_to_encoding(r);
-
-	return raz ? 0 : read_id_reg_with_encoding(vcpu, id);
+	return raz ? 0 : read_id_reg_with_encoding(vcpu, reg_to_encoding(r));
 }
 
 static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
@@ -1277,12 +1479,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
-/*
- * cpufeature ID register user accessors
- *
- * For now, these registers are immutable for userspace, so for set_id_reg()
- * we don't allow the effective value to be changed.
- */
+/* cpufeature ID register user accessors */
 static int __get_id_reg(const struct kvm_vcpu *vcpu,
 			const struct sys_reg_desc *rd, void __user *uaddr,
 			bool raz)
@@ -1293,11 +1490,32 @@ static int __get_id_reg(const struct kvm_vcpu *vcpu,
 	return reg_to_user(uaddr, &val, id);
 }
 
-static int __set_id_reg(const struct kvm_vcpu *vcpu,
+/*
+ * Check if the given id indicates AArch32 ID register encoding.
+ */
+static bool is_aarch32_id_reg(u32 id)
+{
+	u32 crm, op2;
+
+	if (!is_id_reg(id))
+		return false;
+
+	crm = sys_reg_CRm(id);
+	op2 = sys_reg_Op2(id);
+	if (crm == 1 || crm == 2 || (crm == 3 && (op2 != 3 && op2 != 7)))
+		/* AArch32 ID register */
+		return true;
+
+	return false;
+}
+
+static int __set_id_reg(struct kvm_vcpu *vcpu,
 			const struct sys_reg_desc *rd, void __user *uaddr,
 			bool raz)
 {
 	const u64 id = sys_reg_to_index(rd);
+	u32 encoding = reg_to_encoding(rd);
+	struct id_reg_desc *id_reg;
 	int err;
 	u64 val;
 
@@ -1305,11 +1523,33 @@ static int __set_id_reg(const struct kvm_vcpu *vcpu,
 	if (err)
 		return err;
 
-	/* This is what we mean by invariant: you can't change it. */
-	if (val != read_id_reg(vcpu, rd, raz))
+	if (val == read_id_reg(vcpu, rd, raz))
+		/* The value is same as the current value. Nothing to do. */
+		return 0;
+
+	/* Don't allow to modify the register's value if the register is raz. */
+	if (raz)
 		return -EINVAL;
 
-	return 0;
+	/*
+	 * Don't allow to modify the register's value if the register doesn't
+	 * have the id_reg_desc.
+	 */
+	id_reg = get_id_reg_desc(encoding);
+	if (!id_reg)
+		return -EINVAL;
+
+	/*
+	 * Skip the validation of AArch32 ID registers if the system doesn't
+	 * 32bit EL0 (their value are UNKNOWN).
+	 */
+	if (system_supports_32bit_el0() || !is_aarch32_id_reg(encoding)) {
+		err = validate_id_reg(vcpu, id_reg, val);
+		if (err)
+			return err;
+	}
+
+	return __write_id_reg(vcpu, id_reg, val);
 }
 
 static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
@@ -2872,6 +3112,8 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 	return write_demux_regids(uindices);
 }
 
+static void id_reg_desc_init_all(void);
+
 void kvm_sys_reg_table_init(void)
 {
 	unsigned int i;
@@ -2906,6 +3148,43 @@ void kvm_sys_reg_table_init(void)
 			break;
 	/* Clear all higher bits. */
 	cache_levels &= (1 << (i*3))-1;
+
+	id_reg_desc_init_all();
+}
+
+/*
+ * Update the ID register's field with @fval for the guest.
+ * The caller is expected to hold the kvm->lock.
+ * This will not fail unless any vCPUs in the guest have started.
+ */
+int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval)
+{
+	u64 val = ((u64)fval & ARM64_FEATURE_FIELD_MASK) << field_shift;
+	u64 preserve_mask = ~(ARM64_FEATURE_FIELD_MASK << field_shift);
+
+	return __modify_kvm_id_reg(kvm, id, val, preserve_mask);
+}
+
+/* A table for ID registers's information. */
+static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {};
+
+static inline struct id_reg_desc *get_id_reg_desc(u32 id)
+{
+	return id_reg_desc_table[IDREG_IDX(id)];
+}
+
+static void id_reg_desc_init_all(void)
+{
+	int i;
+	struct id_reg_desc *id_reg;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_desc_table); i++) {
+		id_reg = (struct id_reg_desc *)id_reg_desc_table[i];
+		if (!id_reg)
+			continue;
+
+		id_reg_desc_init(id_reg);
+	}
 }
 
 /*
@@ -2918,6 +3197,7 @@ void set_default_id_regs(struct kvm *kvm)
 	u32 id;
 	const struct sys_reg_desc *rd;
 	u64 val;
+	struct id_reg_desc *idr;
 	struct sys_reg_params params = {
 		Op0(sys_reg_Op0(SYS_ID_PFR0_EL1)),
 		Op1(sys_reg_Op1(SYS_ID_PFR0_EL1)),
@@ -2942,7 +3222,8 @@ void set_default_id_regs(struct kvm *kvm)
 			/* Hidden or reserved ID register */
 			continue;
 
-		val = read_sanitised_ftr_reg(id);
-		kvm->arch.id_regs[IDREG_IDX(id)] = val;
+		idr = get_id_reg_desc(id);
+		val = idr ? idr->vcpu_limit_val : read_sanitised_ftr_reg(id);
+		WARN_ON_ONCE(write_kvm_id_reg(kvm, id, val));
 	}
 }
-- 
2.36.0.rc0.470.gd361397f0d-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 03/38] KVM: arm64: Introduce struct id_reg_desc
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

This patch lays the groundwork to make ID registers writable.

Introduce struct id_reg_desc for an ID register to manage the
register specific control of its value for the guest, and provide set
of functions commonly used for ID registers to make them writable.
Use the id_reg_desc to do register specific initialization, validation
of the ID register, etc.  The id_reg_desc has reg_desc field (struct
sys_reg_desc), which will be used instead of sys_reg_desc in
sys_reg_descs[] for ID registers in the following patches (and then
the entries in sys_reg_descs[] will be removed).

At present, changing an ID register from userspace is allowed only
if the ID register has the id_reg_desc, but that will be changed
by the following patches.

No ID register has the id_reg_desc yet, and the following patches
will add them for all the ID registers currently in sys_reg_descs[].

kvm_set_id_reg_feature(), which is introduced in this patch,
is going to be used by the following patch outside from sys_regs.c
when an ID register field needs to be updated.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/kvm_host.h |   1 +
 arch/arm64/include/asm/sysreg.h   |   3 +-
 arch/arm64/kvm/sys_regs.c         | 313 ++++++++++++++++++++++++++++--
 3 files changed, 300 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index fc836df84748..a43fddd58e68 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -785,6 +785,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 				struct kvm_arm_copy_mte_tags *copy_tags);
 
 void set_default_id_regs(struct kvm *kvm);
+int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval);
 
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index fbf5f8bb9055..3d860108661b 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -1234,9 +1234,10 @@
 #define ICH_VTR_TDS_MASK	(1 << ICH_VTR_TDS_SHIFT)
 
 #define ARM64_FEATURE_FIELD_BITS	4
+#define ARM64_FEATURE_FIELD_MASK	GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0)
 
 /* Create a mask for the feature bits of the specified feature. */
-#define ARM64_FEATURE_MASK(x)	(GENMASK_ULL(x##_SHIFT + ARM64_FEATURE_FIELD_BITS - 1, x##_SHIFT))
+#define ARM64_FEATURE_MASK(x)	(ARM64_FEATURE_FIELD_MASK << x##_SHIFT)
 
 #ifdef __ASSEMBLY__
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 5b813a0b7b1c..30adc19e4619 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -34,6 +34,7 @@
 #include "trace.h"
 
 static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id);
+static inline struct id_reg_desc *get_id_reg_desc(u32 id);
 
 /*
  * All of this file is extremely similar to the ARM coproc.c, but the
@@ -269,6 +270,112 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 		return read_zero(vcpu, p);
 }
 
+/*
+ * Number of entries in id_reg_desc's ftr_bits[] (Number of 4 bits fields
+ * in 64 bit register + 1 entry for a terminator entry).
+ */
+#define	FTR_FIELDS_NUM	17
+
+struct id_reg_desc {
+	const struct sys_reg_desc	reg_desc;
+
+	/*
+	 * Limit value of the register for a vcpu. The value is the sanitized
+	 * system value with bits set/cleared for unsupported features for the
+	 * guest.
+	 */
+	u64	vcpu_limit_val;
+
+	/* Fields that are not validated by arm64_check_features. */
+	u64	ignore_mask;
+
+	/* An optional initialization function of the id_reg_desc */
+	void (*init)(struct id_reg_desc *id_reg);
+
+	/*
+	 * This is an optional ID register specific validation function. When
+	 * userspace tries to set the ID register, arm64_check_features()
+	 * will check if the requested value indicates any features that cannot
+	 * be supported by KVM on the host.  But, some ID register fields need
+	 * a special checking, and this function can be used for such fields.
+	 * e.g. When SVE is configured for a vCPU by KVM_ARM_VCPU_INIT,
+	 * ID_AA64PFR0_EL1.SVE shouldn't be set to 0 for the vCPU.
+	 * The validation function for ID_AA64PFR0_EL1 could be used to check
+	 * the field is consistent with SVE configuration.
+	 */
+	int (*validate)(struct kvm_vcpu *vcpu, const struct id_reg_desc *id_reg,
+			u64 val);
+
+	/*
+	 * Return a bitmask of the vCPU's ID register fields that are not
+	 * synced with saved (per VM) ID register value, which usually
+	 * indicates opt-in CPU features that are not configured for the vCPU.
+	 * ID registers are saved per VM, but some opt-in CPU features can
+	 * be configured per vCPU.  The saved (per VM) values for such
+	 * features are for vCPUs with the features (and zero for
+	 * vCPUs without the features).
+	 * Return value of this function is used to handle such fields
+	 * for per vCPU ID register read/write request with saved per VM
+	 * ID register.  See the __write_id_reg's comment for more detail.
+	 */
+	u64 (*vcpu_mask)(const struct kvm_vcpu *vcpu,
+			 const struct id_reg_desc *id_reg);
+
+	/*
+	 * Used to validate the ID register values with arm64_check_features().
+	 * The last item in the array must be terminated by an item whose
+	 * width field is zero as that is expected by arm64_check_features().
+	 */
+	struct arm64_ftr_bits	ftr_bits[FTR_FIELDS_NUM];
+};
+
+static void id_reg_desc_init(struct id_reg_desc *id_reg)
+{
+	u32 id = reg_to_encoding(&id_reg->reg_desc);
+	u64 val = read_sanitised_ftr_reg(id);
+
+	id_reg->vcpu_limit_val = val;
+	if (id_reg->init)
+		id_reg->init(id_reg);
+
+	/*
+	 * id_reg->init() might update id_reg->vcpu_limit_val.
+	 * Make sure that id_reg->vcpu_limit_val, which will be the default
+	 * register value for guests, is a safe value to use for guests
+	 * on the host.
+	 */
+	WARN_ON_ONCE(arm64_check_features(id_reg->ftr_bits,
+					  id_reg->vcpu_limit_val, val));
+}
+
+static int validate_id_reg(struct kvm_vcpu *vcpu,
+			   const struct id_reg_desc *id_reg, u64 val)
+{
+	u64 limit, tmp_val;
+	int err;
+
+	limit = id_reg->vcpu_limit_val;
+
+	/*
+	 * Replace the fields that are indicated in ignore_mask with
+	 * the value in the limit to not have arm64_check_features()
+	 * check the field in @val.
+	 */
+	tmp_val = val & ~id_reg->ignore_mask;
+	tmp_val |= (limit & id_reg->ignore_mask);
+
+	/* Check if the value indicates any feature that is not in the limit. */
+	err = arm64_check_features(id_reg->ftr_bits, tmp_val, limit);
+	if (err)
+		return err;
+
+	if (id_reg && id_reg->validate)
+		/* Run the ID register specific validity check. */
+		err = id_reg->validate(vcpu, id_reg, val);
+
+	return err;
+}
+
 /*
  * ARMv8.1 mandates at least a trivial LORegion implementation, where all the
  * RW registers are RES0 (which we can implement as RAZ/WI). On an ARMv8.0
@@ -1115,10 +1222,107 @@ static bool is_id_reg(u32 id)
 		sys_reg_CRm(id) < 8);
 }
 
+static u64 read_kvm_id_reg(struct kvm *kvm, u32 id)
+{
+	return kvm->arch.id_regs[IDREG_IDX(id)];
+}
+
+static int __modify_kvm_id_reg(struct kvm *kvm, u32 id, u64 val,
+			     u64 preserve_mask)
+{
+	u64 old, new;
+
+	lockdep_assert_held(&kvm->lock);
+
+	old = kvm->arch.id_regs[IDREG_IDX(id)];
+
+	/* Preserve the value at the bit positions set in preserve_mask */
+	new = old & preserve_mask;
+	new |= (val & ~preserve_mask);
+
+	/* Don't allow ID register values to be modified after KVM_RUN on any vCPU */
+	if (test_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags) &&
+	    new != old)
+		return -EBUSY;
+
+	WRITE_ONCE(kvm->arch.id_regs[IDREG_IDX(id)], new);
+
+	return 0;
+}
+
+static int modify_kvm_id_reg(struct kvm *kvm, u32 id, u64 val,
+			     u64 preserve_mask)
+{
+	int ret;
+
+	mutex_lock(&kvm->lock);
+	ret = __modify_kvm_id_reg(kvm, id, val, preserve_mask);
+	mutex_unlock(&kvm->lock);
+
+	return ret;
+}
+
+static int write_kvm_id_reg(struct kvm *kvm, u32 id, u64 val)
+{
+	return modify_kvm_id_reg(kvm, id, val, 0);
+}
+
+/*
+ * KVM basically forces all vCPUs of the guest to have a uniform value for
+ * each ID register (i.e. KVM_SET_ONE_REG for one vCPU affects all the
+ * vCPUs of the guest), and the id_regs[] of kvm_arch holds the ID
+ * register values for the guest.  However, there is an exception for
+ * ID register fields corresponding to CPU features that can be
+ * configured per vCPU, e.g. by KVM_ARM_VCPU_INIT (PMUv3, SVE, etc).
+ * For such fields, all vCPUs that have the feature will have a non-zero
+ * uniform value, which can be updated by userspace, but the vCPUs that
+ * don't have the feature will have zero for the fields.
+ * The values that id_regs[] holds are for vCPUs that have such features.
+ * So, to get the ID register value for a vCPU that doesn't have those
+ * features, the corresponding fields in id_regs[] need to be cleared.
+ * A bitmask of those fields is provided by id_reg_desc's vcpu_mask(),
+ * and __write_id_reg() and __read_id_reg() take care of those fields
+ * using the bitmask.
+ */
+static int __write_id_reg(struct kvm_vcpu *vcpu,
+			  struct id_reg_desc *id_reg, u64 val)
+{
+	u64 mask = 0;
+	u32 id = reg_to_encoding(&id_reg->reg_desc);
+
+	if (id_reg->vcpu_mask)
+		mask = id_reg->vcpu_mask(vcpu, id_reg);
+
+	/*
+	 * Update the ID register for the guest with @val, except for fields
+	 * that are set in the mask, which indicates fields for opt-in
+	 * features that are not configured for the vCPU.
+	 */
+	return modify_kvm_id_reg(vcpu->kvm, id, val, mask);
+}
+
+static u64 __read_id_reg(const struct kvm_vcpu *vcpu,
+			 const struct id_reg_desc *id_reg)
+{
+	u32 id = reg_to_encoding(&id_reg->reg_desc);
+	u64 val = read_kvm_id_reg(vcpu->kvm, id);
+
+	if (id_reg->vcpu_mask)
+		/* Clear fields for opt-in features that are not configured. */
+		val &= ~(id_reg->vcpu_mask(vcpu, id_reg));
+
+	return val;
+}
+
 static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id)
 {
-	u64 val = vcpu->kvm->arch.id_regs[IDREG_IDX(id)];
+	u64 val;
+	const struct id_reg_desc *id_reg = get_id_reg_desc(id);
+
+	if (id_reg)
+		return __read_id_reg(vcpu, id_reg);
 
+	val = read_kvm_id_reg(vcpu->kvm, id);
 	switch (id) {
 	case SYS_ID_AA64PFR0_EL1:
 		if (!vcpu_has_sve(vcpu))
@@ -1175,9 +1379,7 @@ static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id)
 static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 		       struct sys_reg_desc const *r, bool raz)
 {
-	u32 id = reg_to_encoding(r);
-
-	return raz ? 0 : read_id_reg_with_encoding(vcpu, id);
+	return raz ? 0 : read_id_reg_with_encoding(vcpu, reg_to_encoding(r));
 }
 
 static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
@@ -1277,12 +1479,7 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
-/*
- * cpufeature ID register user accessors
- *
- * For now, these registers are immutable for userspace, so for set_id_reg()
- * we don't allow the effective value to be changed.
- */
+/* cpufeature ID register user accessors */
 static int __get_id_reg(const struct kvm_vcpu *vcpu,
 			const struct sys_reg_desc *rd, void __user *uaddr,
 			bool raz)
@@ -1293,11 +1490,32 @@ static int __get_id_reg(const struct kvm_vcpu *vcpu,
 	return reg_to_user(uaddr, &val, id);
 }
 
-static int __set_id_reg(const struct kvm_vcpu *vcpu,
+/*
+ * Check if the given id indicates an AArch32 ID register encoding.
+ */
+static bool is_aarch32_id_reg(u32 id)
+{
+	u32 crm, op2;
+
+	if (!is_id_reg(id))
+		return false;
+
+	crm = sys_reg_CRm(id);
+	op2 = sys_reg_Op2(id);
+	if (crm == 1 || crm == 2 || (crm == 3 && (op2 != 3 && op2 != 7)))
+		/* AArch32 ID register */
+		return true;
+
+	return false;
+}
+
+static int __set_id_reg(struct kvm_vcpu *vcpu,
 			const struct sys_reg_desc *rd, void __user *uaddr,
 			bool raz)
 {
 	const u64 id = sys_reg_to_index(rd);
+	u32 encoding = reg_to_encoding(rd);
+	struct id_reg_desc *id_reg;
 	int err;
 	u64 val;
 
@@ -1305,11 +1523,33 @@ static int __set_id_reg(const struct kvm_vcpu *vcpu,
 	if (err)
 		return err;
 
-	/* This is what we mean by invariant: you can't change it. */
-	if (val != read_id_reg(vcpu, rd, raz))
+	if (val == read_id_reg(vcpu, rd, raz))
+		/* The value is the same as the current value.  Nothing to do. */
+		return 0;
+
+	/* Don't allow the value of a RAZ register to be modified. */
+	if (raz)
 		return -EINVAL;
 
-	return 0;
+	/*
+	 * Don't allow the register's value to be modified if the register
+	 * doesn't have an id_reg_desc.
+	 */
+	id_reg = get_id_reg_desc(encoding);
+	if (!id_reg)
+		return -EINVAL;
+
+	/*
+	 * Skip the validation of AArch32 ID registers if the system doesn't
+	 * support 32bit EL0 (their values are UNKNOWN).
+	 */
+	if (system_supports_32bit_el0() || !is_aarch32_id_reg(encoding)) {
+		err = validate_id_reg(vcpu, id_reg, val);
+		if (err)
+			return err;
+	}
+
+	return __write_id_reg(vcpu, id_reg, val);
 }
 
 static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
@@ -2872,6 +3112,8 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 	return write_demux_regids(uindices);
 }
 
+static void id_reg_desc_init_all(void);
+
 void kvm_sys_reg_table_init(void)
 {
 	unsigned int i;
@@ -2906,6 +3148,43 @@ void kvm_sys_reg_table_init(void)
 			break;
 	/* Clear all higher bits. */
 	cache_levels &= (1 << (i*3))-1;
+
+	id_reg_desc_init_all();
+}
+
+/*
+ * Update the given field of the ID register with @fval for the guest.
+ * The caller is expected to hold the kvm->lock.
+ * This will not fail unless any vCPU in the guest has started.
+ */
+int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval)
+{
+	u64 val = ((u64)fval & ARM64_FEATURE_FIELD_MASK) << field_shift;
+	u64 preserve_mask = ~(ARM64_FEATURE_FIELD_MASK << field_shift);
+
+	return __modify_kvm_id_reg(kvm, id, val, preserve_mask);
+}
+
+/* A table for ID register information. */
+static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {};
+
+static inline struct id_reg_desc *get_id_reg_desc(u32 id)
+{
+	return id_reg_desc_table[IDREG_IDX(id)];
+}
+
+static void id_reg_desc_init_all(void)
+{
+	int i;
+	struct id_reg_desc *id_reg;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_desc_table); i++) {
+		id_reg = id_reg_desc_table[i];
+		if (!id_reg)
+			continue;
+
+		id_reg_desc_init(id_reg);
+	}
 }
 
 /*
@@ -2918,6 +3197,7 @@ void set_default_id_regs(struct kvm *kvm)
 	u32 id;
 	const struct sys_reg_desc *rd;
 	u64 val;
+	struct id_reg_desc *idr;
 	struct sys_reg_params params = {
 		Op0(sys_reg_Op0(SYS_ID_PFR0_EL1)),
 		Op1(sys_reg_Op1(SYS_ID_PFR0_EL1)),
@@ -2942,7 +3222,8 @@ void set_default_id_regs(struct kvm *kvm)
 			/* Hidden or reserved ID register */
 			continue;
 
-		val = read_sanitised_ftr_reg(id);
-		kvm->arch.id_regs[IDREG_IDX(id)] = val;
+		idr = get_id_reg_desc(id);
+		val = idr ? idr->vcpu_limit_val : read_sanitised_ftr_reg(id);
+		WARN_ON_ONCE(write_kvm_id_reg(kvm, id, val));
 	}
 }
-- 
2.36.0.rc0.470.gd361397f0d-goog
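
For illustration, the userspace flow that this change enables looks
roughly like the sketch below.  This is not part of the series: vcpu_fd
stands for a vCPU file descriptor the VMM already holds, error handling
is trimmed, and an arm64 build of <linux/kvm.h> is assumed to provide
ARM64_SYS_REG() and struct kvm_one_reg.

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* ID_AA64PFR0_EL1: op0=3, op1=0, CRn=0, CRm=4, op2=0 */
    #define PFR0_IDX	ARM64_SYS_REG(3, 0, 0, 4, 0)

    static int shrink_csv2(int vcpu_fd)
    {
            __u64 val;
            struct kvm_one_reg reg = {
                    .id   = PFR0_IDX,
                    .addr = (__u64)&val,
            };

            /* The initial value is the upper limit KVM can support. */
            if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
                    return -1;

            /* Clear CSV2 (bits [59:56]); lowering a field is allowed. */
            val &= ~(0xfULL << 56);

            /* Fails with EBUSY once any vCPU of the VM has run. */
            return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }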


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 04/38] KVM: arm64: Generate id_reg_desc's ftr_bits at KVM init when needed
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  -1 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Most entries in ftr_bits[] of id_reg_desc will be UNSIGNED+LOWER_SAFE.
Use that as the default arm64_ftr_bits for any entries that are not
statically defined in ftr_bits[] so that we don't have to statically
define every single UNSIGNED+LOWER_SAFE entry.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 54 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 30adc19e4619..b19e14a1206a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -35,6 +35,7 @@
 
 static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id);
 static inline struct id_reg_desc *get_id_reg_desc(u32 id);
+static void id_reg_desc_init_ftr(struct id_reg_desc *idr);
 
 /*
  * All of this file is extremely similar to the ARM coproc.c, but the
@@ -325,6 +326,8 @@ struct id_reg_desc {
 	 * Used to validate the ID register values with arm64_check_features().
 	 * The array must be terminated by an item whose width field is
 	 * zero, as that is what arm64_check_features() expects.
+	 * Entries that are not statically defined will be generated as
+	 * UNSIGNED+LOWER_SAFE entries during KVM's initialization.
 	 */
 	struct arm64_ftr_bits	ftr_bits[FTR_FIELDS_NUM];
 };
@@ -335,6 +338,9 @@ static void id_reg_desc_init(struct id_reg_desc *id_reg)
 	u64 val = read_sanitised_ftr_reg(id);
 
 	id_reg->vcpu_limit_val = val;
+
+	id_reg_desc_init_ftr(id_reg);
+
 	if (id_reg->init)
 		id_reg->init(id_reg);
 
@@ -3173,6 +3179,54 @@ static inline struct id_reg_desc *get_id_reg_desc(u32 id)
 	return id_reg_desc_table[IDREG_IDX(id)];
 }
 
+void kvm_ftr_bits_set_default(u8 shift, struct arm64_ftr_bits *ftrp)
+{
+	ftrp->sign = FTR_UNSIGNED;
+	ftrp->type = FTR_LOWER_SAFE;
+	ftrp->shift = shift;
+	ftrp->width = ARM64_FEATURE_FIELD_BITS;
+	ftrp->safe_val = 0;
+}
+
+/*
+ * Check to see if the id_reg's ftr_bits have statically defined entries
+ * for all fields of the ID register, and generate the default ones
+ * (FTR_UNSIGNED+FTR_LOWER_SAFE) for any missing fields.
+ */
+static void id_reg_desc_init_ftr(struct id_reg_desc *idr)
+{
+	struct arm64_ftr_bits *ftrp = idr->ftr_bits;
+	int index = 0;
+	int shift;
+	u64 ftr_mask;
+	u64 mask = 0;
+
+	/* Create a mask for fields that are statically defined */
+	for (index = 0; ftrp->width != 0; index++, ftrp++) {
+		ftr_mask = arm64_ftr_mask(ftrp);
+		WARN_ON_ONCE(mask & ftr_mask);
+		mask |= ftr_mask;
+	}
+
+	if (mask == -1UL)
+		/* All fields are statically defined */
+		return;
+
+	/* 'index' now indicates the first unused slot of ftr_bits */
+	for (shift = 0; shift < 64; shift += ARM64_FEATURE_FIELD_BITS) {
+		/* Check if there is an existing ftrp for the field */
+		ftr_mask = ARM64_FEATURE_FIELD_MASK << shift;
+		if (mask & ftr_mask)
+			continue;
+
+		/* Generate the default arm64_ftr_bits for the field */
+		kvm_ftr_bits_set_default(shift, &idr->ftr_bits[index++]);
+		mask |= ftr_mask;
+	}
+
+	WARN_ON((mask != -1UL) || (index != (FTR_FIELDS_NUM - 1)));
+}
+
 static void id_reg_desc_init_all(void)
 {
 	int i;
-- 
2.36.0.rc0.470.gd361397f0d-goog
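
To make the mask walk above concrete: with only one field statically
described (say FP at shift 16), the loop generates default entries for
the remaining fifteen 4-bit fields until the mask saturates.  A
standalone sketch of the same arithmetic (assuming the 4-bit field
layout used by the patch; not part of the series):

    #include <stdio.h>
    #include <stdint.h>

    #define FIELD_BITS	4
    #define FIELD_MASK	0xfULL

    int main(void)
    {
            uint64_t mask = FIELD_MASK << 16;     /* FP statically described */
            int shift, generated = 0;

            for (shift = 0; shift < 64; shift += FIELD_BITS) {
                    if (mask & (FIELD_MASK << shift))
                            continue;             /* already described */
                    mask |= FIELD_MASK << shift;  /* default UNSIGNED+LOWER_SAFE */
                    generated++;
            }

            /* Prints mask=0xffffffffffffffff generated=15 */
            printf("mask=%#llx generated=%d\n",
                   (unsigned long long)mask, generated);
            return 0;
    }

This also shows why the final WARN_ON expects index == FTR_FIELDS_NUM - 1:
all sixteen fields end up described, leaving only the zero-width
terminator entry.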


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 05/38] KVM: arm64: Prohibit modifying values of ID regs for 32bit EL1 guests
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  -1 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Prohibit userspace from modifying values of ID registers for 32bit
EL1 guests (i.e. don't support configurable ID registers for them).

NOTE: The following patches will enable trapping of disabled features
based only on the values of the AArch64 ID registers for the guest,
expecting userspace to keep the AArch32 ID registers consistent with
the AArch64 ones (otherwise it is a userspace bug).  Supporting 32bit
EL1 guests would require that KVM not enable trapping based on the
values of the AArch64 ID registers (and instead enable trapping based
on the AArch32 ID registers where possible).

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index b19e14a1206a..bc06570523f4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1537,6 +1537,10 @@ static int __set_id_reg(struct kvm_vcpu *vcpu,
 	if (raz)
 		return -EINVAL;
 
+	/* Don't allow ID register values to be modified for 32bit EL1 guests */
+	if (test_bit(KVM_ARCH_FLAG_EL1_32BIT, &vcpu->kvm->arch.flags))
+		return -EPERM;
+
 	/*
 	 * Don't allow to modify the register's value if the register doesn't
 	 * have the id_reg_desc.
-- 
2.36.0.rc0.470.gd361397f0d-goog
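
The userspace-visible effect can be sketched as follows (not part of
the series; vcpu_fd is a placeholder, and in a real VMM the target
value would come from KVM_ARM_PREFERRED_TARGET):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* Make the vCPU a 32bit EL1 guest; its ID registers become immutable. */
    static int init_aarch32_vcpu(int vcpu_fd, __u32 target)
    {
            struct kvm_vcpu_init init = { .target = target };

            init.features[0] |= 1u << KVM_ARM_VCPU_EL1_32BIT;

            /*
             * After this succeeds, any KVM_SET_ONE_REG that would change
             * an ID register value fails with EPERM for this guest.
             */
            return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
    }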


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 06/38] KVM: arm64: Make ID_AA64PFR0_EL1 writable
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  -1 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

This patch adds an id_reg_desc for ID_AA64PFR0_EL1 to make it writable
by userspace.

Return an error if userspace tries to set the SVE/GIC fields of the
register to values that conflict with the SVE/GIC configuration for
the guest.  The SIMD/FP/SVE fields of the requested value are
validated according to the Arm ARM.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/sysreg.h |   1 +
 arch/arm64/kvm/sys_regs.c       | 172 +++++++++++++++++++++-----------
 arch/arm64/kvm/vgic/vgic-init.c |   9 ++
 3 files changed, 123 insertions(+), 59 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 3d860108661b..3adb402fab86 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -834,6 +834,7 @@
 #define ID_AA64PFR0_ASIMD_SUPPORTED	0x0
 #define ID_AA64PFR0_ELx_64BIT_ONLY	0x1
 #define ID_AA64PFR0_ELx_32BIT_64BIT	0x2
+#define ID_AA64PFR0_GIC3		0x1
 
 /* id_aa64pfr1 */
 #define ID_AA64PFR1_MPAMFRAC_SHIFT	16
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index bc06570523f4..67a0604fe6f1 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -271,6 +271,19 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 		return read_zero(vcpu, p);
 }
 
+#define __FTR_BITS(ftr_sign, ftr_type, bit_pos, safe) {		\
+	.sign = ftr_sign,					\
+	.type = ftr_type,					\
+	.shift = bit_pos,					\
+	.width = ARM64_FEATURE_FIELD_BITS,			\
+	.safe_val = safe,					\
+}
+
+#define S_FTR_BITS(ftr_type, bit_pos, safe_val)	\
+	__FTR_BITS(FTR_SIGNED, ftr_type, bit_pos, safe_val)
+#define U_FTR_BITS(ftr_type, bit_pos, safe_val)	\
+	__FTR_BITS(FTR_UNSIGNED, ftr_type, bit_pos, safe_val)
+
 /*
  * Number of entries in id_reg_desc's ftr_bits[] (Number of 4 bits fields
  * in 64 bit register + 1 entry for a terminator entry).
@@ -354,6 +367,86 @@ static void id_reg_desc_init(struct id_reg_desc *id_reg)
 					  id_reg->vcpu_limit_val, val));
 }
 
+static int validate_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
+				    const struct id_reg_desc *id_reg, u64 val)
+{
+	int fp, simd;
+	unsigned int gic;
+	bool vcpu_has_sve = vcpu_has_sve(vcpu);
+	bool pfr0_has_sve = id_aa64pfr0_sve(val);
+
+	simd = cpuid_feature_extract_signed_field(val, ID_AA64PFR0_ASIMD_SHIFT);
+	fp = cpuid_feature_extract_signed_field(val, ID_AA64PFR0_FP_SHIFT);
+	/* The AdvSIMD field must have the same value as the FP field */
+	if (simd != fp)
+		return -EINVAL;
+
+	/* FP must be supported when SVE is supported */
+	if (pfr0_has_sve && (fp < 0))
+		return -EINVAL;
+
+	/* Check if there is a conflict with a request via KVM_ARM_VCPU_INIT */
+	if (vcpu_has_sve ^ pfr0_has_sve)
+		return -EPERM;
+
+	if (irqchip_in_kernel(vcpu->kvm) &&
+	    vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
+		gic = cpuid_feature_extract_unsigned_field(val,
+							ID_AA64PFR0_GIC_SHIFT);
+		if (gic == 0)
+			return -EPERM;
+
+		if (gic > ID_AA64PFR0_GIC3)
+			return -E2BIG;
+	} else {
+		u64 mask = ARM64_FEATURE_MASK(ID_AA64PFR0_GIC);
+		int r = arm64_check_features(id_reg->ftr_bits, val & mask,
+					     id_reg->vcpu_limit_val & mask);
+
+		if (r)
+			return r;
+	}
+
+	return 0;
+}
+
+static void init_id_aa64pfr0_el1_desc(struct id_reg_desc *id_reg)
+{
+	u64 limit = id_reg->vcpu_limit_val;
+	unsigned int gic;
+
+	limit &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_AMU);
+	if (!system_supports_sve())
+		limit &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_SVE);
+
+	/*
+	 * The default is to expose CSV2 == 1 and CSV3 == 1 if the HW
+	 * isn't affected.  Userspace can override this as long as it
+	 * doesn't promise the impossible.
+	 */
+	limit &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2) |
+		   ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3));
+
+	if (arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED)
+		limit |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2), 1);
+	if (arm64_get_meltdown_state() == SPECTRE_UNAFFECTED)
+		limit |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3), 1);
+
+	gic = cpuid_feature_extract_unsigned_field(limit, ID_AA64PFR0_GIC_SHIFT);
+	if (gic > 1) {
+		/* Limit to GICv3.0/4.0 */
+		limit &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_GIC);
+		limit |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_GIC), ID_AA64PFR0_GIC3);
+	}
+	id_reg->vcpu_limit_val = limit;
+}
+
+static u64 vcpu_mask_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu,
+					 const struct id_reg_desc *idr)
+{
+	return vcpu_has_sve(vcpu) ? 0 : ARM64_FEATURE_MASK(ID_AA64PFR0_SVE);
+}
+
 static int validate_id_reg(struct kvm_vcpu *vcpu,
 			   const struct id_reg_desc *id_reg, u64 val)
 {
@@ -1330,20 +1423,6 @@ static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id)
 
 	val = read_kvm_id_reg(vcpu->kvm, id);
 	switch (id) {
-	case SYS_ID_AA64PFR0_EL1:
-		if (!vcpu_has_sve(vcpu))
-			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_SVE);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_AMU);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV2), (u64)vcpu->kvm->arch.pfr0_csv2);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_CSV3), (u64)vcpu->kvm->arch.pfr0_csv3);
-		if (irqchip_in_kernel(vcpu->kvm) &&
-		    vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
-			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_GIC);
-			val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_GIC), 1);
-		}
-		break;
 	case SYS_ID_AA64PFR1_EL1:
 		if (!kvm_has_mte(vcpu->kvm))
 			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_MTE);
@@ -1443,48 +1522,6 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
 	return REG_HIDDEN;
 }
 
-static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
-			       const struct sys_reg_desc *rd,
-			       const struct kvm_one_reg *reg, void __user *uaddr)
-{
-	const u64 id = sys_reg_to_index(rd);
-	u8 csv2, csv3;
-	int err;
-	u64 val;
-
-	err = reg_from_user(&val, uaddr, id);
-	if (err)
-		return err;
-
-	/*
-	 * Allow AA64PFR0_EL1.CSV2 to be set from userspace as long as
-	 * it doesn't promise more than what is actually provided (the
-	 * guest could otherwise be covered in ectoplasmic residue).
-	 */
-	csv2 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_CSV2_SHIFT);
-	if (csv2 > 1 ||
-	    (csv2 && arm64_get_spectre_v2_state() != SPECTRE_UNAFFECTED))
-		return -EINVAL;
-
-	/* Same thing for CSV3 */
-	csv3 = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR0_CSV3_SHIFT);
-	if (csv3 > 1 ||
-	    (csv3 && arm64_get_meltdown_state() != SPECTRE_UNAFFECTED))
-		return -EINVAL;
-
-	/* We can only differ with CSV[23], and anything else is an error */
-	val ^= read_id_reg(vcpu, rd, false);
-	val &= ~((0xFUL << ID_AA64PFR0_CSV2_SHIFT) |
-		 (0xFUL << ID_AA64PFR0_CSV3_SHIFT));
-	if (val)
-		return -EINVAL;
-
-	vcpu->kvm->arch.pfr0_csv2 = csv2;
-	vcpu->kvm->arch.pfr0_csv3 = csv3 ;
-
-	return 0;
-}
-
 /* cpufeature ID register user accessors */
 static int __get_id_reg(const struct kvm_vcpu *vcpu,
 			const struct sys_reg_desc *rd, void __user *uaddr,
@@ -1809,8 +1846,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
 	/* AArch64 ID registers */
 	/* CRm=4 */
-	{ SYS_DESC(SYS_ID_AA64PFR0_EL1), .access = access_id_reg,
-	  .get_user = get_id_reg, .set_user = set_id_aa64pfr0_el1, },
+	ID_SANITISED(ID_AA64PFR0_EL1),
 	ID_SANITISED(ID_AA64PFR1_EL1),
 	ID_UNALLOCATED(4,2),
 	ID_UNALLOCATED(4,3),
@@ -3175,8 +3211,26 @@ int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval)
 	return __modify_kvm_id_reg(kvm, id, val, preserve_mask);
 }
 
+static struct id_reg_desc id_aa64pfr0_el1_desc = {
+	.reg_desc = ID_SANITISED(ID_AA64PFR0_EL1),
+	.ignore_mask = ARM64_FEATURE_MASK(ID_AA64PFR0_GIC),
+	.init = init_id_aa64pfr0_el1_desc,
+	.validate = validate_id_aa64pfr0_el1,
+	.vcpu_mask = vcpu_mask_id_aa64pfr0_el1,
+	.ftr_bits = {
+		S_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, ID_AA64PFR0_FP_NI),
+		S_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, ID_AA64PFR0_ASIMD_NI),
+	}
+};
+
+#define ID_DESC(id_reg_name, id_reg_desc)	\
+	[IDREG_IDX(SYS_##id_reg_name)] = (id_reg_desc)
+
 /* A table for ID register information. */
-static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {};
+static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {
+	/* CRm=4 */
+	ID_DESC(ID_AA64PFR0_EL1, &id_aa64pfr0_el1_desc),
+};
 
 static inline struct id_reg_desc *get_id_reg_desc(u32 id)
 {
diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
index fc00304fe7d8..f0632b46fbf9 100644
--- a/arch/arm64/kvm/vgic/vgic-init.c
+++ b/arch/arm64/kvm/vgic/vgic-init.c
@@ -117,6 +117,15 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
 	else
 		INIT_LIST_HEAD(&kvm->arch.vgic.rd_regions);
 
+	if (type == KVM_DEV_TYPE_ARM_VGIC_V3)
+		/*
+		 * Set ID_AA64PFR0_EL1.GIC to 1.  This shouldn't fail unless
+		 * any vCPU in the guest has started.
+		 */
+		WARN_ON_ONCE(kvm_set_id_reg_feature(kvm, SYS_ID_AA64PFR0_EL1,
+						    ID_AA64PFR0_GIC_SHIFT,
+						    ID_AA64PFR0_GIC3));
+
 out_unlock:
 	unlock_all_vcpus(kvm);
 	return ret;
-- 
2.36.0.rc0.470.gd361397f0d-goog
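
For illustration, the new GIC check is visible to userspace roughly as
in the sketch below (not part of the series; vcpu_fd is a placeholder,
and ARM64_SYS_REG(3, 0, 0, 4, 0) is the ID_AA64PFR0_EL1 encoding):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    #define PFR0_IDX	ARM64_SYS_REG(3, 0, 0, 4, 0)
    #define GIC_SHIFT	24

    /* On a VM with an in-kernel GICv3, hiding the GIC field must fail. */
    static int try_hide_gic(int vcpu_fd)
    {
            __u64 val;
            struct kvm_one_reg reg = { .id = PFR0_IDX, .addr = (__u64)&val };

            if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
                    return -1;

            val &= ~(0xfULL << GIC_SHIFT);  /* GIC -> 0 */

            /* Expected to fail with EPERM (see validate_id_aa64pfr0_el1()). */
            return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }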


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 07/38] KVM: arm64: Make ID_AA64PFR1_EL1 writable
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  -1 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

This patch adds an id_reg_desc for ID_AA64PFR1_EL1 to make it writable
by userspace.

Return an error if userspace tries to set the MTE field of the register
to a value that conflicts with the KVM_CAP_ARM_MTE configuration for
the guest.
Skip validation of the fractional feature fields for now; they will
be handled by the following patches.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/sysreg.h |  1 +
 arch/arm64/kvm/sys_regs.c       | 42 +++++++++++++++++++++++++++++----
 2 files changed, 39 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 3adb402fab86..b33b7ce87fb2 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -837,6 +837,7 @@
 #define ID_AA64PFR0_GIC3		0x1
 
 /* id_aa64pfr1 */
+#define ID_AA64PFR1_CSV2FRAC_SHIFT	32
 #define ID_AA64PFR1_MPAMFRAC_SHIFT	16
 #define ID_AA64PFR1_RASFRAC_SHIFT	12
 #define ID_AA64PFR1_MTE_SHIFT		8
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 67a0604fe6f1..c3537cd4fe58 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -410,6 +410,21 @@ static int validate_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static int validate_id_aa64pfr1_el1(struct kvm_vcpu *vcpu,
+				    const struct id_reg_desc *id_reg, u64 val)
+{
+	bool kvm_mte = kvm_has_mte(vcpu->kvm);
+	unsigned int mte;
+
+	mte = cpuid_feature_extract_unsigned_field(val, ID_AA64PFR1_MTE_SHIFT);
+
+	/* Check if there is a conflict with the KVM_CAP_ARM_MTE configuration. */
+	if (kvm_mte ^ (mte > 0))
+		return -EPERM;
+
+	return 0;
+}
+
 static void init_id_aa64pfr0_el1_desc(struct id_reg_desc *id_reg)
 {
 	u64 limit = id_reg->vcpu_limit_val;
@@ -441,12 +456,24 @@ static void init_id_aa64pfr0_el1_desc(struct id_reg_desc *id_reg)
 	id_reg->vcpu_limit_val = limit;
 }
 
+static void init_id_aa64pfr1_el1_desc(struct id_reg_desc *id_reg)
+{
+	if (!system_supports_mte())
+		id_reg->vcpu_limit_val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_MTE);
+}
+
 static u64 vcpu_mask_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu,
 					 const struct id_reg_desc *idr)
 {
 	return vcpu_has_sve(vcpu) ? 0 : ARM64_FEATURE_MASK(ID_AA64PFR0_SVE);
 }
 
+static u64 vcpu_mask_id_aa64pfr1_el1(const struct kvm_vcpu *vcpu,
+					 const struct id_reg_desc *idr)
+{
+	return kvm_has_mte(vcpu->kvm) ? 0 : (ARM64_FEATURE_MASK(ID_AA64PFR1_MTE));
+}
+
 static int validate_id_reg(struct kvm_vcpu *vcpu,
 			   const struct id_reg_desc *id_reg, u64 val)
 {
@@ -1423,10 +1450,6 @@ static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id)
 
 	val = read_kvm_id_reg(vcpu->kvm, id);
 	switch (id) {
-	case SYS_ID_AA64PFR1_EL1:
-		if (!kvm_has_mte(vcpu->kvm))
-			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_MTE);
-		break;
 	case SYS_ID_AA64ISAR1_EL1:
 		if (!vcpu_has_ptrauth(vcpu))
 			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_APA) |
@@ -3223,6 +3246,16 @@ static struct id_reg_desc id_aa64pfr0_el1_desc = {
 	}
 };
 
+static struct id_reg_desc id_aa64pfr1_el1_desc = {
+	.reg_desc = ID_SANITISED(ID_AA64PFR1_EL1),
+	.ignore_mask = ARM64_FEATURE_MASK(ID_AA64PFR1_RASFRAC) |
+		       ARM64_FEATURE_MASK(ID_AA64PFR1_MPAMFRAC) |
+		       ARM64_FEATURE_MASK(ID_AA64PFR1_CSV2FRAC),
+	.init = init_id_aa64pfr1_el1_desc,
+	.validate = validate_id_aa64pfr1_el1,
+	.vcpu_mask = vcpu_mask_id_aa64pfr1_el1,
+};
+
 #define ID_DESC(id_reg_name, id_reg_desc)	\
 	[IDREG_IDX(SYS_##id_reg_name)] = (id_reg_desc)
 
@@ -3230,6 +3263,7 @@ static struct id_reg_desc id_aa64pfr0_el1_desc = {
 static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {
 	/* CRm=4 */
 	ID_DESC(ID_AA64PFR0_EL1, &id_aa64pfr0_el1_desc),
+	ID_DESC(ID_AA64PFR1_EL1, &id_aa64pfr1_el1_desc),
 };
 
 static inline struct id_reg_desc *get_id_reg_desc(u32 id)
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 08/38] KVM: arm64: Make ID_AA64ISAR0_EL1 writable
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add an id_reg_desc for ID_AA64ISAR0_EL1 to make the register writable
by userspace.

Updating the SM3, SM4, SHA1, SHA2 and SHA3 fields is allowed only if
the values of those fields are consistent with the Arm ARM.
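
As a sketch of the rules being enforced, with the ID fields reduced to
plain integers (illustrative only, not part of the patch):

  #include <stdbool.h>

  static bool isar0_crypto_consistent(unsigned int sm3, unsigned int sm4,
                                      unsigned int sha1, unsigned int sha2,
                                      unsigned int sha3)
  {
          if (sm3 != sm4)                     /* SM3 and SM4 must match */
                  return false;
          if ((sha1 == 0) != (sha2 == 0))     /* SHA1/SHA2: both or neither */
                  return false;
          /* SHA512 (SHA2 == 2) and SHA3 (== 1) must come together, and
           * SHA3 requires SHA1. */
          if (((sha2 == 2) != (sha3 == 1)) || (!sha1 && sha3))
                  return false;
          return true;
  }

E.g. plain SHA1+SHA256 (0, 0, 1, 1, 0) is consistent, while SHA512
without SHA3 (0, 0, 1, 2, 0) is not.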

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c3537cd4fe58..c01038cbdb31 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -425,6 +425,29 @@ static int validate_id_aa64pfr1_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static int validate_id_aa64isar0_el1(struct kvm_vcpu *vcpu,
+				     const struct id_reg_desc *id_reg, u64 val)
+{
+	unsigned int sm3, sm4, sha1, sha2, sha3;
+
+	/* Run consistency checks according to the Arm ARM */
+	sm3 = cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR0_SM3_SHIFT);
+	sm4 = cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR0_SM4_SHIFT);
+	if (sm3 != sm4)
+		return -EINVAL;
+
+	sha1 = cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR0_SHA1_SHIFT);
+	sha2 = cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR0_SHA2_SHIFT);
+	if ((sha1 == 0) ^ (sha2 == 0))
+		return -EINVAL;
+
+	sha3 = cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR0_SHA3_SHIFT);
+	if (((sha2 == 2) ^ (sha3 == 1)) || (!sha1 && sha3))
+		return -EINVAL;
+
+	return 0;
+}
+
 static void init_id_aa64pfr0_el1_desc(struct id_reg_desc *id_reg)
 {
 	u64 limit = id_reg->vcpu_limit_val;
@@ -3256,6 +3279,11 @@ static struct id_reg_desc id_aa64pfr1_el1_desc = {
 	.vcpu_mask = vcpu_mask_id_aa64pfr1_el1,
 };
 
+static struct id_reg_desc id_aa64isar0_el1_desc = {
+	.reg_desc = ID_SANITISED(ID_AA64ISAR0_EL1),
+	.validate = validate_id_aa64isar0_el1,
+};
+
 #define ID_DESC(id_reg_name, id_reg_desc)	\
 	[IDREG_IDX(SYS_##id_reg_name)] = (id_reg_desc)
 
@@ -3264,6 +3292,9 @@ static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {
 	/* CRm=4 */
 	ID_DESC(ID_AA64PFR0_EL1, &id_aa64pfr0_el1_desc),
 	ID_DESC(ID_AA64PFR1_EL1, &id_aa64pfr1_el1_desc),
+
+	/* CRm=6 */
+	ID_DESC(ID_AA64ISAR0_EL1, &id_aa64isar0_el1_desc),
 };
 
 static inline struct id_reg_desc *get_id_reg_desc(u32 id)
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 09/38] KVM: arm64: Make ID_AA64ISAR1_EL1 writable
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add an id_reg_desc for ID_AA64ISAR1_EL1 to make the register
writable by userspace.

Return an error if userspace tries to set the PTRAUTH related fields
of the register to values that conflict with the guest's PTRAUTH
configuration, which was set up by KVM_ARM_VCPU_INIT.
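
A compact sketch of the invariants enforced below, with the extracted
fields reduced to booleans (the helper and its return values are
illustrative only):

  #include <stdbool.h>

  static int check_isar1_ptrauth(bool has_apa, bool has_api,
                                 bool has_gpa, bool has_gpi,
                                 bool lim_address, bool lim_generic,
                                 bool vcpu_ptrauth)
  {
          /* APA/API (and GPA/GPI) are mutually exclusive per the Arm ARM. */
          if ((has_gpi && has_gpa) || (has_api && has_apa))
                  return -1;          /* -EINVAL in the real code */

          /* The fields must agree with the KVM_ARM_VCPU_INIT configuration,
           * but only where the host limit allows the feature at all. */
          if (lim_generic && (vcpu_ptrauth != (has_gpi || has_gpa)))
                  return -2;          /* -EPERM in the real code */
          if (lim_address && (vcpu_ptrauth != (has_api || has_apa)))
                  return -2;
          return 0;
  }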

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 90 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 83 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c01038cbdb31..dd4dcc1e4982 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -271,6 +271,24 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 		return read_zero(vcpu, p);
 }
 
+#define ISAR1_TRAUTH_MASK	(ARM64_FEATURE_MASK(ID_AA64ISAR1_APA) |	\
+				 ARM64_FEATURE_MASK(ID_AA64ISAR1_API) | \
+				 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA) |	\
+				 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI))
+
+#define aa64isar1_has_apa(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_APA_SHIFT) >= \
+	 ID_AA64ISAR1_APA_ARCHITECTED)
+#define aa64isar1_has_api(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_API_SHIFT) >= \
+	 ID_AA64ISAR1_API_IMP_DEF)
+#define aa64isar1_has_gpa(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_GPA_SHIFT) >= \
+	 ID_AA64ISAR1_GPA_ARCHITECTED)
+#define aa64isar1_has_gpi(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_GPI_SHIFT) >= \
+	 ID_AA64ISAR1_GPI_IMP_DEF)
+
 #define __FTR_BITS(ftr_sign, ftr_type, bit_pos, safe) {		\
 	.sign = ftr_sign,					\
 	.type = ftr_type,					\
@@ -448,6 +466,47 @@ static int validate_id_aa64isar0_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static int validate_id_aa64isar1_el1(struct kvm_vcpu *vcpu,
+				     const struct id_reg_desc *id_reg, u64 val)
+{
+	bool has_gpi, has_gpa, has_api, has_apa;
+	bool generic, address, lim_generic, lim_address;
+	u64 lim = id_reg->vcpu_limit_val;
+
+	has_gpi = aa64isar1_has_gpi(val);
+	has_gpa = aa64isar1_has_gpa(val);
+	has_api = aa64isar1_has_api(val);
+	has_apa = aa64isar1_has_apa(val);
+	if ((has_gpi && has_gpa) || (has_api && has_apa))
+		return -EINVAL;
+
+	generic = has_gpi || has_gpa;
+	address = has_api || has_apa;
+	lim_generic = aa64isar1_has_gpi(lim) || aa64isar1_has_gpa(lim);
+	lim_address = aa64isar1_has_api(lim) || aa64isar1_has_apa(lim);
+
+	/*
+	 * When PTRAUTH is configured for the vCPU via KVM_ARM_VCPU_INIT,
+	 * it should mean that userspace wants to expose
+	 * one of ID_AA64ISAR1_EL1.GPI, GPA or ID_AA64ISAR2_EL1.GPA3 and
+	 * one of ID_AA64ISAR1_EL1.API, APA or ID_AA64ISAR2_EL1.APA3 to
+	 * the guest (as per the Arm ARM, for generic code authentication
+	 * and address authentication, only one of those fields can be
+	 * non-zero).
+	 * Check if there is a conflict in the requested value for
+	 * ID_AA64ISAR1_EL1 with PTRAUTH configuration.
+	 * (When lim_generic/lim_address is 0, generic/address must also
+	 *  be 0, which is checked by arm64_check_features().)
+	 */
+	if (lim_generic && (vcpu_has_ptrauth(vcpu) ^ generic))
+		return -EPERM;
+
+	if (lim_address && (vcpu_has_ptrauth(vcpu) ^ address))
+		return -EPERM;
+
+	return 0;
+}
+
 static void init_id_aa64pfr0_el1_desc(struct id_reg_desc *id_reg)
 {
 	u64 limit = id_reg->vcpu_limit_val;
@@ -485,6 +544,12 @@ static void init_id_aa64pfr1_el1_desc(struct id_reg_desc *id_reg)
 		id_reg->vcpu_limit_val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_MTE);
 }
 
+static void init_id_aa64isar1_el1_desc(struct id_reg_desc *id_reg)
+{
+	if (!system_has_full_ptr_auth())
+		id_reg->vcpu_limit_val &= ~ISAR1_TRAUTH_MASK;
+}
+
 static u64 vcpu_mask_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu,
 					 const struct id_reg_desc *idr)
 {
@@ -497,6 +562,12 @@ static u64 vcpu_mask_id_aa64pfr1_el1(const struct kvm_vcpu *vcpu,
 	return kvm_has_mte(vcpu->kvm) ? 0 : (ARM64_FEATURE_MASK(ID_AA64PFR1_MTE));
 }
 
+static u64 vcpu_mask_id_aa64isar1_el1(const struct kvm_vcpu *vcpu,
+					  const struct id_reg_desc *idr)
+{
+	return vcpu_has_ptrauth(vcpu) ? 0 : ISAR1_TRAUTH_MASK;
+}
+
 static int validate_id_reg(struct kvm_vcpu *vcpu,
 			   const struct id_reg_desc *id_reg, u64 val)
 {
@@ -1473,13 +1544,6 @@ static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id)
 
 	val = read_kvm_id_reg(vcpu->kvm, id);
 	switch (id) {
-	case SYS_ID_AA64ISAR1_EL1:
-		if (!vcpu_has_ptrauth(vcpu))
-			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_APA) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_API) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPA) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_GPI));
-		break;
 	case SYS_ID_AA64ISAR2_EL1:
 		if (!vcpu_has_ptrauth(vcpu))
 			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_APA3) |
@@ -3284,6 +3348,17 @@ static struct id_reg_desc id_aa64isar0_el1_desc = {
 	.validate = validate_id_aa64isar0_el1,
 };
 
+static struct id_reg_desc id_aa64isar1_el1_desc = {
+	.reg_desc = ID_SANITISED(ID_AA64ISAR1_EL1),
+	.init = init_id_aa64isar1_el1_desc,
+	.validate = validate_id_aa64isar1_el1,
+	.vcpu_mask = vcpu_mask_id_aa64isar1_el1,
+	.ftr_bits = {
+		U_FTR_BITS(FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 0),
+		U_FTR_BITS(FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 0),
+	},
+};
+
 #define ID_DESC(id_reg_name, id_reg_desc)	\
 	[IDREG_IDX(SYS_##id_reg_name)] = (id_reg_desc)
 
@@ -3295,6 +3370,7 @@ static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {
 
 	/* CRm=6 */
 	ID_DESC(ID_AA64ISAR0_EL1, &id_aa64isar0_el1_desc),
+	ID_DESC(ID_AA64ISAR1_EL1, &id_aa64isar1_el1_desc),
 };
 
 static inline struct id_reg_desc *get_id_reg_desc(u32 id)
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 10/38] KVM: arm64: Make ID_AA64ISAR2_EL1 writable
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add an id_reg_desc for ID_AA64ISAR2_EL1 to make the register
writable by userspace.

Return an error if userspace tries to set the PTRAUTH related fields
of the register to values that conflict with the guest's PTRAUTH
configuration, which was set up by KVM_ARM_VCPU_INIT.
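
The same idea in miniature for ID_AA64ISAR2_EL1, with booleans standing
in for the extracted APA3/GPA3 fields (illustrative only):

  #include <stdbool.h>

  static int check_isar2_ptrauth(bool has_apa3, bool has_gpa3,
                                 bool lim_apa3, bool lim_gpa3,
                                 bool vcpu_ptrauth)
  {
          /* Where the host limit advertises a field, the requested value
           * must match the vCPU's PTRAUTH configuration. */
          if (lim_gpa3 && (vcpu_ptrauth != has_gpa3))
                  return -1;          /* -EPERM in the real code */
          if (lim_apa3 && (vcpu_ptrauth != has_apa3))
                  return -1;
          return 0;
  }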

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 65 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 60 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index dd4dcc1e4982..ba2e6dac7774 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -289,6 +289,16 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR1_GPI_SHIFT) >= \
 	 ID_AA64ISAR1_GPI_IMP_DEF)
 
+#define ISAR2_PTRAUTH_MASK	(ARM64_FEATURE_MASK(ID_AA64ISAR2_APA3) | \
+				 ARM64_FEATURE_MASK(ID_AA64ISAR2_GPA3))
+
+#define aa64isar2_has_apa3(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR2_APA3_SHIFT) >= \
+	 ID_AA64ISAR2_APA3_ARCHITECTED)
+#define aa64isar2_has_gpa3(val)	\
+	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR2_GPA3_SHIFT) >= \
+	 ID_AA64ISAR2_GPA3_ARCHITECTED)
+
 #define __FTR_BITS(ftr_sign, ftr_type, bit_pos, safe) {		\
 	.sign = ftr_sign,					\
 	.type = ftr_type,					\
@@ -507,6 +517,31 @@ static int validate_id_aa64isar1_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static int validate_id_aa64isar2_el1(struct kvm_vcpu *vcpu,
+				     const struct id_reg_desc *id_reg, u64 val)
+{
+	bool has_gpa3, has_apa3, lim_has_gpa3, lim_has_apa3;
+	u64 lim = id_reg->vcpu_limit_val;
+
+	has_gpa3 = aa64isar2_has_gpa3(val);
+	has_apa3 = aa64isar2_has_apa3(val);
+	lim_has_gpa3 = aa64isar2_has_gpa3(lim);
+	lim_has_apa3 = aa64isar2_has_apa3(lim);
+
+	/*
+	 * Check if there is a conflict in the requested value for
+	 * ID_AA64ISAR2_EL1 with PTRAUTH configuration.
+	 * See comments in validate_id_aa64isar1_el1() for more detail.
+	 */
+	if (lim_has_gpa3 && (vcpu_has_ptrauth(vcpu) ^ has_gpa3))
+		return -EPERM;
+
+	if (lim_has_apa3 && (vcpu_has_ptrauth(vcpu) ^ has_apa3))
+		return -EPERM;
+
+	return 0;
+}
+
 static void init_id_aa64pfr0_el1_desc(struct id_reg_desc *id_reg)
 {
 	u64 limit = id_reg->vcpu_limit_val;
@@ -550,6 +585,13 @@ static void init_id_aa64isar1_el1_desc(struct id_reg_desc *id_reg)
 		id_reg->vcpu_limit_val &= ~ISAR1_TRAUTH_MASK;
 }
 
+static void init_id_aa64isar2_el1_desc(struct id_reg_desc *id_reg)
+{
+	if (!system_has_full_ptr_auth())
+		id_reg->vcpu_limit_val &= ~ISAR2_PTRAUTH_MASK;
+}
+
+
 static u64 vcpu_mask_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu,
 					 const struct id_reg_desc *idr)
 {
@@ -568,6 +610,13 @@ static u64 vcpu_mask_id_aa64isar1_el1(const struct kvm_vcpu *vcpu,
 	return vcpu_has_ptrauth(vcpu) ? 0 : ISAR1_TRAUTH_MASK;
 }
 
+static u64 vcpu_mask_id_aa64isar2_el1(const struct kvm_vcpu *vcpu,
+					  const struct id_reg_desc *idr)
+{
+	return vcpu_has_ptrauth(vcpu) ? 0 : ISAR2_PTRAUTH_MASK;
+}
+
+
 static int validate_id_reg(struct kvm_vcpu *vcpu,
 			   const struct id_reg_desc *id_reg, u64 val)
 {
@@ -1544,11 +1593,6 @@ static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id)
 
 	val = read_kvm_id_reg(vcpu->kvm, id);
 	switch (id) {
-	case SYS_ID_AA64ISAR2_EL1:
-		if (!vcpu_has_ptrauth(vcpu))
-			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_APA3) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR2_GPA3));
-		break;
 	case SYS_ID_AA64DFR0_EL1:
 		/* Limit debug to ARMv8.0 */
 		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER);
@@ -3359,6 +3403,16 @@ static struct id_reg_desc id_aa64isar1_el1_desc = {
 	},
 };
 
+static struct id_reg_desc id_aa64isar2_el1_desc = {
+	.reg_desc = ID_SANITISED(ID_AA64ISAR2_EL1),
+	.init = init_id_aa64isar2_el1_desc,
+	.validate = validate_id_aa64isar2_el1,
+	.vcpu_mask = vcpu_mask_id_aa64isar2_el1,
+	.ftr_bits = {
+		U_FTR_BITS(FTR_EXACT, ID_AA64ISAR2_APA3_SHIFT, 0),
+	},
+};
+
 #define ID_DESC(id_reg_name, id_reg_desc)	\
 	[IDREG_IDX(SYS_##id_reg_name)] = (id_reg_desc)
 
@@ -3371,6 +3425,7 @@ static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {
 	/* CRm=6 */
 	ID_DESC(ID_AA64ISAR0_EL1, &id_aa64isar0_el1_desc),
 	ID_DESC(ID_AA64ISAR1_EL1, &id_aa64isar1_el1_desc),
+	ID_DESC(ID_AA64ISAR2_EL1, &id_aa64isar2_el1_desc),
 };
 
 static inline struct id_reg_desc *get_id_reg_desc(u32 id)
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 11/38] KVM: arm64: Make ID_AA64MMFR0_EL1 writable
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add an id_reg_desc for ID_AA64MMFR0_EL1 to make the register
writable by userspace.

Since the ID_AA64MMFR0_EL1 stage 2 granule size fields don't follow
the standard ID scheme, special handling is needed to validate those
fields.
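
As a sketch of the non-standard encoding being handled: TGranX_2 == 0
defers to the stage1 TGranX field, 1 means the stage2 granule is not
supported, and 2 means it is supported. Rebasing a stage1 value onto
that scale looks roughly like this (illustrative helper only):

  #include <stdbool.h>

  /* Rebase a stage1 TGranX value onto the TGranX_2 scale
   * (1 = not supported, 2 = supported) so the two can be compared. */
  static long rebase_tgran1(long tgran1, bool is_signed)
  {
          /* signed TGran4/TGran64: -1 = no, 0 = yes;
           * unsigned TGran16: 0 = no, 1 = yes */
          return tgran1 + (is_signed ? 2 : 1);
  }

E.g. a guest value of TGran16_2 == 0 with TGran16 == 1 rebases to 2,
which passes against a host limit of TGran16_2 == 2.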

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 133 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 133 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ba2e6dac7774..b68ae53af792 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -542,6 +542,118 @@ static int validate_id_aa64isar2_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+/*
+ * Check that the requested stage2 translation granule size indicated in
+ * @mmfr0 is also indicated in @mmfr0_lim.
+ * If a TGranX_2 field is zero, the value must be validated based on the
+ * corresponding TGranX field, because a zero TGranX_2 means the feature
+ * support is identified by TGranX.
+ * This function relies on the fact that the TGranX fields have already
+ * been validated through arm64_check_features().
+ */
+static int aa64mmfr0_tgran2_check(int field, u64 mmfr0, u64 mmfr0_lim)
+{
+	s64 tgran2, lim_tgran2, rtgran1;
+	int f1;
+	bool is_signed;
+
+	tgran2 = cpuid_feature_extract_unsigned_field(mmfr0, field);
+	lim_tgran2 = cpuid_feature_extract_unsigned_field(mmfr0_lim, field);
+	if (tgran2 && lim_tgran2)
+		/*
+		 * We don't need to check TGranX field. We can simply
+		 * compare tgran2 and lim_tgran2.
+		 */
+		return (tgran2 > lim_tgran2) ? -E2BIG : 0;
+
+	if (tgran2 == lim_tgran2)
+		/*
+		 * Both of them are zero.  Since TGranX in @mmfr0 is already
+		 * validated by arm64_check_features, tgran2 must be fine.
+		 */
+		return 0;
+
+	/*
+	 * Either tgran2 or lim_tgran2 is zero.
+	 * Need stage1 granule size to validate tgran2.
+	 */
+
+	/*
+	 * Get TGranX's bit position by subtracting 12 from TGranX_2's bit
+	 * position.
+	 */
+	f1 = field - 12;
+
+	/* TGran4/TGran64 are signed fields and TGran16 is unsigned. */
+	is_signed = (f1 != ID_AA64MMFR0_TGRAN16_SHIFT);
+
+	/*
+	 * If tgran2 == 0 (&& lim_tgran2 != 0), the requested stage2 granule
+	 * size is indicated in the stage1 granule size field of @mmfr0.
+	 * So, validate the stage1 granule size against the stage2 limit
+	 * granule size.
+	 * If lim_tgran2 == 0 (&& tgran2 != 0), the stage2 limit granule size
+	 * is indicated in the stage1 granule size field of @mmfr0_lim.
+	 * So, validate the requested stage2 granule size against the stage1
+	 * limit granule size.
+	 */
+
+	 /* Get the relevant stage1 granule size to validate tgran2 */
+	if (tgran2 == 0)
+		/* The requested stage1 granule size */
+		rtgran1 = cpuid_feature_extract_field(mmfr0, f1, is_signed);
+	else /* lim_tgran2 == 0 */
+		/* The stage1 limit granule size */
+		rtgran1 = cpuid_feature_extract_field(mmfr0_lim, f1, is_signed);
+
+	/*
+	 * Adjust the value of rtgran1 to compare with stage2 granule size,
+	 * which indicates: 1: Not supported, 2: Supported, etc.
+	 */
+	if (is_signed)
+		/* For signed, -1: Not supported, 0: Supported, etc. */
+		rtgran1 += 0x2;
+	else
+		/* For unsigned, 0: Not supported, 1: Supported, etc. */
+		rtgran1 += 0x1;
+
+	if ((tgran2 == 0) && (rtgran1 > lim_tgran2))
+		/*
+		 * The requested stage1 granule size (== the requested stage2
+		 * granule size) is larger than the stage2 limit granule size.
+		 */
+		return -E2BIG;
+	else if ((lim_tgran2 == 0) && (tgran2 > rtgran1))
+		/*
+		 * The requested stage2 granule size is larger than the stage1
+		 * limit granule size (== the stage2 limit granule size).
+		 */
+		return -E2BIG;
+
+	return 0;
+}
+
+static int validate_id_aa64mmfr0_el1(struct kvm_vcpu *vcpu,
+				     const struct id_reg_desc *id_reg, u64 val)
+{
+	u64 limit = id_reg->vcpu_limit_val;
+	int ret;
+
+	ret = aa64mmfr0_tgran2_check(ID_AA64MMFR0_TGRAN4_2_SHIFT, val, limit);
+	if (ret)
+		return ret;
+
+	ret = aa64mmfr0_tgran2_check(ID_AA64MMFR0_TGRAN64_2_SHIFT, val, limit);
+	if (ret)
+		return ret;
+
+	ret = aa64mmfr0_tgran2_check(ID_AA64MMFR0_TGRAN16_2_SHIFT, val, limit);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
 static void init_id_aa64pfr0_el1_desc(struct id_reg_desc *id_reg)
 {
 	u64 limit = id_reg->vcpu_limit_val;
@@ -3413,6 +3525,24 @@ static struct id_reg_desc id_aa64isar2_el1_desc = {
 	},
 };
 
+static struct id_reg_desc id_aa64mmfr0_el1_desc = {
+	.reg_desc = ID_SANITISED(ID_AA64MMFR0_EL1),
+	/*
+	 * When a TGranX_2 value is 0, its validity depends on the TGranX
+	 * value, so the TGranX_2 value must be validated against the TGranX
+	 * which is done by validate_id_aa64mmfr0_el1.
+	 * So, skip the regular validity checking for TGranX_2 fields.
+	 */
+	.ignore_mask = ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN4_2) |
+		       ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN64_2) |
+		       ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN16_2),
+	.validate = validate_id_aa64mmfr0_el1,
+	.ftr_bits = {
+		S_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN64_SHIFT, ID_AA64MMFR0_TGRAN64_NI),
+		S_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_TGRAN4_SHIFT, ID_AA64MMFR0_TGRAN4_NI),
+	},
+};
+
 #define ID_DESC(id_reg_name, id_reg_desc)	\
 	[IDREG_IDX(SYS_##id_reg_name)] = (id_reg_desc)
 
@@ -3426,6 +3556,9 @@ static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {
 	ID_DESC(ID_AA64ISAR0_EL1, &id_aa64isar0_el1_desc),
 	ID_DESC(ID_AA64ISAR1_EL1, &id_aa64isar1_el1_desc),
 	ID_DESC(ID_AA64ISAR2_EL1, &id_aa64isar2_el1_desc),
+
+	/* CRm=7 */
+	ID_DESC(ID_AA64MMFR0_EL1, &id_aa64mmfr0_el1_desc),
 };
 
 static inline struct id_reg_desc *get_id_reg_desc(u32 id)
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread
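
A side note on the encoding conversion inside aa64mmfr0_tgran2_check()
above: the TGranX_2 fields use 1 = not supported / 2 = supported, while
the stage 1 fields use -1/0 for the signed TGran4/TGran64 and 0/1 for
the unsigned TGran16.  A minimal sketch of that normalization (plain C,
encodings as stated in the patch comments):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Normalize a stage 1 granule value to the TGranX_2 encoding. */
static int64_t stage1_to_tgran2(int64_t tgran1, bool is_signed)
{
	/* signed:    -1 = not supported, 0 = supported, ... */
	/* unsigned:   0 = not supported, 1 = supported, ... */
	/* TGranX_2:   1 = not supported, 2 = supported, ... */
	return tgran1 + (is_signed ? 2 : 1);
}

int main(void)
{
	/* Both map to 1, i.e. "not supported" in the TGranX_2 encoding. */
	printf("%lld %lld\n",
	       (long long)stage1_to_tgran2(-1, true),
	       (long long)stage1_to_tgran2(0, false));
	return 0;
}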

* [PATCH v7 12/38] KVM: arm64: Add a KVM flag indicating emulating debug regs access is needed
  2022-04-19  6:55 ` Reiji Watanabe
  (?)
@ 2022-04-19  6:55   ` Reiji Watanabe
  -1 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Highest numbered breakpoints must be context aware breakpoints
(as specified by the Arm ARM).  If the number of non-context aware
breakpoints for the guest is decreased by userspace, simply narrowing
the breakpoints will be problematic because it will lead to
narrowing context aware breakpoints for the guest.

Introduce KVM_ARCH_FLAG_EMULATE_DEBUG_REGS for kvm->arch.flags to
indicate trapping debug reg access is needed, and enable the trapping
when the flag is set.  Set the new flag at the first KVM_RUN if the
number of non-context aware breakpoints for the guest is decreased
by userspace.

No code sets the new flag yet since ID_AA64DFR0_EL1 is not configurable
by userspace.
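
For reference, the quantity being compared is simply BRPs - CTX_CMPs,
since both ID_AA64DFR0_EL1 fields encode "count minus 1".  A small
stand-alone sketch of the check (not the kernel code; the example field
values are made up):

#include <stdio.h>

/* Both ID_AA64DFR0_EL1 fields encode "count minus 1". */
static unsigned int nr_normal_bps(unsigned int brps, unsigned int ctx_cmps)
{
	/* (brps + 1) total breakpoints - (ctx_cmps + 1) context aware */
	return brps - ctx_cmps;
}

int main(void)
{
	/* Hypothetical host: 6 breakpoints, 2 of them context aware. */
	unsigned int host = nr_normal_bps(5, 1);
	/* Guest: userspace lowered BRPs to 4 (5 bps), kept CTX_CMPs. */
	unsigned int guest = nr_normal_bps(4, 1);

	if (guest < host)
		printf("guest normal bps %u < host %u: set the emulate flag\n",
		       guest, host);
	return 0;
}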

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/kvm_host.h |  3 +++
 arch/arm64/kvm/debug.c            |  7 ++++++-
 arch/arm64/kvm/sys_regs.c         | 35 +++++++++++++++++++++++++++++++
 3 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a43fddd58e68..dbed94e759a8 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -136,6 +136,8 @@ struct kvm_arch {
 	 */
 #define KVM_ARCH_FLAG_REG_WIDTH_CONFIGURED		3
 #define KVM_ARCH_FLAG_EL1_32BIT				4
+	/* Do accesses to debug registers need to be emulated? */
+#define KVM_ARCH_FLAG_EMULATE_DEBUG_REGS		5
 
 	unsigned long flags;
 
@@ -786,6 +788,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 
 void set_default_id_regs(struct kvm *kvm);
 int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval);
+void kvm_vcpu_breakpoint_config(struct kvm_vcpu *vcpu);
 
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 4fd5c216c4bb..6eb146d908f8 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -106,10 +106,14 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 	 *  (KVM_GUESTDBG_USE_HW is set).
 	 *  - The guest is not using debug (KVM_ARM64_DEBUG_DIRTY is clear).
 	 *  - The guest has enabled the OS Lock (debug exceptions are blocked).
+	 *  - The guest's access to debug registers needs to be emulated
+	 *    (the number of non-context aware breakpoints for the guest
+	 *     is decreased by userspace).
 	 */
 	if ((vcpu->guest_debug & KVM_GUESTDBG_USE_HW) ||
 	    !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) ||
-	    kvm_vcpu_os_lock_enabled(vcpu))
+	    kvm_vcpu_os_lock_enabled(vcpu) ||
+	    test_bit(KVM_ARCH_FLAG_EMULATE_DEBUG_REGS, &vcpu->kvm->arch.flags))
 		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
 
 	trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2);
@@ -124,6 +128,7 @@ static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
  */
 void kvm_arm_vcpu_init_debug(struct kvm_vcpu *vcpu)
 {
+	kvm_vcpu_breakpoint_config(vcpu);
 	preempt_disable();
 	kvm_arm_setup_mdcr_el2(vcpu);
 	preempt_enable();
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index b68ae53af792..f4aae4ccffd0 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -844,6 +844,41 @@ static bool trap_dbgauthstatus_el1(struct kvm_vcpu *vcpu,
 	}
 }
 
+#define AA64DFR0_BRPS(v)	\
+	((u8)cpuid_feature_extract_unsigned_field(v, ID_AA64DFR0_BRPS_SHIFT))
+#define AA64DFR0_CTX_CMPS(v)	\
+	((u8)cpuid_feature_extract_unsigned_field(v, ID_AA64DFR0_CTX_CMPS_SHIFT))
+
+/*
+ * Set KVM_ARCH_FLAG_EMULATE_DEBUG_REGS in the VM flags when the number of
+ * non-context aware breakpoints for the guest is decreased by userspace
+ * (meaning that debug register accesses need to be emulated).
+ */
+void kvm_vcpu_breakpoint_config(struct kvm_vcpu *vcpu)
+{
+	u64 p_val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
+	u64 v_val = read_id_reg_with_encoding(vcpu, SYS_ID_AA64DFR0_EL1);
+	u8 v_nbpn, p_nbpn;
+	struct kvm *kvm = vcpu->kvm;
+
+	/*
+	 * Check the number of normal (non-context aware) breakpoints
+	 * for the guest and the host.
+	 */
+	v_nbpn = AA64DFR0_BRPS(v_val) - AA64DFR0_CTX_CMPS(v_val);
+	p_nbpn = AA64DFR0_BRPS(p_val) - AA64DFR0_CTX_CMPS(p_val);
+	if (v_nbpn >= p_nbpn)
+		/*
+		 * Nothing to do if the number of normal breakpoints for the
+		 * guest is not decreased by userspace (meaning KVM doesn't
+		 * need to emulate an access of debug registers).
+		 */
+		return;
+
+	if (!test_bit(KVM_ARCH_FLAG_EMULATE_DEBUG_REGS, &kvm->arch.flags))
+		set_bit(KVM_ARCH_FLAG_EMULATE_DEBUG_REGS, &kvm->arch.flags);
+}
+
 /*
  * We want to avoid world-switching all the DBG registers all the
  * time:
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 13/38] KVM: arm64: Emulate dbgbcr/dbgbvr accesses
  2022-04-19  6:55 ` Reiji Watanabe
  (?)
@ 2022-04-19  6:55   ` Reiji Watanabe
  -1 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Highest numbered breakpoints must be context aware breakpoints
(as specified by the Arm ARM).  If the number of non-context aware
breakpoints for the guest is decreased by userspace (e.g. lowering
ID_AA64DFR0.BRPs while keeping ID_AA64DFR0.CTX_CMPs the same), simply
narrowing the breakpoints will be problematic because it will
lead to narrowing context aware breakpoints for the guest.

Emulate dbgbcr/dbgbvr accesses in that case and map context
aware breakpoints for the vCPU to different numbered breakpoints
for the pCPU, while maintaining the offset within the context
aware breakpoints.
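
To make the remapping concrete, here is a stand-alone sketch of the
virtual-to-physical breakpoint number mapping; it mirrors the
virt_to_phys_bpn() logic added below, with made-up field values taken
from the 5-breakpoint/4-breakpoint example in the code comments:

#include <stdio.h>

/* ID_AA64DFR0_EL1 field values; both encode "count minus 1". */
struct dfr0 {
	unsigned int brps;
	unsigned int ctx_cmps;
};

/* Map a guest (virtual) breakpoint# to a host (physical) one. */
static int virt_to_phys_bpn(struct dfr0 virt, struct dfr0 phys,
			    unsigned int v_bpn)
{
	unsigned int v_base = virt.brps - virt.ctx_cmps; /* lowest virt ctx-aware bpn */
	unsigned int p_base = phys.brps - phys.ctx_cmps; /* lowest phys ctx-aware bpn */

	if (v_bpn > virt.brps)
		return -1;			/* out of range for the guest */
	if (v_bpn < v_base)
		return v_bpn;			/* normal bps map 1:1 */
	return p_base + (v_bpn - v_base);	/* keep the ctx-aware offset */
}

int main(void)
{
	struct dfr0 phys = { .brps = 4, .ctx_cmps = 1 }; /* 5 bps, 2 ctx aware */
	struct dfr0 virt = { .brps = 3, .ctx_cmps = 1 }; /* 4 bps, 2 ctx aware */
	unsigned int v;

	/* Prints: #0->0, #1->1, #2->3, #3->4, matching the [Example] below. */
	for (v = 0; v <= virt.brps; v++)
		printf("virtual #%u -> physical #%d\n",
		       v, virt_to_phys_bpn(virt, phys, v));
	return 0;
}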

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/sysreg.h |   9 +-
 arch/arm64/kvm/sys_regs.c       | 402 ++++++++++++++++++++++++++++++--
 2 files changed, 394 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index b33b7ce87fb2..9b475ba95ffd 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -124,9 +124,16 @@
 #define SYS_OSDTRTX_EL1			sys_reg(2, 0, 0, 3, 2)
 #define SYS_OSECCR_EL1			sys_reg(2, 0, 0, 6, 2)
 #define SYS_DBGBVRn_EL1(n)		sys_reg(2, 0, 0, n, 4)
-#define SYS_DBGBCRn_EL1(n)		sys_reg(2, 0, 0, n, 5)
 #define SYS_DBGWVRn_EL1(n)		sys_reg(2, 0, 0, n, 6)
+
+#define SYS_DBGBCRn_EL1(n)		sys_reg(2, 0, 0, n, 5)
+#define SYS_DBGBCR_EL1_LBN_SHIFT	16
+#define SYS_DBGBCR_EL1_LBN_MASK		GENMASK(3, 0)
+
 #define SYS_DBGWCRn_EL1(n)		sys_reg(2, 0, 0, n, 7)
+#define SYS_DBGWCR_EL1_LBN_SHIFT	16
+#define SYS_DBGWCR_EL1_LBN_MASK		GENMASK(3, 0)
+
 #define SYS_MDRAR_EL1			sys_reg(2, 0, 1, 0, 0)
 
 #define SYS_OSLAR_EL1			sys_reg(2, 0, 1, 0, 4)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f4aae4ccffd0..2ee1e0b6c4ce 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -849,17 +849,230 @@ static bool trap_dbgauthstatus_el1(struct kvm_vcpu *vcpu,
 #define AA64DFR0_CTX_CMPS(v)	\
 	((u8)cpuid_feature_extract_unsigned_field(v, ID_AA64DFR0_CTX_CMPS_SHIFT))
 
+#define INVALID_BRPN	((u8)-1)
+
+static u8 get_bcr_lbn(u64 val)
+{
+	return ((val >> SYS_DBGBCR_EL1_LBN_SHIFT) & SYS_DBGBCR_EL1_LBN_MASK);
+}
+
+static u64 update_bcr_lbn(u64 val, u8 lbn)
+{
+	u64 new;
+
+	new = val & ~(SYS_DBGBCR_EL1_LBN_MASK << SYS_DBGBCR_EL1_LBN_SHIFT);
+	new |= ((u64)lbn & SYS_DBGBCR_EL1_LBN_MASK) << SYS_DBGBCR_EL1_LBN_SHIFT;
+	return new;
+}
+
+/*
+ * KVM will emulate breakpoint accesses when the number of non-context
+ * aware (normal) breakpoints is decreased for the guest. For instance,
+ * this will happen when userspace decreases the number of breakpoints
+ * for the guest while keeping the same number of context aware breakpoints.
+ * Simply narrowing the number of breakpoints for the guest will lead
+ * to narrowing context aware breakpoints for the guest because as per
+ * the Arm ARM, highest numbered breakpoints are context aware breakpoints.
+ * So, in that case, KVM will map context aware breakpoints for the
+ * vCPU to different numbered breakpoints for the pCPU, but will
+ * maintain the offset in context aware breakpoints.
+ * For instance, if 5 breakpoints are supported, and 2 of them are
+ * context aware breakpoints, breakpoint#0, #1 and #2 are normal
+ * breakpoints, and #3 and #4 are context aware breakpoints.
+ * If userspace decreases the number of breakpoints to 4 keeping the
+ * same number of context aware breakpoints (== 2), the guest expects
+ * breakpoint#0 and #1 to be normal breakpoints, and #2 and #3 to be
+ * context aware breakpoints. So, KVM will map the (virtual) context
+ * aware breakpoint #2 and #3 for the vCPU to (physical) context aware
+ * breakpoint #3 and #4 for the pCPU as follows.
+ *
+ * [Example]
+ *
+ *           Normal Breakpoints   Context aware breakpoints
+ * Virtual     #0  #1              #2  #3
+ *              |   |               |   |
+ * Physical    #0  #1  #2          #3  #4
+ *
+ * So, dbg{b,w}cr.lbn (linked breakpoint number) for the vCPU might be
+ * different from the ones for the pCPU (e.g. with the above example,
+ * when the guest sets dbgbcr0.lbn to 2 for the vCPU, dbgbcr0.lbn
+ * for the pCPU should be set to 3).
+ * Values in vcpu_debug_state of kvm_vcpu_arch will basically be the ones
+ * that are going to be set to the physical registers (indexed by physical
+ * context breakpoint number). But, they hold the values from the guest
+ * point of view until the first KVM_RUN (when the physical/virtual
+ * breakpoint number mapping is fixed), and they will be converted to the
+ * physical values during the process of first KVM_RUN.
+ *
+ * As there is no functional difference between any watchpoints,
+ * a virtual watchpoint# is always the same as the physical watchpoint#.
+ */
+
+/*
+ * Convert breakpoint# for the guest to breakpoint# for the real hardware.
+ * Return INVALID_BRPN if the given breakpoint# is invalid.
+ */
+static inline u8 virt_to_phys_bpn(struct kvm_vcpu *vcpu, u8 v_bpn)
+{
+	u8 virt_ctx_base, phys_ctx_base;
+	u64 p_val, v_val;
+
+	v_val = read_id_reg_with_encoding(vcpu, SYS_ID_AA64DFR0_EL1);
+	if (v_bpn > AA64DFR0_BRPS(v_val)) {
+		/*
+		 * The virtual bpn is out of valid virtual breakpoint number
+		 * range. Return the invalid breakpoint number.
+		 */
+		return INVALID_BRPN;
+	}
+
+	if (!test_bit(KVM_ARCH_FLAG_EMULATE_DEBUG_REGS, &vcpu->kvm->arch.flags))
+		 /* physical bpn == virtual bpn when no emulation is needed */
+		return v_bpn;
+
+	/* The lowest virtual context aware bpn */
+	virt_ctx_base = AA64DFR0_BRPS(v_val) - AA64DFR0_CTX_CMPS(v_val);
+	if (v_bpn < virt_ctx_base)
+		/*
+		 * physical bpn == virtual bpn when v_bpn is not a
+		 * context aware breakpoint.
+		 */
+		return v_bpn;
+
+	p_val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
+	/* The lowest physical context aware bpn */
+	phys_ctx_base = AA64DFR0_BRPS(p_val) - AA64DFR0_CTX_CMPS(p_val);
+
+	WARN_ON_ONCE(virt_ctx_base >= phys_ctx_base);
+
+	/*
+	 * Context aware bpn.  Map it to the same offset of physical
+	 * context aware registers.
+	 */
+	return phys_ctx_base + (v_bpn - virt_ctx_base);
+}
+
 /*
- * Set KVM_ARCH_FLAG_EMULATE_DEBUG_REGS in the VM flags when the number of
- * non-context aware breakpoints for the guest is decreased by userspace
- * (meaning that debug register accesses need to be emulated).
+ * Convert breakpoint# for the real hardware to breakpoint# for the guest.
+ * Return INVALID_BRPN if the given breakpoint# is not used for the guest.
+ */
+static inline u8 phys_to_virt_bpn(struct kvm_vcpu *vcpu, u8 p_bpn)
+{
+	u8 virt_ctx_base, phys_ctx_base, v_bpn;
+	u64 p_val, v_val;
+
+	if (!test_bit(KVM_ARCH_FLAG_EMULATE_DEBUG_REGS, &vcpu->kvm->arch.flags))
+		return p_bpn;
+
+	v_val = read_id_reg_with_encoding(vcpu, SYS_ID_AA64DFR0_EL1);
+
+	/* The lowest virtual context aware bpn */
+	virt_ctx_base = AA64DFR0_BRPS(v_val) - AA64DFR0_CTX_CMPS(v_val);
+	if (p_bpn < virt_ctx_base)
+		/*
+		 * physical bpn == virtual bpn when p_bpn is smaller than
+		 * the lowest virtual context aware breakpoint number.
+		 */
+		return p_bpn;
+
+	p_val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
+
+	/* The lowest physical context aware bpn */
+	phys_ctx_base = AA64DFR0_BRPS(p_val) - AA64DFR0_CTX_CMPS(p_val);
+	if (p_bpn < phys_ctx_base)
+		/*
+		 * Unused non-context aware breakpoint.
+		 * No virtual breakpoint is assigned for this.
+		 */
+		return INVALID_BRPN;
+
+	WARN_ON_ONCE(virt_ctx_base >= phys_ctx_base);
+
+	/*
+	 * Context aware bpn. Map it to the same offset of virtual
+	 * context aware registers.
+	 */
+	v_bpn = virt_ctx_base + (p_bpn - phys_ctx_base);
+	if (v_bpn > AA64DFR0_BRPS(v_val)) {
+		/* This physical bpn is not mapped to any virtual bpn */
+		return INVALID_BRPN;
+	}
+
+	return v_bpn;
+}
+
+static u8 get_unused_p_bpn(struct kvm_vcpu *vcpu)
+{
+	u64 p_val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
+
+	WARN_ON_ONCE(!test_bit(KVM_ARCH_FLAG_EMULATE_DEBUG_REGS, &vcpu->kvm->arch.flags));
+
+	/*
+	 * The last normal (non-context aware) breakpoint is always unused
+	 * (and disabled) when KVM_ARCH_FLAG_EMULATE_DEBUG_REGS is set.
+	 */
+	return AA64DFR0_BRPS(p_val) - AA64DFR0_CTX_CMPS(p_val) - 1;
+}
+
+/*
+ * virt_to_phys_bcr() converts the virtual bcr value (the bcr value from
+ * the guest point of view) to physical bcr value, which is going to be set
+ * to the real hardware.  More specifically, as the lbn field value of the
+ * virtual bcr includes the virtual breakpoint number, the function will
+ * update the bcr with the physical breakpoint number, and will return it
+ * as the physical bcr value. phys_to_virt_bcr() does the opposite.
+ *
+ * As per the Arm ARM (ARM DDI 0487H.a), if a Linked Address breakpoint links
+ * to a breakpoint that is not implemented or that is not context aware,
+ * then reads of bcr.lbn return an unknown value, and the Linked Address
+ * breakpoint behaves as if it is either disabled or linked to an UNKNOWN
+ * context aware breakpoint. In such cases, KVM will return 0 to reads of
+ * bcr.lbn, and have the breakpoint behave as if it were disabled by
+ * setting the lbn to an unused (disabled) breakpoint.
+ */
+static u64 virt_to_phys_bcr(struct kvm_vcpu *vcpu, u64 v_bcr)
+{
+	u8 v_lbn, p_lbn;
+
+	v_lbn = get_bcr_lbn(v_bcr);
+	p_lbn = virt_to_phys_bpn(vcpu, v_lbn);
+	if (p_lbn == INVALID_BRPN)
+		p_lbn = get_unused_p_bpn(vcpu);
+
+	return update_bcr_lbn(v_bcr, p_lbn);
+}
+
+static u64 phys_to_virt_bcr(struct kvm_vcpu *vcpu, u64 p_bcr)
+{
+	u8 v_lbn, p_lbn;
+
+	p_lbn = get_bcr_lbn(p_bcr);
+	v_lbn = phys_to_virt_bpn(vcpu, p_lbn);
+	if (v_lbn == INVALID_BRPN)
+		v_lbn = 0;
+
+	return update_bcr_lbn(p_bcr, v_lbn);
+}
+
+/*
+ * Check if the number of normal breakpoints for the guest is the same
+ * as the one for the host. If so, do nothing.
+ * Otherwise (accesses to debug registers need to be emulated), set
+ * KVM_ARCH_FLAG_EMULATE_DEBUG_REGS in the VM flags, and convert values
+ * in vcpu->arch.vcpu_debug_state that are values from the guest
+ * point of view to values that are going to be set to hardware
+ * registers. See comments for set_bvr() for some more details.
  */
 void kvm_vcpu_breakpoint_config(struct kvm_vcpu *vcpu)
 {
 	u64 p_val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
 	u64 v_val = read_id_reg_with_encoding(vcpu, SYS_ID_AA64DFR0_EL1);
 	u8 v_nbpn, p_nbpn;
+	u64 p_bcr;
 	struct kvm *kvm = vcpu->kvm;
+	int v;
+	u8 p_bpn;
+	struct kvm_guest_debug_arch *dbg = &vcpu->arch.vcpu_debug_state;
 
 	/*
 	 * Check the number of normal (non-context aware) breakpoints
@@ -877,11 +1090,39 @@ void kvm_vcpu_breakpoint_config(struct kvm_vcpu *vcpu)
 
 	if (!test_bit(KVM_ARCH_FLAG_EMULATE_DEBUG_REGS, &kvm->arch.flags))
 		set_bit(KVM_ARCH_FLAG_EMULATE_DEBUG_REGS, &kvm->arch.flags);
+
+	/*
+	 * Before the first KVM_RUN, vcpu->arch.vcpu_debug_state holds
+	 * values of the registers to be exposed to the guest and their
+	 * positions are indexed by virtual breakpoint numbers.
+	 * Convert the values to physical values that are going to set
+	 * to hardware registers, and move them to positions indexed
+	 * by physical breakpoint numbers.
+	 */
+	for (v = KVM_ARM_MAX_DBG_REGS - 1; v >= 0; v--) {
+		/* Get physical breakpoint number */
+		p_bpn = virt_to_phys_bpn(vcpu, v);
+		WARN_ON_ONCE(p_bpn < v);
+
+		if (p_bpn != INVALID_BRPN) {
+			/* Get physical bcr */
+			p_bcr = virt_to_phys_bcr(vcpu, dbg->dbg_bcr[v]);
+			dbg->dbg_bcr[p_bpn] = p_bcr;
+			dbg->dbg_bvr[p_bpn] = dbg->dbg_bvr[v];
+		}
+
+		/* Clear dbg_b{c,v}r, which might not be used */
+		if (p_bpn != v) {
+			dbg->dbg_bcr[v] = 0;
+			dbg->dbg_bvr[v] = 0;
+		}
+	}
 }
 
 /*
  * We want to avoid world-switching all the DBG registers all the
- * time:
+ * time unless userspace decreases the number of non-context aware
+ * breakpoints, where emulating debug register accesses is required.
  *
  * - If we've touched any debug register, it is likely that we're
  *   going to touch more of them. It then makes sense to disable the
@@ -963,8 +1204,17 @@ static bool trap_bvr(struct kvm_vcpu *vcpu,
 		     struct sys_reg_params *p,
 		     const struct sys_reg_desc *rd)
 {
-	u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm];
+	u8 p_bpn;
+	u64 *dbg_reg;
 
+	/* Convert the virt breakpoint num to phys breakpoint num */
+	p_bpn = virt_to_phys_bpn(vcpu, rd->CRm);
+	if (p_bpn == INVALID_BRPN) {
+		kvm_inject_undefined(vcpu);
+		return false;
+	}
+
+	dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bvr[p_bpn];
 	if (p->is_write)
 		reg_to_dbg(vcpu, p, rd, dbg_reg);
 	else
@@ -975,23 +1225,85 @@ static bool trap_bvr(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+/*
+ * The behaviors of {s,g}et_b{c,v}r change depending on whether they
+ * are called before or after the first KVM_RUN.
+ *
+ * Before the first KVM_RUN (the number of breakpoints is not fixed yet),
+ * the vcpu->arch.vcpu_debug_state holds debug register values from
+ * the guest point of view. The set_b{c,v}r() functions simply save the
+ * value from userspace in vcpu->arch.vcpu_debug_state, and the
+ * get_b{c,v}r() functions simply return that value to userspace.
+ *
+ * At the first KVM_RUN (when the number of breakpoints becomes immutable),
+ * b{c,v}r values in vcpu->arch.vcpu_debug_state are converted to
+ * the values that are going to be set to hardware registers.
+ * After that, vcpu->arch.vcpu_debug_state holds debug register values that
+ * are going to be set to hardware registers.  The set_b{c,v}r functions convert
+ * the value from userspace to the one that will be set to the hardware
+ * register and save the converted value in vcpu->arch.vcpu_debug_state.
+ * The get_b{c,v}r functions read the value from vcpu->arch.vcpu_debug_state,
+ * convert it to the value as seen by the guest and return the converted
+ * value to the userspace.
+ *
+ * The {s,g}et_b{c,v}r functions will treat invalid breakpoint registers,
+ * which are not mapped to physical breakpoints, as RAZ/WI after the first
+ * KVM_RUN (values that userspace attempts to set in those registers will
+ * not be saved anywhere), which shouldn't be a problem because they will
+ * never be exposed to the guest anyway. Until the first KVM_RUN, setting
+ * and getting of those work normally though (The number of breakpoints
+ * could be changed by userspace until the first KVM_RUN).
+ */
 static int set_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm];
+	u8 v_bpn, p_bpn;
+	__u64 bvr;
 
-	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+	if (copy_from_user(&bvr, uaddr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
+
+	v_bpn = rd->CRm;
+
+	/*
+	 * Until the first KVM_RUN, vcpu_debug_state holds the virtual bvr.
+	 * After that, vcpu_debug_state holds the physical bvr.
+	 */
+	if (vcpu_has_run_once(vcpu)) {
+		/* Convert the virt breakpoint num to phys breakpoint num */
+		p_bpn = virt_to_phys_bpn(vcpu, v_bpn);
+		if (p_bpn != INVALID_BRPN)
+			vcpu->arch.vcpu_debug_state.dbg_bvr[p_bpn] = bvr;
+	} else {
+		vcpu->arch.vcpu_debug_state.dbg_bvr[v_bpn] = bvr;
+	}
+
 	return 0;
 }
 
 static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm];
+	u8 v_bpn, p_bpn;
+	u64 bvr = 0;
 
-	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+	v_bpn = rd->CRm;
+	/*
+	 * Until the first KVM_RUN, vcpu_debug_state holds the virtual bvr.
+	 * After that, vcpu_debug_state holds the physical bvr.
+	 */
+	if (vcpu_has_run_once(vcpu)) {
+		/* Convert the virt breakpoint num to phys breakpoint num */
+		p_bpn = virt_to_phys_bpn(vcpu, v_bpn);
+		if (p_bpn != INVALID_BRPN)
+			bvr = vcpu->arch.vcpu_debug_state.dbg_bvr[p_bpn];
+	} else {
+		bvr = vcpu->arch.vcpu_debug_state.dbg_bvr[v_bpn];
+	}
+
+	if (copy_to_user(uaddr, &bvr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
+
 	return 0;
 }
 
@@ -1005,12 +1317,27 @@ static bool trap_bcr(struct kvm_vcpu *vcpu,
 		     struct sys_reg_params *p,
 		     const struct sys_reg_desc *rd)
 {
-	u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm];
+	u8 p_bpn;
+	u64 *dbg_reg;
 
-	if (p->is_write)
+	/* Convert the given virt breakpoint num to phys breakpoint num */
+	p_bpn = virt_to_phys_bpn(vcpu, rd->CRm);
+	if (p_bpn == INVALID_BRPN) {
+		/* Invalid breakpoint number */
+		kvm_inject_undefined(vcpu);
+		return false;
+	}
+
+	dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bcr[p_bpn];
+	if (p->is_write) {
+		/* Convert virtual bcr to physical bcr */
+		p->regval = virt_to_phys_bcr(vcpu, p->regval);
 		reg_to_dbg(vcpu, p, rd, dbg_reg);
-	else
+	} else {
 		dbg_to_reg(vcpu, p, rd, dbg_reg);
+		/* Convert physical bcr to virtual bcr */
+		p->regval = phys_to_virt_bcr(vcpu, p->regval);
+	}
 
 	trace_trap_reg(__func__, rd->CRm, p->is_write, *dbg_reg);
 
@@ -1020,21 +1347,64 @@ static bool trap_bcr(struct kvm_vcpu *vcpu,
 static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm];
+	u8 v_bpn, p_bpn;
+	u64 v_bcr, p_bcr;
 
-	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+	if (copy_from_user(&v_bcr, uaddr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
 
+	v_bpn = rd->CRm;
+
+	/*
+	 * Until the first KVM_RUN, vcpu_debug_state holds the virtual bcr.
+	 * After that, vcpu_debug_state holds the physical bcr.
+	 */
+	if (vcpu_has_run_once(vcpu)) {
+		/* Convert the virt breakpoint num to phys breakpoint num */
+		p_bpn = virt_to_phys_bpn(vcpu, v_bpn);
+		if (p_bpn != INVALID_BRPN) {
+			/* Convert virt bcr to phys bcr, and save it */
+			p_bcr = virt_to_phys_bcr(vcpu, v_bcr);
+			vcpu->arch.vcpu_debug_state.dbg_bcr[p_bpn] = p_bcr;
+		}
+	} else {
+		vcpu->arch.vcpu_debug_state.dbg_bcr[v_bpn] = v_bcr;
+	}
+
 	return 0;
 }
 
 static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm];
+	u8 v_bpn, p_bpn;
+	u64 v_bcr = 0;
+	u64 p_bcr;
 
-	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+	v_bpn = rd->CRm;
+	/*
+	 * Until the first KVM_RUN, vcpu_debug_state holds the virtual bcr.
+	 * After that, vcpu_debug_state holds the physical bcr.
+	 */
+	if (vcpu_has_run_once(vcpu)) {
+		/*
+		 * Convert the virtual breakpoint num to phys breakpoint num,
+		 * and get the physical bcr value.
+		 */
+		p_bpn = virt_to_phys_bpn(vcpu, v_bpn);
+		if (p_bpn != INVALID_BRPN) {
+			p_bcr = vcpu->arch.vcpu_debug_state.dbg_bcr[p_bpn];
+
+			/* Convert physical bcr to virtual bcr */
+			v_bcr = phys_to_virt_bcr(vcpu, p_bcr);
+		}
+	} else {
+		v_bcr = vcpu->arch.vcpu_debug_state.dbg_bcr[v_bpn];
+	}
+
+	if (copy_to_user(uaddr, &v_bcr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
+
 	return 0;
 }
 
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread
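
One more illustrative sketch: the lbn rewrite that virt_to_phys_bcr()
and phys_to_virt_bcr() perform in the patch above is just a field
replacement at bit 16.  A stand-alone version using the same shift and
mask as the SYS_DBGBCR_EL1_LBN_* definitions in the patch (the example
breakpoint numbers are made up):

#include <stdint.h>
#include <stdio.h>

#define LBN_SHIFT	16
#define LBN_MASK	0xfULL		/* 4-bit field, GENMASK(3, 0) */

/* Replace the lbn field of a dbgbcr value. */
static uint64_t update_lbn(uint64_t bcr, uint8_t lbn)
{
	bcr &= ~(LBN_MASK << LBN_SHIFT);
	bcr |= ((uint64_t)lbn & LBN_MASK) << LBN_SHIFT;
	return bcr;
}

int main(void)
{
	/* The guest links to virtual bpn 2; KVM remaps it to physical bpn 3. */
	uint64_t v_bcr = update_lbn(0, 2);
	uint64_t p_bcr = update_lbn(v_bcr, 3);

	printf("virt bcr 0x%llx -> phys bcr 0x%llx\n",
	       (unsigned long long)v_bcr, (unsigned long long)p_bcr);
	return 0;
}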

* [PATCH v7 13/38] KVM: arm64: Emulate dbgbcr/dbgbvr accesses
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, Will Deacon, Peter Shier, Paolo Bonzini, linux-arm-kernel

Highest numbered breakpoints must be context aware breakpoints
(as specified by the Arm ARM).  If the number of non-context aware
breakpoints for the guest is decreased by userspace (e.g. lowering
ID_AA64DFR0.BRPs while keeping ID_AA64DFR0.CTX_CMPs the same), simply
narrowing the breakpoints will be problematic because it will
lead to narrowing context aware breakpoints for the guest.

Emulate dbgbcr/dbgbvr accesses in that case and map context
aware breakpoints for the vCPU to different numbered breakpoints
for the pCPU, while maintaining the offset within the context
aware breakpoints.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/sysreg.h |   9 +-
 arch/arm64/kvm/sys_regs.c       | 402 ++++++++++++++++++++++++++++++--
 2 files changed, 394 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index b33b7ce87fb2..9b475ba95ffd 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -124,9 +124,16 @@
 #define SYS_OSDTRTX_EL1			sys_reg(2, 0, 0, 3, 2)
 #define SYS_OSECCR_EL1			sys_reg(2, 0, 0, 6, 2)
 #define SYS_DBGBVRn_EL1(n)		sys_reg(2, 0, 0, n, 4)
-#define SYS_DBGBCRn_EL1(n)		sys_reg(2, 0, 0, n, 5)
 #define SYS_DBGWVRn_EL1(n)		sys_reg(2, 0, 0, n, 6)
+
+#define SYS_DBGBCRn_EL1(n)		sys_reg(2, 0, 0, n, 5)
+#define SYS_DBGBCR_EL1_LBN_SHIFT	16
+#define SYS_DBGBCR_EL1_LBN_MASK		GENMASK(3, 0)
+
 #define SYS_DBGWCRn_EL1(n)		sys_reg(2, 0, 0, n, 7)
+#define SYS_DBGWCR_EL1_LBN_SHIFT	16
+#define SYS_DBGWCR_EL1_LBN_MASK		GENMASK(3, 0)
+
 #define SYS_MDRAR_EL1			sys_reg(2, 0, 1, 0, 0)
 
 #define SYS_OSLAR_EL1			sys_reg(2, 0, 1, 0, 4)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f4aae4ccffd0..2ee1e0b6c4ce 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -849,17 +849,230 @@ static bool trap_dbgauthstatus_el1(struct kvm_vcpu *vcpu,
 #define AA64DFR0_CTX_CMPS(v)	\
 	((u8)cpuid_feature_extract_unsigned_field(v, ID_AA64DFR0_CTX_CMPS_SHIFT))
 
+#define INVALID_BRPN	((u8)-1)
+
+static u8 get_bcr_lbn(u64 val)
+{
+	return ((val >> SYS_DBGBCR_EL1_LBN_SHIFT) & SYS_DBGBCR_EL1_LBN_MASK);
+}
+
+static u64 update_bcr_lbn(u64 val, u8 lbn)
+{
+	u64 new;
+
+	new = val & ~(SYS_DBGBCR_EL1_LBN_MASK << SYS_DBGBCR_EL1_LBN_SHIFT);
+	new |= ((u64)lbn & SYS_DBGBCR_EL1_LBN_MASK) << SYS_DBGBCR_EL1_LBN_SHIFT;
+	return new;
+}
+
+/*
+ * KVM will emulate breakpoint accesses when the number of non-context
+ * aware (normal) breakpoints is decreased for the guest. For instance,
+ * this will happen when userspace decreases the number of breakpoints
+ * for the guest while keeping the same number of context aware breakpoints.
+ * Simply narrowing the number of breakpoints for the guest will lead
+ * to narrowing context aware breakpoints for the guest because as per
+ * the Arm ARM, highest numbered breakpoints are context aware breakpoints.
+ * So, in that case, KVM will map context aware breakpoints for the
+ * vCPU to different numbered breakpoints for the pCPU, but will
+ * maintain the offset in context aware breakpoints.
+ * For instance, if 5 breakpoints are supported, and 2 of them are
+ * context aware breakpoints, breakpoint#0, #1 and #2 are normal
+ * breakpoints, and #3 and #4 are context aware breakpoints.
+ * If userspace decreases the number of breakpoints to 4 keeping the
+ * same number of context aware breakpoints (== 2), the guest expects
+ * breakpoint#0 and #1 to be normal breakpoints, and #2 and #3 to be
+ * context aware breakpoints. So, KVM will map the (virtual) context
+ * aware breakpoint #2 and #3 for the vCPU to (physical) context aware
+ * breakpoint #3 and #4 for the pCPU as follows.
+ *
+ * [Example]
+ *
+ *           Normal Breakpoints   Context aware breakpoints
+ * Virtual     #0  #1              #2  #3
+ *              |   |               |   |
+ * Physical    #0  #1  #2          #3  #4
+ *
+ * So, dbg{b,w}cr.lbn (linked breakpoint number) for the vCPU might be
+ * different from the ones for the pCPU (e.g. with the above example,
+ * when the guest sets dbgbcr0.lbn to 2 for the vCPU, dbgbcr0.lbn
+ * for the pCPU should be set to 3).
+ * Values in vcpu_debug_state of kvm_vcpu_arch will basically be the ones
+ * that are going to be set to the physical registers (indexed by physical
+ * context breakpoint number). But, they hold the values from the guest
+ * point of view until the first KVM_RUN (when the physical/virtual
+ * breakpoint number mapping is fixed), and they will be converted to the
+ * physical values during the process of first KVM_RUN.
+ *
+ * As there is no functional difference between any watchpoints,
+ * a virtual watchpoint# is always the same as the physical watchpoint#.
+ */
+
+/*
+ * Convert breakpoint# for the guest to breakpoint# for the real hardware.
+ * Return INVALID_BRPN if the given breakpoint# is invalid.
+ */
+static inline u8 virt_to_phys_bpn(struct kvm_vcpu *vcpu, u8 v_bpn)
+{
+	u8 virt_ctx_base, phys_ctx_base;
+	u64 p_val, v_val;
+
+	v_val = read_id_reg_with_encoding(vcpu, SYS_ID_AA64DFR0_EL1);
+	if (v_bpn > AA64DFR0_BRPS(v_val)) {
+		/*
+		 * The virtual bpn is out of valid virtual breakpoint number
+		 * range. Return the invalid breakpoint number.
+		 */
+		return INVALID_BRPN;
+	}
+
+	if (!test_bit(KVM_ARCH_FLAG_EMULATE_DEBUG_REGS, &vcpu->kvm->arch.flags))
+		 /* physical bpn == virtual bpn when no emulation is needed */
+		return v_bpn;
+
+	/* The lowest virtual context aware bpn */
+	virt_ctx_base = AA64DFR0_BRPS(v_val) - AA64DFR0_CTX_CMPS(v_val);
+	if (v_bpn < virt_ctx_base)
+		/*
+		 * physical bpn == virtual bpn when v_bpn is not a
+		 * context aware breakpoint.
+		 */
+		return v_bpn;
+
+	p_val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
+	/* The lowest physical context aware bpn */
+	phys_ctx_base = AA64DFR0_BRPS(p_val) - AA64DFR0_CTX_CMPS(p_val);
+
+	WARN_ON_ONCE(virt_ctx_base >= phys_ctx_base);
+
+	/*
+	 * Context aware bpn.  Map it to the same offset of physical
+	 * context aware registers.
+	 */
+	return phys_ctx_base + (v_bpn - virt_ctx_base);
+}
+
 /*
- * Set KVM_ARCH_FLAG_EMULATE_DEBUG_REGS in the VM flags when the number of
- * non-context aware breakpoints for the guest is decreased by userspace
- * (meaning that debug register accesses need to be emulated).
+ * Convert breakpoint# for the real hardware to breakpoint# for the guest.
+ * Return INVALID_BRPN if the given breakpoint# is not used for the guest.
+ */
+static inline u8 phys_to_virt_bpn(struct kvm_vcpu *vcpu, u8 p_bpn)
+{
+	u8 virt_ctx_base, phys_ctx_base, v_bpn;
+	u64 p_val, v_val;
+
+	if (!test_bit(KVM_ARCH_FLAG_EMULATE_DEBUG_REGS, &vcpu->kvm->arch.flags))
+		return p_bpn;
+
+	v_val = read_id_reg_with_encoding(vcpu, SYS_ID_AA64DFR0_EL1);
+
+	/* The lowest virtual context aware bpn */
+	virt_ctx_base = AA64DFR0_BRPS(v_val) - AA64DFR0_CTX_CMPS(v_val);
+	if (p_bpn < virt_ctx_base)
+		/*
+		 * physical bpn == virtual bpn when p_bpn is smaller than
+		 * the lowest virtual context aware breakpoint number.
+		 */
+		return p_bpn;
+
+	p_val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
+
+	/* The lowest physical context aware bpn */
+	phys_ctx_base = AA64DFR0_BRPS(p_val) - AA64DFR0_CTX_CMPS(p_val);
+	if (p_bpn < phys_ctx_base)
+		/*
+		 * Unused non-context aware breakpoint.
+		 * No virtual breakpoint is assigned for this.
+		 */
+		return INVALID_BRPN;
+
+	WARN_ON_ONCE(virt_ctx_base >= phys_ctx_base);
+
+	/*
+	 * Context aware bpn. Map it to the same offset of virtual
+	 * context aware registers.
+	 */
+	v_bpn = virt_ctx_base + (p_bpn - phys_ctx_base);
+	if (v_bpn > AA64DFR0_BRPS(v_val)) {
+		/* This physical bpn is not mapped to any virtual bpn */
+		return INVALID_BRPN;
+	}
+
+	return v_bpn;
+}
+
+static u8 get_unused_p_bpn(struct kvm_vcpu *vcpu)
+{
+	u64 p_val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
+
+	WARN_ON_ONCE(!test_bit(KVM_ARCH_FLAG_EMULATE_DEBUG_REGS, &vcpu->kvm->arch.flags));
+
+	/*
+	 * The last normal (non-context aware) breakpoint is always unused
+	 * (and disabled) when kvm_arm_need_emulate_debug_regs() is true.
+	 */
+	return AA64DFR0_BRPS(p_val) - AA64DFR0_CTX_CMPS(p_val) - 1;
+}
+
+/*
+ * virt_to_phys_bcr() converts the virtual bcr value (the bcr value from
+ * the guest point of view) to the physical bcr value, which is going to
+ * be set to the real hardware.  More specifically, as the lbn field of
+ * the virtual bcr holds the virtual breakpoint number, the function will
+ * update the bcr with the physical breakpoint number, and will return
+ * it as the physical bcr value. phys_to_virt_bcr() does the opposite.
+ *
+ * As per Arm ARM (ARM DDI 0487H.a), if a Linked Address breakpoint links
+ * to a breakpoint that is not implemented or that is not context aware,
+ * then reads of bcr.lbn return an unknown value, and the Linked Address
+ * breakpoint behaves as if it is either disabled or linked to an UNKNOWN
+ * context aware breakpoint. In such cases, KVM will return 0 for reads of
+ * bcr.lbn, and have the breakpoint behave as if it is disabled by
+ * setting the lbn to an unused (disabled) breakpoint.
+ */
+static u64 virt_to_phys_bcr(struct kvm_vcpu *vcpu, u64 v_bcr)
+{
+	u8 v_lbn, p_lbn;
+
+	v_lbn = get_bcr_lbn(v_bcr);
+	p_lbn = virt_to_phys_bpn(vcpu, v_lbn);
+	if (p_lbn == INVALID_BRPN)
+		p_lbn = get_unused_p_bpn(vcpu);
+
+	return update_bcr_lbn(v_bcr, p_lbn);
+}
+
+static u64 phys_to_virt_bcr(struct kvm_vcpu *vcpu, u64 p_bcr)
+{
+	u8 v_lbn, p_lbn;
+
+	p_lbn = get_bcr_lbn(p_bcr);
+	v_lbn = phys_to_virt_bpn(vcpu, p_lbn);
+	if (v_lbn == INVALID_BRPN)
+		v_lbn = 0;
+
+	return update_bcr_lbn(p_bcr, v_lbn);
+}
+
+/*
+ * Check if the number of normal breakpoints for the guest is the same as
+ * the one for the host. If so, do nothing.
+ * Otherwise (accesses to debug registers need to be emulated), set
+ * KVM_ARCH_FLAG_EMULATE_DEBUG_REGS in the VM flags, and convert the
+ * values in vcpu->arch.vcpu_debug_state from guest-point-of-view values
+ * to the values that are going to be set to the hardware registers.
+ * See the comments for set_bvr() for more details.
  */
 void kvm_vcpu_breakpoint_config(struct kvm_vcpu *vcpu)
 {
 	u64 p_val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
 	u64 v_val = read_id_reg_with_encoding(vcpu, SYS_ID_AA64DFR0_EL1);
 	u8 v_nbpn, p_nbpn;
+	u64 p_bcr;
 	struct kvm *kvm = vcpu->kvm;
+	int v;
+	u8 p_bpn;
+	struct kvm_guest_debug_arch *dbg = &vcpu->arch.vcpu_debug_state;
 
 	/*
 	 * Check the number of normal (non-context aware) breakpoints
@@ -877,11 +1090,39 @@ void kvm_vcpu_breakpoint_config(struct kvm_vcpu *vcpu)
 
 	if (!test_bit(KVM_ARCH_FLAG_EMULATE_DEBUG_REGS, &kvm->arch.flags))
 		set_bit(KVM_ARCH_FLAG_EMULATE_DEBUG_REGS, &kvm->arch.flags);
+
+	/*
+	 * Before the first KVM_RUN, vcpu->arch.vcpu_debug_state holds
+	 * values of the registers to be exposed to the guest and their
+	 * positions are indexed by virtual breakpoint numbers.
+	 * Convert the values to the physical values that are going to be
+	 * set to the hardware registers, and move them to positions indexed
+	 * by physical breakpoint numbers.
+	 */
+	for (v = KVM_ARM_MAX_DBG_REGS - 1; v >= 0; v--) {
+		/* Get physical breakpoint number */
+		p_bpn = virt_to_phys_bpn(vcpu, v);
+		WARN_ON_ONCE(p_bpn < v);
+
+		if (p_bpn != INVALID_BRPN) {
+			/* Get physical bcr */
+			p_bcr = virt_to_phys_bcr(vcpu, dbg->dbg_bcr[v]);
+			dbg->dbg_bcr[p_bpn] = p_bcr;
+			dbg->dbg_bvr[p_bpn] = dbg->dbg_bvr[v];
+		}
+
+		/* Clear dbg_b{c,v}r, which might not be used */
+		if (p_bpn != v) {
+			dbg->dbg_bcr[v] = 0;
+			dbg->dbg_bvr[v] = 0;
+		}
+	}
 }
 
 /*
  * We want to avoid world-switching all the DBG registers all the
- * time:
+ * time unless userspace decreases the number of non-context aware
+ * breakpoints, where accesses to debug registers need to be emulated.
  *
  * - If we've touched any debug register, it is likely that we're
  *   going to touch more of them. It then makes sense to disable the
@@ -963,8 +1204,17 @@ static bool trap_bvr(struct kvm_vcpu *vcpu,
 		     struct sys_reg_params *p,
 		     const struct sys_reg_desc *rd)
 {
-	u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm];
+	u8 p_bpn;
+	u64 *dbg_reg;
 
+	/* Convert the virt breakpoint num to phys breakpoint num */
+	p_bpn = virt_to_phys_bpn(vcpu, rd->CRm);
+	if (p_bpn == INVALID_BRPN) {
+		kvm_inject_undefined(vcpu);
+		return false;
+	}
+
+	dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bvr[p_bpn];
 	if (p->is_write)
 		reg_to_dbg(vcpu, p, rd, dbg_reg);
 	else
@@ -975,23 +1225,85 @@ static bool trap_bvr(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+/*
+ * The behaviors of {s,g}et_b{c,v}r change depending on whether they
+ * are called before or after the first KVM_RUN.
+ *
+ * Before the first KVM_RUN (the number of breakpoints is not fixed yet),
+ * vcpu->arch.vcpu_debug_state holds debug register values from
+ * the guest point of view. set_b{c,v}r() simply saves the value
+ * from userspace in vcpu->arch.vcpu_debug_state, and get_b{c,v}r()
+ * simply returns the value in vcpu->arch.vcpu_debug_state to userspace.
+ *
+ * At the first KVM_RUN (where the number of breakpoints is immutable),
+ * b{c,v}r values in vcpu->arch.vcpu_debug_state are converted to
+ * the values that are going to be set to hardware registers.
+ * After that, vcpu->arch.vcpu_debug_state holds debug register values that
+ * are going to be set to hardware registers.  The set_b{c,v}r functions
+ * convert the value from userspace to the one that will be set to the
+ * hardware register and save the converted value in vcpu->arch.vcpu_debug_state.
+ * The get_b{c,v}r functions read the value from vcpu->arch.vcpu_debug_state,
+ * convert it to the value as seen by the guest and return the converted
+ * value to the userspace.
+ *
+ * The {s,g}et_b{c,v}r functions will treat invalid breakpoint registers,
+ * which are not mapped to physical breakpoints, as RAZ/WI after the first
+ * KVM_RUN (values that userspace attempts to set in those registers will
+ * not be saved anywhere), which shouldn't be a problem because they will
+ * never be exposed to the guest anyway. Until the first KVM_RUN, setting
+ * and getting of those registers work normally though (the number of
+ * breakpoints can still be changed by userspace until the first KVM_RUN).
+ */
 static int set_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm];
+	u8 v_bpn, p_bpn;
+	__u64 bvr;
 
-	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+	if (copy_from_user(&bvr, uaddr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
+
+	v_bpn = rd->CRm;
+
+	/*
+	 * Until the first KVM_RUN, vcpu_debug_state holds the virtual bvr.
+	 * After that, vcpu_debug_state holds the physical bvr.
+	 */
+	if (vcpu_has_run_once(vcpu)) {
+		/* Convert the virt breakpoint num to phys breakpoint num */
+		p_bpn = virt_to_phys_bpn(vcpu, v_bpn);
+		if (p_bpn != INVALID_BRPN)
+			vcpu->arch.vcpu_debug_state.dbg_bvr[p_bpn] = bvr;
+	} else {
+		vcpu->arch.vcpu_debug_state.dbg_bvr[v_bpn] = bvr;
+	}
+
 	return 0;
 }
 
 static int get_bvr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bvr[rd->CRm];
+	u8 v_bpn, p_bpn;
+	u64 bvr = 0;
 
-	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+	v_bpn = rd->CRm;
+	/*
+	 * Until the first KVM_RUN, vcpu_debug_state holds the virtual bvr.
+	 * After that, vcpu_debug_state holds the physical bvr.
+	 */
+	if (vcpu_has_run_once(vcpu)) {
+		/* Convert the virt breakpoint num to phys breakpoint num */
+		p_bpn = virt_to_phys_bpn(vcpu, v_bpn);
+		if (p_bpn != INVALID_BRPN)
+			bvr = vcpu->arch.vcpu_debug_state.dbg_bvr[p_bpn];
+	} else {
+		bvr = vcpu->arch.vcpu_debug_state.dbg_bvr[v_bpn];
+	}
+
+	if (copy_to_user(uaddr, &bvr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
+
 	return 0;
 }
 
@@ -1005,12 +1317,27 @@ static bool trap_bcr(struct kvm_vcpu *vcpu,
 		     struct sys_reg_params *p,
 		     const struct sys_reg_desc *rd)
 {
-	u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm];
+	u8 p_bpn;
+	u64 *dbg_reg;
 
-	if (p->is_write)
+	/* Convert the given virt breakpoint num to phys breakpoint num */
+	p_bpn = virt_to_phys_bpn(vcpu, rd->CRm);
+	if (p_bpn == INVALID_BRPN) {
+		/* Invalid breakpoint number */
+		kvm_inject_undefined(vcpu);
+		return false;
+	}
+
+	dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_bcr[p_bpn];
+	if (p->is_write) {
+		/* Convert virtual bcr to physical bcr */
+		p->regval = virt_to_phys_bcr(vcpu, p->regval);
 		reg_to_dbg(vcpu, p, rd, dbg_reg);
-	else
+	} else {
 		dbg_to_reg(vcpu, p, rd, dbg_reg);
+		/* Convert physical bcr to virtual bcr */
+		p->regval = phys_to_virt_bcr(vcpu, p->regval);
+	}
 
 	trace_trap_reg(__func__, rd->CRm, p->is_write, *dbg_reg);
 
@@ -1020,21 +1347,64 @@ static bool trap_bcr(struct kvm_vcpu *vcpu,
 static int set_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm];
+	u8 v_bpn, p_bpn;
+	u64 v_bcr, p_bcr;
 
-	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+	if (copy_from_user(&v_bcr, uaddr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
 
+	v_bpn = rd->CRm;
+
+	/*
+	 * Until the first KVM_RUN, vcpu_debug_state holds the virtual bcr.
+	 * After that, vcpu_debug_state holds the physical bcr.
+	 */
+	if (vcpu_has_run_once(vcpu)) {
+		/* Convert the virt breakpoint num to phys breakpoint num */
+		p_bpn = virt_to_phys_bpn(vcpu, v_bpn);
+		if (p_bpn != INVALID_BRPN) {
+			/* Convert virt bcr to phys bcr, and save it */
+			p_bcr = virt_to_phys_bcr(vcpu, v_bcr);
+			vcpu->arch.vcpu_debug_state.dbg_bcr[p_bpn] = p_bcr;
+		}
+	} else {
+		vcpu->arch.vcpu_debug_state.dbg_bcr[v_bpn] = v_bcr;
+	}
+
 	return 0;
 }
 
 static int get_bcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_bcr[rd->CRm];
+	u8 v_bpn, p_bpn;
+	u64 v_bcr = 0;
+	u64 p_bcr;
 
-	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+	v_bpn = rd->CRm;
+	/*
+	 * Until the first KVM_RUN, vcpu_debug_state holds the virtual bcr.
+	 * After that, vcpu_debug_state holds the physical bcr.
+	 */
+	if (vcpu_has_run_once(vcpu)) {
+		/*
+		 * Convert the virtual breakpoint num to phys breakpoint num,
+		 * and get the physical bcr value.
+		 */
+		p_bpn = virt_to_phys_bpn(vcpu, v_bpn);
+		if (p_bpn != INVALID_BRPN) {
+			p_bcr = vcpu->arch.vcpu_debug_state.dbg_bcr[p_bpn];
+
+			/* Convert physical bcr to virtual bcr */
+			v_bcr = phys_to_virt_bcr(vcpu, p_bcr);
+		}
+	} else {
+		v_bcr = vcpu->arch.vcpu_debug_state.dbg_bcr[v_bpn];
+	}
+
+	if (copy_to_user(uaddr, &v_bcr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
+
 	return 0;
 }
 
-- 
2.36.0.rc0.470.gd361397f0d-goog

^ permalink raw reply related	[flat|nested] 123+ messages in thread
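
To make the breakpoint-number mapping above concrete, here is a minimal,
self-contained sketch of the conversion arithmetic. This is not KVM code:
it uses plain breakpoint counts instead of the minus-one BRPs/CTX_CMPs
encodings of ID_AA64DFR0_EL1, and the constants are assumptions mirroring
the 5-breakpoint/2-context-aware example from the comment block in the
patch.

#include <stdint.h>
#include <stdio.h>

#define PHYS_BPS     5    /* physical breakpoints (example) */
#define PHYS_CTX     2    /* physical context aware breakpoints (example) */
#define VIRT_BPS     4    /* breakpoints exposed to the guest (example) */
#define VIRT_CTX     2    /* context aware breakpoints for the guest */
#define INVALID_BRPN 0xff

static unsigned int virt_to_phys_bpn_sketch(unsigned int v_bpn)
{
	unsigned int virt_ctx_base = VIRT_BPS - VIRT_CTX; /* lowest virtual ctx-aware bpn */
	unsigned int phys_ctx_base = PHYS_BPS - PHYS_CTX; /* lowest physical ctx-aware bpn */

	if (v_bpn >= VIRT_BPS)
		return INVALID_BRPN;	/* out of the guest's valid range */
	if (v_bpn < virt_ctx_base)
		return v_bpn;		/* normal breakpoints map 1:1 */
	/* context aware: keep the offset within the ctx-aware group */
	return phys_ctx_base + (v_bpn - virt_ctx_base);
}

int main(void)
{
	/* Prints the [Example] table: 0->0, 1->1, 2->3, 3->4 */
	for (unsigned int v = 0; v < VIRT_BPS; v++)
		printf("virtual #%u -> physical #%u\n",
		       v, virt_to_phys_bpn_sketch(v));
	return 0;
}

Running it prints the same table as the [Example] diagram: virtual #0 and
#1 map 1:1, while virtual #2 and #3 land on physical #3 and #4.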

* [PATCH v7 14/38] KVM: arm64: Emulate dbgwcr accesses
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

When the number of non-context aware breakpoints for the guest is
decreased by userspace, KVM will map vCPU's context-aware breakpoints
(from the guest point of view) to pCPU's context aware breakpoints.
Since dbgwcr.lbn holds a linked breakpoint number, emulate dbgwcr
accesses to convert between virtual and physical dbgwcr.lbn as needed.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 110 ++++++++++++++++++++++++++++++++------
 1 file changed, 95 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2ee1e0b6c4ce..400fa7ff582f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -846,20 +846,28 @@ static bool trap_dbgauthstatus_el1(struct kvm_vcpu *vcpu,
 
 #define AA64DFR0_BRPS(v)	\
 	((u8)cpuid_feature_extract_unsigned_field(v, ID_AA64DFR0_BRPS_SHIFT))
+#define AA64DFR0_WRPS(v)	\
+	((u8)cpuid_feature_extract_unsigned_field(v, ID_AA64DFR0_WRPS_SHIFT))
 #define AA64DFR0_CTX_CMPS(v)	\
 	((u8)cpuid_feature_extract_unsigned_field(v, ID_AA64DFR0_CTX_CMPS_SHIFT))
 
 #define INVALID_BRPN	((u8)-1)
 
-static u8 get_bcr_lbn(u64 val)
+static u8 get_bwcr_lbn(u64 val)
 {
+	WARN_ON_ONCE(SYS_DBGBCR_EL1_LBN_SHIFT != SYS_DBGWCR_EL1_LBN_SHIFT);
+	WARN_ON_ONCE(SYS_DBGBCR_EL1_LBN_MASK != SYS_DBGWCR_EL1_LBN_MASK);
+
 	return ((val >> SYS_DBGBCR_EL1_LBN_SHIFT) & SYS_DBGBCR_EL1_LBN_MASK);
 }
 
-static u64 update_bcr_lbn(u64 val, u8 lbn)
+static u64 update_bwcr_lbn(u64 val, u8 lbn)
 {
 	u64 new;
 
+	WARN_ON_ONCE(SYS_DBGBCR_EL1_LBN_SHIFT != SYS_DBGWCR_EL1_LBN_SHIFT);
+	WARN_ON_ONCE(SYS_DBGBCR_EL1_LBN_MASK != SYS_DBGWCR_EL1_LBN_MASK);
+
 	new = val & ~(SYS_DBGBCR_EL1_LBN_MASK << SYS_DBGBCR_EL1_LBN_SHIFT);
 	new |= ((u64)lbn & SYS_DBGBCR_EL1_LBN_MASK) << SYS_DBGBCR_EL1_LBN_SHIFT;
 	return new;
@@ -1029,29 +1037,51 @@ static u8 get_unused_p_bpn(struct kvm_vcpu *vcpu)
  * context aware breakpoint. In such cases, KVM will return 0 for reads of
  * bcr.lbn, and have the breakpoint behave as if it is disabled by
  * setting the lbn to an unused (disabled) breakpoint.
+ *
+ * virt_to_phys_wcr()/phys_to_virt_wcr() do the same thing for wcr.
  */
-static u64 virt_to_phys_bcr(struct kvm_vcpu *vcpu, u64 v_bcr)
+static u64 virt_to_phys_bwcr(struct kvm_vcpu *vcpu, u64 v_bwcr)
 {
 	u8 v_lbn, p_lbn;
 
-	v_lbn = get_bcr_lbn(v_bcr);
+	v_lbn = get_bwcr_lbn(v_bwcr);
 	p_lbn = virt_to_phys_bpn(vcpu, v_lbn);
 	if (p_lbn == INVALID_BRPN)
 		p_lbn = get_unused_p_bpn(vcpu);
 
-	return update_bcr_lbn(v_bcr, p_lbn);
+	return update_bwcr_lbn(v_bwcr, p_lbn);
 }
 
-static u64 phys_to_virt_bcr(struct kvm_vcpu *vcpu, u64 p_bcr)
+static u64 phys_to_virt_bwcr(struct kvm_vcpu *vcpu, u64 p_bwcr)
 {
 	u8 v_lbn, p_lbn;
 
-	p_lbn = get_bcr_lbn(p_bcr);
+	p_lbn = get_bwcr_lbn(p_bwcr);
 	v_lbn = phys_to_virt_bpn(vcpu, p_lbn);
 	if (v_lbn == INVALID_BRPN)
 		v_lbn = 0;
 
-	return update_bcr_lbn(p_bcr, v_lbn);
+	return update_bwcr_lbn(p_bwcr, v_lbn);
+}
+
+static u64 virt_to_phys_bcr(struct kvm_vcpu *vcpu, u64 v_bcr)
+{
+	return virt_to_phys_bwcr(vcpu, v_bcr);
+}
+
+static u64 virt_to_phys_wcr(struct kvm_vcpu *vcpu, u64 v_wcr)
+{
+	return virt_to_phys_bwcr(vcpu, v_wcr);
+}
+
+static u64 phys_to_virt_bcr(struct kvm_vcpu *vcpu, u64 p_bcr)
+{
+	return phys_to_virt_bwcr(vcpu, p_bcr);
+}
+
+static u64 phys_to_virt_wcr(struct kvm_vcpu *vcpu, u64 p_wcr)
+{
+	return phys_to_virt_bwcr(vcpu, p_wcr);
 }
 
 /*
@@ -1116,6 +1146,12 @@ void kvm_vcpu_breakpoint_config(struct kvm_vcpu *vcpu)
 			dbg->dbg_bcr[v] = 0;
 			dbg->dbg_bvr[v] = 0;
 		}
+
+		/*
+		 * There is no distinction between physical and virtual
+		 * watchpoint numbers. So, the index stays the same.
+		 */
+		dbg->dbg_wcr[v] = virt_to_phys_wcr(vcpu, dbg->dbg_wcr[v]);
 	}
 }
 
@@ -1461,12 +1497,26 @@ static bool trap_wcr(struct kvm_vcpu *vcpu,
 		     struct sys_reg_params *p,
 		     const struct sys_reg_desc *rd)
 {
-	u64 *dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm];
+	u8 wpn = rd->CRm;
+	u64 *dbg_reg;
+	u64 v_dfr0 = read_id_reg_with_encoding(vcpu, SYS_ID_AA64DFR0_EL1);
 
-	if (p->is_write)
+	if (wpn > AA64DFR0_WRPS(v_dfr0)) {
+		/* Invalid watchpoint number for the guest */
+		kvm_inject_undefined(vcpu);
+		return false;
+	}
+
+	dbg_reg = &vcpu->arch.vcpu_debug_state.dbg_wcr[wpn];
+	if (p->is_write) {
+		/* Convert virtual wcr to physical wcr and update debug_reg */
+		p->regval = virt_to_phys_wcr(vcpu, p->regval);
 		reg_to_dbg(vcpu, p, rd, dbg_reg);
-	else
+	} else {
 		dbg_to_reg(vcpu, p, rd, dbg_reg);
+		/* Convert physical wcr to virtual wcr */
+		p->regval = phys_to_virt_wcr(vcpu, p->regval);
+	}
 
 	trace_trap_reg(__func__, rd->CRm, p->is_write, *dbg_reg);
 
@@ -1476,19 +1526,49 @@ static bool trap_wcr(struct kvm_vcpu *vcpu,
 static int set_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 		const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm];
+	u8 wpn = rd->CRm;
+	u64 v_wcr, p_wcr;
 
-	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+	if (copy_from_user(&v_wcr, uaddr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
+
+	/*
+	 * Until the first KVM_RUN, vcpu_debug_state holds the virtual wcr.
+	 * After that, vcpu_debug_state holds the physical wcr.
+	 */
+	if (vcpu_has_run_once(vcpu)) {
+		/* Convert virtual wcr to physical wcr, and save it */
+		p_wcr = virt_to_phys_wcr(vcpu, v_wcr);
+		vcpu->arch.vcpu_debug_state.dbg_wcr[wpn] = p_wcr;
+	} else {
+		vcpu->arch.vcpu_debug_state.dbg_wcr[wpn] = v_wcr;
+	}
+
 	return 0;
 }
 
 static int get_wcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
 	const struct kvm_one_reg *reg, void __user *uaddr)
 {
-	__u64 *r = &vcpu->arch.vcpu_debug_state.dbg_wcr[rd->CRm];
+	u8 wpn = rd->CRm;
+	u64 p_wcr, v_wcr;
 
-	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+	/*
+	 * Until the first KVM_RUN, vcpu_debug_state holds the virtual wcr.
+	 * After that, vcpu_debug_state holds the physical wcr.
+	 */
+	if (vcpu_has_run_once(vcpu)) {
+		/* Get the physical wcr value */
+		p_wcr = vcpu->arch.vcpu_debug_state.dbg_wcr[wpn];
+
+		/* Convert physical wcr to virtual wcr */
+		v_wcr = phys_to_virt_wcr(vcpu, p_wcr);
+	} else {
+		v_wcr = vcpu->arch.vcpu_debug_state.dbg_wcr[wpn];
+	}
+
+	if (copy_to_user(uaddr, &v_wcr, KVM_REG_SIZE(reg->id)) != 0)
 		return -EFAULT;
 	return 0;
 }
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread
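
The lbn handling in this patch is plain bit-field surgery on bits [19:16]
of the control register. Below is a standalone sketch of the round trip,
assuming the lbn shift (16) and 4-bit mask that patch 13 added to
sysreg.h; the bcr value 0x1e5 is an arbitrary example.

#include <stdint.h>
#include <stdio.h>

#define LBN_SHIFT 16		/* SYS_DBG{B,W}CR_EL1_LBN_SHIFT (from patch 13) */
#define LBN_MASK  0xfULL	/* GENMASK(3, 0) */

static unsigned int get_bwcr_lbn(uint64_t val)
{
	return (val >> LBN_SHIFT) & LBN_MASK;
}

static uint64_t update_bwcr_lbn(uint64_t val, unsigned int lbn)
{
	val &= ~(LBN_MASK << LBN_SHIFT);	/* clear the old lbn */
	return val | (((uint64_t)lbn & LBN_MASK) << LBN_SHIFT);
}

int main(void)
{
	uint64_t v_bcr = update_bwcr_lbn(0x1e5, 2);	/* 0x1e5: arbitrary bcr bits */
	uint64_t p_bcr = update_bwcr_lbn(v_bcr, 3);	/* the virt_to_phys step */

	printf("virtual lbn=%u, physical lbn=%u\n",
	       get_bwcr_lbn(v_bcr), get_bwcr_lbn(p_bcr));
	return 0;
}

With the earlier 5/2 mapping example, a guest write of dbgbcr0 with lbn=2
would be rewritten to lbn=3 before reaching the hardware, matching the
diagram in patch 13.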

* [PATCH v7 15/38] KVM: arm64: Make ID_AA64DFR0_EL1/ID_DFR0_EL1 writable
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

This patch adds id_reg_desc for ID_AA64DFR0_EL1 and ID_DFR0_EL1
to make them writable by userspace.

Return an error if userspace tries to set the PMUVER/PerfMon field
of ID_AA64DFR0_EL1/ID_DFR0_EL1 to a value that conflicts with the
PMU configuration.

When the value of ID_AA64DFR0_EL1.PMUVER or ID_DFR0_EL1.PERFMON on the
host is 0xf, which means an IMPLEMENTATION DEFINED PMU is supported, KVM
erroneously exposes the value to the guest as-is even though KVM
doesn't support it for the guest. In that case, since KVM should
expose 0x0 (PMU is not implemented), change the initial value of
ID_AA64DFR0_EL1.PMUVER and ID_DFR0_EL1.PERFMON for the guest to 0x0.
If userspace requests KVM to set them to 0xf, which shouldn't be
allowed as KVM doesn't support an IMPLEMENTATION DEFINED PMU for the
guest, ignore the request (set the fields to 0x0 instead) so that
live migration from an older kernel works fine.

Since the number of context-aware breakpoints must be no more than the
number of supported breakpoints according to the Arm ARM, return an
error if userspace tries to set the CTX_CMPS field to a larger value.
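
Roughly, the two checks described above boil down to the following sketch
(hypothetical helpers standing in for cpuid_feature_extract_unsigned_field()
and kvm_vcpu_has_pmu(); the field shifts are assumed from the Arm ARM
layout of ID_AA64DFR0_EL1):

#include <stdint.h>

#define BRPS_SHIFT      12	/* ID_AA64DFR0_EL1.BRPs (assumed layout) */
#define CTX_CMPS_SHIFT  28	/* ID_AA64DFR0_EL1.CTX_CMPs */
#define PMUVER_SHIFT    8	/* ID_AA64DFR0_EL1.PMUVer */

/* Hypothetical stand-in for cpuid_feature_extract_unsigned_field(). */
static unsigned int field(uint64_t reg, unsigned int shift)
{
	return (reg >> shift) & 0xf;
}

/* Returns 0 if @val is an acceptable guest ID_AA64DFR0_EL1, else -1. */
static int validate_dfr0_sketch(uint64_t val, uint64_t limit, int has_pmu)
{
	unsigned int pmu = field(val, PMUVER_SHIFT);

	/* Context-aware breakpoints cannot outnumber breakpoints. */
	if (field(val, CTX_CMPS_SHIFT) > field(val, BRPS_SHIFT))
		return -1;

	/* 0xf (IMPLEMENTATION DEFINED) is silently treated as "no PMU". */
	if (pmu == 0xf)
		pmu = 0;

	/* The requested PMU version cannot exceed KVM's limit. */
	if (pmu > field(limit, PMUVER_SHIFT))
		return -1;

	/* The field must agree with the KVM_ARM_VCPU_PMU_V3 configuration. */
	if (!!has_pmu != (pmu != 0))
		return -1;

	return 0;
}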

Fixes: 8e35aa642ee4 ("arm64: cpufeature: Extract capped perfmon fields")
Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/cpufeature.h |   2 +-
 arch/arm64/kvm/sys_regs.c           | 164 ++++++++++++++++++++++++----
 2 files changed, 143 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 7a009d4e18a6..7ed2d32b3854 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -554,7 +554,7 @@ cpuid_feature_cap_perfmon_field(u64 features, int field, u64 cap)
 
 	/* Treat IMPLEMENTATION DEFINED functionality as unimplemented */
 	if (val == ID_AA64DFR0_PMUVER_IMP_DEF)
-		val = 0;
+		return (features & ~mask);
 
 	if (val > cap) {
 		features &= ~mask;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 400fa7ff582f..9eca085886f5 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -654,6 +654,75 @@ static int validate_id_aa64mmfr0_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static int validate_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
+				    const struct id_reg_desc *id_reg, u64 val)
+{
+	unsigned int brps, ctx_cmps;
+	u64 pmu, lim_pmu;
+	u64 lim = id_reg->vcpu_limit_val;
+
+	brps = cpuid_feature_extract_unsigned_field(val, ID_AA64DFR0_BRPS_SHIFT);
+	ctx_cmps = cpuid_feature_extract_unsigned_field(val, ID_AA64DFR0_CTX_CMPS_SHIFT);
+
+	/*
+	 * The number of context-aware breakpoints can be no more than the
+	 * number of supported breakpoints.
+	 */
+	if (ctx_cmps > brps)
+		return -EINVAL;
+
+	/*
+	 * KVM will not set PMUVER to 0xf (IMPLEMENTATION DEFINED PMU)
+	 * for the guest because KVM doesn't support it.
+	 * If userspace requests KVM to set the field to 0xf, KVM will treat
+	 * that as 0 instead of returning an error since userspace might do
+	 * that when the guest is migrated from a host with older KVM,
+	 * that when the guest is migrated from a host with an older KVM,
+	 */
+	pmu = cpuid_feature_extract_unsigned_field(val, ID_AA64DFR0_PMUVER_SHIFT);
+	pmu = (pmu == 0xf) ? 0 : pmu;
+	lim_pmu = cpuid_feature_extract_unsigned_field(lim, ID_AA64DFR0_PMUVER_SHIFT);
+	if (pmu > lim_pmu)
+		return -E2BIG;
+
+	/* Check if there is a conflict with a request via KVM_ARM_VCPU_INIT */
+	if (kvm_vcpu_has_pmu(vcpu) ^ (pmu >= ID_AA64DFR0_PMUVER_8_0))
+		return -EPERM;
+
+	return 0;
+}
+
+static int validate_id_dfr0_el1(struct kvm_vcpu *vcpu,
+				const struct id_reg_desc *id_reg, u64 val)
+{
+	u64 pmon, lim_pmon;
+	u64 lim = id_reg->vcpu_limit_val;
+
+	/*
+	 * KVM will not set PERFMON to 0xf (IMPLEMENTATION DEFINED PERFMON)
+	 * for the guest because KVM doesn't support it.
+	 * If userspace requests KVM to set the field to 0xf, KVM will treat
+	 * that as 0 instead of returning an error since userspace might do
+	 * that when the guest is migrated from a host with older KVM,
+	 * which sets the field to 0xf when the host value is 0xf.
+	 */
+	pmon = cpuid_feature_extract_unsigned_field(val, ID_DFR0_PERFMON_SHIFT);
+	pmon = (pmon == 0xf) ? 0 : pmon;
+	lim_pmon = cpuid_feature_extract_unsigned_field(lim, ID_DFR0_PERFMON_SHIFT);
+	if (pmon > lim_pmon)
+		return -E2BIG;
+
+	if (pmon == 1 || pmon == 2)
+		/* PMUv1 or PMUv2 is not allowed on ARMv8. */
+		return -EINVAL;
+
+	/* Check if there is a conflict with a request via KVM_ARM_VCPU_INIT */
+	if (kvm_vcpu_has_pmu(vcpu) ^ (pmon >= ID_DFR0_PERFMON_8_0))
+		return -EPERM;
+
+	return 0;
+}
+
 static void init_id_aa64pfr0_el1_desc(struct id_reg_desc *id_reg)
 {
 	u64 limit = id_reg->vcpu_limit_val;
@@ -703,6 +772,31 @@ static void init_id_aa64isar2_el1_desc(struct id_reg_desc *id_reg)
 		id_reg->vcpu_limit_val &= ~ISAR2_PTRAUTH_MASK;
 }
 
+static void init_id_aa64dfr0_el1_desc(struct id_reg_desc *id_reg)
+{
+	u64 limit = id_reg->vcpu_limit_val;
+
+	/* Limit guests to PMUv3 for ARMv8.4 */
+	limit = cpuid_feature_cap_perfmon_field(limit, ID_AA64DFR0_PMUVER_SHIFT,
+						ID_AA64DFR0_PMUVER_8_4);
+	/* Limit debug to ARMv8.0 */
+	limit &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER);
+	limit |= (FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER), 6));
+
+	/* Hide SPE from guests */
+	limit &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_PMSVER);
+
+	id_reg->vcpu_limit_val = limit;
+}
+
+static void init_id_dfr0_el1_desc(struct id_reg_desc *id_reg)
+{
+	/* Limit guests to PMUv3 for ARMv8.4 */
+	id_reg->vcpu_limit_val =
+		cpuid_feature_cap_perfmon_field(id_reg->vcpu_limit_val,
+						ID_DFR0_PERFMON_SHIFT,
+						ID_DFR0_PERFMON_8_4);
+}
 
 static u64 vcpu_mask_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu,
 					 const struct id_reg_desc *idr)
@@ -729,6 +823,18 @@ static u64 vcpu_mask_id_aa64isar2_el1(const struct kvm_vcpu *vcpu,
 }
 
 
+static u64 vcpu_mask_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu,
+					 const struct id_reg_desc *idr)
+{
+	return kvm_vcpu_has_pmu(vcpu) ? 0 : ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER);
+}
+
+static u64 vcpu_mask_id_dfr0_el1(const struct kvm_vcpu *vcpu,
+				     const struct id_reg_desc *idr)
+{
+	return kvm_vcpu_has_pmu(vcpu) ? 0 : ARM64_FEATURE_MASK(ID_DFR0_PERFMON);
+}
+
 static int validate_id_reg(struct kvm_vcpu *vcpu,
 			   const struct id_reg_desc *id_reg, u64 val)
 {
@@ -2186,28 +2292,9 @@ static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id)
 	const struct id_reg_desc *id_reg = get_id_reg_desc(id);
 
 	if (id_reg)
-		return __read_id_reg(vcpu, id_reg);
-
-	val = read_kvm_id_reg(vcpu->kvm, id);
-	switch (id) {
-	case SYS_ID_AA64DFR0_EL1:
-		/* Limit debug to ARMv8.0 */
-		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER);
-		val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER), 6);
-		/* Limit guests to PMUv3 for ARMv8.4 */
-		val = cpuid_feature_cap_perfmon_field(val,
-						      ID_AA64DFR0_PMUVER_SHIFT,
-						      kvm_vcpu_has_pmu(vcpu) ? ID_AA64DFR0_PMUVER_8_4 : 0);
-		/* Hide SPE from guests */
-		val &= ~ARM64_FEATURE_MASK(ID_AA64DFR0_PMSVER);
-		break;
-	case SYS_ID_DFR0_EL1:
-		/* Limit guests to PMUv3 for ARMv8.4 */
-		val = cpuid_feature_cap_perfmon_field(val,
-						      ID_DFR0_PERFMON_SHIFT,
-						      kvm_vcpu_has_pmu(vcpu) ? ID_DFR0_PERFMON_8_4 : 0);
-		break;
-	}
+		val = __read_id_reg(vcpu, id_reg);
+	else
+		val = read_kvm_id_reg(vcpu->kvm, id);
 
 	return val;
 }
@@ -4028,15 +4115,48 @@ static struct id_reg_desc id_aa64mmfr0_el1_desc = {
 	},
 };
 
+static struct id_reg_desc id_aa64dfr0_el1_desc = {
+	.reg_desc = ID_SANITISED(ID_AA64DFR0_EL1),
+	/*
+	 * PMUVER doesn't follow the ID scheme for fields in ID registers.
+	 * So, it will be validated by validate_id_aa64dfr0_el1.
+	 */
+	.ignore_mask = ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER),
+	.init = init_id_aa64dfr0_el1_desc,
+	.validate = validate_id_aa64dfr0_el1,
+	.vcpu_mask = vcpu_mask_id_aa64dfr0_el1,
+	.ftr_bits = {
+		S_FTR_BITS(FTR_LOWER_SAFE, ID_AA64DFR0_DOUBLELOCK_SHIFT, 0xf),
+	},
+};
+
+static struct id_reg_desc id_dfr0_el1_desc = {
+	.reg_desc = ID_SANITISED(ID_DFR0_EL1),
+	/*
+	 * PERFMON doesn't follow the ID scheme for fields in ID registers.
+	 * So, it will be validated by validate_id_dfr0_el1.
+	 */
+	.ignore_mask = ARM64_FEATURE_MASK(ID_DFR0_PERFMON),
+	.init = init_id_dfr0_el1_desc,
+	.validate = validate_id_dfr0_el1,
+	.vcpu_mask = vcpu_mask_id_dfr0_el1,
+};
+
 #define ID_DESC(id_reg_name, id_reg_desc)	\
 	[IDREG_IDX(SYS_##id_reg_name)] = (id_reg_desc)
 
 /* A table for ID registers's information. */
 static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {
+	/* CRm=1 */
+	ID_DESC(ID_DFR0_EL1, &id_dfr0_el1_desc),
+
 	/* CRm=4 */
 	ID_DESC(ID_AA64PFR0_EL1, &id_aa64pfr0_el1_desc),
 	ID_DESC(ID_AA64PFR1_EL1, &id_aa64pfr1_el1_desc),
 
+	/* CRm=5 */
+	ID_DESC(ID_AA64DFR0_EL1, &id_aa64dfr0_el1_desc),
+
 	/* CRm=6 */
 	ID_DESC(ID_AA64ISAR0_EL1, &id_aa64isar0_el1_desc),
 	ID_DESC(ID_AA64ISAR1_EL1, &id_aa64isar1_el1_desc),
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 16/38] KVM: arm64: Make ID_DFR1_EL1 writable
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

This patch adds id_reg_desc for ID_DFR1_EL1 to make it writable
by userspace.
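
For context (my reading, not stated in the patch): MTPMU is a signed
field, so the S_FTR_BITS() entry below with a safe value of 0xf allows
userspace to set the field to 0xf, i.e. -1, meaning FEAT_MTPMU is not
implemented. Since -1 compares as lower than 0, such a request passes
the lower-safe check regardless of the host value.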

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9eca085886f5..3892278deb09 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -4142,6 +4142,13 @@ static struct id_reg_desc id_dfr0_el1_desc = {
 	.vcpu_mask = vcpu_mask_id_dfr0_el1,
 };
 
+static struct id_reg_desc id_dfr1_el1_desc = {
+	.reg_desc = ID_SANITISED(ID_DFR1_EL1),
+	.ftr_bits = {
+		S_FTR_BITS(FTR_LOWER_SAFE, ID_DFR1_MTPMU_SHIFT, 0xf),
+	},
+};
+
 #define ID_DESC(id_reg_name, id_reg_desc)	\
 	[IDREG_IDX(SYS_##id_reg_name)] = (id_reg_desc)
 
@@ -4150,6 +4157,9 @@ static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {
 	/* CRm=1 */
 	ID_DESC(ID_DFR0_EL1, &id_dfr0_el1_desc),
 
+	/* CRm=3 */
+	ID_DESC(ID_DFR1_EL1, &id_dfr1_el1_desc),
+
 	/* CRm=4 */
 	ID_DESC(ID_AA64PFR0_EL1, &id_aa64pfr0_el1_desc),
 	ID_DESC(ID_AA64PFR1_EL1, &id_aa64pfr1_el1_desc),
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 17/38] KVM: arm64: Make ID_MMFR0_EL1 writable
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

This patch adds id_reg_desc for ID_MMFR0_EL1 to make it writable
by userspace.
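
Presumably the same reasoning as for ID_DFR1_EL1.MTPMU applies to the
entries below: OUTERSHR and INNERSHR are signed fields, so a safe
value of 0xf lets userspace lower them to 0xf (-1) independently of
the host value.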

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3892278deb09..dfcf95eee139 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -4149,6 +4149,14 @@ static struct id_reg_desc id_dfr1_el1_desc = {
 	},
 };
 
+static struct id_reg_desc id_mmfr0_el1_desc = {
+	.reg_desc = ID_SANITISED(ID_MMFR0_EL1),
+	.ftr_bits = {
+		S_FTR_BITS(FTR_LOWER_SAFE, ID_MMFR0_OUTERSHR_SHIFT, 0xf),
+		S_FTR_BITS(FTR_LOWER_SAFE, ID_MMFR0_INNERSHR_SHIFT, 0xf),
+	},
+};
+
 #define ID_DESC(id_reg_name, id_reg_desc)	\
 	[IDREG_IDX(SYS_##id_reg_name)] = (id_reg_desc)
 
@@ -4156,6 +4164,7 @@ static struct id_reg_desc id_dfr1_el1_desc = {
 static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {
 	/* CRm=1 */
 	ID_DESC(ID_DFR0_EL1, &id_dfr0_el1_desc),
+	ID_DESC(ID_MMFR0_EL1, &id_mmfr0_el1_desc),
 
 	/* CRm=3 */
 	ID_DESC(ID_DFR1_EL1, &id_dfr1_el1_desc),
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 18/38] KVM: arm64: Make MVFR1_EL1 writable
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

This patch adds id_reg_desc for MVFR1_EL1 to make it writable
by userspace.

According to the Arm ARM, there are only a few valid combinations of
values that can be set for the FPHP and SIMDHP fields.  Return an
error when userspace tries to set those fields to a combination that
doesn't match any of the valid ones.
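
As a concrete example, a request with FPHP=3 and SIMDHP=0 (FP
half-precision arithmetic without the matching Advanced SIMD support)
matches none of the permitted {FPHP, SIMDHP} pairs {0, 0}, {2, 1} and
{3, 2}, and is rejected with -EINVAL.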

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index dfcf95eee139..9e090441057a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -723,6 +723,36 @@ static int validate_id_dfr0_el1(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static int validate_mvfr1_el1(struct kvm_vcpu *vcpu,
+			      const struct id_reg_desc *id_reg, u64 val)
+{
+	unsigned int fphp, simdhp;
+	struct fphp_simdhp {
+		unsigned int fphp;
+		unsigned int simdhp;
+	};
+	/* Permitted fphp/simdhp value combinations according to Arm ARM */
+	struct fphp_simdhp valid_fphp_simdhp[3] = {{0, 0}, {2, 1}, {3, 2}};
+	int i;
+	bool is_valid_fphp_simdhp = false;
+
+	fphp = cpuid_feature_extract_unsigned_field(val, MVFR1_FPHP_SHIFT);
+	simdhp = cpuid_feature_extract_unsigned_field(val, MVFR1_SIMDHP_SHIFT);
+
+	for (i = 0; i < ARRAY_SIZE(valid_fphp_simdhp); i++) {
+		if (valid_fphp_simdhp[i].fphp == fphp &&
+		    valid_fphp_simdhp[i].simdhp == simdhp) {
+			is_valid_fphp_simdhp = true;
+			break;
+		}
+	}
+
+	if (!is_valid_fphp_simdhp)
+		return -EINVAL;
+
+	return 0;
+}
+
 static void init_id_aa64pfr0_el1_desc(struct id_reg_desc *id_reg)
 {
 	u64 limit = id_reg->vcpu_limit_val;
@@ -4157,6 +4187,11 @@ static struct id_reg_desc id_mmfr0_el1_desc = {
 	},
 };
 
+static struct id_reg_desc mvfr1_el1_desc = {
+	.reg_desc = ID_SANITISED(MVFR1_EL1),
+	.validate = validate_mvfr1_el1,
+};
+
 #define ID_DESC(id_reg_name, id_reg_desc)	\
 	[IDREG_IDX(SYS_##id_reg_name)] = (id_reg_desc)
 
@@ -4167,6 +4202,7 @@ static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {
 	ID_DESC(ID_MMFR0_EL1, &id_mmfr0_el1_desc),
 
 	/* CRm=3 */
+	ID_DESC(MVFR1_EL1, &mvfr1_el1_desc),
 	ID_DESC(ID_DFR1_EL1, &id_dfr1_el1_desc),
 
 	/* CRm=4 */
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 19/38] KVM: arm64: Add remaining ID registers to id_reg_desc_table
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add hidden or reserved ID registers, and the remaining ID registers
that don't require special handling, to id_reg_desc_table.
Add a 'flags' field to id_reg_desc, which is used to indicate hidden
or reserved registers. Since id_reg_desc_init() is now called even
for hidden/reserved registers, change it to do nothing for them.
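
Note that ID_DESC_UNALLOC(crm, op2) indexes the table directly by
encoding as (crm - 1) * 8 + op2; for example, ID_DESC_UNALLOC(5, 2)
lands in slot (5 - 1) * 8 + 2 = 34, matching what IDREG_IDX() computes
for the corresponding CRm=5/Op2=2 encoding.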

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 84 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 82 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 9e090441057a..479208dedd79 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -331,6 +331,11 @@ struct id_reg_desc {
 	/* Fields that are not validated by arm64_check_features. */
 	u64	ignore_mask;
 
+	/* Miscellaneous flags */
+#define ID_DESC_REG_UNALLOC	(1UL << 0)
+#define ID_DESC_REG_HIDDEN	(1UL << 1)
+	u64	flags;
+
 	/* An optional initialization function of the id_reg_desc */
 	void (*init)(struct id_reg_desc *id_reg);
 
@@ -376,8 +381,13 @@ struct id_reg_desc {
 static void id_reg_desc_init(struct id_reg_desc *id_reg)
 {
 	u32 id = reg_to_encoding(&id_reg->reg_desc);
-	u64 val = read_sanitised_ftr_reg(id);
+	u64 val;
+
+	if (id_reg->flags & (ID_DESC_REG_HIDDEN | ID_DESC_REG_UNALLOC))
+		/* Nothing to do for a hidden/unalloc ID register */
+		return;
 
+	val = read_sanitised_ftr_reg(id);
 	id_reg->vcpu_limit_val = val;
 
 	id_reg_desc_init_ftr(id_reg);
@@ -4192,33 +4202,103 @@ static struct id_reg_desc mvfr1_el1_desc = {
 	.validate = validate_mvfr1_el1,
 };
 
+#define ID_DESC_DEFAULT(name)					\
+	[IDREG_IDX(SYS_##name)] = &(struct id_reg_desc) {	\
+		.reg_desc = ID_SANITISED(name),			\
+	}
+
+#define ID_DESC_HIDDEN(name)					\
+	[IDREG_IDX(SYS_##name)] = &(struct id_reg_desc) {	\
+		.reg_desc = ID_HIDDEN(name),			\
+		.flags = ID_DESC_REG_HIDDEN,			\
+	}
+
+#define ID_DESC_UNALLOC(crm, op2)				\
+	[(crm - 1) << 3 | op2] = &(struct id_reg_desc) {	\
+		.reg_desc = ID_UNALLOCATED(crm, op2),		\
+		.flags = ID_DESC_REG_UNALLOC,			\
+	}
+
 #define ID_DESC(id_reg_name, id_reg_desc)	\
 	[IDREG_IDX(SYS_##id_reg_name)] = (id_reg_desc)
 
-/* A table for ID registers's information. */
+/*
+ * A table for ID registers' information.
+ * All entries in the table except ID_DESC_HIDDEN and ID_DESC_UNALLOC
+ * must have corresponding entries in arm64_ftr_regs[] in
+ * arch/arm64/kernel/cpufeature.c because read_sanitised_ftr_reg() is
+ * called for each of the ID registers.
+ */
 static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {
 	/* CRm=1 */
+	ID_DESC_DEFAULT(ID_PFR0_EL1),
+	ID_DESC_DEFAULT(ID_PFR1_EL1),
 	ID_DESC(ID_DFR0_EL1, &id_dfr0_el1_desc),
+	ID_DESC_HIDDEN(ID_AFR0_EL1),
 	ID_DESC(ID_MMFR0_EL1, &id_mmfr0_el1_desc),
+	ID_DESC_DEFAULT(ID_MMFR1_EL1),
+	ID_DESC_DEFAULT(ID_MMFR2_EL1),
+	ID_DESC_DEFAULT(ID_MMFR3_EL1),
+
+	/* CRm=2 */
+	ID_DESC_DEFAULT(ID_ISAR0_EL1),
+	ID_DESC_DEFAULT(ID_ISAR1_EL1),
+	ID_DESC_DEFAULT(ID_ISAR2_EL1),
+	ID_DESC_DEFAULT(ID_ISAR3_EL1),
+	ID_DESC_DEFAULT(ID_ISAR4_EL1),
+	ID_DESC_DEFAULT(ID_ISAR5_EL1),
+	ID_DESC_DEFAULT(ID_MMFR4_EL1),
+	ID_DESC_DEFAULT(ID_ISAR6_EL1),
 
 	/* CRm=3 */
+	ID_DESC_DEFAULT(MVFR0_EL1),
 	ID_DESC(MVFR1_EL1, &mvfr1_el1_desc),
+	ID_DESC_DEFAULT(MVFR2_EL1),
+	ID_DESC_UNALLOC(3, 3),
+	ID_DESC_DEFAULT(ID_PFR2_EL1),
 	ID_DESC(ID_DFR1_EL1, &id_dfr1_el1_desc),
+	ID_DESC_DEFAULT(ID_MMFR5_EL1),
+	ID_DESC_UNALLOC(3, 7),
 
 	/* CRm=4 */
 	ID_DESC(ID_AA64PFR0_EL1, &id_aa64pfr0_el1_desc),
 	ID_DESC(ID_AA64PFR1_EL1, &id_aa64pfr1_el1_desc),
+	ID_DESC_UNALLOC(4, 2),
+	ID_DESC_UNALLOC(4, 3),
+	ID_DESC_DEFAULT(ID_AA64ZFR0_EL1),
+	ID_DESC_UNALLOC(4, 5),
+	ID_DESC_UNALLOC(4, 6),
+	ID_DESC_UNALLOC(4, 7),
 
 	/* CRm=5 */
 	ID_DESC(ID_AA64DFR0_EL1, &id_aa64dfr0_el1_desc),
+	ID_DESC_DEFAULT(ID_AA64DFR1_EL1),
+	ID_DESC_UNALLOC(5, 2),
+	ID_DESC_UNALLOC(5, 3),
+	ID_DESC_HIDDEN(ID_AA64AFR0_EL1),
+	ID_DESC_HIDDEN(ID_AA64AFR1_EL1),
+	ID_DESC_UNALLOC(5, 6),
+	ID_DESC_UNALLOC(5, 7),
 
 	/* CRm=6 */
 	ID_DESC(ID_AA64ISAR0_EL1, &id_aa64isar0_el1_desc),
 	ID_DESC(ID_AA64ISAR1_EL1, &id_aa64isar1_el1_desc),
 	ID_DESC(ID_AA64ISAR2_EL1, &id_aa64isar2_el1_desc),
+	ID_DESC_UNALLOC(6, 3),
+	ID_DESC_UNALLOC(6, 4),
+	ID_DESC_UNALLOC(6, 5),
+	ID_DESC_UNALLOC(6, 6),
+	ID_DESC_UNALLOC(6, 7),
 
 	/* CRm=7 */
 	ID_DESC(ID_AA64MMFR0_EL1, &id_aa64mmfr0_el1_desc),
+	ID_DESC_DEFAULT(ID_AA64MMFR1_EL1),
+	ID_DESC_DEFAULT(ID_AA64MMFR2_EL1),
+	ID_DESC_UNALLOC(7, 3),
+	ID_DESC_UNALLOC(7, 4),
+	ID_DESC_UNALLOC(7, 5),
+	ID_DESC_UNALLOC(7, 6),
+	ID_DESC_UNALLOC(7, 7),
 };
 
 static inline struct id_reg_desc *get_id_reg_desc(u32 id)
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 20/38] KVM: arm64: Use id_reg_desc_table for ID registers
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Use id_reg_desc_table for ID registers instead of sys_reg_descs, as
id_reg_desc_table has all the ID register entries that sys_reg_descs
has.  Remove the ID register entries from sys_reg_descs, as they are
no longer used.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 213 ++++++++++++++++----------------------
 1 file changed, 92 insertions(+), 121 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 479208dedd79..1045319c474e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -378,6 +378,11 @@ struct id_reg_desc {
 	struct arm64_ftr_bits	ftr_bits[FTR_FIELDS_NUM];
 };
 
+static inline struct id_reg_desc *sys_to_id_desc(const struct sys_reg_desc *r)
+{
+	return container_of(r, struct id_reg_desc, reg_desc);
+}
+
 static void id_reg_desc_init(struct id_reg_desc *id_reg)
 {
 	u32 id = reg_to_encoding(&id_reg->reg_desc);
@@ -2326,23 +2331,15 @@ static u64 __read_id_reg(const struct kvm_vcpu *vcpu,
 	return val;
 }
 
-static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 id)
+static u64 read_id_reg_with_encoding(const struct kvm_vcpu *vcpu, u32 encoding)
 {
-	u64 val;
-	const struct id_reg_desc *id_reg = get_id_reg_desc(id);
-
-	if (id_reg)
-		val = __read_id_reg(vcpu, id_reg);
-	else
-		val = read_kvm_id_reg(vcpu->kvm, id);
-
-	return val;
+	return __read_id_reg(vcpu, get_id_reg_desc(encoding));
 }
 
 static u64 read_id_reg(const struct kvm_vcpu *vcpu,
 		       struct sys_reg_desc const *r, bool raz)
 {
-	return raz ? 0 : read_id_reg_with_encoding(vcpu, reg_to_encoding(r));
+	return raz ? 0 : __read_id_reg(vcpu, sys_to_id_desc(r));
 }
 
 static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
@@ -2456,13 +2453,7 @@ static int __set_id_reg(struct kvm_vcpu *vcpu,
 	if (test_bit(KVM_ARCH_FLAG_EL1_32BIT, &vcpu->kvm->arch.flags))
 		return -EPERM;
 
-	/*
-	 * Don't allow to modify the register's value if the register doesn't
-	 * have the id_reg_desc.
-	 */
-	id_reg = get_id_reg_desc(encoding);
-	if (!id_reg)
-		return -EINVAL;
+	id_reg = sys_to_id_desc(rd);
 
 	/*
 	 * Skip the validation of AArch32 ID registers if the system doesn't
@@ -2686,83 +2677,6 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
 	{ SYS_DESC(SYS_MPIDR_EL1), NULL, reset_mpidr, MPIDR_EL1 },
 
-	/*
-	 * ID regs: all ID_SANITISED() entries here must have corresponding
-	 * entries in arm64_ftr_regs[].
-	 */
-
-	/* AArch64 mappings of the AArch32 ID registers */
-	/* CRm=1 */
-	ID_SANITISED(ID_PFR0_EL1),
-	ID_SANITISED(ID_PFR1_EL1),
-	ID_SANITISED(ID_DFR0_EL1),
-	ID_HIDDEN(ID_AFR0_EL1),
-	ID_SANITISED(ID_MMFR0_EL1),
-	ID_SANITISED(ID_MMFR1_EL1),
-	ID_SANITISED(ID_MMFR2_EL1),
-	ID_SANITISED(ID_MMFR3_EL1),
-
-	/* CRm=2 */
-	ID_SANITISED(ID_ISAR0_EL1),
-	ID_SANITISED(ID_ISAR1_EL1),
-	ID_SANITISED(ID_ISAR2_EL1),
-	ID_SANITISED(ID_ISAR3_EL1),
-	ID_SANITISED(ID_ISAR4_EL1),
-	ID_SANITISED(ID_ISAR5_EL1),
-	ID_SANITISED(ID_MMFR4_EL1),
-	ID_SANITISED(ID_ISAR6_EL1),
-
-	/* CRm=3 */
-	ID_SANITISED(MVFR0_EL1),
-	ID_SANITISED(MVFR1_EL1),
-	ID_SANITISED(MVFR2_EL1),
-	ID_UNALLOCATED(3,3),
-	ID_SANITISED(ID_PFR2_EL1),
-	ID_HIDDEN(ID_DFR1_EL1),
-	ID_SANITISED(ID_MMFR5_EL1),
-	ID_UNALLOCATED(3,7),
-
-	/* AArch64 ID registers */
-	/* CRm=4 */
-	ID_SANITISED(ID_AA64PFR0_EL1),
-	ID_SANITISED(ID_AA64PFR1_EL1),
-	ID_UNALLOCATED(4,2),
-	ID_UNALLOCATED(4,3),
-	ID_SANITISED(ID_AA64ZFR0_EL1),
-	ID_UNALLOCATED(4,5),
-	ID_UNALLOCATED(4,6),
-	ID_UNALLOCATED(4,7),
-
-	/* CRm=5 */
-	ID_SANITISED(ID_AA64DFR0_EL1),
-	ID_SANITISED(ID_AA64DFR1_EL1),
-	ID_UNALLOCATED(5,2),
-	ID_UNALLOCATED(5,3),
-	ID_HIDDEN(ID_AA64AFR0_EL1),
-	ID_HIDDEN(ID_AA64AFR1_EL1),
-	ID_UNALLOCATED(5,6),
-	ID_UNALLOCATED(5,7),
-
-	/* CRm=6 */
-	ID_SANITISED(ID_AA64ISAR0_EL1),
-	ID_SANITISED(ID_AA64ISAR1_EL1),
-	ID_SANITISED(ID_AA64ISAR2_EL1),
-	ID_UNALLOCATED(6,3),
-	ID_UNALLOCATED(6,4),
-	ID_UNALLOCATED(6,5),
-	ID_UNALLOCATED(6,6),
-	ID_UNALLOCATED(6,7),
-
-	/* CRm=7 */
-	ID_SANITISED(ID_AA64MMFR0_EL1),
-	ID_SANITISED(ID_AA64MMFR1_EL1),
-	ID_SANITISED(ID_AA64MMFR2_EL1),
-	ID_UNALLOCATED(7,3),
-	ID_UNALLOCATED(7,4),
-	ID_UNALLOCATED(7,5),
-	ID_UNALLOCATED(7,6),
-	ID_UNALLOCATED(7,7),
-
 	{ SYS_DESC(SYS_SCTLR_EL1), access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
 	{ SYS_DESC(SYS_ACTLR_EL1), access_actlr, reset_actlr, ACTLR_EL1 },
 	{ SYS_DESC(SYS_CPACR_EL1), NULL, reset_val, CPACR_EL1, 0 },
@@ -3577,12 +3491,38 @@ static bool is_imp_def_sys_reg(struct sys_reg_params *params)
 	return params->Op0 == 3 && (params->CRn & 0b1011) == 0b1011;
 }
 
+static inline const struct sys_reg_desc *
+find_id_reg(const struct sys_reg_params *params)
+{
+	u32 id = reg_to_encoding(params);
+	struct id_reg_desc *idr;
+
+	if (!is_id_reg(id))
+		return NULL;
+
+	idr = get_id_reg_desc(id);
+
+	return idr ? &idr->reg_desc : NULL;
+}
+
+static const struct sys_reg_desc *
+find_sys_reg(const struct sys_reg_params *params)
+{
+	const struct sys_reg_desc *r = NULL;
+
+	r = find_id_reg(params);
+	if (!r)
+		r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+
+	return r;
+}
+
 static int emulate_sys_reg(struct kvm_vcpu *vcpu,
 			   struct sys_reg_params *params)
 {
 	const struct sys_reg_desc *r;
 
-	r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	r = find_sys_reg(params);
 
 	if (likely(r)) {
 		perform_access(vcpu, params, r);
@@ -3597,6 +3537,8 @@ static int emulate_sys_reg(struct kvm_vcpu *vcpu,
 	return 1;
 }
 
+static void kvm_reset_id_regs(struct kvm_vcpu *vcpu);
+
 /**
  * kvm_reset_sys_regs - sets system registers to reset value
  * @vcpu: The VCPU pointer
@@ -3611,6 +3553,8 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
 	for (i = 0; i < ARRAY_SIZE(sys_reg_descs); i++)
 		if (sys_reg_descs[i].reset)
 			sys_reg_descs[i].reset(vcpu, &sys_reg_descs[i]);
+
+	kvm_reset_id_regs(vcpu);
 }
 
 /**
@@ -3694,7 +3638,7 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
 	if (!index_to_params(id, &params))
 		return NULL;
 
-	r = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	r = find_sys_reg(&params);
 
 	/* Not saved in the sys_reg array and not otherwise accessible? */
 	if (r && !(r->reg || r->get_user))
@@ -3991,6 +3935,8 @@ static int walk_one_sys_reg(const struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static int walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind);
+
 /* Assumed ordered tables, see kvm_sys_reg_table_init. */
 static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
 {
@@ -4006,6 +3952,12 @@ static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
 		if (err)
 			return err;
 	}
+
+	err = walk_id_regs(vcpu, uind);
+	if (err < 0)
+		return err;
+
+	total += err;
 	return total;
 }
 
@@ -4306,6 +4258,25 @@ static inline struct id_reg_desc *get_id_reg_desc(u32 id)
 	return id_reg_desc_table[IDREG_IDX(id)];
 }
 
+static int walk_id_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
+{
+	const struct sys_reg_desc *sys_reg;
+	int err, i;
+	unsigned int total = 0;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_desc_table); i++) {
+		if (!id_reg_desc_table[i])
+			continue;
+
+		sys_reg = &id_reg_desc_table[i]->reg_desc;
+		err = walk_one_sys_reg(vcpu, sys_reg, &uind, &total);
+		if (err)
+			return err;
+	}
+
+	return total;
+}
+
 void kvm_ftr_bits_set_default(u8 shift, struct arm64_ftr_bits *ftrp)
 {
 	ftrp->sign = FTR_UNSIGNED;
@@ -4376,35 +4347,35 @@ void set_default_id_regs(struct kvm *kvm)
 {
 	int i;
 	u32 id;
-	const struct sys_reg_desc *rd;
-	u64 val;
 	struct id_reg_desc *idr;
-	struct sys_reg_params params = {
-		Op0(sys_reg_Op0(SYS_ID_PFR0_EL1)),
-		Op1(sys_reg_Op1(SYS_ID_PFR0_EL1)),
-		CRn(sys_reg_CRn(SYS_ID_PFR0_EL1)),
-		CRm(sys_reg_CRm(SYS_ID_PFR0_EL1)),
-		Op2(sys_reg_Op2(SYS_ID_PFR0_EL1)),
-	};
 
-	/*
-	 * Find the first entry of the ID register (ID_PFR0_EL1) from
-	 * sys_reg_descs table, and walk through only the ID register
-	 * entries in the table.
-	 */
-	rd = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
-	for (i = 0; i < KVM_ARM_ID_REG_MAX_NUM; i++, rd++) {
-		id = reg_to_encoding(rd);
-		if (WARN_ON_ONCE(!is_id_reg(id)))
-			/* Shouldn't happen */
+	for (i = 0; i < ARRAY_SIZE(id_reg_desc_table); i++) {
+		idr = id_reg_desc_table[i];
+		if (!idr)
 			continue;
 
-		if (rd->access != access_id_reg)
-			/* Hidden or reserved ID register */
+		if (idr->flags & (ID_DESC_REG_HIDDEN | ID_DESC_REG_UNALLOC))
+			/* Nothing to do for hidden/unalloc registers */
+			continue;
+
+		id = reg_to_encoding(&idr->reg_desc);
+		WARN_ON_ONCE(write_kvm_id_reg(kvm, id, idr->vcpu_limit_val));
+	}
+}
+
+static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
+{
+	int i;
+	const struct sys_reg_desc *r;
+	struct id_reg_desc *id_reg;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_desc_table); i++) {
+		id_reg = (struct id_reg_desc *)id_reg_desc_table[i];
+		if (!id_reg)
 			continue;
 
-		idr = get_id_reg_desc(id);
-		val = idr ? idr->vcpu_limit_val : read_sanitised_ftr_reg(id);
-		WARN_ON_ONCE(write_kvm_id_reg(kvm, id, val));
+		r = &id_reg->reg_desc;
+		if (r->reset)
+			r->reset(vcpu, r);
 	}
 }
-- 
2.36.0.rc0.470.gd361397f0d-goog
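
Since the ID registers are now reported via walk_id_regs(), they
continue to show up in KVM_GET_REG_LIST like any other system
register.  As a reminder, the standard two-call enumeration from
userspace (stock KVM uapi, not part of this patch):

    #include <errno.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Returns a malloc'ed list of all register ids, or NULL on error. */
    static struct kvm_reg_list *get_reg_list(int vcpu_fd)
    {
            struct kvm_reg_list probe = { .n = 0 };
            struct kvm_reg_list *list;

            /* First call fails with E2BIG and reports the needed count. */
            if (!ioctl(vcpu_fd, KVM_GET_REG_LIST, &probe) || errno != E2BIG)
                    return NULL;

            list = malloc(sizeof(*list) + probe.n * sizeof(uint64_t));
            if (!list)
                    return NULL;

            list->n = probe.n;
            if (ioctl(vcpu_fd, KVM_GET_REG_LIST, list)) {
                    free(list);
                    return NULL;
            }

            return list;    /* list->reg[0..n-1] holds KVM_REG_* ids */
    }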



* [PATCH v7 21/38] KVM: arm64: Add consistency checking for frac fields of ID registers
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

A feature fractional field of an ID register cannot simply be
validated at KVM_SET_ONE_REG time because its validity depends on the
value of its (main) feature field, which could live in a different ID
register (and might be set later).  Validate fractional fields at the
first KVM_RUN instead.
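
To make the rule concrete, a standalone sketch of the check described
above (plain integers stand in for the 4-bit register fields; the real
code instead defers to arm64_check_features(), which knows each
field's signedness):

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * A fractional field only needs validation when its main feature
     * field is at the host limit; any frac value is acceptable when
     * the main feature level is below the limit.
     */
    static bool frac_value_ok(unsigned int main_val, unsigned int main_lim,
                              unsigned int frac_val, unsigned int frac_lim)
    {
            if (main_val != main_lim)
                    return true;    /* feature below the limit */

            return frac_val <= frac_lim;    /* simplified comparison */
    }

    int main(void)
    {
            /* e.g. ID_AA64PFR0_EL1.RAS vs ID_AA64PFR1_EL1.RASFRAC */
            printf("%d\n", frac_value_ok(1, 2, 3, 0)); /* 1: RAS below limit */
            printf("%d\n", frac_value_ok(2, 2, 1, 0)); /* 0: frac too high */
            return 0;
    }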

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/kvm_host.h |   1 +
 arch/arm64/kvm/arm.c              |   3 +
 arch/arm64/kvm/sys_regs.c         | 113 +++++++++++++++++++++++++++++-
 3 files changed, 114 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index dbed94e759a8..b85af83b4542 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -789,6 +789,7 @@ long kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
 void set_default_id_regs(struct kvm *kvm);
 int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval);
 void kvm_vcpu_breakpoint_config(struct kvm_vcpu *vcpu);
+int kvm_id_regs_check_frac_fields(const struct kvm_vcpu *vcpu);
 
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 04312f7ee0da..5c1cee04aa95 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -524,6 +524,9 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
 	if (likely(vcpu_has_run_once(vcpu)))
 		return 0;
 
+	if (!kvm_vm_is_protected(kvm) && kvm_id_regs_check_frac_fields(vcpu))
+		return -EPERM;
+
 	kvm_arm_vcpu_init_debug(vcpu);
 
 	if (likely(irqchip_in_kernel(kvm))) {
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 1045319c474e..fc7a8f2539a4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -4028,6 +4028,100 @@ void kvm_sys_reg_table_init(void)
 	id_reg_desc_init_all();
 }
 
+/* ID register's fractional field information with its feature field. */
+struct feature_frac {
+	u32	id;
+	u32	shift;
+	u32	frac_id;
+	u32	frac_shift;
+};
+
+static struct feature_frac feature_frac_table[] = {
+	{
+		.frac_id = SYS_ID_AA64PFR1_EL1,
+		.frac_shift = ID_AA64PFR1_RASFRAC_SHIFT,
+		.id = SYS_ID_AA64PFR0_EL1,
+		.shift = ID_AA64PFR0_RAS_SHIFT,
+	},
+	{
+		.frac_id = SYS_ID_AA64PFR1_EL1,
+		.frac_shift = ID_AA64PFR1_MPAMFRAC_SHIFT,
+		.id = SYS_ID_AA64PFR0_EL1,
+		.shift = ID_AA64PFR0_MPAM_SHIFT,
+	},
+	{
+		.frac_id = SYS_ID_AA64PFR1_EL1,
+		.frac_shift = ID_AA64PFR1_CSV2FRAC_SHIFT,
+		.id = SYS_ID_AA64PFR0_EL1,
+		.shift = ID_AA64PFR0_CSV2_SHIFT,
+	},
+};
+
+/*
+ * Return non-zero if the feature/fractional field pair is not
+ * supported, and zero otherwise.
+ * This function validates only the fractional feature field, and
+ * relies on the (main) feature field having already been validated
+ * through arm64_check_features().
+ */
+static int vcpu_id_reg_feature_frac_check(const struct kvm_vcpu *vcpu,
+					  const struct feature_frac *ftr_frac)
+{
+	const struct id_reg_desc *id_reg;
+	u32 id;
+	u64 val, lim, mask;
+
+	/* Check if the feature field value is same as the limit */
+	id = ftr_frac->id;
+
+	mask = ARM64_FEATURE_FIELD_MASK << ftr_frac->shift;
+	id_reg = get_id_reg_desc(id);
+	val = __read_id_reg(vcpu, id_reg) & mask;
+	lim = id_reg->vcpu_limit_val & mask;
+
+	if (val != lim)
+		/*
+		 * The feature level is lower than the limit.
+		 * Any fractional version should be fine.
+		 */
+		return 0;
+
+	/* Check the fractional feature field */
+	id = ftr_frac->frac_id;
+
+	mask = ARM64_FEATURE_FIELD_MASK << ftr_frac->frac_shift;
+	id_reg = get_id_reg_desc(id);
+	val = __read_id_reg(vcpu, id_reg) & mask;
+	lim = id_reg->vcpu_limit_val & mask;
+
+	if (val == lim)
+		/*
+		 * Both the feature and fractional fields are the same
+		 * as limit.
+		 */
+		return 0;
+
+	return arm64_check_features(id_reg->ftr_bits, val, lim);
+}
+
+int kvm_id_regs_check_frac_fields(const struct kvm_vcpu *vcpu)
+{
+	int i, err;
+	const struct feature_frac *frac;
+
+	/*
+	 * Check ID registers' fractional fields, which aren't checked
+	 * at KVM_SET_ONE_REG.
+	 */
+	for (i = 0; i < ARRAY_SIZE(feature_frac_table); i++) {
+		frac = &feature_frac_table[i];
+		err = vcpu_id_reg_feature_frac_check(vcpu, frac);
+		if (err)
+			return err;
+	}
+	return 0;
+}
+
 /*
  * Update the ID register's field with @fval for the guest.
  * The caller is expected to hold the kvm->lock.
@@ -4055,9 +4149,6 @@ static struct id_reg_desc id_aa64pfr0_el1_desc = {
 
 static struct id_reg_desc id_aa64pfr1_el1_desc = {
 	.reg_desc = ID_SANITISED(ID_AA64PFR1_EL1),
-	.ignore_mask = ARM64_FEATURE_MASK(ID_AA64PFR1_RASFRAC) |
-		       ARM64_FEATURE_MASK(ID_AA64PFR1_MPAMFRAC) |
-		       ARM64_FEATURE_MASK(ID_AA64PFR1_CSV2FRAC),
 	.init = init_id_aa64pfr1_el1_desc,
 	.validate = validate_id_aa64pfr1_el1,
 	.vcpu_mask = vcpu_mask_id_aa64pfr1_el1,
@@ -4329,6 +4420,8 @@ static void id_reg_desc_init_all(void)
 {
 	int i;
 	struct id_reg_desc *id_reg;
+	struct feature_frac *frac;
+	u64 ftr_mask = ARM64_FEATURE_FIELD_MASK;
 
 	for (i = 0; i < ARRAY_SIZE(id_reg_desc_table); i++) {
 		id_reg = (struct id_reg_desc *)id_reg_desc_table[i];
@@ -4337,6 +4430,20 @@ static void id_reg_desc_init_all(void)
 
 		id_reg_desc_init(id_reg);
 	}
+
+	/*
+	 * Update ignore_mask of ID registers based on the fractional
+	 * field information.  Any ID register that has fractional
+	 * fields is expected to have its own id_reg_desc.
+	 */
+	for (i = 0; i < ARRAY_SIZE(feature_frac_table); i++) {
+		frac = &feature_frac_table[i];
+		id_reg = get_id_reg_desc(frac->frac_id);
+		if (WARN_ON_ONCE(!id_reg))
+			continue;
+
+		id_reg->ignore_mask |= ftr_mask << frac->frac_shift;
+	}
 }
 
 /*
-- 
2.36.0.rc0.470.gd361397f0d-goog



* [PATCH v7 22/38] KVM: arm64: Introduce KVM_CAP_ARM_ID_REG_CONFIGURABLE capability
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Introduce a new capability KVM_CAP_ARM_ID_REG_CONFIGURABLE to indicate
that ID registers are writable by userspace.
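
For illustration, a minimal userspace sketch of probing the capability
and then lowering one ID register field before the vCPU first runs.
The helper name and the choice of field (ID_AA64ISAR0_EL1.SHA3) are
hypothetical; the sketch assumes uapi headers with this series applied
and an already-created vCPU fd:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* ID_AA64ISAR0_EL1 is Op0=3, Op1=0, CRn=0, CRm=6, Op2=0. */
    #define ID_AA64ISAR0_EL1_REG    ARM64_SYS_REG(3, 0, 0, 6, 0)

    static int hide_sha3(int kvm_fd, int vcpu_fd)
    {
            uint64_t val;
            struct kvm_one_reg reg = {
                    .id   = ID_AA64ISAR0_EL1_REG,
                    .addr = (uint64_t)(uintptr_t)&val,
            };

            if (ioctl(kvm_fd, KVM_CHECK_EXTENSION,
                      KVM_CAP_ARM_ID_REG_CONFIGURABLE) <= 0)
                    return -1;      /* ID registers are not writable */

            if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg))
                    return -1;

            val &= ~(0xfULL << 32); /* clear ID_AA64ISAR0_EL1.SHA3 */

            return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }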

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 Documentation/virt/kvm/api.rst | 16 ++++++++++++++++
 arch/arm64/kvm/arm.c           |  1 +
 include/uapi/linux/kvm.h       |  1 +
 3 files changed, 18 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 85c7abc51af5..e2e7b08e64c1 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -2601,6 +2601,14 @@ EINVAL.
 After the vcpu's SVE configuration is finalized, further attempts to
 write this register will fail with EPERM.
 
+The arm64 ID registers with encoding Op0=3, Op1=0, CRn=0, 1<=CRm<8, 0<=Op2<8
+may be modified by userspace only for AArch64 EL1 vCPUs and only if
+KVM_CAP_ARM_ID_REG_CONFIGURABLE is available.
+They become immutable after calling KVM_RUN on any of the vcpus in
+the guest (modifying values of those registers will fail).
+Those ID registers are always immutable for AArch32 EL1 vCPUs, for
+which KVM_ARM_VCPU_EL1_32BIT is configured, even when
+KVM_CAP_ARM_ID_REG_CONFIGURABLE is available.
 
 MIPS registers are mapped using the lower 32 bits.  The upper 16 of that is
 the register group type:
@@ -7724,6 +7732,14 @@ At this time, KVM_PMU_CAP_DISABLE is the only capability.  Setting
 this capability will disable PMU virtualization for that VM.  Usermode
 should adjust CPUID leaf 0xA to reflect that the PMU is disabled.
 
+8.35 KVM_CAP_ARM_ID_REG_CONFIGURABLE
+------------------------------------
+
+:Architectures: arm64
+
+This capability indicates that userspace can modify the ID registers
+via the KVM_SET_ONE_REG ioctl.
+
 9. Known KVM API problems
 =========================
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 5c1cee04aa95..b4db368948cc 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -211,6 +211,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_SET_GUEST_DEBUG:
 	case KVM_CAP_VCPU_ATTRIBUTES:
 	case KVM_CAP_PTP_KVM:
+	case KVM_CAP_ARM_ID_REG_CONFIGURABLE:
 		r = 1;
 		break;
 	case KVM_CAP_SET_GUEST_DEBUG2:
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 91a6fe4e02c0..171f1d0ea1e1 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1144,6 +1144,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_S390_MEM_OP_EXTENSION 211
 #define KVM_CAP_PMU_CAPABILITY 212
 #define KVM_CAP_DISABLE_QUIRKS2 213
+#define KVM_CAP_ARM_ID_REG_CONFIGURABLE 214
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 22/38] KVM: arm64: Introduce KVM_CAP_ARM_ID_REG_CONFIGURABLE capability
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, Will Deacon, Peter Shier, Paolo Bonzini, linux-arm-kernel

Introduce a new capability KVM_CAP_ARM_ID_REG_CONFIGURABLE to indicate
that ID registers are writable by userspace.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 Documentation/virt/kvm/api.rst | 16 ++++++++++++++++
 arch/arm64/kvm/arm.c           |  1 +
 include/uapi/linux/kvm.h       |  1 +
 3 files changed, 18 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 85c7abc51af5..e2e7b08e64c1 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -2601,6 +2601,14 @@ EINVAL.
 After the vcpu's SVE configuration is finalized, further attempts to
 write this register will fail with EPERM.
 
+The arm64 ID registers with encoding Op0=3, Op1=0, CRn=0, 1<=CRm<8, 0<=Op2<8
+are allowed to modified by userspace only for AArch64 EL1 vCPUs if
+KVM_CAP_ARM_ID_REG_CONFIGURABLE is available.
+They become immutable after calling KVM_RUN on any of the
+vcpus in the guest (modifying values of those registers will fail).
+Those ID registers are always immutable for AArch32 EL1 vCPUs, which
+KVM_ARM_VCPU_EL1_32BIT is configured for, even when
+KVM_CAP_ARM_ID_REG_CONFIGURABLE is available.
 
 MIPS registers are mapped using the lower 32 bits.  The upper 16 of that is
 the register group type:
@@ -7724,6 +7732,14 @@ At this time, KVM_PMU_CAP_DISABLE is the only capability.  Setting
 this capability will disable PMU virtualization for that VM.  Usermode
 should adjust CPUID leaf 0xA to reflect that the PMU is disabled.
 
+8.35 KVM_CAP_ARM_ID_REG_CONFIGURABLE
+------------------------------------
+
+:Architectures: arm64
+
+This capability indicates that userspace can modify the ID registers
+via KVM_SET_ONE_REG ioctl.
+
 9. Known KVM API problems
 =========================
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 5c1cee04aa95..b4db368948cc 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -211,6 +211,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_SET_GUEST_DEBUG:
 	case KVM_CAP_VCPU_ATTRIBUTES:
 	case KVM_CAP_PTP_KVM:
+	case KVM_CAP_ARM_ID_REG_CONFIGURABLE:
 		r = 1;
 		break;
 	case KVM_CAP_SET_GUEST_DEBUG2:
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 91a6fe4e02c0..171f1d0ea1e1 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1144,6 +1144,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_S390_MEM_OP_EXTENSION 211
 #define KVM_CAP_PMU_CAPABILITY 212
 #define KVM_CAP_DISABLE_QUIRKS2 213
+#define KVM_CAP_ARM_ID_REG_CONFIGURABLE 214
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.36.0.rc0.470.gd361397f0d-goog


* [PATCH v7 23/38] KVM: arm64: Add kunit test for ID register validation
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add KUnit tests for the functions that are used to validate ID
registers, a CONFIG_KVM_KUNIT_TEST option to enable the tests, and
a .kunitconfig to run them.

Since tools/testing/kunit/qemu_configs/arm64.py, the default
qemu_config for arm64, doesn't have all the parameters that the new
tests need, 'extra_qemu_params' in the default config needs to be
replaced with the one below to fully run all of these kunit tests.

 extra_qemu_params=['-M virt,virtualization=on,mte=on', '-cpu max,sve=on'])
 (the default one: extra_qemu_params=['-machine virt', '-cpu cortex-a57'])

The outputs from the tests are:
-----------------------------------------------------------------------
$ tools/testing/kunit/kunit.py run --timeout=60 --jobs=`nproc --all` \
          --arch=arm64 --cross_compile=aarch64-linux-gnu- \
          --qemu_config arm64_kvm_min.py \
          --kunitconfig=arch/arm64/kvm/.kunitconfig
[22:45:39] Configuring KUnit Kernel ...
[22:45:39] Building KUnit Kernel ...
Populating config with:
$ make ARCH=arm64 olddefconfig CROSS_COMPILE=aarch64-linux-gnu- O=.kunit
Building with:
$ make ARCH=arm64 --jobs=96 CROSS_COMPILE=aarch64-linux-gnu- O=.kunit
[22:45:47] Starting KUnit Kernel (1/1)...
[22:45:47] ============================================================
Running tests with:
$ qemu-system-aarch64 -nodefaults -m 1024 -kernel .kunit/arch/arm64/boot/Image.gz -append 'mem=1G console=tty kunit_shutdown=halt console=ttyAMA0 kunit_shutdown=reboot' -no-reboot -nographic -serial stdio -M virt,virtualization=on,mte=on -cpu max,sve=on
[22:45:48] ========== kvm-sys-regs-test-suite (14 subtests) ===========
[22:45:48] =========== vcpu_id_reg_feature_frac_check_test ============
[22:45:48] [PASSED] feat - shift:28, val:1, lim:2, frac - shift:12, val:1, lim:1
[22:45:48] [PASSED] feat - shift:28, val:1, lim:2, frac - shift:12, val:1, lim:2
[22:45:48] [PASSED] feat - shift:28, val:1, lim:2, frac - shift:12, val:2, lim:1
[22:45:48] [PASSED] feat - shift:28, val:1, lim:1, frac - shift:12, val:1, lim:1
[22:45:48] [PASSED] feat - shift:28, val:1, lim:1, frac - shift:12, val:1, lim:2
[22:45:48] [PASSED] feat - shift:28, val:1, lim:1, frac - shift:12, val:2, lim:1
[22:45:48] ======= [PASSED] vcpu_id_reg_feature_frac_check_test =======
[22:45:48] ============ validate_id_aa64mmfr0_tgran2_test =============
[22:45:48] [PASSED] gran2(field=40): val=2, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=2, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=1, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=1 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=2 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=2 gran1: val=1 limit=0
[22:45:48] [PASSED] gran2(field=40): val=1, lim=0 gran1: val=0 limit=15
[22:45:48] [PASSED] gran2(field=40): val=1, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=2, lim=0 gran1: val=15 limit=15
[22:45:48] [PASSED] gran2(field=40): val=2, lim=0 gran1: val=0 limit=0
[22:45:48] ======== [PASSED] validate_id_aa64mmfr0_tgran2_test ========
[22:45:48] ============ validate_id_aa64mmfr0_tgran2_test =============
[22:45:48] [PASSED] gran2(field=36): val=2, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=2, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=1, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=0, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=0, lim=1 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=36): val=0, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=0, lim=2 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=36): val=1, lim=0 gran1: val=0 limit=15
[22:45:48] [PASSED] gran2(field=36): val=1, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=2, lim=0 gran1: val=15 limit=15
[22:45:48] [PASSED] gran2(field=36): val=2, lim=0 gran1: val=0 limit=0
[22:45:48] ======== [PASSED] validate_id_aa64mmfr0_tgran2_test ========
[22:45:48] ============ validate_id_aa64mmfr0_tgran2_test =============
[22:45:48] [PASSED] gran2(field=32): val=2, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=2, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=1, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=1 gran1: val=1 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=2 gran1: val=2 limit=2
[22:45:48] [PASSED] gran2(field=32): val=1, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=1, lim=0 gran1: val=0 limit=1
[22:45:48] [PASSED] gran2(field=32): val=2, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=2, lim=0 gran1: val=0 limit=1
[22:45:48] [PASSED] gran2(field=32): val=2, lim=0 gran1: val=0 limit=2
[22:45:48] ======== [PASSED] validate_id_aa64mmfr0_tgran2_test ========
[22:45:48] [PASSED] validate_id_aa64pfr0_el1_test
[22:45:48] [PASSED] validate_id_aa64pfr1_el1_test
[22:45:48] [PASSED] validate_id_aa64isar0_el1_test
[22:45:48] [PASSED] validate_id_aa64isar1_el1_test
[22:45:48] [PASSED] validate_id_aa64isar2_el1_test
[22:45:48] ============== validate_id_aa64mmfr0_el1_test ==============
[22:45:48] [PASSED] gran2(field=40): val=2, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=2, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=1, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=1 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=2 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=2 gran1: val=1 limit=0
[22:45:48] [PASSED] gran2(field=40): val=1, lim=0 gran1: val=0 limit=15
[22:45:48] [PASSED] gran2(field=40): val=1, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=2, lim=0 gran1: val=15 limit=15
[22:45:48] [PASSED] gran2(field=40): val=2, lim=0 gran1: val=0 limit=0
[22:45:48] ========= [PASSED] validate_id_aa64mmfr0_el1_test ==========
[22:45:48] [PASSED] validate_id_aa64dfr0_el1_test
[22:45:48] [PASSED] validate_id_dfr0_el1_test
[22:45:48] [PASSED] validate_mvfr1_el1_test
[22:45:48] [PASSED] validate_id_reg_test
[22:45:48] ============= [PASSED] kvm-sys-regs-test-suite =============
[22:45:48] ============================================================
[22:45:48] Testing complete. Passed: 63, Failed: 0, Crashed: 0, Skipped: 0, Errors: 0
[22:45:48] Elapsed time: 8.977s total, 0.003s configuring, 7.300s building, 1.620s running
-----------------------------------------------------------------------

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/.kunitconfig    |    4 +
 arch/arm64/kvm/Kconfig         |   11 +
 arch/arm64/kvm/sys_regs.c      |    4 +
 arch/arm64/kvm/sys_regs_test.c | 1068 ++++++++++++++++++++++++++++++++
 4 files changed, 1087 insertions(+)
 create mode 100644 arch/arm64/kvm/.kunitconfig
 create mode 100644 arch/arm64/kvm/sys_regs_test.c

diff --git a/arch/arm64/kvm/.kunitconfig b/arch/arm64/kvm/.kunitconfig
new file mode 100644
index 000000000000..c564c98fc319
--- /dev/null
+++ b/arch/arm64/kvm/.kunitconfig
@@ -0,0 +1,4 @@
+CONFIG_KUNIT=y
+CONFIG_VIRTUALIZATION=y
+CONFIG_KVM=y
+CONFIG_KVM_KUNIT_TEST=y
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 8a5fbbf084df..0d628d0e7dd5 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -56,4 +56,15 @@ config NVHE_EL2_DEBUG
 
 	  If unsure, say N.
 
+config KVM_KUNIT_TEST
+	bool "KUnit tests for KVM on ARM64 processors" if !KUNIT_ALL_TESTS
+	depends on KVM && KUNIT
+	default KUNIT_ALL_TESTS
+	help
+	  Say Y here to enable KUnit tests for KVM on ARM64.
+	  These tests are only useful for KVM/ARM development and are
+	  not intended for inclusion in a production build.
+
+	  If unsure, say N.
+
 endif # VIRTUALIZATION
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index fc7a8f2539a4..a71c52aee34e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -4486,3 +4486,7 @@ static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
 			r->reset(vcpu, r);
 	}
 }
+
+#if IS_ENABLED(CONFIG_KVM_KUNIT_TEST)
+#include "sys_regs_test.c"
+#endif
diff --git a/arch/arm64/kvm/sys_regs_test.c b/arch/arm64/kvm/sys_regs_test.c
new file mode 100644
index 000000000000..dff146fe0e62
--- /dev/null
+++ b/arch/arm64/kvm/sys_regs_test.c
@@ -0,0 +1,1068 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KUnit tests for arch/arm64/kvm/sys_regs.c.
+ */
+
+#include <linux/module.h>
+#include <kunit/test.h>
+#include <linux/kvm_host.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+/*
+ * Create a vcpu with the minimum fields required for testing in this file
+ * including the struct kvm.  Any resources that are allocated by this
+ * function must be allocated by kunit_* so that we don't need to explicitly
+ * free them.
+ */
+static struct kvm_vcpu *test_kvm_vcpu_init(struct kunit *test)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm *kvm;
+
+	kvm = kunit_kzalloc(test, sizeof(struct kvm), GFP_KERNEL);
+	if (!kvm)
+		return NULL;
+
+	vcpu = kunit_kzalloc(test, sizeof(struct kvm_vcpu), GFP_KERNEL);
+	if (!vcpu) {
+		kunit_kfree(test, kvm);
+		return NULL;
+	}
+
+	vcpu->cpu = -1;
+	vcpu->kvm = kvm;
+	vcpu->vcpu_id = 0;
+
+	return vcpu;
+}
+
+static void test_kvm_vcpu_fini(struct kunit *test, struct kvm_vcpu *vcpu)
+{
+	if (vcpu->kvm)
+		kunit_kfree(test, vcpu->kvm);
+
+	kunit_kfree(test, vcpu);
+}
+
+/* Test parameter information to test arm64_check_features */
+struct check_features_test {
+	u64	check_types;
+	u64	value;
+	u64	limit;
+	int	expected;
+};
+
+/* Used to define test parameters of vcpu_id_reg_feature_frac_check_test() */
+struct feat_info {
+	u32	id;
+	u32	shift;
+	u32	value;
+	u32	limit;
+};
+
+struct frac_check_test {
+	struct feat_info feat;
+	struct feat_info frac_feat;
+	int ret;
+};
+
+#define	FRAC_FEAT(id, shift, value, limit)	{id, shift, value, limit}
+
+/* Test parameters for vcpu_id_reg_feature_frac_check_test() */
+struct frac_check_test frac_params[] = {
+	{
+		/*
+		 * The feature value is smaller than its limit.
+		 * Expect no error regardless of the frac value.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 2),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 1),
+		0,
+	},
+	{
+		/*
+		 * The feature value is smaller than its limit.
+		 * Expect no error regardless of the frac value.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 2),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 2),
+		0,
+	},
+	{
+		/*
+		 * The feature value is smaller than its limit.
+		 * Expect no error regardless of the frac value.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 2),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 2, 1),
+		0,
+	},
+	{
+		/*
+		 * Both the feature and frac values are the same as their limits.
+		 * Expect no error.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 1),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 1),
+		0,
+	},
+	{
+		/*
+		 * The feature value is the same as its limit, and the frac value
+		 * is smaller than its limit. Expect no error.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 1),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 2),
+		0,
+	},
+	{
+		/*
+		 * The feature value is the same as its limit, and the frac value
+		 * is larger than its limit. Expect an error.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 1),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 2, 1),
+		-E2BIG,
+	},
+};
+
+static void frac_case_to_desc(struct frac_check_test *t, char *desc)
+{
+	struct feat_info *feat = &t->feat;
+	struct feat_info *frac = &t->frac_feat;
+
+	snprintf(desc, KUNIT_PARAM_DESC_SIZE,
+		 "feat - shift:%d, val:%d, lim:%d, frac - shift:%d, val:%d, lim:%d\n",
+		 feat->shift, feat->value, feat->limit,
+		 frac->shift, frac->value, frac->limit);
+}
+
+KUNIT_ARRAY_PARAM(frac, frac_params, frac_case_to_desc);
+
+/* Tests for vcpu_id_reg_feature_frac_check(). */
+static void vcpu_id_reg_feature_frac_check_test(struct kunit *test)
+{
+	struct kvm_vcpu *vcpu;
+	u32 id, frac_id;
+	struct id_reg_desc id_data, frac_id_data;
+	struct id_reg_desc *idr, *frac_idr;
+	struct feature_frac frac_data, *frac = &frac_data;
+	const struct frac_check_test *frct = test->param_value;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id = frct->feat.id;
+	frac_id = frct->frac_feat.id;
+
+	frac->id = id;
+	frac->shift = frct->feat.shift;
+	frac->frac_id = frac_id;
+	frac->frac_shift = frct->frac_feat.shift;
+
+	idr = get_id_reg_desc(id);
+	frac_idr = get_id_reg_desc(frac_id);
+
+	/* Save the original id_reg_desc (and restore later) */
+	memcpy(&id_data, idr, sizeof(id_data));
+	memcpy(&frac_id_data, frac_idr, sizeof(frac_id_data));
+
+	/* The id could be the same as the frac_id */
+	idr->vcpu_limit_val = (u64)frct->feat.limit << frac->shift;
+	frac_idr->vcpu_limit_val |=
+			(u64)frct->frac_feat.limit << frac->frac_shift;
+
+	write_kvm_id_reg(vcpu->kvm, id, (u64)frct->feat.value << frac->shift);
+	write_kvm_id_reg(vcpu->kvm, frac_id,
+			  (u64)frct->frac_feat.value << frac->frac_shift);
+
+	KUNIT_EXPECT_EQ(test,
+			vcpu_id_reg_feature_frac_check(vcpu, frac),
+			frct->ret);
+
+	/* Restore id_reg_desc */
+	memcpy(idr, &id_data, sizeof(id_data));
+	memcpy(frac_idr, &frac_id_data, sizeof(frac_id_data));
+}
+
+/*
+ * Test parameter information for validate_id_aa64mmfr0_tgran2_test()
+ * and validate_id_aa64mmfr0_el1_test().
+ */
+struct tgran_test {
+	int gran2_field;
+	int gran2;
+	int gran2_lim;
+	int gran1;
+	int gran1_lim;
+	int ret;
+};
+
+/*
+ * Test parameters of validate_id_aa64mmfr0_tgran2_test() for TGran4_2.
+ * Defined values for the field are:
+ *  0x0: Support for 4KB granule at stage 2 is identified in TGran4.
+ *  0x1: 4KB granule not supported at stage 2.
+ *  0x2: 4KB granule supported at stage 2.
+ *  0x3: 4KB granule at stage 2 supports 52-bit input and output addresses.
+ *
+ * Defined values for the TGran4 are:
+ *  0x0: 4KB granule supported.
+ *  0x1: 4KB granule supports 52-bit input and output addresses.
+ *  0xf: 4KB granule not supported.
+ */
+struct tgran_test tgran4_2_test_params[] = {
+	/* Enable 4KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 2,  0,   0, 0},
+	/* Enable 4KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 1,  0,   0, -E2BIG},
+	/* Disable 4KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 1, 2,  0,   0, 0},
+	/* Enable 4KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 0,  0,   0, 0},
+	/* Disable 4KB granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 1, 0xf,   0, 0},
+	/* Enable 4KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 1,   0,   0, -E2BIG},
+	/* Disable 4KB granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 2, 0xf,   0, 0},
+	/*
+	 * Enable 4KB granule with 52 bit address on the host that doesn't
+	 * support it.
+	 */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 2,   1,   0, -E2BIG},
+	/* Disable 4KB granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 1, 0,   0, 0xf,  0},
+	/* Disable 4KB granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 1, 0,   0,   0,  0},
+	/* Enable 4KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 0, 0xf, 0xf,  -E2BIG},
+	/* Enable 4KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 0,   0,   0,  0},
+};
+
+/*
+ * Test parameters of validate_id_aa64mmfr0_tgran2_test() for TGran64_2.
+ * Defined values for the field are:
+ *  0x0: Support for 64KB granule at stage 2 is identified in TGran64.
+ *  0x1: 64KB granule not supported at stage 2.
+ *  0x2: 64KB granule supported at stage 2.
+ *  0x3: 64KB granule at stage 2 supports 52-bit input and output addresses.
+ *
+ * Defined values for the TGran64 are:
+ *  0x0: 64KB granule supported.
+ *  0xf: 64KB granule not supported.
+ */
+struct tgran_test tgran64_2_test_params[] = {
+	/* Enable 64KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 2,   0,   0, 0},
+	/* Enable 64KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 1,   0,   0, -E2BIG},
+	/* Disable 64KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 1, 2,   0,   0, 0},
+	/* Enable 64KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 0,   0,   0, 0},
+	/* Disable 64KB granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 1, 0xf,   0, 0},
+	/* Enable 64KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 1,   0,   0, -E2BIG},
+	/* Disable 64KB granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 2, 0xf,   0, 0},
+	/* Disable 64KB granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 1, 0,   0, 0xf, 0},
+	/* Disable 64KB granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 1, 0,   0,   0, 0},
+	/* Enable 64KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 0, 0xf, 0xf, -E2BIG},
+	/* Enable 64KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 0,   0,   0, 0},
+};
+
+/*
+ * Test parameters of validate_id_aa64mmfr0_tgran2_test() for TGran16_2
+ * Defined values for the field are:
+ *  0x0: Support for 16KB granule at stage 2 is identified in TGran16.
+ *  0x1: 16KB granule not supported at stage 2.
+ *  0x2: 16KB granule supported at stage 2.
+ *  0x3: 16KB granule at stage 2 supports 52-bit input and output addresses.
+ *
+ * Defined values for the TGran16 are:
+ *  0x0: 16KB granule not supported.
+ *  0x1: 16KB granule supported.
+ *  0x2: 16KB granule supports 52-bit input and output addresses.
+ */
+struct tgran_test tgran16_2_test_params[] = {
+	/* Enable 16KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 2,  0,  0, 0},
+	/* Enable 16KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 1,  0,  0, -E2BIG},
+	/* Disable 16KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 1, 2,  0,  0, 0},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 0,  0,  0, 0},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 1,  0,  0, 0},
+	/* Enable 16KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 1,  1,  0, -E2BIG},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 2,  0,  0, 0},
+	/*
+	 * Enable 16KB granule with 52 bit address on the host that doesn't
+	 * support it.
+	 */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 2,  2,  2, -E2BIG},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 1, 0,  0,  0, 0},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 1, 0,  0,  1, 0},
+	/* Enable 16KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 0,  0,  0, -E2BIG},
+	/* Enable 16KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 0,  0,  1, 0},
+	/* Enable 16KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 0,  0,  2, 0},
+};
+
+static void tgran2_case_to_desc(struct tgran_test *t, char *desc)
+{
+	snprintf(desc, KUNIT_PARAM_DESC_SIZE,
+		 "gran2(field=%d): val=%d, lim=%d gran1: val=%d limit=%d\n",
+		 t->gran2_field, t->gran2, t->gran2_lim,
+		 t->gran1, t->gran1_lim);
+}
+
+KUNIT_ARRAY_PARAM(tgran4_2, tgran4_2_test_params, tgran2_case_to_desc);
+KUNIT_ARRAY_PARAM(tgran64_2, tgran64_2_test_params, tgran2_case_to_desc);
+KUNIT_ARRAY_PARAM(tgran16_2, tgran16_2_test_params, tgran2_case_to_desc);
+
+#define	MAKE_MMFR0_TGRAN(shift1, gran1, shift2, gran2)		\
+	(((u64)((gran1) & 0xf) << (shift1)) |			\
+	 ((u64)((gran2) & 0xf) << (shift2)))
+
+/* Return the bit position of TGranX field for the given TGranX_2 field. */
+static int tgran2_to_tgran1_shift(int tgran2_shift)
+{
+	int tgran1_shift = -1;
+
+	switch (tgran2_shift) {
+	case ID_AA64MMFR0_TGRAN4_2_SHIFT:
+		tgran1_shift = ID_AA64MMFR0_TGRAN4_SHIFT;
+		break;
+	case ID_AA64MMFR0_TGRAN64_2_SHIFT:
+		tgran1_shift = ID_AA64MMFR0_TGRAN64_SHIFT;
+		break;
+	case ID_AA64MMFR0_TGRAN16_2_SHIFT:
+		tgran1_shift = ID_AA64MMFR0_TGRAN16_SHIFT;
+		break;
+	default:
+		break;
+	}
+
+	return tgran1_shift;
+}
+
+/* Tests for aa64mmfr0_tgran2_check(), used by validate_id_aa64mmfr0_el1(). */
+static void validate_id_aa64mmfr0_tgran2_test(struct kunit *test)
+{
+	const struct tgran_test *t = test->param_value;
+	int shift1, shift2;
+	u64 v, lim;
+
+	shift2 = t->gran2_field;
+	shift1 = tgran2_to_tgran1_shift(shift2);
+	v = MAKE_MMFR0_TGRAN(shift1, t->gran1, shift2, t->gran2);
+	lim = MAKE_MMFR0_TGRAN(shift1, t->gran1_lim, shift2, t->gran2_lim);
+
+	KUNIT_EXPECT_EQ(test, aa64mmfr0_tgran2_check(shift2, v, lim), t->ret);
+}
+
+/* Tests for validate_id_aa64pfr0_el1(). */
+static void validate_id_aa64pfr0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64PFR0_EL1);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for GIC.
+	 * GIC must be 1 when a vGICv3 is configured.
+	 */
+	v = 0x0000000;	/* GIC = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Test with VGIC_V2 */
+	vcpu->kvm->arch.vgic.in_kernel = true;
+	vcpu->kvm->arch.vgic.vgic_model = KVM_DEV_TYPE_ARM_VGIC_V2;
+
+	v = 0x0000000;	/* GIC = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Test with VGIC_V3 */
+	vcpu->kvm->arch.vgic.vgic_model = KVM_DEV_TYPE_ARM_VGIC_V3;
+
+	v = 0x0000000;	/* GIC = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+	v = 0x1000000;	/* GIC = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Restore the original VGIC state */
+	vcpu->kvm->arch.vgic.in_kernel = false;
+	vcpu->kvm->arch.vgic.vgic_model = 0;
+
+	/*
+	 * Tests for AdvSIMD/FP.
+	 * AdvSIMD must have the same value as FP.
+	 */
+
+	/* Tests with SVE disabled */
+	v = 0x000010000;	/* AdvSIMD = 0, FP = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000100000;	/* AdvSIMD = 1, FP = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000ff0000;	/* AdvSIMD = 0xf, FP = 0xf */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100000000;	/* SVE = 1, AdvSIMD = 0, FP = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+	if (!system_supports_sve()) {
+		kunit_skip(test, "(No SVE support. Partial skip)");
+		/* Not reached */
+	}
+
+	/* Tests with SVE enabled */
+	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE;
+
+	v = 0x100000000;	/* SVE = 1, AdvSIMD = 0, FP = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100ff0000;	/* SVE = 1, AdvSIMD = 0xf, FP = 0xf */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	vcpu->arch.flags &= ~KVM_ARM64_GUEST_HAS_SVE;
+}
+
+/* Tests for validate_id_aa64pfr1_el1() */
+static void validate_id_aa64pfr1_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64PFR1_EL1);
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+
+	/* Tests for MTE */
+
+	/* Tests with MTE disabled */
+
+	v = 0x000;	/* MTE = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100;	/* MTE = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+
+	if (!system_supports_mte()) {
+		kunit_skip(test, "(No MTE support. Partial skip)");
+		/* Not reached */
+	}
+
+	/* Tests with MTE enabled */
+	set_bit(KVM_ARCH_FLAG_MTE_ENABLED, &vcpu->kvm->arch.flags);
+
+	v = 0x100;	/* MTE = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x0;	/* MTE = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+}
+
+/* Tests for validate_id_aa64isar0_el1(). */
+static void validate_id_aa64isar0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64ISAR0_EL1);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for SM3/SM4.
+	 * Arm ARM says SM3 must have the same value as SM4.
+	 */
+
+	v = 0x01000000000;	/* SM4 = 0, SM3 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000000000;	/* SM4 = 1, SM3 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x11000000000;	/* SM3 = SM4 = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for SHA1/SHA2/SHA3.  Arm ARM says:
+	 * If SHA1 is 0x0, both SHA2 and SHA3 must be 0x0.
+	 * If SHA2 is 0x0, SHA1 must be 0x0.
+	 * If SHA2 is 0x2, SHA3 must be 0x1.
+	 * If SHA3 is 0x1, SHA2 must be 0x2.
+	 */
+
+	v = 0x000000100;	/* SHA2 = 0, SHA1 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000001000;	/* SHA2 = 1, SHA1 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000001100;	/* SHA2 = 1, SHA1 = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100002000;	/* SHA3 = 1, SHA2 = 2 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000002000;	/* SHA3 = 0, SHA2 = 2 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100001000;	/* SHA3 = 1, SHA2 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x200000000;	/* SHA3 = 2, SHA1 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x200001100;	/* SHA3 = 2, SHA2 = 1, SHA1 = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x300003300;	/* SHA3 = 3, SHA2 = 3, SHA1 = 3 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+}
+
+/* Tests for validate_id_aa64isar1_el1() */
+static void validate_id_aa64isar1_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v, org_limit;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64ISAR1_EL1);
+
+	/*
+	 * Tests for GPI/GPA/API/APA.
+	 * Arm ARM says:
+	 * If GPA is non-zero, GPI must be zero.
+	 * If GPI is non-zero, GPA must be zero.
+	 * If APA is non-zero, API must be zero.
+	 * If API is non-zero, APA must be zero.
+	 */
+
+	v = 0x11000110;	/* GPI = 1, GPA = 1, API = 1, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x11000100;	/* GPI = 1, GPA = 1, API = 1, APA = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x11000010;	/* GPI = 1, GPA = 1, API = 0, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000110;	/* GPI = 1, GPA = 0, API = 1, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000110;	/* GPI = 0, GPA = 1, API = 1, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	/* Tests with PTRAUTH disabled */
+
+	/* Just for convenience, set the limits of GPI/GPA/API/APA to 1. */
+	org_limit = id_reg->vcpu_limit_val;
+	id_reg->vcpu_limit_val = 0x11000110;
+
+	v = 0x00000000;	/* GPI = 0, GPA = 0, API = 0, APA = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000100;	/* GPI = 1, GPA = 0, API = 1, APA = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000010;	/* GPI = 1, GPA = 0, API = 0, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000100;	/* GPI = 0, GPA = 1, API = 1, APA = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000010;	/* GPI = 0, GPA = 1, API = 0, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	if (!system_has_full_ptr_auth()) {
+		id_reg->vcpu_limit_val = org_limit;
+		kunit_skip(test, "(No PTRAUTH support. Partial skip)");
+		/* Not reached */
+	}
+
+	/* Tests with PTRAUTH enabled */
+	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+
+	v = 0x10000100;	/* GPI = 1, GPA = 0, API = 1, APA = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000010;	/* GPI = 1, GPA = 0, API = 0, APA = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000100;	/* GPI = 0, GPA = 1, API = 1, APA = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000010;	/* GPI = 0, GPA = 1, API = 0, APA = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0;
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+	/* Restore the original value */
+	id_reg->vcpu_limit_val = org_limit;
+}
+
+/* Tests for validate_id_aa64isar2_el1() */
+static void validate_id_aa64isar2_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v, org_limit;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64ISAR2_EL1);
+
+	/* Tests for GPA3/APA3. */
+
+	/* Tests with PTRAUTH disabled  */
+
+	/* Set the limit of APA3/GPA3 to 1. */
+	org_limit = id_reg->vcpu_limit_val;
+	id_reg->vcpu_limit_val = 0x1100;
+
+	v = 0x0000;	/* GPA3 = 0, APA3 = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1000;	/* APA3 = 1, GPA3 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x0100;	/* APA3 = 0, GPA3 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1100;	/* GPA3 = 1, APA3 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	if (!system_has_full_ptr_auth()) {
+		id_reg->vcpu_limit_val = org_limit;
+		kunit_skip(test, "(No PTRAUTH support. Partial skip)");
+		/* Not reached */
+	}
+
+	/* Tests with PTRAUTH enabled */
+	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+
+	v = 0x1100;	/* APA3 = 1, GPA3 = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1000;	/* APA3 = 1, GPA3 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x0100;	/* APA3 = 0, GPA3 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0;
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	/* Restore the original value */
+	id_reg->vcpu_limit_val = org_limit;
+}
+
+/* Tests for validate_id_aa64mmfr0_el1() */
+static void validate_id_aa64mmfr0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc id_data, *id_reg;
+	const struct tgran_test *t4, *t64, *t16;
+	struct kvm_vcpu *vcpu;
+	int field4, field4_2, field64, field64_2, field16, field16_2;
+	u64 v, v4, lim4, v64, lim64, v16, lim16;
+	int i, j, ret;
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64MMFR0_EL1);
+
+	/* Save the original id_reg_desc (and restore later) */
+	memcpy(&id_data, id_reg, sizeof(id_data));
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	t4 = test->param_value;
+	field4_2 = t4->gran2_field;
+	field4 = tgran2_to_tgran1_shift(field4_2);
+	v4 = MAKE_MMFR0_TGRAN(field4, t4->gran1, field4_2, t4->gran2);
+	lim4 = MAKE_MMFR0_TGRAN(field4, t4->gran1_lim, field4_2, t4->gran2_lim);
+
+	/*
+	 * For each given tgran4_2 param, test validate_id_aa64mmfr0_el1
+	 * with each combination of the tgran64_2 and tgran16_2 params.
+	 */
+	for (i = 0; i < ARRAY_SIZE(tgran64_2_test_params); i++) {
+		t64 = &tgran64_2_test_params[i];
+		field64_2 = t64->gran2_field;
+		field64 = tgran2_to_tgran1_shift(field64_2);
+		v64 = MAKE_MMFR0_TGRAN(field64, t64->gran1,
+				       field64_2, t64->gran2);
+		lim64 = MAKE_MMFR0_TGRAN(field64, t64->gran1_lim,
+					 field64_2, t64->gran2_lim);
+
+		for (j = 0; j < ARRAY_SIZE(tgran16_2_test_params); j++) {
+			t16 = &tgran16_2_test_params[j];
+
+			field16_2 = t16->gran2_field;
+			field16 = tgran2_to_tgran1_shift(field16_2);
+			v16 = MAKE_MMFR0_TGRAN(field16, t16->gran1,
+					       field16_2, t16->gran2);
+			lim16 = MAKE_MMFR0_TGRAN(field16, t16->gran1_lim,
+						 field16_2, t16->gran2_lim);
+
+			/* Build id_aa64mmfr0_el1 from tgran16/64/4 values */
+			v = v16 | v64 | v4;
+			id_reg->vcpu_limit_val = lim16 | lim64 | lim4;
+
+			ret = t4->ret ? t4->ret : t64->ret;
+			ret = ret ? ret : t16->ret;
+			KUNIT_EXPECT_EQ(test,
+				validate_id_aa64mmfr0_el1(vcpu, id_reg, v),
+				ret);
+		}
+	}
+
+	/* Restore id_reg_desc */
+	memcpy(id_reg, &id_data, sizeof(id_data));
+}
+
+/* Tests for validate_id_aa64dfr0_el1() */
+static void validate_id_aa64dfr0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64DFR0_EL1);
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for CTX_CMPS/BRPS.
+	 * The number of context-aware breakpoints must be no more than
+	 * the number of supported breakpoints.
+	 */
+	v = 0x10001000;	/* CTX_CMPS = 1, BRPS = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x20001000;	/* CTX_CMPS = 2, BRPS = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Tests for PMUVer */
+
+	/* Tests with PMUv3 disabled. */
+
+	v = 0x000;	/* PMUVER = 0x0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0xf00;	/* PMUVER = 0xf */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100;	/* PMUVER = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Tests with PMUv3 enabled */
+	set_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features);
+
+	v = 0x000;	/* PMUVER = 0x0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0xf00;	/* PMUVER = 0xf */
+	KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100;	/* PMUVER = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+}
+
+/* Tests for validate_id_dfr0_el1() */
+static void validate_id_dfr0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	id_reg = get_id_reg_desc(SYS_ID_DFR0_EL1);
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Tests for PERFMON */
+
+	/* Tests with PMUv3 disabled */
+
+	v = 0x0000000;	/* PERFMON = 0x0 */
+	KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0xf000000;	/* PERFMON = 0xf */
+	KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1000000;	/* PERFMON = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x2000000;	/* PERFMON = 2 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x3000000;	/* PERFMON = 3 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Tests with PMUv3 enabled */
+	set_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features);
+
+	v = 0x0000000;	/* PERFMON = 0x0 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0xf000000;	/* PERFMON = 0xf */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1000000;	/* PERFMON = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x2000000;	/* PERFMON = 2 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x3000000;	/* PERFMON = 3 */
+	KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+}
+
+/* Tests for validate_mvfr1_el1(). */
+static void validate_mvfr1_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	id_reg = get_id_reg_desc(SYS_MVFR1_EL1);
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for FPHP/SIMDHP.
+	 * Arm ARM says the level of support indicated by FPHP must be
+	 * equivalent to the level of support indicated by SIMDHP,
+	 * meaning the permitted values are:
+	 * FPHP = 0x0, SIMDHP = 0x0
+	 * FPHP = 0x2, SIMDHP = 0x1
+	 * FPHP = 0x3, SIMDHP = 0x2
+	 */
+	v = 0x0000000;	/* FPHP = 0, SIMDHP = 0 */
+	KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x2100000;	/* FPHP = 2, SIMDHP = 1 */
+	KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x3200000;	/* FPHP = 3, SIMDHP = 2 */
+	KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1100000;	/* FPHP = 1, SIMDHP = 1 */
+	KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x2200000;	/* FPHP = 2, SIMDHP = 2 */
+	KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x3300000;	/* FPHP = 3, SIMDHP = 3 */
+	KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = (u64)-1;
+	KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+}
+
+/*
+ * Helper function for validate_id_reg_test().
+ * We don't use KUNIT_ASSERT or kunit_skip because this is a helper
+ * function and we are not sure if it's safe to exit from the test case.
+ */
+static void validate_id_reg_test_one_field(struct kunit *test,
+		u32 id, int pos, int fval, int flimit,
+		bool is_signed, struct id_reg_desc *idr)
+{
+	struct kvm_vcpu *vcpu;
+	int fmin = is_signed ? -1 : 0;
+	int fmax = is_signed ? 7 : 15;
+	u64 fmask = ARM64_FEATURE_FIELD_MASK;
+	u64 val;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	if (flimit > fmax) {
+		/* Shouldn't happen. Make the test fail. */
+		KUNIT_EXPECT_FALSE(test, flimit > fmax);
+		kunit_err(test, "%s: flimit(%d) > fmax(%d). Must be a test bug",
+			  __func__, flimit, fmax);
+		return;
+	}
+
+	if (fval > fmin) {
+		/* Set the field to a smaller value */
+		val = ((u64)(fval - 1) & fmask) << pos;
+		KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, idr, val), 0);
+	}
+
+	if (fval < flimit) {
+		/* Set the field to a larger value, but smaller than flimit */
+		val = ((u64)(fval + 1) & fmask) << pos;
+		KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, idr, val), 0);
+
+		/* Set the field to the flimit */
+		val = ((u64)flimit & fmask) << pos;
+		KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, idr, val), 0);
+	}
+
+	if (flimit < fmax) {
+		/* Set the field to a larger value than flimit */
+		val = ((u64)(flimit + 1) & fmask) << pos;
+		KUNIT_EXPECT_NE(test, validate_id_reg(vcpu, idr, val), 0);
+
+		/* Test with ignore_mask */
+		if (idr) {
+			idr->ignore_mask = fmask << pos;
+			KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, idr, val), 0);
+		}
+	}
+	test_kvm_vcpu_fini(test, vcpu);
+}
+
+static void set_sys_desc(struct sys_reg_desc *rd, u32 encoding)
+{
+	rd->Op0 = sys_reg_Op0(encoding);
+	rd->Op1 = sys_reg_Op1(encoding);
+	rd->CRn = sys_reg_CRn(encoding);
+	rd->CRm = sys_reg_CRm(encoding);
+	rd->Op2 = sys_reg_Op2(encoding);
+}
+
+/*
+ * Test for validate_id_reg().
+ */
+static void validate_id_reg_test(struct kunit *test)
+{
+	struct id_reg_desc idr_data, *idr, *original_idr;
+	u32 id;
+	int fval, flim, pos;
+	u64 val;
+	bool sign;
+
+	/* Use AA64PFR0_EL1 because it includes both signed and unsigned fields */
+	id = SYS_ID_AA64PFR0_EL1;
+
+	/* Test with a temporary id_reg_desc for testing */
+	idr = &idr_data;
+
+	fval = 0x1;
+	flim = 0x2;
+
+	/* Test with unsigned field */
+	pos = ID_AA64PFR0_RAS_SHIFT;
+
+	/* Set up id_reg_desc for testing */
+	memset(idr, 0, sizeof(*idr));
+	set_sys_desc((struct sys_reg_desc *)&idr->reg_desc, id);
+
+	/* Copy ftr_bits from the original one */
+	original_idr = get_id_reg_desc(id);
+	memcpy(idr->ftr_bits, original_idr->ftr_bits, sizeof(idr->ftr_bits));
+	idr->vcpu_limit_val = (u64)flim << pos;
+	validate_id_reg_test_one_field(test, id, pos, fval, flim, false, idr);
+
+	/* Test with signed field */
+	pos = ID_AA64PFR0_FP_SHIFT;
+
+	/* Set up id_reg_desc for testing */
+	memset(idr, 0, sizeof(*idr));
+	set_sys_desc((struct sys_reg_desc *)&idr->reg_desc, id);
+
+	/* Copy ftr_bits from the original one */
+	memcpy(idr->ftr_bits, original_idr->ftr_bits, sizeof(idr->ftr_bits));
+
+	idr->vcpu_limit_val = (u64)flim << pos;
+	validate_id_reg_test_one_field(test, id, pos, fval, flim, true, idr);
+
+	/* Test with the original limit val */
+	val = original_idr->vcpu_limit_val;
+	idr->vcpu_limit_val = val;
+
+	for (pos = 0; pos < 64; pos += 4) {
+		if (pos == ID_AA64PFR0_FP_SHIFT ||
+		    pos == ID_AA64PFR0_ASIMD_SHIFT)
+			sign = true;
+		else
+			sign = false;
+
+		fval = cpuid_feature_extract_field(val, pos, sign);
+		validate_id_reg_test_one_field(test, id, pos, fval, fval,
+					       sign, idr);
+	}
+}
+
+static struct kunit_case kvm_sys_regs_test_cases[] = {
+	KUNIT_CASE_PARAM(vcpu_id_reg_feature_frac_check_test, frac_gen_params),
+	KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran4_2_gen_params),
+	KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran64_2_gen_params),
+	KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran16_2_gen_params),
+	KUNIT_CASE(validate_id_aa64pfr0_el1_test),
+	KUNIT_CASE(validate_id_aa64pfr1_el1_test),
+	KUNIT_CASE(validate_id_aa64isar0_el1_test),
+	KUNIT_CASE(validate_id_aa64isar1_el1_test),
+	KUNIT_CASE(validate_id_aa64isar2_el1_test),
+	KUNIT_CASE_PARAM(validate_id_aa64mmfr0_el1_test, tgran4_2_gen_params),
+	KUNIT_CASE(validate_id_aa64dfr0_el1_test),
+	KUNIT_CASE(validate_id_dfr0_el1_test),
+	KUNIT_CASE(validate_mvfr1_el1_test),
+	KUNIT_CASE(validate_id_reg_test),
+	{}
+};
+
+static struct kunit_suite kvm_sys_regs_test_suite = {
+	.name = "kvm-sys-regs-test-suite",
+	.test_cases = kvm_sys_regs_test_cases,
+};
+
+kunit_test_suites(&kvm_sys_regs_test_suite);
+MODULE_LICENSE("GPL");
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 23/38] KVM: arm64: Add kunit test for ID register validation
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, Will Deacon, Peter Shier, Paolo Bonzini, linux-arm-kernel

Add kunit tests for functions that are used for validation of ID
registers, CONFIG_KVM_KUNIT_TEST option to enable the tests, and
.kunitconfig to run the kunit tests.

Since tools/testing/kunit/qemu_configs/arm64.py, which is the
default qemu_config for arm64, doesn't have all params that the
new tests needs, 'extra_qemu_params' in the default one needs to be
replaced with the one below to fully run all of those kunit tests.

 extra_qemu_params=['-M virt,virtualization=on,mte=on', '-cpu max,sve=on'])
 (the default one: extra_qemu_params=['-machine virt', '-cpu cortex-a57'])

The outputs from the tests are:
-----------------------------------------------------------------------
$ tools/testing/kunit/kunit.py run --timeout=60 --jobs=`nproc --all` \
          --arch=arm64 --cross_compile=aarch64-linux-gnu- \
          --qemu_config arm64_kvm_min.py \
          --kunitconfig=arch/arm64/kvm/.kunitconfig
[22:45:39] Configuring KUnit Kernel ...
[22:45:39] Building KUnit Kernel ...
Populating config with:
$ make ARCH=arm64 olddefconfig CROSS_COMPILE=aarch64-linux-gnu- O=.kunit
Building with:
$ make ARCH=arm64 --jobs=96 CROSS_COMPILE=aarch64-linux-gnu- O=.kunit
[22:45:47] Starting KUnit Kernel (1/1)...
[22:45:47] ============================================================
Running tests with:
$ qemu-system-aarch64 -nodefaults -m 1024 -kernel .kunit/arch/arm64/boot/Image.gz -append 'mem=1G console=tty kunit_shutdown=halt console=ttyAMA0 kunit_shutdown=reboot' -no-reboot -nographic -serial stdio -M virt,virtualization=on,mte=on -cpu max,sve=on
[22:45:48] ========== kvm-sys-regs-test-suite (14 subtests) ===========
[22:45:48] =========== vcpu_id_reg_feature_frac_check_test ============
[22:45:48] [PASSED] feat - shift:28, val:1, lim:2, frac - shift:12, val:1, lim:1
[22:45:48] [PASSED] feat - shift:28, val:1, lim:2, frac - shift:12, val:1, lim:2
[22:45:48] [PASSED] feat - shift:28, val:1, lim:2, frac - shift:12, val:2, lim:1
[22:45:48] [PASSED] feat - shift:28, val:1, lim:1, frac - shift:12, val:1, lim:1
[22:45:48] [PASSED] feat - shift:28, val:1, lim:1, frac - shift:12, val:1, lim:2
[22:45:48] [PASSED] feat - shift:28, val:1, lim:1, frac - shift:12, val:2, lim:1
[22:45:48] ======= [PASSED] vcpu_id_reg_feature_frac_check_test =======
[22:45:48] ============ validate_id_aa64mmfr0_tgran2_test =============
[22:45:48] [PASSED] gran2(field=40): val=2, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=2, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=1, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=1 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=2 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=2 gran1: val=1 limit=0
[22:45:48] [PASSED] gran2(field=40): val=1, lim=0 gran1: val=0 limit=15
[22:45:48] [PASSED] gran2(field=40): val=1, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=2, lim=0 gran1: val=15 limit=15
[22:45:48] [PASSED] gran2(field=40): val=2, lim=0 gran1: val=0 limit=0
[22:45:48] ======== [PASSED] validate_id_aa64mmfr0_tgran2_test ========
[22:45:48] ============ validate_id_aa64mmfr0_tgran2_test =============
[22:45:48] [PASSED] gran2(field=36): val=2, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=2, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=1, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=0, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=0, lim=1 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=36): val=0, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=0, lim=2 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=36): val=1, lim=0 gran1: val=0 limit=15
[22:45:48] [PASSED] gran2(field=36): val=1, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=2, lim=0 gran1: val=15 limit=15
[22:45:48] [PASSED] gran2(field=36): val=2, lim=0 gran1: val=0 limit=0
[22:45:48] ======== [PASSED] validate_id_aa64mmfr0_tgran2_test ========
[22:45:48] ============ validate_id_aa64mmfr0_tgran2_test =============
[22:45:48] [PASSED] gran2(field=32): val=2, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=2, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=1, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=1 gran1: val=1 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=2 gran1: val=2 limit=2
[22:45:48] [PASSED] gran2(field=32): val=1, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=1, lim=0 gran1: val=0 limit=1
[22:45:48] [PASSED] gran2(field=32): val=2, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=2, lim=0 gran1: val=0 limit=1
[22:45:48] [PASSED] gran2(field=32): val=2, lim=0 gran1: val=0 limit=2
[22:45:48] ======== [PASSED] validate_id_aa64mmfr0_tgran2_test ========
[22:45:48] [PASSED] validate_id_aa64pfr0_el1_test
[22:45:48] [PASSED] validate_id_aa64pfr1_el1_test
[22:45:48] [PASSED] validate_id_aa64isar0_el1_test
[22:45:48] [PASSED] validate_id_aa64isar1_el1_test
[22:45:48] [PASSED] validate_id_aa64isar2_el1_test
[22:45:48] ============== validate_id_aa64mmfr0_el1_test ==============
[22:45:48] [PASSED] gran2(field=40): val=2, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=2, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=1, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=1 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=2 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=2 gran1: val=1 limit=0
[22:45:48] [PASSED] gran2(field=40): val=1, lim=0 gran1: val=0 limit=15
[22:45:48] [PASSED] gran2(field=40): val=1, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=2, lim=0 gran1: val=15 limit=15
[22:45:48] [PASSED] gran2(field=40): val=2, lim=0 gran1: val=0 limit=0
[22:45:48] ========= [PASSED] validate_id_aa64mmfr0_el1_test ==========
[22:45:48] [PASSED] validate_id_aa64dfr0_el1_test
[22:45:48] [PASSED] validate_id_dfr0_el1_test
[22:45:48] [PASSED] validate_mvfr1_el1_test
[22:45:48] [PASSED] validate_id_reg_test
[22:45:48] ============= [PASSED] kvm-sys-regs-test-suite =============
[22:45:48] ============================================================
[22:45:48] Testing complete. Passed: 63, Failed: 0, Crashed: 0, Skipped: 0, Errors: 0
[22:45:48] Elapsed time: 8.977s total, 0.003s configuring, 7.300s building, 1.620s running
-----------------------------------------------------------------------

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/.kunitconfig    |    4 +
 arch/arm64/kvm/Kconfig         |   11 +
 arch/arm64/kvm/sys_regs.c      |    4 +
 arch/arm64/kvm/sys_regs_test.c | 1068 ++++++++++++++++++++++++++++++++
 4 files changed, 1087 insertions(+)
 create mode 100644 arch/arm64/kvm/.kunitconfig
 create mode 100644 arch/arm64/kvm/sys_regs_test.c

diff --git a/arch/arm64/kvm/.kunitconfig b/arch/arm64/kvm/.kunitconfig
new file mode 100644
index 000000000000..c564c98fc319
--- /dev/null
+++ b/arch/arm64/kvm/.kunitconfig
@@ -0,0 +1,4 @@
+CONFIG_KUNIT=y
+CONFIG_VIRTUALIZATION=y
+CONFIG_KVM=y
+CONFIG_KVM_KUNIT_TEST=y
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 8a5fbbf084df..0d628d0e7dd5 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -56,4 +56,15 @@ config NVHE_EL2_DEBUG
 
 	  If unsure, say N.
 
+config KVM_KUNIT_TEST
+	bool "KUnit tests for KVM on ARM64 processors" if !KUNIT_ALL_TESTS
+	depends on KVM && KUNIT
+	default KUNIT_ALL_TESTS
+	help
+	  Say Y here to enable KUnit tests for the KVM on ARM64.
+	  Only useful for KVM/ARM development and are not for inclusion into
+	  a production build.
+
+	  If unsure, say N.
+
 endif # VIRTUALIZATION
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index fc7a8f2539a4..a71c52aee34e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -4486,3 +4486,7 @@ static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
 			r->reset(vcpu, r);
 	}
 }
+
+#if IS_ENABLED(CONFIG_KVM_KUNIT_TEST)
+#include "sys_regs_test.c"
+#endif
diff --git a/arch/arm64/kvm/sys_regs_test.c b/arch/arm64/kvm/sys_regs_test.c
new file mode 100644
index 000000000000..dff146fe0e62
--- /dev/null
+++ b/arch/arm64/kvm/sys_regs_test.c
@@ -0,0 +1,1068 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KUnit tests for arch/arm64/kvm/sys_regs.c.
+ */
+
+#include <linux/module.h>
+#include <kunit/test.h>
+#include <kunit/test.h>
+#include <linux/kvm_host.h>
+#include <asm/cpufeature.h>
+#include "asm/sysreg.h"
+
+/*
+ * Create a vcpu with the minimum fields required for the tests in this
+ * file, including the struct kvm.  Any resources allocated by this
+ * function must be allocated via kunit_* so that they are freed
+ * automatically and don't need to be freed explicitly.
+ */
+static struct kvm_vcpu *test_kvm_vcpu_init(struct kunit *test)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm *kvm;
+
+	kvm = kunit_kzalloc(test, sizeof(struct kvm), GFP_KERNEL);
+	if (!kvm)
+		return NULL;
+
+	vcpu = kunit_kzalloc(test, sizeof(struct kvm_vcpu), GFP_KERNEL);
+	if (!vcpu) {
+		kunit_kfree(test, kvm);
+		return NULL;
+	}
+
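+	/* Mark the vCPU as not currently loaded on any physical CPU. */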
+	vcpu->cpu = -1;
+	vcpu->kvm = kvm;
+	vcpu->vcpu_id = 0;
+
+	return vcpu;
+}
+
+static void test_kvm_vcpu_fini(struct kunit *test, struct kvm_vcpu *vcpu)
+{
+	if (vcpu->kvm)
+		kunit_kfree(test, vcpu->kvm);
+
+	kunit_kfree(test, vcpu);
+}
+
+/* Test parameter information to test arm64_check_features */
+struct check_features_test {
+	u64	check_types;
+	u64	value;
+	u64	limit;
+	int	expected;
+};
+
+/* Used to define test parameters of vcpu_id_reg_feature_frac_check_test() */
+struct feat_info {
+	u32	id;
+	u32	shift;
+	u32	value;
+	u32	limit;
+};
+
+struct frac_check_test {
+	struct feat_info feat;
+	struct feat_info frac_feat;
+	int ret;
+};
+
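+/* Shorthand initializer for a struct feat_info. */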
+#define	FRAC_FEAT(id, shift, value, limit)	{id, shift, value, limit}
+
+/* Tests parameters of vcpu_id_reg_feature_frac_check_test() */
+struct frac_check_test frac_params[] = {
+	{
+		/*
+		 * The feature value is smaller than its limit.
+		 * Expect no error regardless of the frac value.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 2),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 1),
+		0,
+	},
+	{
+		/*
+		 * The feature value is smaller than its limit.
+		 * Expect no error regardless of the frac value.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 2),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 2),
+		0,
+	},
+	{
+		/*
+		 * The feature value is smaller than its limit.
+		 * Expect no error regardless of the frac value.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 2),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 2, 1),
+		0,
+	},
+	{
+		/*
+		 * Both the feature and frac values are same as their limits.
+		 * Expect no error.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 1),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 1),
+		0,
+	},
+	{
+		/*
+		 * The feature value is same as its limit, and the frac value
+		 * is smaller than its limit. Expect no error.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 1),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 2),
+		0,
+	},
+	{
+		/*
+		 * The feature value is same as its limit, and the frac value
+		 * is larger than its limit. Expect an error.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 1),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 2, 1),
+		-E2BIG,
+	},
+};
+
+static void frac_case_to_desc(struct frac_check_test *t, char *desc)
+{
+	struct feat_info *feat = &t->feat;
+	struct feat_info *frac = &t->frac_feat;
+
+	snprintf(desc, KUNIT_PARAM_DESC_SIZE,
+		 "feat - shift:%d, val:%d, lim:%d, frac - shift:%d, val:%d, lim:%d\n",
+		 feat->shift, feat->value, feat->limit,
+		 frac->shift, frac->value, frac->limit);
+}
+
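+/* KUNIT_ARRAY_PARAM() generates frac_gen_params() for the test case table. */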
+KUNIT_ARRAY_PARAM(frac, frac_params, frac_case_to_desc);
+
+/* Tests for vcpu_id_reg_feature_frac_check(). */
+static void vcpu_id_reg_feature_frac_check_test(struct kunit *test)
+{
+	struct kvm_vcpu *vcpu;
+	u32 id, frac_id;
+	struct id_reg_desc id_data, frac_id_data;
+	struct id_reg_desc *idr, *frac_idr;
+	struct feature_frac frac_data, *frac = &frac_data;
+	const struct frac_check_test *frct = test->param_value;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id = frct->feat.id;
+	frac_id = frct->frac_feat.id;
+
+	frac->id = id;
+	frac->shift = frct->feat.shift;
+	frac->frac_id = frac_id;
+	frac->frac_shift = frct->frac_feat.shift;
+
+	idr = get_id_reg_desc(id);
+	frac_idr = get_id_reg_desc(frac_id);
+
+	/* Save the original id_reg_desc (and restore later) */
+	memcpy(&id_data, idr, sizeof(id_data));
+	memcpy(&frac_id_data, frac_idr, sizeof(frac_id_data));
+
+	/* The id could be the same as the frac_id, so OR in the frac limit */
+	idr->vcpu_limit_val = (u64)frct->feat.limit << frac->shift;
+	frac_idr->vcpu_limit_val |=
+			(u64)frct->frac_feat.limit << frac->frac_shift;
+
+	write_kvm_id_reg(vcpu->kvm, id, (u64)frct->feat.value << frac->shift);
+	write_kvm_id_reg(vcpu->kvm, frac_id,
+			  (u64)frct->frac_feat.value << frac->frac_shift);
+
+	KUNIT_EXPECT_EQ(test,
+			vcpu_id_reg_feature_frac_check(vcpu, frac),
+			frct->ret);
+
+	/* Restore id_reg_desc */
+	memcpy(idr, &id_data, sizeof(id_data));
+	memcpy(frac_idr, &frac_id_data, sizeof(frac_id_data));
+}
+
+/*
+ * Test parameter information for validate_id_aa64mmfr0_tgran2_test()
+ * and validate_id_aa64mmfr0_el1_test().
+ */
+struct tgran_test {
+	int gran2_field;
+	int gran2;
+	int gran2_lim;
+	int gran1;
+	int gran1_lim;
+	int ret;
+};
+
+/*
+ * Test parameters of validate_id_aa64mmfr0_tgran2_test() for TGran4_2.
+ * Defined values for the field are:
+ *  0x0: Support for 4KB granule at stage 2 is identified in TGran4.
+ *  0x1: 4KB granule not supported at stage 2.
+ *  0x2: 4KB granule supported at stage 2.
+ *  0x3: 4KB granule at stage 2 supports 52-bit input and output addresses.
+ *
+ * Defined values for the TGran4 are:
+ *  0x0: 4KB granule supported.
+ *  0x1: 4KB granule supports 52-bit input and output addresses.
+ *  0xf: 4KB granule not supported.
+ */
+struct tgran_test tgran4_2_test_params[] = {
+	/* Enable 4KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 2,  0,   0, 0},
+	/* Enable 4KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 1,  0,   0, -E2BIG},
+	/* Disable 4KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 1, 2,  0,   0, 0},
+	/* Enable 4KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 0,  0,   0, 0},
+	/* Disable 4KB granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 1, 0xf,   0, 0},
+	/* Enable 4KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 1,   0,   0, -E2BIG},
+	/* Disable 4KB granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 2, 0xf,   0, 0},
+	/*
+	 * Enable 4KB granule with 52 bit address on the host that doesn't
+	 * support it.
+	 */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 2,   1,   0, -E2BIG},
+	/* Disable 4KB granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 1, 0,   0, 0xf,  0},
+	/* Disable 4KB granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 1, 0,   0,   0,  0},
+	/* Enable 4KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 0, 0xf, 0xf,  -E2BIG},
+	/* Enable 4KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 0,   0,   0,  0},
+};
+
+/*
+ * Test parameters of validate_id_aa64mmfr0_tgran2_test() for TGran64_2.
+ * Defined values for the field are:
+ *  0x0: Support for 64KB granule at stage 2 is identified in TGran64.
+ *  0x1: 64KB granule not supported at stage 2.
+ *  0x2: 64KB granule supported at stage 2.
+ *  0x3: 64KB granule at stage 2 supports 52-bit input and output addresses.
+ *
+ * Defined values for the TGran64 are:
+ *  0x0: 64KB granule supported.
+ *  0xf: 64KB granule not supported.
+ */
+struct tgran_test tgran64_2_test_params[] = {
+	/* Enable 64KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 2,   0,   0, 0},
+	/* Enable 64KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 1,   0,   0, -E2BIG},
+	/* Disable 64KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 1, 2,   0,   0, 0},
+	/* Enable 64KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 0,   0,   0, 0},
+	/* Disable 64KB granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 1, 0xf,   0, 0},
+	/* Enable 64KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 1,   0,   0, -E2BIG},
+	/* Disable 64KB granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 2, 0xf,   0, 0},
+	/* Disable 64KB granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 1, 0,   0, 0xf, 0},
+	/* Disable 64KB granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 1, 0,   0,   0, 0},
+	/* Enable 64KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 0, 0xf, 0xf, -E2BIG},
+	/* Enable 64KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 0,   0,   0, 0},
+};
+
+/*
+ * Test parameters of validate_id_aa64mmfr0_tgran2_test() for TGran16_2
+ * Defined values for the field are:
+ *  0x0: Support for 16KB granule at stage 2 is identified in TGran16.
+ *  0x1: 16KB granule not supported at stage 2.
+ *  0x2: 16KB granule supported at stage 2.
+ *  0x3: 16KB granule at stage 2 supports 52-bit input and output addresses.
+ *
+ * Defined values for the TGran16 are:
+ *  0x0: 16KB granule not supported.
+ *  0x1: 16KB granule supported.
+ *  0x2: 16KB granule supports 52-bit input and output addresses.
+ */
+struct tgran_test tgran16_2_test_params[] = {
+	/* Enable 16KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 2,  0,  0, 0},
+	/* Enable 16KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 1,  0,  0, -E2BIG},
+	/* Disable 16KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 1, 2,  0,  0, 0},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 0,  0,  0, 0},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 1,  0,  0, 0},
+	/* Enable 16KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 1,  1,  0, -E2BIG},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 2,  0,  0, 0},
+	/*
+	 * Enable 16KB granule with 52 bit address on the host that doesn't
+	 * support it.
+	 */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 2,  2,  2, -E2BIG},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 1, 0,  0,  0, 0},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 1, 0,  0,  1, 0},
+	/* Enable 16KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 0,  0,  0, -E2BIG},
+	/* Enable 16KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 0,  0,  1, 0},
+	/* Enable 16KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 0,  0,  2, 0},
+};
+
+static void tgran2_case_to_desc(struct tgran_test *t, char *desc)
+{
+	snprintf(desc, KUNIT_PARAM_DESC_SIZE,
+		 "gran2(field=%d): val=%d, lim=%d gran1: val=%d limit=%d\n",
+		 t->gran2_field, t->gran2, t->gran2_lim,
+		 t->gran1, t->gran1_lim);
+}
+
+KUNIT_ARRAY_PARAM(tgran4_2, tgran4_2_test_params, tgran2_case_to_desc);
+KUNIT_ARRAY_PARAM(tgran64_2, tgran64_2_test_params, tgran2_case_to_desc);
+KUNIT_ARRAY_PARAM(tgran16_2, tgran16_2_test_params, tgran2_case_to_desc);
+
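+/*
+ * Compose an ID_AA64MMFR0_EL1 value with the given TGranX and TGranX_2
+ * field values at the given field positions (all other fields are zero).
+ */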
+#define	MAKE_MMFR0_TGRAN(shift1, gran1, shift2, gran2)		\
+	(((u64)((gran1) & 0xf) << (shift1)) |			\
+	 ((u64)((gran2) & 0xf) << (shift2)))
+
+/* Return the bit position of TGranX field for the given TGranX_2 field. */
+static int tgran2_to_tgran1_shift(int tgran2_shift)
+{
+	int tgran1_shift = -1;
+
+	switch (tgran2_shift) {
+	case ID_AA64MMFR0_TGRAN4_2_SHIFT:
+		tgran1_shift = ID_AA64MMFR0_TGRAN4_SHIFT;
+		break;
+	case ID_AA64MMFR0_TGRAN64_2_SHIFT:
+		tgran1_shift = ID_AA64MMFR0_TGRAN64_SHIFT;
+		break;
+	case ID_AA64MMFR0_TGRAN16_2_SHIFT:
+		tgran1_shift = ID_AA64MMFR0_TGRAN16_SHIFT;
+		break;
+	default:
+		break;
+	}
+
+	return tgran1_shift;
+}
+
+/* Tests for validate_id_aa64mmfr0_el1(). */
+static void validate_id_aa64mmfr0_tgran2_test(struct kunit *test)
+{
+	const struct tgran_test *t = test->param_value;
+	int shift1, shift2;
+	u64 v, lim;
+
+	shift2 = t->gran2_field;
+	shift1 = tgran2_to_tgran1_shift(shift2);
+	v = MAKE_MMFR0_TGRAN(shift1, t->gran1, shift2, t->gran2);
+	lim = MAKE_MMFR0_TGRAN(shift1, t->gran1_lim, shift2, t->gran2_lim);
+
+	KUNIT_EXPECT_EQ(test, aa64mmfr0_tgran2_check(shift2, v, lim), t->ret);
+}
+
+/* Tests for validate_id_aa64pfr0_el1(). */
+static void validate_id_aa64pfr0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64PFR0_EL1);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for GIC.
+	 * GIC must be 1 when a vGICv3 is configured.
+	 */
+	v = 0x0000000;	/* GIC = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Test with VGIC_V2 */
+	vcpu->kvm->arch.vgic.in_kernel = true;
+	vcpu->kvm->arch.vgic.vgic_model = KVM_DEV_TYPE_ARM_VGIC_V2;
+
+	v = 0x0000000;	/* GIC = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Test with VGIC_V3 */
+	vcpu->kvm->arch.vgic.vgic_model = KVM_DEV_TYPE_ARM_VGIC_V3;
+
+	v = 0x0000000;	/* GIC = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+	v = 0x1000000;	/* GIC = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Restore the original VGIC state */
+	vcpu->kvm->arch.vgic.in_kernel = false;
+	vcpu->kvm->arch.vgic.vgic_model = 0;
+
+	/*
+	 * Tests for AdvSIMD/FP.
+	 * AdvSIMD must have the same value as FP.
+	 */
+
+	/* Tests with SVE disabled */
+	v = 0x000010000;	/* AdvSIMD = 0, FP = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000100000;	/* AdvSIMD = 1, FP = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000ff0000;	/* AdvSIMD = 0xf, FP = 0xf */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100000000;	/* SVE = 1, AdvSIMD = 0, FP = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+	if (!system_supports_sve()) {
+		kunit_skip(test, "No SVE support. Partial skip)");
+		/* Not reached */
+	}
+
+	/* Tests with SVE enabled */
+	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE;
+
+	v = 0x100000000;	/* SVE = 1, AdvSIMD = 0, FP = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100ff0000;	/* SVE = 1, AdvSIMD = 0xf, FP = 0xf */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	vcpu->arch.flags &= ~KVM_ARM64_GUEST_HAS_SVE;
+}
+
+/* Tests for validate_id_aa64pfr1_el1() */
+static void validate_id_aa64pfr1_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64PFR1_EL1);
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+
+	/* Tests for MTE */
+
+	/* Tests with MTE disabled */
+
+	v = 0x000;	/* MTE = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100;	/* MTE = 1*/
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+
+	if (!system_supports_mte()) {
+		kunit_skip(test, "(No MTE support. Partial skip)");
+		/* Not reached */
+	}
+
+	/* Tests with MTE enabled */
+	set_bit(KVM_ARCH_FLAG_MTE_ENABLED, &vcpu->kvm->arch.flags);
+
+	v = 0x100;	/* MTE = 1*/
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x0;	/* MTE = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+}
+
+/* Tests for validate_id_aa64isar0_el1(). */
+static void validate_id_aa64isar0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64ISAR0_EL1);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for SM3/SM4.
+	 * Arm ARM says SM3 must have the same value as SM4.
+	 */
+
+	v = 0x01000000000;	/* SM4 = 0, SM3 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000000000;	/* SM4 = 1, SM3 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x11000000000;	/* SM3 = SM4 = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for SHA1/SHA2/SHA3.  Arm ARM says:
+	 * If SHA1 is 0x0, both SHA2 and SHA3 must be 0x0.
+	 * If SHA2 is 0x0, SHA1 must be 0x0.
+	 * If SHA2 is 0x2, SHA3 must be 0x1.
+	 * If SHA3 is 0x1, SHA2 must be 0x2.
+	 */
+
+	v = 0x000000100;	/* SHA2 = 0, SHA1 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000001000;	/* SHA2 = 1, SHA1 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000001100;	/* SHA2 = 1, SHA1 = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100002000;	/* SHA3 = 1, SHA2 = 2 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000002000;	/* SHA3 = 0, SHA2 = 2 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100001000;	/* SHA3 = 1, SHA2 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x200000000;	/* SHA3 = 2, SHA2 = 0, SHA1 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x200001100;	/* SHA3 = 2, SHA2 = 1, SHA1 = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x300003300;	/* SHA3 = 3, SHA2 = 3, SHA1 = 3 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+}
+
+/* Tests for validate_id_aa64isar1_el1() */
+static void validate_id_aa64isar1_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v, org_limit;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64ISAR1_EL1);
+
+	/*
+	 * Tests for GPI/GPA/API/APA.
+	 * Arm ARM says:
+	 * If GPA is non-zero, GPI must be zero.
+	 * If GPI is non-zero, GPA must be zero.
+	 * If APA is non-zero, API must be zero.
+	 * If API is non-zero, APA must be zero.
+	 */
+
+	v = 0x11000110;	/* GPI = 1, GPA = 1, API = 1, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x11000100;	/* GPI = 1, GPA = 1, API = 1, APA = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x11000010;	/* GPI = 1, GPA = 1, API = 0, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000110;	/* GPI = 1, GPA = 0, API = 1, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000110;	/* GPI = 0, GPA = 1, API = 1, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	/* Tests with PTRAUTH disabled */
+
+	/* Just for convenience, set all of GPI/GPA/API/APA to 1. */
+	org_limit = id_reg->vcpu_limit_val;
+	id_reg->vcpu_limit_val = 0x11000110;
+
+	v = 0x00000000;	/* GPI = 0, GPA = 0, API = 0, APA = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000100;	/* GPI = 1, GPA = 0, API = 1, APA = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000010;	/* GPI = 1, GPA = 0, API = 0, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000100;	/* GPI = 0, GPA = 1, API = 1, APA = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000010;	/* GPI = 0, GPA = 1, API = 0, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	if (!system_has_full_ptr_auth()) {
+		id_reg->vcpu_limit_val = org_limit;
+		kunit_skip(test, "(No PTRAUTH support. Partial skip)");
+		/* Not reached */
+	}
+
+	/* Tests with PTRAUTH enabled */
+	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+
+	v = 0x10000100;	/* GPI = 1, GPA = 0, API = 1, APA = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000010;	/* GPI = 1, GPA = 0, API = 0, APA = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000100;	/* GPI = 0, GPA = 1, API = 1, APA = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000010;	/* GPI = 0, GPA = 1, API = 0, APA = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0;
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+	/* Restore the original value */
+	id_reg->vcpu_limit_val = org_limit;
+}
+
+/* Tests for validate_id_aa64isar2_el1() */
+static void validate_id_aa64isar2_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v, org_limit;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64ISAR2_EL1);
+
+	/* Tests for GPA3/APA3. */
+
+	/* Tests with PTRAUTH disabled  */
+
+	/* Set the limit of APA3/GPA3 to 1. */
+	org_limit = id_reg->vcpu_limit_val;
+	id_reg->vcpu_limit_val = 0x1100;
+
+	v = 0x0000;	/* GPA3 = 0, APA3 = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1000;	/* GPA3 = 1, APA3 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x0100;	/* GPA3 = 0, APA3 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1100;	/* GPA3 = 1, APA3 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	if (!system_has_full_ptr_auth()) {
+		id_reg->vcpu_limit_val = org_limit;
+		kunit_skip(test, "(No PTRAUTH support. Partial skip)");
+		/* Not reached */
+	}
+
+	/* Tests with PTRAUTH enabled */
+	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+
+	v = 0x1100;	/* APA3 = 1, GPA3 = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1000;	/* APA3 = 1, GPA3 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x0100;	/* APA3 = 0, GPA3 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0;
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	/* Restore the original value */
+	id_reg->vcpu_limit_val = org_limit;
+}
+
+/* Tests for validate_id_aa64mmfr0_el1() */
+static void validate_id_aa64mmfr0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc id_data, *id_reg;
+	const struct tgran_test *t4, *t64, *t16;
+	struct kvm_vcpu *vcpu;
+	int field4, field4_2, field64, field64_2, field16, field16_2;
+	u64 v, v4, lim4, v64, lim64, v16, lim16;
+	int i, j, ret;
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64MMFR0_EL1);
+
+	/* Save the original id_reg_desc (and restore later) */
+	memcpy(&id_data, id_reg, sizeof(id_data));
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	t4 = test->param_value;
+	field4_2 = t4->gran2_field;
+	field4 = tgran2_to_tgran1_shift(field4_2);
+	v4 = MAKE_MMFR0_TGRAN(field4, t4->gran1, field4_2, t4->gran2);
+	lim4 = MAKE_MMFR0_TGRAN(field4, t4->gran1_lim, field4_2, t4->gran2_lim);
+
+	/*
+	 * For each given tgran4_2 param, test validate_id_aa64mmfr0_el1
+	 * with every combination of the tgran64_2 and tgran16_2 params.
+	 */
+	for (i = 0; i < ARRAY_SIZE(tgran64_2_test_params); i++) {
+		t64 = &tgran64_2_test_params[i];
+		field64_2 = t64->gran2_field;
+		field64 = tgran2_to_tgran1_shift(field64_2);
+		v64 = MAKE_MMFR0_TGRAN(field64, t64->gran1,
+				       field64_2, t64->gran2);
+		lim64 = MAKE_MMFR0_TGRAN(field64, t64->gran1_lim,
+					 field64_2, t64->gran2_lim);
+
+		for (j = 0; j < ARRAY_SIZE(tgran16_2_test_params); j++) {
+			t16 = &tgran16_2_test_params[j];
+
+			field16_2 = t16->gran2_field;
+			field16 = tgran2_to_tgran1_shift(field16_2);
+			v16 = MAKE_MMFR0_TGRAN(field16, t16->gran1,
+					       field16_2, t16->gran2);
+			lim16 = MAKE_MMFR0_TGRAN(field16, t16->gran1_lim,
+						 field16_2, t16->gran2_lim);
+
+			/* Build id_aa64mmfr0_el1 from tgran16/64/4 values */
+			v = v16 | v64 | v4;
+			id_reg->vcpu_limit_val = lim16 | lim64 | lim4;
+
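+			/* Expect the first non-zero error among t4/t64/t16. */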
+			ret = t4->ret ? t4->ret : t64->ret;
+			ret = ret ? ret : t16->ret;
+			KUNIT_EXPECT_EQ(test,
+				validate_id_aa64mmfr0_el1(vcpu, id_reg, v),
+				ret);
+		}
+	}
+
+	/* Restore id_reg_desc */
+	memcpy(id_reg, &id_data, sizeof(id_data));
+}
+
+/* Tests for validate_id_aa64dfr0_el1() */
+static void validate_id_aa64dfr0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64DFR0_EL1);
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for CTX_CMPS/BRPS.
+	 * Number of context-aware breakpoints can be no more than number
+	 * of supported breakpoints.
+	 */
+	v = 0x10001000;	/* CTX_CMPS = 1, BRPS = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x20001000;	/* CTX_CMPS = 2, BRPS = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Tests for PMUVer */
+
+	/* Tests with PMUv3 disabled. */
+
+	v = 0x000;	/* PMUVER = 0x0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0xf00;	/* PMUVER = 0xf */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100;	/* PMUVER = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Tests with PMUv3 enabled */
+	set_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features);
+
+	v = 0x000;	/* PMUVER = 0x0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0xf00;	/* PMUVER = 0xf */
+	KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100;	/* PMUVER = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+}
+
+/* Tests for validate_id_dfr0_el1() */
+static void validate_id_dfr0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	id_reg = get_id_reg_desc(SYS_ID_DFR0_EL1);
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Tests for PERFMON */
+
+	/* Tests with PMUv3 disabled */
+
+	v = 0x0000000;	/* PERFMON = 0x0 */
+	KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0xf000000;	/* PERFMON = 0xf */
+	KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1000000;	/* PERFMON = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x2000000;	/* PERFMON = 2 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x3000000;	/* PERFMON = 3 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Tests with PMUv3 enabled */
+	set_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features);
+
+	v = 0x0000000;	/* PERFMON = 0x0 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0xf000000;	/* PERFMON = 0xf */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1000000;	/* PERFMON = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x2000000;	/* PERFMON = 2 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x3000000;	/* PERFMON = 3 */
+	KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+}
+
+/* Tests for validate_mvfr1_el1(). */
+static void validate_mvfr1_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	id_reg = get_id_reg_desc(SYS_MVFR1_EL1);
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for FPHP/SIMDHP.
+	 * Arm ARM says the level of support indicated by FPHP must be
+	 * equivalent to the level of support indicated by the SIMDHP,
+	 * meaning the permitted values are:
+	 * FPHP = 0x0, SIMDHP = 0x0
+	 * FPHP = 0x2, SIMDHP = 0x1
+	 * FPHP = 0x3, SIMDHP = 0x2
+	 */
+	v = 0x0000000;	/* FPHP = 0, SIMDHP = 0 */
+	KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x2100000;	/* FPHP = 2, SIMDHP = 1 */
+	KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x3200000;	/* FPHP = 3, SIMDHP = 2 */
+	KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1100000;	/* FPHP = 1, SIMDHP = 1 */
+	KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x2200000;	/* FPHP = 2, SIMDHP = 2 */
+	KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x3300000;	/* FPHP = 3, SIMDHP = 3 */
+	KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = (u64)-1;
+	KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+}
+
+/*
+ * Helper function for validate_id_reg_test().
+ * We don't use KUNIT_ASSERT or kunit_skip because this is a helper test
+ * function and we are not sure if it's safe to exit from the test case.
+ */
+static void validate_id_reg_test_one_field(struct kunit *test,
+		u32 id, int pos, int fval, int flimit,
+		bool is_signed, struct id_reg_desc *idr)
+{
+	struct kvm_vcpu *vcpu;
+	int fmin = is_signed ? -1 : 0;
+	int fmax = is_signed ? 7 : 15;
+	u64 fmask = ARM64_FEATURE_FIELD_MASK;
+	u64 val;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	if (flimit > fmax) {
+		/* Shouldn't happen. Make the test fail. */
+		KUNIT_EXPECT_FALSE(test, flimit > fmax);
+		kunit_err(test, "%s: flimit(%d) > fmax(%d). Must be a test bug",
+			  __func__, flimit, fmax);
+		return;
+	}
+
+	if (fval > fmin) {
+		/* Set the field to a smaller value */
+		val = ((u64)(fval - 1) & fmask) << pos;
+		KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, idr, val), 0);
+	}
+
+	if (fval < flimit) {
+		/* Set the field to a larger value, but smaller than flimit */
+		val = ((u64)(fval + 1) & fmask) << pos;
+		KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, idr, val), 0);
+
+		/* Set the field to the flimit */
+		val = ((u64)flimit & fmask) << pos;
+		KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, idr, val), 0);
+	}
+
+	if (flimit < fmax) {
+		/* Set the field to a larger value than flimit */
+		val = ((u64)(flimit + 1) & fmask) << pos;
+		KUNIT_EXPECT_NE(test, validate_id_reg(vcpu, idr, val), 0);
+
+		/* Test with ignore_mask */
+		if (idr) {
+			idr->ignore_mask = fmask << pos;
+			KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, idr, val), 0);
+		}
+	}
+	test_kvm_vcpu_fini(test, vcpu);
+}
+
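+/* Fill the encoding fields (Op0/Op1/CRn/CRm/Op2) of a sys_reg_desc. */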
+static void set_sys_desc(struct sys_reg_desc *rd, u32 encoding)
+{
+	rd->Op0 = sys_reg_Op0(encoding);
+	rd->Op1 = sys_reg_Op1(encoding);
+	rd->CRn = sys_reg_CRn(encoding);
+	rd->CRm = sys_reg_CRm(encoding);
+	rd->Op2 = sys_reg_Op2(encoding);
+}
+
+/* Tests for validate_id_reg(). */
+static void validate_id_reg_test(struct kunit *test)
+{
+	struct id_reg_desc idr_data, *idr, *original_idr;
+	u32 id;
+	int fval, flim, pos;
+	u64 val;
+	bool sign;
+
+	/* Use AA64PFR0_EL1 because it includes both signed and unsigned fields */
+	id = SYS_ID_AA64PFR0_EL1;
+
+	/* Use a temporary id_reg_desc for testing */
+	idr = &idr_data;
+
+	fval = 0x1;
+	flim = 0x2;
+
+	/* Test with unsigned field */
+	pos = ID_AA64PFR0_RAS_SHIFT;
+
+	/* Set up id_reg_desc for testing */
+	memset(idr, 0, sizeof(*idr));
+	set_sys_desc((struct sys_reg_desc *)&idr->reg_desc, id);
+
+	/* Copy ftr_bits from the original one */
+	original_idr = get_id_reg_desc(id);
+	memcpy(idr->ftr_bits, original_idr->ftr_bits, sizeof(idr->ftr_bits));
+	idr->vcpu_limit_val = (u64)flim << pos;
+	validate_id_reg_test_one_field(test, id, pos, fval, flim, false, idr);
+
+	/* Test with signed field */
+	pos = ID_AA64PFR0_FP_SHIFT;
+
+	/* Set up id_reg_desc for testing */
+	memset(idr, 0, sizeof(*idr));
+	set_sys_desc((struct sys_reg_desc *)&idr->reg_desc, id);
+
+	/* Copy ftr_bits from the original one */
+	memcpy(idr->ftr_bits, original_idr->ftr_bits, sizeof(idr->ftr_bits));
+
+	idr->vcpu_limit_val = (u64)flim << pos;
+	validate_id_reg_test_one_field(test, id, pos, fval, flim, true, idr);
+
+	/* Test with the original limit val */
+	val = original_idr->vcpu_limit_val;
+	idr->vcpu_limit_val = val;
+
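+	/* Walk every 4-bit feature field of the 64-bit register value. */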
+	for (pos = 0; pos < 64; pos += 4) {
+		if (pos == ID_AA64PFR0_FP_SHIFT ||
+		    pos == ID_AA64PFR0_ASIMD_SHIFT)
+			sign = true;
+		else
+			sign = false;
+
+		fval = cpuid_feature_extract_field(val, pos, sign);
+		validate_id_reg_test_one_field(test, id, pos, fval, fval,
+					       sign, idr);
+	}
+}
+
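+/* Test cases; the *_gen_params generators come from KUNIT_ARRAY_PARAM(). */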
+static struct kunit_case kvm_sys_regs_test_cases[] = {
+	KUNIT_CASE_PARAM(vcpu_id_reg_feature_frac_check_test, frac_gen_params),
+	KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran4_2_gen_params),
+	KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran64_2_gen_params),
+	KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran16_2_gen_params),
+	KUNIT_CASE(validate_id_aa64pfr0_el1_test),
+	KUNIT_CASE(validate_id_aa64pfr1_el1_test),
+	KUNIT_CASE(validate_id_aa64isar0_el1_test),
+	KUNIT_CASE(validate_id_aa64isar1_el1_test),
+	KUNIT_CASE(validate_id_aa64isar2_el1_test),
+	KUNIT_CASE_PARAM(validate_id_aa64mmfr0_el1_test, tgran4_2_gen_params),
+	KUNIT_CASE(validate_id_aa64dfr0_el1_test),
+	KUNIT_CASE(validate_id_dfr0_el1_test),
+	KUNIT_CASE(validate_mvfr1_el1_test),
+	KUNIT_CASE(validate_id_reg_test),
+	{}
+};
+
+static struct kunit_suite kvm_sys_regs_test_suite = {
+	.name = "kvm-sys-regs-test-suite",
+	.test_cases = kvm_sys_regs_test_cases,
+};
+
+kunit_test_suites(&kvm_sys_regs_test_suite);
+MODULE_LICENSE("GPL");
-- 
2.36.0.rc0.470.gd361397f0d-goog

^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 23/38] KVM: arm64: Add kunit test for ID register validation
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add kunit tests for functions that are used for validation of ID
registers, CONFIG_KVM_KUNIT_TEST option to enable the tests, and
.kunitconfig to run the kunit tests.

Since tools/testing/kunit/qemu_configs/arm64.py, which is the
default qemu_config for arm64, doesn't have all params that the
new tests needs, 'extra_qemu_params' in the default one needs to be
replaced with the one below to fully run all of those kunit tests.

 extra_qemu_params=['-M virt,virtualization=on,mte=on', '-cpu max,sve=on'])
 (the default one: extra_qemu_params=['-machine virt', '-cpu cortex-a57'])

The outputs from the tests are:
-----------------------------------------------------------------------
$ tools/testing/kunit/kunit.py run --timeout=60 --jobs=`nproc --all` \
          --arch=arm64 --cross_compile=aarch64-linux-gnu- \
          --qemu_config arm64_kvm_min.py \
          --kunitconfig=arch/arm64/kvm/.kunitconfig
[22:45:39] Configuring KUnit Kernel ...
[22:45:39] Building KUnit Kernel ...
Populating config with:
$ make ARCH=arm64 olddefconfig CROSS_COMPILE=aarch64-linux-gnu- O=.kunit
Building with:
$ make ARCH=arm64 --jobs=96 CROSS_COMPILE=aarch64-linux-gnu- O=.kunit
[22:45:47] Starting KUnit Kernel (1/1)...
[22:45:47] ============================================================
Running tests with:
$ qemu-system-aarch64 -nodefaults -m 1024 -kernel .kunit/arch/arm64/boot/Image.gz -append 'mem=1G console=tty kunit_shutdown=halt console=ttyAMA0 kunit_shutdown=reboot' -no-reboot -nographic -serial stdio -M virt,virtualization=on,mte=on -cpu max,sve=on
[22:45:48] ========== kvm-sys-regs-test-suite (14 subtests) ===========
[22:45:48] =========== vcpu_id_reg_feature_frac_check_test ============
[22:45:48] [PASSED] feat - shift:28, val:1, lim:2, frac - shift:12, val:1, lim:1
[22:45:48] [PASSED] feat - shift:28, val:1, lim:2, frac - shift:12, val:1, lim:2
[22:45:48] [PASSED] feat - shift:28, val:1, lim:2, frac - shift:12, val:2, lim:1
[22:45:48] [PASSED] feat - shift:28, val:1, lim:1, frac - shift:12, val:1, lim:1
[22:45:48] [PASSED] feat - shift:28, val:1, lim:1, frac - shift:12, val:1, lim:2
[22:45:48] [PASSED] feat - shift:28, val:1, lim:1, frac - shift:12, val:2, lim:1
[22:45:48] ======= [PASSED] vcpu_id_reg_feature_frac_check_test =======
[22:45:48] ============ validate_id_aa64mmfr0_tgran2_test =============
[22:45:48] [PASSED] gran2(field=40): val=2, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=2, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=1, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=1 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=2 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=2 gran1: val=1 limit=0
[22:45:48] [PASSED] gran2(field=40): val=1, lim=0 gran1: val=0 limit=15
[22:45:48] [PASSED] gran2(field=40): val=1, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=2, lim=0 gran1: val=15 limit=15
[22:45:48] [PASSED] gran2(field=40): val=2, lim=0 gran1: val=0 limit=0
[22:45:48] ======== [PASSED] validate_id_aa64mmfr0_tgran2_test ========
[22:45:48] ============ validate_id_aa64mmfr0_tgran2_test =============
[22:45:48] [PASSED] gran2(field=36): val=2, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=2, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=1, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=0, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=0, lim=1 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=36): val=0, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=0, lim=2 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=36): val=1, lim=0 gran1: val=0 limit=15
[22:45:48] [PASSED] gran2(field=36): val=1, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=36): val=2, lim=0 gran1: val=15 limit=15
[22:45:48] [PASSED] gran2(field=36): val=2, lim=0 gran1: val=0 limit=0
[22:45:48] ======== [PASSED] validate_id_aa64mmfr0_tgran2_test ========
[22:45:48] ============ validate_id_aa64mmfr0_tgran2_test =============
[22:45:48] [PASSED] gran2(field=32): val=2, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=2, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=1, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=1 gran1: val=1 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=0, lim=2 gran1: val=2 limit=2
[22:45:48] [PASSED] gran2(field=32): val=1, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=1, lim=0 gran1: val=0 limit=1
[22:45:48] [PASSED] gran2(field=32): val=2, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=32): val=2, lim=0 gran1: val=0 limit=1
[22:45:48] [PASSED] gran2(field=32): val=2, lim=0 gran1: val=0 limit=2
[22:45:48] ======== [PASSED] validate_id_aa64mmfr0_tgran2_test ========
[22:45:48] [PASSED] validate_id_aa64pfr0_el1_test
[22:45:48] [PASSED] validate_id_aa64pfr1_el1_test
[22:45:48] [PASSED] validate_id_aa64isar0_el1_test
[22:45:48] [PASSED] validate_id_aa64isar1_el1_test
[22:45:48] [PASSED] validate_id_aa64isar2_el1_test
[22:45:48] ============== validate_id_aa64mmfr0_el1_test ==============
[22:45:48] [PASSED] gran2(field=40): val=2, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=2, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=1, lim=2 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=1 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=1 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=2 gran1: val=15 limit=0
[22:45:48] [PASSED] gran2(field=40): val=0, lim=2 gran1: val=1 limit=0
[22:45:48] [PASSED] gran2(field=40): val=1, lim=0 gran1: val=0 limit=15
[22:45:48] [PASSED] gran2(field=40): val=1, lim=0 gran1: val=0 limit=0
[22:45:48] [PASSED] gran2(field=40): val=2, lim=0 gran1: val=15 limit=15
[22:45:48] [PASSED] gran2(field=40): val=2, lim=0 gran1: val=0 limit=0
[22:45:48] ========= [PASSED] validate_id_aa64mmfr0_el1_test ==========
[22:45:48] [PASSED] validate_id_aa64dfr0_el1_test
[22:45:48] [PASSED] validate_id_dfr0_el1_test
[22:45:48] [PASSED] validate_mvfr1_el1_test
[22:45:48] [PASSED] validate_id_reg_test
[22:45:48] ============= [PASSED] kvm-sys-regs-test-suite =============
[22:45:48] ============================================================
[22:45:48] Testing complete. Passed: 63, Failed: 0, Crashed: 0, Skipped: 0, Errors: 0
[22:45:48] Elapsed time: 8.977s total, 0.003s configuring, 7.300s building, 1.620s running
-----------------------------------------------------------------------

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/.kunitconfig    |    4 +
 arch/arm64/kvm/Kconfig         |   11 +
 arch/arm64/kvm/sys_regs.c      |    4 +
 arch/arm64/kvm/sys_regs_test.c | 1068 ++++++++++++++++++++++++++++++++
 4 files changed, 1087 insertions(+)
 create mode 100644 arch/arm64/kvm/.kunitconfig
 create mode 100644 arch/arm64/kvm/sys_regs_test.c

diff --git a/arch/arm64/kvm/.kunitconfig b/arch/arm64/kvm/.kunitconfig
new file mode 100644
index 000000000000..c564c98fc319
--- /dev/null
+++ b/arch/arm64/kvm/.kunitconfig
@@ -0,0 +1,4 @@
+CONFIG_KUNIT=y
+CONFIG_VIRTUALIZATION=y
+CONFIG_KVM=y
+CONFIG_KVM_KUNIT_TEST=y
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 8a5fbbf084df..0d628d0e7dd5 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -56,4 +56,15 @@ config NVHE_EL2_DEBUG
 
 	  If unsure, say N.
 
+config KVM_KUNIT_TEST
+	bool "KUnit tests for KVM on ARM64 processors" if !KUNIT_ALL_TESTS
+	depends on KVM && KUNIT
+	default KUNIT_ALL_TESTS
+	help
+	  Say Y here to enable KUnit tests for the KVM on ARM64.
+	  Only useful for KVM/ARM development and are not for inclusion into
+	  a production build.
+
+	  If unsure, say N.
+
 endif # VIRTUALIZATION
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index fc7a8f2539a4..a71c52aee34e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -4486,3 +4486,7 @@ static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
 			r->reset(vcpu, r);
 	}
 }
+
+#if IS_ENABLED(CONFIG_KVM_KUNIT_TEST)
+#include "sys_regs_test.c"
+#endif
diff --git a/arch/arm64/kvm/sys_regs_test.c b/arch/arm64/kvm/sys_regs_test.c
new file mode 100644
index 000000000000..dff146fe0e62
--- /dev/null
+++ b/arch/arm64/kvm/sys_regs_test.c
@@ -0,0 +1,1068 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KUnit tests for arch/arm64/kvm/sys_regs.c.
+ */
+
+#include <linux/module.h>
+#include <kunit/test.h>
+#include <kunit/test.h>
+#include <linux/kvm_host.h>
+#include <asm/cpufeature.h>
+#include "asm/sysreg.h"
+
+/*
+ * Create a vcpu with the minimum fields required for testing in this file
+ * including the struct kvm.  Any resources that are allocated by this
+ * function must be allocated by kunit_* so that we don't need to explicitly
+ * free them.
+ */
+static struct kvm_vcpu *test_kvm_vcpu_init(struct kunit *test)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm *kvm;
+
+	kvm = kunit_kzalloc(test, sizeof(struct kvm), GFP_KERNEL);
+	if (!kvm)
+		return NULL;
+
+	vcpu = kunit_kzalloc(test, sizeof(struct kvm_vcpu), GFP_KERNEL);
+	if (!vcpu) {
+		kunit_kfree(test, kvm);
+		return NULL;
+	}
+
+	vcpu->cpu = -1;
+	vcpu->kvm = kvm;
+	vcpu->vcpu_id = 0;
+
+	return vcpu;
+}
+
+static void test_kvm_vcpu_fini(struct kunit *test, struct kvm_vcpu *vcpu)
+{
+	if (vcpu->kvm)
+		kunit_kfree(test, vcpu->kvm);
+
+	kunit_kfree(test, vcpu);
+}
+
+/* Test parameter information to test arm64_check_features */
+struct check_features_test {
+	u64	check_types;
+	u64	value;
+	u64	limit;
+	int	expected;
+};
+
+
+/* Used to define test parameters of vcpu_id_reg_feature_frac_check_test() */
+struct feat_info {
+	u32	id;
+	u32	shift;
+	u32	value;
+	u32	limit;
+};
+
+struct frac_check_test {
+	struct feat_info feat;
+	struct feat_info frac_feat;
+	int ret;
+};
+
+#define	FRAC_FEAT(id, shift, value, limit)	{id, shift, value, limit}
+
+/* Tests parameters of vcpu_id_reg_feature_frac_check_test() */
+struct frac_check_test frac_params[] = {
+	{
+		/*
+		 * The feature value is smaller than its limit.
+		 * Expect no error regardless of the frac value.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 2),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 1),
+		0,
+	},
+	{
+		/*
+		 * The feature value is smaller than its limit.
+		 * Expect no error regardless of the frac value.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 2),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 2),
+		0,
+	},
+	{
+		/*
+		 * The feature value is smaller than its limit.
+		 * Expect no error regardless of the frac value.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 2),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 2, 1),
+		0,
+	},
+	{
+		/*
+		 * Both the feature and frac values are same as their limits.
+		 * Expect no error.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 1),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 1),
+		0,
+	},
+	{
+		/*
+		 * The feature value is same as its limit, and the frac value
+		 * is smaller than its limit. Expect no error.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 1),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 1, 2),
+		0,
+	},
+	{
+		/*
+		 * The feature value is same as its limit, and the frac value
+		 * is larger than its limit. Expect an error.
+		 */
+		FRAC_FEAT(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_RAS_SHIFT, 1, 1),
+		FRAC_FEAT(SYS_ID_AA64PFR1_EL1, ID_AA64PFR1_RASFRAC_SHIFT, 2, 1),
+		-E2BIG,
+	},
+
+};
+
+static void frac_case_to_desc(struct frac_check_test *t, char *desc)
+{
+	struct feat_info *feat = &t->feat;
+	struct feat_info *frac = &t->frac_feat;
+
+	snprintf(desc, KUNIT_PARAM_DESC_SIZE,
+		 "feat - shift:%d, val:%d, lim:%d, frac - shift:%d, val:%d, lim:%d\n",
+		 feat->shift, feat->value, feat->limit,
+		 frac->shift, frac->value, frac->limit);
+}
+
+KUNIT_ARRAY_PARAM(frac, frac_params, frac_case_to_desc);
+
+/* Tests for vcpu_id_reg_feature_frac_check(). */
+static void vcpu_id_reg_feature_frac_check_test(struct kunit *test)
+{
+	struct kvm_vcpu *vcpu;
+	u32 id, frac_id;
+	struct id_reg_desc id_data, frac_id_data;
+	struct id_reg_desc *idr, *frac_idr;
+	struct feature_frac frac_data, *frac = &frac_data;
+	const struct frac_check_test *frct = test->param_value;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id = frct->feat.id;
+	frac_id = frct->frac_feat.id;
+
+	frac->id = id;
+	frac->shift = frct->feat.shift;
+	frac->frac_id = frac_id;
+	frac->frac_shift = frct->frac_feat.shift;
+
+	idr = get_id_reg_desc(id);
+	frac_idr = get_id_reg_desc(frac_id);
+
+	/* Save the original id_reg_desc (and restore later) */
+	memcpy(&id_data, idr, sizeof(id_data));
+	memcpy(&frac_id_data, frac_idr, sizeof(frac_id_data));
+
+	/* The id could be same as the frac_id */
+	idr->vcpu_limit_val = (u64)frct->feat.limit << frac->shift;
+	frac_idr->vcpu_limit_val |=
+			(u64)frct->frac_feat.limit << frac->frac_shift;
+
+	write_kvm_id_reg(vcpu->kvm, id, (u64)frct->feat.value << frac->shift);
+	write_kvm_id_reg(vcpu->kvm, frac_id,
+			  (u64)frct->frac_feat.value << frac->frac_shift);
+
+	KUNIT_EXPECT_EQ(test,
+			vcpu_id_reg_feature_frac_check(vcpu, frac),
+			frct->ret);
+
+	/* Restore id_reg_desc */
+	memcpy(idr, &id_data, sizeof(id_data));
+	memcpy(frac_idr, &frac_id_data, sizeof(frac_id_data));
+}
+
+/*
+ * Test parameter information to test validate_id_aa64mmfr0_tgran2
+ * and validate_id_aa64mmfr0_el1_test.
+ */
+struct tgran_test {
+	int gran2_field;
+	int gran2;
+	int gran2_lim;
+	int gran1;
+	int gran1_lim;
+	int ret;
+};
+
+/*
+ * Test parameters of validate_id_aa64mmfr0_tgran2_test() for TGran4_2.
+ * Defined values for the field are:
+ *  0x0: Support for 4KB granule at stage 2 is identified in TGran4.
+ *  0x1: 4KB granule not supported at stage 2.
+ *  0x2: 4KB granule supported at stage 2.
+ *  0x3: 4KB granule at stage 2 supports 52-bit input and output addresses.
+ *
+ * Defined values for the TGran4 are:
+ *  0x0: 4KB granule supported.
+ *  0x1: 4KB granule supports 52-bit input and output addresses.
+ *  0xf: 4KB granule not supported.
+ */
+struct tgran_test tgran4_2_test_params[] = {
+	/* Enable 4KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 2,  0,   0, 0},
+	/* Enable 4KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 1,  0,   0, -E2BIG},
+	/* Disable 4KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 1, 2,  0,   0, 0},
+	/* Enable 4KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 0,  0,   0, 0},
+	/* Disable 4KB granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 1, 0xf,   0, 0},
+	/* Enable 4KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 1,   0,   0, -E2BIG},
+	/* Disable 4KB granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 2, 0xf,   0, 0},
+	/*
+	 * Enable 4KB granule with 52 bit address on the host that doesn't
+	 * support it.
+	 */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 0, 2,   1,   0, -E2BIG},
+	/* Disable 4KB granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 1, 0,   0, 0xf,  0},
+	/* Disable 4KB granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 1, 0,   0,   0,  0},
+	/* Enable 4KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 0, 0xf, 0xf,  -E2BIG},
+	/* Enable 4KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN4_2_SHIFT, 2, 0,   0,   0,  0},
+};
+
+/*
+ * Test parameters of validate_id_aa64mmfr0_tgran2_test() for TGran64_2.
+ * Defined values for the field are:
+ *  0x0: Support for 64KB granule at stage 2 is identified in TGran64.
+ *  0x1: 64KB granule not supported at stage 2.
+ *  0x2: 64KB granule supported at stage 2.
+ *  0x3: 64KB granule at stage 2 supports 52-bit input and output addresses.
+ *
+ * Defined values for the TGran64 are:
+ *  0x0: 64KB granule supported.
+ *  0xf: 64KB granule not supported.
+ */
+struct tgran_test tgran64_2_test_params[] = {
+	/* Enable 64KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 2,   0,   0, 0},
+	/* Enable 64KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 1,   0,   0, -E2BIG},
+	/* Enable 64KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 1, 2,   0,   0, 0},
+	/* Enable 64KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 0,   0,   0, 0},
+	/* Disable 64KB granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 1, 0xf,   0, 0},
+	/* Enable 64KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 1,   0,   0, -E2BIG},
+	/* Disable 64KB granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 0, 2, 0xf,   0, 0},
+	/* Disable 64KB granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 1, 0,   0, 0xf, 0},
+	/* Disable 64KB granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 1, 0,   0,   0, 0},
+	/* Enable 64KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 0, 0xf, 0xf, -E2BIG},
+	/* Enable 64KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN64_2_SHIFT, 2, 0,   0,   0, 0},
+};
+
+/*
+ * Test parameters of validate_id_aa64mmfr0_tgran2_test() for TGran16_2
+ * Defined values for the field are:
+ *  0x0: Support for 16KB granule at stage 2 is identified in TGran16.
+ *  0x1: 16KB granule not supported at stage 2.
+ *  0x2: 16KB granule supported at stage 2.
+ *  0x3: 16KB granule at stage 2 supports 52-bit input and output addresses.
+ *
+ * Defined values for the TGran16 are:
+ *  0x0: 16KB granule not supported.
+ *  0x1: 16KB granule supported.
+ *  0x2: 16KB granule supports 52-bit input and output addresses.
+ */
+struct tgran_test tgran16_2_test_params[] = {
+	/* Enable 16KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 2,  0,  0, 0},
+	/* Enable 16KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 1,  0,  0, -E2BIG},
+	/* Disable 16KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 1, 2,  0,  0, 0},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 0,  0,  0, 0},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 1,  0,  0, 0},
+	/* Enable 16KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 1,  1,  0, -E2BIG},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 2,  0,  0, 0},
+	/*
+	 * Enable 16KB granule with 52 bit address on the host that doesn't
+	 * support it.
+	 */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 0, 2,  2,  2, -E2BIG},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 1, 0,  0,  0, 0},
+	/* Disable 16KB granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 1, 0,  0,  1, 0},
+	/* Enable 16KB granule on the host that doesn't support the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 0,  0,  0, -E2BIG},
+	/* Enable 16KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 0,  0,  1, 0},
+	/* Enable 16KB granule on the host that supports the granule */
+	{ID_AA64MMFR0_TGRAN16_2_SHIFT, 2, 0,  0,  2, 0},
+};
+
+static void tgran2_case_to_desc(struct tgran_test *t, char *desc)
+{
+	snprintf(desc, KUNIT_PARAM_DESC_SIZE,
+		 "gran2(field=%d): val=%d, lim=%d gran1: val=%d limit=%d\n",
+		 t->gran2_field, t->gran2, t->gran2_lim,
+		 t->gran1, t->gran1_lim);
+}
+
+KUNIT_ARRAY_PARAM(tgran4_2, tgran4_2_test_params, tgran2_case_to_desc);
+KUNIT_ARRAY_PARAM(tgran64_2, tgran64_2_test_params, tgran2_case_to_desc);
+KUNIT_ARRAY_PARAM(tgran16_2, tgran16_2_test_params, tgran2_case_to_desc);
+
+#define	MAKE_MMFR0_TGRAN(shift1, gran1, shift2, gran2)		\
+	(((u64)((gran1) & 0xf) << (shift1)) |			\
+	 ((u64)((gran2) & 0xf) << (shift2)))
+
+/* Return the bit position of TGranX field for the given TGranX_2 field. */
+static int tgran2_to_tgran1_shift(int tgran2_shift)
+{
+	int tgran1_shift = -1;
+
+	switch (tgran2_shift) {
+	case ID_AA64MMFR0_TGRAN4_2_SHIFT:
+		tgran1_shift = ID_AA64MMFR0_TGRAN4_SHIFT;
+		break;
+	case ID_AA64MMFR0_TGRAN64_2_SHIFT:
+		tgran1_shift = ID_AA64MMFR0_TGRAN64_SHIFT;
+		break;
+	case ID_AA64MMFR0_TGRAN16_2_SHIFT:
+		tgran1_shift = ID_AA64MMFR0_TGRAN16_SHIFT;
+		break;
+	default:
+		break;
+	}
+
+	return tgran1_shift;
+}
+
+/* Tests for validate_id_aa64mmfr0_el1(). */
+static void validate_id_aa64mmfr0_tgran2_test(struct kunit *test)
+{
+	const struct tgran_test *t = test->param_value;
+	int shift1, shift2;
+	u64 v, lim;
+
+	shift2 = t->gran2_field;
+	shift1 = tgran2_to_tgran1_shift(shift2);
+	v = MAKE_MMFR0_TGRAN(shift1, t->gran1, shift2, t->gran2);
+	lim = MAKE_MMFR0_TGRAN(shift1, t->gran1_lim, shift2, t->gran2_lim);
+
+	KUNIT_EXPECT_EQ(test, aa64mmfr0_tgran2_check(shift2, v, lim), t->ret);
+}
+
+/* Tests for validate_id_aa64pfr0_el1(). */
+static void validate_id_aa64pfr0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64PFR0_EL1);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for GIC.
+	 * GIC must be 1 when vGIC3 is configured.
+	 */
+	v = 0x0000000;	/* GIC = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Test with VGIC_V2 */
+	vcpu->kvm->arch.vgic.in_kernel = true;
+	vcpu->kvm->arch.vgic.vgic_model = KVM_DEV_TYPE_ARM_VGIC_V2;
+
+	v = 0x0000000;	/* GIC = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Test with VGIC_V3 */
+	vcpu->kvm->arch.vgic.vgic_model = KVM_DEV_TYPE_ARM_VGIC_V3;
+
+	v = 0x0000000;	/* GIC = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+	v = 0x1000000;	/* GIC = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Restore the original VGIC state */
+	vcpu->kvm->arch.vgic.in_kernel = false;
+	vcpu->kvm->arch.vgic.vgic_model = 0;
+
+	/*
+	 * Tests for AdvSIMD/FP.
+	 * AdvSIMD must have the same value as FP.
+	 */
+
+	/* Tests with SVE disabled */
+	v = 0x000010000;	/* AdvSIMD = 0, FP = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000100000;	/* AdvSIMD = 1, FP = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
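+	/*
+	 * FP/AdvSIMD are signed fields; 0xf here is -1 (not implemented),
+	 * which is a valid combination as both fields hold the same value.
+	 */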
+	v = 0x000ff0000;	/* AdvSIMD = 0xf, FP = 0xf */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100000000;	/* SVE = 1, AdvSIMD = 0, FP = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+	if (!system_supports_sve()) {
+		kunit_skip(test, "No SVE support. Partial skip)");
+		/* Not reached */
+	}
+
+	/* Tests with SVE enabled */
+	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE;
+
+	v = 0x100000000;	/* SVE = 1, AdvSIMD = 0, FP = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100ff0000;	/* SVE = 1, AdvSIMD = 0xf, FP = 0xf */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr0_el1(vcpu, id_reg, v), 0);
+
+	vcpu->arch.flags &= ~KVM_ARM64_GUEST_HAS_SVE;
+}
+
+/* Tests for validate_id_aa64pfr1_el1() */
+static void validate_id_aa64pfr1_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64PFR1_EL1);
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+
+	/* Tests for MTE */
+
+	/* Tests with MTE disabled */
+
+	v = 0x000;	/* MTE = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100;	/* MTE = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+
+	if (!system_supports_mte()) {
+		kunit_skip(test, "(No MTE support. Partial skip)");
+		/* Not reached */
+	}
+
+	/* Tests with MTE enabled */
+	set_bit(KVM_ARCH_FLAG_MTE_ENABLED, &vcpu->kvm->arch.flags);
+
+	v = 0x100;	/* MTE = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x0;	/* MTE = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64pfr1_el1(vcpu, id_reg, v), 0);
+}
+
+/* Tests for validate_id_aa64isar0_el1(). */
+static void validate_id_aa64isar0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64ISAR0_EL1);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for SM3/SM4.
+	 * Arm ARM says SM3 must have the same value as SM4.
+	 */
+
+	v = 0x01000000000;	/* SM4 = 0, SM3 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000000000;	/* SM4 = 1, SM3 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x11000000000;	/* SM3 = SM4 = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for SHA1/SHA2/SHA3.  Arm ARM says:
+	 * If SHA1 is 0x0, both SHA2 and SHA3 must be 0x0.
+	 * If SHA2 is 0x0, SHA1 must be 0x0.
+	 * If SHA2 is 0x2, SHA3 must be 0x1.
+	 * If SHA3 is 0x1, SHA2 must be 0x2.
+	 */
+
+	v = 0x000000100;	/* SHA2 = 0, SHA1 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000001000;	/* SHA2 = 1, SHA1 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000001100;	/* SHA2 = 1, SHA1 = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100002000;	/* SHA3 = 1, SHA2 = 2, SHA1 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x000002000;	/* SHA3 = 0, SHA2 = 2, SHA1 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100001000;	/* SHA3 = 1, SHA2 = 1, SHA1 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x200000000;	/* SHA3 = 2, SHA2 = 0, SHA1 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x200001100;	/* SHA3 = 2, SHA2 = 1, SHA1 = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x300003300;	/* SHA3 = 3, SHA2 = 3, SHA1 = 3 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar0_el1(vcpu, id_reg, v), 0);
+}
+
+/* Tests for validate_id_aa64isar1_el1() */
+static void validate_id_aa64isar1_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v, org_limit;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64ISAR1_EL1);
+
+	/*
+	 * Tests for GPI/GPA/API/APA.
+	 * Arm ARM says:
+	 * If GPA is non-zero, GPI must be zero.
+	 * If GPI is non-zero, GPA must be zero.
+	 * If APA is non-zero, API must be zero.
+	 * If API is non-zero, APA must be zero.
+	 */
+
+	v = 0x11000110;	/* GPI = 1, GPA = 1, API = 1, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x11000100;	/* GPI = 1, GPA = 1, API = 1, APA = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x11000010;	/* GPI = 1, GPA = 1, API = 0, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000110;	/* GPI = 1, GPA = 0, API = 1, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000110;	/* GPI = 0, GPA = 1, API = 1, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	/* Tests with PTRAUTH disabled */
+
+	/* Just for convenience, set all of GPI/GPA/API/APA to 1. */
+	org_limit = id_reg->vcpu_limit_val;
+	id_reg->vcpu_limit_val = 0x11000110;
+
+	v = 0x00000000;	/* GPI = 0, GPA = 0, API = 0, APA = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000100;	/* GPI = 1, GPA = 0, API = 1, APA = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000010;	/* GPI = 1, GPA = 0, API = 0, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000100;	/* GPI = 0, GPA = 1, API = 1, APA = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000010;	/* GPI = 0, GPA = 1, API = 0, APA = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	if (!system_has_full_ptr_auth()) {
+		id_reg->vcpu_limit_val = org_limit;
+		kunit_skip(test, "(No PTRAUTH support. Partial skip)");
+		/* Not reached */
+	}
+
+	/* Tests with PTRAUTH enabled */
+	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+
+	v = 0x10000100;	/* GPI = 1, GPA = 0, API = 1, APA = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x10000010;	/* GPI = 1, GPA = 0, API = 0, APA = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000100;	/* GPI = 0, GPA = 1, API = 1, APA = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x01000010;	/* GPI = 0, GPA = 1, API = 0, APA = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+
+	v = 0;
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar1_el1(vcpu, id_reg, v), 0);
+	/* Restore the original value */
+	id_reg->vcpu_limit_val = org_limit;
+}
+
+/* Tests for validate_id_aa64isar2_el1() */
+static void validate_id_aa64isar2_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v, org_limit;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64ISAR2_EL1);
+
+	/* Tests for GPA3/APA3. */
+
+	/* Tests with PTRAUTH disabled  */
+
+	/* Set the limit of APA3/GPA3 to 1. */
+	org_limit = id_reg->vcpu_limit_val;
+	id_reg->vcpu_limit_val = 0x1100;
+
+	v = 0x0000;	/* GPA3 = 0, APA3 = 0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1000;	/* APA3 = 1, GPA3 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x0100;	/* APA3 = 0, GPA3 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1100;	/* GPA3 = 1, APA3 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	if (!system_has_full_ptr_auth()) {
+		id_reg->vcpu_limit_val = org_limit;
+		kunit_skip(test, "(No PTRAUTH support. Partial skip)");
+		/* Not reached */
+	}
+
+	/* Tests with PTRAUTH enabled */
+	vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+
+	v = 0x1100;	/* APA3 = 1, GPA3 = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1000;	/* APA3 = 1, GPA3 = 0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0x0100;	/* APA3 = 0, GPA3 = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	v = 0;
+	KUNIT_EXPECT_NE(test, validate_id_aa64isar2_el1(vcpu, id_reg, v), 0);
+
+	/* Restore the original value */
+	id_reg->vcpu_limit_val = org_limit;
+}
+
+/* Tests for validate_id_aa64mmfr0_el1() */
+static void validate_id_aa64mmfr0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc id_data, *id_reg;
+	const struct tgran_test *t4, *t64, *t16;
+	struct kvm_vcpu *vcpu;
+	int field4, field4_2, field64, field64_2, field16, field16_2;
+	u64 v, v4, lim4, v64, lim64, v16, lim16;
+	int i, j, ret;
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64MMFR0_EL1);
+
+	/* Save the original id_reg_desc (and restore later) */
+	memcpy(&id_data, id_reg, sizeof(id_data));
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	t4 = test->param_value;
+	field4_2 = t4->gran2_field;
+	field4 = tgran2_to_tgran1_shift(field4_2);
+	v4 = MAKE_MMFR0_TGRAN(field4, t4->gran1, field4_2, t4->gran2);
+	lim4 = MAKE_MMFR0_TGRAN(field4, t4->gran1_lim, field4_2, t4->gran2_lim);
+
+	/*
+	 * For each given gran4_2 params, test validate_id_aa64mmfr0_el1
+	 * with each of tgran64_2 and tgran16_2 params.
+	 */
+	for (i = 0; i < ARRAY_SIZE(tgran64_2_test_params); i++) {
+		t64 = &tgran64_2_test_params[i];
+		field64_2 = t64->gran2_field;
+		field64 = tgran2_to_tgran1_shift(field64_2);
+		v64 = MAKE_MMFR0_TGRAN(field64, t64->gran1,
+				       field64_2, t64->gran2);
+		lim64 = MAKE_MMFR0_TGRAN(field64, t64->gran1_lim,
+					 field64_2, t64->gran2_lim);
+
+		for (j = 0; j < ARRAY_SIZE(tgran16_2_test_params); j++) {
+			t16 = &tgran16_2_test_params[j];
+
+			field16_2 = t16->gran2_field;
+			field16 = tgran2_to_tgran1_shift(field16_2);
+			v16 = MAKE_MMFR0_TGRAN(field16, t16->gran1,
+					       field16_2, t16->gran2);
+			lim16 = MAKE_MMFR0_TGRAN(field16, t16->gran1_lim,
+						 field16_2, t16->gran2_lim);
+
+			/* Build id_aa64mmfr0_el1 from tgran16/64/4 values */
+			v = v16 | v64 | v4;
+			id_reg->vcpu_limit_val = lim16 | lim64 | lim4;
+
+			ret = t4->ret ? t4->ret : t64->ret;
+			ret = ret ? ret : t16->ret;
+			KUNIT_EXPECT_EQ(test,
+				validate_id_aa64mmfr0_el1(vcpu, id_reg, v),
+				ret);
+		}
+	}
+
+	/* Restore id_reg_desc */
+	memcpy(id_reg, &id_data, sizeof(id_data));
+}
+
+/* Tests for validate_id_aa64dfr0_el1() */
+static void validate_id_aa64dfr0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	id_reg = get_id_reg_desc(SYS_ID_AA64DFR0_EL1);
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for CTX_CMPS/BRPS.
+	 * Number of context-aware breakpoints can be no more than number
+	 * of supported breakpoints.
+	 */
+	v = 0x10001000;	/* CTX_CMPS = 1, BRPS = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x20001000;	/* CTX_CMPS = 2, BRPS = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Tests for PMUVer */
+
+	/* Tests with PMUv3 disabled. */
+
+	v = 0x000;	/* PMUVER = 0x0 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0xf00;	/* PMUVER = 0xf */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100;	/* PMUVER = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Tests with PMUv3 enabled */
+	set_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features);
+
+	v = 0x000;	/* PMUVER = 0x0 */
+	KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0xf00;	/* PMUVER = 0xf */
+	KUNIT_EXPECT_NE(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x100;	/* PMUVER = 1 */
+	KUNIT_EXPECT_EQ(test, validate_id_aa64dfr0_el1(vcpu, id_reg, v), 0);
+}
+
+/* Tests for validate_id_dfr0_el1() */
+static void validate_id_dfr0_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	id_reg = get_id_reg_desc(SYS_ID_DFR0_EL1);
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Tests for PERFMON */
+
+	/* Tests with PMUv3 disabled */
+
+	v = 0x0000000;	/* PERFMON = 0x0 */
+	KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0xf000000;	/* PERFMON = 0xf */
+	KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1000000;	/* PERFMON = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x2000000;	/* PERFMON = 2 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x3000000;	/* PERFMON = 3 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	/* Tests with PMUv3 enabled */
+	set_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features);
+
+	v = 0x0000000;	/* PERFMON = 0x0 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0xf000000;	/* PERFMON = 0xf */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1000000;	/* PERFMON = 1 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x2000000;	/* PERFMON = 2 */
+	KUNIT_EXPECT_NE(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+
+	v = 0x3000000;	/* PERFMON = 3 */
+	KUNIT_EXPECT_EQ(test, validate_id_dfr0_el1(vcpu, id_reg, v), 0);
+}
+
+/* Tests for validate_mvfr1_el1(). */
+static void validate_mvfr1_el1_test(struct kunit *test)
+{
+	struct id_reg_desc *id_reg;
+	struct kvm_vcpu *vcpu;
+	u64 v;
+
+	id_reg = get_id_reg_desc(SYS_MVFR1_EL1);
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	v = 0;
+	KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	/*
+	 * Tests for FPHP/SIMDHP.
+	 * Arm ARM says the level of support indicated by FPHP must be
+	 * equivalent to the level of support indicated by the SIMDHP,
+	 * meaning the permitted values are:
+	 * FPHP = 0x0, SIMDHP = 0x0
+	 * FPHP = 0x2, SIMDHP = 0x1
+	 * FPHP = 0x3, SIMDHP = 0x2
+	 */
+	v = 0x0000000;	/* FPHP = 0, SIMDHP = 0 */
+	KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x2100000;	/* FPHP = 2, SIMDHP = 1 */
+	KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x3200000;	/* FPHP = 3, SIMDHP = 2 */
+	KUNIT_EXPECT_EQ(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x1100000;	/* FPHP = 1, SIMDHP = 1 */
+	KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x2200000;	/* FPHP = 2, SIMDHP = 2 */
+	KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = 0x3300000;	/* FPHP = 3, SIMDHP = 3 */
+	KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+
+	v = (u64)-1;
+	KUNIT_EXPECT_NE(test, validate_mvfr1_el1(vcpu, id_reg, v), 0);
+}
+
+/*
+ * Helper function for validate_id_reg_test().
+ * We don't use KUNIT_ASSERT or kunit_skip because this is a helper
+ * function, and we are not sure it is safe to exit from the test case here.
+ */
+static void validate_id_reg_test_one_field(struct kunit *test,
+		u32 id, int pos, int fval, int flimit,
+		bool is_signed, struct id_reg_desc *idr)
+{
+	struct kvm_vcpu *vcpu;
+	int fmin = is_signed ? -1 : 0;
+	int fmax = is_signed ? 7 : 15;
+	u64 fmask = ARM64_FEATURE_FIELD_MASK;
+	u64 val;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	if (flimit > fmax) {
+		/* Shouldn't happen. Make the test fail. */
+		KUNIT_EXPECT_FALSE(test, flimit > fmax);
+		kunit_err(test, "%s: flimit(%d) > fmax(%d). Must be a test bug",
+			  __func__, flimit, fmax);
+		return;
+	}
+
+	if (fval > fmin) {
+		/* Set the field to a smaller value */
+		val = ((u64)(fval - 1) & fmask) << pos;
+		KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, idr, val), 0);
+	}
+
+	if (fval < flimit) {
+		/* Set the field to a larger value, but smaller than flimit */
+		val = ((u64)(fval + 1) & fmask) << pos;
+		KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, idr, val), 0);
+
+		/* Set the field to the flimit */
+		val = ((u64)flimit & fmask) << pos;
+		KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, idr, val), 0);
+	}
+
+	if (flimit < fmax) {
+		/* Set the field to a larger value than flimit */
+		val = ((u64)(flimit + 1) & fmask) << pos;
+		KUNIT_EXPECT_NE(test, validate_id_reg(vcpu, idr, val), 0);
+
+		/* Test with ignore_mask */
+		if (idr) {
+			idr->ignore_mask = fmask << pos;
+			KUNIT_EXPECT_EQ(test, validate_id_reg(vcpu, idr, val), 0);
+		}
+	}
+	test_kvm_vcpu_fini(test, vcpu);
+}
+
+static void set_sys_desc(struct sys_reg_desc *rd, u32 encoding)
+{
+	rd->Op0 = sys_reg_Op0(encoding);
+	rd->Op1 = sys_reg_Op1(encoding);
+	rd->CRn = sys_reg_CRn(encoding);
+	rd->CRm = sys_reg_CRm(encoding);
+	rd->Op2 = sys_reg_Op2(encoding);
+}
+
+/*
+ * Test for validate_id_reg().
+ */
+static void validate_id_reg_test(struct kunit *test)
+{
+	struct id_reg_desc idr_data, *idr, *original_idr;
+	u32 id;
+	int fval, flim, pos;
+	u64 val;
+	bool sign;
+
+	/* Use AA64PFR0_EL1 because it includes both sign/unsigned fields */
+	id = SYS_ID_AA64PFR0_EL1;
+
+	/* Test with a temporary id_reg_desc for testing */
+	idr = &idr_data;
+
+	fval = 0x1;
+	flim = 0x2;
+
+	/* Test with unsigned field */
+	pos = ID_AA64PFR0_RAS_SHIFT;
+
+	/* Set up id_reg_desc for testing */
+	memset(idr, 0, sizeof(*idr));
+	set_sys_desc((struct sys_reg_desc *)&idr->reg_desc, id);
+
+	/* Copy ftr_bits from the original one */
+	original_idr = get_id_reg_desc(id);
+	memcpy(idr->ftr_bits, original_idr->ftr_bits, sizeof(idr->ftr_bits));
+	idr->vcpu_limit_val = (u64)flim << pos;
+	validate_id_reg_test_one_field(test, id, pos, fval, flim, false, idr);
+
+	/* Test with signed field */
+	pos = ID_AA64PFR0_FP_SHIFT;
+
+	/* Set up id_reg_desc for testing */
+	memset(idr, 0, sizeof(*idr));
+	set_sys_desc((struct sys_reg_desc *)&idr->reg_desc, id);
+
+	/* Copy ftr_bits from the original one */
+	memcpy(idr->ftr_bits, original_idr->ftr_bits, sizeof(idr->ftr_bits));
+
+	idr->vcpu_limit_val = (u64)flim << pos;
+	validate_id_reg_test_one_field(test, id, pos, fval, flim, true, idr);
+
+	/* Test with the original limit val */
+	val = original_idr->vcpu_limit_val;
+	idr->vcpu_limit_val = val;
+
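+	/*
+	 * Walk every 4-bit feature field of ID_AA64PFR0_EL1 and validate
+	 * each field at its current limit value.
+	 */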
+	for (pos = 0; pos < 64; pos += 4) {
+		if (pos == ID_AA64PFR0_FP_SHIFT ||
+		    pos == ID_AA64PFR0_ASIMD_SHIFT)
+			sign = true;
+		else
+			sign = false;
+
+		fval = cpuid_feature_extract_field(val, pos, sign);
+		validate_id_reg_test_one_field(test, id, pos, fval, fval,
+					       sign, idr);
+	}
+}
+
+static struct kunit_case kvm_sys_regs_test_cases[] = {
+	KUNIT_CASE_PARAM(vcpu_id_reg_feature_frac_check_test, frac_gen_params),
+	KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran4_2_gen_params),
+	KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran64_2_gen_params),
+	KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran16_2_gen_params),
+	KUNIT_CASE(validate_id_aa64pfr0_el1_test),
+	KUNIT_CASE(validate_id_aa64pfr1_el1_test),
+	KUNIT_CASE(validate_id_aa64isar0_el1_test),
+	KUNIT_CASE(validate_id_aa64isar1_el1_test),
+	KUNIT_CASE(validate_id_aa64isar2_el1_test),
+	KUNIT_CASE_PARAM(validate_id_aa64mmfr0_el1_test, tgran4_2_gen_params),
+	KUNIT_CASE(validate_id_aa64dfr0_el1_test),
+	KUNIT_CASE(validate_id_dfr0_el1_test),
+	KUNIT_CASE(validate_mvfr1_el1_test),
+	KUNIT_CASE(validate_id_reg_test),
+	{}
+};
+
+static struct kunit_suite kvm_sys_regs_test_suite = {
+	.name = "kvm-sys-regs-test-suite",
+	.test_cases = kvm_sys_regs_test_cases,
+};
+
+kunit_test_suites(&kvm_sys_regs_test_suite);
+MODULE_LICENSE("GPL");
-- 
2.36.0.rc0.470.gd361397f0d-goog


* [PATCH v7 24/38] KVM: arm64: Use vcpu->arch.cptr_el2 to track value of cptr_el2 for VHE
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Track the baseline guest value for cptr_el2 in struct kvm_vcpu_arch
for VHE.  Use this value when setting cptr_el2 for the guest.

Currently this value is unchanged, but the following patches will set
trapping bits based on features supported for the guest.

No functional change intended.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/kvm_arm.h | 16 ++++++++++++++++
 arch/arm64/kvm/arm.c             |  5 ++++-
 arch/arm64/kvm/hyp/vhe/switch.c  | 14 ++------------
 3 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 1767ded83888..3f74fb16104e 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -288,6 +288,22 @@
 				 GENMASK(19, 14) |	\
 				 BIT(11))
 
+/*
+ * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
+ * CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2,
+ * except for some missing controls, such as TAM.
+ * In this case, CPTR_EL2.TAM has the same position with or without
+ * VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM
+ * shift value for trapping the AMU accesses.
+ */
+#define CPTR_EL2_VHE_GUEST_DEFAULT	(CPACR_EL1_TTA | CPTR_EL2_TAM)
+
+/*
+ * Bits that are copied from vcpu->arch.cptr_el2 to set cptr_el2 for
+ * guest with VHE.
+ */
+#define CPTR_EL2_VHE_GUEST_TRACKED_MASK	(CPACR_EL1_TTA | CPTR_EL2_TAM)
+
 /* Hyp Debug Configuration Register bits */
 #define MDCR_EL2_E2TB_MASK	(UL(0x3))
 #define MDCR_EL2_E2TB_SHIFT	(UL(24))
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index b4db368948cc..e80c059b41d5 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1123,7 +1123,10 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	}
 
 	vcpu_reset_hcr(vcpu);
-	vcpu->arch.cptr_el2 = CPTR_EL2_DEFAULT;
+	if (has_vhe())
+		vcpu->arch.cptr_el2 = CPTR_EL2_VHE_GUEST_DEFAULT;
+	else
+		vcpu->arch.cptr_el2 = CPTR_EL2_DEFAULT;
 
 	/*
 	 * Handle the "start in power-off" case.
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 262dfe03134d..066dc4629f02 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -40,20 +40,10 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 	___activate_traps(vcpu);
 
 	val = read_sysreg(cpacr_el1);
-	val |= CPACR_EL1_TTA;
+	val &= ~CPTR_EL2_VHE_GUEST_TRACKED_MASK;
+	val |= (vcpu->arch.cptr_el2 & CPTR_EL2_VHE_GUEST_TRACKED_MASK);
 	val &= ~(CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN);
 
-	/*
-	 * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
-	 * CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2,
-	 * except for some missing controls, such as TAM.
-	 * In this case, CPTR_EL2.TAM has the same position with or without
-	 * VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM
-	 * shift value for trapping the AMU accesses.
-	 */
-
-	val |= CPTR_EL2_TAM;
-
 	if (update_fp_enabled(vcpu)) {
 		if (vcpu_has_sve(vcpu))
 			val |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
-- 
2.36.0.rc0.470.gd361397f0d-goog


* [PATCH v7 25/38] KVM: arm64: Use vcpu->arch.mdcr_el2 to track value of mdcr_el2
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Track the baseline guest value for mdcr_el2 in struct kvm_vcpu_arch.
Use this value when setting mdcr_el2 for the guest.

Currently this value is unchanged, but the following patches will set
trapping bits based on features supported for the guest.

No functional change intended.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/kvm_arm.h | 16 ++++++++++++++++
 arch/arm64/kvm/arm.c             |  1 +
 arch/arm64/kvm/debug.c           | 13 ++++---------
 3 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 3f74fb16104e..90be933b5f08 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -333,6 +333,22 @@
 				 BIT(18) |		\
 				 GENMASK(16, 15))
 
+/*
+ * The default value for the guest below also clears MDCR_EL2_E2PB_MASK
+ * and MDCR_EL2_E2TB_MASK to disable guest access to the profiling and
+ * trace buffers.
+ */
+#define MDCR_GUEST_FLAGS_DEFAULT				\
+	(MDCR_EL2_TPM  | MDCR_EL2_TPMS | MDCR_EL2_TTRF |	\
+	 MDCR_EL2_TPMCR | MDCR_EL2_TDRA | MDCR_EL2_TDOSA)
+
+/* Bits that are copied from vcpu->arch.mdcr_el2 to set mdcr_el2 for guest. */
+#define MDCR_GUEST_FLAGS_TRACKED_MASK				\
+	(MDCR_EL2_TPM  | MDCR_EL2_TPMS | MDCR_EL2_TTRF |	\
+	 MDCR_EL2_TPMCR | MDCR_EL2_TDRA | MDCR_EL2_TDOSA |	\
+	 (MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT))
+
+
 /* For compatibility with fault code shared with 32-bit */
 #define FSC_FAULT	ESR_ELx_FSC_FAULT
 #define FSC_ACCESS	ESR_ELx_FSC_ACCESS
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index e80c059b41d5..69189907579c 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1123,6 +1123,7 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	}
 
 	vcpu_reset_hcr(vcpu);
+	vcpu->arch.mdcr_el2 = MDCR_GUEST_FLAGS_DEFAULT;
 	if (has_vhe())
 		vcpu->arch.cptr_el2 = CPTR_EL2_VHE_GUEST_DEFAULT;
 	else
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index 6eb146d908f8..8e1243972804 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -84,16 +84,11 @@ void kvm_arm_init_debug(void)
 static void kvm_arm_setup_mdcr_el2(struct kvm_vcpu *vcpu)
 {
 	/*
-	 * This also clears MDCR_EL2_E2PB_MASK and MDCR_EL2_E2TB_MASK
-	 * to disable guest access to the profiling and trace buffers
+	 * Keep the vcpu->arch.mdcr_el2 bits that are specified by
+	 * MDCR_GUEST_FLAGS_TRACKED_MASK.
 	 */
-	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
-	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
-				MDCR_EL2_TPMS |
-				MDCR_EL2_TTRF |
-				MDCR_EL2_TPMCR |
-				MDCR_EL2_TDRA |
-				MDCR_EL2_TDOSA);
+	vcpu->arch.mdcr_el2 &= MDCR_GUEST_FLAGS_TRACKED_MASK;
+	vcpu->arch.mdcr_el2 |= __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
 
 	/* Is the VM being debugged by userspace? */
 	if (vcpu->guest_debug)
-- 
2.36.0.rc0.470.gd361397f0d-goog


* [PATCH v7 26/38] KVM: arm64: Introduce framework to trap disabled features
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

When a CPU feature that is supported on the host is not exposed to
its guest, emulating a real CPU's behavior (by trapping or disabling
the guest's use of the feature) is generally desirable, at least when
it is possible with no or few side effects.

Introduce the feature_config_ctrl structure, which holds the
information needed to program a configuration register to trap or
disable a feature when the feature is not exposed to the guest, along
with functions that use the structure to activate such trapping for a
vcpu.  This code doesn't update the trap configuration registers
themselves (HCR_EL2, etc.) but only the values kept for those
registers in kvm_vcpu_arch, at the first KVM_RUN.

At present, no feature has a feature_config_ctrl yet; the following
patches will add one for some features.
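
As a rough sketch of what such an entry could look like (not part of
this patch; ptrauth_trap_activate is a hypothetical callback, while
HCR_API/HCR_APK and the ID register/field names are existing kernel
definitions):

	/* Hypothetical example; the real entries come in later patches. */
	static void ptrauth_trap_activate(struct kvm_vcpu *vcpu)
	{
		/* Clearing API/APK makes PTRAUTH instructions/keys trap. */
		vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
	}

	static const struct feature_config_ctrl ftr_ctrl_ptrauth = {
		.ftr_reg = SYS_ID_AA64ISAR1_EL1,
		.ftr_shift = ID_AA64ISAR1_APA_SHIFT,
		.ftr_signed = false,
		.ftr_min = 1,
		.trap_activate = ptrauth_trap_activate,
	};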

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/kvm_host.h |   1 +
 arch/arm64/kvm/arm.c              |  13 ++--
 arch/arm64/kvm/sys_regs.c         | 111 ++++++++++++++++++++++++++++++
 3 files changed, 120 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b85af83b4542..92785b33df0f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -790,6 +790,7 @@ void set_default_id_regs(struct kvm *kvm);
 int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval);
 void kvm_vcpu_breakpoint_config(struct kvm_vcpu *vcpu);
 int kvm_id_regs_check_frac_fields(const struct kvm_vcpu *vcpu);
+void kvm_vcpu_init_traps(struct kvm_vcpu *vcpu);
 
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 69189907579c..bcccf3876fcf 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -556,13 +556,16 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
 		static_branch_inc(&userspace_irqchip_in_use);
 	}
 
-	/*
-	 * Initialize traps for protected VMs.
-	 * NOTE: Move to run in EL2 directly, rather than via a hypercall, once
-	 * the code is in place for first run initialization at EL2.
-	 */
+	/* Initialize traps for the guest. */
 	if (kvm_vm_is_protected(kvm))
+		/*
+		 * NOTE: Move to run in EL2 directly, rather than via a
+		 * hypercall, once the code is in place for first run
+		 * initialization at EL2.
+		 */
 		kvm_call_hyp_nvhe(__pkvm_vcpu_init_traps, vcpu);
+	else
+		kvm_vcpu_init_traps(vcpu);
 
 	mutex_lock(&kvm->lock);
 	set_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a71c52aee34e..7fe44dec11fd 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -299,6 +299,27 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR2_GPA3_SHIFT) >= \
 	 ID_AA64ISAR2_GPA3_ARCHITECTED)
 
+/*
+ * Feature information used to program a configuration register to trap or
+ * disable a feature when the feature is not exposed to the guest.
+ */
+struct feature_config_ctrl {
+	/* ID register/field for the feature */
+	u32	ftr_reg;	/* ID register */
+	bool	ftr_signed;	/* Is the feature field signed? */
+	u8	ftr_shift;	/* Field of ID register for the feature */
+	s8	ftr_min;	/* Min value that indicates the feature */
+
+	/*
+	 * Function to check whether trapping is needed. This is used when
+	 * the above fields are not enough to determine if trapping is needed.
+	 */
+	bool	(*ftr_need_trap)(struct kvm_vcpu *vcpu);
+
+	/* Function to activate trapping of the feature. */
+	void	(*trap_activate)(struct kvm_vcpu *vcpu);
+};
+
 #define __FTR_BITS(ftr_sign, ftr_type, bit_pos, safe) {		\
 	.sign = ftr_sign,					\
 	.type = ftr_type,					\
@@ -321,6 +342,9 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 struct id_reg_desc {
 	const struct sys_reg_desc	reg_desc;
 
+	/* Sanitized system value */
+	u64	sys_val;
+
 	/*
 	 * Limit value of the register for a vcpu. The value is the sanitized
 	 * system value with bits set/cleared for unsupported features for the
@@ -376,6 +400,9 @@ struct id_reg_desc {
 	 * UNSIGNED+LOWER_SAFE entries during KVM's initialization.
 	 */
 	struct arm64_ftr_bits	ftr_bits[FTR_FIELDS_NUM];
+
+	/* Information to trap features that are disabled for the guest */
+	const struct feature_config_ctrl *(*trap_features)[];
 };
 
 static inline struct id_reg_desc *sys_to_id_desc(const struct sys_reg_desc *r)
@@ -393,6 +420,7 @@ static void id_reg_desc_init(struct id_reg_desc *id_reg)
 		return;
 
 	val = read_sanitised_ftr_reg(id);
+	id_reg->sys_val = val;
 	id_reg->vcpu_limit_val = val;
 
 	id_reg_desc_init_ftr(id_reg);
@@ -908,6 +936,24 @@ static int validate_id_reg(struct kvm_vcpu *vcpu,
 	return err;
 }
 
+static inline bool feature_avail(const struct feature_config_ctrl *ctrl,
+				 u64 id_val)
+{
+	int field_val = cpuid_feature_extract_field(id_val,
+				ctrl->ftr_shift, ctrl->ftr_signed);
+
+	return (field_val >= ctrl->ftr_min);
+}
+
+static inline bool vcpu_feature_is_available(struct kvm_vcpu *vcpu,
+					const struct feature_config_ctrl *ctrl)
+{
+	u64 val;
+
+	val = read_id_reg_with_encoding(vcpu, ctrl->ftr_reg);
+	return feature_avail(ctrl, val);
+}
+
 /*
  * ARMv8.1 mandates at least a trivial LORegion implementation, where all the
  * RW registers are RES0 (which we can implement as RAZ/WI). On an ARMv8.0
@@ -2387,6 +2433,46 @@ static bool access_raz_id_reg(struct kvm_vcpu *vcpu,
 	return __access_id_reg(vcpu, p, r, true);
 }
 
+static void id_reg_features_trap_activate(struct kvm_vcpu *vcpu,
+					  const struct id_reg_desc *id_reg)
+{
+	u64 val;
+	int i = 0;
+	const struct feature_config_ctrl **ctrlp_array, *ctrl;
+
+	if (!id_reg->trap_features)
+		/* No information to trap a feature */
+		return;
+
+	val = __read_id_reg(vcpu, id_reg);
+	if (val == id_reg->sys_val)
+		/* No feature needs to be trapped (no feature is disabled). */
+		return;
+
+	ctrlp_array = *id_reg->trap_features;
+	while ((ctrl = ctrlp_array[i++]) != NULL) {
+		if (WARN_ON_ONCE(!ctrl->trap_activate))
+			/* Shouldn't happen */
+			continue;
+
+		if (ctrl->ftr_need_trap && ctrl->ftr_need_trap(vcpu)) {
+			ctrl->trap_activate(vcpu);
+			continue;
+		}
+
+		if (!feature_avail(ctrl, id_reg->sys_val))
+			/* The feature is not supported on the host. */
+			continue;
+
+		if (feature_avail(ctrl, val))
+			/* The feature is enabled for the guest. */
+			continue;
+
+		/* The feature is supported but disabled. */
+		ctrl->trap_activate(vcpu);
+	}
+}
+
 /* Visibility overrides for SVE-specific control registers */
 static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
 				   const struct sys_reg_desc *rd)
@@ -4487,6 +4573,31 @@ static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
 	}
 }
 
+/*
+ * This function activates the vcpu's trapping of features that are included
+ * in trap_features[] of an id_reg_desc if the features are supported on the
+ * host but hidden from the guest (i.e. the values of the ID registers for
+ * the guest are modified to not show the features' availability).
+ * This function only updates the values for the trap configuration
+ * registers (e.g. HCR_EL2) in kvm_vcpu_arch, which will be restored before
+ * switching to the guest; it doesn't update the registers themselves.
+ * This function should be called once at the first KVM_RUN (ID registers
+ * are immutable after the first KVM_RUN).
+ */
+void kvm_vcpu_init_traps(struct kvm_vcpu *vcpu)
+{
+	int i;
+	struct id_reg_desc *idr;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_desc_table); i++) {
+		idr = (struct id_reg_desc *)id_reg_desc_table[i];
+		if (!idr)
+			continue;
+
+		id_reg_features_trap_activate(vcpu, idr);
+	}
+}
+
 #if IS_ENABLED(CONFIG_KVM_KUNIT_TEST)
 #include "sys_regs_test.c"
 #endif
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 26/38] KVM: arm64: Introduce framework to trap disabled features
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, Will Deacon, Peter Shier, Paolo Bonzini, linux-arm-kernel

When a CPU feature that is supported on the host is not exposed to
its guest, emulating a real CPU's behavior (by trapping or disabling
guest's using the feature) is generally a desirable behavior (when
it's possible without any or little side effect).

Introduce feature_config_ctrl structure, which manages feature
information to program configuration register to trap or disable
the feature when the feature is not exposed to the guest, and
functions that uses the structure to activate the vcpu's trapping the
feature.  Those codes don't update trap configuration registers
themselves (HCR_EL2, etc) but values for the registers in
kvm_vcpu_arch at the first KVM_RUN.

At present, no feature has feature_config_ctrl yet and the following
patches will add the feature_config_ctrl for some features.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/include/asm/kvm_host.h |   1 +
 arch/arm64/kvm/arm.c              |  13 ++--
 arch/arm64/kvm/sys_regs.c         | 111 ++++++++++++++++++++++++++++++
 3 files changed, 120 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b85af83b4542..92785b33df0f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -790,6 +790,7 @@ void set_default_id_regs(struct kvm *kvm);
 int kvm_set_id_reg_feature(struct kvm *kvm, u32 id, u8 field_shift, u8 fval);
 void kvm_vcpu_breakpoint_config(struct kvm_vcpu *vcpu);
 int kvm_id_regs_check_frac_fields(const struct kvm_vcpu *vcpu);
+void kvm_vcpu_init_traps(struct kvm_vcpu *vcpu);
 
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 69189907579c..bcccf3876fcf 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -556,13 +556,16 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
 		static_branch_inc(&userspace_irqchip_in_use);
 	}
 
-	/*
-	 * Initialize traps for protected VMs.
-	 * NOTE: Move to run in EL2 directly, rather than via a hypercall, once
-	 * the code is in place for first run initialization at EL2.
-	 */
+	/* Initialize traps for the guest. */
 	if (kvm_vm_is_protected(kvm))
+		/*
+		 * NOTE: Move to run in EL2 directly, rather than via a
+		 * hypercall, once the code is in place for first run
+		 * initialization at EL2.
+		 */
 		kvm_call_hyp_nvhe(__pkvm_vcpu_init_traps, vcpu);
+	else
+		kvm_vcpu_init_traps(vcpu);
 
 	mutex_lock(&kvm->lock);
 	set_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a71c52aee34e..7fe44dec11fd 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -299,6 +299,27 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR2_GPA3_SHIFT) >= \
 	 ID_AA64ISAR2_GPA3_ARCHITECTED)
 
+/*
+ * Feature information to program configuration register to trap or disable
+ * guest's using a feature when the feature is not exposed to the guest.
+ */
+struct feature_config_ctrl {
+	/* ID register/field for the feature */
+	u32	ftr_reg;	/* ID register */
+	bool	ftr_signed;	/* Is the feature field signed ? */
+	u8	ftr_shift;	/* Field of ID register for the feature */
+	s8	ftr_min;	/* Min value that indicate the feature */
+
+	/*
+	 * Function to check trapping is needed. This is used when the above
+	 * fields are not enough to determine if trapping is needed.
+	 */
+	bool	(*ftr_need_trap)(struct kvm_vcpu *vcpu);
+
+	/* Function to activate trapping the feature. */
+	void	(*trap_activate)(struct kvm_vcpu *vcpu);
+};
+
 #define __FTR_BITS(ftr_sign, ftr_type, bit_pos, safe) {		\
 	.sign = ftr_sign,					\
 	.type = ftr_type,					\
@@ -321,6 +342,9 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 struct id_reg_desc {
 	const struct sys_reg_desc	reg_desc;
 
+	/* Sanitized system value */
+	u64	sys_val;
+
 	/*
 	 * Limit value of the register for a vcpu. The value is the sanitized
 	 * system value with bits set/cleared for unsupported features for the
@@ -376,6 +400,9 @@ struct id_reg_desc {
 	 * UNSIGNED+LOWER_SAFE entries during KVM's initialization.
 	 */
 	struct arm64_ftr_bits	ftr_bits[FTR_FIELDS_NUM];
+
+	/* Information to trap features that are disabled for the guest */
+	const struct feature_config_ctrl *(*trap_features)[];
 };
 
 static inline struct id_reg_desc *sys_to_id_desc(const struct sys_reg_desc *r)
@@ -393,6 +420,7 @@ static void id_reg_desc_init(struct id_reg_desc *id_reg)
 		return;
 
 	val = read_sanitised_ftr_reg(id);
+	id_reg->sys_val = val;
 	id_reg->vcpu_limit_val = val;
 
 	id_reg_desc_init_ftr(id_reg);
@@ -908,6 +936,24 @@ static int validate_id_reg(struct kvm_vcpu *vcpu,
 	return err;
 }
 
+static inline bool feature_avail(const struct feature_config_ctrl *ctrl,
+				 u64 id_val)
+{
+	int field_val = cpuid_feature_extract_field(id_val,
+				ctrl->ftr_shift, ctrl->ftr_signed);
+
+	return (field_val >= ctrl->ftr_min);
+}
+
+static inline bool vcpu_feature_is_available(struct kvm_vcpu *vcpu,
+					const struct feature_config_ctrl *ctrl)
+{
+	u64 val;
+
+	val = read_id_reg_with_encoding(vcpu, ctrl->ftr_reg);
+	return feature_avail(ctrl, val);
+}
+
 /*
  * ARMv8.1 mandates at least a trivial LORegion implementation, where all the
  * RW registers are RES0 (which we can implement as RAZ/WI). On an ARMv8.0
@@ -2387,6 +2433,46 @@ static bool access_raz_id_reg(struct kvm_vcpu *vcpu,
 	return __access_id_reg(vcpu, p, r, true);
 }
 
+static void id_reg_features_trap_activate(struct kvm_vcpu *vcpu,
+					  const struct id_reg_desc *id_reg)
+{
+	u64 val;
+	int i = 0;
+	const struct feature_config_ctrl **ctrlp_array, *ctrl;
+
+	if (!id_reg->trap_features)
+		/* No information to trap a feature */
+		return;
+
+	val = __read_id_reg(vcpu, id_reg);
+	if (val == id_reg->sys_val)
+		/* No feature needs to be trapped (no feature is disabled). */
+		return;
+
+	ctrlp_array = *id_reg->trap_features;
+	while ((ctrl = ctrlp_array[i++]) != NULL) {
+		if (WARN_ON_ONCE(!ctrl->trap_activate))
+			/* Shouldn't happen */
+			continue;
+
+		if (ctrl->ftr_need_trap && ctrl->ftr_need_trap(vcpu)) {
+			ctrl->trap_activate(vcpu);
+			continue;
+		}
+
+		if (!feature_avail(ctrl, id_reg->sys_val))
+			/* The feature is not supported on the host. */
+			continue;
+
+		if (feature_avail(ctrl, val))
+			/* The feature is enabled for the guest. */
+			continue;
+
+		/* The feature is supported but disabled. */
+		ctrl->trap_activate(vcpu);
+	}
+}
+
 /* Visibility overrides for SVE-specific control registers */
 static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
 				   const struct sys_reg_desc *rd)
@@ -4487,6 +4573,31 @@ static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
 	}
 }
 
+/*
+ * This function activates the vcpu's trapping of features that are included
+ * in trap_features[] of id_reg_desc if the features are supported on the
+ * host but hidden from the guest (i.e. the values of the guest's ID
+ * registers are modified so as not to show the features' availability).
+ * This function only updates the values for trap configuration registers
+ * (e.g. HCR_EL2, etc) in kvm_vcpu_arch, which will be restored before
+ * switching to the guest; it doesn't update the registers themselves.
+ * This function should be called once at the first KVM_RUN (ID registers
+ * are immutable after the first KVM_RUN).
+ */
+void kvm_vcpu_init_traps(struct kvm_vcpu *vcpu)
+{
+	int i;
+	struct id_reg_desc *idr;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_desc_table); i++) {
+		idr = (struct id_reg_desc *)id_reg_desc_table[i];
+		if (!idr)
+			continue;
+
+		id_reg_features_trap_activate(vcpu, idr);
+	}
+}
+
 #if IS_ENABLED(CONFIG_KVM_KUNIT_TEST)
 #include "sys_regs_test.c"
 #endif
-- 
2.36.0.rc0.470.gd361397f0d-goog
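
The ftr_reg/ftr_shift/ftr_signed/ftr_min tuple above drives a plain
extract-and-compare check in feature_avail().  Below is a minimal,
self-contained userspace sketch of that check (an illustration only, not
kernel code; the 4-bit field width, the field position and the sample
register value are assumptions chosen for the example):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Mirrors the 4-bit ID-register field extraction that feature_avail()
 * relies on (illustrative only; other field widths are not handled).
 */
static int extract_field(uint64_t id_val, unsigned int shift, bool is_signed)
{
	if (is_signed)
		return (int64_t)(id_val << (60 - shift)) >> 60;
	return (id_val >> shift) & 0xf;
}

int main(void)
{
	/* Hypothetical sanitized value with a field at bits [31:28] set to 1 */
	uint64_t id_val = 1ULL << 28;
	int ftr_min = 1;	/* e.g. the minimum value indicating RAS v1 */

	if (extract_field(id_val, 28, false) >= ftr_min)
		printf("feature available\n");
	else
		printf("feature hidden/unsupported\n");
	return 0;
}

With the sample value, the field reads back as 1, which meets ftr_min, so
the feature would be considered available.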

* [PATCH v7 27/38] KVM: arm64: Trap disabled features of ID_AA64PFR0_EL1
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add feature_config_ctrl for RAS and AMU, which are indicated in
ID_AA64PFR0_EL1, to program configuration registers to trap the
guest's use of those features when they are not exposed to the guest.

Introduce trap_ras_regs() to change the behavior of the guest's
access to the RAS registers, which is currently RAZ/WI, depending on
the feature's availability for the guest (and to inject an undefined
instruction exception when the guest's RAS register accesses are
trapped and RAS is not exposed to the guest).  In order to keep the
current visibility of the RAS registers from userspace (always
visible), a visibility function for the RAS registers is not added.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 92 +++++++++++++++++++++++++++++++++++----
 1 file changed, 83 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7fe44dec11fd..fecd54a58d34 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -320,6 +320,63 @@ struct feature_config_ctrl {
 	void	(*trap_activate)(struct kvm_vcpu *vcpu);
 };
 
+enum vcpu_config_reg {
+	VCPU_HCR_EL2 = 1,
+	VCPU_MDCR_EL2,
+	VCPU_CPTR_EL2,
+};
+
+static void feature_trap_activate(struct kvm_vcpu *vcpu,
+				  enum vcpu_config_reg cfg_reg,
+				  u64 cfg_set, u64 cfg_clear)
+{
+	u64 *reg_ptr, reg_val;
+
+	switch (cfg_reg) {
+	case VCPU_HCR_EL2:
+		reg_ptr = &vcpu->arch.hcr_el2;
+		break;
+	case VCPU_MDCR_EL2:
+		reg_ptr = &vcpu->arch.mdcr_el2;
+		break;
+	case VCPU_CPTR_EL2:
+		reg_ptr = &vcpu->arch.cptr_el2;
+		break;
+	}
+
+	/* Clear/Set fields that are indicated by cfg_clear/cfg_set. */
+	reg_val = (*reg_ptr & ~cfg_clear);
+	reg_val |= cfg_set;
+	*reg_ptr = reg_val;
+}
+
+static void feature_ras_trap_activate(struct kvm_vcpu *vcpu)
+{
+	feature_trap_activate(vcpu, VCPU_HCR_EL2, HCR_TERR | HCR_TEA, HCR_FIEN);
+}
+
+static void feature_amu_trap_activate(struct kvm_vcpu *vcpu)
+{
+	feature_trap_activate(vcpu, VCPU_CPTR_EL2, CPTR_EL2_TAM, 0);
+}
+
+/* For ID_AA64PFR0_EL1 */
+static struct feature_config_ctrl ftr_ctrl_ras = {
+	.ftr_reg = SYS_ID_AA64PFR0_EL1,
+	.ftr_shift = ID_AA64PFR0_RAS_SHIFT,
+	.ftr_min = ID_AA64PFR0_RAS_V1,
+	.ftr_signed = FTR_UNSIGNED,
+	.trap_activate = feature_ras_trap_activate,
+};
+
+static struct feature_config_ctrl ftr_ctrl_amu = {
+	.ftr_reg = SYS_ID_AA64PFR0_EL1,
+	.ftr_shift = ID_AA64PFR0_AMU_SHIFT,
+	.ftr_min = ID_AA64PFR0_AMU,
+	.ftr_signed = FTR_UNSIGNED,
+	.trap_activate = feature_amu_trap_activate,
+};
+
 #define __FTR_BITS(ftr_sign, ftr_type, bit_pos, safe) {		\
 	.sign = ftr_sign,					\
 	.type = ftr_type,					\
@@ -954,6 +1011,18 @@ static inline bool vcpu_feature_is_available(struct kvm_vcpu *vcpu,
 	return feature_avail(ctrl, val);
 }
 
+static bool trap_ras_regs(struct kvm_vcpu *vcpu,
+			  struct sys_reg_params *p,
+			  const struct sys_reg_desc *r)
+{
+	if (!vcpu_feature_is_available(vcpu, &ftr_ctrl_ras)) {
+		kvm_inject_undefined(vcpu);
+		return false;
+	}
+
+	return trap_raz_wi(vcpu, p, r);
+}
+
 /*
  * ARMv8.1 mandates at least a trivial LORegion implementation, where all the
  * RW registers are RES0 (which we can implement as RAZ/WI). On an ARMv8.0
@@ -2786,14 +2855,14 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
 	{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
 
-	{ SYS_DESC(SYS_ERRIDR_EL1), trap_raz_wi },
-	{ SYS_DESC(SYS_ERRSELR_EL1), trap_raz_wi },
-	{ SYS_DESC(SYS_ERXFR_EL1), trap_raz_wi },
-	{ SYS_DESC(SYS_ERXCTLR_EL1), trap_raz_wi },
-	{ SYS_DESC(SYS_ERXSTATUS_EL1), trap_raz_wi },
-	{ SYS_DESC(SYS_ERXADDR_EL1), trap_raz_wi },
-	{ SYS_DESC(SYS_ERXMISC0_EL1), trap_raz_wi },
-	{ SYS_DESC(SYS_ERXMISC1_EL1), trap_raz_wi },
+	{ SYS_DESC(SYS_ERRIDR_EL1), trap_ras_regs },
+	{ SYS_DESC(SYS_ERRSELR_EL1), trap_ras_regs },
+	{ SYS_DESC(SYS_ERXFR_EL1), trap_ras_regs },
+	{ SYS_DESC(SYS_ERXCTLR_EL1), trap_ras_regs },
+	{ SYS_DESC(SYS_ERXSTATUS_EL1), trap_ras_regs },
+	{ SYS_DESC(SYS_ERXADDR_EL1), trap_ras_regs },
+	{ SYS_DESC(SYS_ERXMISC0_EL1), trap_ras_regs },
+	{ SYS_DESC(SYS_ERXMISC1_EL1), trap_ras_regs },
 
 	MTE_REG(TFSR_EL1),
 	MTE_REG(TFSRE0_EL1),
@@ -4230,7 +4299,12 @@ static struct id_reg_desc id_aa64pfr0_el1_desc = {
 	.ftr_bits = {
 		S_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, ID_AA64PFR0_FP_NI),
 		S_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, ID_AA64PFR0_ASIMD_NI),
-	}
+	},
+	.trap_features = &(const struct feature_config_ctrl *[]) {
+		&ftr_ctrl_ras,
+		&ftr_ctrl_amu,
+		NULL,
+	},
 };
 
 static struct id_reg_desc id_aa64pfr1_el1_desc = {
-- 
2.36.0.rc0.470.gd361397f0d-goog
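
feature_trap_activate() above reduces to a clear-then-set read-modify-write
on the shadow trap register held in kvm_vcpu_arch.  The sketch below replays
that update for the RAS case; the bit positions are placeholders chosen for
illustration, not the architectural HCR_EL2 layout:

#include <stdint.h>
#include <stdio.h>

/*
 * Placeholder bit assignments for illustration only; the real HCR_EL2
 * layout comes from the architecture (and the kernel's headers).
 */
#define HYP_HCR_TERR	(UINT64_C(1) << 0)
#define HYP_HCR_TEA	(UINT64_C(1) << 1)
#define HYP_HCR_FIEN	(UINT64_C(1) << 2)

/* The same clear-then-set update that feature_trap_activate() applies. */
static void update_cfg(uint64_t *reg, uint64_t set, uint64_t clear)
{
	*reg = (*reg & ~clear) | set;
}

int main(void)
{
	uint64_t hcr = HYP_HCR_FIEN;	/* assume FIEN happened to be set */

	/* What feature_ras_trap_activate() requests when RAS is hidden. */
	update_cfg(&hcr, HYP_HCR_TERR | HYP_HCR_TEA, HYP_HCR_FIEN);
	printf("hcr = %#llx\n", (unsigned long long)hcr);	/* prints 0x3 */
	return 0;
}

Clearing before setting means a bit named in both masks ends up set, which
matches the order used in feature_trap_activate().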


* [PATCH v7 28/38] KVM: arm64: Trap disabled features of ID_AA64PFR1_EL1
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add feature_config_ctrl for MTE, which is indicated in
ID_AA64PFR1_EL1, to program the configuration register to trap the
guest's use of the feature when it is not exposed to the guest.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index fecd54a58d34..10f366957ce9 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -360,6 +360,11 @@ static void feature_amu_trap_activate(struct kvm_vcpu *vcpu)
 	feature_trap_activate(vcpu, VCPU_CPTR_EL2, CPTR_EL2_TAM, 0);
 }
 
+static void feature_mte_trap_activate(struct kvm_vcpu *vcpu)
+{
+	feature_trap_activate(vcpu, VCPU_HCR_EL2, HCR_TID5, HCR_DCT | HCR_ATA);
+}
+
 /* For ID_AA64PFR0_EL1 */
 static struct feature_config_ctrl ftr_ctrl_ras = {
 	.ftr_reg = SYS_ID_AA64PFR0_EL1,
@@ -377,6 +382,15 @@ static struct feature_config_ctrl ftr_ctrl_amu = {
 	.trap_activate = feature_amu_trap_activate,
 };
 
+/* For ID_AA64PFR1_EL1 */
+static struct feature_config_ctrl ftr_ctrl_mte = {
+	.ftr_reg = SYS_ID_AA64PFR1_EL1,
+	.ftr_shift = ID_AA64PFR1_MTE_SHIFT,
+	.ftr_min = ID_AA64PFR1_MTE_EL0,
+	.ftr_signed = FTR_UNSIGNED,
+	.trap_activate = feature_mte_trap_activate,
+};
+
 #define __FTR_BITS(ftr_sign, ftr_type, bit_pos, safe) {		\
 	.sign = ftr_sign,					\
 	.type = ftr_type,					\
@@ -4312,6 +4326,10 @@ static struct id_reg_desc id_aa64pfr1_el1_desc = {
 	.init = init_id_aa64pfr1_el1_desc,
 	.validate = validate_id_aa64pfr1_el1,
 	.vcpu_mask = vcpu_mask_id_aa64pfr1_el1,
+	.trap_features = &(const struct feature_config_ctrl *[]) {
+		&ftr_ctrl_mte,
+		NULL,
+	},
 };
 
 static struct id_reg_desc id_aa64isar0_el1_desc = {
-- 
2.36.0.rc0.470.gd361397f0d-goog


* [PATCH v7 29/38] KVM: arm64: Trap disabled features of ID_AA64DFR0_EL1
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add feature_config_ctrl for PMUv3, PMS and TraceFilt, which are
indicated in ID_AA64DFR0_EL1, to program configuration registers
to trap the guest's use of those features when they are not exposed
to the guest.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 64 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 10f366957ce9..a09c910198d6 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -365,6 +365,30 @@ static void feature_mte_trap_activate(struct kvm_vcpu *vcpu)
 	feature_trap_activate(vcpu, VCPU_HCR_EL2, HCR_TID5, HCR_DCT | HCR_ATA);
 }
 
+static void feature_trace_trap_activate(struct kvm_vcpu *vcpu)
+{
+	if (has_vhe())
+		feature_trap_activate(vcpu, VCPU_CPTR_EL2, CPACR_EL1_TTA, 0);
+	else
+		feature_trap_activate(vcpu, VCPU_CPTR_EL2, CPTR_EL2_TTA, 0);
+}
+
+static void feature_pmuv3_trap_activate(struct kvm_vcpu *vcpu)
+{
+	feature_trap_activate(vcpu, VCPU_MDCR_EL2, MDCR_EL2_TPM, 0);
+}
+
+static void feature_pms_trap_activate(struct kvm_vcpu *vcpu)
+{
+	feature_trap_activate(vcpu, VCPU_MDCR_EL2, MDCR_EL2_TPMS,
+			      MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT);
+}
+
+static void feature_tracefilt_trap_activate(struct kvm_vcpu *vcpu)
+{
+	feature_trap_activate(vcpu, VCPU_MDCR_EL2, MDCR_EL2_TTRF, 0);
+}
+
 /* For ID_AA64PFR0_EL1 */
 static struct feature_config_ctrl ftr_ctrl_ras = {
 	.ftr_reg = SYS_ID_AA64PFR0_EL1,
@@ -391,6 +415,39 @@ static struct feature_config_ctrl ftr_ctrl_mte = {
 	.trap_activate = feature_mte_trap_activate,
 };
 
+/* For ID_AA64DFR0_EL1 */
+static struct feature_config_ctrl ftr_ctrl_trace = {
+	.ftr_reg = SYS_ID_AA64DFR0_EL1,
+	.ftr_shift = ID_AA64DFR0_TRACEVER_SHIFT,
+	.ftr_min = 1,
+	.ftr_signed = FTR_UNSIGNED,
+	.trap_activate = feature_trace_trap_activate,
+};
+
+static struct feature_config_ctrl ftr_ctrl_pmuv3 = {
+	.ftr_reg = SYS_ID_AA64DFR0_EL1,
+	.ftr_shift = ID_AA64DFR0_PMUVER_SHIFT,
+	.ftr_min = ID_AA64DFR0_PMUVER_8_0,
+	.ftr_signed = FTR_UNSIGNED,
+	.trap_activate = feature_pmuv3_trap_activate,
+};
+
+static struct feature_config_ctrl ftr_ctrl_pms = {
+	.ftr_reg = SYS_ID_AA64DFR0_EL1,
+	.ftr_shift = ID_AA64DFR0_PMSVER_SHIFT,
+	.ftr_min = ID_AA64DFR0_PMSVER_8_2,
+	.ftr_signed = FTR_UNSIGNED,
+	.trap_activate = feature_pms_trap_activate,
+};
+
+static struct feature_config_ctrl ftr_ctrl_tracefilt = {
+	.ftr_reg = SYS_ID_AA64DFR0_EL1,
+	.ftr_shift = ID_AA64DFR0_TRACE_FILT_SHIFT,
+	.ftr_min = 1,
+	.ftr_signed = FTR_UNSIGNED,
+	.trap_activate = feature_tracefilt_trap_activate,
+};
+
 #define __FTR_BITS(ftr_sign, ftr_type, bit_pos, safe) {		\
 	.sign = ftr_sign,					\
 	.type = ftr_type,					\
@@ -4389,6 +4446,13 @@ static struct id_reg_desc id_aa64dfr0_el1_desc = {
 	.ftr_bits = {
 		S_FTR_BITS(FTR_LOWER_SAFE, ID_AA64DFR0_DOUBLELOCK_SHIFT, 0xf),
 	},
+	.trap_features = &(const struct feature_config_ctrl *[]) {
+		&ftr_ctrl_trace,
+		&ftr_ctrl_pmuv3,
+		&ftr_ctrl_pms,
+		&ftr_ctrl_tracefilt,
+		NULL,
+	},
 };
 
 static struct id_reg_desc id_dfr0_el1_desc = {
-- 
2.36.0.rc0.470.gd361397f0d-goog
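
The trap_features initializer above is a compound literal: a NULL-terminated
array of feature_config_ctrl pointers that id_reg_features_trap_activate()
(patch 26) walks until it hits the sentinel.  A minimal sketch of the same
shape and walk, with illustrative names only:

#include <stddef.h>
#include <stdio.h>

struct cfg {
	const char *name;
};

static const struct cfg trace_cfg = { "trace" };
static const struct cfg pmuv3_cfg = { "pmuv3" };
static const struct cfg pms_cfg = { "pms" };
static const struct cfg tracefilt_cfg = { "tracefilt" };

/* NULL-terminated, the same shape as the trap_features compound literal. */
static const struct cfg *aa64dfr0_traps[] = {
	&trace_cfg, &pmuv3_cfg, &pms_cfg, &tracefilt_cfg, NULL,
};

int main(void)
{
	const struct cfg *c;
	int i = 0;

	/* The walk id_reg_features_trap_activate() performs over the list. */
	while ((c = aa64dfr0_traps[i++]) != NULL)
		printf("would evaluate %s for trapping\n", c->name);
	return 0;
}

The sentinel keeps the per-register lists variable-length without storing a
count in id_reg_desc.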


* [PATCH v7 30/38] KVM: arm64: Trap disabled features of ID_AA64MMFR1_EL1
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add feature_config_ctrl for LORegions, which is indicated in
ID_AA64MMFR1_EL1, to program the configuration register to trap the
guest's use of the feature when it is not exposed to the guest.

Change trap_loregion() to use vcpu_feature_is_available()
to simplify the check of the feature's availability.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a09c910198d6..6a8ed59d8d90 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -389,6 +389,11 @@ static void feature_tracefilt_trap_activate(struct kvm_vcpu *vcpu)
 	feature_trap_activate(vcpu, VCPU_MDCR_EL2, MDCR_EL2_TTRF, 0);
 }
 
+static void feature_lor_trap_activate(struct kvm_vcpu *vcpu)
+{
+	feature_trap_activate(vcpu, VCPU_HCR_EL2, HCR_TLOR, 0);
+}
+
 /* For ID_AA64PFR0_EL1 */
 static struct feature_config_ctrl ftr_ctrl_ras = {
 	.ftr_reg = SYS_ID_AA64PFR0_EL1,
@@ -448,6 +453,15 @@ static struct feature_config_ctrl ftr_ctrl_tracefilt = {
 	.trap_activate = feature_tracefilt_trap_activate,
 };
 
+/* For ID_AA64MMFR1_EL1 */
+static struct feature_config_ctrl ftr_ctrl_lor = {
+	.ftr_reg = SYS_ID_AA64MMFR1_EL1,
+	.ftr_shift = ID_AA64MMFR1_LOR_SHIFT,
+	.ftr_min = 1,
+	.ftr_signed = FTR_UNSIGNED,
+	.trap_activate = feature_lor_trap_activate,
+};
+
 #define __FTR_BITS(ftr_sign, ftr_type, bit_pos, safe) {		\
 	.sign = ftr_sign,					\
 	.type = ftr_type,					\
@@ -1104,10 +1118,9 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
 			  struct sys_reg_params *p,
 			  const struct sys_reg_desc *r)
 {
-	u64 val = read_id_reg_with_encoding(vcpu, SYS_ID_AA64MMFR1_EL1);
 	u32 sr = reg_to_encoding(r);
 
-	if (!(val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))) {
+	if (!vcpu_feature_is_available(vcpu, &ftr_ctrl_lor)) {
 		kvm_inject_undefined(vcpu);
 		return false;
 	}
@@ -4433,6 +4446,14 @@ static struct id_reg_desc id_aa64mmfr0_el1_desc = {
 	},
 };
 
+static struct id_reg_desc id_aa64mmfr1_el1_desc = {
+	.reg_desc = ID_SANITISED(ID_AA64MMFR1_EL1),
+	.trap_features = &(const struct feature_config_ctrl *[]) {
+		&ftr_ctrl_lor,
+		NULL,
+	},
+};
+
 static struct id_reg_desc id_aa64dfr0_el1_desc = {
 	.reg_desc = ID_SANITISED(ID_AA64DFR0_EL1),
 	/*
@@ -4577,7 +4598,7 @@ static struct id_reg_desc *id_reg_desc_table[KVM_ARM_ID_REG_MAX_NUM] = {
 
 	/* CRm=7 */
 	ID_DESC(ID_AA64MMFR0_EL1, &id_aa64mmfr0_el1_desc),
-	ID_DESC_DEFAULT(ID_AA64MMFR1_EL1),
+	ID_DESC(ID_AA64MMFR1_EL1, &id_aa64mmfr1_el1_desc),
 	ID_DESC_DEFAULT(ID_AA64MMFR2_EL1),
 	ID_DESC_UNALLOC(7, 3),
 	ID_DESC_UNALLOC(7, 4),
-- 
2.36.0.rc0.470.gd361397f0d-goog


* [PATCH v7 31/38] KVM: arm64: Trap disabled features of ID_AA64ISAR1_EL1
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add feature_config_ctrl for PTRAUTH, which is indicated in
ID_AA64ISAR1_EL1, to program the configuration register to trap the
guest's use of the feature when it is not exposed to the guest.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 6a8ed59d8d90..0e3cff91f41d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -299,6 +299,15 @@ static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 	(cpuid_feature_extract_unsigned_field(val, ID_AA64ISAR2_GPA3_SHIFT) >= \
 	 ID_AA64ISAR2_GPA3_ARCHITECTED)
 
+/*
+ * Return true if ptrauth needs to be trapped.
+ * (i.e. if ptrauth is supported on the host but not exposed to the guest)
+ */
+static bool vcpu_need_trap_ptrauth(struct kvm_vcpu *vcpu)
+{
+	return (system_has_full_ptr_auth() && !vcpu_has_ptrauth(vcpu));
+}
+
 /*
 * Feature information used to program a configuration register to trap or
 * disable the guest's use of a feature when it is not exposed to the guest.
@@ -394,6 +403,11 @@ static void feature_lor_trap_activate(struct kvm_vcpu *vcpu)
 	feature_trap_activate(vcpu, VCPU_HCR_EL2, HCR_TLOR, 0);
 }
 
+static void feature_ptrauth_trap_activate(struct kvm_vcpu *vcpu)
+{
+	feature_trap_activate(vcpu, VCPU_HCR_EL2, 0, HCR_API | HCR_APK);
+}
+
 /* For ID_AA64PFR0_EL1 */
 static struct feature_config_ctrl ftr_ctrl_ras = {
 	.ftr_reg = SYS_ID_AA64PFR0_EL1,
@@ -462,6 +476,12 @@ static struct feature_config_ctrl ftr_ctrl_lor = {
 	.trap_activate = feature_lor_trap_activate,
 };
 
+/* For SYS_ID_AA64ISAR1_EL1 */
+static struct feature_config_ctrl ftr_ctrl_ptrauth = {
+	.ftr_need_trap = vcpu_need_trap_ptrauth,
+	.trap_activate = feature_ptrauth_trap_activate,
+};
+
 #define __FTR_BITS(ftr_sign, ftr_type, bit_pos, safe) {		\
 	.sign = ftr_sign,					\
 	.type = ftr_type,					\
@@ -4416,6 +4436,10 @@ static struct id_reg_desc id_aa64isar1_el1_desc = {
 		U_FTR_BITS(FTR_EXACT, ID_AA64ISAR1_APA_SHIFT, 0),
 		U_FTR_BITS(FTR_EXACT, ID_AA64ISAR1_API_SHIFT, 0),
 	},
+	.trap_features = &(const struct feature_config_ctrl *[]) {
+		&ftr_ctrl_ptrauth,
+		NULL,
+	},
 };
 
 static struct id_reg_desc id_aa64isar2_el1_desc = {
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 32/38] KVM: arm64: Add kunit test for trap initialization
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add KUnit tests, in arch/arm64/kvm/sys_regs_test.c, for the sys_regs.c
functions that activate traps for disabled features.
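
For readers new to parameterized KUnit cases (the pattern these tests
follow), here is a minimal, self-contained sketch; the suite and all
names in it are hypothetical examples, not part of this patch:

	#include <kunit/test.h>

	struct dbl_param { int in, out; };

	static const struct dbl_param dbl_params[] = { {1, 2}, {3, 6} };

	static void dbl_to_desc(const struct dbl_param *p, char *desc)
	{
		snprintf(desc, KUNIT_PARAM_DESC_SIZE, "in:%d", p->in);
	}
	/* Generates dbl_gen_params() for use with KUNIT_CASE_PARAM() */
	KUNIT_ARRAY_PARAM(dbl, dbl_params, dbl_to_desc);

	static void dbl_test(struct kunit *test)
	{
		const struct dbl_param *p = test->param_value;

		KUNIT_EXPECT_EQ(test, p->in * 2, p->out);
	}

	static struct kunit_case dbl_cases[] = {
		KUNIT_CASE_PARAM(dbl_test, dbl_gen_params),
		{}
	};

	static struct kunit_suite dbl_suite = {
		.name = "dbl-example",
		.test_cases = dbl_cases,
	};
	kunit_test_suite(dbl_suite);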

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 arch/arm64/kvm/sys_regs_test.c | 219 +++++++++++++++++++++++++++++++++
 1 file changed, 219 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs_test.c b/arch/arm64/kvm/sys_regs_test.c
index dff146fe0e62..f9b032032ec3 100644
--- a/arch/arm64/kvm/sys_regs_test.c
+++ b/arch/arm64/kvm/sys_regs_test.c
@@ -1041,6 +1041,222 @@ static void validate_id_reg_test(struct kunit *test)
 	}
 }
 
+struct trap_config_test {
+	u64 set;
+	u64 clear;
+	u64 prev_val;
+	u64 expect_val;
+};
+
+struct trap_config_test trap_params[] = {
+	{0x30000800000, 0, 0, 0x30000800000},
+	{0, 0x30000800000, 0, 0},
+	{0x30000800000, 0, (u64)-1, (u64)-1},
+	{0, 0x30000800000, (u64)-1, (u64)0xfffffcffff7fffff},
+};
+
+static void trap_case_to_desc(struct trap_config_test *t, char *desc)
+{
+	snprintf(desc, KUNIT_PARAM_DESC_SIZE,
+		 "trap - set:0x%llx, clear:0x%llx, prev_val:0x%llx\n",
+		 t->set, t->clear, t->prev_val);
+}
+
+KUNIT_ARRAY_PARAM(trap, trap_params, trap_case_to_desc);
+
+/* Tests for feature_trap_activate(). */
+static void feature_trap_activate_test(struct kunit *test)
+{
+	struct kvm_vcpu *vcpu;
+	const struct trap_config_test *trap = test->param_value;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_ASSERT_TRUE(test, vcpu);
+
+	/* Test for HCR_EL2 */
+	vcpu->arch.hcr_el2 = trap->prev_val;
+	feature_trap_activate(vcpu, VCPU_HCR_EL2, trap->set, trap->clear);
+	KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2, trap->expect_val);
+
+	/* Test for MDCR_EL2 */
+	vcpu->arch.mdcr_el2 = trap->prev_val;
+	feature_trap_activate(vcpu, VCPU_MDCR_EL2, trap->set, trap->clear);
+	KUNIT_EXPECT_EQ(test, vcpu->arch.mdcr_el2, trap->expect_val);
+
+	/* Test for CPTR_EL2 */
+	vcpu->arch.cptr_el2 = trap->prev_val;
+	feature_trap_activate(vcpu, VCPU_CPTR_EL2, trap->set, trap->clear);
+	KUNIT_EXPECT_EQ(test, vcpu->arch.cptr_el2, trap->expect_val);
+}
+
+static u64 test_trap_set0;
+static u64 test_trap_clear0;
+static void test_trap_activate0(struct kvm_vcpu *vcpu)
+{
+	feature_trap_activate(vcpu, VCPU_HCR_EL2,
+			      test_trap_set0, test_trap_clear0);
+}
+
+static u64 test_trap_set1;
+static u64 test_trap_clear1;
+static void test_trap_activate1(struct kvm_vcpu *vcpu)
+{
+	feature_trap_activate(vcpu, VCPU_HCR_EL2,
+			      test_trap_set1, test_trap_clear1);
+}
+
+static u64 test_trap_set2;
+static u64 test_trap_clear2;
+static void test_trap_activate2(struct kvm_vcpu *vcpu)
+{
+	feature_trap_activate(vcpu, VCPU_HCR_EL2,
+			      test_trap_set2, test_trap_clear2);
+}
+
+
+static void setup_feature_config_ctrl(struct feature_config_ctrl *config,
+				      u32 id, int shift, int min, bool sign,
+				      void *fn)
+{
+	memset(config, 0, sizeof(*config));
+	config->ftr_reg = id;
+	config->ftr_shift = shift;
+	config->ftr_min = min;
+	config->ftr_signed = sign;
+	config->trap_activate = fn;
+}
+
+/*
+ * Tests for id_reg_features_trap_activate().
+ * Set up an id_reg_desc with three entries in id_reg_desc->trap_features[].
+ * Check that the config register is updated to enable traps for the
+ * disabled features.
+ */
+static void id_reg_features_trap_activate_test(struct kunit *test)
+{
+	struct kvm_vcpu *vcpu;
+	u32 id;
+	u64 cfg_set, cfg_clear, id_reg_sys_val, id_reg_val;
+	struct id_reg_desc id_reg_data = {};
+	struct feature_config_ctrl config0, config1, config2;
+	struct feature_config_ctrl *trap_features[] = {
+		&config0, &config1, &config2, NULL,
+	};
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_EXPECT_TRUE(test, vcpu);
+	if (!vcpu)
+		return;
+
+	/* Setup id_reg_desc */
+	id_reg_sys_val = 0x7777777777777777;
+	id = SYS_ID_AA64DFR0_EL1;
+	set_sys_desc((struct sys_reg_desc *)&id_reg_data.reg_desc, id);
+	id_reg_data.sys_val = id_reg_sys_val;
+	id_reg_data.vcpu_limit_val  = (u64)-1;
+	id_reg_data.trap_features =
+			(const struct feature_config_ctrl *(*)[])trap_features;
+
+	/* Setup the 1st feature_config_ctrl */
+	test_trap_set0 = 0x3;
+	test_trap_clear0 = 0x0;
+	setup_feature_config_ctrl(&config0, id, 60, 2, FTR_UNSIGNED,
+				  &test_trap_activate0);
+
+	/* Setup the 2nd feature_config_ctrl */
+	test_trap_set1 = 0x30000040;
+	test_trap_clear1 = 0x40000000;
+	setup_feature_config_ctrl(&config1, id, 0, 1, FTR_UNSIGNED,
+				  &test_trap_activate1);
+
+	/* Setup the 3rd feature_config_ctrl */
+	test_trap_set2 = 0x30000000800;
+	test_trap_clear2 = 0x40000000000;
+	setup_feature_config_ctrl(&config2, id, 4, 0, FTR_SIGNED,
+				  &test_trap_activate2);
+
+#define	ftr_dis(cfg)	\
+	((u64)(((cfg)->ftr_min - 1) & 0xf) << (cfg)->ftr_shift)
+
+#define	ftr_en(cfg)	\
+	((u64)(cfg)->ftr_min << (cfg)->ftr_shift)
+
+	/* Test with features enabled for config0, 1 and 2 */
+	id_reg_val = ftr_en(&config0) | ftr_en(&config1) | ftr_en(&config2);
+	write_kvm_id_reg(vcpu->kvm, id, id_reg_val);
+	vcpu->arch.hcr_el2 = 0;
+	id_reg_features_trap_activate(vcpu, &id_reg_data);
+	KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2, 0);
+
+
+	/* Test with features disabled for config0 only */
+	id_reg_val = ftr_dis(&config0) | ftr_en(&config1) | ftr_en(&config2);
+	write_kvm_id_reg(vcpu->kvm, id, id_reg_val);
+	vcpu->arch.hcr_el2 = 0;
+	cfg_set = test_trap_set0;
+	cfg_clear = test_trap_clear0;
+
+	id_reg_features_trap_activate(vcpu, &id_reg_data);
+	KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2 & cfg_set, cfg_set);
+	KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2 & cfg_clear, 0);
+
+
+	/* Test with features disabled for config0 and config1  */
+	id_reg_val = ftr_dis(&config0) | ftr_dis(&config1) | ftr_en(&config2);
+	write_kvm_id_reg(vcpu->kvm, id, id_reg_val);
+	vcpu->arch.hcr_el2 = 0;
+
+	cfg_set = test_trap_set0 | test_trap_set1;
+	cfg_clear = test_trap_clear0 | test_trap_clear1;
+
+	id_reg_features_trap_activate(vcpu, &id_reg_data);
+	KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2 & cfg_set, cfg_set);
+	KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2 & cfg_clear, 0);
+
+
+	/* Test with features disabled for config0, config1, and config2 */
+	id_reg_val = ftr_dis(&config0) | ftr_dis(&config1) | ftr_dis(&config2);
+	write_kvm_id_reg(vcpu->kvm, id, id_reg_val);
+	vcpu->arch.hcr_el2 = 0;
+
+	cfg_set = test_trap_set0 | test_trap_set1 | test_trap_set2;
+	cfg_clear = test_trap_clear0 | test_trap_clear1 | test_trap_clear2;
+
+	id_reg_features_trap_activate(vcpu, &id_reg_data);
+	KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2 & cfg_set, cfg_set);
+	KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2 & cfg_clear, 0);
+
+
+	/* Test with id_reg_data.trap_features = NULL */
+	id_reg_data.trap_features = NULL;
+	vcpu->arch.hcr_el2 = 0;
+	id_reg_features_trap_activate(vcpu, &id_reg_data);
+	KUNIT_EXPECT_EQ(test, vcpu->arch.hcr_el2, 0);
+}
+
+/* Tests for vcpu_need_trap_ptrauth(). */
+static void vcpu_need_trap_ptrauth_test(struct kunit *test)
+{
+	struct kvm_vcpu *vcpu;
+
+	vcpu = test_kvm_vcpu_init(test);
+	KUNIT_EXPECT_TRUE(test, vcpu);
+	if (!vcpu)
+		return;
+
+	if (system_has_full_ptr_auth()) {
+		/* Tests with PTRAUTH disabled vCPU */
+		KUNIT_EXPECT_TRUE(test, vcpu_need_trap_ptrauth(vcpu));
+
+		/* Tests with PTRAUTH enabled vCPU */
+		vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_PTRAUTH;
+
+		KUNIT_EXPECT_FALSE(test, vcpu_need_trap_ptrauth(vcpu));
+	} else {
+		KUNIT_EXPECT_FALSE(test, vcpu_need_trap_ptrauth(vcpu));
+	}
+}
+
 static struct kunit_case kvm_sys_regs_test_cases[] = {
 	KUNIT_CASE_PARAM(vcpu_id_reg_feature_frac_check_test, frac_gen_params),
 	KUNIT_CASE_PARAM(validate_id_aa64mmfr0_tgran2_test, tgran4_2_gen_params),
@@ -1056,6 +1272,9 @@ static struct kunit_case kvm_sys_regs_test_cases[] = {
 	KUNIT_CASE(validate_id_dfr0_el1_test),
 	KUNIT_CASE(validate_mvfr1_el1_test),
 	KUNIT_CASE(validate_id_reg_test),
+	KUNIT_CASE(vcpu_need_trap_ptrauth_test),
+	KUNIT_CASE_PARAM(feature_trap_activate_test, trap_gen_params),
+	KUNIT_CASE(id_reg_features_trap_activate_test),
 	{}
 };
 
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 33/38] KVM: arm64: selftests: Add helpers to extract a field of ID registers
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Introduce a couple of helpers to extract a field from an ID register value.
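
As a worked example of the extraction arithmetic (hypothetical value;
the helpers are the ones added in the diff below), take a 4-bit field
at bit position 4 holding 0xf:

	uint64_t val = 0xf0;	/* bits[7:4] == 0xf */

	/* (int64_t)(val << 56) >> 60 sign-extends the field to -1 */
	assert(extract_signed_field(val, 4, 4) == -1);

	/* (uint64_t)(val << 56) >> 60 keeps the field as 15 */
	assert(extract_unsigned_field(val, 4, 4) == 15);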

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 .../selftests/kvm/include/aarch64/processor.h |  5 ++++
 .../selftests/kvm/lib/aarch64/processor.c     | 27 +++++++++++++++++++
 2 files changed, 32 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index 8f9f46979a00..e12411fec822 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -185,4 +185,9 @@ static inline void local_irq_disable(void)
 	asm volatile("msr daifset, #3" : : : "memory");
 }
 
+int extract_signed_field(uint64_t val, int field, int width);
+unsigned int extract_unsigned_field(uint64_t val, int field, int width);
+int cpuid_extract_ftr(uint64_t val, int field, bool sign);
+int cpuid_extract_sftr(uint64_t val, int field);
+unsigned int cpuid_extract_uftr(uint64_t val, int field);
 #endif /* SELFTEST_KVM_PROCESSOR_H */
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 9343d82519b4..c55f7dfc8567 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -500,3 +500,30 @@ void __attribute__((constructor)) init_guest_modes(void)
 {
        guest_modes_append_default();
 }
+
+/* Helpers to get a feature field from ID register value */
+int extract_signed_field(uint64_t val, int field, int width)
+{
+	return (int64_t)(val << (64 - width - field)) >> (64 - width);
+}
+
+unsigned int extract_unsigned_field(uint64_t val, int field, int width)
+{
+	return (uint64_t)(val << (64 - width - field)) >> (64 - width);
+}
+
+int cpuid_extract_ftr(uint64_t val, int field, bool sign)
+{
+	return (sign) ? extract_signed_field(val, field, 4) :
+			extract_unsigned_field(val, field, 4);
+}
+
+int cpuid_extract_sftr(uint64_t val, int field)
+{
+	return cpuid_extract_ftr(val, field, true);
+}
+
+unsigned int cpuid_extract_uftr(uint64_t val, int field)
+{
+	return cpuid_extract_ftr(val, field, false);
+}
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 34/38] KVM: arm64: selftests: Introduce id_reg_test
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Introduce a test for aarch64 to validate basic behavior of
KVM_GET_ONE_REG and KVM_SET_ONE_REG for ID registers.

This test runs only when KVM_CAP_ARM_ID_REG_CONFIGURABLE is supported.
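
The core flow the test exercises looks roughly like this (a sketch
using the selftest helpers; the register and field are arbitrary
examples, not the full set the test covers):

	uint64_t val;
	struct kvm_one_reg reg = {
		.id = KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1),
		.addr = (uint64_t)&val,
	};

	/* The initial value is the upper limit userspace may set. */
	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &reg);

	/* Clear one feature field and write the value back. */
	val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
	vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &reg);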

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 tools/arch/arm64/include/asm/sysreg.h         |    1 +
 tools/testing/selftests/kvm/Makefile          |    1 +
 .../selftests/kvm/aarch64/id_reg_test.c       | 1297 +++++++++++++++++
 3 files changed, 1299 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/id_reg_test.c

diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
index 7640fa27be94..be3947c125f1 100644
--- a/tools/arch/arm64/include/asm/sysreg.h
+++ b/tools/arch/arm64/include/asm/sysreg.h
@@ -793,6 +793,7 @@
 #define ID_AA64PFR0_ELx_32BIT_64BIT	0x2
 
 /* id_aa64pfr1 */
+#define ID_AA64PFR1_CSV2FRAC_SHIFT	32
 #define ID_AA64PFR1_MPAMFRAC_SHIFT	16
 #define ID_AA64PFR1_RASFRAC_SHIFT	12
 #define ID_AA64PFR1_MTE_SHIFT		8
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 681b173aa87c..e94e4dc45297 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -105,6 +105,7 @@ TEST_GEN_PROGS_x86_64 += system_counter_offset_test
 TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
 TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
 TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
+TEST_GEN_PROGS_aarch64 += aarch64/id_reg_test
 TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
diff --git a/tools/testing/selftests/kvm/aarch64/id_reg_test.c b/tools/testing/selftests/kvm/aarch64/id_reg_test.c
new file mode 100644
index 000000000000..7e7e66b867c0
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/id_reg_test.c
@@ -0,0 +1,1297 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * id_reg_test.c - Tests reading/writing aarch64 ID registers
+ *
+ * The test validates the KVM_SET_ONE_REG/KVM_GET_ONE_REG ioctls for ID
+ * registers, and that reading the ID registers from the guest works fine.
+ *
+ * Copyright (c) 2022, Google LLC.
+ */
+
+#define _GNU_SOURCE
+#include <stdlib.h>
+#include <time.h>
+#include <pthread.h>
+#include <linux/kvm.h>
+#include <linux/sizes.h>
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "vgic.h"
+
+/* Reserved ID registers */
+#define	SYS_ID_REG_3_3_EL1		sys_reg(3, 0, 0, 3, 3)
+#define	SYS_ID_REG_3_7_EL1		sys_reg(3, 0, 0, 3, 7)
+
+#define	SYS_ID_REG_4_2_EL1		sys_reg(3, 0, 0, 4, 2)
+#define	SYS_ID_REG_4_3_EL1		sys_reg(3, 0, 0, 4, 3)
+#define	SYS_ID_REG_4_5_EL1		sys_reg(3, 0, 0, 4, 5)
+#define	SYS_ID_REG_4_6_EL1		sys_reg(3, 0, 0, 4, 6)
+#define	SYS_ID_REG_4_7_EL1		sys_reg(3, 0, 0, 4, 7)
+
+#define	SYS_ID_REG_5_2_EL1		sys_reg(3, 0, 0, 5, 2)
+#define	SYS_ID_REG_5_3_EL1		sys_reg(3, 0, 0, 5, 3)
+#define	SYS_ID_REG_5_6_EL1		sys_reg(3, 0, 0, 5, 6)
+#define	SYS_ID_REG_5_7_EL1		sys_reg(3, 0, 0, 5, 7)
+
+#define	SYS_ID_REG_6_2_EL1		sys_reg(3, 0, 0, 6, 2)
+#define	SYS_ID_REG_6_3_EL1		sys_reg(3, 0, 0, 6, 3)
+#define	SYS_ID_REG_6_4_EL1		sys_reg(3, 0, 0, 6, 4)
+#define	SYS_ID_REG_6_5_EL1		sys_reg(3, 0, 0, 6, 5)
+#define	SYS_ID_REG_6_6_EL1		sys_reg(3, 0, 0, 6, 6)
+#define	SYS_ID_REG_6_7_EL1		sys_reg(3, 0, 0, 6, 7)
+
+#define	SYS_ID_REG_7_3_EL1		sys_reg(3, 0, 0, 7, 3)
+#define	SYS_ID_REG_7_4_EL1		sys_reg(3, 0, 0, 7, 4)
+#define	SYS_ID_REG_7_5_EL1		sys_reg(3, 0, 0, 7, 5)
+#define	SYS_ID_REG_7_6_EL1		sys_reg(3, 0, 0, 7, 6)
+#define	SYS_ID_REG_7_7_EL1		sys_reg(3, 0, 0, 7, 7)
+
+#define	READ_ID_REG_FN(name)	read_## name ## _EL1
+
+#define	DEFINE_READ_SYS_REG(reg_name)			\
+uint64_t read_##reg_name(void)				\
+{							\
+	return read_sysreg_s(SYS_##reg_name);		\
+}
+
+#define DEFINE_READ_ID_REG(name)	\
+	DEFINE_READ_SYS_REG(name ## _EL1)
+
+#define	__ID_REG(reg_name)		\
+	.name = #reg_name,		\
+	.id = SYS_## reg_name ##_EL1,	\
+	.read_reg = READ_ID_REG_FN(reg_name),
+
+#define	ID_REG_ENT(reg_name)	\
+	[ID_IDX(reg_name)] = { __ID_REG(reg_name) }
+
+/* Functions to read each ID register */
+/* CRm=1 */
+DEFINE_READ_ID_REG(ID_PFR0)
+DEFINE_READ_ID_REG(ID_PFR1)
+DEFINE_READ_ID_REG(ID_DFR0)
+DEFINE_READ_ID_REG(ID_AFR0)
+DEFINE_READ_ID_REG(ID_MMFR0)
+DEFINE_READ_ID_REG(ID_MMFR1)
+DEFINE_READ_ID_REG(ID_MMFR2)
+DEFINE_READ_ID_REG(ID_MMFR3)
+
+/* CRm=2 */
+DEFINE_READ_ID_REG(ID_ISAR0)
+DEFINE_READ_ID_REG(ID_ISAR1)
+DEFINE_READ_ID_REG(ID_ISAR2)
+DEFINE_READ_ID_REG(ID_ISAR3)
+DEFINE_READ_ID_REG(ID_ISAR4)
+DEFINE_READ_ID_REG(ID_ISAR5)
+DEFINE_READ_ID_REG(ID_MMFR4)
+DEFINE_READ_ID_REG(ID_ISAR6)
+
+/* CRm=3 */
+DEFINE_READ_ID_REG(MVFR0)
+DEFINE_READ_ID_REG(MVFR1)
+DEFINE_READ_ID_REG(MVFR2)
+DEFINE_READ_ID_REG(ID_REG_3_3)
+DEFINE_READ_ID_REG(ID_PFR2)
+DEFINE_READ_ID_REG(ID_DFR1)
+DEFINE_READ_ID_REG(ID_MMFR5)
+DEFINE_READ_ID_REG(ID_REG_3_7)
+
+/* CRm=4 */
+DEFINE_READ_ID_REG(ID_AA64PFR0)
+DEFINE_READ_ID_REG(ID_AA64PFR1)
+DEFINE_READ_ID_REG(ID_REG_4_2)
+DEFINE_READ_ID_REG(ID_REG_4_3)
+DEFINE_READ_ID_REG(ID_AA64ZFR0)
+DEFINE_READ_ID_REG(ID_REG_4_5)
+DEFINE_READ_ID_REG(ID_REG_4_6)
+DEFINE_READ_ID_REG(ID_REG_4_7)
+
+/* CRm=5 */
+DEFINE_READ_ID_REG(ID_AA64DFR0)
+DEFINE_READ_ID_REG(ID_AA64DFR1)
+DEFINE_READ_ID_REG(ID_REG_5_2)
+DEFINE_READ_ID_REG(ID_REG_5_3)
+DEFINE_READ_ID_REG(ID_AA64AFR0)
+DEFINE_READ_ID_REG(ID_AA64AFR1)
+DEFINE_READ_ID_REG(ID_REG_5_6)
+DEFINE_READ_ID_REG(ID_REG_5_7)
+
+/* CRm=6 */
+DEFINE_READ_ID_REG(ID_AA64ISAR0)
+DEFINE_READ_ID_REG(ID_AA64ISAR1)
+DEFINE_READ_ID_REG(ID_REG_6_2)
+DEFINE_READ_ID_REG(ID_REG_6_3)
+DEFINE_READ_ID_REG(ID_REG_6_4)
+DEFINE_READ_ID_REG(ID_REG_6_5)
+DEFINE_READ_ID_REG(ID_REG_6_6)
+DEFINE_READ_ID_REG(ID_REG_6_7)
+
+/* CRm=7 */
+DEFINE_READ_ID_REG(ID_AA64MMFR0)
+DEFINE_READ_ID_REG(ID_AA64MMFR1)
+DEFINE_READ_ID_REG(ID_AA64MMFR2)
+DEFINE_READ_ID_REG(ID_REG_7_3)
+DEFINE_READ_ID_REG(ID_REG_7_4)
+DEFINE_READ_ID_REG(ID_REG_7_5)
+DEFINE_READ_ID_REG(ID_REG_7_6)
+DEFINE_READ_ID_REG(ID_REG_7_7)
+
+#define	ID_IDX(name)	REG_IDX_## name
+
+enum id_reg_idx {
+	/* CRm=1 */
+	ID_IDX(ID_PFR0) = 0,
+	ID_IDX(ID_PFR1),
+	ID_IDX(ID_DFR0),
+	ID_IDX(ID_AFR0),
+	ID_IDX(ID_MMFR0),
+	ID_IDX(ID_MMFR1),
+	ID_IDX(ID_MMFR2),
+	ID_IDX(ID_MMFR3),
+
+	/* CRm=2 */
+	ID_IDX(ID_ISAR0),
+	ID_IDX(ID_ISAR1),
+	ID_IDX(ID_ISAR2),
+	ID_IDX(ID_ISAR3),
+	ID_IDX(ID_ISAR4),
+	ID_IDX(ID_ISAR5),
+	ID_IDX(ID_MMFR4),
+	ID_IDX(ID_ISAR6),
+
+	/* CRm=3 */
+	ID_IDX(MVFR0),
+	ID_IDX(MVFR1),
+	ID_IDX(MVFR2),
+	ID_IDX(ID_REG_3_3),
+	ID_IDX(ID_PFR2),
+	ID_IDX(ID_DFR1),
+	ID_IDX(ID_MMFR5),
+	ID_IDX(ID_REG_3_7),
+
+	/* CRm=4 */
+	ID_IDX(ID_AA64PFR0),
+	ID_IDX(ID_AA64PFR1),
+	ID_IDX(ID_REG_4_2),
+	ID_IDX(ID_REG_4_3),
+	ID_IDX(ID_AA64ZFR0),
+	ID_IDX(ID_REG_4_5),
+	ID_IDX(ID_REG_4_6),
+	ID_IDX(ID_REG_4_7),
+
+	/* CRm=5 */
+	ID_IDX(ID_AA64DFR0),
+	ID_IDX(ID_AA64DFR1),
+	ID_IDX(ID_REG_5_2),
+	ID_IDX(ID_REG_5_3),
+	ID_IDX(ID_AA64AFR0),
+	ID_IDX(ID_AA64AFR1),
+	ID_IDX(ID_REG_5_6),
+	ID_IDX(ID_REG_5_7),
+
+	/* CRm=6 */
+	ID_IDX(ID_AA64ISAR0),
+	ID_IDX(ID_AA64ISAR1),
+	ID_IDX(ID_REG_6_2),
+	ID_IDX(ID_REG_6_3),
+	ID_IDX(ID_REG_6_4),
+	ID_IDX(ID_REG_6_5),
+	ID_IDX(ID_REG_6_6),
+	ID_IDX(ID_REG_6_7),
+
+	/* CRm=7 */
+	ID_IDX(ID_AA64MMFR0),
+	ID_IDX(ID_AA64MMFR1),
+	ID_IDX(ID_AA64MMFR2),
+	ID_IDX(ID_REG_7_3),
+	ID_IDX(ID_REG_7_4),
+	ID_IDX(ID_REG_7_5),
+	ID_IDX(ID_REG_7_6),
+	ID_IDX(ID_REG_7_7),
+};
+
+struct id_reg_test_info {
+	char		*name;
+	uint32_t	id;
+	/* Indicates the register can be set to 0 */
+	bool		can_clear;
+	uint64_t	initial_value;
+	uint64_t	current_value;
+	uint64_t	(*read_reg)(void);
+};
+
+#define	ID_REG_INFO(name)	(&id_reg_list[ID_IDX(name)])
+static struct id_reg_test_info id_reg_list[] = {
+	/* CRm=1 */
+	ID_REG_ENT(ID_PFR0),
+	ID_REG_ENT(ID_PFR1),
+	ID_REG_ENT(ID_DFR0),
+	ID_REG_ENT(ID_AFR0),
+	ID_REG_ENT(ID_MMFR0),
+	ID_REG_ENT(ID_MMFR1),
+	ID_REG_ENT(ID_MMFR2),
+	ID_REG_ENT(ID_MMFR3),
+
+	/* CRm=2 */
+	ID_REG_ENT(ID_ISAR0),
+	ID_REG_ENT(ID_ISAR1),
+	ID_REG_ENT(ID_ISAR2),
+	ID_REG_ENT(ID_ISAR3),
+	ID_REG_ENT(ID_ISAR4),
+	ID_REG_ENT(ID_ISAR5),
+	ID_REG_ENT(ID_MMFR4),
+	ID_REG_ENT(ID_ISAR6),
+
+	/* CRm=3 */
+	ID_REG_ENT(MVFR0),
+	ID_REG_ENT(MVFR1),
+	ID_REG_ENT(MVFR2),
+	ID_REG_ENT(ID_REG_3_3),
+	ID_REG_ENT(ID_PFR2),
+	ID_REG_ENT(ID_DFR1),
+	ID_REG_ENT(ID_MMFR5),
+	ID_REG_ENT(ID_REG_3_7),
+
+	/* CRm=4 */
+	ID_REG_ENT(ID_AA64PFR0),
+	ID_REG_ENT(ID_AA64PFR1),
+	ID_REG_ENT(ID_REG_4_2),
+	ID_REG_ENT(ID_REG_4_3),
+	ID_REG_ENT(ID_AA64ZFR0),
+	ID_REG_ENT(ID_REG_4_5),
+	ID_REG_ENT(ID_REG_4_6),
+	ID_REG_ENT(ID_REG_4_7),
+
+	/* CRm=5 */
+	ID_REG_ENT(ID_AA64DFR0),
+	ID_REG_ENT(ID_AA64DFR1),
+	ID_REG_ENT(ID_REG_5_2),
+	ID_REG_ENT(ID_REG_5_3),
+	ID_REG_ENT(ID_AA64AFR0),
+	ID_REG_ENT(ID_AA64AFR1),
+	ID_REG_ENT(ID_REG_5_6),
+	ID_REG_ENT(ID_REG_5_7),
+
+	/* CRm=6 */
+	ID_REG_ENT(ID_AA64ISAR0),
+	ID_REG_ENT(ID_AA64ISAR1),
+	ID_REG_ENT(ID_REG_6_2),
+	ID_REG_ENT(ID_REG_6_3),
+	ID_REG_ENT(ID_REG_6_4),
+	ID_REG_ENT(ID_REG_6_5),
+	ID_REG_ENT(ID_REG_6_6),
+	ID_REG_ENT(ID_REG_6_7),
+
+	/* CRm=7 */
+	ID_REG_ENT(ID_AA64MMFR0),
+	ID_REG_ENT(ID_AA64MMFR1),
+	ID_REG_ENT(ID_AA64MMFR2),
+	ID_REG_ENT(ID_REG_7_3),
+	ID_REG_ENT(ID_REG_7_4),
+	ID_REG_ENT(ID_REG_7_5),
+	ID_REG_ENT(ID_REG_7_6),
+	ID_REG_ENT(ID_REG_7_7),
+};
+
+static bool aarch32_support = true;
+
+#define is_id_reg(id)	\
+	(sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&	\
+	 sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 0 &&	\
+	 sys_reg_CRm(id) < 8)
+
+#define	UPDATE_ID_UFIELD(regval, shift, fval)	\
+	(((regval) & ~(0xfULL << (shift))) |	\
+	 (((uint64_t)((fval) & 0xf)) << (shift)))
+
+void *pmu_init(struct kvm_vm *vm, uint32_t vcpu)
+{
+	struct kvm_device_attr attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
+	};
+
+	vcpu_ioctl(vm, vcpu, KVM_SET_DEVICE_ATTR, &attr);
+	return NULL;
+}
+
+void *sve_init(struct kvm_vm *vm, uint32_t vcpu)
+{
+	int feature = KVM_ARM_VCPU_SVE;
+
+	vcpu_ioctl(vm, vcpu, KVM_ARM_VCPU_FINALIZE, &feature);
+	return NULL;
+}
+
+#define GICD_BASE_GPA			0x8000000ULL
+#define GICR_BASE_GPA			0x80A0000ULL
+
+void *vgic_init(struct kvm_vm *vm, uint32_t vcpu)
+{
+	/* We just need to configure GICv3 (we don't use it though) */
+	int gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+
+	return (void *)(intptr_t)gic_fd;
+}
+
+void vgic_fini(struct kvm_vm *vm, uint32_t vcpu, void *data)
+{
+	close((int)(intptr_t)data);
+}
+
+
+static bool is_aarch32_id_reg(uint32_t id)
+{
+	uint32_t crm, op2;
+
+	if (!is_id_reg(id))
+		return false;
+
+	crm = sys_reg_CRm(id);
+	op2 = sys_reg_Op2(id);
+	if (crm == 1 || crm == 2 || (crm == 3 && (op2 != 3 && op2 != 7)))
+		/* AArch32 ID register */
+		return true;
+
+	return false;
+}
+
+#define	MAX_CAPS	2
+struct feature_test_info {
+	char	*name;	/* Feature Name (Debug information) */
+
+	/* ID register that identifies the presence of the feature */
+	struct id_reg_test_info	*sreg;
+
+	/*
+	 * Bit position of the ID register field that identifies
+	 * the presence of the feature.
+	 */
+	int	shift;
+
+	/* Min value of the field that indicates the presence of the feature. */
+	int	min;
+	bool	is_sign;	/* Is the field signed or unsigned? */
+	int	ncaps;		/* Number of valid capabilities in caps[] */
+
+	/* KVM_CAP_* capabilities indicating that KVM supports this feature */
+	long	caps[MAX_CAPS];
+
+	/* struct kvm_enable_cap to use the capability if needed */
+	struct kvm_enable_cap	*opt_in_cap;
+
+	/* Should the guest check the ID register for this feature? */
+	bool	run_test;
+
+	/*
+	 * Extra initialization function to enable the feature if needed.
+	 * (e.g. KVM_ARM_VCPU_FINALIZE for SVE)
+	 * The return value of this function will be passed to fini_feature().
+	 */
+	void	*(*init_feature)(struct kvm_vm *vm, uint32_t vcpuid);
+
+	/*
+	 * Clean up anything that init_feature() initialized or allocated
+	 * as needed. The 'data' is the return value from init_feature().
+	 */
+	void	(*fini_feature)(struct kvm_vm *vm, uint32_t vcpuid, void *data);
+
+	/* struct kvm_vcpu_init to opt-in the feature if needed */
+	struct kvm_vcpu_init	*vcpu_init;
+
+	/* Extra feature specific tests */
+	void	(*test_feature)(struct feature_test_info *finfo);
+};
+
+static void pmu_test(struct feature_test_info *finfo);
+
+/* Information for opt-in CPU features */
+static struct feature_test_info feature_test_info_table[] = {
+	{
+		.name = "SVE",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_SVE_SHIFT,
+		.min = 1,
+		.caps = {KVM_CAP_ARM_SVE},
+		.ncaps = 1,
+		.init_feature = sve_init,
+		.vcpu_init = &(struct kvm_vcpu_init) {
+			.features = {1ULL << KVM_ARM_VCPU_SVE},
+		},
+	},
+	{
+		.name = "GIC",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_GIC_SHIFT,
+		.min = 1,
+		.caps = {KVM_CAP_IRQCHIP},
+		.ncaps = 1,
+		.init_feature = vgic_init,
+		.fini_feature = vgic_fini,
+	},
+	{
+		.name = "MTE",
+		.sreg = ID_REG_INFO(ID_AA64PFR1),
+		.shift = ID_AA64PFR1_MTE_SHIFT,
+		.min = 2,
+		.caps = {KVM_CAP_ARM_MTE},
+		.ncaps = 1,
+		.opt_in_cap = &(struct kvm_enable_cap) {
+				.cap = KVM_CAP_ARM_MTE,
+		},
+	},
+	{
+		.name = "PMUV3",
+		.sreg = ID_REG_INFO(ID_AA64DFR0),
+		.shift = ID_AA64DFR0_PMUVER_SHIFT,
+		.min = 1,
+		.init_feature = pmu_init,
+		.test_feature = pmu_test,
+		.caps = {KVM_CAP_ARM_PMU_V3},
+		.ncaps = 1,
+		.vcpu_init = &(struct kvm_vcpu_init) {
+			.features = {1ULL << KVM_ARM_VCPU_PMU_V3},
+		},
+	},
+	{
+		.name = "PERFMON",
+		.sreg = ID_REG_INFO(ID_DFR0),
+		.shift = ID_DFR0_PERFMON_SHIFT,
+		.min = 3,
+		.init_feature = pmu_init,
+		.test_feature = pmu_test,
+		.caps = {KVM_CAP_ARM_PMU_V3},
+		.ncaps = 1,
+		.vcpu_init = &(struct kvm_vcpu_init) {
+			.features = {1ULL << KVM_ARM_VCPU_PMU_V3},
+		},
+	},
+};
+
+static void walk_id_reg_list(void (*fn)(struct id_reg_test_info *r, void *arg),
+			     void *arg)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_list); i++)
+		fn(&id_reg_list[i], arg);
+}
+
+static void guest_code_id_reg_check_one(struct id_reg_test_info *idr, void *arg)
+{
+	uint64_t v = idr->read_reg();
+
+	GUEST_ASSERT_2(v == idr->current_value, idr->name, idr->current_value);
+}
+
+static void guest_code_id_reg_check_all(uint32_t cpu)
+{
+	walk_id_reg_list(guest_code_id_reg_check_one, NULL);
+	GUEST_DONE();
+}
+
+static void guest_code_do_nothing(uint32_t cpu)
+{
+	GUEST_DONE();
+}
+
+static void guest_code_feature_check(uint32_t cpu)
+{
+	int i;
+	struct feature_test_info *finfo;
+
+	for (i = 0; i < ARRAY_SIZE(feature_test_info_table); i++) {
+		finfo = &feature_test_info_table[i];
+		if (finfo->run_test)
+			guest_code_id_reg_check_one(finfo->sreg, NULL);
+	}
+
+	GUEST_DONE();
+}
+
+static void guest_code_ptrauth_check(uint32_t cpuid)
+{
+	struct id_reg_test_info *sreg = ID_REG_INFO(ID_AA64ISAR1);
+	uint64_t val = sreg->read_reg();
+
+	GUEST_ASSERT_2(val == sreg->current_value, "PTRAUTH", val);
+	GUEST_DONE();
+}
+
+static void reset_id_reg_info_current_value(struct id_reg_test_info *info,
+					    void *arg)
+{
+	info->current_value = info->initial_value;
+}
+
+/* Reset current_value field of each id_reg_test_info */
+static void reset_id_reg_info(void)
+{
+	walk_id_reg_list(reset_id_reg_info_current_value, NULL);
+}
+
+static struct kvm_vm *test_vm_create(uint32_t nvcpus,
+		void (*guest_code)(uint32_t), struct kvm_vcpu_init *init,
+		struct kvm_enable_cap *cap)
+{
+	struct kvm_vm *vm;
+	uint32_t cpuid;
+	uint64_t mem_pages;
+
+	mem_pages = DEFAULT_GUEST_PHY_PAGES + DEFAULT_STACK_PGS * nvcpus;
+	mem_pages += mem_pages / (PTES_PER_MIN_PAGE * 2);
+	mem_pages = vm_adjust_num_guest_pages(VM_MODE_DEFAULT, mem_pages);
+
+	vm = vm_create(VM_MODE_DEFAULT, mem_pages, O_RDWR);
+	if (cap)
+		vm_enable_cap(vm, cap);
+
+	kvm_vm_elf_load(vm, program_invocation_name);
+
+	if (init && init->target == -1) {
+		struct kvm_vcpu_init preferred;
+
+		vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &preferred);
+		init->target = preferred.target;
+	}
+
+	vm_init_descriptor_tables(vm);
+	for (cpuid = 0; cpuid < nvcpus; cpuid++) {
+		aarch64_vcpu_add_default(vm, cpuid, init, guest_code);
+		vcpu_init_descriptor_tables(vm, cpuid);
+	}
+
+	ucall_init(vm, NULL);
+	return vm;
+}
+
+static void test_vm_free(struct kvm_vm *vm)
+{
+	ucall_uninit(vm);
+	kvm_vm_free(vm);
+}
+
+#define	TEST_RUN(vm, cpu)	\
+	(test_vcpu_run(__func__, __LINE__, vm, cpu, true))
+
+#define	TEST_RUN_NO_SYNC_DATA(vm, cpu)	\
+	(test_vcpu_run(__func__, __LINE__, vm, cpu, false))
+
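+/*
+ * Run @vcpuid and return 0 when the guest reports UCALL_DONE/UCALL_SYNC,
+ * or errno when KVM_RUN itself fails.  When @sync_data is true,
+ * id_reg_list and feature_test_info_table are synced to the guest
+ * before the run and synced back from it afterwards.
+ */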
+static int test_vcpu_run(const char *test_name, int line,
+			 struct kvm_vm *vm, uint32_t vcpuid, bool sync_data)
+{
+	struct ucall uc;
+	int ret;
+
+	if (sync_data) {
+		sync_global_to_guest(vm, id_reg_list);
+		sync_global_to_guest(vm, feature_test_info_table);
+	}
+
+	vcpu_args_set(vm, vcpuid, 1, vcpuid);
+
+	ret = _vcpu_run(vm, vcpuid);
+	if (ret) {
+		ret = errno;
+		goto sync_exit;
+	}
+
+	switch (get_ucall(vm, vcpuid, &uc)) {
+	case UCALL_SYNC:
+	case UCALL_DONE:
+		ret = 0;
+		break;
+	case UCALL_ABORT:
+		TEST_FAIL(
+		    "%s (%s) at line %d (user %s at line %d), args[3]=0x%lx",
+		    (char *)uc.args[0], (char *)uc.args[2], (int)uc.args[1],
+		    test_name, line, uc.args[3]);
+		break;
+	default:
+		TEST_FAIL("Unexpected guest exit\n");
+	}
+
+sync_exit:
+	if (sync_data) {
+		sync_global_from_guest(vm, id_reg_list);
+		sync_global_from_guest(vm, feature_test_info_table);
+	}
+	return ret;
+}
+
+/*
+ * Test KVM's special handling of ID_AA64DFR0.PMUVER/DFR0.PERFMON:
+ * KVM ignores userspace's request to set those fields to 0xf
+ * (IMPLEMENTATION DEFINED PMU) and sets them to 0 instead.  KVM does
+ * this to keep live migration working from older KVMs, which
+ * erroneously set those fields to 0xf for the guest when the host's
+ * sanitized value is 0xf (the fields should have been set to 0x0, as
+ * KVM doesn't support an IMPLEMENTATION DEFINED PMU for the guest).
+ */
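+/*
+ * For example, with the PMUVER field (bits [11:8]) of ID_AA64DFR0_EL1
+ * being 0, KVM_SET_ONE_REG with the field updated to 0xf succeeds, but
+ * a subsequent KVM_GET_ONE_REG still reports the field as 0, which is
+ * what pmu_test() below verifies.
+ */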
+static void pmu_test(struct feature_test_info *finfo)
+{
+	struct id_reg_test_info *sreg = finfo->sreg;
+	struct kvm_one_reg one_reg;
+	struct kvm_vm *vm;
+	int64_t fval, reg_val;
+	uint32_t vcpu = 0;
+	int ret;
+
+	reset_id_reg_info();
+	finfo->run_test = 1;
+
+	vm = test_vm_create(1, guest_code_feature_check, NULL, NULL);
+
+	/* Make sure that ID_AA64DFR0.PMUVER/DFR0.PERFMON is 0. */
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+	vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+	fval = cpuid_extract_ftr(reg_val, finfo->shift, finfo->is_sign);
+	TEST_ASSERT(fval == 0, "%s field of %s should initially be 0 but is %ld",
+		    finfo->name, sreg->name, fval);
+
+	/* Try to set ID_AA64DFR0.PMUVER/DFR0.PERFMON to -1 (0xf). */
+	fval = -1;
+	reg_val = UPDATE_ID_UFIELD(reg_val, finfo->shift, fval);
+	ret = _vcpu_ioctl(vm, vcpu, KVM_SET_ONE_REG, &one_reg);
+	TEST_ASSERT(ret == 0, "Setting %s field of %s to %ld failed (%d)\n",
+		    finfo->name, sreg->name, fval, ret);
+
+	/* Check if ID_AA64DFR0.PMUVER/DFR0.PERFMON is still 0. */
+	vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+	fval = cpuid_extract_ftr(reg_val, finfo->shift, finfo->is_sign);
+	TEST_ASSERT(fval == 0, "%s field of %s should still be 0 but is %ld",
+		    finfo->name, sreg->name, fval);
+
+	sreg->current_value = reg_val;
+	ret = TEST_RUN(vm, vcpu);
+	finfo->run_test = 0;
+	test_vm_free(vm);
+}
+
+struct vm_vcpu_arg {
+	struct kvm_vm	*vm;
+	uint32_t	vcpuid;
+	bool		after_run;
+};
+
+/*
+ * Test that KVM_SET_ONE_REG accepts the value KVM_GET_ONE_REG returns,
+ * that KVM_SET_ONE_REG with zero works before KVM_RUN (and fails after
+ * KVM_RUN), and that KVM_GET_ONE_REG returns the value KVM_SET_ONE_REG
+ * sets.
+ */
+static void test_get_set_id_reg(struct id_reg_test_info *sreg, void *arg)
+{
+	struct kvm_vm *vm = ((struct vm_vcpu_arg *)arg)->vm;
+	uint32_t vcpuid = ((struct vm_vcpu_arg *)arg)->vcpuid;
+	bool after_run = ((struct vm_vcpu_arg *)arg)->after_run;
+	struct kvm_one_reg one_reg;
+	uint64_t reg_val, tval;
+	int ret;
+
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+
+	/* Check the current register value */
+	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg);
+	TEST_ASSERT(reg_val == sreg->current_value,
+		    "GET(%s) didn't return 0x%lx but 0x%lx",
+		    sreg->name, sreg->current_value, reg_val);
+	tval = reg_val;
+
+	/* Try to clear the register that should be able to be cleared. */
+	if ((reg_val != 0) && (sreg->can_clear)) {
+		reg_val = 0;
+		ret = _vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &one_reg);
+		if (after_run) {
+			/* Expect an error after KVM_RUN */
+			TEST_ASSERT(ret,
+				    "Clearing %s unexpectedly worked\n",
+				    sreg->name);
+		} else {
+			TEST_ASSERT(!ret,
+				    "Clearing %s didn't work\n", sreg->name);
+			/*
+			 * Make sure that KVM_GET_ONE_REG provides the value
+			 * we set.
+			 */
+			vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg);
+			TEST_ASSERT(reg_val == 0,
+				    "GET(%s) didn't return 0x%lx but 0x%lx",
+				    sreg->name, (uint64_t)0, reg_val);
+		}
+	}
+
+	/* Check if KVM_SET_ONE_REG works with the original value. */
+	reg_val = tval;
+	ret = _vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &one_reg);
+	TEST_ASSERT(ret == 0, "Setting the same ID reg value should work\n");
+
+	/* Make sure that KVM_GET_ONE_REG provides the value we set. */
+	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg);
+	TEST_ASSERT(reg_val == tval,
+		    "GET(%s) didn't return 0x%lx but 0x%lx",
+		    sreg->name, sreg->current_value, reg_val);
+}
+
+/*
+ * Test if KVM_SET_ONE_REG with the current value works before KVM_RUN,
+ * values of ID registers the guest sees are consistent with the ones
+ * userspace sees, and KVM_SET_ONE_REG after KVM_RUN works when the
+ * specified value is the same as the current one (fails otherwise).
+ */
+static void test_id_regs_basic(void)
+{
+	struct kvm_vm *vm;
+	struct vm_vcpu_arg arg = { .vcpuid = 0 };
+	int ret;
+
+	reset_id_reg_info();
+
+	vm = test_vm_create(1, guest_code_id_reg_check_all, NULL, NULL);
+
+	arg.vm = vm;
+	walk_id_reg_list(test_get_set_id_reg, &arg);
+
+	ret = TEST_RUN(vm, 0);
+	assert(!ret);
+
+	arg.after_run = true;
+	walk_id_reg_list(test_get_set_id_reg, &arg);
+
+	test_vm_free(vm);
+}
+
+static bool caps_are_supported(long *caps, int ncaps)
+{
+	int i;
+
+	for (i = 0; i < ncaps; i++) {
+		if (kvm_check_cap(caps[i]) <= 0)
+			return false;
+	}
+	return true;
+}
+
+#define	NCAPS_PTRAUTH	2
+
+/*
+ * Test if the ID register value reflects the ptrauth feature configuration.
+ * KVM_SET_ONE_REG should work as long as the requested value is consistent
+ * with the ptrauth feature configuration.
+ */
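+/*
+ * ID_AA64ISAR1_EL1 advertises address authentication via either APA
+ * (architected algorithm) or API (IMPLEMENTATION DEFINED algorithm),
+ * and generic authentication via either GPA or GPI.  The architecture
+ * allows only one field of each pair to be nonzero, which the XOR
+ * assertions below rely on.
+ */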
+static void test_feature_ptrauth(void)
+{
+	struct kvm_one_reg one_reg;
+	struct kvm_vcpu_init init;
+	struct kvm_vm *vm = NULL;
+	struct id_reg_test_info *sreg = ID_REG_INFO(ID_AA64ISAR1);
+	uint32_t vcpu = 0;
+	int64_t rval;
+	int ret;
+	int apa, api, gpa, gpi;
+	char *name = "PTRAUTH";
+	long caps[NCAPS_PTRAUTH] = {KVM_CAP_ARM_PTRAUTH_ADDRESS,
+				    KVM_CAP_ARM_PTRAUTH_GENERIC};
+
+	reset_id_reg_info();
+	one_reg.addr = (uint64_t)&rval;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+
+	if (caps_are_supported(caps, NCAPS_PTRAUTH)) {
+
+		/* Test with feature enabled */
+		memset(&init, 0, sizeof(init));
+		init.target = -1;
+		init.features[0] = (1ULL << KVM_ARM_VCPU_PTRAUTH_ADDRESS |
+				    1ULL << KVM_ARM_VCPU_PTRAUTH_GENERIC);
+		vm = test_vm_create(1, guest_code_ptrauth_check, &init, NULL);
+		vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+
+		/* Make sure values of apa/api/gpa/gpi fields are expected */
+		apa = cpuid_extract_uftr(rval, ID_AA64ISAR1_APA_SHIFT);
+		api = cpuid_extract_uftr(rval, ID_AA64ISAR1_API_SHIFT);
+		gpa = cpuid_extract_uftr(rval, ID_AA64ISAR1_GPA_SHIFT);
+		gpi = cpuid_extract_uftr(rval, ID_AA64ISAR1_GPI_SHIFT);
+
+		TEST_ASSERT((apa > 0) || (api > 0),
+			    "Either apa(0x%x) or api(0x%x) must be available",
+			    apa, api);
+		TEST_ASSERT((gpa > 0) || (gpi > 0),
+			    "Either gpa(0x%x) or gpi(0x%x) must be available",
+			    gpa, gpi);
+
+		TEST_ASSERT((apa > 0) ^ (api > 0),
+			    "Exactly one of apa(0x%x) and api(0x%x) must be available",
+			    apa, api);
+		TEST_ASSERT((gpa > 0) ^ (gpi > 0),
+			    "Exactly one of gpa(0x%x) and gpi(0x%x) must be available",
+			    gpa, gpi);
+
+		sreg->current_value = rval;
+
+		pr_debug("%s: Test with %s enabled (%s: 0x%lx)\n",
+			 __func__, name, sreg->name, sreg->current_value);
+
+		/* Make sure that the guest sees the same ID register value. */
+		ret = TEST_RUN(vm, vcpu);
+
+		TEST_ASSERT(!ret, "%s:KVM_RUN failed with %s enabled",
+			    __func__, name);
+		test_vm_free(vm);
+	}
+
+	reset_id_reg_info();
+
+	/* Test with feature disabled */
+	vm = test_vm_create(1, guest_code_feature_check, NULL, NULL);
+	vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+
+	apa = cpuid_extract_uftr(rval, ID_AA64ISAR1_APA_SHIFT);
+	api = cpuid_extract_uftr(rval, ID_AA64ISAR1_API_SHIFT);
+	gpa = cpuid_extract_uftr(rval, ID_AA64ISAR1_GPA_SHIFT);
+	gpi = cpuid_extract_uftr(rval, ID_AA64ISAR1_GPI_SHIFT);
+	TEST_ASSERT(!apa && !api && !gpa && !gpi,
+	    "apa(0x%x), api(0x%x), gpa(0x%x), gpi(0x%x) must be zero",
+	    apa, api, gpa, gpi);
+
+	pr_debug("%s: Test with %s disabled (%s: 0x%lx)\n",
+		 __func__, name, sreg->name, sreg->current_value);
+
+	/* Make sure that the guest sees the same ID register value. */
+	ret = TEST_RUN(vm, vcpu);
+	TEST_ASSERT(!ret, "%s: TEST_RUN failed with %s disabled, ret=0x%x",
+		    __func__, name, ret);
+
+	test_vm_free(vm);
+}
+
+static bool feature_caps_are_available(struct feature_test_info *finfo)
+{
+	return ((finfo->ncaps > 0) &&
+		caps_are_supported(finfo->caps, finfo->ncaps));
+}
+
+/*
+ * Test if the ID register value reflects the feature configuration.
+ * KVM_SET_ONE_REG should work as long as the requested value is
+ * consistent with the feature configuration.
+ */
+static void test_feature(struct feature_test_info *finfo)
+{
+	struct id_reg_test_info *sreg = finfo->sreg;
+	struct kvm_one_reg one_reg;
+	struct kvm_vcpu_init init, *initp = NULL;
+	struct kvm_vm *vm = NULL;
+	int64_t fval, reg_val;
+	uint32_t vcpu = 0;
+	bool is_sign = finfo->is_sign;
+	int min = finfo->min;
+	int shift = finfo->shift;
+	int ret;
+	void *data = NULL;
+
+	pr_debug("%s: %s (reg %s)\n", __func__, finfo->name, sreg->name);
+
+	reset_id_reg_info();
+
+	if (is_aarch32_id_reg(sreg->id) && !aarch32_support)
+		/*
+		 * AArch32 is not supported. Skip testing with the AArch32
+		 * ID register.
+		 */
+		return;
+
+	/* Indicate that the guest runs the test for this feature */
+	finfo->run_test = 1;
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+
+	/*
+	 * Test with feature enabled if the feature is exposed in the default
+	 * ID register value or the capabilities are supported at KVM level.
+	 */
+	if ((cpuid_extract_ftr(sreg->initial_value, shift, is_sign) >= min) ||
+	    feature_caps_are_available(finfo)) {
+		if (finfo->vcpu_init) {
+			/* Need to enable the feature via KVM_ARM_VCPU_INIT. */
+			memset(&init, 0, sizeof(init));
+			init = *finfo->vcpu_init;
+			init.target = -1;
+			initp = &init;
+		}
+
+		vm = test_vm_create(1, guest_code_feature_check, initp,
+				    finfo->opt_in_cap);
+		if (finfo->init_feature)
+			/* Run any extra setup needed to use the feature */
+			data = finfo->init_feature(vm, vcpu);
+
+		/* Check if the ID register value indicates the feature */
+		vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+		fval = cpuid_extract_ftr(reg_val, shift, is_sign);
+		TEST_ASSERT(fval >= min, "%s field of %s is too small (%ld)",
+			    finfo->name, sreg->name, fval);
+		sreg->current_value = reg_val;
+
+		pr_debug("%s: Test with %s enabled (%s: 0x%lx)\n", __func__,
+			 finfo->name, sreg->name, sreg->current_value);
+
+		/* Make sure that the guest sees the same ID register value. */
+		ret = TEST_RUN(vm, vcpu);
+		TEST_ASSERT(!ret, "%s:TEST_RUN failed with %s enabled",
+			    __func__, finfo->name);
+
+		if (finfo->fini_feature)
+			finfo->fini_feature(vm, vcpu, data);
+
+		test_vm_free(vm);
+	}
+
+	reset_id_reg_info();
+
+	/* Test with feature disabled */
+	vm = test_vm_create(1, guest_code_feature_check, NULL, NULL);
+	vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+	fval = cpuid_extract_ftr(reg_val, shift, is_sign);
+	if (finfo->vcpu_init || finfo->opt_in_cap) {
+		/*
+		 * If the feature needs to be enabled with KVM_ARM_VCPU_INIT
+		 * or opt-in capabilities, the default value of the ID register
+		 * shouldn't indicate the feature.
+		 */
+		TEST_ASSERT(fval < min, "%s field of %s is too big (%ld)",
+		    finfo->name, sreg->name, fval);
+	} else {
+		/* Update the relevant field to hide the feature. */
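+		/*
+		 * For a signed field, 0xf (i.e. -1) indicates that the
+		 * feature is not implemented; for an unsigned field,
+		 * 0x0 does.
+		 */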
+		fval = is_sign ? 0xf : 0x0;
+		reg_val = UPDATE_ID_UFIELD(reg_val, shift, fval);
+		ret = _vcpu_ioctl(vm, vcpu, KVM_SET_ONE_REG, &one_reg);
+		TEST_ASSERT(ret == 0, "Disabling %s failed %d (err %d)\n",
+			    finfo->name, ret, errno);
+		sreg->current_value = reg_val;
+	}
+
+	pr_debug("%s: Test with %s disabled (%s: 0x%lx)\n",
+		 __func__, finfo->name, sreg->name, sreg->current_value);
+
+	/* Make sure that the guest sees the same ID register value. */
+	ret = TEST_RUN(vm, vcpu);
+	TEST_ASSERT(!ret, "%s:TEST_RUN failed with %s disabled",
+		    __func__, finfo->name);
+	finfo->run_test = 0;
+	test_vm_free(vm);
+
+	/* Run extra feature specific tests, if any */
+	if (finfo->test_feature)
+		finfo->test_feature(finfo);
+}
+
+/*
+ * For each opt-in feature in feature_test_info_table[],
+ * test if KVM_GET_ONE_REG/KVM_SET_ONE_REG works appropriately according
+ * to the feature configuration.  See test_feature's comment for more detail.
+ */
+static void test_feature_all(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(feature_test_info_table); i++)
+		test_feature(&feature_test_info_table[i]);
+}
+
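+/*
+ * Set the ID register @sreg of @vcpu to @new_val with KVM_SET_ONE_REG,
+ * updating sreg->current_value on success.  Return the ioctl result.
+ */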
+int set_id_reg(struct kvm_vm *vm, uint32_t vcpu, struct id_reg_test_info *sreg,
+	       uint64_t new_val)
+{
+	int ret;
+	uint64_t reg_val;
+	struct kvm_one_reg one_reg;
+
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+	one_reg.addr = (uint64_t)&reg_val;
+
+	reg_val = new_val;
+	ret = _vcpu_ioctl(vm, vcpu, KVM_SET_ONE_REG, &one_reg);
+	if (!ret)
+		sreg->current_value = new_val;
+
+	return ret;
+}
+
+/*
+ * Create a new VM with one vCPU and set the ID register @sreg to
+ * @new_val, returning the result of KVM_SET_ONE_REG.
+ */
+int set_id_reg_vm(struct id_reg_test_info *sreg, uint64_t new_val)
+{
+	struct kvm_vm *vm;
+	int ret;
+	uint32_t vcpu = 0;
+
+	reset_id_reg_info();
+
+	vm = test_vm_create(1, guest_code_id_reg_check_all, NULL, NULL);
+	ret = set_id_reg(vm, vcpu, sreg, new_val);
+	test_vm_free(vm);
+
+	return ret;
+}
+
+struct frac_info {
+	char	*name;
+	struct id_reg_test_info *sreg;
+	struct id_reg_test_info *frac_sreg;
+	int	shift;
+	int	frac_shift;
+};
+
+struct frac_info frac_info_table[] = {
+	{
+		.name = "RAS",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_RAS_SHIFT,
+		.frac_sreg = ID_REG_INFO(ID_AA64PFR1),
+		.frac_shift = ID_AA64PFR1_RASFRAC_SHIFT,
+	},
+	{
+		.name = "MPAM",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_MPAM_SHIFT,
+		.frac_sreg = ID_REG_INFO(ID_AA64PFR1),
+		.frac_shift = ID_AA64PFR1_MPAMFRAC_SHIFT,
+	},
+	{
+		.name = "CSV2",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_CSV2_SHIFT,
+		.frac_sreg = ID_REG_INFO(ID_AA64PFR1),
+		.frac_shift = ID_AA64PFR1_CSV2FRAC_SHIFT,
+	},
+};
+
+/*
+ * Make sure that we can set the fractional reg field even before setting
+ * the feature reg field.
+ */
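+/*
+ * Return the result of TEST_RUN so that callers can assert either
+ * success or failure of KVM_RUN for the given combination of values.
+ */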
+int test_feature_frac_vm(struct frac_info *frac, uint64_t new_val,
+			 uint64_t frac_new_val)
+{
+	struct kvm_vm *vm;
+	uint32_t vcpu = 0;
+	struct id_reg_test_info *sreg, *frac_sreg;
+	int ret;
+
+	sreg = frac->sreg;
+	frac_sreg = frac->frac_sreg;
+	reset_id_reg_info();
+
+	vm = test_vm_create(1, guest_code_id_reg_check_all, NULL, NULL);
+
+	/* Set fractional reg field */
+	ret = set_id_reg(vm, vcpu, frac_sreg, frac_new_val);
+	TEST_ASSERT(!ret, "SET_REG(%s=0x%lx) failed, ret=0x%x",
+		    frac_sreg->name, frac_new_val, ret);
+
+	/* Set feature reg field */
+	ret = set_id_reg(vm, vcpu, sreg, new_val);
+	TEST_ASSERT(!ret, "SET_REG(%s=0x%lx) failed, ret=0x%x",
+		    sreg->name, new_val, ret);
+
+	ret = TEST_RUN(vm, vcpu);
+	test_vm_free(vm);
+
+	return ret;
+}
+
+/*
+ * Test for setting the feature fractional field of the ID register.
+ * When the (main) feature field of the ID register is the same as the host's,
+ * the fractional field value cannot be larger than the host's.
+ * (KVM_SET_ONE_REG should work but KVM_RUN with the larger value will fail)
+ * When the (main) feature field of the ID register is smaller than the
+ * host's, the fractional field can be any value.
+ * The function tests those behaviors.
+ */
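+/*
+ * For example, with the host reporting ID_AA64PFR0_EL1.RAS == 1 and
+ * ID_AA64PFR1_EL1.RAS_frac == 0, KVM_SET_ONE_REG accepts RAS_frac == 1
+ * but the subsequent KVM_RUN fails, whereas after lowering RAS to 0,
+ * any RAS_frac value is accepted.
+ */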
+void test_feature_frac_one(struct frac_info *frac)
+{
+	uint64_t ftr_val, ftr_fval, frac_val, frac_fval;
+	int ret, shift, frac_shift;
+	struct id_reg_test_info *sreg, *frac_sreg;
+
+	reset_id_reg_info();
+
+	sreg = frac->sreg;
+	shift = frac->shift;
+	frac_sreg = frac->frac_sreg;
+	frac_shift = frac->frac_shift;
+
+	pr_debug("%s(%s Frac) reg:%s(shift:%d) frac reg:%s(shift:%d)\n",
+		 __func__, frac->name, sreg->name, shift, frac_sreg->name,
+		 frac_shift);
+
+	/*
+	 * Use the host's feature value for the guest.
+	 * KVM_RUN with a larger frac value than the host's should fail.
+	 * Otherwise, it should work.
+	 */
+
+	frac_fval = cpuid_extract_uftr(frac_sreg->initial_value, frac_shift);
+	if (frac_fval > 0) {
+		/* Test with smaller frac value */
+		frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value,
+					    frac_shift, frac_fval - 1);
+		ret = test_feature_frac_vm(frac, sreg->initial_value, frac_val);
+		TEST_ASSERT(!ret, "Test smaller %s frac (val:%lx) failed(%d)",
+			    frac->name, frac_val, ret);
+	}
+
+	reset_id_reg_info();
+
+	if (frac_fval != 0xf) {
+		/* Test with larger frac value */
+		frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value,
+						frac_shift, frac_fval + 1);
+
+		/* Setting larger frac shouldn't fail at ioctl */
+		ret = set_id_reg_vm(frac_sreg, frac_val);
+		TEST_ASSERT(!ret,
+			"SET larger %s frac (%s org:%lx, val:%lx) failed(%d)",
+			frac->name, frac_sreg->name, frac_sreg->initial_value,
+			frac_val, ret);
+
+		/* KVM_RUN with larger frac should fail */
+		ret = test_feature_frac_vm(frac, sreg->initial_value, frac_val);
+		TEST_ASSERT(ret,
+			"Test with larger %s frac (%s org:%lx, val:%lx) worked",
+			frac->name, frac_sreg->name, frac_sreg->initial_value,
+			frac_val);
+	}
+
+	reset_id_reg_info();
+
+	/*
+	 * Test with a smaller (main) feature value than the host's.
+	 */
+	ftr_fval = cpuid_extract_uftr(sreg->initial_value, shift);
+	if (ftr_fval == 0)
+		/* Cannot set it to the smaller value */
+		return;
+
+	ftr_val = UPDATE_ID_UFIELD(sreg->initial_value, shift, ftr_fval - 1);
+	ret = test_feature_frac_vm(frac, ftr_val, frac_sreg->initial_value);
+	TEST_ASSERT(!ret, "Test with smaller %s (val:%lx) failed(%d)",
+		    frac->name, ftr_val, ret);
+
+	if (frac_fval > 0) {
+		/* Test with smaller frac value */
+		frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value,
+					    frac_shift, frac_fval - 1);
+		ret = test_feature_frac_vm(frac, ftr_val, frac_val);
+		TEST_ASSERT(!ret,
+		    "Test with smaller %s and frac (val:%lx) failed(%d)",
+		    frac->name, ftr_val, ret);
+	}
+
+	if (frac_fval != 0xf) {
+		/* Test with larger frac value */
+		frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value,
+					    frac_shift, frac_fval + 1);
+		ret = test_feature_frac_vm(frac, ftr_val, frac_val);
+		TEST_ASSERT(!ret,
+		    "Test with smaller %s and larger frac (val:%lx) failed(%d)",
+		    frac->name, ftr_val, ret);
+	}
+}
+
+/*
+ * Test for setting feature fractional fields of ID registers.
+ * See test_feature_frac_one's comments for more detail.
+ */
+void test_feature_frac_all(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(frac_info_table); i++)
+		test_feature_frac_one(&frac_info_table[i]);
+}
+
+void run_test(void)
+{
+	test_id_regs_basic();
+	test_feature_all();
+	test_feature_ptrauth();
+	test_feature_frac_all();
+}
+
+static void init_id_reg_info_one(struct id_reg_test_info *sreg, void *arg)
+{
+	struct kvm_one_reg one_reg;
+	uint64_t reg_val;
+	struct kvm_vm *vm = ((struct vm_vcpu_arg *)arg)->vm;
+	uint32_t vcpuid = ((struct vm_vcpu_arg *)arg)->vcpuid;
+	int ret;
+
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg);
+	sreg->current_value = reg_val;
+
+	/* Keep the initial value to reset the register value later */
+	sreg->initial_value = reg_val;
+
+	/* Check if the register can be set to 0 */
+	reg_val = 0;
+	ret = _vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &one_reg);
+	if (!ret)
+		sreg->can_clear = true;
+
+	pr_debug("%s (0x%x): 0x%lx%s\n", sreg->name, sreg->id,
+		 sreg->initial_value, sreg->can_clear ? ", can clear" : "");
+}
+
+/*
+ * Check if AArch32 is supported, and initialize id_reg_test_info for
+ * all the ID registers.  Loop over the ID register list and populate
+ * each id_reg_test_info with its initial value, current value, and
+ * can_clear flag.
+ */
+static void init_test_info(void)
+{
+	uint64_t reg_val;
+	int fval;
+	struct kvm_vm *vm;
+	struct kvm_one_reg one_reg;
+	struct vm_vcpu_arg arg = { .vcpuid = 0 };
+
+	vm = test_vm_create(1, guest_code_do_nothing, NULL, NULL);
+
+	/* Get ID_AA64PFR0_EL1 to check if AArch32 is supported */
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1);
+	vcpu_ioctl(vm, 0, KVM_GET_ONE_REG, &one_reg);
+	fval = cpuid_extract_uftr(reg_val, ID_AA64PFR0_EL0_SHIFT);
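+	/*
+	 * An EL0 field value of 0x1 means EL0 supports AArch64 only;
+	 * 0x2 would mean EL0 supports both AArch64 and AArch32.
+	 */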
+	if (fval == 0x1)
+		/* No AArch32 support */
+		aarch32_support = false;
+
+	/* Initialize id_reg_test_info */
+	arg.vm = vm;
+	walk_id_reg_list(init_id_reg_info_one, &arg);
+	test_vm_free(vm);
+}
+
+int main(void)
+{
+	setbuf(stdout, NULL);
+
+	if (kvm_check_cap(KVM_CAP_ARM_ID_REG_CONFIGURABLE) <= 0) {
+		print_skip("KVM_CAP_ARM_ID_REG_CONFIGURABLE is not supported");
+		exit(KSFT_SKIP);
+	}
+
+	init_test_info();
+	run_test();
+	return 0;
+}
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 34/38] KVM: arm64: selftests: Introduce id_reg_test
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, Will Deacon, Peter Shier, Paolo Bonzini, linux-arm-kernel

Introduce a test for aarch64 to validate basic behavior of
KVM_GET_ONE_REG and KVM_SET_ONE_REG for ID registers.

This test runs only when KVM_CAP_ARM_ID_REG_CONFIGURABLE is supported.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 tools/arch/arm64/include/asm/sysreg.h         |    1 +
 tools/testing/selftests/kvm/Makefile          |    1 +
 .../selftests/kvm/aarch64/id_reg_test.c       | 1297 +++++++++++++++++
 3 files changed, 1299 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/id_reg_test.c

diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
index 7640fa27be94..be3947c125f1 100644
--- a/tools/arch/arm64/include/asm/sysreg.h
+++ b/tools/arch/arm64/include/asm/sysreg.h
@@ -793,6 +793,7 @@
 #define ID_AA64PFR0_ELx_32BIT_64BIT	0x2
 
 /* id_aa64pfr1 */
+#define ID_AA64PFR1_CSV2FRAC_SHIFT	32
 #define ID_AA64PFR1_MPAMFRAC_SHIFT	16
 #define ID_AA64PFR1_RASFRAC_SHIFT	12
 #define ID_AA64PFR1_MTE_SHIFT		8
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 681b173aa87c..e94e4dc45297 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -105,6 +105,7 @@ TEST_GEN_PROGS_x86_64 += system_counter_offset_test
 TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
 TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
 TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
+TEST_GEN_PROGS_aarch64 += aarch64/id_reg_test
 TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
diff --git a/tools/testing/selftests/kvm/aarch64/id_reg_test.c b/tools/testing/selftests/kvm/aarch64/id_reg_test.c
new file mode 100644
index 000000000000..7e7e66b867c0
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/id_reg_test.c
@@ -0,0 +1,1297 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * id_reg_test.c - Tests reading/writing the aarch64's ID registers
+ *
+ * The test validates KVM_SET_ONE_REG/KVM_GET_ONE_REG ioctl for ID
+ * registers as well as reading ID register from the guest works fine.
+ *
+ * Copyright (c) 2022, Google LLC.
+ */
+
+#define _GNU_SOURCE
+#include <stdlib.h>
+#include <time.h>
+#include <pthread.h>
+#include <linux/kvm.h>
+#include <linux/sizes.h>
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "vgic.h"
+
+/* Reserved ID registers */
+#define	SYS_ID_REG_3_3_EL1		sys_reg(3, 0, 0, 3, 3)
+#define	SYS_ID_REG_3_7_EL1		sys_reg(3, 0, 0, 3, 7)
+
+#define	SYS_ID_REG_4_2_EL1		sys_reg(3, 0, 0, 4, 2)
+#define	SYS_ID_REG_4_3_EL1		sys_reg(3, 0, 0, 4, 3)
+#define	SYS_ID_REG_4_5_EL1		sys_reg(3, 0, 0, 4, 5)
+#define	SYS_ID_REG_4_6_EL1		sys_reg(3, 0, 0, 4, 6)
+#define	SYS_ID_REG_4_7_EL1		sys_reg(3, 0, 0, 4, 7)
+
+#define	SYS_ID_REG_5_2_EL1		sys_reg(3, 0, 0, 5, 2)
+#define	SYS_ID_REG_5_3_EL1		sys_reg(3, 0, 0, 5, 3)
+#define	SYS_ID_REG_5_6_EL1		sys_reg(3, 0, 0, 5, 6)
+#define	SYS_ID_REG_5_7_EL1		sys_reg(3, 0, 0, 5, 7)
+
+#define	SYS_ID_REG_6_2_EL1		sys_reg(3, 0, 0, 6, 2)
+#define	SYS_ID_REG_6_3_EL1		sys_reg(3, 0, 0, 6, 3)
+#define	SYS_ID_REG_6_4_EL1		sys_reg(3, 0, 0, 6, 4)
+#define	SYS_ID_REG_6_5_EL1		sys_reg(3, 0, 0, 6, 5)
+#define	SYS_ID_REG_6_6_EL1		sys_reg(3, 0, 0, 6, 6)
+#define	SYS_ID_REG_6_7_EL1		sys_reg(3, 0, 0, 6, 7)
+
+#define	SYS_ID_REG_7_3_EL1		sys_reg(3, 0, 0, 7, 3)
+#define	SYS_ID_REG_7_4_EL1		sys_reg(3, 0, 0, 7, 4)
+#define	SYS_ID_REG_7_5_EL1		sys_reg(3, 0, 0, 7, 5)
+#define	SYS_ID_REG_7_6_EL1		sys_reg(3, 0, 0, 7, 6)
+#define	SYS_ID_REG_7_7_EL1		sys_reg(3, 0, 0, 7, 7)
+
+#define	READ_ID_REG_FN(name)	read_## name ## _EL1
+
+#define	DEFINE_READ_SYS_REG(reg_name)			\
+uint64_t read_##reg_name(void)				\
+{							\
+	return read_sysreg_s(SYS_##reg_name);		\
+}
+
+#define DEFINE_READ_ID_REG(name)	\
+	DEFINE_READ_SYS_REG(name ## _EL1)
+
+#define	__ID_REG(reg_name)		\
+	.name = #reg_name,		\
+	.id = SYS_## reg_name ##_EL1,	\
+	.read_reg = READ_ID_REG_FN(reg_name),
+
+#define	ID_REG_ENT(reg_name)	\
+	[ID_IDX(reg_name)] = { __ID_REG(reg_name) }
+
+/* Functions to read each ID register */
+/* CRm=1 */
+DEFINE_READ_ID_REG(ID_PFR0)
+DEFINE_READ_ID_REG(ID_PFR1)
+DEFINE_READ_ID_REG(ID_DFR0)
+DEFINE_READ_ID_REG(ID_AFR0)
+DEFINE_READ_ID_REG(ID_MMFR0)
+DEFINE_READ_ID_REG(ID_MMFR1)
+DEFINE_READ_ID_REG(ID_MMFR2)
+DEFINE_READ_ID_REG(ID_MMFR3)
+
+/* CRm=2 */
+DEFINE_READ_ID_REG(ID_ISAR0)
+DEFINE_READ_ID_REG(ID_ISAR1)
+DEFINE_READ_ID_REG(ID_ISAR2)
+DEFINE_READ_ID_REG(ID_ISAR3)
+DEFINE_READ_ID_REG(ID_ISAR4)
+DEFINE_READ_ID_REG(ID_ISAR5)
+DEFINE_READ_ID_REG(ID_MMFR4)
+DEFINE_READ_ID_REG(ID_ISAR6)
+
+/* CRm=3 */
+DEFINE_READ_ID_REG(MVFR0)
+DEFINE_READ_ID_REG(MVFR1)
+DEFINE_READ_ID_REG(MVFR2)
+DEFINE_READ_ID_REG(ID_REG_3_3)
+DEFINE_READ_ID_REG(ID_PFR2)
+DEFINE_READ_ID_REG(ID_DFR1)
+DEFINE_READ_ID_REG(ID_MMFR5)
+DEFINE_READ_ID_REG(ID_REG_3_7)
+
+/* CRm=4 */
+DEFINE_READ_ID_REG(ID_AA64PFR0)
+DEFINE_READ_ID_REG(ID_AA64PFR1)
+DEFINE_READ_ID_REG(ID_REG_4_2)
+DEFINE_READ_ID_REG(ID_REG_4_3)
+DEFINE_READ_ID_REG(ID_AA64ZFR0)
+DEFINE_READ_ID_REG(ID_REG_4_5)
+DEFINE_READ_ID_REG(ID_REG_4_6)
+DEFINE_READ_ID_REG(ID_REG_4_7)
+
+/* CRm=5 */
+DEFINE_READ_ID_REG(ID_AA64DFR0)
+DEFINE_READ_ID_REG(ID_AA64DFR1)
+DEFINE_READ_ID_REG(ID_REG_5_2)
+DEFINE_READ_ID_REG(ID_REG_5_3)
+DEFINE_READ_ID_REG(ID_AA64AFR0)
+DEFINE_READ_ID_REG(ID_AA64AFR1)
+DEFINE_READ_ID_REG(ID_REG_5_6)
+DEFINE_READ_ID_REG(ID_REG_5_7)
+
+/* CRm=6 */
+DEFINE_READ_ID_REG(ID_AA64ISAR0)
+DEFINE_READ_ID_REG(ID_AA64ISAR1)
+DEFINE_READ_ID_REG(ID_REG_6_2)
+DEFINE_READ_ID_REG(ID_REG_6_3)
+DEFINE_READ_ID_REG(ID_REG_6_4)
+DEFINE_READ_ID_REG(ID_REG_6_5)
+DEFINE_READ_ID_REG(ID_REG_6_6)
+DEFINE_READ_ID_REG(ID_REG_6_7)
+
+/* CRm=7 */
+DEFINE_READ_ID_REG(ID_AA64MMFR0)
+DEFINE_READ_ID_REG(ID_AA64MMFR1)
+DEFINE_READ_ID_REG(ID_AA64MMFR2)
+DEFINE_READ_ID_REG(ID_REG_7_3)
+DEFINE_READ_ID_REG(ID_REG_7_4)
+DEFINE_READ_ID_REG(ID_REG_7_5)
+DEFINE_READ_ID_REG(ID_REG_7_6)
+DEFINE_READ_ID_REG(ID_REG_7_7)
+
+#define	ID_IDX(name)	REG_IDX_## name
+
+enum id_reg_idx {
+	/* CRm=1 */
+	ID_IDX(ID_PFR0) = 0,
+	ID_IDX(ID_PFR1),
+	ID_IDX(ID_DFR0),
+	ID_IDX(ID_AFR0),
+	ID_IDX(ID_MMFR0),
+	ID_IDX(ID_MMFR1),
+	ID_IDX(ID_MMFR2),
+	ID_IDX(ID_MMFR3),
+
+	/* CRm=2 */
+	ID_IDX(ID_ISAR0),
+	ID_IDX(ID_ISAR1),
+	ID_IDX(ID_ISAR2),
+	ID_IDX(ID_ISAR3),
+	ID_IDX(ID_ISAR4),
+	ID_IDX(ID_ISAR5),
+	ID_IDX(ID_MMFR4),
+	ID_IDX(ID_ISAR6),
+
+	/* CRm=3 */
+	ID_IDX(MVFR0),
+	ID_IDX(MVFR1),
+	ID_IDX(MVFR2),
+	ID_IDX(ID_REG_3_3),
+	ID_IDX(ID_PFR2),
+	ID_IDX(ID_DFR1),
+	ID_IDX(ID_MMFR5),
+	ID_IDX(ID_REG_3_7),
+
+	/* CRm=4 */
+	ID_IDX(ID_AA64PFR0),
+	ID_IDX(ID_AA64PFR1),
+	ID_IDX(ID_REG_4_2),
+	ID_IDX(ID_REG_4_3),
+	ID_IDX(ID_AA64ZFR0),
+	ID_IDX(ID_REG_4_5),
+	ID_IDX(ID_REG_4_6),
+	ID_IDX(ID_REG_4_7),
+
+	/* CRm=5 */
+	ID_IDX(ID_AA64DFR0),
+	ID_IDX(ID_AA64DFR1),
+	ID_IDX(ID_REG_5_2),
+	ID_IDX(ID_REG_5_3),
+	ID_IDX(ID_AA64AFR0),
+	ID_IDX(ID_AA64AFR1),
+	ID_IDX(ID_REG_5_6),
+	ID_IDX(ID_REG_5_7),
+
+	/* CRm=6 */
+	ID_IDX(ID_AA64ISAR0),
+	ID_IDX(ID_AA64ISAR1),
+	ID_IDX(ID_REG_6_2),
+	ID_IDX(ID_REG_6_3),
+	ID_IDX(ID_REG_6_4),
+	ID_IDX(ID_REG_6_5),
+	ID_IDX(ID_REG_6_6),
+	ID_IDX(ID_REG_6_7),
+
+	/* CRm=7 */
+	ID_IDX(ID_AA64MMFR0),
+	ID_IDX(ID_AA64MMFR1),
+	ID_IDX(ID_AA64MMFR2),
+	ID_IDX(ID_REG_7_3),
+	ID_IDX(ID_REG_7_4),
+	ID_IDX(ID_REG_7_5),
+	ID_IDX(ID_REG_7_6),
+	ID_IDX(ID_REG_7_7),
+};
+
+struct id_reg_test_info {
+	char		*name;
+	uint32_t	id;
+	/* Indicates the register can be set to 0 */
+	bool		can_clear;
+	uint64_t	initial_value;
+	uint64_t	current_value;
+	uint64_t	(*read_reg)(void);
+};
+
+#define	ID_REG_INFO(name)	(&id_reg_list[ID_IDX(name)])
+static struct id_reg_test_info id_reg_list[] = {
+	/* CRm=1 */
+	ID_REG_ENT(ID_PFR0),
+	ID_REG_ENT(ID_PFR1),
+	ID_REG_ENT(ID_DFR0),
+	ID_REG_ENT(ID_AFR0),
+	ID_REG_ENT(ID_MMFR0),
+	ID_REG_ENT(ID_MMFR1),
+	ID_REG_ENT(ID_MMFR2),
+	ID_REG_ENT(ID_MMFR3),
+
+	/* CRm=2 */
+	ID_REG_ENT(ID_ISAR0),
+	ID_REG_ENT(ID_ISAR1),
+	ID_REG_ENT(ID_ISAR2),
+	ID_REG_ENT(ID_ISAR3),
+	ID_REG_ENT(ID_ISAR4),
+	ID_REG_ENT(ID_ISAR5),
+	ID_REG_ENT(ID_MMFR4),
+	ID_REG_ENT(ID_ISAR6),
+
+	/* CRm=3 */
+	ID_REG_ENT(MVFR0),
+	ID_REG_ENT(MVFR1),
+	ID_REG_ENT(MVFR2),
+	ID_REG_ENT(ID_REG_3_3),
+	ID_REG_ENT(ID_PFR2),
+	ID_REG_ENT(ID_DFR1),
+	ID_REG_ENT(ID_MMFR5),
+	ID_REG_ENT(ID_REG_3_7),
+
+	/* CRm=4 */
+	ID_REG_ENT(ID_AA64PFR0),
+	ID_REG_ENT(ID_AA64PFR1),
+	ID_REG_ENT(ID_REG_4_2),
+	ID_REG_ENT(ID_REG_4_3),
+	ID_REG_ENT(ID_AA64ZFR0),
+	ID_REG_ENT(ID_REG_4_5),
+	ID_REG_ENT(ID_REG_4_6),
+	ID_REG_ENT(ID_REG_4_7),
+
+	/* CRm=5 */
+	ID_REG_ENT(ID_AA64DFR0),
+	ID_REG_ENT(ID_AA64DFR1),
+	ID_REG_ENT(ID_REG_5_2),
+	ID_REG_ENT(ID_REG_5_3),
+	ID_REG_ENT(ID_AA64AFR0),
+	ID_REG_ENT(ID_AA64AFR1),
+	ID_REG_ENT(ID_REG_5_6),
+	ID_REG_ENT(ID_REG_5_7),
+
+	/* CRm=6 */
+	ID_REG_ENT(ID_AA64ISAR0),
+	ID_REG_ENT(ID_AA64ISAR1),
+	ID_REG_ENT(ID_REG_6_2),
+	ID_REG_ENT(ID_REG_6_3),
+	ID_REG_ENT(ID_REG_6_4),
+	ID_REG_ENT(ID_REG_6_5),
+	ID_REG_ENT(ID_REG_6_6),
+	ID_REG_ENT(ID_REG_6_7),
+
+	/* CRm=7 */
+	ID_REG_ENT(ID_AA64MMFR0),
+	ID_REG_ENT(ID_AA64MMFR1),
+	ID_REG_ENT(ID_AA64MMFR2),
+	ID_REG_ENT(ID_REG_7_3),
+	ID_REG_ENT(ID_REG_7_4),
+	ID_REG_ENT(ID_REG_7_5),
+	ID_REG_ENT(ID_REG_7_6),
+	ID_REG_ENT(ID_REG_7_7),
+};
+
+static bool aarch32_support = true;
+
+#define is_id_reg(id)	\
+	(sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&	\
+	 sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 0 &&	\
+	 sys_reg_CRm(id) < 8)
+
+#define	UPDATE_ID_UFIELD(regval, shift, fval)	\
+	(((regval) & ~(0xfULL << (shift))) |	\
+	 (((uint64_t)((fval) & 0xf)) << (shift)))
+
+void *pmu_init(struct kvm_vm *vm, uint32_t vcpu)
+{
+	struct kvm_device_attr attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
+	};
+
+	vcpu_ioctl(vm, vcpu, KVM_SET_DEVICE_ATTR, &attr);
+	return NULL;
+}
+
+void *sve_init(struct kvm_vm *vm, uint32_t vcpu)
+{
+	int feature = KVM_ARM_VCPU_SVE;
+
+	vcpu_ioctl(vm, vcpu, KVM_ARM_VCPU_FINALIZE, &feature);
+	return NULL;
+}
+
+#define GICD_BASE_GPA			0x8000000ULL
+#define GICR_BASE_GPA			0x80A0000ULL
+
+void *vgic_init(struct kvm_vm *vm, uint32_t vcpu)
+{
+	/* We jsut need to configure gic v3 (we don't use it though) */
+	int gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+
+	return (void *)(intptr_t)gic_fd;
+}
+
+void vgic_fini(struct kvm_vm *vm, uint32_t vcpu, void *data)
+{
+	close((int)(intptr_t)data);
+}
+
+
+static bool is_aarch32_id_reg(uint32_t id)
+{
+	uint32_t crm, op2;
+
+	if (!is_id_reg(id))
+		return false;
+
+	crm = sys_reg_CRm(id);
+	op2 = sys_reg_Op2(id);
+	if (crm == 1 || crm == 2 || (crm == 3 && (op2 != 3 && op2 != 7)))
+		/* AArch32 ID register */
+		return true;
+
+	return false;
+}
+
+#define	MAX_CAPS	2
+struct feature_test_info {
+	char	*name;	/* Feature Name (Debug information) */
+
+	/* ID register that identifies the presence of the feature */
+	struct id_reg_test_info	*sreg;
+
+	/*
+	 * Bit position of the ID register field that identifies
+	 * the presence of the feature.
+	 */
+	int	shift;
+
+	/* Min value of the field that indicates the presence of the feature. */
+	int	min;
+	bool	is_sign;	/* Is the field signed or unsigned ? */
+	int	ncaps;		/* Number of valid Capabilities in caps[] */
+
+	/* KVM_CAP_* Capabilities to indicates that KVM supports this feature */
+	long	caps[MAX_CAPS];
+
+	/* struct kvm_enable_cap to use the capability if needed */
+	struct kvm_enable_cap	*opt_in_cap;
+
+	/* Should the guest check the ID register for this feature ? */
+	bool	run_test;
+
+	/*
+	 * Extra initialization function to enable the feature if needed.
+	 * (e.g. KVM_ARM_VCPU_FINALIZE for SVE)
+	 * The return value of this function will be passed to fini_feature().
+	 */
+	void	*(*init_feature)(struct kvm_vm *vm, uint32_t vcpuid);
+
+	/*
+	 * Clean up anything that init_feature() initialized or allocated
+	 * as needed. The 'data' is the return value from init_feature().
+	 */
+	void	(*fini_feature)(struct kvm_vm *vm, uint32_t vcpuid, void *data);
+
+	/* struct kvm_vcpu_init to opt-in the feature if needed */
+	struct kvm_vcpu_init	*vcpu_init;
+
+	/* Extra feature specific tests */
+	void	(*test_feature)(struct feature_test_info *finfo);
+};
+
+static void pmu_test(struct feature_test_info *finfo);
+
+/* Information for opt-in CPU features */
+static struct feature_test_info feature_test_info_table[] = {
+	{
+		.name = "SVE",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_SVE_SHIFT,
+		.min = 1,
+		.caps = {KVM_CAP_ARM_SVE},
+		.ncaps = 1,
+		.init_feature = sve_init,
+		.vcpu_init = &(struct kvm_vcpu_init) {
+			.features = {1ULL << KVM_ARM_VCPU_SVE},
+		},
+	},
+	{
+		.name = "GIC",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_GIC_SHIFT,
+		.min = 1,
+		.caps = {KVM_CAP_IRQCHIP},
+		.ncaps = 1,
+		.init_feature = vgic_init,
+		.fini_feature = vgic_fini,
+	},
+	{
+		.name = "MTE",
+		.sreg = ID_REG_INFO(ID_AA64PFR1),
+		.shift = ID_AA64PFR1_MTE_SHIFT,
+		.min = 2,
+		.caps = {KVM_CAP_ARM_MTE},
+		.ncaps = 1,
+		.opt_in_cap = &(struct kvm_enable_cap) {
+				.cap = KVM_CAP_ARM_MTE,
+		},
+	},
+	{
+		.name = "PMUV3",
+		.sreg = ID_REG_INFO(ID_AA64DFR0),
+		.shift = ID_AA64DFR0_PMUVER_SHIFT,
+		.min = 1,
+		.init_feature = pmu_init,
+		.test_feature = pmu_test,
+		.caps = {KVM_CAP_ARM_PMU_V3},
+		.ncaps = 1,
+		.vcpu_init = &(struct kvm_vcpu_init) {
+			.features = {1ULL << KVM_ARM_VCPU_PMU_V3},
+		},
+	},
+	{
+		.name = "PERFMON",
+		.sreg = ID_REG_INFO(ID_DFR0),
+		.shift = ID_DFR0_PERFMON_SHIFT,
+		.min = 3,
+		.init_feature = pmu_init,
+		.test_feature = pmu_test,
+		.caps = {KVM_CAP_ARM_PMU_V3},
+		.ncaps = 1,
+		.vcpu_init = &(struct kvm_vcpu_init) {
+			.features = {1ULL << KVM_ARM_VCPU_PMU_V3},
+		},
+	},
+};
+
+static void walk_id_reg_list(void (*fn)(struct id_reg_test_info *r, void *arg),
+			     void *arg)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_list); i++)
+		fn(&id_reg_list[i], arg);
+}
+
+static void guest_code_id_reg_check_one(struct id_reg_test_info *idr, void *arg)
+{
+	uint64_t v = idr->read_reg();
+
+	GUEST_ASSERT_2(v == idr->current_value, idr->name, idr->current_value);
+}
+
+static void guest_code_id_reg_check_all(uint32_t cpu)
+{
+	walk_id_reg_list(guest_code_id_reg_check_one, NULL);
+	GUEST_DONE();
+}
+
+static void guest_code_do_nothing(uint32_t cpu)
+{
+	GUEST_DONE();
+}
+
+static void guest_code_feature_check(uint32_t cpu)
+{
+	int i;
+	struct feature_test_info *finfo;
+
+	for (i = 0; i < ARRAY_SIZE(feature_test_info_table); i++) {
+		finfo = &feature_test_info_table[i];
+		if (finfo->run_test)
+			guest_code_id_reg_check_one(finfo->sreg, NULL);
+	}
+
+	GUEST_DONE();
+}
+
+static void guest_code_ptrauth_check(uint32_t cpuid)
+{
+	struct id_reg_test_info *sreg = ID_REG_INFO(ID_AA64ISAR1);
+	uint64_t val = sreg->read_reg();
+
+	GUEST_ASSERT_2(val == sreg->current_value, "PTRAUTH", val);
+	GUEST_DONE();
+}
+
+static void reset_id_reg_info_current_value(struct id_reg_test_info *info,
+					    void *arg)
+{
+	info->current_value = info->initial_value;
+}
+
+/* Reset current_value field of each id_reg_test_info */
+static void reset_id_reg_info(void)
+{
+	walk_id_reg_list(reset_id_reg_info_current_value, NULL);
+}
+
+static struct kvm_vm *test_vm_create(uint32_t nvcpus,
+		void (*guest_code)(uint32_t), struct kvm_vcpu_init *init,
+		struct kvm_enable_cap *cap)
+{
+	struct kvm_vm *vm;
+	uint32_t cpuid;
+	uint64_t mem_pages;
+
+	mem_pages = DEFAULT_GUEST_PHY_PAGES + DEFAULT_STACK_PGS * nvcpus;
+	mem_pages += mem_pages / (PTES_PER_MIN_PAGE * 2);
+	mem_pages = vm_adjust_num_guest_pages(VM_MODE_DEFAULT, mem_pages);
+
+	vm = vm_create(VM_MODE_DEFAULT, mem_pages, O_RDWR);
+	if (cap)
+		vm_enable_cap(vm, cap);
+
+	kvm_vm_elf_load(vm, program_invocation_name);
+
+	if (init && init->target == -1) {
+		struct kvm_vcpu_init preferred;
+
+		vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &preferred);
+		init->target = preferred.target;
+	}
+
+	vm_init_descriptor_tables(vm);
+	for (cpuid = 0; cpuid < nvcpus; cpuid++) {
+		aarch64_vcpu_add_default(vm, cpuid, init, guest_code);
+		vcpu_init_descriptor_tables(vm, cpuid);
+	}
+
+	ucall_init(vm, NULL);
+	return vm;
+}
+
+static void test_vm_free(struct kvm_vm *vm)
+{
+	ucall_uninit(vm);
+	kvm_vm_free(vm);
+}
+
+#define	TEST_RUN(vm, cpu)	\
+	(test_vcpu_run(__func__, __LINE__, vm, cpu, true))
+
+#define	TEST_RUN_NO_SYNC_DATA(vm, cpu)	\
+	(test_vcpu_run(__func__, __LINE__, vm, cpu, false))
+
+static int test_vcpu_run(const char *test_name, int line,
+			 struct kvm_vm *vm, uint32_t vcpuid, bool sync_data)
+{
+	struct ucall uc;
+	int ret;
+
+	if (sync_data) {
+		sync_global_to_guest(vm, id_reg_list);
+		sync_global_to_guest(vm, feature_test_info_table);
+	}
+
+	vcpu_args_set(vm, vcpuid, 1, vcpuid);
+
+	ret = _vcpu_run(vm, vcpuid);
+	if (ret) {
+		ret = errno;
+		goto sync_exit;
+	}
+
+	switch (get_ucall(vm, vcpuid, &uc)) {
+	case UCALL_SYNC:
+	case UCALL_DONE:
+		ret = 0;
+		break;
+	case UCALL_ABORT:
+		TEST_FAIL(
+		    "%s (%s) at line %d (user %s at line %d), args[3]=0x%lx",
+		    (char *)uc.args[0], (char *)uc.args[2], (int)uc.args[1],
+		    test_name, line, uc.args[3]);
+		break;
+	default:
+		TEST_FAIL("Unexpected guest exit\n");
+	}
+
+sync_exit:
+	if (sync_data) {
+		sync_global_from_guest(vm, id_reg_list);
+		sync_global_from_guest(vm, feature_test_info_table);
+	}
+	return ret;
+}
+
+/*
+ * Test KVM's special handling for ID_AA64DFR0.PMUVER/DFR0.PERFMON, which
+ * is ignoring userspace's request to set the fields to 0xf (IMPLEMENTATION
+ * DEFINED PMU) and setting the field to 0 instead. This KVM's implementation
+ * is to make live migration work from the older KVM, which erroneously sets
+ * those fields to 0xf for the guest when their host sanitized value are
+ * 0xf (it should have been set to 0x0 as the KVM doesn't support
+ * IMPLEMENTATION DEFINED PMU for the guest).
+ */
+static void pmu_test(struct feature_test_info *finfo)
+{
+	struct id_reg_test_info *sreg = finfo->sreg;
+	struct kvm_one_reg one_reg;
+	struct kvm_vm *vm;
+	int64_t fval, reg_val;
+	uint32_t vcpu = 0;
+	int ret;
+
+	reset_id_reg_info();
+	finfo->run_test = 1;
+
+	vm = test_vm_create(1, guest_code_feature_check, NULL, NULL);
+
+	/* Make sure that ID_AA64DFR0.PMUVER/DFR0.PERFMON is 0. */
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+	vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+	fval = cpuid_extract_ftr(reg_val, finfo->shift, finfo->is_sign);
+	TEST_ASSERT(fval == 0, "%s field of %s should be initially 0 but %ld",
+		    finfo->name, sreg->name, fval);
+
+	/* Try to set ID_AA64DFR0.PMUVER/DFR0.PERFMON to -1 (0xf). */
+	fval = -1;
+	reg_val = UPDATE_ID_UFIELD(reg_val, finfo->shift, fval);
+	ret = _vcpu_ioctl(vm, vcpu, KVM_SET_ONE_REG, &one_reg);
+	TEST_ASSERT(ret == 0, "Setting %s field of %s to %ld failed (%d)\n",
+		    finfo->name, sreg->name, fval, ret);
+
+	/* Check if ID_AA64DFR0.PMUVER/DFR0.PERFMON is still 0. */
+	vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+	fval = cpuid_extract_ftr(reg_val, finfo->shift, finfo->is_sign);
+	TEST_ASSERT(fval == 0, "%s field of %s should be 0 but %ld",
+		    finfo->name, sreg->name, fval);
+
+	sreg->current_value = reg_val;
+	ret = TEST_RUN(vm, vcpu);
+	finfo->run_test = 0;
+	test_vm_free(vm);
+}
+
+struct vm_vcpu_arg {
+	struct kvm_vm	*vm;
+	uint32_t	vcpuid;
+	bool		after_run;
+};
+
+/*
+ * Test if KVM_SET_ONE_REG can work with the value KVM_GET_ONE_REG returns,
+ * KVM_SET_ONE_REG with zero works before KVM_RUN (and fails after KVM_RUN),
+ * and KVM_GET_ONE_REG returns the value KVM_SET_ONE_REG sets.
+ */
+static void test_get_set_id_reg(struct id_reg_test_info *sreg, void *arg)
+{
+	struct kvm_vm *vm = ((struct vm_vcpu_arg *)arg)->vm;
+	uint32_t vcpuid = ((struct vm_vcpu_arg *)arg)->vcpuid;
+	bool after_run = ((struct vm_vcpu_arg *)arg)->after_run;
+	struct kvm_one_reg one_reg;
+	uint64_t reg_val, tval;
+	int ret;
+
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+
+	/* Check the current register value */
+	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg);
+	TEST_ASSERT(reg_val == sreg->current_value,
+		    "GET(%s) didn't return 0x%lx but 0x%lx",
+		    sreg->name, sreg->current_value, reg_val);
+	tval = reg_val;
+
+	/* Try to clear the register that should be able to be cleared. */
+	if ((reg_val != 0) && (sreg->can_clear)) {
+		reg_val = 0;
+		ret = _vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &one_reg);
+		if (after_run) {
+			/* Expect an error after KVM_RUN */
+			TEST_ASSERT(ret,
+				    "Clearing %s unexpectedly worked\n",
+				    sreg->name);
+		} else {
+			TEST_ASSERT(!ret,
+				    "Clearing %s didn't work\n", sreg->name);
+			/*
+			 * Make sure that KVM_GET_ONE_REG provides the value
+			 * we set.
+			 */
+			vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg);
+			TEST_ASSERT(reg_val == 0,
+				    "GET(%s) didn't return 0x%lx but 0x%lx",
+				    sreg->name, (uint64_t)0, reg_val);
+		}
+	}
+
+	/* Check if KVM_SET_ONE_REG works with the original value. */
+	reg_val = tval;
+	ret = _vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &one_reg);
+	TEST_ASSERT(ret == 0, "Setting the same ID reg value should work\n");
+
+	/* Make sure that KVM_GET_ONE_REG provides the value we set. */
+	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg);
+	TEST_ASSERT(reg_val == tval,
+		    "GET(%s) didn't return 0x%lx but 0x%lx",
+		    sreg->name, sreg->current_value, reg_val);
+}
+
+/*
+ * Test if KVM_SET_ONE_REG with the current value works before KVM_RUN,
+ * values of ID registers the guest sees are consistent with the ones
+ * userspace sees, and KVM_SET_ONE_REG after KVM_RUN works when the
+ * specified value is the same as the current one (fails otherwise).
+ */
+static void test_id_regs_basic(void)
+{
+	struct kvm_vm *vm;
+	struct vm_vcpu_arg arg = { .vcpuid = 0 };
+	int ret;
+
+	reset_id_reg_info();
+
+	vm = test_vm_create(1, guest_code_id_reg_check_all, NULL, NULL);
+
+	arg.vm = vm;
+	walk_id_reg_list(test_get_set_id_reg, &arg);
+
+	ret = TEST_RUN(vm, 0);
+	assert(!ret);
+
+	arg.after_run = true;
+	walk_id_reg_list(test_get_set_id_reg, &arg);
+
+	test_vm_free(vm);
+}
+
+static bool caps_are_supported(long *caps, int ncaps)
+{
+	int i;
+
+	for (i = 0; i < ncaps; i++) {
+		if (kvm_check_cap(caps[i]) <= 0)
+			return false;
+	}
+	return true;
+}
+
+#define	NCAPS_PTRAUTH	2
+
+/*
+ * Test if the ID register value reflects the ptrauth feature configuration.
+ * KVM_SET_ONE_REG should work as long as the requested value is consistent
+ * with the ptrauth feature configuration.
+ */
+static void test_feature_ptrauth(void)
+{
+	struct kvm_one_reg one_reg;
+	struct kvm_vcpu_init init;
+	struct kvm_vm *vm = NULL;
+	struct id_reg_test_info *sreg = ID_REG_INFO(ID_AA64ISAR1);
+	uint32_t vcpu = 0;
+	int64_t rval;
+	int ret;
+	int apa, api, gpa, gpi;
+	char *name = "PTRAUTH";
+	long caps[NCAPS_PTRAUTH] = {KVM_CAP_ARM_PTRAUTH_ADDRESS,
+				    KVM_CAP_ARM_PTRAUTH_GENERIC};
+
+	reset_id_reg_info();
+	one_reg.addr = (uint64_t)&rval;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+
+	if (caps_are_supported(caps, NCAPS_PTRAUTH)) {
+
+		/* Test with feature enabled */
+		memset(&init, 0, sizeof(init));
+		init.target = -1;
+		init.features[0] = (1ULL << KVM_ARM_VCPU_PTRAUTH_ADDRESS |
+				    1ULL << KVM_ARM_VCPU_PTRAUTH_GENERIC);
+		vm = test_vm_create(1, guest_code_ptrauth_check, &init, NULL);
+		vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+
+		/* Make sure values of apa/api/gpa/gpi fields are expected */
+		apa = cpuid_extract_uftr(rval, ID_AA64ISAR1_APA_SHIFT);
+		api = cpuid_extract_uftr(rval, ID_AA64ISAR1_API_SHIFT);
+		gpa = cpuid_extract_uftr(rval, ID_AA64ISAR1_GPA_SHIFT);
+		gpi = cpuid_extract_uftr(rval, ID_AA64ISAR1_GPI_SHIFT);
+
+		TEST_ASSERT((apa > 0) || (api > 0),
+			    "Either apa(0x%x) or api(0x%x) must be available",
+			    apa, gpa);
+		TEST_ASSERT((gpa > 0) || (gpi > 0),
+			    "Either gpa(0x%x) or gpi(0x%x) must be available",
+			    gpa, gpi);
+
+		TEST_ASSERT((apa > 0) ^ (api > 0),
+			    "Both apa(0x%x) and api(0x%x) must not be available",
+			    apa, api);
+		TEST_ASSERT((gpa > 0) ^ (gpi > 0),
+			    "Both gpa(0x%x) and gpi(0x%x) must not be available",
+			    gpa, gpi);
+
+		sreg->current_value = rval;
+
+		pr_debug("%s: Test with %s enabled (%s: 0x%lx)\n",
+			 __func__, name, sreg->name, sreg->current_value);
+
+		/* Make sure that the guest sees the same ID register value. */
+		ret = TEST_RUN(vm, vcpu);
+
+		TEST_ASSERT(!ret, "%s:KVM_RUN failed with %s enabled",
+			    __func__, name);
+		test_vm_free(vm);
+	}
+
+	reset_id_reg_info();
+
+	/* Test with feature disabled */
+	vm = test_vm_create(1, guest_code_feature_check, NULL, NULL);
+	vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+
+	apa = cpuid_extract_uftr(rval, ID_AA64ISAR1_APA_SHIFT);
+	api = cpuid_extract_uftr(rval, ID_AA64ISAR1_API_SHIFT);
+	gpa = cpuid_extract_uftr(rval, ID_AA64ISAR1_GPA_SHIFT);
+	gpi = cpuid_extract_uftr(rval, ID_AA64ISAR1_GPI_SHIFT);
+	TEST_ASSERT(!apa && !api && !gpa && !gpi,
+	    "apa(0x%x), api(0x%x), gpa(0x%x), gpi(0x%x) must be zero",
+	    apa, api, gpa, gpi);
+
+	pr_debug("%s: Test with %s disabled (%s: 0x%lx)\n",
+		 __func__, name, sreg->name, sreg->current_value);
+
+	/* Make sure that the guest sees the same ID register value. */
+	ret = TEST_RUN(vm, vcpu);
+	TEST_ASSERT(!ret, "%s TEST_RUN failed with %s enabled, ret=0x%x",
+		    __func__, name, ret);
+
+	test_vm_free(vm);
+}
+
+static bool feature_caps_are_available(struct feature_test_info *finfo)
+{
+	return ((finfo->ncaps > 0) &&
+		caps_are_supported(finfo->caps, finfo->ncaps));
+}
+
+/*
+ * Test if the ID register value reflects the feature configuration.
+ * KVM_SET_ONE_REG should work as long as the requested value is
+ * consistent with the feature configuration.
+ */
+static void test_feature(struct feature_test_info *finfo)
+{
+	struct id_reg_test_info *sreg = finfo->sreg;
+	struct kvm_one_reg one_reg;
+	struct kvm_vcpu_init init, *initp = NULL;
+	struct kvm_vm *vm = NULL;
+	int64_t fval, reg_val;
+	uint32_t vcpu = 0;
+	bool is_sign = finfo->is_sign;
+	int min = finfo->min;
+	int shift = finfo->shift;
+	int ret;
+	void *data = NULL;
+
+	pr_debug("%s: %s (reg %s)\n", __func__, finfo->name, sreg->name);
+
+	reset_id_reg_info();
+
+	if (is_aarch32_id_reg(sreg->id) && !aarch32_support)
+		/*
+		 * AArch32 is not supported. Skip testing with the AArch32
+		 * ID register.
+		 */
+		return;
+
+	/* Indicate that guest runs the test for the feature */
+	finfo->run_test = 1;
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+
+	/*
+	 * Test with feature enabled if the feature is exposed in the default
+	 * ID register value or the capabilities are supported at KVM level.
+	 */
+	if ((cpuid_extract_ftr(sreg->initial_value, shift, is_sign) >= min) ||
+	    feature_caps_are_available(finfo)) {
+		if (finfo->vcpu_init) {
+			/* Need to enable the feature via KVM_ARM_VCPU_INIT. */
+			memset(&init, 0, sizeof(init));
+			init = *finfo->vcpu_init;
+			init.target = -1;
+			initp = &init;
+		}
+
+		vm = test_vm_create(1, guest_code_feature_check, initp,
+				    finfo->opt_in_cap);
+		if (finfo->init_feature)
+			/* Run any required extra process to use the feature */
+			data = finfo->init_feature(vm, vcpu);
+
+		/* Check if the ID register value indicates the feature */
+		vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+		fval = cpuid_extract_ftr(reg_val, shift, is_sign);
+		TEST_ASSERT(fval >= min, "%s field of %s is too small (%ld)",
+			    finfo->name, sreg->name, fval);
+		sreg->current_value = reg_val;
+
+		pr_debug("%s: Test with %s enabled (%s: 0x%lx)\n", __func__,
+			 finfo->name, sreg->name, sreg->current_value);
+
+		/* Make sure that the guest sees the same ID register value. */
+		ret = TEST_RUN(vm, vcpu);
+		TEST_ASSERT(!ret, "%s:TEST_RUN failed with %s enabled",
+			    __func__, finfo->name);
+
+		if (finfo->fini_feature)
+			finfo->fini_feature(vm, vcpu, data);
+
+		test_vm_free(vm);
+	}
+
+	reset_id_reg_info();
+
+	/* Test with feature disabled */
+	vm = test_vm_create(1, guest_code_feature_check, NULL, NULL);
+	vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+	fval = cpuid_extract_ftr(reg_val, shift, is_sign);
+	if (finfo->vcpu_init || finfo->opt_in_cap) {
+		/*
+		 * If the feature needs to be enabled with KVM_ARM_VCPU_INIT
+		 * or opt-in capabilities, the default value of the ID register
+		 * shouldn't indicate the feature.
+		 */
+		TEST_ASSERT(fval < min, "%s field of %s is too big (%ld)",
+		    finfo->name, sreg->name, fval);
+	} else {
+		/* Update the relevant field to hide the feature. */
+		fval = is_sign ? 0xf : 0x0;
+		reg_val = UPDATE_ID_UFIELD(reg_val, shift, fval);
+		ret = _vcpu_ioctl(vm, vcpu, KVM_SET_ONE_REG, &one_reg);
+		TEST_ASSERT(ret == 0, "Disabling %s failed %d (err %d)\n",
+			    finfo->name, ret, errno);
+		sreg->current_value = reg_val;
+	}
+
+	pr_debug("%s: Test with %s disabled (%s: 0x%lx)\n",
+		 __func__, finfo->name, sreg->name, sreg->current_value);
+
+	/* Make sure that the guest sees the same ID register value. */
+	ret = TEST_RUN(vm, vcpu);
+	TEST_ASSERT(!ret, "%s:TEST_RUN failed with %s disabled",
+		    __func__, finfo->name);
+	finfo->run_test = 0;
+	test_vm_free(vm);
+
+	/* Run extra feature specific tests, if any */
+	if (finfo->test_feature)
+		finfo->test_feature(finfo);
+}
+
+/*
+ * For each opt-in feature in feature_test_info_table[],
+ * test if KVM_GET_ONE_REG/KVM_SET_ONE_REG works appropriately according
+ * to the feature configuration.  See test_feature's comment for more detail.
+ */
+static void test_feature_all(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(feature_test_info_table); i++)
+		test_feature(&feature_test_info_table[i]);
+}
+
+int set_id_reg(struct kvm_vm *vm, uint32_t vcpu, struct id_reg_test_info *sreg,
+	       uint64_t new_val)
+{
+	int ret;
+	uint64_t reg_val;
+	struct kvm_one_reg one_reg;
+
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+	one_reg.addr = (uint64_t)&reg_val;
+
+	reg_val = new_val;
+	ret = _vcpu_ioctl(vm, vcpu, KVM_SET_ONE_REG, &one_reg);
+	if (!ret)
+		sreg->current_value = new_val;
+
+	return ret;
+}
+
+
+/*
+ * Create a new VM with one vCPU, set the ID register to @new_val.
+ */
+int set_id_reg_vm(struct id_reg_test_info *sreg, uint64_t new_val)
+{
+	struct kvm_vm *vm;
+	int ret;
+	uint32_t vcpu = 0;
+
+	reset_id_reg_info();
+
+	vm = test_vm_create(1, guest_code_id_reg_check_all, NULL, NULL);
+	ret = set_id_reg(vm, vcpu, sreg, new_val);
+	test_vm_free(vm);
+
+	return ret;
+}
+
+struct frac_info {
+	char	*name;
+	struct id_reg_test_info *sreg;
+	struct id_reg_test_info *frac_sreg;
+	int	shift;
+	int	frac_shift;
+};
+
+struct frac_info frac_info_table[] = {
+	{
+		.name = "RAS",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_RAS_SHIFT,
+		.frac_sreg = ID_REG_INFO(ID_AA64PFR1),
+		.frac_shift = ID_AA64PFR1_RASFRAC_SHIFT,
+	},
+	{
+		.name = "MPAM",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_MPAM_SHIFT,
+		.frac_sreg = ID_REG_INFO(ID_AA64PFR1),
+		.frac_shift = ID_AA64PFR1_MPAMFRAC_SHIFT,
+	},
+	{
+		.name = "CSV2",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_CSV2_SHIFT,
+		.frac_sreg = ID_REG_INFO(ID_AA64PFR1),
+		.frac_shift = ID_AA64PFR1_CSV2FRAC_SHIFT,
+	},
+};
+
+
+/*
+ * Make sure that we can set the fractional reg field even before setting
+ * the feature reg field.
+ */
+int test_feature_frac_vm(struct frac_info *frac, uint64_t new_val,
+			 uint64_t frac_new_val)
+{
+	struct kvm_vm *vm;
+	uint32_t vcpu = 0;
+	struct id_reg_test_info *sreg, *frac_sreg;
+	int ret;
+
+	sreg = frac->sreg;
+	frac_sreg = frac->frac_sreg;
+	reset_id_reg_info();
+
+	vm = test_vm_create(1, guest_code_id_reg_check_all, NULL, NULL);
+
+	/* Set fractional reg field */
+	ret = set_id_reg(vm, vcpu, frac_sreg, frac_new_val);
+	TEST_ASSERT(!ret, "SET_REG(%s=0x%lx) failed, ret=0x%x",
+		    frac_sreg->name, frac_new_val, ret);
+
+	/* Set feature reg field */
+	ret = set_id_reg(vm, vcpu, sreg, new_val);
+	TEST_ASSERT(!ret, "SET_REG(%s=0x%lx) failed, ret=0x%x",
+		    sreg->name, new_val, ret);
+
+	ret = TEST_RUN(vm, vcpu);
+	test_vm_free(vm);
+
+	return ret;
+}
+
+/*
+ * Test for setting the feature fractional field of the ID register.
+ * When the (main) feature field of the ID register is the same as the host's,
+ * the fractional field value cannot be larger than the host's.
+ * (KVM_SET_ONE_REG should work but KVM_RUN with the larger value will fail)
+ * When the (main) feature field of the ID register is smaler than the host's,
+ * the fractional field can be any values.
+ * The function tests those behaviors.
+ */
+void test_feature_frac_one(struct frac_info *frac)
+{
+	uint64_t ftr_val, ftr_fval, frac_val, frac_fval;
+	int ret, shift, frac_shift;
+	struct id_reg_test_info *sreg, *frac_sreg;
+
+	reset_id_reg_info();
+
+	sreg = frac->sreg;
+	shift = frac->shift;
+	frac_sreg = frac->frac_sreg;
+	frac_shift = frac->frac_shift;
+
+	pr_debug("%s(%s Frac) reg:%s(shift:%d) frac reg:%s(shift:%d)\n",
+		 __func__, frac->name, sreg->name, shift, frac_sreg->name,
+		 frac_shift);
+
+	/*
+	 * Use the host's feature value for the guest.
+	 * KVM_RUN with a larger frac value than the host's should fail.
+	 * Otherwise, it should work.
+	 */
+
+	frac_fval = cpuid_extract_uftr(frac_sreg->initial_value, frac_shift);
+	if (frac_fval > 0) {
+		/* Test with smaller frac value */
+		frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value,
+					    frac_shift, frac_fval - 1);
+		ret = test_feature_frac_vm(frac, sreg->initial_value, frac_val);
+		TEST_ASSERT(!ret, "Test smaller %s frac (val:%lx) failed(%d)",
+			    frac->name, frac_val, ret);
+	}
+
+	reset_id_reg_info();
+
+	if (frac_fval != 0xf) {
+		/* Test with larger frac value */
+		frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value,
+						frac_shift, frac_fval + 1);
+
+		/* Setting larger frac shouldn't fail at ioctl */
+		ret = set_id_reg_vm(frac_sreg, frac_val);
+		TEST_ASSERT(!ret,
+			"SET larger %s frac (%s org:%lx, val:%lx) failed(%d)",
+			frac->name, frac_sreg->name, frac_sreg->initial_value,
+			frac_val, ret);
+
+		/* KVM_RUN with larger frac should fail */
+		ret = test_feature_frac_vm(frac, sreg->initial_value, frac_val);
+		TEST_ASSERT(ret,
+			"Test with larger %s frac (%s org:%lx, val:%lx) worked",
+			frac->name, frac_sreg->name, frac_sreg->initial_value,
+			frac_val);
+	}
+
+	reset_id_reg_info();
+
+	/*
+	 * Test with a smaller (main) feature value than the host's.
+	 */
+	ftr_fval = cpuid_extract_uftr(sreg->initial_value, shift);
+	if (ftr_fval == 0)
+		/* Cannot set it to a smaller value */
+		return;
+
+	ftr_val = UPDATE_ID_UFIELD(sreg->initial_value, shift, ftr_fval - 1);
+	ret = test_feature_frac_vm(frac, ftr_val, frac_sreg->initial_value);
+	TEST_ASSERT(!ret, "Test with smaller %s (val:%lx) failed(%d)",
+		    frac->name, ftr_val, ret);
+
+	if (frac_fval > 0) {
+		/* Test with smaller frac value */
+		frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value,
+					    frac_shift, frac_fval - 1);
+		ret = test_feature_frac_vm(frac, ftr_val, frac_val);
+		TEST_ASSERT(!ret,
+		    "Test with smaller %s and frac (val:%lx) failed(%d)",
+		    frac->name, ftr_val, ret);
+	}
+
+	if (frac_fval != 0xf) {
+		/* Test with larger frac value */
+		frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value,
+					    frac_shift, frac_fval + 1);
+		ret = test_feature_frac_vm(frac, ftr_val, frac_val);
+		TEST_ASSERT(!ret,
+		    "Test with smaller %s and larger frac (val:%lx) failed(%d)",
+		    frac->name, ftr_val, ret);
+	}
+}
+
+/*
+ * Test for setting feature fractional fields of ID registers.
+ * See test_feature_frac_one's comments for more detail.
+ */
+void test_feature_frac_all(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(frac_info_table); i++)
+		test_feature_frac_one(&frac_info_table[i]);
+}
+
+void run_test(void)
+{
+	test_id_regs_basic();
+	test_feature_all();
+	test_feature_ptrauth();
+	test_feature_frac_all();
+}
+
+static void init_id_reg_info_one(struct id_reg_test_info *sreg, void *arg)
+{
+	struct kvm_one_reg one_reg;
+	uint64_t reg_val;
+	struct kvm_vm *vm = ((struct vm_vcpu_arg *)arg)->vm;
+	uint32_t vcpuid = ((struct vm_vcpu_arg *)arg)->vcpuid;
+	int ret;
+
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg);
+	sreg->current_value = reg_val;
+
+	/* Keep the initial value to reset the register value later */
+	sreg->initial_value = reg_val;
+
+	/* Check if the register can be set to 0 */
+	reg_val = 0;
+	ret = _vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &one_reg);
+	if (!ret)
+		sreg->can_clear = true;
+
+	pr_debug("%s (0x%x): 0x%lx%s\n", sreg->name, sreg->id,
+		 sreg->initial_value, sreg->can_clear ? ", can clear" : "");
+}
+
+/*
+ * Check if AArch32 is supported, and initialize id_reg_test_info for all
+ * the ID registers.  Loop over the idreg list and populate each id_reg
+ * info with its initial value, current value, and can_clear flag.
+ */
+static void init_test_info(void)
+{
+	uint64_t reg_val;
+	int fval;
+	struct kvm_vm *vm;
+	struct kvm_one_reg one_reg;
+	struct vm_vcpu_arg arg = { .vcpuid = 0 };
+
+	vm = test_vm_create(1, guest_code_do_nothing, NULL, NULL);
+
+	/* Get ID_AA64PFR0_EL1 to check if AArch32 is supported */
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1);
+	vcpu_ioctl(vm, 0, KVM_GET_ONE_REG, &one_reg);
+	fval = cpuid_extract_uftr(reg_val, ID_AA64PFR0_EL0_SHIFT);
+	if (fval == 0x1)
+		/* No AArch32 support */
+		aarch32_support = false;
+
+	/* Initialize id_reg_test_info */
+	arg.vm = vm;
+	walk_id_reg_list(init_id_reg_info_one, &arg);
+	test_vm_free(vm);
+}
+
+int main(void)
+{
+	setbuf(stdout, NULL);
+
+	if (kvm_check_cap(KVM_CAP_ARM_ID_REG_CONFIGURABLE) <= 0) {
+		print_skip("KVM_CAP_ARM_ID_REG_CONFIGURABLE is not supported");
+		exit(KSFT_SKIP);
+	}
+
+	init_test_info();
+	run_test();
+	return 0;
+}
-- 
2.36.0.rc0.470.gd361397f0d-goog

_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 34/38] KVM: arm64: selftests: Introduce id_reg_test
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Introduce a test for aarch64 to validate basic behavior of
KVM_GET_ONE_REG and KVM_SET_ONE_REG for ID registers.

This test runs only when KVM_CAP_ARM_ID_REG_CONFIGURABLE is supported.
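
For reference, the userspace flow the test exercises looks roughly like
the following minimal sketch (error handling omitted; the choice of
register and field to modify is illustrative only):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  int main(void)
  {
          struct kvm_vcpu_init init;
          __u64 val;
          struct kvm_one_reg reg = {
                  /* ID_AA64PFR0_EL1 is Op0=3, Op1=0, CRn=0, CRm=4, Op2=0 */
                  .id = ARM64_SYS_REG(3, 0, 0, 4, 0),
                  .addr = (__u64)&val,
          };
          int kvm = open("/dev/kvm", O_RDWR);
          int vm = ioctl(kvm, KVM_CREATE_VM, 0);
          int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

          ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
          ioctl(vcpu, KVM_ARM_VCPU_INIT, &init);

          /* The initial value is the upper limit userspace may set */
          ioctl(vcpu, KVM_GET_ONE_REG, &reg);
          printf("ID_AA64PFR0_EL1: 0x%llx\n", val);

          /* Clear the SVE field (bits [35:32]) to hide SVE from the guest */
          val &= ~(0xfULL << 32);
          return ioctl(vcpu, KVM_SET_ONE_REG, &reg);
  }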

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 tools/arch/arm64/include/asm/sysreg.h         |    1 +
 tools/testing/selftests/kvm/Makefile          |    1 +
 .../selftests/kvm/aarch64/id_reg_test.c       | 1297 +++++++++++++++++
 3 files changed, 1299 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/id_reg_test.c

diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
index 7640fa27be94..be3947c125f1 100644
--- a/tools/arch/arm64/include/asm/sysreg.h
+++ b/tools/arch/arm64/include/asm/sysreg.h
@@ -793,6 +793,7 @@
 #define ID_AA64PFR0_ELx_32BIT_64BIT	0x2
 
 /* id_aa64pfr1 */
+#define ID_AA64PFR1_CSV2FRAC_SHIFT	32
 #define ID_AA64PFR1_MPAMFRAC_SHIFT	16
 #define ID_AA64PFR1_RASFRAC_SHIFT	12
 #define ID_AA64PFR1_MTE_SHIFT		8
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 681b173aa87c..e94e4dc45297 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -105,6 +105,7 @@ TEST_GEN_PROGS_x86_64 += system_counter_offset_test
 TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
 TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
 TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
+TEST_GEN_PROGS_aarch64 += aarch64/id_reg_test
 TEST_GEN_PROGS_aarch64 += aarch64/psci_cpu_on_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
diff --git a/tools/testing/selftests/kvm/aarch64/id_reg_test.c b/tools/testing/selftests/kvm/aarch64/id_reg_test.c
new file mode 100644
index 000000000000..7e7e66b867c0
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/id_reg_test.c
@@ -0,0 +1,1297 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * id_reg_test.c - Tests reading/writing aarch64 ID registers
+ *
+ * The test validates the KVM_SET_ONE_REG/KVM_GET_ONE_REG ioctls for ID
+ * registers, and that reading the ID registers from the guest works fine.
+ *
+ * Copyright (c) 2022, Google LLC.
+ */
+
+#define _GNU_SOURCE
+#include <stdlib.h>
+#include <time.h>
+#include <pthread.h>
+#include <linux/kvm.h>
+#include <linux/sizes.h>
+
+#include "kvm_util.h"
+#include "processor.h"
+#include "vgic.h"
+
+/* Reserved ID registers */
+#define	SYS_ID_REG_3_3_EL1		sys_reg(3, 0, 0, 3, 3)
+#define	SYS_ID_REG_3_7_EL1		sys_reg(3, 0, 0, 3, 7)
+
+#define	SYS_ID_REG_4_2_EL1		sys_reg(3, 0, 0, 4, 2)
+#define	SYS_ID_REG_4_3_EL1		sys_reg(3, 0, 0, 4, 3)
+#define	SYS_ID_REG_4_5_EL1		sys_reg(3, 0, 0, 4, 5)
+#define	SYS_ID_REG_4_6_EL1		sys_reg(3, 0, 0, 4, 6)
+#define	SYS_ID_REG_4_7_EL1		sys_reg(3, 0, 0, 4, 7)
+
+#define	SYS_ID_REG_5_2_EL1		sys_reg(3, 0, 0, 5, 2)
+#define	SYS_ID_REG_5_3_EL1		sys_reg(3, 0, 0, 5, 3)
+#define	SYS_ID_REG_5_6_EL1		sys_reg(3, 0, 0, 5, 6)
+#define	SYS_ID_REG_5_7_EL1		sys_reg(3, 0, 0, 5, 7)
+
+#define	SYS_ID_REG_6_2_EL1		sys_reg(3, 0, 0, 6, 2)
+#define	SYS_ID_REG_6_3_EL1		sys_reg(3, 0, 0, 6, 3)
+#define	SYS_ID_REG_6_4_EL1		sys_reg(3, 0, 0, 6, 4)
+#define	SYS_ID_REG_6_5_EL1		sys_reg(3, 0, 0, 6, 5)
+#define	SYS_ID_REG_6_6_EL1		sys_reg(3, 0, 0, 6, 6)
+#define	SYS_ID_REG_6_7_EL1		sys_reg(3, 0, 0, 6, 7)
+
+#define	SYS_ID_REG_7_3_EL1		sys_reg(3, 0, 0, 7, 3)
+#define	SYS_ID_REG_7_4_EL1		sys_reg(3, 0, 0, 7, 4)
+#define	SYS_ID_REG_7_5_EL1		sys_reg(3, 0, 0, 7, 5)
+#define	SYS_ID_REG_7_6_EL1		sys_reg(3, 0, 0, 7, 6)
+#define	SYS_ID_REG_7_7_EL1		sys_reg(3, 0, 0, 7, 7)
+
+#define	READ_ID_REG_FN(name)	read_## name ## _EL1
+
+#define	DEFINE_READ_SYS_REG(reg_name)			\
+uint64_t read_##reg_name(void)				\
+{							\
+	return read_sysreg_s(SYS_##reg_name);		\
+}
+
+#define DEFINE_READ_ID_REG(name)	\
+	DEFINE_READ_SYS_REG(name ## _EL1)
+
+#define	__ID_REG(reg_name)		\
+	.name = #reg_name,		\
+	.id = SYS_## reg_name ##_EL1,	\
+	.read_reg = READ_ID_REG_FN(reg_name),
+
+#define	ID_REG_ENT(reg_name)	\
+	[ID_IDX(reg_name)] = { __ID_REG(reg_name) }
+
+/* Functions to read each ID register */
+/* CRm=1 */
+DEFINE_READ_ID_REG(ID_PFR0)
+DEFINE_READ_ID_REG(ID_PFR1)
+DEFINE_READ_ID_REG(ID_DFR0)
+DEFINE_READ_ID_REG(ID_AFR0)
+DEFINE_READ_ID_REG(ID_MMFR0)
+DEFINE_READ_ID_REG(ID_MMFR1)
+DEFINE_READ_ID_REG(ID_MMFR2)
+DEFINE_READ_ID_REG(ID_MMFR3)
+
+/* CRm=2 */
+DEFINE_READ_ID_REG(ID_ISAR0)
+DEFINE_READ_ID_REG(ID_ISAR1)
+DEFINE_READ_ID_REG(ID_ISAR2)
+DEFINE_READ_ID_REG(ID_ISAR3)
+DEFINE_READ_ID_REG(ID_ISAR4)
+DEFINE_READ_ID_REG(ID_ISAR5)
+DEFINE_READ_ID_REG(ID_MMFR4)
+DEFINE_READ_ID_REG(ID_ISAR6)
+
+/* CRm=3 */
+DEFINE_READ_ID_REG(MVFR0)
+DEFINE_READ_ID_REG(MVFR1)
+DEFINE_READ_ID_REG(MVFR2)
+DEFINE_READ_ID_REG(ID_REG_3_3)
+DEFINE_READ_ID_REG(ID_PFR2)
+DEFINE_READ_ID_REG(ID_DFR1)
+DEFINE_READ_ID_REG(ID_MMFR5)
+DEFINE_READ_ID_REG(ID_REG_3_7)
+
+/* CRm=4 */
+DEFINE_READ_ID_REG(ID_AA64PFR0)
+DEFINE_READ_ID_REG(ID_AA64PFR1)
+DEFINE_READ_ID_REG(ID_REG_4_2)
+DEFINE_READ_ID_REG(ID_REG_4_3)
+DEFINE_READ_ID_REG(ID_AA64ZFR0)
+DEFINE_READ_ID_REG(ID_REG_4_5)
+DEFINE_READ_ID_REG(ID_REG_4_6)
+DEFINE_READ_ID_REG(ID_REG_4_7)
+
+/* CRm=5 */
+DEFINE_READ_ID_REG(ID_AA64DFR0)
+DEFINE_READ_ID_REG(ID_AA64DFR1)
+DEFINE_READ_ID_REG(ID_REG_5_2)
+DEFINE_READ_ID_REG(ID_REG_5_3)
+DEFINE_READ_ID_REG(ID_AA64AFR0)
+DEFINE_READ_ID_REG(ID_AA64AFR1)
+DEFINE_READ_ID_REG(ID_REG_5_6)
+DEFINE_READ_ID_REG(ID_REG_5_7)
+
+/* CRm=6 */
+DEFINE_READ_ID_REG(ID_AA64ISAR0)
+DEFINE_READ_ID_REG(ID_AA64ISAR1)
+DEFINE_READ_ID_REG(ID_REG_6_2)
+DEFINE_READ_ID_REG(ID_REG_6_3)
+DEFINE_READ_ID_REG(ID_REG_6_4)
+DEFINE_READ_ID_REG(ID_REG_6_5)
+DEFINE_READ_ID_REG(ID_REG_6_6)
+DEFINE_READ_ID_REG(ID_REG_6_7)
+
+/* CRm=7 */
+DEFINE_READ_ID_REG(ID_AA64MMFR0)
+DEFINE_READ_ID_REG(ID_AA64MMFR1)
+DEFINE_READ_ID_REG(ID_AA64MMFR2)
+DEFINE_READ_ID_REG(ID_REG_7_3)
+DEFINE_READ_ID_REG(ID_REG_7_4)
+DEFINE_READ_ID_REG(ID_REG_7_5)
+DEFINE_READ_ID_REG(ID_REG_7_6)
+DEFINE_READ_ID_REG(ID_REG_7_7)
+
+#define	ID_IDX(name)	REG_IDX_## name
+
+enum id_reg_idx {
+	/* CRm=1 */
+	ID_IDX(ID_PFR0) = 0,
+	ID_IDX(ID_PFR1),
+	ID_IDX(ID_DFR0),
+	ID_IDX(ID_AFR0),
+	ID_IDX(ID_MMFR0),
+	ID_IDX(ID_MMFR1),
+	ID_IDX(ID_MMFR2),
+	ID_IDX(ID_MMFR3),
+
+	/* CRm=2 */
+	ID_IDX(ID_ISAR0),
+	ID_IDX(ID_ISAR1),
+	ID_IDX(ID_ISAR2),
+	ID_IDX(ID_ISAR3),
+	ID_IDX(ID_ISAR4),
+	ID_IDX(ID_ISAR5),
+	ID_IDX(ID_MMFR4),
+	ID_IDX(ID_ISAR6),
+
+	/* CRm=3 */
+	ID_IDX(MVFR0),
+	ID_IDX(MVFR1),
+	ID_IDX(MVFR2),
+	ID_IDX(ID_REG_3_3),
+	ID_IDX(ID_PFR2),
+	ID_IDX(ID_DFR1),
+	ID_IDX(ID_MMFR5),
+	ID_IDX(ID_REG_3_7),
+
+	/* CRm=4 */
+	ID_IDX(ID_AA64PFR0),
+	ID_IDX(ID_AA64PFR1),
+	ID_IDX(ID_REG_4_2),
+	ID_IDX(ID_REG_4_3),
+	ID_IDX(ID_AA64ZFR0),
+	ID_IDX(ID_REG_4_5),
+	ID_IDX(ID_REG_4_6),
+	ID_IDX(ID_REG_4_7),
+
+	/* CRm=5 */
+	ID_IDX(ID_AA64DFR0),
+	ID_IDX(ID_AA64DFR1),
+	ID_IDX(ID_REG_5_2),
+	ID_IDX(ID_REG_5_3),
+	ID_IDX(ID_AA64AFR0),
+	ID_IDX(ID_AA64AFR1),
+	ID_IDX(ID_REG_5_6),
+	ID_IDX(ID_REG_5_7),
+
+	/* CRm=6 */
+	ID_IDX(ID_AA64ISAR0),
+	ID_IDX(ID_AA64ISAR1),
+	ID_IDX(ID_REG_6_2),
+	ID_IDX(ID_REG_6_3),
+	ID_IDX(ID_REG_6_4),
+	ID_IDX(ID_REG_6_5),
+	ID_IDX(ID_REG_6_6),
+	ID_IDX(ID_REG_6_7),
+
+	/* CRm=7 */
+	ID_IDX(ID_AA64MMFR0),
+	ID_IDX(ID_AA64MMFR1),
+	ID_IDX(ID_AA64MMFR2),
+	ID_IDX(ID_REG_7_3),
+	ID_IDX(ID_REG_7_4),
+	ID_IDX(ID_REG_7_5),
+	ID_IDX(ID_REG_7_6),
+	ID_IDX(ID_REG_7_7),
+};
+
+struct id_reg_test_info {
+	char		*name;
+	uint32_t	id;
+	/* Indicates the register can be set to 0 */
+	bool		can_clear;
+	uint64_t	initial_value;
+	uint64_t	current_value;
+	uint64_t	(*read_reg)(void);
+};
+
+#define	ID_REG_INFO(name)	(&id_reg_list[ID_IDX(name)])
+static struct id_reg_test_info id_reg_list[] = {
+	/* CRm=1 */
+	ID_REG_ENT(ID_PFR0),
+	ID_REG_ENT(ID_PFR1),
+	ID_REG_ENT(ID_DFR0),
+	ID_REG_ENT(ID_AFR0),
+	ID_REG_ENT(ID_MMFR0),
+	ID_REG_ENT(ID_MMFR1),
+	ID_REG_ENT(ID_MMFR2),
+	ID_REG_ENT(ID_MMFR3),
+
+	/* CRm=2 */
+	ID_REG_ENT(ID_ISAR0),
+	ID_REG_ENT(ID_ISAR1),
+	ID_REG_ENT(ID_ISAR2),
+	ID_REG_ENT(ID_ISAR3),
+	ID_REG_ENT(ID_ISAR4),
+	ID_REG_ENT(ID_ISAR5),
+	ID_REG_ENT(ID_MMFR4),
+	ID_REG_ENT(ID_ISAR6),
+
+	/* CRm=3 */
+	ID_REG_ENT(MVFR0),
+	ID_REG_ENT(MVFR1),
+	ID_REG_ENT(MVFR2),
+	ID_REG_ENT(ID_REG_3_3),
+	ID_REG_ENT(ID_PFR2),
+	ID_REG_ENT(ID_DFR1),
+	ID_REG_ENT(ID_MMFR5),
+	ID_REG_ENT(ID_REG_3_7),
+
+	/* CRm=4 */
+	ID_REG_ENT(ID_AA64PFR0),
+	ID_REG_ENT(ID_AA64PFR1),
+	ID_REG_ENT(ID_REG_4_2),
+	ID_REG_ENT(ID_REG_4_3),
+	ID_REG_ENT(ID_AA64ZFR0),
+	ID_REG_ENT(ID_REG_4_5),
+	ID_REG_ENT(ID_REG_4_6),
+	ID_REG_ENT(ID_REG_4_7),
+
+	/* CRm=5 */
+	ID_REG_ENT(ID_AA64DFR0),
+	ID_REG_ENT(ID_AA64DFR1),
+	ID_REG_ENT(ID_REG_5_2),
+	ID_REG_ENT(ID_REG_5_3),
+	ID_REG_ENT(ID_AA64AFR0),
+	ID_REG_ENT(ID_AA64AFR1),
+	ID_REG_ENT(ID_REG_5_6),
+	ID_REG_ENT(ID_REG_5_7),
+
+	/* CRm=6 */
+	ID_REG_ENT(ID_AA64ISAR0),
+	ID_REG_ENT(ID_AA64ISAR1),
+	ID_REG_ENT(ID_REG_6_2),
+	ID_REG_ENT(ID_REG_6_3),
+	ID_REG_ENT(ID_REG_6_4),
+	ID_REG_ENT(ID_REG_6_5),
+	ID_REG_ENT(ID_REG_6_6),
+	ID_REG_ENT(ID_REG_6_7),
+
+	/* CRm=7 */
+	ID_REG_ENT(ID_AA64MMFR0),
+	ID_REG_ENT(ID_AA64MMFR1),
+	ID_REG_ENT(ID_AA64MMFR2),
+	ID_REG_ENT(ID_REG_7_3),
+	ID_REG_ENT(ID_REG_7_4),
+	ID_REG_ENT(ID_REG_7_5),
+	ID_REG_ENT(ID_REG_7_6),
+	ID_REG_ENT(ID_REG_7_7),
+};
+
+static bool aarch32_support = true;
+
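+/* AArch64 ID registers occupy the encoding space Op0=3, Op1=0, CRn=0, CRm=0-7 */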
+#define is_id_reg(id)	\
+	(sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&	\
+	 sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 0 &&	\
+	 sys_reg_CRm(id) < 8)
+
+#define	UPDATE_ID_UFIELD(regval, shift, fval)	\
+	(((regval) & ~(0xfULL << (shift))) |	\
+	 (((uint64_t)((fval) & 0xf)) << (shift)))
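+
+/*
+ * Example (illustrative): UPDATE_ID_UFIELD(0x11, 4, 0x2) yields 0x21 --
+ * the 4-bit unsigned field at bits [7:4] is replaced and all other
+ * fields are preserved.
+ */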
+
+void *pmu_init(struct kvm_vm *vm, uint32_t vcpu)
+{
+	struct kvm_device_attr attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
+	};
+
+	vcpu_ioctl(vm, vcpu, KVM_SET_DEVICE_ATTR, &attr);
+	return NULL;
+}
+
+void *sve_init(struct kvm_vm *vm, uint32_t vcpu)
+{
+	int feature = KVM_ARM_VCPU_SVE;
+
+	vcpu_ioctl(vm, vcpu, KVM_ARM_VCPU_FINALIZE, &feature);
+	return NULL;
+}
+
+#define GICD_BASE_GPA			0x8000000ULL
+#define GICR_BASE_GPA			0x80A0000ULL
+
+void *vgic_init(struct kvm_vm *vm, uint32_t vcpu)
+{
+	/* We just need to configure a GICv3 (we don't actually use it) */
+	int gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+
+	return (void *)(intptr_t)gic_fd;
+}
+
+void vgic_fini(struct kvm_vm *vm, uint32_t vcpu, void *data)
+{
+	close((int)(intptr_t)data);
+}
+
+static bool is_aarch32_id_reg(uint32_t id)
+{
+	uint32_t crm, op2;
+
+	if (!is_id_reg(id))
+		return false;
+
+	crm = sys_reg_CRm(id);
+	op2 = sys_reg_Op2(id);
+	if (crm == 1 || crm == 2 || (crm == 3 && (op2 != 3 && op2 != 7)))
+		/* AArch32 ID register */
+		return true;
+
+	return false;
+}
+
+#define	MAX_CAPS	2
+struct feature_test_info {
+	char	*name;	/* Feature Name (Debug information) */
+
+	/* ID register that identifies the presence of the feature */
+	struct id_reg_test_info	*sreg;
+
+	/*
+	 * Bit position of the ID register field that identifies
+	 * the presence of the feature.
+	 */
+	int	shift;
+
+	/* Min value of the field that indicates the presence of the feature. */
+	int	min;
+	bool	is_sign;	/* Is the field signed or unsigned? */
+	int	ncaps;		/* Number of valid Capabilities in caps[] */
+
+	/* KVM_CAP_* capabilities indicating that KVM supports this feature */
+	long	caps[MAX_CAPS];
+
+	/* struct kvm_enable_cap to use the capability if needed */
+	struct kvm_enable_cap	*opt_in_cap;
+
+	/* Should the guest check the ID register for this feature? */
+	bool	run_test;
+
+	/*
+	 * Extra initialization function to enable the feature if needed.
+	 * (e.g. KVM_ARM_VCPU_FINALIZE for SVE)
+	 * The return value of this function will be passed to fini_feature().
+	 */
+	void	*(*init_feature)(struct kvm_vm *vm, uint32_t vcpuid);
+
+	/*
+	 * Clean up anything that init_feature() initialized or allocated
+	 * as needed. The 'data' is the return value from init_feature().
+	 */
+	void	(*fini_feature)(struct kvm_vm *vm, uint32_t vcpuid, void *data);
+
+	/* struct kvm_vcpu_init to opt-in the feature if needed */
+	struct kvm_vcpu_init	*vcpu_init;
+
+	/* Extra feature specific tests */
+	void	(*test_feature)(struct feature_test_info *finfo);
+};
+
+static void pmu_test(struct feature_test_info *finfo);
+
+/* Information for opt-in CPU features */
+static struct feature_test_info feature_test_info_table[] = {
+	{
+		.name = "SVE",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_SVE_SHIFT,
+		.min = 1,
+		.caps = {KVM_CAP_ARM_SVE},
+		.ncaps = 1,
+		.init_feature = sve_init,
+		.vcpu_init = &(struct kvm_vcpu_init) {
+			.features = {1ULL << KVM_ARM_VCPU_SVE},
+		},
+	},
+	{
+		.name = "GIC",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_GIC_SHIFT,
+		.min = 1,
+		.caps = {KVM_CAP_IRQCHIP},
+		.ncaps = 1,
+		.init_feature = vgic_init,
+		.fini_feature = vgic_fini,
+	},
+	{
+		.name = "MTE",
+		.sreg = ID_REG_INFO(ID_AA64PFR1),
+		.shift = ID_AA64PFR1_MTE_SHIFT,
+		.min = 2,
+		.caps = {KVM_CAP_ARM_MTE},
+		.ncaps = 1,
+		.opt_in_cap = &(struct kvm_enable_cap) {
+				.cap = KVM_CAP_ARM_MTE,
+		},
+	},
+	{
+		.name = "PMUV3",
+		.sreg = ID_REG_INFO(ID_AA64DFR0),
+		.shift = ID_AA64DFR0_PMUVER_SHIFT,
+		.min = 1,
+		.init_feature = pmu_init,
+		.test_feature = pmu_test,
+		.caps = {KVM_CAP_ARM_PMU_V3},
+		.ncaps = 1,
+		.vcpu_init = &(struct kvm_vcpu_init) {
+			.features = {1ULL << KVM_ARM_VCPU_PMU_V3},
+		},
+	},
+	{
+		.name = "PERFMON",
+		.sreg = ID_REG_INFO(ID_DFR0),
+		.shift = ID_DFR0_PERFMON_SHIFT,
+		.min = 3,
+		.init_feature = pmu_init,
+		.test_feature = pmu_test,
+		.caps = {KVM_CAP_ARM_PMU_V3},
+		.ncaps = 1,
+		.vcpu_init = &(struct kvm_vcpu_init) {
+			.features = {1ULL << KVM_ARM_VCPU_PMU_V3},
+		},
+	},
+};
+
+static void walk_id_reg_list(void (*fn)(struct id_reg_test_info *r, void *arg),
+			     void *arg)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(id_reg_list); i++)
+		fn(&id_reg_list[i], arg);
+}
+
+static void guest_code_id_reg_check_one(struct id_reg_test_info *idr, void *arg)
+{
+	uint64_t v = idr->read_reg();
+
+	GUEST_ASSERT_2(v == idr->current_value, idr->name, idr->current_value);
+}
+
+static void guest_code_id_reg_check_all(uint32_t cpu)
+{
+	walk_id_reg_list(guest_code_id_reg_check_one, NULL);
+	GUEST_DONE();
+}
+
+static void guest_code_do_nothing(uint32_t cpu)
+{
+	GUEST_DONE();
+}
+
+static void guest_code_feature_check(uint32_t cpu)
+{
+	int i;
+	struct feature_test_info *finfo;
+
+	for (i = 0; i < ARRAY_SIZE(feature_test_info_table); i++) {
+		finfo = &feature_test_info_table[i];
+		if (finfo->run_test)
+			guest_code_id_reg_check_one(finfo->sreg, NULL);
+	}
+
+	GUEST_DONE();
+}
+
+static void guest_code_ptrauth_check(uint32_t cpuid)
+{
+	struct id_reg_test_info *sreg = ID_REG_INFO(ID_AA64ISAR1);
+	uint64_t val = sreg->read_reg();
+
+	GUEST_ASSERT_2(val == sreg->current_value, "PTRAUTH", val);
+	GUEST_DONE();
+}
+
+static void reset_id_reg_info_current_value(struct id_reg_test_info *info,
+					    void *arg)
+{
+	info->current_value = info->initial_value;
+}
+
+/* Reset current_value field of each id_reg_test_info */
+static void reset_id_reg_info(void)
+{
+	walk_id_reg_list(reset_id_reg_info_current_value, NULL);
+}
+
+static struct kvm_vm *test_vm_create(uint32_t nvcpus,
+		void (*guest_code)(uint32_t), struct kvm_vcpu_init *init,
+		struct kvm_enable_cap *cap)
+{
+	struct kvm_vm *vm;
+	uint32_t cpuid;
+	uint64_t mem_pages;
+
+	mem_pages = DEFAULT_GUEST_PHY_PAGES + DEFAULT_STACK_PGS * nvcpus;
+	mem_pages += mem_pages / (PTES_PER_MIN_PAGE * 2);
+	mem_pages = vm_adjust_num_guest_pages(VM_MODE_DEFAULT, mem_pages);
+
+	vm = vm_create(VM_MODE_DEFAULT, mem_pages, O_RDWR);
+	if (cap)
+		vm_enable_cap(vm, cap);
+
+	kvm_vm_elf_load(vm, program_invocation_name);
+
+	if (init && init->target == -1) {
+		struct kvm_vcpu_init preferred;
+
+		vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &preferred);
+		init->target = preferred.target;
+	}
+
+	vm_init_descriptor_tables(vm);
+	for (cpuid = 0; cpuid < nvcpus; cpuid++) {
+		aarch64_vcpu_add_default(vm, cpuid, init, guest_code);
+		vcpu_init_descriptor_tables(vm, cpuid);
+	}
+
+	ucall_init(vm, NULL);
+	return vm;
+}
+
+static void test_vm_free(struct kvm_vm *vm)
+{
+	ucall_uninit(vm);
+	kvm_vm_free(vm);
+}
+
+#define	TEST_RUN(vm, cpu)	\
+	(test_vcpu_run(__func__, __LINE__, vm, cpu, true))
+
+#define	TEST_RUN_NO_SYNC_DATA(vm, cpu)	\
+	(test_vcpu_run(__func__, __LINE__, vm, cpu, false))
+
+static int test_vcpu_run(const char *test_name, int line,
+			 struct kvm_vm *vm, uint32_t vcpuid, bool sync_data)
+{
+	struct ucall uc;
+	int ret;
+
+	if (sync_data) {
+		sync_global_to_guest(vm, id_reg_list);
+		sync_global_to_guest(vm, feature_test_info_table);
+	}
+
+	vcpu_args_set(vm, vcpuid, 1, vcpuid);
+
+	ret = _vcpu_run(vm, vcpuid);
+	if (ret) {
+		ret = errno;
+		goto sync_exit;
+	}
+
+	switch (get_ucall(vm, vcpuid, &uc)) {
+	case UCALL_SYNC:
+	case UCALL_DONE:
+		ret = 0;
+		break;
+	case UCALL_ABORT:
+		TEST_FAIL(
+		    "%s (%s) at line %d (user %s at line %d), args[3]=0x%lx",
+		    (char *)uc.args[0], (char *)uc.args[2], (int)uc.args[1],
+		    test_name, line, uc.args[3]);
+		break;
+	default:
+		TEST_FAIL("Unexpected guest exit\n");
+	}
+
+sync_exit:
+	if (sync_data) {
+		sync_global_from_guest(vm, id_reg_list);
+		sync_global_from_guest(vm, feature_test_info_table);
+	}
+	return ret;
+}
+
+/*
+ * Test KVM's special handling of ID_AA64DFR0.PMUVER/DFR0.PERFMON: KVM
+ * ignores userspace's request to set those fields to 0xf (IMPLEMENTATION
+ * DEFINED PMU) and sets them to 0 instead.  KVM does this so that live
+ * migration keeps working from older KVMs, which erroneously set those
+ * fields to 0xf for the guest when the host's sanitized value is 0xf
+ * (they should have been set to 0x0, as KVM doesn't support an
+ * IMPLEMENTATION DEFINED PMU for the guest).
+ */
+static void pmu_test(struct feature_test_info *finfo)
+{
+	struct id_reg_test_info *sreg = finfo->sreg;
+	struct kvm_one_reg one_reg;
+	struct kvm_vm *vm;
+	int64_t fval, reg_val;
+	uint32_t vcpu = 0;
+	int ret;
+
+	reset_id_reg_info();
+	finfo->run_test = 1;
+
+	vm = test_vm_create(1, guest_code_feature_check, NULL, NULL);
+
+	/* Make sure that ID_AA64DFR0.PMUVER/DFR0.PERFMON is 0. */
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+	vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+	fval = cpuid_extract_ftr(reg_val, finfo->shift, finfo->is_sign);
+	TEST_ASSERT(fval == 0, "%s field of %s should initially be 0 but is %ld",
+		    finfo->name, sreg->name, fval);
+
+	/* Try to set ID_AA64DFR0.PMUVER/DFR0.PERFMON to -1 (0xf). */
+	fval = -1;
+	reg_val = UPDATE_ID_UFIELD(reg_val, finfo->shift, fval);
+	ret = _vcpu_ioctl(vm, vcpu, KVM_SET_ONE_REG, &one_reg);
+	TEST_ASSERT(ret == 0, "Setting %s field of %s to %ld failed (%d)\n",
+		    finfo->name, sreg->name, fval, ret);
+
+	/* Check if ID_AA64DFR0.PMUVER/DFR0.PERFMON is still 0. */
+	vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+	fval = cpuid_extract_ftr(reg_val, finfo->shift, finfo->is_sign);
+	TEST_ASSERT(fval == 0, "%s field of %s should be 0 but is %ld",
+		    finfo->name, sreg->name, fval);
+
+	sreg->current_value = reg_val;
+	ret = TEST_RUN(vm, vcpu);
+	finfo->run_test = 0;
+	test_vm_free(vm);
+}
+
+struct vm_vcpu_arg {
+	struct kvm_vm	*vm;
+	uint32_t	vcpuid;
+	bool		after_run;
+};
+
+/*
+ * Test that KVM_SET_ONE_REG works with the value KVM_GET_ONE_REG returns,
+ * that KVM_SET_ONE_REG with zero works before KVM_RUN (and fails after
+ * KVM_RUN), and that KVM_GET_ONE_REG returns the value KVM_SET_ONE_REG sets.
+ */
+static void test_get_set_id_reg(struct id_reg_test_info *sreg, void *arg)
+{
+	struct kvm_vm *vm = ((struct vm_vcpu_arg *)arg)->vm;
+	uint32_t vcpuid = ((struct vm_vcpu_arg *)arg)->vcpuid;
+	bool after_run = ((struct vm_vcpu_arg *)arg)->after_run;
+	struct kvm_one_reg one_reg;
+	uint64_t reg_val, tval;
+	int ret;
+
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+
+	/* Check the current register value */
+	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg);
+	TEST_ASSERT(reg_val == sreg->current_value,
+		    "GET(%s) didn't return 0x%lx but 0x%lx",
+		    sreg->name, sreg->current_value, reg_val);
+	tval = reg_val;
+
+	/* Try to clear the register, if it is expected to be clearable. */
+	if ((reg_val != 0) && (sreg->can_clear)) {
+		reg_val = 0;
+		ret = _vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &one_reg);
+		if (after_run) {
+			/* Expect an error after KVM_RUN */
+			TEST_ASSERT(ret,
+				    "Clearing %s unexpectedly worked\n",
+				    sreg->name);
+		} else {
+			TEST_ASSERT(!ret,
+				    "Clearing %s didn't work\n", sreg->name);
+			/*
+			 * Make sure that KVM_GET_ONE_REG provides the value
+			 * we set.
+			 */
+			vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg);
+			TEST_ASSERT(reg_val == 0,
+				    "GET(%s) didn't return 0x%lx but 0x%lx",
+				    sreg->name, (uint64_t)0, reg_val);
+		}
+	}
+
+	/* Check if KVM_SET_ONE_REG works with the original value. */
+	reg_val = tval;
+	ret = _vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &one_reg);
+	TEST_ASSERT(ret == 0, "Setting the same ID reg value should work\n");
+
+	/* Make sure that KVM_GET_ONE_REG provides the value we set. */
+	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg);
+	TEST_ASSERT(reg_val == tval,
+		    "GET(%s) didn't return 0x%lx but 0x%lx",
+		    sreg->name, sreg->current_value, reg_val);
+}
+
+/*
+ * Test that KVM_SET_ONE_REG with the current value works before KVM_RUN,
+ * that the ID register values the guest sees are consistent with the ones
+ * userspace sees, and that KVM_SET_ONE_REG after KVM_RUN works when the
+ * specified value is the same as the current one (and fails otherwise).
+ */
+static void test_id_regs_basic(void)
+{
+	struct kvm_vm *vm;
+	struct vm_vcpu_arg arg = { .vcpuid = 0 };
+	int ret;
+
+	reset_id_reg_info();
+
+	vm = test_vm_create(1, guest_code_id_reg_check_all, NULL, NULL);
+
+	arg.vm = vm;
+	walk_id_reg_list(test_get_set_id_reg, &arg);
+
+	ret = TEST_RUN(vm, 0);
+	assert(!ret);
+
+	arg.after_run = true;
+	walk_id_reg_list(test_get_set_id_reg, &arg);
+
+	test_vm_free(vm);
+}
+
+static bool caps_are_supported(long *caps, int ncaps)
+{
+	int i;
+
+	for (i = 0; i < ncaps; i++) {
+		if (kvm_check_cap(caps[i]) <= 0)
+			return false;
+	}
+	return true;
+}
+
+#define	NCAPS_PTRAUTH	2
+
+/*
+ * Test if the ID register value reflects the ptrauth feature configuration.
+ * KVM_SET_ONE_REG should work as long as the requested value is consistent
+ * with the ptrauth feature configuration.
+ */
+static void test_feature_ptrauth(void)
+{
+	struct kvm_one_reg one_reg;
+	struct kvm_vcpu_init init;
+	struct kvm_vm *vm = NULL;
+	struct id_reg_test_info *sreg = ID_REG_INFO(ID_AA64ISAR1);
+	uint32_t vcpu = 0;
+	int64_t rval;
+	int ret;
+	int apa, api, gpa, gpi;
+	char *name = "PTRAUTH";
+	long caps[NCAPS_PTRAUTH] = {KVM_CAP_ARM_PTRAUTH_ADDRESS,
+				    KVM_CAP_ARM_PTRAUTH_GENERIC};
+
+	reset_id_reg_info();
+	one_reg.addr = (uint64_t)&rval;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+
+	if (caps_are_supported(caps, NCAPS_PTRAUTH)) {
+
+		/* Test with feature enabled */
+		memset(&init, 0, sizeof(init));
+		init.target = -1;
+		init.features[0] = (1ULL << KVM_ARM_VCPU_PTRAUTH_ADDRESS |
+				    1ULL << KVM_ARM_VCPU_PTRAUTH_GENERIC);
+		vm = test_vm_create(1, guest_code_ptrauth_check, &init, NULL);
+		vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+
+		/* Make sure values of apa/api/gpa/gpi fields are expected */
+		apa = cpuid_extract_uftr(rval, ID_AA64ISAR1_APA_SHIFT);
+		api = cpuid_extract_uftr(rval, ID_AA64ISAR1_API_SHIFT);
+		gpa = cpuid_extract_uftr(rval, ID_AA64ISAR1_GPA_SHIFT);
+		gpi = cpuid_extract_uftr(rval, ID_AA64ISAR1_GPI_SHIFT);
+
+		TEST_ASSERT((apa > 0) || (api > 0),
+			    "Either apa(0x%x) or api(0x%x) must be available",
+			    apa, api);
+		TEST_ASSERT((gpa > 0) || (gpi > 0),
+			    "Either gpa(0x%x) or gpi(0x%x) must be available",
+			    gpa, gpi);
+
+		TEST_ASSERT((apa > 0) ^ (api > 0),
+			    "apa(0x%x) and api(0x%x) must not both be available",
+			    apa, api);
+		TEST_ASSERT((gpa > 0) ^ (gpi > 0),
+			    "gpa(0x%x) and gpi(0x%x) must not both be available",
+			    gpa, gpi);
+
+		sreg->current_value = rval;
+
+		pr_debug("%s: Test with %s enabled (%s: 0x%lx)\n",
+			 __func__, name, sreg->name, sreg->current_value);
+
+		/* Make sure that the guest sees the same ID register value. */
+		ret = TEST_RUN(vm, vcpu);
+
+		TEST_ASSERT(!ret, "%s:KVM_RUN failed with %s enabled",
+			    __func__, name);
+		test_vm_free(vm);
+	}
+
+	reset_id_reg_info();
+
+	/* Test with feature disabled */
+	vm = test_vm_create(1, guest_code_feature_check, NULL, NULL);
+	vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+
+	apa = cpuid_extract_uftr(rval, ID_AA64ISAR1_APA_SHIFT);
+	api = cpuid_extract_uftr(rval, ID_AA64ISAR1_API_SHIFT);
+	gpa = cpuid_extract_uftr(rval, ID_AA64ISAR1_GPA_SHIFT);
+	gpi = cpuid_extract_uftr(rval, ID_AA64ISAR1_GPI_SHIFT);
+	TEST_ASSERT(!apa && !api && !gpa && !gpi,
+	    "apa(0x%x), api(0x%x), gpa(0x%x), gpi(0x%x) must be zero",
+	    apa, api, gpa, gpi);
+
+	pr_debug("%s: Test with %s disabled (%s: 0x%lx)\n",
+		 __func__, name, sreg->name, sreg->current_value);
+
+	/* Make sure that the guest sees the same ID register value. */
+	ret = TEST_RUN(vm, vcpu);
+	TEST_ASSERT(!ret, "%s TEST_RUN failed with %s disabled, ret=0x%x",
+		    __func__, name, ret);
+
+	test_vm_free(vm);
+}
+
+static bool feature_caps_are_available(struct feature_test_info *finfo)
+{
+	return ((finfo->ncaps > 0) &&
+		caps_are_supported(finfo->caps, finfo->ncaps));
+}
+
+/*
+ * Test if the ID register value reflects the feature configuration.
+ * KVM_SET_ONE_REG should work as long as the requested value is
+ * consistent with the feature configuration.
+ */
+static void test_feature(struct feature_test_info *finfo)
+{
+	struct id_reg_test_info *sreg = finfo->sreg;
+	struct kvm_one_reg one_reg;
+	struct kvm_vcpu_init init, *initp = NULL;
+	struct kvm_vm *vm = NULL;
+	int64_t fval, reg_val;
+	uint32_t vcpu = 0;
+	bool is_sign = finfo->is_sign;
+	int min = finfo->min;
+	int shift = finfo->shift;
+	int ret;
+	void *data = NULL;
+
+	pr_debug("%s: %s (reg %s)\n", __func__, finfo->name, sreg->name);
+
+	reset_id_reg_info();
+
+	if (is_aarch32_id_reg(sreg->id) && !aarch32_support)
+		/*
+		 * AArch32 is not supported. Skip testing with the AArch32
+		 * ID register.
+		 */
+		return;
+
+	/* Indicate that guest runs the test for the feature */
+	finfo->run_test = 1;
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+
+	/*
+	 * Test with feature enabled if the feature is exposed in the default
+	 * ID register value or the capabilities are supported at KVM level.
+	 */
+	if ((cpuid_extract_ftr(sreg->initial_value, shift, is_sign) >= min) ||
+	    feature_caps_are_available(finfo)) {
+		if (finfo->vcpu_init) {
+			/* Need to enable the feature via KVM_ARM_VCPU_INIT. */
+			memset(&init, 0, sizeof(init));
+			init = *finfo->vcpu_init;
+			init.target = -1;
+			initp = &init;
+		}
+
+		vm = test_vm_create(1, guest_code_feature_check, initp,
+				    finfo->opt_in_cap);
+		if (finfo->init_feature)
+			/* Run any extra setup required to use the feature */
+			data = finfo->init_feature(vm, vcpu);
+
+		/* Check if the ID register value indicates the feature */
+		vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+		fval = cpuid_extract_ftr(reg_val, shift, is_sign);
+		TEST_ASSERT(fval >= min, "%s field of %s is too small (%ld)",
+			    finfo->name, sreg->name, fval);
+		sreg->current_value = reg_val;
+
+		pr_debug("%s: Test with %s enabled (%s: 0x%lx)\n", __func__,
+			 finfo->name, sreg->name, sreg->current_value);
+
+		/* Make sure that the guest sees the same ID register value. */
+		ret = TEST_RUN(vm, vcpu);
+		TEST_ASSERT(!ret, "%s:TEST_RUN failed with %s enabled",
+			    __func__, finfo->name);
+
+		if (finfo->fini_feature)
+			finfo->fini_feature(vm, vcpu, data);
+
+		test_vm_free(vm);
+	}
+
+	reset_id_reg_info();
+
+	/* Test with feature disabled */
+	vm = test_vm_create(1, guest_code_feature_check, NULL, NULL);
+	vcpu_ioctl(vm, vcpu, KVM_GET_ONE_REG, &one_reg);
+	fval = cpuid_extract_ftr(reg_val, shift, is_sign);
+	if (finfo->vcpu_init || finfo->opt_in_cap) {
+		/*
+		 * If the feature needs to be enabled with KVM_ARM_VCPU_INIT
+		 * or opt-in capabilities, the default value of the ID register
+		 * shouldn't indicate the feature.
+		 */
+		TEST_ASSERT(fval < min, "%s field of %s is too big (%ld)",
+		    finfo->name, sreg->name, fval);
+	} else {
+		/* Update the relevant field to hide the feature. */
+		fval = is_sign ? 0xf : 0x0;
+		reg_val = UPDATE_ID_UFIELD(reg_val, shift, fval);
+		ret = _vcpu_ioctl(vm, vcpu, KVM_SET_ONE_REG, &one_reg);
+		TEST_ASSERT(ret == 0, "Disabling %s failed %d (err %d)\n",
+			    finfo->name, ret, errno);
+		sreg->current_value = reg_val;
+	}
+
+	pr_debug("%s: Test with %s disabled (%s: 0x%lx)\n",
+		 __func__, finfo->name, sreg->name, sreg->current_value);
+
+	/* Make sure that the guest sees the same ID register value. */
+	ret = TEST_RUN(vm, vcpu);
+	TEST_ASSERT(!ret, "%s:TEST_RUN failed with %s disabled",
+		    __func__, finfo->name);
+	finfo->run_test = 0;
+	test_vm_free(vm);
+
+	/* Run extra feature specific tests, if any */
+	if (finfo->test_feature)
+		finfo->test_feature(finfo);
+}
+
+/*
+ * For each opt-in feature in feature_test_info_table[],
+ * test if KVM_GET_ONE_REG/KVM_SET_ONE_REG works appropriately according
+ * to the feature configuration.  See test_feature's comment for more detail.
+ */
+static void test_feature_all(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(feature_test_info_table); i++)
+		test_feature(&feature_test_info_table[i]);
+}
+
+int set_id_reg(struct kvm_vm *vm, uint32_t vcpu, struct id_reg_test_info *sreg,
+	       uint64_t new_val)
+{
+	int ret;
+	uint64_t reg_val;
+	struct kvm_one_reg one_reg;
+
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+	one_reg.addr = (uint64_t)&reg_val;
+
+	reg_val = new_val;
+	ret = _vcpu_ioctl(vm, vcpu, KVM_SET_ONE_REG, &one_reg);
+	if (!ret)
+		sreg->current_value = new_val;
+
+	return ret;
+}
+
+
+/*
+ * Create a new VM with one vCPU, set the ID register to @new_val.
+ */
+int set_id_reg_vm(struct id_reg_test_info *sreg, uint64_t new_val)
+{
+	struct kvm_vm *vm;
+	int ret;
+	uint32_t vcpu = 0;
+
+	reset_id_reg_info();
+
+	vm = test_vm_create(1, guest_code_id_reg_check_all, NULL, NULL);
+	ret = set_id_reg(vm, vcpu, sreg, new_val);
+	test_vm_free(vm);
+
+	return ret;
+}
+
+struct frac_info {
+	char	*name;
+	struct id_reg_test_info *sreg;
+	struct id_reg_test_info *frac_sreg;
+	int	shift;
+	int	frac_shift;
+};
+
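+/*
+ * Each entry pairs a main feature field (e.g. ID_AA64PFR0_EL1.RAS) with
+ * the fractional field that refines it (e.g. ID_AA64PFR1_EL1.RASfrac).
+ */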
+struct frac_info frac_info_table[] = {
+	{
+		.name = "RAS",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_RAS_SHIFT,
+		.frac_sreg = ID_REG_INFO(ID_AA64PFR1),
+		.frac_shift = ID_AA64PFR1_RASFRAC_SHIFT,
+	},
+	{
+		.name = "MPAM",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_MPAM_SHIFT,
+		.frac_sreg = ID_REG_INFO(ID_AA64PFR1),
+		.frac_shift = ID_AA64PFR1_MPAMFRAC_SHIFT,
+	},
+	{
+		.name = "CSV2",
+		.sreg = ID_REG_INFO(ID_AA64PFR0),
+		.shift = ID_AA64PFR0_CSV2_SHIFT,
+		.frac_sreg = ID_REG_INFO(ID_AA64PFR1),
+		.frac_shift = ID_AA64PFR1_CSV2FRAC_SHIFT,
+	},
+};
+
+/*
+ * Make sure that we can set the fractional reg field even before setting
+ * the feature reg field.
+ */
+int test_feature_frac_vm(struct frac_info *frac, uint64_t new_val,
+			 uint64_t frac_new_val)
+{
+	struct kvm_vm *vm;
+	uint32_t vcpu = 0;
+	struct id_reg_test_info *sreg, *frac_sreg;
+	int ret;
+
+	sreg = frac->sreg;
+	frac_sreg = frac->frac_sreg;
+	reset_id_reg_info();
+
+	vm = test_vm_create(1, guest_code_id_reg_check_all, NULL, NULL);
+
+	/* Set fractional reg field */
+	ret = set_id_reg(vm, vcpu, frac_sreg, frac_new_val);
+	TEST_ASSERT(!ret, "SET_REG(%s=0x%lx) failed, ret=0x%x",
+		    frac_sreg->name, frac_new_val, ret);
+
+	/* Set feature reg field */
+	ret = set_id_reg(vm, vcpu, sreg, new_val);
+	TEST_ASSERT(!ret, "SET_REG(%s=0x%lx) failed, ret=0x%x",
+		    sreg->name, new_val, ret);
+
+	ret = TEST_RUN(vm, vcpu);
+	test_vm_free(vm);
+
+	return ret;
+}
+
+/*
+ * Test for setting the feature fractional field of the ID register.
+ * When the (main) feature field of the ID register is the same as the host's,
+ * the fractional field value cannot be larger than the host's.
+ * (KVM_SET_ONE_REG should work but KVM_RUN with the larger value will fail)
+ * When the (main) feature field of the ID register is smaller than the host's,
+ * the fractional field can be any value.
+ * The function tests those behaviors.
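+ *
+ * Illustrative example (actual values depend on the host): with host
+ * RAS == 1 and RASfrac == 1, setting RASfrac to 2 passes KVM_SET_ONE_REG
+ * but makes the subsequent KVM_RUN fail; once RAS is lowered to 0,
+ * RASfrac == 2 is accepted by KVM_RUN as well.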
+ */
+void test_feature_frac_one(struct frac_info *frac)
+{
+	uint64_t ftr_val, ftr_fval, frac_val, frac_fval;
+	int ret, shift, frac_shift;
+	struct id_reg_test_info *sreg, *frac_sreg;
+
+	reset_id_reg_info();
+
+	sreg = frac->sreg;
+	shift = frac->shift;
+	frac_sreg = frac->frac_sreg;
+	frac_shift = frac->frac_shift;
+
+	pr_debug("%s(%s Frac) reg:%s(shift:%d) frac reg:%s(shift:%d)\n",
+		 __func__, frac->name, sreg->name, shift, frac_sreg->name,
+		 frac_shift);
+
+	/*
+	 * Use the host's feature value for the guest.
+	 * KVM_RUN with a larger frac value than the host's should fail.
+	 * Otherwise, it should work.
+	 */
+
+	frac_fval = cpuid_extract_uftr(frac_sreg->initial_value, frac_shift);
+	if (frac_fval > 0) {
+		/* Test with smaller frac value */
+		frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value,
+					    frac_shift, frac_fval - 1);
+		ret = test_feature_frac_vm(frac, sreg->initial_value, frac_val);
+		TEST_ASSERT(!ret, "Test smaller %s frac (val:%lx) failed(%d)",
+			    frac->name, frac_val, ret);
+	}
+
+	reset_id_reg_info();
+
+	if (frac_fval != 0xf) {
+		/* Test with larger frac value */
+		frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value,
+						frac_shift, frac_fval + 1);
+
+		/* Setting larger frac shouldn't fail at ioctl */
+		ret = set_id_reg_vm(frac_sreg, frac_val);
+		TEST_ASSERT(!ret,
+			"SET larger %s frac (%s org:%lx, val:%lx) failed(%d)",
+			frac->name, frac_sreg->name, frac_sreg->initial_value,
+			frac_val, ret);
+
+		/* KVM_RUN with larger frac should fail */
+		ret = test_feature_frac_vm(frac, sreg->initial_value, frac_val);
+		TEST_ASSERT(ret,
+			"Test with larger %s frac (%s org:%lx, val:%lx) worked",
+			frac->name, frac_sreg->name, frac_sreg->initial_value,
+			frac_val);
+	}
+
+	reset_id_reg_info();
+
+	/*
+	 * Test with a smaller (main) feature value than the host's.
+	 */
+	ftr_fval = cpuid_extract_uftr(sreg->initial_value, shift);
+	if (ftr_fval == 0)
+		/* Cannot set it to a smaller value */
+		return;
+
+	ftr_val = UPDATE_ID_UFIELD(sreg->initial_value, shift, ftr_fval - 1);
+	ret = test_feature_frac_vm(frac, ftr_val, frac_sreg->initial_value);
+	TEST_ASSERT(!ret, "Test with smaller %s (val:%lx) failed(%d)",
+		    frac->name, ftr_val, ret);
+
+	if (frac_fval > 0) {
+		/* Test with smaller frac value */
+		frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value,
+					    frac_shift, frac_fval - 1);
+		ret = test_feature_frac_vm(frac, ftr_val, frac_val);
+		TEST_ASSERT(!ret,
+		    "Test with smaller %s and frac (val:%lx) failed(%d)",
+		    frac->name, ftr_val, ret);
+	}
+
+	if (frac_fval != 0xf) {
+		/* Test with larger frac value */
+		frac_val = UPDATE_ID_UFIELD(frac_sreg->initial_value,
+					    frac_shift, frac_fval + 1);
+		ret = test_feature_frac_vm(frac, ftr_val, frac_val);
+		TEST_ASSERT(!ret,
+		    "Test with smaller %s and larger frac (val:%lx) failed(%d)",
+		    frac->name, ftr_val, ret);
+	}
+}
+
+/*
+ * Test for setting feature fractional fields of ID registers.
+ * See test_feature_frac_one's comments for more detail.
+ */
+void test_feature_frac_all(void)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(frac_info_table); i++)
+		test_feature_frac_one(&frac_info_table[i]);
+}
+
+void run_test(void)
+{
+	test_id_regs_basic();
+	test_feature_all();
+	test_feature_ptrauth();
+	test_feature_frac_all();
+}
+
+static void init_id_reg_info_one(struct id_reg_test_info *sreg, void *arg)
+{
+	struct kvm_one_reg one_reg;
+	uint64_t reg_val;
+	struct kvm_vm *vm = ((struct vm_vcpu_arg *)arg)->vm;
+	uint32_t vcpuid = ((struct vm_vcpu_arg *)arg)->vcpuid;
+	int ret;
+
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(sreg->id);
+	vcpu_ioctl(vm, vcpuid, KVM_GET_ONE_REG, &one_reg);
+	sreg->current_value = reg_val;
+
+	/* Keep the initial value to reset the register value later */
+	sreg->initial_value = reg_val;
+
+	/* Check if the register can be set to 0 */
+	reg_val = 0;
+	ret = _vcpu_ioctl(vm, vcpuid, KVM_SET_ONE_REG, &one_reg);
+	if (!ret)
+		sreg->can_clear = true;
+
+	pr_debug("%s (0x%x): 0x%lx%s\n", sreg->name, sreg->id,
+		 sreg->initial_value, sreg->can_clear ? ", can clear" : "");
+}
+
+/*
+ * Check if AArch32 is supported, and initialize id_reg_test_info for all
+ * the ID registers.  Loop over the idreg list and populate each id_reg
+ * info with its initial value, current value, and can_clear flag.
+ */
+static void init_test_info(void)
+{
+	uint64_t reg_val;
+	int fval;
+	struct kvm_vm *vm;
+	struct kvm_one_reg one_reg;
+	struct vm_vcpu_arg arg = { .vcpuid = 0 };
+
+	vm = test_vm_create(1, guest_code_do_nothing, NULL, NULL);
+
+	/* Get ID_AA64PFR0_EL1 to check if AArch32 is supported */
+	one_reg.addr = (uint64_t)&reg_val;
+	one_reg.id = KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1);
+	vcpu_ioctl(vm, 0, KVM_GET_ONE_REG, &one_reg);
+	fval = cpuid_extract_uftr(reg_val, ID_AA64PFR0_EL0_SHIFT);
+	if (fval == 0x1)
+		/* No AArch32 support */
+		aarch32_support = false;
+
+	/* Initialize id_reg_test_info */
+	arg.vm = vm;
+	walk_id_reg_list(init_id_reg_info_one, &arg);
+	test_vm_free(vm);
+}
+
+int main(void)
+{
+	setbuf(stdout, NULL);
+
+	if (kvm_check_cap(KVM_CAP_ARM_ID_REG_CONFIGURABLE) <= 0) {
+		print_skip("KVM_CAP_ARM_ID_REG_CONFIGURABLE is not supported");
+		exit(KSFT_SKIP);
+	}
+
+	init_test_info();
+	run_test();
+	return 0;
+}
-- 
2.36.0.rc0.470.gd361397f0d-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 35/38] KVM: arm64: selftests: Test linked breakpoint and watchpoint
  2022-04-19  6:55 ` Reiji Watanabe
  (?)
@ 2022-04-19  6:55   ` Reiji Watanabe
  -1 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add test cases for a linked breakpoint and watchpoint to the
debug-exceptions test.

Signed-off-by: Reiji Watanabe <reijiw@google.com>

---
 .../selftests/kvm/aarch64/debug-exceptions.c  | 225 +++++++++++++++---
 1 file changed, 197 insertions(+), 28 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
index 63b2178210c4..876257be5960 100644
--- a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
+++ b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
@@ -13,25 +13,99 @@
 #define DBGBCR_EXEC	(0x0 << 3)
 #define DBGBCR_EL1	(0x1 << 1)
 #define DBGBCR_E	(0x1 << 0)
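+/*
+ * DBGBCR_EL1.BT = 0x1 is a linked address match (LBN selects the
+ * context-aware breakpoint to link to); BT = 0x3 is a linked Context ID
+ * match against CONTEXTIDR_EL1, i.e. a valid link target.
+ */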
+#define DBGBCR_LBN_SHIFT	16
+#define DBGBCR_BT_SHIFT		20
+#define DBGBCR_BT_ADDR_LINK_CTX	(0x1 << DBGBCR_BT_SHIFT)
+#define DBGBCR_BT_CTX_LINK	(0x3 << DBGBCR_BT_SHIFT)
 
 #define DBGWCR_LEN8	(0xff << 5)
 #define DBGWCR_RD	(0x1 << 3)
 #define DBGWCR_WR	(0x2 << 3)
 #define DBGWCR_EL1	(0x1 << 1)
 #define DBGWCR_E	(0x1 << 0)
+#define DBGWCR_LBN_SHIFT	16
+#define DBGWCR_WT_SHIFT		20
+#define DBGWCR_WT_LINK		(0x1 << DBGWCR_WT_SHIFT)
 
 #define SPSR_D		(1 << 9)
 #define SPSR_SS		(1 << 21)
 
-extern unsigned char sw_bp, sw_bp2, hw_bp, hw_bp2, bp_svc, bp_brk, hw_wp, ss_start;
+extern unsigned char sw_bp, sw_bp2, hw_bp, hw_bp2, bp_svc, bp_brk, hw_wp, ss_start, hw_bp_ctx;
 static volatile uint64_t sw_bp_addr, hw_bp_addr;
 static volatile uint64_t wp_addr, wp_data_addr;
 static volatile uint64_t svc_addr;
 static volatile uint64_t ss_addr[4], ss_idx;
 #define  PC(v)  ((uint64_t)&(v))
 
+#define GEN_DEBUG_WRITE_REG(reg_name)			\
+static void write_##reg_name(int num, uint64_t val)	\
+{							\
+	switch (num) {					\
+	case 0:						\
+		write_sysreg(val, reg_name##0_el1);	\
+		break;					\
+	case 1:						\
+		write_sysreg(val, reg_name##1_el1);	\
+		break;					\
+	case 2:						\
+		write_sysreg(val, reg_name##2_el1);	\
+		break;					\
+	case 3:						\
+		write_sysreg(val, reg_name##3_el1);	\
+		break;					\
+	case 4:						\
+		write_sysreg(val, reg_name##4_el1);	\
+		break;					\
+	case 5:						\
+		write_sysreg(val, reg_name##5_el1);	\
+		break;					\
+	case 6:						\
+		write_sysreg(val, reg_name##6_el1);	\
+		break;					\
+	case 7:						\
+		write_sysreg(val, reg_name##7_el1);	\
+		break;					\
+	case 8:						\
+		write_sysreg(val, reg_name##8_el1);	\
+		break;					\
+	case 9:						\
+		write_sysreg(val, reg_name##9_el1);	\
+		break;					\
+	case 10:					\
+		write_sysreg(val, reg_name##10_el1);	\
+		break;					\
+	case 11:					\
+		write_sysreg(val, reg_name##11_el1);	\
+		break;					\
+	case 12:					\
+		write_sysreg(val, reg_name##12_el1);	\
+		break;					\
+	case 13:					\
+		write_sysreg(val, reg_name##13_el1);	\
+		break;					\
+	case 14:					\
+		write_sysreg(val, reg_name##14_el1);	\
+		break;					\
+	case 15:					\
+		write_sysreg(val, reg_name##15_el1);	\
+		break;					\
+	default:					\
+		GUEST_ASSERT(0);			\
+	}						\
+}
+
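+/*
+ * The dbgb{c,v}r<n>_el1 and dbgw{c,v}r<n>_el1 register names cannot be
+ * computed at run time, so GEN_DEBUG_WRITE_REG() expands to one writer
+ * function per register number.
+ */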
+/* Define write_dbgbcr()/write_dbgbvr()/write_dbgwcr()/write_dbgwvr() */
+GEN_DEBUG_WRITE_REG(dbgbcr)
+GEN_DEBUG_WRITE_REG(dbgbvr)
+GEN_DEBUG_WRITE_REG(dbgwcr)
+GEN_DEBUG_WRITE_REG(dbgwvr)
+
 static void reset_debug_state(void)
 {
+	uint64_t dfr0 = read_sysreg(id_aa64dfr0_el1);
+	uint8_t brps, wrps, i;
+
 	asm volatile("msr daifset, #8");
 
 	write_sysreg(0, osdlr_el1);
@@ -39,11 +113,19 @@ static void reset_debug_state(void)
 	isb();
 
 	write_sysreg(0, mdscr_el1);
-	/* This test only uses the first bp and wp slot. */
-	write_sysreg(0, dbgbvr0_el1);
-	write_sysreg(0, dbgbcr0_el1);
-	write_sysreg(0, dbgwcr0_el1);
-	write_sysreg(0, dbgwvr0_el1);
+	write_sysreg(0, contextidr_el1);
+
+	/* Reset bcr/bvr/wcr/wvr registers */
+	brps = cpuid_extract_uftr(dfr0, ID_AA64DFR0_BRPS_SHIFT);
+	wrps = cpuid_extract_uftr(dfr0, ID_AA64DFR0_WRPS_SHIFT);
+	for (i = 0; i <= brps; i++) {
+		write_dbgbcr(i, 0);
+		write_dbgbvr(i, 0);
+	}
+	for (i = 0; i <= wrps; i++) {
+		write_dbgwcr(i, 0);
+		write_dbgwvr(i, 0);
+	}
 	isb();
 }
 
@@ -55,14 +137,15 @@ static void enable_os_lock(void)
 	GUEST_ASSERT(read_sysreg(oslsr_el1) & 2);
 }
 
-static void install_wp(uint64_t addr)
+static void install_wp(uint8_t wpn, uint64_t addr)
 {
 	uint32_t wcr;
 	uint32_t mdscr;
 
 	wcr = DBGWCR_LEN8 | DBGWCR_RD | DBGWCR_WR | DBGWCR_EL1 | DBGWCR_E;
-	write_sysreg(wcr, dbgwcr0_el1);
-	write_sysreg(addr, dbgwvr0_el1);
+	write_dbgwcr(wpn, wcr);
+	write_dbgwvr(wpn, addr);
+
 	isb();
 
 	asm volatile("msr daifclr, #8");
@@ -72,14 +155,69 @@ static void install_wp(uint64_t addr)
 	isb();
 }
 
-static void install_hw_bp(uint64_t addr)
+static void install_hw_bp(uint8_t bpn, uint64_t addr)
 {
 	uint32_t bcr;
 	uint32_t mdscr;
 
 	bcr = DBGBCR_LEN8 | DBGBCR_EXEC | DBGBCR_EL1 | DBGBCR_E;
-	write_sysreg(bcr, dbgbcr0_el1);
-	write_sysreg(addr, dbgbvr0_el1);
+	write_dbgbcr(bpn, bcr);
+	write_dbgbvr(bpn, addr);
+	isb();
+
+	asm volatile("msr daifclr, #8");
+
+	mdscr = read_sysreg(mdscr_el1) | MDSCR_KDE | MDSCR_MDE;
+	write_sysreg(mdscr, mdscr_el1);
+	isb();
+}
+
+static void install_wp_ctx(uint8_t addr_wp, uint8_t ctx_bp, uint64_t addr,
+			   uint64_t ctx)
+{
+	uint32_t wcr;
+	uint64_t ctx_bcr;
+	uint32_t mdscr;
+
+	/* Setup a context-aware breakpoint to be linked by watchpoint */
+	ctx_bcr = DBGBCR_LEN8 | DBGBCR_EXEC | DBGBCR_EL1 | DBGBCR_E |
+		  DBGBCR_BT_CTX_LINK;
+	write_dbgbcr(ctx_bp, ctx_bcr);
+	write_dbgbvr(ctx_bp, ctx);
+
+	/* Setup a linked watchpoint  */
+	wcr = DBGWCR_LEN8 | DBGWCR_RD | DBGWCR_WR | DBGWCR_EL1 | DBGWCR_E |
+	      DBGWCR_WT_LINK | ((uint32_t)ctx_bp << DBGWCR_LBN_SHIFT);
+	write_dbgwcr(addr_wp, wcr);
+	write_dbgwvr(addr_wp, addr);
+
+	isb();
+
+	asm volatile("msr daifclr, #8");
+
+	mdscr = read_sysreg(mdscr_el1) | MDSCR_KDE | MDSCR_MDE;
+	write_sysreg(mdscr, mdscr_el1);
+	isb();
+}
+
+void install_hw_bp_ctx(uint8_t addr_bp, uint8_t ctx_bp, uint64_t addr,
+		       uint64_t ctx)
+{
+	uint32_t addr_bcr, ctx_bcr;
+	uint32_t mdscr;
+
+	/* Setup a context-aware breakpoint to be linked by breakpoint */
+	ctx_bcr = DBGBCR_LEN8 | DBGBCR_EXEC | DBGBCR_EL1 | DBGBCR_E |
+		  DBGBCR_BT_CTX_LINK;
+	write_dbgbcr(ctx_bp, ctx_bcr);
+	write_dbgbvr(ctx_bp, ctx);
+
+	/* Setup a linked breakpoint  */
+	addr_bcr = DBGBCR_LEN8 | DBGBCR_EXEC | DBGBCR_EL1 | DBGBCR_E |
+		   DBGBCR_BT_ADDR_LINK_CTX |
+		   ((uint32_t)ctx_bp << DBGBCR_LBN_SHIFT);
+	write_dbgbcr(addr_bp, addr_bcr);
+	write_dbgbvr(addr_bp, addr);
 	isb();
 
 	asm volatile("msr daifclr, #8");
@@ -102,8 +240,10 @@ static void install_ss(void)
 
 static volatile char write_data;
 
-static void guest_code(void)
+static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 {
+	uint64_t ctx = 0xc;	/* a random context number */
+
 	GUEST_SYNC(0);
 
 	/* Software-breakpoint */
@@ -115,7 +255,7 @@ static void guest_code(void)
 
 	/* Hardware-breakpoint */
 	reset_debug_state();
-	install_hw_bp(PC(hw_bp));
+	install_hw_bp(bpn, PC(hw_bp));
 	asm volatile("hw_bp: nop");
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(hw_bp));
 
@@ -123,7 +263,7 @@ static void guest_code(void)
 
 	/* Hardware-breakpoint + svc */
 	reset_debug_state();
-	install_hw_bp(PC(bp_svc));
+	install_hw_bp(bpn, PC(bp_svc));
 	asm volatile("bp_svc: svc #0");
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(bp_svc));
 	GUEST_ASSERT_EQ(svc_addr, PC(bp_svc) + 4);
@@ -132,7 +272,7 @@ static void guest_code(void)
 
 	/* Hardware-breakpoint + software-breakpoint */
 	reset_debug_state();
-	install_hw_bp(PC(bp_brk));
+	install_hw_bp(bpn, PC(bp_brk));
 	asm volatile("bp_brk: brk #0");
 	GUEST_ASSERT_EQ(sw_bp_addr, PC(bp_brk));
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(bp_brk));
@@ -141,7 +281,7 @@ static void guest_code(void)
 
 	/* Watchpoint */
 	reset_debug_state();
-	install_wp(PC(write_data));
+	install_wp(wpn, PC(write_data));
 	write_data = 'x';
 	GUEST_ASSERT_EQ(write_data, 'x');
 	GUEST_ASSERT_EQ(wp_data_addr, PC(write_data));
@@ -175,7 +315,7 @@ static void guest_code(void)
 	/* OS Lock blocking hardware-breakpoint */
 	reset_debug_state();
 	enable_os_lock();
-	install_hw_bp(PC(hw_bp2));
+	install_hw_bp(bpn, PC(hw_bp2));
 	hw_bp_addr = 0;
 	asm volatile("hw_bp2: nop");
 	GUEST_ASSERT_EQ(hw_bp_addr, 0);
@@ -187,7 +327,7 @@ static void guest_code(void)
 	enable_os_lock();
 	write_data = '\0';
 	wp_data_addr = 0;
-	install_wp(PC(write_data));
+	install_wp(wpn, PC(write_data));
 	write_data = 'x';
 	GUEST_ASSERT_EQ(write_data, 'x');
 	GUEST_ASSERT_EQ(wp_data_addr, 0);
@@ -206,6 +346,28 @@ static void guest_code(void)
 		     : : : "x0");
 	GUEST_ASSERT_EQ(ss_addr[0], 0);
 
+	/* Linked hardware-breakpoint */
+	hw_bp_addr = 0;
+	reset_debug_state();
+	install_hw_bp_ctx(bpn, ctx_bpn, PC(hw_bp_ctx), ctx);
+	/* Set context id */
+	write_sysreg(ctx, contextidr_el1);
+	isb();
+	asm volatile("hw_bp_ctx: nop");
+	write_sysreg(0, contextidr_el1);
+	GUEST_ASSERT_EQ(hw_bp_addr, PC(hw_bp_ctx));
+	GUEST_SYNC(10);
+
+	/* Linked watchpoint */
+	reset_debug_state();
+	install_wp_ctx(wpn, ctx_bpn, PC(write_data), ctx);
+	/* Set context id */
+	write_sysreg(ctx, contextidr_el1);
+	isb();
+	write_data = 'x';
+	GUEST_ASSERT_EQ(write_data, 'x');
+	GUEST_ASSERT_EQ(wp_data_addr, PC(write_data));
+
 	GUEST_DONE();
 }
 
@@ -240,19 +402,13 @@ static void guest_svc_handler(struct ex_regs *regs)
 	svc_addr = regs->pc;
 }
 
-static int debug_version(struct kvm_vm *vm)
-{
-	uint64_t id_aa64dfr0;
-
-	get_reg(vm, VCPU_ID, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &id_aa64dfr0);
-	return id_aa64dfr0 & 0xf;
-}
-
 int main(int argc, char *argv[])
 {
 	struct kvm_vm *vm;
 	struct ucall uc;
 	int stage;
+	uint64_t aa64dfr0;
+	uint8_t max_brps;
 
 	vm = vm_create_default(VCPU_ID, 0, guest_code);
 	ucall_init(vm, NULL);
@@ -260,7 +416,8 @@ int main(int argc, char *argv[])
 	vm_init_descriptor_tables(vm);
 	vcpu_init_descriptor_tables(vm, VCPU_ID);
 
-	if (debug_version(vm) < 6) {
+	get_reg(vm, VCPU_ID, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &aa64dfr0);
+	if (cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_DEBUGVER_SHIFT) < 6) {
 		print_skip("Armv8 debug architecture not supported.");
 		kvm_vm_free(vm);
 		exit(KSFT_SKIP);
@@ -277,6 +434,18 @@ int main(int argc, char *argv[])
 	vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT,
 				ESR_EC_SVC64, guest_svc_handler);
 
+	/* Number of breakpoints, minus 1 */
+	max_brps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_BRPS_SHIFT);
+
+	/* The value of 0x0 is reserved */
+	TEST_ASSERT(max_brps > 0, "ID_AA64DFR0_EL1.BRPS must be > 0");
+
+	/*
+	 * Test with breakpoint #0 and watchpoint #0, and the highest
+	 * numbered breakpoint (the context-aware breakpoint).
+	 */
+	vcpu_args_set(vm, VCPU_ID, 3, 0, 0, max_brps);
+
 	for (stage = 0; stage < 11; stage++) {
 		vcpu_run(vm, VCPU_ID);
 
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 35/38] KVM: arm64: selftests: Test linked breakpoint and watchpoint
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, Will Deacon, Peter Shier, Paolo Bonzini, linux-arm-kernel

Add test cases for a linked breakpoint and watchpoint to the
debug-exceptions test.

Signed-off-by: Reiji Watanabe <reijiw@google.com>

---
 .../selftests/kvm/aarch64/debug-exceptions.c  | 225 +++++++++++++++---
 1 file changed, 197 insertions(+), 28 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
index 63b2178210c4..876257be5960 100644
--- a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
+++ b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
@@ -13,25 +13,99 @@
 #define DBGBCR_EXEC	(0x0 << 3)
 #define DBGBCR_EL1	(0x1 << 1)
 #define DBGBCR_E	(0x1 << 0)
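+/*
+ * DBGBCR_EL1.BT = 0x1 is a linked address match (LBN selects the
+ * context-aware breakpoint to link to); BT = 0x3 is a linked Context ID
+ * match against CONTEXTIDR_EL1, i.e. a valid link target.
+ */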
+#define DBGBCR_LBN_SHIFT	16
+#define DBGBCR_BT_SHIFT		20
+#define DBGBCR_BT_ADDR_LINK_CTX	(0x1 << DBGBCR_BT_SHIFT)
+#define DBGBCR_BT_CTX_LINK	(0x3 << DBGBCR_BT_SHIFT)
 
 #define DBGWCR_LEN8	(0xff << 5)
 #define DBGWCR_RD	(0x1 << 3)
 #define DBGWCR_WR	(0x2 << 3)
 #define DBGWCR_EL1	(0x1 << 1)
 #define DBGWCR_E	(0x1 << 0)
+#define DBGWCR_LBN_SHIFT	16
+#define DBGWCR_WT_SHIFT		20
+#define DBGWCR_WT_LINK		(0x1 << DBGWCR_WT_SHIFT)
 
 #define SPSR_D		(1 << 9)
 #define SPSR_SS		(1 << 21)
 
-extern unsigned char sw_bp, sw_bp2, hw_bp, hw_bp2, bp_svc, bp_brk, hw_wp, ss_start;
+extern unsigned char sw_bp, sw_bp2, hw_bp, hw_bp2, bp_svc, bp_brk, hw_wp, ss_start, hw_bp_ctx;
 static volatile uint64_t sw_bp_addr, hw_bp_addr;
 static volatile uint64_t wp_addr, wp_data_addr;
 static volatile uint64_t svc_addr;
 static volatile uint64_t ss_addr[4], ss_idx;
 #define  PC(v)  ((uint64_t)&(v))
 
+#define GEN_DEBUG_WRITE_REG(reg_name)			\
+static void write_##reg_name(int num, uint64_t val)	\
+{							\
+	switch (num) {					\
+	case 0:						\
+		write_sysreg(val, reg_name##0_el1);	\
+		break;					\
+	case 1:						\
+		write_sysreg(val, reg_name##1_el1);	\
+		break;					\
+	case 2:						\
+		write_sysreg(val, reg_name##2_el1);	\
+		break;					\
+	case 3:						\
+		write_sysreg(val, reg_name##3_el1);	\
+		break;					\
+	case 4:						\
+		write_sysreg(val, reg_name##4_el1);	\
+		break;					\
+	case 5:						\
+		write_sysreg(val, reg_name##5_el1);	\
+		break;					\
+	case 6:						\
+		write_sysreg(val, reg_name##6_el1);	\
+		break;					\
+	case 7:						\
+		write_sysreg(val, reg_name##7_el1);	\
+		break;					\
+	case 8:						\
+		write_sysreg(val, reg_name##8_el1);	\
+		break;					\
+	case 9:						\
+		write_sysreg(val, reg_name##9_el1);	\
+		break;					\
+	case 10:					\
+		write_sysreg(val, reg_name##10_el1);	\
+		break;					\
+	case 11:					\
+		write_sysreg(val, reg_name##11_el1);	\
+		break;					\
+	case 12:					\
+		write_sysreg(val, reg_name##12_el1);	\
+		break;					\
+	case 13:					\
+		write_sysreg(val, reg_name##13_el1);	\
+		break;					\
+	case 14:					\
+		write_sysreg(val, reg_name##14_el1);	\
+		break;					\
+	case 15:					\
+		write_sysreg(val, reg_name##15_el1);	\
+		break;					\
+	default:					\
+		GUEST_ASSERT(0);			\
+	}						\
+}
+
+/* Define write_dbgbcr()/write_dbgbvr()/write_dbgwcr()/write_dbgwvr() */
+GEN_DEBUG_WRITE_REG(dbgbcr)
+GEN_DEBUG_WRITE_REG(dbgbvr)
+GEN_DEBUG_WRITE_REG(dbgwcr)
+GEN_DEBUG_WRITE_REG(dbgwvr)
+
+
 static void reset_debug_state(void)
 {
+	uint64_t dfr0 = read_sysreg(id_aa64dfr0_el1);
+	uint8_t brps, wrps, i;
+
 	asm volatile("msr daifset, #8");
 
 	write_sysreg(0, osdlr_el1);
@@ -39,11 +113,19 @@ static void reset_debug_state(void)
 	isb();
 
 	write_sysreg(0, mdscr_el1);
-	/* This test only uses the first bp and wp slot. */
-	write_sysreg(0, dbgbvr0_el1);
-	write_sysreg(0, dbgbcr0_el1);
-	write_sysreg(0, dbgwcr0_el1);
-	write_sysreg(0, dbgwvr0_el1);
+	write_sysreg(0, contextidr_el1);
+
+	/* Reset bcr/bvr/wcr/wvr registers */
+	brps = cpuid_extract_uftr(dfr0, ID_AA64DFR0_BRPS_SHIFT);
+	wrps = cpuid_extract_uftr(dfr0, ID_AA64DFR0_WRPS_SHIFT);
+	for (i = 0; i <= brps; i++) {
+		write_dbgbcr(i, 0);
+		write_dbgbvr(i, 0);
+	}
+	for (i = 0; i <= wrps; i++) {
+		write_dbgwcr(i, 0);
+		write_dbgwvr(i, 0);
+	}
 	isb();
 }
 
@@ -55,14 +137,15 @@ static void enable_os_lock(void)
 	GUEST_ASSERT(read_sysreg(oslsr_el1) & 2);
 }
 
-static void install_wp(uint64_t addr)
+static void install_wp(uint8_t wpn, uint64_t addr)
 {
 	uint32_t wcr;
 	uint32_t mdscr;
 
 	wcr = DBGWCR_LEN8 | DBGWCR_RD | DBGWCR_WR | DBGWCR_EL1 | DBGWCR_E;
-	write_sysreg(wcr, dbgwcr0_el1);
-	write_sysreg(addr, dbgwvr0_el1);
+	write_dbgwcr(wpn, wcr);
+	write_dbgwvr(wpn, addr);
+
 	isb();
 
 	asm volatile("msr daifclr, #8");
@@ -72,14 +155,69 @@ static void install_wp(uint64_t addr)
 	isb();
 }
 
-static void install_hw_bp(uint64_t addr)
+static void install_hw_bp(uint8_t bpn, uint64_t addr)
 {
 	uint32_t bcr;
 	uint32_t mdscr;
 
 	bcr = DBGBCR_LEN8 | DBGBCR_EXEC | DBGBCR_EL1 | DBGBCR_E;
-	write_sysreg(bcr, dbgbcr0_el1);
-	write_sysreg(addr, dbgbvr0_el1);
+	write_dbgbcr(bpn, bcr);
+	write_dbgbvr(bpn, addr);
+	isb();
+
+	asm volatile("msr daifclr, #8");
+
+	mdscr = read_sysreg(mdscr_el1) | MDSCR_KDE | MDSCR_MDE;
+	write_sysreg(mdscr, mdscr_el1);
+	isb();
+}
+
+static void install_wp_ctx(uint8_t addr_wp, uint8_t ctx_bp, uint64_t addr,
+			   uint64_t ctx)
+{
+	uint32_t wcr;
+	uint64_t ctx_bcr;
+	uint32_t mdscr;
+
+	/* Set up a context-aware breakpoint to be linked by the watchpoint */
+	ctx_bcr = DBGBCR_LEN8 | DBGBCR_EXEC | DBGBCR_EL1 | DBGBCR_E |
+		  DBGBCR_BT_CTX_LINK;
+	write_dbgbcr(ctx_bp, ctx_bcr);
+	write_dbgbvr(ctx_bp, ctx);
+
+	/* Set up a linked watchpoint */
+	wcr = DBGWCR_LEN8 | DBGWCR_RD | DBGWCR_WR | DBGWCR_EL1 | DBGWCR_E |
+	      DBGWCR_WT_LINK | ((uint32_t)ctx_bp << DBGWCR_LBN_SHIFT);
+	write_dbgwcr(addr_wp, wcr);
+	write_dbgwvr(addr_wp, addr);
+
+	isb();
+
+	asm volatile("msr daifclr, #8");
+
+	mdscr = read_sysreg(mdscr_el1) | MDSCR_KDE | MDSCR_MDE;
+	write_sysreg(mdscr, mdscr_el1);
+	isb();
+}
+
+void install_hw_bp_ctx(uint8_t addr_bp, uint8_t ctx_bp, uint64_t addr,
+		       uint64_t ctx)
+{
+	uint32_t addr_bcr, ctx_bcr;
+	uint32_t mdscr;
+
+	/* Set up a context-aware breakpoint to be linked by the breakpoint */
+	ctx_bcr = DBGBCR_LEN8 | DBGBCR_EXEC | DBGBCR_EL1 | DBGBCR_E |
+		  DBGBCR_BT_CTX_LINK;
+	write_dbgbcr(ctx_bp, ctx_bcr);
+	write_dbgbvr(ctx_bp, ctx);
+
+	/* Set up a linked breakpoint */
+	addr_bcr = DBGBCR_LEN8 | DBGBCR_EXEC | DBGBCR_EL1 | DBGBCR_E |
+		   DBGBCR_BT_ADDR_LINK_CTX |
+		   ((uint32_t)ctx_bp << DBGBCR_LBN_SHIFT);
+	write_dbgbcr(addr_bp, addr_bcr);
+	write_dbgbvr(addr_bp, addr);
 	isb();
 
 	asm volatile("msr daifclr, #8");
@@ -102,8 +240,10 @@ static void install_ss(void)
 
 static volatile char write_data;
 
-static void guest_code(void)
+static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 {
+	uint64_t ctx = 0xc;	/* an arbitrary context ID */
+
 	GUEST_SYNC(0);
 
 	/* Software-breakpoint */
@@ -115,7 +255,7 @@ static void guest_code(void)
 
 	/* Hardware-breakpoint */
 	reset_debug_state();
-	install_hw_bp(PC(hw_bp));
+	install_hw_bp(bpn, PC(hw_bp));
 	asm volatile("hw_bp: nop");
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(hw_bp));
 
@@ -123,7 +263,7 @@ static void guest_code(void)
 
 	/* Hardware-breakpoint + svc */
 	reset_debug_state();
-	install_hw_bp(PC(bp_svc));
+	install_hw_bp(bpn, PC(bp_svc));
 	asm volatile("bp_svc: svc #0");
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(bp_svc));
 	GUEST_ASSERT_EQ(svc_addr, PC(bp_svc) + 4);
@@ -132,7 +272,7 @@ static void guest_code(void)
 
 	/* Hardware-breakpoint + software-breakpoint */
 	reset_debug_state();
-	install_hw_bp(PC(bp_brk));
+	install_hw_bp(bpn, PC(bp_brk));
 	asm volatile("bp_brk: brk #0");
 	GUEST_ASSERT_EQ(sw_bp_addr, PC(bp_brk));
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(bp_brk));
@@ -141,7 +281,7 @@ static void guest_code(void)
 
 	/* Watchpoint */
 	reset_debug_state();
-	install_wp(PC(write_data));
+	install_wp(wpn, PC(write_data));
 	write_data = 'x';
 	GUEST_ASSERT_EQ(write_data, 'x');
 	GUEST_ASSERT_EQ(wp_data_addr, PC(write_data));
@@ -175,7 +315,7 @@ static void guest_code(void)
 	/* OS Lock blocking hardware-breakpoint */
 	reset_debug_state();
 	enable_os_lock();
-	install_hw_bp(PC(hw_bp2));
+	install_hw_bp(bpn, PC(hw_bp2));
 	hw_bp_addr = 0;
 	asm volatile("hw_bp2: nop");
 	GUEST_ASSERT_EQ(hw_bp_addr, 0);
@@ -187,7 +327,7 @@ static void guest_code(void)
 	enable_os_lock();
 	write_data = '\0';
 	wp_data_addr = 0;
-	install_wp(PC(write_data));
+	install_wp(wpn, PC(write_data));
 	write_data = 'x';
 	GUEST_ASSERT_EQ(write_data, 'x');
 	GUEST_ASSERT_EQ(wp_data_addr, 0);
@@ -206,6 +346,28 @@ static void guest_code(void)
 		     : : : "x0");
 	GUEST_ASSERT_EQ(ss_addr[0], 0);
 
+	/* Linked hardware-breakpoint */
+	hw_bp_addr = 0;
+	reset_debug_state();
+	install_hw_bp_ctx(bpn, ctx_bpn, PC(hw_bp_ctx), ctx);
+	/* Set context id */
+	write_sysreg(ctx, contextidr_el1);
+	isb();
+	asm volatile("hw_bp_ctx: nop");
+	write_sysreg(0, contextidr_el1);
+	GUEST_ASSERT_EQ(hw_bp_addr, PC(hw_bp_ctx));
+	GUEST_SYNC(10);
+
+	/* Linked watchpoint */
+	reset_debug_state();
+	install_wp_ctx(wpn, ctx_bpn, PC(write_data), ctx);
+	/* Set context id */
+	write_sysreg(ctx, contextidr_el1);
+	isb();
+	write_data = 'x';
+	GUEST_ASSERT_EQ(write_data, 'x');
+	GUEST_ASSERT_EQ(wp_data_addr, PC(write_data));
+
 	GUEST_DONE();
 }
 
@@ -240,19 +402,13 @@ static void guest_svc_handler(struct ex_regs *regs)
 	svc_addr = regs->pc;
 }
 
-static int debug_version(struct kvm_vm *vm)
-{
-	uint64_t id_aa64dfr0;
-
-	get_reg(vm, VCPU_ID, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &id_aa64dfr0);
-	return id_aa64dfr0 & 0xf;
-}
-
 int main(int argc, char *argv[])
 {
 	struct kvm_vm *vm;
 	struct ucall uc;
 	int stage;
+	uint64_t aa64dfr0;
+	uint8_t max_brps;
 
 	vm = vm_create_default(VCPU_ID, 0, guest_code);
 	ucall_init(vm, NULL);
@@ -260,7 +416,8 @@ int main(int argc, char *argv[])
 	vm_init_descriptor_tables(vm);
 	vcpu_init_descriptor_tables(vm, VCPU_ID);
 
-	if (debug_version(vm) < 6) {
+	get_reg(vm, VCPU_ID, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &aa64dfr0);
+	if (cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_DEBUGVER_SHIFT) < 6) {
 		print_skip("Armv8 debug architecture not supported.");
 		kvm_vm_free(vm);
 		exit(KSFT_SKIP);
@@ -277,6 +434,18 @@ int main(int argc, char *argv[])
 	vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT,
 				ESR_EC_SVC64, guest_svc_handler);
 
+	/* Number of breakpoints, minus 1 */
+	max_brps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_BRPS_SHIFT);
+
+	/* The value of 0x0 is reserved */
+	TEST_ASSERT(max_brps > 0, "ID_AA64DFR0_EL1.BRPS must be > 0");
+
+	/*
+	 * Test with breakpoint#0 and watchpoint#0, and the highest
+	 * numbered breakpoint (the context-aware breakpoint).
+	 */
+	vcpu_args_set(vm, VCPU_ID, 3, 0, 0, max_brps);
+
 	for (stage = 0; stage < 11; stage++) {
 		vcpu_run(vm, VCPU_ID);
 
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread
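
A note on the linking scheme the test exercises: the 16-way switch in
GEN_DEBUG_WRITE_REG() is needed because msr/mrs encode the target register
in the instruction itself, so dbgbcr<n>_el1 cannot be indexed at run time.
For the link itself, a context-aware breakpoint is programmed with
BT = 0b0011 (DBGBCR_BT_CTX_LINK above) and its value register holds a
context ID instead of an address; an ordinary breakpoint or watchpoint
then names that slot in its LBN field. A condensed sketch of the
watchpoint case, using only the helpers and macros from the patch above
(the slot numbers are illustrative):

	/* context-aware breakpoint in slot ctx_bp, matching context ID 0xc */
	write_dbgbcr(ctx_bp, DBGBCR_LEN8 | DBGBCR_EXEC | DBGBCR_EL1 |
			     DBGBCR_E | DBGBCR_BT_CTX_LINK);
	write_dbgbvr(ctx_bp, 0xc);

	/* watchpoint in slot 0, linked to ctx_bp via the LBN field */
	write_dbgwcr(0, DBGWCR_LEN8 | DBGWCR_RD | DBGWCR_WR | DBGWCR_EL1 |
			DBGWCR_E | DBGWCR_WT_LINK |
			((uint32_t)ctx_bp << DBGWCR_LBN_SHIFT));
	write_dbgwvr(0, addr);
	isb();

	/* the watchpoint fires only while CONTEXTIDR_EL1 == 0xc */
	write_sysreg(0xc, contextidr_el1);
	isb();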

* [PATCH v7 36/38] KVM: arm64: selftests: Test breakpoint/watchpoint register access
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add test cases for reading and writing of dbgbcr/dbgbvr/dbgwcr/dbgwvr
registers from userspace and the guest.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 .../selftests/kvm/aarch64/debug-exceptions.c  | 350 +++++++++++++++++-
 1 file changed, 332 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
index 876257be5960..4e00100b9aa1 100644
--- a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
+++ b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
@@ -37,6 +37,26 @@ static volatile uint64_t svc_addr;
 static volatile uint64_t ss_addr[4], ss_idx;
 #define  PC(v)  ((uint64_t)&(v))
 
+struct kvm_guest_debug_arch debug_regs;
+
+static uint64_t update_bcr_lbn(uint64_t val, uint8_t lbn)
+{
+	uint64_t new;
+
+	new = val & ~((uint64_t)0xf << DBGBCR_LBN_SHIFT);
+	new |= (uint64_t)((lbn & 0xf) << DBGBCR_LBN_SHIFT);
+	return new;
+}
+
+static uint64_t update_wcr_lbn(uint64_t val, uint8_t lbn)
+{
+	uint64_t new;
+
+	new = val & ~((uint64_t)0xf << DBGWCR_LBN_SHIFT);
+	new |= (uint64_t)((lbn & 0xf) << DBGWCR_LBN_SHIFT);
+	return new;
+}
+
 #define GEN_DEBUG_WRITE_REG(reg_name)			\
 static void write_##reg_name(int num, uint64_t val)	\
 {							\
@@ -94,12 +114,77 @@ static void write_##reg_name(int num, uint64_t val)	\
 	}						\
 }
 
+#define GEN_DEBUG_READ_REG(reg_name)			\
+u64 read_##reg_name(int num)				\
+{							\
+	u64 val = 0;					\
+							\
+	switch (num) {					\
+	case 0:						\
+		val = read_sysreg(reg_name##0_el1);	\
+		break;					\
+	case 1:						\
+		val = read_sysreg(reg_name##1_el1);	\
+		break;					\
+	case 2:						\
+		val = read_sysreg(reg_name##2_el1);	\
+		break;					\
+	case 3:						\
+		val = read_sysreg(reg_name##3_el1);	\
+		break;					\
+	case 4:						\
+		val = read_sysreg(reg_name##4_el1);	\
+		break;					\
+	case 5:						\
+		val = read_sysreg(reg_name##5_el1);	\
+		break;					\
+	case 6:						\
+		val = read_sysreg(reg_name##6_el1);	\
+		break;					\
+	case 7:						\
+		val = read_sysreg(reg_name##7_el1);	\
+		break;					\
+	case 8:						\
+		val = read_sysreg(reg_name##8_el1);	\
+		break;					\
+	case 9:						\
+		val = read_sysreg(reg_name##9_el1);	\
+		break;					\
+	case 10:					\
+		val = read_sysreg(reg_name##10_el1);	\
+		break;					\
+	case 11:					\
+		val = read_sysreg(reg_name##11_el1);	\
+		break;					\
+	case 12:					\
+		val = read_sysreg(reg_name##12_el1);	\
+		break;					\
+	case 13:					\
+		val = read_sysreg(reg_name##13_el1);	\
+		break;					\
+	case 14:					\
+		val = read_sysreg(reg_name##14_el1);	\
+		break;					\
+	case 15:					\
+		val = read_sysreg(reg_name##15_el1);	\
+		break;					\
+	default:					\
+		GUEST_ASSERT(0);			\
+	}						\
+	return val;					\
+}
+
 /* Define write_dbgbcr()/write_dbgbvr()/write_dbgwcr()/write_dbgwvr() */
 GEN_DEBUG_WRITE_REG(dbgbcr)
 GEN_DEBUG_WRITE_REG(dbgbvr)
 GEN_DEBUG_WRITE_REG(dbgwcr)
 GEN_DEBUG_WRITE_REG(dbgwvr)
 
+/* Define read_dbgbcr()/read_dbgbvr()/read_dbgwcr()/read_dbgwvr() */
+GEN_DEBUG_READ_REG(dbgbcr)
+GEN_DEBUG_READ_REG(dbgbvr)
+GEN_DEBUG_READ_REG(dbgwcr)
+GEN_DEBUG_READ_REG(dbgwvr)
 
 static void reset_debug_state(void)
 {
@@ -238,20 +323,126 @@ static void install_ss(void)
 	isb();
 }
 
+/*
+ * Check if the guest sees bcr/bvr/wcr/wvr register values that userspace
+ * set (by set_debug_regs()), then update them so that userspace can
+ * verify the new values.
+ */
+static void guest_code_bwp_reg_test(struct kvm_guest_debug_arch *dregs)
+{
+	uint64_t dfr0 = read_sysreg(id_aa64dfr0_el1);
+	uint8_t nbps, nwps;
+	int i;
+	u64 val, rval;
+
+	/* Set nbps/nwps to the number of breakpoints/watchpoints. */
+	nbps = cpuid_extract_uftr(dfr0, ID_AA64DFR0_BRPS_SHIFT) + 1;
+	nwps = cpuid_extract_uftr(dfr0, ID_AA64DFR0_WRPS_SHIFT) + 1;
+
+	for (i = 0; i < nbps; i++) {
+		/*
+		 * Check if the dbgbcr value is the same as the one set by
+		 * userspace.
+		 */
+		val = read_dbgbcr(i);
+		GUEST_ASSERT_EQ(val, dregs->dbg_bcr[i]);
+
+		/* Set dbgbcr to some value for userspace to read later */
+		val = update_bcr_lbn(0, (i + 1) % nbps);
+		write_dbgbcr(i, val);
+		rval = read_dbgbcr(i);
+
+		/* Make sure written value could be read */
+		GUEST_ASSERT_EQ(val, rval);
+
+		/* Save the written value for userspace to refer later */
+		dregs->dbg_bcr[i] = val;
+
+		/*
+		 * Check if the dbgbvr value is the same as the one set by
+		 * userspace.
+		 */
+		val = read_dbgbvr(i);
+		GUEST_ASSERT_EQ(val, dregs->dbg_bvr[i]);
+
+		/* Set dbgbvr to some value for userspace to read later */
+		val = (uint64_t)(nbps - i - 1) << 32;
+		write_dbgbvr(i, val);
+
+		/* Make sure written value could be read */
+		rval = read_dbgbvr(i);
+		GUEST_ASSERT_EQ(val, rval);
+
+		/* Save the written value for userspace to refer later */
+		dregs->dbg_bvr[i] = val;
+	}
+
+	for (i = 0; i < nwps; i++) {
+		/*
+		 * Check if the dbgwcr value is the same as the one set by
+		 * userspace.
+		 */
+		val = read_dbgwcr(i);
+		GUEST_ASSERT_EQ(val, dregs->dbg_wcr[i]);
+
+		/* Set dbgwcr to some value for userspace to read later */
+		val = update_wcr_lbn(0, (i + 1) % nbps);
+		write_dbgwcr(i, val);
+
+		/* Make sure written value could be read */
+		rval = read_dbgwcr(i);
+		GUEST_ASSERT_EQ(val, rval);
+
+		/* Save the written value for userspace to refer later */
+		dregs->dbg_wcr[i] = val;
+
+		/*
+		 * Check if the dbgwvr value is the same as the one set by
+		 * userspace.
+		 */
+		val = read_dbgwvr(i);
+		GUEST_ASSERT_EQ(val, dregs->dbg_wvr[i]);
+
+		/* Set dbgwvr to some value for userspace to read later */
+		val = (uint64_t)(nbps - i - 1) << 32;
+		write_dbgwvr(i, val);
+
+		/* Make sure written value could be read */
+		rval = read_dbgwvr(i);
+		GUEST_ASSERT_EQ(val, rval);
+
+		/* Save the written value for userspace to refer later */
+		dregs->dbg_wvr[i] = val;
+	}
+}
+
 static volatile char write_data;
 
-static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
+static void guest_code(struct kvm_guest_debug_arch *dregs,
+		       uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 {
 	uint64_t ctx = 0xc;	/* a random context number */
 
+	/*
+	 * Check if the guest sees bcr/bvr/wcr/wvr register values that
+	 * userspace set before the first KVM_RUN.
+	 */
+	guest_code_bwp_reg_test(dregs);
 	GUEST_SYNC(0);
 
+	/*
+	 * Check if the guest sees bcr/bvr/wcr/wvr register values that
+	 * userspace set after the first KVM_RUN.
+	 */
+	guest_code_bwp_reg_test(dregs);
+	GUEST_SYNC(1);
+
 	/* Software-breakpoint */
 	reset_debug_state();
 	asm volatile("sw_bp: brk #0");
 	GUEST_ASSERT_EQ(sw_bp_addr, PC(sw_bp));
 
-	GUEST_SYNC(1);
+	GUEST_SYNC(2);
 
 	/* Hardware-breakpoint */
 	reset_debug_state();
@@ -259,7 +450,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	asm volatile("hw_bp: nop");
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(hw_bp));
 
-	GUEST_SYNC(2);
+	GUEST_SYNC(3);
 
 	/* Hardware-breakpoint + svc */
 	reset_debug_state();
@@ -268,7 +459,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(bp_svc));
 	GUEST_ASSERT_EQ(svc_addr, PC(bp_svc) + 4);
 
-	GUEST_SYNC(3);
+	GUEST_SYNC(4);
 
 	/* Hardware-breakpoint + software-breakpoint */
 	reset_debug_state();
@@ -277,7 +468,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	GUEST_ASSERT_EQ(sw_bp_addr, PC(bp_brk));
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(bp_brk));
 
-	GUEST_SYNC(4);
+	GUEST_SYNC(5);
 
 	/* Watchpoint */
 	reset_debug_state();
@@ -286,7 +477,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	GUEST_ASSERT_EQ(write_data, 'x');
 	GUEST_ASSERT_EQ(wp_data_addr, PC(write_data));
 
-	GUEST_SYNC(5);
+	GUEST_SYNC(6);
 
 	/* Single-step */
 	reset_debug_state();
@@ -301,7 +492,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	GUEST_ASSERT_EQ(ss_addr[1], PC(ss_start) + 4);
 	GUEST_ASSERT_EQ(ss_addr[2], PC(ss_start) + 8);
 
-	GUEST_SYNC(6);
+	GUEST_SYNC(7);
 
 	/* OS Lock does not block software-breakpoint */
 	reset_debug_state();
@@ -310,7 +501,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	asm volatile("sw_bp2: brk #0");
 	GUEST_ASSERT_EQ(sw_bp_addr, PC(sw_bp2));
 
-	GUEST_SYNC(7);
+	GUEST_SYNC(8);
 
 	/* OS Lock blocking hardware-breakpoint */
 	reset_debug_state();
@@ -320,7 +511,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	asm volatile("hw_bp2: nop");
 	GUEST_ASSERT_EQ(hw_bp_addr, 0);
 
-	GUEST_SYNC(8);
+	GUEST_SYNC(9);
 
 	/* OS Lock blocking watchpoint */
 	reset_debug_state();
@@ -332,7 +523,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	GUEST_ASSERT_EQ(write_data, 'x');
 	GUEST_ASSERT_EQ(wp_data_addr, 0);
 
-	GUEST_SYNC(9);
+	GUEST_SYNC(10);
 
 	/* OS Lock blocking single-step */
 	reset_debug_state();
@@ -356,7 +547,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	asm volatile("hw_bp_ctx: nop");
 	write_sysreg(0, contextidr_el1);
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(hw_bp_ctx));
-	GUEST_SYNC(10);
+	GUEST_SYNC(11);
 
 	/* Linked watchpoint */
 	reset_debug_state();
@@ -402,13 +593,122 @@ static void guest_svc_handler(struct ex_regs *regs)
 	svc_addr = regs->pc;
 }
 
+/*
+ * Set bcr/bvr/wcr/wvr for register read/write testing.
+ * The values that are set by userspace are saved in dregs, which will
+ * be used by the guest code (guest_code_bwp_reg_test()) to make sure
+ * that the guest sees bcr/bvr/wcr/wvr register values that are set
+ * by userspace.
+ */
+static void set_debug_regs(struct kvm_vm *vm, uint32_t vcpu,
+			       struct kvm_guest_debug_arch *dregs,
+			       uint8_t nbps, uint8_t nwps)
+{
+	int i;
+	uint64_t val;
+
+	for (i = 0; i < nbps; i++) {
+		/* Set dbgbcr to some value for the guest to read later */
+		val = update_bcr_lbn(0, i);
+		set_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGBCRn_EL1(i)), val);
+
+		/* Save the written value for the guest to refer later */
+		dregs->dbg_bcr[i] = val;
+
+		/* Make sure the written value could be read */
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGBCRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_bcr[i],
+			    "Unexpected bcr[%d]:0x%lx (expected:0x%llx)\n",
+			    i, val, dregs->dbg_bcr[i]);
+
+		/* Set dbgbvr to some value for the guest to read later */
+		val = (uint64_t)i << 8;
+		set_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGBVRn_EL1(i)), val);
+
+		/* Save the written value for the guest to refer later */
+		dregs->dbg_bvr[i] = val;
+
+		/* Make sure the written value could be read */
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGBVRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_bvr[i],
+			    "Unexpected bvr[%d]:0x%lx (expected:0x%llx)\n",
+			    i, val, dregs->dbg_bvr[i]);
+	}
+
+	for (i = 0; i < nwps; i++) {
+		/* Set dbgwcr to some value for the guest to read later */
+		val = update_wcr_lbn(0, i % nbps);
+		set_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGWCRn_EL1(i)), val);
+
+		/* Save the written value for the guest to refer later */
+		dregs->dbg_wcr[i] = val;
+
+		/* Make sure the written value could be read */
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGWCRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_wcr[i],
+			    "Unexpected wcr[%d]:0x%lx (expected:0x%llx)\n",
+			    i, val, dregs->dbg_wcr[i]);
+
+		/* Set dbgwvr to some value for the guest to read later */
+		val = (uint64_t)i << 8;
+		set_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGWVRn_EL1(i)), val);
+
+		/* Save the written value for the guest to refer later */
+		dregs->dbg_wvr[i] = val;
+
+		/* Make sure the written value could be read */
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGWVRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_wvr[i],
+			    "Unexpected wvr[%d]:0x%lx (expected:0x%llx)\n",
+			    i, val, dregs->dbg_wvr[i]);
+	}
+}
+
+/*
+ * Check if userspace sees bcr/bvr/wcr/wvr register values that are
+ * set by the guest (guest_code_bwp_reg_test()), which are saved in the
+ * given dregs.
+ */
+static void check_debug_regs(struct kvm_vm *vm, uint32_t vcpu,
+			     struct kvm_guest_debug_arch *dregs,
+			     int nbps, int nwps)
+{
+	uint64_t val;
+	int i;
+
+	for (i = 0; i < nbps; i++) {
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGBCRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_bcr[i],
+			    "Unexpected bcr[%d]:0x%lx (Expected: 0x%llx)\n",
+			    i, val, dregs->dbg_bcr[i]);
+
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGBVRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_bvr[i],
+			    "Unexpected bvr[%d]:0x%lx (Expected: 0x%llx)\n",
+			    i, val, dregs->dbg_bvr[i]);
+	}
+
+	for (i = 0; i < nwps; i++) {
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGWCRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_wcr[i],
+			    "Unexpected wcr[%d]:0x%lx (Expected: 0x%llx)\n",
+			    i, val, dregs->dbg_wcr[i]);
+
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGWVRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_wvr[i],
+			    "Unexpected wvr[%d]:0x%lx (Expected: 0x%llx)\n",
+			    i, val, dregs->dbg_wvr[i]);
+	}
+}
+
 int main(int argc, char *argv[])
 {
 	struct kvm_vm *vm;
 	struct ucall uc;
 	int stage;
 	uint64_t aa64dfr0;
-	uint8_t max_brps;
+	uint8_t nbps, nwps;
+	bool debug_reg_test = false;
 
 	vm = vm_create_default(VCPU_ID, 0, guest_code);
 	ucall_init(vm, NULL);
@@ -434,19 +734,28 @@ int main(int argc, char *argv[])
 	vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT,
 				ESR_EC_SVC64, guest_svc_handler);
 
-	/* Number of breakpoints, minus 1 */
-	max_brps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_BRPS_SHIFT);
+	/* Number of breakpoints */
+	nbps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_BRPS_SHIFT) + 1;
+	TEST_ASSERT(nbps >= 2, "Number of breakpoints must be >= 2");
 
-	/* The value of 0x0 is reserved */
-	TEST_ASSERT(max_brps > 0, "ID_AA64DFR0_EL1.BRPS must be > 0");
+	/* Number of watchpoints */
+	nwps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_WRPS_SHIFT) + 1;
+	TEST_ASSERT(nwps >= 2, "Number of watchpoints must be >= 2");
 
 	/*
 	 * Test with breakpoint#0 and watchpoint#0, and the highest
 	 * numbered breakpoint (the context-aware breakpoint).
 	 */
-	vcpu_args_set(vm, VCPU_ID, 3, 0, 0, max_brps);
+	vcpu_args_set(vm, VCPU_ID, 4, &debug_regs, 0, 0, nbps - 1);
+
+	for (stage = 0; stage < 13; stage++) {
+		/* The first two stages sanity-check debug register reads/writes */
+		if (stage < 2) {
+			set_debug_regs(vm, VCPU_ID, &debug_regs, nbps, nwps);
+			sync_global_to_guest(vm, debug_regs);
+			debug_reg_test = true;
+		}
 
-	for (stage = 0; stage < 11; stage++) {
 		vcpu_run(vm, VCPU_ID);
 
 		switch (get_ucall(vm, VCPU_ID, &uc)) {
@@ -454,6 +763,11 @@ int main(int argc, char *argv[])
 			TEST_ASSERT(uc.args[1] == stage,
 				"Stage %d: Unexpected sync ucall, got %lx",
 				stage, (ulong)uc.args[1]);
+			if (debug_reg_test) {
+				debug_reg_test = false;
+				sync_global_from_guest(vm, debug_regs);
+				check_debug_regs(vm, VCPU_ID, &debug_regs, nbps, nwps);
+			}
 			break;
 		case UCALL_ABORT:
 			TEST_FAIL("%s at %s:%ld\n\tvalues: %#lx, %#lx",
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread
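
The userspace half of the handshake above reduces to: write each debug
register with KVM_SET_ONE_REG, record the expected value in the shared
debug_regs structure, publish it to the guest, run the vCPU (the guest
verifies the values and rewrites the registers), then pull debug_regs back
and compare against KVM_GET_ONE_REG. A condensed sketch of one round trip,
assuming the selftest helpers as used in the patch (val, nbps and nwps
stand in for the real loop values):

	set_reg(vm, VCPU_ID, KVM_ARM64_SYS_REG(SYS_DBGBVRn_EL1(0)), val);
	debug_regs.dbg_bvr[0] = val;            /* what the guest expects */
	sync_global_to_guest(vm, debug_regs);
	vcpu_run(vm, VCPU_ID);                  /* guest checks and rewrites */
	sync_global_from_guest(vm, debug_regs); /* what the guest wrote */
	check_debug_regs(vm, VCPU_ID, &debug_regs, nbps, nwps);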

* [PATCH v7 36/38] KVM: arm64: selftests: Test breakpoint/watchpoint register access
@ 2022-04-19  6:55   ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add test cases for reading and writing of dbgbcr/dbgbvr/dbgwcr/dbgwvr
registers from userspace and the guest.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 .../selftests/kvm/aarch64/debug-exceptions.c  | 350 +++++++++++++++++-
 1 file changed, 332 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
index 876257be5960..4e00100b9aa1 100644
--- a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
+++ b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
@@ -37,6 +37,26 @@ static volatile uint64_t svc_addr;
 static volatile uint64_t ss_addr[4], ss_idx;
 #define  PC(v)  ((uint64_t)&(v))
 
+struct kvm_guest_debug_arch debug_regs;
+
+static uint64_t update_bcr_lbn(uint64_t val, uint8_t lbn)
+{
+	uint64_t new;
+
+	new = val & ~((uint64_t)0xf << DBGBCR_LBN_SHIFT);
+	new |= (uint64_t)((lbn & 0xf) << DBGBCR_LBN_SHIFT);
+	return new;
+}
+
+static uint64_t update_wcr_lbn(uint64_t val, uint8_t lbn)
+{
+	uint64_t new;
+
+	new = val & ~((uint64_t)0xf << DBGWCR_LBN_SHIFT);
+	new |= (uint64_t)((lbn & 0xf) << DBGWCR_LBN_SHIFT);
+	return new;
+}
+
 #define GEN_DEBUG_WRITE_REG(reg_name)			\
 static void write_##reg_name(int num, uint64_t val)	\
 {							\
@@ -94,12 +114,77 @@ static void write_##reg_name(int num, uint64_t val)	\
 	}						\
 }
 
+#define GEN_DEBUG_READ_REG(reg_name)			\
+u64 read_##reg_name(int num)				\
+{							\
+	u64 val = 0;					\
+							\
+	switch (num) {					\
+	case 0:						\
+		val = read_sysreg(reg_name##0_el1);	\
+		break;					\
+	case 1:						\
+		val = read_sysreg(reg_name##1_el1);	\
+		break;					\
+	case 2:						\
+		val = read_sysreg(reg_name##2_el1);	\
+		break;					\
+	case 3:						\
+		val = read_sysreg(reg_name##3_el1);	\
+		break;					\
+	case 4:						\
+		val = read_sysreg(reg_name##4_el1);	\
+		break;					\
+	case 5:						\
+		val = read_sysreg(reg_name##5_el1);	\
+		break;					\
+	case 6:						\
+		val = read_sysreg(reg_name##6_el1);	\
+		break;					\
+	case 7:						\
+		val = read_sysreg(reg_name##7_el1);	\
+		break;					\
+	case 8:						\
+		val = read_sysreg(reg_name##8_el1);	\
+		break;					\
+	case 9:						\
+		val = read_sysreg(reg_name##9_el1);	\
+		break;					\
+	case 10:					\
+		val = read_sysreg(reg_name##10_el1);	\
+		break;					\
+	case 11:					\
+		val = read_sysreg(reg_name##11_el1);	\
+		break;					\
+	case 12:					\
+		val = read_sysreg(reg_name##12_el1);	\
+		break;					\
+	case 13:					\
+		val = read_sysreg(reg_name##13_el1);	\
+		break;					\
+	case 14:					\
+		val = read_sysreg(reg_name##14_el1);	\
+		break;					\
+	case 15:					\
+		val = read_sysreg(reg_name##15_el1);	\
+		break;					\
+	default:					\
+		GUEST_ASSERT(0);			\
+	}						\
+	return val;					\
+}
+
 /* Define write_dbgbcr()/write_dbgbvr()/write_dbgwcr()/write_dbgwvr() */
 GEN_DEBUG_WRITE_REG(dbgbcr)
 GEN_DEBUG_WRITE_REG(dbgbvr)
 GEN_DEBUG_WRITE_REG(dbgwcr)
 GEN_DEBUG_WRITE_REG(dbgwvr)
 
+/* Define read_dbgbcr()/read_dbgbvr()/read_dbgwcr()/read_dbgwvr() */
+GEN_DEBUG_READ_REG(dbgbcr)
+GEN_DEBUG_READ_REG(dbgbvr)
+GEN_DEBUG_READ_REG(dbgwcr)
+GEN_DEBUG_READ_REG(dbgwvr)
 
 static void reset_debug_state(void)
 {
@@ -238,20 +323,126 @@ static void install_ss(void)
 	isb();
 }
 
+/*
+ * Check if the guest sees bcr/bvr/wcr/wvr register values that userspace
+ * set (by set_debug_regs()), and update them for userspace to verify
+ * the same.
+ */
+static void guest_code_bwp_reg_test(struct kvm_guest_debug_arch *dregs)
+{
+	uint64_t dfr0 = read_sysreg(id_aa64dfr0_el1);
+	uint8_t nbps, nwps;
+	int i;
+	u64 val, rval;
+
+	/* Set nbps/nwps to the number of breakpoints/watchpoints. */
+	nbps = cpuid_extract_uftr(dfr0, ID_AA64DFR0_BRPS_SHIFT) + 1;
+	nwps = cpuid_extract_uftr(dfr0, ID_AA64DFR0_WRPS_SHIFT) + 1;
+
+	for (i = 0; i < nbps; i++) {
+		/*
+		 * Check if the dbgbcr value is the same as the one set by
+		 * userspace.
+		 */
+		val = read_dbgbcr(i);
+		GUEST_ASSERT_EQ(val, dregs->dbg_bcr[i]);
+
+		/* Set dbgbcr to some value for userspace to read later */
+		val = update_bcr_lbn(0, (i + 1) % nbps);
+		write_dbgbcr(i, val);
+		rval = read_dbgbcr(i);
+
+		/* Make sure written value could be read */
+		GUEST_ASSERT_EQ(val, rval);
+
+		/* Save the written value for userspace to refer later */
+		dregs->dbg_bcr[i] = val;
+
+		/*
+		 * Check if the dbgbvr value is the same as the one set by
+		 * userspace.
+		 */
+		val = read_dbgbvr(i);
+		GUEST_ASSERT_EQ(val, dregs->dbg_bvr[i]);
+
+		/* Set dbgbvr to some value for userspace to read later */
+		val = (uint64_t)(nbps - i - 1) << 32;
+		write_dbgbvr(i, val);
+
+		/* Make sure written value could be read */
+		rval = read_dbgbvr(i);
+		GUEST_ASSERT_EQ(val, rval);
+
+		/* Save the written value for userspace to refer later */
+		dregs->dbg_bvr[i] = val;
+	}
+
+	for (i = 0; i < nwps; i++) {
+		/*
+		 * Check if the dbgwcr value is the same as the one set by
+		 * userspace.
+		 */
+		val = read_dbgwcr(i);
+		GUEST_ASSERT_EQ(val, dregs->dbg_wcr[i]);
+
+		/* Set dbgwcr to some value for userspace to read later */
+		val = update_wcr_lbn(0, (i + 1) % nbps);
+		write_dbgwcr(i, val);
+
+		/* Make sure written value could be read */
+		rval = read_dbgwcr(i);
+		GUEST_ASSERT_EQ(val, rval);
+
+		/* Save the written value for userspace to refer later */
+		dregs->dbg_wcr[i] = val;
+
+		/*
+		 * Check if the dbgwvr value is the same as the one set by
+		 * userspace.
+		 */
+		val = read_dbgwvr(i);
+		GUEST_ASSERT_EQ(val, dregs->dbg_wvr[i]);
+
+		/* Set dbgwvr to some value for userspace to read later */
+		val = (uint64_t)(nbps - i - 1) << 32;
+		write_dbgwvr(i, val);
+
+		/* Make sure written value could be read */
+		rval = read_dbgwvr(i);
+		GUEST_ASSERT_EQ(val, rval);
+
+		/* Save the written value for userspace to refer later */
+		dregs->dbg_wvr[i] = val;
+	}
+}
+
 static volatile char write_data;
 
-static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
+static void guest_code(struct kvm_guest_debug_arch *dregs,
+		       uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 {
 	uint64_t ctx = 0xc;	/* a random context number */
 
+	/*
+	 * Check if the guest sees bcr/bvr/wcr/wvr register values that
+	 * userspace set before the first KVM_RUN.
+	 */
+	guest_code_bwp_reg_test(dregs);
 	GUEST_SYNC(0);
 
+	/*
+	 * Check if the guest sees bcr/bvr/wcr/wvr register values that
+	 * userspace set after the first KVM_RUN.
+	 */
+	guest_code_bwp_reg_test(dregs);
+	GUEST_SYNC(1);
+
 	/* Software-breakpoint */
 	reset_debug_state();
 	asm volatile("sw_bp: brk #0");
 	GUEST_ASSERT_EQ(sw_bp_addr, PC(sw_bp));
 
-	GUEST_SYNC(1);
+	GUEST_SYNC(2);
 
 	/* Hardware-breakpoint */
 	reset_debug_state();
@@ -259,7 +450,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	asm volatile("hw_bp: nop");
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(hw_bp));
 
-	GUEST_SYNC(2);
+	GUEST_SYNC(3);
 
 	/* Hardware-breakpoint + svc */
 	reset_debug_state();
@@ -268,7 +459,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(bp_svc));
 	GUEST_ASSERT_EQ(svc_addr, PC(bp_svc) + 4);
 
-	GUEST_SYNC(3);
+	GUEST_SYNC(4);
 
 	/* Hardware-breakpoint + software-breakpoint */
 	reset_debug_state();
@@ -277,7 +468,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	GUEST_ASSERT_EQ(sw_bp_addr, PC(bp_brk));
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(bp_brk));
 
-	GUEST_SYNC(4);
+	GUEST_SYNC(5);
 
 	/* Watchpoint */
 	reset_debug_state();
@@ -286,7 +477,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	GUEST_ASSERT_EQ(write_data, 'x');
 	GUEST_ASSERT_EQ(wp_data_addr, PC(write_data));
 
-	GUEST_SYNC(5);
+	GUEST_SYNC(6);
 
 	/* Single-step */
 	reset_debug_state();
@@ -301,7 +492,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	GUEST_ASSERT_EQ(ss_addr[1], PC(ss_start) + 4);
 	GUEST_ASSERT_EQ(ss_addr[2], PC(ss_start) + 8);
 
-	GUEST_SYNC(6);
+	GUEST_SYNC(7);
 
 	/* OS Lock does not block software-breakpoint */
 	reset_debug_state();
@@ -310,7 +501,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	asm volatile("sw_bp2: brk #0");
 	GUEST_ASSERT_EQ(sw_bp_addr, PC(sw_bp2));
 
-	GUEST_SYNC(7);
+	GUEST_SYNC(8);
 
 	/* OS Lock blocking hardware-breakpoint */
 	reset_debug_state();
@@ -320,7 +511,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	asm volatile("hw_bp2: nop");
 	GUEST_ASSERT_EQ(hw_bp_addr, 0);
 
-	GUEST_SYNC(8);
+	GUEST_SYNC(9);
 
 	/* OS Lock blocking watchpoint */
 	reset_debug_state();
@@ -332,7 +523,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	GUEST_ASSERT_EQ(write_data, 'x');
 	GUEST_ASSERT_EQ(wp_data_addr, 0);
 
-	GUEST_SYNC(9);
+	GUEST_SYNC(10);
 
 	/* OS Lock blocking single-step */
 	reset_debug_state();
@@ -356,7 +547,7 @@ static void guest_code(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 	asm volatile("hw_bp_ctx: nop");
 	write_sysreg(0, contextidr_el1);
 	GUEST_ASSERT_EQ(hw_bp_addr, PC(hw_bp_ctx));
-	GUEST_SYNC(10);
+	GUEST_SYNC(11);
 
 	/* Linked watchpoint */
 	reset_debug_state();
@@ -402,13 +593,122 @@ static void guest_svc_handler(struct ex_regs *regs)
 	svc_addr = regs->pc;
 }
 
+/*
+ * Set bcr/bvr/wcr/wvr for register read/write testing.
+ * The values that are set by userspace are saved in dregs, which will
+ * be used by the guest code (guest_code_bwp_reg_test()) to make sure
+ * that the guest sees bcr/bvr/wcr/wvr register values that are set
+ * by userspace.
+ */
+static void set_debug_regs(struct kvm_vm *vm, uint32_t vcpu,
+			       struct kvm_guest_debug_arch *dregs,
+			       uint8_t nbps, uint8_t nwps)
+{
+	int i;
+	uint64_t val;
+
+	for (i = 0; i < nbps; i++) {
+		/* Set dbgbcr to some value for the guest to read later */
+		val = update_bcr_lbn(0, i);
+		set_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGBCRn_EL1(i)), val);
+
+		/* Save the written value for the guest to refer to later */
+		dregs->dbg_bcr[i] = val;
+
+		/* Make sure the written value can be read back */
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGBCRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_bcr[i],
+			    "Unexpected bcr[%d]:0x%lx (expected:0x%llx)\n",
+			    i, val, dregs->dbg_bcr[i]);
+
+		/* Set dbgbvr to some value for the guest to read later */
+		val = (uint64_t)i << 8;
+		set_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGBVRn_EL1(i)), val);
+
+		/* Save the written value for the guest to refer to later */
+		dregs->dbg_bvr[i] = val;
+
+		/* Make sure the written value can be read back */
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGBVRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_bvr[i],
+			    "Unexpected bvr[%d]:0x%lx (expected:0x%llx)\n",
+			    i, val, dregs->dbg_bvr[i]);
+	}
+
+	for (i = 0; i < nwps; i++) {
+		/* Set dbgwcr to some value for the guest to read later */
+		val = update_wcr_lbn(0, i % nbps);
+		set_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGWCRn_EL1(i)), val);
+
+		/* Save the written value for the guest to refer to later */
+		dregs->dbg_wcr[i] = val;
+
+		/* Make sure the written value can be read back */
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGWCRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_wcr[i],
+			    "Unexpected wcr[%d]:0x%lx (expected:0x%llx)\n",
+			    i, val, dregs->dbg_wcr[i]);
+
+		/* Set dbgwvr to some value for the guest to read later */
+		val = (uint64_t)i << 8;
+		set_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGWVRn_EL1(i)), val);
+
+		/* Save the written value for the guest to refer to later */
+		dregs->dbg_wvr[i] = val;
+
+		/* Make sure the written value can be read back */
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGWVRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_wvr[i],
+			    "Unexpected wvr[%d]:0x%lx (expected:0x%llx)\n",
+			    i, val, dregs->dbg_wvr[i]);
+	}
+}
+
+/*
+ * Check if the userspace sees bcr/bvr/wcr/wvr register values that are
+ * set by the guest (guest_code_bwp_reg_test()), which are saved in the
+ * given dregs.
+ */
+static void check_debug_regs(struct kvm_vm *vm, uint32_t vcpu,
+			     struct kvm_guest_debug_arch *dregs,
+			     int nbps, int nwps)
+{
+	uint64_t val;
+	int i;
+
+	for (i = 0; i < nbps; i++) {
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGBCRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_bcr[i],
+			    "Unexpected bcr[%d]:0x%lx (Expected: 0x%llx)\n",
+			    i, val, dregs->dbg_bcr[i]);
+
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGBVRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_bvr[i],
+			    "Unexpected bvr[%d]:0x%lx (Expected: 0x%llx)\n",
+			    i, val, dregs->dbg_bvr[i]);
+	}
+
+	for (i = 0; i < nwps; i++) {
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGWCRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_wcr[i],
+			    "Unexpected wcr[%d]:0x%lx (Expected: 0x%llx)\n",
+			    i, val, dregs->dbg_wcr[i]);
+
+		get_reg(vm, vcpu, KVM_ARM64_SYS_REG(SYS_DBGWVRn_EL1(i)), &val);
+		TEST_ASSERT(val == dregs->dbg_wvr[i],
+			    "Unexpected wvr[%d]:0x%lx (Expected: 0x%llx)\n",
+			    i, val, dregs->dbg_wvr[i]);
+	}
+}
+
 int main(int argc, char *argv[])
 {
 	struct kvm_vm *vm;
 	struct ucall uc;
 	int stage;
 	uint64_t aa64dfr0;
-	uint8_t max_brps;
+	uint8_t nbps, nwps;
+	bool debug_reg_test = false;
 
 	vm = vm_create_default(VCPU_ID, 0, guest_code);
 	ucall_init(vm, NULL);
@@ -434,19 +734,28 @@ int main(int argc, char *argv[])
 	vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT,
 				ESR_EC_SVC64, guest_svc_handler);
 
-	/* Number of breakpoints, minus 1 */
-	max_brps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_BRPS_SHIFT);
+	/* Number of breakpoints */
+	nbps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_BRPS_SHIFT) + 1;
+	TEST_ASSERT(nbps >= 2, "Number of breakpoints must be >= 2");
 
-	/* The value of 0x0 is reserved */
-	TEST_ASSERT(max_brps > 0, "ID_AA64DFR0_EL1.BRPS must be > 0");
+	/* Number of watchpoints */
+	nwps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_WRPS_SHIFT) + 1;
+	TEST_ASSERT(nwps >= 2, "Number of watchpoints must be >= 2");
 
 	/*
 	 * Test with breakpoint#0 and watchpoint#0, and the highest
 	 * numbered breakpoint (the context-aware breakpoint).
 	 */
-	vcpu_args_set(vm, VCPU_ID, 3, 0, 0, max_brps);
+	vcpu_args_set(vm, VCPU_ID, 4, &debug_regs, 0, 0, nbps - 1);
+
+	for (stage = 0; stage < 13; stage++) {
+		/* The first two stages sanity-check debug register reads/writes */
+		if (stage < 2) {
+			set_debug_regs(vm, VCPU_ID, &debug_regs, nbps, nwps);
+			sync_global_to_guest(vm, debug_regs);
+			debug_reg_test = true;
+		}
 
-	for (stage = 0; stage < 11; stage++) {
 		vcpu_run(vm, VCPU_ID);
 
 		switch (get_ucall(vm, VCPU_ID, &uc)) {
@@ -454,6 +763,11 @@ int main(int argc, char *argv[])
 			TEST_ASSERT(uc.args[1] == stage,
 				"Stage %d: Unexpected sync ucall, got %lx",
 				stage, (ulong)uc.args[1]);
+			if (debug_reg_test) {
+				debug_reg_test = false;
+				sync_global_from_guest(vm, debug_regs);
+				check_debug_regs(vm, VCPU_ID, &debug_regs, nbps, nwps);
+			}
 			break;
 		case UCALL_ABORT:
 			TEST_FAIL("%s at %s:%ld\n\tvalues: %#lx, %#lx",
-- 
2.36.0.rc0.470.gd361397f0d-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 37/38] KVM: arm64: selftests: Test with every breakpoint/watchpoint
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  -1 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add test cases that use every breakpoint/watchpoint to the
debug-exceptions test.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 .../selftests/kvm/aarch64/debug-exceptions.c  | 70 ++++++++++++++++---
 1 file changed, 59 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
index 4e00100b9aa1..829fad6c7d58 100644
--- a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
+++ b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
@@ -701,7 +701,7 @@ static void check_debug_regs(struct kvm_vm *vm, uint32_t vcpu,
 	}
 }
 
-int main(int argc, char *argv[])
+void run_test(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 {
 	struct kvm_vm *vm;
 	struct ucall uc;
@@ -710,6 +710,8 @@ int main(int argc, char *argv[])
 	uint8_t nbps, nwps;
 	bool debug_reg_test = false;
 
+	pr_debug("%s bpn:%d, wpn:%d, ctx_bpn:%d\n", __func__, bpn, wpn, ctx_bpn);
+
 	vm = vm_create_default(VCPU_ID, 0, guest_code);
 	ucall_init(vm, NULL);
 
@@ -717,11 +719,6 @@ int main(int argc, char *argv[])
 	vcpu_init_descriptor_tables(vm, VCPU_ID);
 
 	get_reg(vm, VCPU_ID, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &aa64dfr0);
-	if (cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_DEBUGVER_SHIFT) < 6) {
-		print_skip("Armv8 debug architecture not supported.");
-		kvm_vm_free(vm);
-		exit(KSFT_SKIP);
-	}
 
 	vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT,
 				ESR_EC_BRK_INS, guest_sw_bp_handler);
@@ -742,11 +739,7 @@ int main(int argc, char *argv[])
 	nwps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_WRPS_SHIFT) + 1;
 	TEST_ASSERT(nwps >= 2, "Number of watchpoints must be >= 2");
 
-	/*
-	 * Test with breakpoint#0 and watchpoint#0, and the highest
-	 * numbered breakpoint (the context-aware breakpoint).
-	 */
-	vcpu_args_set(vm, VCPU_ID, 4, &debug_regs, 0, 0, nbps - 1);
+	vcpu_args_set(vm, VCPU_ID, 4, &debug_regs, bpn, wpn, ctx_bpn);
 
 	for (stage = 0; stage < 13; stage++) {
 		/* First two stages are sanity debug regs read/write check */
@@ -783,5 +776,60 @@ int main(int argc, char *argv[])
 
 done:
 	kvm_vm_free(vm);
+}
+
+/*
+ * Run the debug tests using each valid breakpoint#, watchpoint# and
+ * context-aware breakpoint# for the given ID_AA64DFR0_EL1 configuration.
+ */
+void test_debug(uint64_t aa64dfr0)
+{
+	uint8_t brps, wrps, ctx_cmps;
+	uint8_t normal_brp_num, wrp_num, ctx_brp_base, ctx_brp_num;
+	int b, w, c;
+
+	brps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_BRPS_SHIFT);
+	wrps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_WRPS_SHIFT);
+	ctx_cmps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_CTX_CMPS_SHIFT);
+
+	pr_debug("%s brps:%d, wrps:%d, ctx_cmps:%d\n", __func__,
+		 brps, wrps, ctx_cmps);
+
+	/* Number of normal (non-context-aware) breakpoints */
+	normal_brp_num = brps - ctx_cmps;
+
+	/* Number of watchpoints */
+	wrp_num = wrps + 1;
+
+	/* Number of context aware breakpoints */
+	ctx_brp_num = ctx_cmps + 1;
+
+	/* Lowest context aware breakpoint number */
+	ctx_brp_base = normal_brp_num;
+
+	for (c = ctx_brp_base; c < ctx_brp_base + ctx_brp_num; c++) {
+		for (b = 0; b < normal_brp_num; b++) {
+			for (w = 0; w < wrp_num; w++)
+				run_test(b, w, c);
+		}
+	}
+}
+
+int main(int argc, char *argv[])
+{
+	struct kvm_vm *vm;
+	uint64_t aa64dfr0;
+
+	vm = vm_create_default(VCPU_ID, 0, guest_code);
+	get_reg(vm, VCPU_ID, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &aa64dfr0);
+	kvm_vm_free(vm);
+
+	if (cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_DEBUGVER_SHIFT) < 6) {
+		print_skip("Armv8 debug architecture not supported.");
+		exit(KSFT_SKIP);
+	}
+
+	/* Run debug tests with the default configuration */
+	test_debug(aa64dfr0);
 	return 0;
 }
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* [PATCH v7 38/38] KVM: arm64: selftests: Test breakpoint/watchpoint changing ID_AA64DFR0_EL1
  2022-04-19  6:55 ` Reiji Watanabe
@ 2022-04-19  6:55   ` Reiji Watanabe
  -1 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-04-19  6:55 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm
  Cc: kvm, linux-arm-kernel, James Morse, Alexandru Elisei,
	Suzuki K Poulose, Paolo Bonzini, Will Deacon, Andrew Jones,
	Fuad Tabba, Peng Liang, Peter Shier, Ricardo Koller,
	Oliver Upton, Jing Zhang, Raghavendra Rao Anata, Reiji Watanabe

Add test cases that use every breakpoint/watchpoint with various
combinations of the ID_AA64DFR0_EL1.BRPs, WRPs, and CTX_CMPs
fields to the debug-exceptions test.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
---
 .../selftests/kvm/aarch64/debug-exceptions.c  | 52 ++++++++++++++++---
 1 file changed, 46 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
index 829fad6c7d58..d8ebbb7985da 100644
--- a/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
+++ b/tools/testing/selftests/kvm/aarch64/debug-exceptions.c
@@ -701,18 +701,19 @@ static void check_debug_regs(struct kvm_vm *vm, uint32_t vcpu,
 	}
 }
 
-void run_test(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
+void run_test(uint64_t aa64dfr0, uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
 {
 	struct kvm_vm *vm;
 	struct ucall uc;
 	int stage;
-	uint64_t aa64dfr0;
 	uint8_t nbps, nwps;
 	bool debug_reg_test = false;
 
-	pr_debug("%s bpn:%d, wpn:%d, ctx_bpn:%d\n", __func__, bpn, wpn, ctx_bpn);
-
+	pr_debug("%s aa64dfr0:0x%lx, bpn:%d, wpn:%d, ctx_bpn:%d\n", __func__,
+		 aa64dfr0, bpn, wpn, ctx_bpn);
 	vm = vm_create_default(VCPU_ID, 0, guest_code);
+	set_reg(vm, VCPU_ID, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), aa64dfr0);
+
 	ucall_init(vm, NULL);
 
 	vm_init_descriptor_tables(vm);
@@ -810,15 +811,33 @@ void test_debug(uint64_t aa64dfr0)
 	for (c = ctx_brp_base; c < ctx_brp_base + ctx_brp_num; c++) {
 		for (b = 0; b < normal_brp_num; b++) {
 			for (w = 0; w < wrp_num; w++)
-				run_test(b, w, c);
+				run_test(aa64dfr0, b, w, c);
 		}
 	}
 }
 
+uint64_t update_aa64dfr0_bwrp(uint64_t dfr0, uint8_t brps, uint8_t wrps,
+			      uint8_t ctx_brps)
+{
+	/* Clear brps/wrps/ctx_cmps fields */
+	dfr0 &= ~(ARM64_FEATURE_MASK(ID_AA64DFR0_BRPS) |
+		  ARM64_FEATURE_MASK(ID_AA64DFR0_WRPS) |
+		  ARM64_FEATURE_MASK(ID_AA64DFR0_CTX_CMPS));
+
+	/* Set new brps/wrps/ctx_cmps fields */
+	dfr0 |= ((uint64_t)brps << ID_AA64DFR0_BRPS_SHIFT) |
+		((uint64_t)wrps << ID_AA64DFR0_WRPS_SHIFT) |
+		((uint64_t)ctx_brps << ID_AA64DFR0_CTX_CMPS_SHIFT);
+
+	return dfr0;
+}
+
 int main(int argc, char *argv[])
 {
 	struct kvm_vm *vm;
-	uint64_t aa64dfr0;
+	uint64_t aa64dfr0, test_aa64dfr0;
+	uint8_t max_brps, max_wrps, max_ctx_brps;
+	int bs, ws, cs;
 
 	vm = vm_create_default(VCPU_ID, 0, guest_code);
 	get_reg(vm, VCPU_ID, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &aa64dfr0);
@@ -831,5 +850,26 @@ int main(int argc, char *argv[])
 
 	/* Run debug tests with the default configuration */
 	test_debug(aa64dfr0);
+
+	if (!kvm_check_cap(KVM_CAP_ARM_ID_REG_CONFIGURABLE))
+		return 0;
+
+	/*
+	 * Run debug tests with various numbers of breakpoints/watchpoints
+	 * configured.
+	 */
+	max_brps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_BRPS_SHIFT);
+	max_wrps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_WRPS_SHIFT);
+	max_ctx_brps = cpuid_extract_uftr(aa64dfr0, ID_AA64DFR0_CTX_CMPS_SHIFT);
+	for (cs = 0; cs <= max_ctx_brps; cs++) {
+		for (bs = cs + 1; bs <= max_brps; bs++) {
+			for (ws = 1; ws <= max_wrps; ws++) {
+				test_aa64dfr0 = update_aa64dfr0_bwrp(aa64dfr0,
+								    bs, ws, cs);
+				test_debug(test_aa64dfr0);
+			}
+		}
+	}
+
 	return 0;
 }
-- 
2.36.0.rc0.470.gd361397f0d-goog


^ permalink raw reply related	[flat|nested] 123+ messages in thread

* Re: [PATCH v7 01/38] KVM: arm64: Introduce a validation function for an ID register
  2022-04-19  6:55   ` Reiji Watanabe
@ 2022-05-04  6:35     ` Oliver Upton
  -1 siblings, 0 replies; 123+ messages in thread
From: Oliver Upton @ 2022-05-04  6:35 UTC (permalink / raw)
  To: Reiji Watanabe, h
  Cc: Marc Zyngier, kvmarm, kvm, linux-arm-kernel, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini, Will Deacon,
	Andrew Jones, Fuad Tabba, Peng Liang, Peter Shier,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata

On Mon, Apr 18, 2022 at 11:55:07PM -0700, Reiji Watanabe wrote:
> Introduce arm64_check_features(), which does basic validity checking
> of an ID register value against the register's limit value, which is
> generally the host's sanitized value.
> 
> This function will be used by the following patches to check if an ID
> register value that userspace tries to set for a guest can be supported
> on the host.
> 
> Signed-off-by: Reiji Watanabe <reijiw@google.com>
> ---
>  arch/arm64/include/asm/cpufeature.h |  1 +
>  arch/arm64/kernel/cpufeature.c      | 52 +++++++++++++++++++++++++++++
>  2 files changed, 53 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index c62e7e5e2f0c..7a009d4e18a6 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -634,6 +634,7 @@ void check_local_cpu_capabilities(void);
>  
>  u64 read_sanitised_ftr_reg(u32 id);
>  u64 __read_sysreg_by_encoding(u32 sys_id);
> +int arm64_check_features(const struct arm64_ftr_bits *ftrp, u64 val, u64 limit);
>  
>  static inline bool cpu_supports_mixed_endian_el0(void)
>  {
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index d72c4b4d389c..dbbc69745f22 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -3239,3 +3239,55 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
>  		return sprintf(buf, "Vulnerable\n");
>  	}
>  }
> +
> +/**
> + * arm64_check_features() - Check if a feature register value constitutes
> + * a subset of features indicated by @limit.
> + *
> + * @ftrp: Pointer to an array of arm64_ftr_bits. It must be terminated by
> + * an item whose width field is zero.
> + * @val: The feature register value to check
> + * @limit: The limit value of the feature register
> + *
> + * This function will check if each feature field of @val is the "safe" value
> + * against @limit based on @ftrp[], each of which specifies the target field
> + * (shift, width), whether or not the field is for a signed value (sign),
> + * how the field is determined to be "safe" (type), and the safe value
> + * (safe_val) when type == FTR_EXACT (safe_val won't be used by this
> + * function when type != FTR_EXACT). Any other fields in arm64_ftr_bits
> + * won't be used by this function. If a field value in @val is the same
> + * as the one in @limit, it is always considered the safe value regardless
> + * of the type. For register fields that are not in @ftrp[], only the value
> + * in @limit is considered the safe value.
> + *
> + * Return: 0 if all the fields are safe. Otherwise, return negative errno.
> + */
> +int arm64_check_features(const struct arm64_ftr_bits *ftrp, u64 val, u64 limit)
> +{
> +	u64 mask = 0;
> +
> +	for (; ftrp->width; ftrp++) {
> +		s64 f_val, f_lim, safe_val;
> +
> +		f_val = arm64_ftr_value(ftrp, val);
> +		f_lim = arm64_ftr_value(ftrp, limit);
> +		mask |= arm64_ftr_mask(ftrp);
> +
> +		if (f_val == f_lim)
> +			safe_val = f_val;
> +		else
> +			safe_val = arm64_ftr_safe_value(ftrp, f_val, f_lim);
> +
> +		if (safe_val != f_val)
> +			return -E2BIG;
> +	}
> +
> +	/*
> +	 * For fields that are not indicated in ftrp, values in limit are the
> +	 * safe values.
> +	 */
> +	if ((val & ~mask) != (limit & ~mask))
> +		return -E2BIG;

This bit is interesting. Apologies if I paged out relevant context. What
features are we trying to limit that exist outside of an arm64_ftr_bits
definition? I'll follow the series and see if I figure it out later :-P
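
For what it's worth, a minimal sketch of what that last check catches
(ftrp here is hypothetical, describing only one unsigned FTR_LOWER_SAFE
field in bits [3:0], so bit 8 below is deliberately outside any
arm64_ftr_bits entry):

	u64 limit = BIT(8) | 0x2;	/* bit 8 is not described by ftrp[] */

	arm64_check_features(ftrp, limit, limit);		/* 0: identical value    */
	arm64_check_features(ftrp, BIT(8) | 0x1, limit);	/* 0: field 1 <= 2       */
	arm64_check_features(ftrp, BIT(8) | 0x3, limit);	/* -E2BIG: field 3 > 2   */
	arm64_check_features(ftrp, 0x2, limit);		/* -E2BIG: bit 8 differs */

So anything not covered by an arm64_ftr_bits entry must match the limit
bit-for-bit rather than being treated as "lower is safe".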

Generally speaking, though, it seems to me that we'd prefer to have an
arm64_ftr_bits struct plumbed up for whatever hits this case.

--
Thanks,
Oliver

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v7 01/38] KVM: arm64: Introduce a validation function for an ID register
  2022-05-04  6:35     ` Oliver Upton
@ 2022-06-01  6:16       ` Reiji Watanabe
  -1 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-06-01  6:16 UTC (permalink / raw)
  To: Oliver Upton
  Cc: h, Marc Zyngier, kvmarm, kvm, Linux ARM, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Paolo Bonzini, Will Deacon,
	Andrew Jones, Fuad Tabba, Peng Liang, Peter Shier,
	Ricardo Koller, Jing Zhang, Raghavendra Rao Anata

Hi Oliver,

On Tue, May 3, 2022 at 11:35 PM Oliver Upton <oupton@google.com> wrote:
>
> On Mon, Apr 18, 2022 at 11:55:07PM -0700, Reiji Watanabe wrote:
> > Introduce arm64_check_features(), which does basic validity checking
> > of an ID register value against the register's limit value, which is
> > generally the host's sanitized value.
> >
> > This function will be used by the following patches to check if an ID
> > register value that userspace tries to set for a guest can be supported
> > on the host.
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > ---
> >  arch/arm64/include/asm/cpufeature.h |  1 +
> >  arch/arm64/kernel/cpufeature.c      | 52 +++++++++++++++++++++++++++++
> >  2 files changed, 53 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> > index c62e7e5e2f0c..7a009d4e18a6 100644
> > --- a/arch/arm64/include/asm/cpufeature.h
> > +++ b/arch/arm64/include/asm/cpufeature.h
> > @@ -634,6 +634,7 @@ void check_local_cpu_capabilities(void);
> >
> >  u64 read_sanitised_ftr_reg(u32 id);
> >  u64 __read_sysreg_by_encoding(u32 sys_id);
> > +int arm64_check_features(const struct arm64_ftr_bits *ftrp, u64 val, u64 limit);
> >
> >  static inline bool cpu_supports_mixed_endian_el0(void)
> >  {
> > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > index d72c4b4d389c..dbbc69745f22 100644
> > --- a/arch/arm64/kernel/cpufeature.c
> > +++ b/arch/arm64/kernel/cpufeature.c
> > @@ -3239,3 +3239,55 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
> >               return sprintf(buf, "Vulnerable\n");
> >       }
> >  }
> > +
> > +/**
> > + * arm64_check_features() - Check if a feature register value constitutes
> > + * a subset of features indicated by @limit.
> > + *
> > + * @ftrp: Pointer to an array of arm64_ftr_bits. It must be terminated by
> > + * an item whose width field is zero.
> > + * @val: The feature register value to check
> > + * @limit: The limit value of the feature register
> > + *
> > + * This function will check if each feature field of @val is the "safe" value
> > + * against @limit based on @ftrp[], each of which specifies the target field
> > + * (shift, width), whether or not the field is for a signed value (sign),
> > + * how the field is determined to be "safe" (type), and the safe value
> > + * (safe_val) when type == FTR_EXACT (safe_val won't be used by this
> > + * function when type != FTR_EXACT). Any other fields in arm64_ftr_bits
> > + * won't be used by this function. If a field value in @val is the same
> > + * as the one in @limit, it is always considered the safe value regardless
> > + * of the type. For register fields that are not in @ftrp[], only the value
> > + * in @limit is considered the safe value.
> > + *
> > + * Return: 0 if all the fields are safe. Otherwise, return negative errno.
> > + */
> > +int arm64_check_features(const struct arm64_ftr_bits *ftrp, u64 val, u64 limit)
> > +{
> > +     u64 mask = 0;
> > +
> > +     for (; ftrp->width; ftrp++) {
> > +             s64 f_val, f_lim, safe_val;
> > +
> > +             f_val = arm64_ftr_value(ftrp, val);
> > +             f_lim = arm64_ftr_value(ftrp, limit);
> > +             mask |= arm64_ftr_mask(ftrp);
> > +
> > +             if (f_val == f_lim)
> > +                     safe_val = f_val;
> > +             else
> > +                     safe_val = arm64_ftr_safe_value(ftrp, f_val, f_lim);
> > +
> > +             if (safe_val != f_val)
> > +                     return -E2BIG;
> > +     }
> > +
> > +     /*
> > +      * For fields that are not indicated in ftrp, values in limit are the
> > +      * safe values.
> > +      */
> > +     if ((val & ~mask) != (limit & ~mask))
> > +             return -E2BIG;
>
> This bit is interesting. Apologies if I paged out relevant context. What
> features are we trying to limit that exist outside of an arm64_ftr_bits
> definition? I'll follow the series and see if I figure it out later :-P
>
> Generally speaking, though, it seems to me that we'd prefer to have an
> arm64_ftr_bits struct plumbed up for whatever hits this case.

I'm sorry that I completely overlooked this until now...

This code path is not currently exercised by this series, since KVM
fills in any statically undefined field as an unsigned, lower-safe field.

But, considering that the arm64_ftr_bits arrays defined in cpufeature.c
don't have definitions for every bit, I wanted the function to handle
such arm64_ftr_bits as well (the code basically makes sure that the
undefined fields are 0).
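
As a concrete sketch (purely illustrative, not an actual cpufeature.c
entry), assuming a partial table that only describes bits [3:0]:

	static const struct arm64_ftr_bits ftr_example[] = {
		ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, 0, 4, 0),
		ARM64_FTR_END,
	};

With a limit whose undescribed bits are all 0 (as is the case for the
limits KVM passes in), arm64_check_features(ftr_example, val, limit) can
only return 0 when every bit of val outside bits [3:0] is also 0.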

Thanks,
Reiji

^ permalink raw reply	[flat|nested] 123+ messages in thread

* Re: [PATCH v7 01/38] KVM: arm64: Introduce a validation function for an ID register
@ 2022-06-01  6:16       ` Reiji Watanabe
  0 siblings, 0 replies; 123+ messages in thread
From: Reiji Watanabe @ 2022-06-01  6:16 UTC (permalink / raw)
  To: Oliver Upton
  Cc: kvm, Marc Zyngier, Peter Shier, Peng Liang, Will Deacon, h,
	Paolo Bonzini, kvmarm, Linux ARM

Hi Oliver,

On Tue, May 3, 2022 at 11:35 PM Oliver Upton <oupton@google.com> wrote:
>
> On Mon, Apr 18, 2022 at 11:55:07PM -0700, Reiji Watanabe wrote:
> > Introduce arm64_check_features(), which does a basic validity checking
> > of an ID register value against the register's limit value, which is
> > generally the host's sanitized value.
> >
> > This function will be used by the following patches to check if an ID
> > register value that userspace tries to set for a guest can be supported
> > on the host.
> >
> > Signed-off-by: Reiji Watanabe <reijiw@google.com>
> > ---
> >  arch/arm64/include/asm/cpufeature.h |  1 +
> >  arch/arm64/kernel/cpufeature.c      | 52 +++++++++++++++++++++++++++++
> >  2 files changed, 53 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> > index c62e7e5e2f0c..7a009d4e18a6 100644
> > --- a/arch/arm64/include/asm/cpufeature.h
> > +++ b/arch/arm64/include/asm/cpufeature.h
> > @@ -634,6 +634,7 @@ void check_local_cpu_capabilities(void);
> >
> >  u64 read_sanitised_ftr_reg(u32 id);
> >  u64 __read_sysreg_by_encoding(u32 sys_id);
> > +int arm64_check_features(const struct arm64_ftr_bits *ftrp, u64 val, u64 limit);
> >
> >  static inline bool cpu_supports_mixed_endian_el0(void)
> >  {
> > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > index d72c4b4d389c..dbbc69745f22 100644
> > --- a/arch/arm64/kernel/cpufeature.c
> > +++ b/arch/arm64/kernel/cpufeature.c
> > @@ -3239,3 +3239,55 @@ ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
> >               return sprintf(buf, "Vulnerable\n");
> >       }
> >  }
> > +
> > +/**
> > + * arm64_check_features() - Check if a feature register value constitutes
> > + * a subset of features indicated by @limit.
> > + *
> > + * @ftrp: Pointer to an array of arm64_ftr_bits. It must be terminated by
> > + * an item whose width field is zero.
> > + * @val: The feature register value to check
> > + * @limit: The limit value of the feature register
> > + *
> > + * This function will check if each feature field of @val is the "safe" value
> > + * against @limit based on @ftrp[], each of which specifies the target field
> > + * (shift, width), whether or not the field is for a signed value (sign),
> > + * how the field is determined to be "safe" (type), and the safe value
> > + * (safe_val) when type == FTR_EXACT (safe_val won't be used by this
> > + * function when type != FTR_EXACT). Any other fields in arm64_ftr_bits
> > + * won't be used by this function. If a field value in @val is the same
> > + * as the one in @limit, it is always considered the safe value regardless
> > + * of the type. For register fields that are not in @ftrp[], only the value
> > + * in @limit is considered the safe value.
> > + *
> > + * Return: 0 if all the fields are safe. Otherwise, return negative errno.
> > + */
> > +int arm64_check_features(const struct arm64_ftr_bits *ftrp, u64 val, u64 limit)
> > +{
> > +     u64 mask = 0;
> > +
> > +     for (; ftrp->width; ftrp++) {
> > +             s64 f_val, f_lim, safe_val;
> > +
> > +             f_val = arm64_ftr_value(ftrp, val);
> > +             f_lim = arm64_ftr_value(ftrp, limit);
> > +             mask |= arm64_ftr_mask(ftrp);
> > +
> > +             if (f_val == f_lim)
> > +                     safe_val = f_val;
> > +             else
> > +                     safe_val = arm64_ftr_safe_value(ftrp, f_val, f_lim);
> > +
> > +             if (safe_val != f_val)
> > +                     return -E2BIG;
> > +     }
> > +
> > +     /*
> > +      * For fields that are not indicated in ftrp, values in limit are the
> > +      * safe values.
> > +      */
> > +     if ((val & ~mask) != (limit & ~mask))
> > +             return -E2BIG;
>
> This bit is interesting. Apologies if I paged out relevant context. What
> features are we trying to limit that exist outside of an arm64_ftr_bits
> definition? I'll follow the series and see if I figure out later :-P
>
> Generally speaking, though, it seems to me that we'd prefer to have an
> arm64_ftr_bits struct plumbed up for whatever hits this case.

I'm sorry that I completely overlooked this until now...

This code path is not currently exercised by this series, since KVM
treats any statically undefined fields as unsigned lower-safe fields.

But, since the arm64_ftr_bits arrays defined in cpufeature.c don't
have definitions for every field, I wanted the function to handle such
arrays as well (the code is basically there to make sure that undefined
fields are 0, by requiring them to match @limit).

Thanks,
Reiji
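
For illustration, here is a minimal sketch (not part of the patch) of
how a caller might use arm64_check_features() to validate a
userspace-supplied ID register value against the host's sanitized
value; the helper name and its wrapper logic are hypothetical:

/*
 * Hypothetical caller: validate a userspace-supplied ID register value.
 * @ftrp describes the register's known fields; the host's sanitized
 * value serves as the upper limit on what a guest may be given.
 */
static int validate_id_reg_val(const struct arm64_ftr_bits *ftrp,
			       u32 sys_id, u64 new_val)
{
	u64 limit = read_sanitised_ftr_reg(sys_id);

	/* 0 if every field of new_val is "safe"; -E2BIG otherwise. */
	return arm64_check_features(ftrp, new_val, limit);
}

Note that any bits of new_val outside the fields described by @ftrp
must match the sanitized value exactly, or the check fails with -E2BIG.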

end of thread

Thread overview: 123+ messages
2022-04-19  6:55 [PATCH v7 00/38] KVM: arm64: Make CPU ID registers writable by userspace Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 01/38] KVM: arm64: Introduce a validation function for an ID register Reiji Watanabe
2022-05-04  6:35   ` Oliver Upton
2022-06-01  6:16     ` Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 02/38] KVM: arm64: Save ID registers' sanitized value per guest Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 03/38] KVM: arm64: Introduce struct id_reg_desc Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 04/38] KVM: arm64: Generate id_reg_desc's ftr_bits at KVM init when needed Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 05/38] KVM: arm64: Prohibit modifying values of ID regs for 32bit EL1 guests Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 06/38] KVM: arm64: Make ID_AA64PFR0_EL1 writable Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 07/38] KVM: arm64: Make ID_AA64PFR1_EL1 writable Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 08/38] KVM: arm64: Make ID_AA64ISAR0_EL1 writable Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 09/38] KVM: arm64: Make ID_AA64ISAR1_EL1 writable Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 10/38] KVM: arm64: Make ID_AA64ISAR2_EL1 writable Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 11/38] KVM: arm64: Make ID_AA64MMFR0_EL1 writable Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 12/38] KVM: arm64: Add a KVM flag indicating emulating debug regs access is needed Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 13/38] KVM: arm64: Emulate dbgbcr/dbgbvr accesses Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 14/38] KVM: arm64: Emulate dbgwcr accesses Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 15/38] KVM: arm64: Make ID_AA64DFR0_EL1/ID_DFR0_EL1 writable Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 16/38] KVM: arm64: Make ID_DFR1_EL1 writable Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 17/38] KVM: arm64: Make ID_MMFR0_EL1 writable Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 18/38] KVM: arm64: Make MVFR1_EL1 writable Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 19/38] KVM: arm64: Add remaining ID registers to id_reg_desc_table Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 20/38] KVM: arm64: Use id_reg_desc_table for ID registers Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 21/38] KVM: arm64: Add consistency checking for frac fields of ID registers Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 22/38] KVM: arm64: Introduce KVM_CAP_ARM_ID_REG_CONFIGURABLE capability Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 23/38] KVM: arm64: Add kunit test for ID register validation Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 24/38] KVM: arm64: Use vcpu->arch cptr_el2 to track value of cptr_el2 for VHE Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 25/38] KVM: arm64: Use vcpu->arch.mdcr_el2 to track value of mdcr_el2 Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 26/38] KVM: arm64: Introduce framework to trap disabled features Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 27/38] KVM: arm64: Trap disabled features of ID_AA64PFR0_EL1 Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 28/38] KVM: arm64: Trap disabled features of ID_AA64PFR1_EL1 Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 29/38] KVM: arm64: Trap disabled features of ID_AA64DFR0_EL1 Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 30/38] KVM: arm64: Trap disabled features of ID_AA64MMFR1_EL1 Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 31/38] KVM: arm64: Trap disabled features of ID_AA64ISAR1_EL1 Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 32/38] KVM: arm64: Add kunit test for trap initialization Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 33/38] KVM: arm64: selftests: Add helpers to extract a field of ID registers Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 34/38] KVM: arm64: selftests: Introduce id_reg_test Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 35/38] KVM: arm64: selftests: Test linked breakpoint and watchpoint Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 36/38] KVM: arm64: selftests: Test breakpoint/watchpoint register access Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 37/38] KVM: arm64: selftests: Test with every breakpoint/watchpoint Reiji Watanabe
2022-04-19  6:55 ` [PATCH v7 38/38] KVM: arm64: selftests: Test breakpoint/watchpoint changing ID_AA64DFR0_EL1 Reiji Watanabe
