* [REPOST PATCH 00/16] Add support for vPMU selftests
@ 2023-02-15  1:07 Raghavendra Rao Ananta
  2023-02-15  1:07 ` [REPOST PATCH 01/16] tools: arm64: Import perf_event.h Raghavendra Rao Ananta
                   ` (15 more replies)
  0 siblings, 16 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hello,

The series aims to add vPMU selftests to improve the test coverage
for KVM's PMU emulation. It includes tests that validate actions
from userspace, such as verifying the guest's read/write accesses to
the PMU registers while limiting the number of PMCs, and allowing or
denying certain events via the KVM_ARM_VCPU_PMU_V3_FILTER attribute.
It also includes tests for KVM's guarding of the PMU attributes to
count EL2/EL3 events, and for the KVM behavior that enables PMU
emulation. The last part validates the guest's expectations of the
vPMU by setting up a stress test that force-migrates multiple vCPUs
frequently across random pCPUs in the system, thus verifying that
KVM manages the vCPU PMU contexts correctly.
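
For reference, the forced migration is conceptually just re-pinning
the vCPU thread, roughly as in the sketch below (illustrative only;
the helper name and the direct use of sched_setaffinity() are
assumptions, not the series' exact code):

#define _GNU_SOURCE
#include <sched.h>
#include <stdlib.h>
#include <unistd.h>

/* Illustrative: move the calling vCPU thread to a random pCPU. */
static void migrate_self_to_random_pcpu(void)
{
	int nr_cpus = sysconf(_SC_NPROCESSORS_ONLN);
	cpu_set_t cpuset;

	CPU_ZERO(&cpuset);
	CPU_SET(rand() % nr_cpus, &cpuset);
	/* pid 0 == the calling thread, i.e. the one issuing KVM_RUN */
	if (sched_setaffinity(0, sizeof(cpuset), &cpuset))
		perror("sched_setaffinity");
}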

As suggested by Oliver in my original post of the series [1] (and with
Reiji's permission), I'm re-posting the series to include the
selftest patches from Reiji's series that aims to limit the number
of PMCs for the guest [2].

Patches 1-4 are unmodified patches 11-14 from Reiji's series [2],
which introduce the vPMU selftest and add tests that validate the
guest's read/write accesses to the implemented and unimplemented
counters.

Patch-5 refactors the existing tests so that the upcoming tests can
be plugged in easily, and renames the test file to a more generic name.

Patch-6 and 7 add helper macros and functions, respectively, to
interact with the cycle counter.
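
For context, a minimal guest-side sketch of such helpers (the names
here are hypothetical; the helpers actually added by these patches may
differ) could look like the following, using the selftests'
read_sysreg()/write_sysreg() wrappers:

/* Bit 31 of PMCNTEN{SET,CLR}_EL0 controls the cycle counter. */
#define ARMV8_PMU_CYCLE_COUNTER_IDX	31

static inline void enable_cycle_counter(void)
{
	write_sysreg(BIT(ARMV8_PMU_CYCLE_COUNTER_IDX), pmcntenset_el0);
	isb();
}

static inline uint64_t read_cycle_counter(void)
{
	return read_sysreg(pmccntr_el0);
}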

Patch-8 extends create_vpmu_vm() to accept an array of event filters
as an argument, which are applied to the VM.

Patch-9 tests the KVM_ARM_VCPU_PMU_V3_FILTER attribute by applying
various combinations of events that are to be allowed or denied for
the guest and verifying the guest's behavior.
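
As a rough sketch of what the userspace side of such a filter looks
like (assuming the standard struct kvm_pmu_event_filter uapi; the
plumbing through create_vpmu_vm() in this series may differ), denying
a single event would be:

struct kvm_pmu_event_filter filter = {
	.base_event	= ARMV8_PMUV3_PERFCTR_INST_RETIRED,
	.nevents	= 1,
	.action		= KVM_PMU_EVENT_DENY,	/* or KVM_PMU_EVENT_ALLOW */
};
struct kvm_device_attr filter_attr = {
	.group	= KVM_ARM_VCPU_PMU_V3_CTRL,
	.attr	= KVM_ARM_VCPU_PMU_V3_FILTER,
	.addr	= (uint64_t)&filter,
};

/* Filters are applied before the vCPU is first run. */
vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);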

Patch-10 adds a test to validate KVM's handling of guest requests to
count events in EL2/EL3.
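
For illustration, the guest requests such counting through the
PMEVTYPER<n>_EL0 filter bits; a guest-side sketch using the accessors
from patch 3 (KVM is expected to keep host EL2/EL3 events hidden from
the guest regardless of the request):

write_pmevtypern(0, ARMV8_PMUV3_PERFCTR_CPU_CYCLES | ARMV8_PMU_INCLUDE_EL2);
enable_counter(0);
isb();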

Patch-11 introduces the vCPU migration stress test by validating the
cycle counter's and a general-purpose counter's behavior across vCPU
migrations.

Patch-12, 13, and 14 expand the tests in patch-8 to validate the
overflow/IRQ functionality, chained events, and occupancy of all the
PMU counters, respectively.
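
For reference, the chained-event setup being exercised looks roughly
like the guest-side sketch below (accessors from patch 3; <n> must be
even, and the odd counter <n+1> advances on <n>'s overflow). The
overflow/IRQ test additionally arms PMINTENSET_EL1:

write_pmevtypern(n, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
write_pmevtypern(n + 1, ARMV8_PMUV3_PERFCTR_CHAIN);
write_pmevcntrn(n, 0xfffffff0);		/* close to a 32-bit overflow */
write_sysreg(BIT(n), pmintenset_el1);	/* overflow IRQ for counter n */
enable_counter(n + 1);
enable_counter(n);
isb();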

Patch-15 extends create_vpmu_vm() to create multiple vCPUs for the VM.

Patch-16 extends the stress test to cover multiple vCPUs.

The series has been tested on hardware with PMUv3p1 and PMUv3p5, on
top of v6.2-rc7 plus Reiji's series [2].

Thank you.
Raghavendra

[1]: https://lore.kernel.org/all/20230213180234.2885032-1-rananta@google.com/
[2]: https://lore.kernel.org/all/20230211031506.4159098-1-reijiw@google.com/

Raghavendra Rao Ananta (12):
  selftests: KVM: aarch64: Refactor the vPMU counter access tests
  tools: arm64: perf_event: Define Cycle counter enable/overflow bits
  selftests: KVM: aarch64: Add PMU cycle counter helpers
  selftests: KVM: aarch64: Consider PMU event filters for VM creation
  selftests: KVM: aarch64: Add KVM PMU event filter test
  selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test
  selftests: KVM: aarch64: Add vCPU migration test for PMU
  selftests: KVM: aarch64: Test PMU overflow/IRQ functionality
  selftests: KVM: aarch64: Test chained events for PMU
  selftests: KVM: aarch64: Add PMU test to chain all the counters
  selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation
  selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs

Reiji Watanabe (4):
  tools: arm64: Import perf_event.h
  KVM: selftests: aarch64: Introduce vpmu_counter_access test
  KVM: selftests: aarch64: vPMU register test for implemented counters
  KVM: selftests: aarch64: vPMU register test for unimplemented counters

 tools/arch/arm64/include/asm/perf_event.h     |  265 +++
 tools/testing/selftests/kvm/Makefile          |    1 +
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 1710 +++++++++++++++++
 .../selftests/kvm/include/aarch64/processor.h |    1 +
 4 files changed, 1977 insertions(+)
 create mode 100644 tools/arch/arm64/include/asm/perf_event.h
 create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_test.c

-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 01/16] tools: arm64: Import perf_event.h
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-02-15  1:07 ` [REPOST PATCH 02/16] KVM: selftests: aarch64: Introduce vpmu_counter_access test Raghavendra Rao Ananta
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

Copy perf_event.h from the kernel's arch/arm64/include/asm/perf_event.h.
The following patches will use macros defined in this header.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 tools/arch/arm64/include/asm/perf_event.h | 258 ++++++++++++++++++++++
 1 file changed, 258 insertions(+)
 create mode 100644 tools/arch/arm64/include/asm/perf_event.h

diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h
new file mode 100644
index 0000000000000..97e49a4d4969f
--- /dev/null
+++ b/tools/arch/arm64/include/asm/perf_event.h
@@ -0,0 +1,258 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+#ifndef __ASM_PERF_EVENT_H
+#define __ASM_PERF_EVENT_H
+
+#define	ARMV8_PMU_MAX_COUNTERS	32
+#define	ARMV8_PMU_COUNTER_MASK	(ARMV8_PMU_MAX_COUNTERS - 1)
+
+/*
+ * Common architectural and microarchitectural event numbers.
+ */
+#define ARMV8_PMUV3_PERFCTR_SW_INCR				0x0000
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL			0x0001
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL			0x0002
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL			0x0003
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE				0x0004
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL			0x0005
+#define ARMV8_PMUV3_PERFCTR_LD_RETIRED				0x0006
+#define ARMV8_PMUV3_PERFCTR_ST_RETIRED				0x0007
+#define ARMV8_PMUV3_PERFCTR_INST_RETIRED			0x0008
+#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN				0x0009
+#define ARMV8_PMUV3_PERFCTR_EXC_RETURN				0x000A
+#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED			0x000B
+#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED			0x000C
+#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED			0x000D
+#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED			0x000E
+#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED		0x000F
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED				0x0010
+#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES				0x0011
+#define ARMV8_PMUV3_PERFCTR_BR_PRED				0x0012
+#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS				0x0013
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE				0x0014
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB			0x0015
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE				0x0016
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL			0x0017
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB			0x0018
+#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS				0x0019
+#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR			0x001A
+#define ARMV8_PMUV3_PERFCTR_INST_SPEC				0x001B
+#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED			0x001C
+#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES				0x001D
+#define ARMV8_PMUV3_PERFCTR_CHAIN				0x001E
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE			0x001F
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE			0x0020
+#define ARMV8_PMUV3_PERFCTR_BR_RETIRED				0x0021
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED			0x0022
+#define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND			0x0023
+#define ARMV8_PMUV3_PERFCTR_STALL_BACKEND			0x0024
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB				0x0025
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB				0x0026
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE				0x0027
+#define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL			0x0028
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE			0x0029
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL			0x002A
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE				0x002B
+#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB			0x002C
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL			0x002D
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL			0x002E
+#define ARMV8_PMUV3_PERFCTR_L2D_TLB				0x002F
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB				0x0030
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS			0x0031
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE				0x0032
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS			0x0033
+#define ARMV8_PMUV3_PERFCTR_DTLB_WALK				0x0034
+#define ARMV8_PMUV3_PERFCTR_ITLB_WALK				0x0035
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD				0x0036
+#define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD			0x0037
+#define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD			0x0038
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD			0x0039
+#define ARMV8_PMUV3_PERFCTR_OP_RETIRED				0x003A
+#define ARMV8_PMUV3_PERFCTR_OP_SPEC				0x003B
+#define ARMV8_PMUV3_PERFCTR_STALL				0x003C
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND			0x003D
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND			0x003E
+#define ARMV8_PMUV3_PERFCTR_STALL_SLOT				0x003F
+
+/* Statistical profiling extension microarchitectural events */
+#define	ARMV8_SPE_PERFCTR_SAMPLE_POP				0x4000
+#define	ARMV8_SPE_PERFCTR_SAMPLE_FEED				0x4001
+#define	ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE			0x4002
+#define	ARMV8_SPE_PERFCTR_SAMPLE_COLLISION			0x4003
+
+/* AMUv1 architecture events */
+#define	ARMV8_AMU_PERFCTR_CNT_CYCLES				0x4004
+#define	ARMV8_AMU_PERFCTR_STALL_BACKEND_MEM			0x4005
+
+/* long-latency read miss events */
+#define	ARMV8_PMUV3_PERFCTR_L1I_CACHE_LMISS			0x4006
+#define	ARMV8_PMUV3_PERFCTR_L2D_CACHE_LMISS_RD			0x4009
+#define	ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS			0x400A
+#define	ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD			0x400B
+
+/* Trace buffer events */
+#define ARMV8_PMUV3_PERFCTR_TRB_WRAP				0x400C
+#define ARMV8_PMUV3_PERFCTR_TRB_TRIG				0x400E
+
+/* Trace unit events */
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT0				0x4010
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT1				0x4011
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT2				0x4012
+#define ARMV8_PMUV3_PERFCTR_TRCEXTOUT3				0x4013
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4			0x4018
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5			0x4019
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6			0x401A
+#define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7			0x401B
+
+/* additional latency from alignment events */
+#define	ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT			0x4020
+#define	ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT			0x4021
+#define	ARMV8_PMUV3_PERFCTR_ST_ALIGN_LAT			0x4022
+
+/* Armv8.5 Memory Tagging Extension events */
+#define	ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED			0x4024
+#define	ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_RD			0x4025
+#define	ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_WR			0x4026
+
+/* ARMv8 recommended implementation defined event types */
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD			0x0040
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR			0x0041
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD		0x0042
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR		0x0043
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER		0x0044
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER		0x0045
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM		0x0046
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN			0x0047
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL			0x0048
+
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD			0x004C
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR			0x004D
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD				0x004E
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR				0x004F
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD			0x0050
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR			0x0051
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD		0x0052
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR		0x0053
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM		0x0056
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN			0x0057
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL			0x0058
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD			0x005C
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR			0x005D
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD				0x005E
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR				0x005F
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD			0x0060
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR			0x0061
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED			0x0062
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED		0x0063
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL			0x0064
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH			0x0065
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD			0x0066
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR			0x0067
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC			0x0068
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC			0x0069
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC		0x006A
+
+#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC				0x006C
+#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC			0x006D
+#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC			0x006E
+#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC				0x006F
+#define ARMV8_IMPDEF_PERFCTR_LD_SPEC				0x0070
+#define ARMV8_IMPDEF_PERFCTR_ST_SPEC				0x0071
+#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC				0x0072
+#define ARMV8_IMPDEF_PERFCTR_DP_SPEC				0x0073
+#define ARMV8_IMPDEF_PERFCTR_ASE_SPEC				0x0074
+#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC				0x0075
+#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC			0x0076
+#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC			0x0077
+#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC			0x0078
+#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC			0x0079
+#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC			0x007A
+
+#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC				0x007C
+#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC				0x007D
+#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC				0x007E
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF				0x0081
+#define ARMV8_IMPDEF_PERFCTR_EXC_SVC				0x0082
+#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT				0x0083
+#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT				0x0084
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ				0x0086
+#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ				0x0087
+#define ARMV8_IMPDEF_PERFCTR_EXC_SMC				0x0088
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_HVC				0x008A
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT			0x008B
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT			0x008C
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER			0x008D
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ			0x008E
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ			0x008F
+#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC				0x0090
+#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC				0x0091
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD			0x00A0
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR			0x00A1
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD		0x00A2
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR		0x00A3
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM		0x00A6
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN			0x00A7
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL			0x00A8
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMU_PMCR_E	(1 << 0) /* Enable all counters */
+#define ARMV8_PMU_PMCR_P	(1 << 1) /* Reset all counters */
+#define ARMV8_PMU_PMCR_C	(1 << 2) /* Cycle counter reset */
+#define ARMV8_PMU_PMCR_D	(1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMU_PMCR_X	(1 << 4) /* Export to ETM */
+#define ARMV8_PMU_PMCR_DP	(1 << 5) /* Disable CCNT if non-invasive debug*/
+#define ARMV8_PMU_PMCR_LC	(1 << 6) /* Overflow on 64 bit cycle counter */
+#define ARMV8_PMU_PMCR_LP	(1 << 7) /* Long event counter enable */
+#define	ARMV8_PMU_PMCR_N_SHIFT	11	 /* Number of counters supported */
+#define	ARMV8_PMU_PMCR_N	(0x1f << ARMV8_PMU_PMCR_N_SHIFT)
+#define	ARMV8_PMU_PMCR_MASK	0xff	 /* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#define	ARMV8_PMU_OVSR_MASK		0xffffffff	/* Mask for writable bits */
+#define	ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#define	ARMV8_PMU_EVTYPE_MASK	0xc800ffff	/* Mask for writable bits */
+#define	ARMV8_PMU_EVTYPE_EVENT	0xffff		/* Mask for EVENT bits */
+
+/*
+ * Event filters for PMUv3
+ */
+#define	ARMV8_PMU_EXCLUDE_EL1	(1U << 31)
+#define	ARMV8_PMU_EXCLUDE_EL0	(1U << 30)
+#define	ARMV8_PMU_INCLUDE_EL2	(1U << 27)
+
+/*
+ * PMUSERENR: user enable reg
+ */
+#define ARMV8_PMU_USERENR_MASK	0xf		/* Mask for writable bits */
+#define ARMV8_PMU_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
+#define ARMV8_PMU_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
+#define ARMV8_PMU_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
+#define ARMV8_PMU_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
+
+/* PMMIR_EL1.SLOTS mask */
+#define ARMV8_PMU_SLOTS_MASK	0xff
+
+#define ARMV8_PMU_BUS_SLOTS_SHIFT 8
+#define ARMV8_PMU_BUS_SLOTS_MASK 0xff
+#define ARMV8_PMU_BUS_WIDTH_SHIFT 16
+#define ARMV8_PMU_BUS_WIDTH_MASK 0xf
+
+#endif /* __ASM_PERF_EVENT_H */
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 02/16] KVM: selftests: aarch64: Introduce vpmu_counter_access test
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
  2023-02-15  1:07 ` [REPOST PATCH 01/16] tools: arm64: Import perf_event.h Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-02-15  1:07 ` [REPOST PATCH 03/16] KVM: selftests: aarch64: vPMU register test for implemented counters Raghavendra Rao Ananta
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

Introduce the vpmu_counter_access test for arm64 platforms.
The test configures PMUv3 for a vCPU, sets PMCR_EL0.N for the vCPU,
and checks if the guest can consistently see the same number of PMU
event counters (PMCR_EL0.N) that userspace set.
The test is run with each of the PMCR_EL0.N values from 0 to 31
(for PMCR_EL0.N values greater than the host value, the test expects
KVM_SET_ONE_REG for PMCR_EL0 to fail).

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 tools/testing/selftests/kvm/Makefile          |   1 +
 .../kvm/aarch64/vpmu_counter_access.c         | 207 ++++++++++++++++++
 2 files changed, 208 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 1750f91dd9362..b27fea0ce5918 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -143,6 +143,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
new file mode 100644
index 0000000000000..7a4333f64daef
--- /dev/null
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -0,0 +1,207 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * vpmu_counter_access - Test vPMU event counter access
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
+ * This test checks if the guest can see the same number of the PMU event
+ * counters (PMCR_EL0.N) that userspace sets.
+ * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ */
+#include <kvm_util.h>
+#include <processor.h>
+#include <test_util.h>
+#include <vgic.h>
+#include <asm/perf_event.h>
+#include <linux/bitfield.h>
+
+/* The max number of the PMU event counters (excluding the cycle counter) */
+#define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
+
+/*
+ * The guest is configured with PMUv3 with @expected_pmcr_n number of
+ * event counters.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ */
+static void guest_code(uint64_t expected_pmcr_n)
+{
+	uint64_t pmcr, pmcr_n;
+
+	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
+
+	pmcr = read_sysreg(pmcr_el0);
+	pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
+
+	/* Make sure that PMCR_EL0.N indicates the value userspace set */
+	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
+
+	GUEST_DONE();
+}
+
+#define GICD_BASE_GPA	0x8000000ULL
+#define GICR_BASE_GPA	0x80A0000ULL
+
+/* Create a VM that has one vCPU with PMUv3 configured. */
+static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
+				     int *gic_fd)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct kvm_vcpu_init init;
+	uint8_t pmuver;
+	uint64_t dfr0, irq = 23;
+	struct kvm_device_attr irq_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
+		.addr = (uint64_t)&irq,
+	};
+	struct kvm_device_attr init_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
+	};
+
+	vm = vm_create(1);
+
+	/* Create vCPU with PMUv3 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+
+	/* Make sure that PMUv3 support is indicated in the ID register */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
+		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+
+	/* Initialize vPMU */
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+
+	*vcpup = vcpu;
+	return vm;
+}
+
+static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+{
+	struct ucall uc;
+
+	vcpu_args_set(vcpu, 1, pmcr_n);
+	vcpu_run(vcpu);
+	switch (get_ucall(vcpu, &uc)) {
+	case UCALL_ABORT:
+		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+		break;
+	case UCALL_DONE:
+		break;
+	default:
+		TEST_FAIL("Unknown ucall %lu", uc.cmd);
+		break;
+	}
+}
+
+/*
+ * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
+ * and run the test.
+ */
+static void run_test(uint64_t pmcr_n)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t sp, pmcr, pmcr_orig;
+	struct kvm_vcpu_init init;
+
+	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+
+	/* Save the initial sp to restore them later to run the guest again */
+	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
+
+	/* Update the PMCR_EL0.N with @pmcr_n */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+	pmcr = pmcr_orig & ~ARMV8_PMU_PMCR_N;
+	pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
+	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	/*
+	 * Reset and re-initialize the vCPU, and run the guest code again to
+	 * check if PMCR_EL0.N is preserved.
+	 */
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
+	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
+
+	run_vcpu(vcpu, pmcr_n);
+
+	close(gic_fd);
+	kvm_vm_free(vm);
+}
+
+/*
+ * Create a guest with one vCPU, and attempt to set the PMCR_EL0.N for
+ * the vCPU to @pmcr_n, which is larger than the host value.
+ * The attempt should fail as @pmcr_n is too big to set for the vCPU.
+ */
+static void run_error_test(uint64_t pmcr_n)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd, ret;
+	uint64_t pmcr, pmcr_orig;
+
+	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+
+	/* Update the PMCR_EL0.N with @pmcr_n */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
+	pmcr = pmcr_orig & ~ARMV8_PMU_PMCR_N;
+	pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
+
+	/* This should fail as @pmcr_n is too big to set for the vCPU */
+	ret = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
+	TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail",
+		    pmcr, pmcr_orig);
+
+	close(gic_fd);
+	kvm_vm_free(vm);
+}
+
+/*
+ * Return the default number of implemented PMU event counters excluding
+ * the cycle counter (i.e. PMCR_EL0.N value) for the guest.
+ */
+static uint64_t get_pmcr_n_limit(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+	uint64_t pmcr;
+
+	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	close(gic_fd);
+	kvm_vm_free(vm);
+	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
+}
+
+int main(void)
+{
+	uint64_t i, pmcr_n;
+
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
+
+	pmcr_n = get_pmcr_n_limit();
+	for (i = 0; i <= pmcr_n; i++)
+		run_test(i);
+
+	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
+		run_error_test(i);
+
+	return 0;
+}
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 03/16] KVM: selftests: aarch64: vPMU register test for implemented counters
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
  2023-02-15  1:07 ` [REPOST PATCH 01/16] tools: arm64: Import perf_event.h Raghavendra Rao Ananta
  2023-02-15  1:07 ` [REPOST PATCH 02/16] KVM: selftests: aarch64: Introduce vpmu_counter_access test Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-02-15  1:07 ` [REPOST PATCH 04/16] KVM: selftests: aarch64: vPMU register test for unimplemented counters Raghavendra Rao Ananta
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

Add a new test case to the vpmu_counter_access test to check if PMU
registers or their bits for implemented counters on the vCPU are
readable/writable as expected, and can be programmed to count events.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../kvm/aarch64/vpmu_counter_access.c         | 350 +++++++++++++++++-
 1 file changed, 347 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index 7a4333f64daef..b6593eee2be3d 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,7 +5,8 @@
  * Copyright (c) 2022 Google LLC.
  *
  * This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL0.N) that userspace sets.
+ * counters (PMCR_EL0.N) that userspace sets, and if the guest can access
+ * those counters.
  * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
  */
 #include <kvm_util.h>
@@ -18,14 +19,348 @@
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+/*
+ * The macros and functions below for reading/writing PMEVT{CNTR,TYPER}<n>_EL0
+ * were basically copied from arch/arm64/kernel/perf_event.c.
+ */
+#define PMEVN_CASE(n, case_macro) \
+	case n: case_macro(n); break
+
+#define PMEVN_SWITCH(x, case_macro)				\
+	do {							\
+		switch (x) {					\
+		PMEVN_CASE(0,  case_macro);			\
+		PMEVN_CASE(1,  case_macro);			\
+		PMEVN_CASE(2,  case_macro);			\
+		PMEVN_CASE(3,  case_macro);			\
+		PMEVN_CASE(4,  case_macro);			\
+		PMEVN_CASE(5,  case_macro);			\
+		PMEVN_CASE(6,  case_macro);			\
+		PMEVN_CASE(7,  case_macro);			\
+		PMEVN_CASE(8,  case_macro);			\
+		PMEVN_CASE(9,  case_macro);			\
+		PMEVN_CASE(10, case_macro);			\
+		PMEVN_CASE(11, case_macro);			\
+		PMEVN_CASE(12, case_macro);			\
+		PMEVN_CASE(13, case_macro);			\
+		PMEVN_CASE(14, case_macro);			\
+		PMEVN_CASE(15, case_macro);			\
+		PMEVN_CASE(16, case_macro);			\
+		PMEVN_CASE(17, case_macro);			\
+		PMEVN_CASE(18, case_macro);			\
+		PMEVN_CASE(19, case_macro);			\
+		PMEVN_CASE(20, case_macro);			\
+		PMEVN_CASE(21, case_macro);			\
+		PMEVN_CASE(22, case_macro);			\
+		PMEVN_CASE(23, case_macro);			\
+		PMEVN_CASE(24, case_macro);			\
+		PMEVN_CASE(25, case_macro);			\
+		PMEVN_CASE(26, case_macro);			\
+		PMEVN_CASE(27, case_macro);			\
+		PMEVN_CASE(28, case_macro);			\
+		PMEVN_CASE(29, case_macro);			\
+		PMEVN_CASE(30, case_macro);			\
+		default:					\
+			GUEST_ASSERT_1(0, x);			\
+		}						\
+	} while (0)
+
+#define RETURN_READ_PMEVCNTRN(n) \
+	return read_sysreg(pmevcntr##n##_el0)
+static unsigned long read_pmevcntrn(int n)
+{
+	PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN);
+	return 0;
+}
+
+#define WRITE_PMEVCNTRN(n) \
+	write_sysreg(val, pmevcntr##n##_el0)
+static void write_pmevcntrn(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVCNTRN);
+	isb();
+}
+
+#define READ_PMEVTYPERN(n) \
+	return read_sysreg(pmevtyper##n##_el0)
+static unsigned long read_pmevtypern(int n)
+{
+	PMEVN_SWITCH(n, READ_PMEVTYPERN);
+	return 0;
+}
+
+#define WRITE_PMEVTYPERN(n) \
+	write_sysreg(val, pmevtyper##n##_el0)
+static void write_pmevtypern(int n, unsigned long val)
+{
+	PMEVN_SWITCH(n, WRITE_PMEVTYPERN);
+	isb();
+}
+
+/* Read PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
+static inline unsigned long read_sel_evcntr(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevcntr_el0);
+}
+
+/* Write PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */
+static inline void write_sel_evcntr(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevcntr_el0);
+	isb();
+}
+
+/* Read PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
+static inline unsigned long read_sel_evtyper(int sel)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	return read_sysreg(pmxevtyper_el0);
+}
+
+/* Write PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */
+static inline void write_sel_evtyper(int sel, unsigned long val)
+{
+	write_sysreg(sel, pmselr_el0);
+	isb();
+	write_sysreg(val, pmxevtyper_el0);
+	isb();
+}
+
+static inline void enable_counter(int idx)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(BIT(idx) | v, pmcntenset_el0);
+	isb();
+}
+
+static inline void disable_counter(int idx)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(BIT(idx) | v, pmcntenclr_el0);
+	isb();
+}
+
+/*
+ * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
+ * accessors that test cases will use. Each of the accessors will
+ * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
+ * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
+ * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
+ *
+ * This is used to test that combinations of those accessors provide
+ * the consistent behavior.
+ */
+struct pmc_accessor {
+	/* A function to be used to read PMEVTCNTR<n>_EL0 */
+	unsigned long	(*read_cntr)(int idx);
+	/* A function to be used to write PMEVTCNTR<n>_EL0 */
+	void		(*write_cntr)(int idx, unsigned long val);
+	/* A function to be used to read PMEVTYPER<n>_EL0 */
+	unsigned long	(*read_typer)(int idx);
+	/* A function to be used to write PMEVTYPER<n>_EL0 */
+	void		(*write_typer)(int idx, unsigned long val);
+};
+
+struct pmc_accessor pmc_accessors[] = {
+	/* test with all direct accesses */
+	{ read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern },
+	/* test with all indirect accesses */
+	{ read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper },
+	/* read with direct accesses, and write with indirect accesses */
+	{ read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper },
+	/* read with indirect accesses, and write with direct accesses */
+	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
+};
+
+static void pmu_disable_reset(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Reset all counters, disabling them */
+	pmcr &= ~ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+static void pmu_enable(void)
+{
+	uint64_t pmcr = read_sysreg(pmcr_el0);
+
+	/* Enable the PMU and reset all the counters */
+	pmcr |= ARMV8_PMU_PMCR_E;
+	write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0);
+	isb();
+}
+
+static bool pmu_event_is_supported(uint64_t event)
+{
+	GUEST_ASSERT_1(event < 64, event);
+	return (read_sysreg(pmceid0_el0) & BIT(event));
+}
+
+#define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected)		\
+{									\
+	uint64_t _tval = read_sysreg(regname);				\
+									\
+	if (set_expected)						\
+		GUEST_ASSERT_3((_tval & mask), _tval, mask, set_expected); \
+	else								   \
+		GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
+}
+
+/*
+ * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
+ * are set or cleared as specified in @set_expected.
+ */
+static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected)
+{
+	GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmintenset_el1, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmintenclr_el1, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected);
+	GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected);
+}
+
+/*
+ * Check if the bit in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers corresponding
+ * to the specified counter (@pmc_idx) can be read/written as expected.
+ * When @set_op is true, it tries to set the bit for the counter in
+ * those registers by writing the SET registers (the bit won't be set
+ * if the counter is not implemented though).
+ * Otherwise, it tries to clear the bits in the registers by writing
+ * the CLR registers.
+ * Then, it checks if the values indicated in the registers are as expected.
+ */
+static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
+{
+	uint64_t pmcr_n, test_bit = BIT(pmc_idx);
+	bool set_expected = false;
+
+	if (set_op) {
+		write_sysreg(test_bit, pmcntenset_el0);
+		write_sysreg(test_bit, pmintenset_el1);
+		write_sysreg(test_bit, pmovsset_el0);
+
+		/* The bit will be set only if the counter is implemented */
+		pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
+		set_expected = (pmc_idx < pmcr_n) ? true : false;
+	} else {
+		write_sysreg(test_bit, pmcntenclr_el0);
+		write_sysreg(test_bit, pmintenclr_el1);
+		write_sysreg(test_bit, pmovsclr_el0);
+	}
+	check_bitmap_pmu_regs(test_bit, set_expected);
+}
+
+/*
+ * Tests for reading/writing registers for the (implemented) event counter
+ * specified by @pmc_idx.
+ */
+static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	uint64_t write_data, read_data, read_data_prev;
+
+	/* Disable all PMCs and reset all PMCs to zero. */
+	pmu_disable_reset();
+
+
+	/*
+	 * Tests for reading/writing {PMCNTEN,PMINTEN,PMOVS}{SET,CLR}_EL1.
+	 */
+
+	/* Make sure that the bit in those registers are set to 0 */
+	test_bitmap_pmu_regs(pmc_idx, false);
+	/* Test if setting the bit in those registers works */
+	test_bitmap_pmu_regs(pmc_idx, true);
+	/* Test if clearing the bit in those registers works */
+	test_bitmap_pmu_regs(pmc_idx, false);
+
+
+	/*
+	 * Tests for reading/writing the event type register.
+	 */
+
+	read_data = acc->read_typer(pmc_idx);
+	/*
+	 * Set the event type register to an arbitrary value just for testing
+	 * of reading/writing the register.
+	 * ArmARM says that for the event from 0x0000 to 0x003F,
+	 * the value indicated in the PMEVTYPER<n>_EL0.evtCount field is
+	 * the value written to the field even when the specified event
+	 * is not supported.
+	 */
+	write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+	acc->write_typer(pmc_idx, write_data);
+	read_data = acc->read_typer(pmc_idx);
+	GUEST_ASSERT_4(read_data == write_data,
+		       pmc_idx, acc, read_data, write_data);
+
+
+	/*
+	 * Tests for reading/writing the event count register.
+	 */
+
+	read_data = acc->read_cntr(pmc_idx);
+
+	/* The count value must be 0, as it is not used after the reset */
+	GUEST_ASSERT_3(read_data == 0, pmc_idx, acc, read_data);
+
+	write_data = read_data + pmc_idx + 0x12345;
+	acc->write_cntr(pmc_idx, write_data);
+	read_data = acc->read_cntr(pmc_idx);
+	GUEST_ASSERT_4(read_data == write_data,
+		       pmc_idx, acc, read_data, write_data);
+
+
+	/* The following test requires the INST_RETIRED event support. */
+	if (!pmu_event_is_supported(ARMV8_PMUV3_PERFCTR_INST_RETIRED))
+		return;
+
+	pmu_enable();
+	acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+
+	/*
+	 * Make sure that the counter doesn't count the INST_RETIRED
+	 * event when disabled, and the counter counts the event when enabled.
+	 */
+	disable_counter(pmc_idx);
+	read_data_prev = acc->read_cntr(pmc_idx);
+	read_data = acc->read_cntr(pmc_idx);
+	GUEST_ASSERT_4(read_data == read_data_prev,
+		       pmc_idx, acc, read_data, read_data_prev);
+
+	enable_counter(pmc_idx);
+	read_data = acc->read_cntr(pmc_idx);
+
+	/*
+	 * The counter should be increased by at least 1, as there is at
+	 * least one instruction between enabling the counter and reading
+	 * the counter (the test assumes that all event counters are not
+	 * being used by the host's higher priority events).
+	 */
+	GUEST_ASSERT_4(read_data > read_data_prev,
+		       pmc_idx, acc, read_data, read_data_prev);
+}
+
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
- * Check if @expected_pmcr_n is consistent with PMCR_EL0.N.
+ * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
+ * if reading/writing PMU registers for implemented counters can work
+ * as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
 	uint64_t pmcr, pmcr_n;
+	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
 
@@ -35,6 +370,15 @@ static void guest_code(uint64_t expected_pmcr_n)
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
 
+	/*
+	 * Tests for reading/writing PMU registers for implemented counters.
+	 * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = 0; pmc < pmcr_n; pmc++)
+			test_access_pmc_regs(&pmc_accessors[i], pmc);
+	}
+
 	GUEST_DONE();
 }
 
@@ -91,7 +435,7 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
 	vcpu_run(vcpu);
 	switch (get_ucall(vcpu, &uc)) {
 	case UCALL_ABORT:
-		REPORT_GUEST_ASSERT_2(uc, "values:%#lx %#lx");
+		REPORT_GUEST_ASSERT_4(uc, "values:%#lx %#lx %#lx %#lx");
 		break;
 	case UCALL_DONE:
 		break;
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 04/16] KVM: selftests: aarch64: vPMU register test for unimplemented counters
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
                   ` (2 preceding siblings ...)
  2023-02-15  1:07 ` [REPOST PATCH 03/16] KVM: selftests: aarch64: vPMU register test for implemented counters Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-02-15  1:07 ` [REPOST PATCH 05/16] selftests: KVM: aarch64: Refactor the vPMU counter access tests Raghavendra Rao Ananta
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

From: Reiji Watanabe <reijiw@google.com>

Add a new test case to the vpmu_counter_access test to check
if PMU registers or their bits for unimplemented counters are not
accessible or are RAZ, as expected.

Signed-off-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../kvm/aarch64/vpmu_counter_access.c         | 111 ++++++++++++++++--
 .../selftests/kvm/include/aarch64/processor.h |   1 +
 2 files changed, 102 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
index b6593eee2be3d..453f0dd240f44 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
@@ -5,8 +5,8 @@
  * Copyright (c) 2022 Google LLC.
  *
  * This test checks if the guest can see the same number of the PMU event
- * counters (PMCR_EL0.N) that userspace sets, and if the guest can access
- * those counters.
+ * counters (PMCR_EL0.N) that userspace sets, if the guest can access
+ * those counters, and if the guest cannot access any other counters.
  * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
  */
 #include <kvm_util.h>
@@ -20,7 +20,7 @@
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
 /*
- * The macros and functions below for reading/writing PMEVT{CNTR,TYPER}<n>_EL0
+ * The macros and functions below for reading/writing PMEV{CNTR,TYPER}<n>_EL0
  * were basically copied from arch/arm64/kernel/perf_event.c.
  */
 #define PMEVN_CASE(n, case_macro) \
@@ -148,9 +148,9 @@ static inline void disable_counter(int idx)
 }
 
 /*
- * The pmc_accessor structure has pointers to PMEVT{CNTR,TYPER}<n>_EL0
+ * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}<n>_EL0
  * accessors that test cases will use. Each of the accessors will
- * either directly reads/writes PMEVT{CNTR,TYPER}<n>_EL0
+ * either directly reads/writes PMEV{CNTR,TYPER}<n>_EL0
  * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through
  * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()).
  *
@@ -179,6 +179,51 @@ struct pmc_accessor pmc_accessors[] = {
 	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
 };
 
+#define INVALID_EC	(-1ul)
+uint64_t expected_ec = INVALID_EC;
+uint64_t op_end_addr;
+
+static void guest_sync_handler(struct ex_regs *regs)
+{
+	uint64_t esr, ec;
+
+	esr = read_sysreg(esr_el1);
+	ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
+	GUEST_ASSERT_4(op_end_addr && (expected_ec == ec),
+		       regs->pc, esr, ec, expected_ec);
+
+	/* Will go back to op_end_addr after the handler exits */
+	regs->pc = op_end_addr;
+
+	/*
+	 * Clear op_end_addr, and setting expected_ec to INVALID_EC
+	 * as a sign that an exception has occurred.
+	 */
+	op_end_addr = 0;
+	expected_ec = INVALID_EC;
+}
+
+/*
+ * Run the given operation that should trigger an exception with the
+ * given exception class. The exception handler (guest_sync_handler)
+ * will reset op_end_addr to 0, and expected_ec to INVALID_EC, and
+ * will come back to the instruction at the @done_label.
+ * The @done_label must be a unique label in this test program.
+ */
+#define TEST_EXCEPTION(ec, ops, done_label)		\
+{							\
+	extern int done_label;				\
+							\
+	WRITE_ONCE(op_end_addr, (uint64_t)&done_label);	\
+	GUEST_ASSERT(ec != INVALID_EC);			\
+	WRITE_ONCE(expected_ec, ec);			\
+	dsb(ish);					\
+	ops;						\
+	asm volatile(#done_label":");			\
+	GUEST_ASSERT(!op_end_addr);			\
+	GUEST_ASSERT(expected_ec == INVALID_EC);	\
+}
+
 static void pmu_disable_reset(void)
 {
 	uint64_t pmcr = read_sysreg(pmcr_el0);
@@ -350,16 +395,38 @@ static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
 		       pmc_idx, acc, read_data, read_data_prev);
 }
 
+/*
+ * Tests for reading/writing registers for the unimplemented event counter
+ * specified by @pmc_idx (>= PMCR_EL0.N).
+ */
+static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
+{
+	/*
+	 * Reading/writing the event count/type registers should cause
+	 * an UNDEFINED exception.
+	 */
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx), inv_rd_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0), inv_wr_cntr);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx), inv_rd_typer);
+	TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0), inv_wr_typer);
+	/*
+	 * The bit corresponding to the (unimplemented) counter in
+	 * {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers should be RAZ.
+	 */
+	test_bitmap_pmu_regs(pmc_idx, 1);
+	test_bitmap_pmu_regs(pmc_idx, 0);
+}
+
 /*
  * The guest is configured with PMUv3 with @expected_pmcr_n number of
  * event counters.
  * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
- * if reading/writing PMU registers for implemented counters can work
- * as expected.
+ * if reading/writing PMU registers for implemented or unimplemented
+ * counters can work as expected.
  */
 static void guest_code(uint64_t expected_pmcr_n)
 {
-	uint64_t pmcr, pmcr_n;
+	uint64_t pmcr, pmcr_n, unimp_mask;
 	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
@@ -370,15 +437,31 @@ static void guest_code(uint64_t expected_pmcr_n)
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
 
+	/*
+	 * Make sure that (RAZ) bits corresponding to unimplemented event
+	 * counters in {PMCNTEN,PMOVS}{SET,CLR}_EL1 registers are reset to zero.
+	 * (NOTE: bits for implemented event counters are reset to UNKNOWN)
+	 */
+	unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
+	check_bitmap_pmu_regs(unimp_mask, false);
+
 	/*
 	 * Tests for reading/writing PMU registers for implemented counters.
-	 * Use each combination of PMEVT{CNTR,TYPER}<n>_EL0 accessor functions.
+	 * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
 	 */
 	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
 		for (pmc = 0; pmc < pmcr_n; pmc++)
 			test_access_pmc_regs(&pmc_accessors[i], pmc);
 	}
 
+	/*
+	 * Tests for reading/writing PMU registers for unimplemented counters.
+	 * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
+			test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
+	}
 	GUEST_DONE();
 }
 
@@ -392,7 +475,7 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
-	uint8_t pmuver;
+	uint8_t pmuver, ec;
 	uint64_t dfr0, irq = 23;
 	struct kvm_device_attr irq_attr = {
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
@@ -405,11 +488,18 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	};
 
 	vm = vm_create(1);
+	vm_init_descriptor_tables(vm);
+	/* Catch exceptions for easier debugging */
+	for (ec = 0; ec < ESR_EC_NUM; ec++) {
+		vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec,
+					guest_sync_handler);
+	}
 
 	/* Create vCPU with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	vcpu_init_descriptor_tables(vcpu);
 	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
 	/* Make sure that PMUv3 support is indicated in the ID register */
@@ -478,6 +568,7 @@ static void run_test(uint64_t pmcr_n)
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	aarch64_vcpu_setup(vcpu, &init);
+	vcpu_init_descriptor_tables(vcpu);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
 
diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index 5f977528e09c0..52d87809356c8 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -104,6 +104,7 @@ enum {
 #define ESR_EC_SHIFT		26
 #define ESR_EC_MASK		(ESR_EC_NUM - 1)
 
+#define ESR_EC_UNKNOWN		0x0
 #define ESR_EC_SVC64		0x15
 #define ESR_EC_IABT		0x21
 #define ESR_EC_DABT		0x25
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 05/16] selftests: KVM: aarch64: Refactor the vPMU counter access tests
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
                   ` (3 preceding siblings ...)
  2023-02-15  1:07 ` [REPOST PATCH 04/16] KVM: selftests: aarch64: vPMU register test for unimplemented counters Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-02-15  1:07 ` [REPOST PATCH 06/16] tools: arm64: perf_event: Define Cycle counter enable/overflow bits Raghavendra Rao Ananta
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Refactor the existing counter access tests into their own
independent functions and make running the tests generic,
to make way for the upcoming tests.

As a part of the refactoring, rename the test file to the more
generic name vpmu_test.c.

No functional change intended.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 tools/testing/selftests/kvm/Makefile          |   2 +-
 .../{vpmu_counter_access.c => vpmu_test.c}    | 142 ++++++++++++------
 2 files changed, 100 insertions(+), 44 deletions(-)
 rename tools/testing/selftests/kvm/aarch64/{vpmu_counter_access.c => vpmu_test.c} (88%)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index b27fea0ce5918..a4d262e139b18 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -143,7 +143,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
-TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_test
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
similarity index 88%
rename from tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
rename to tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 453f0dd240f44..d72c3c9b9c39f 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * vpmu_counter_access - Test vPMU event counter access
+ * vpmu_test - Test the vPMU
  *
  * Copyright (c) 2022 Google LLC.
  *
@@ -147,6 +147,11 @@ static inline void disable_counter(int idx)
 	isb();
 }
 
+static inline uint64_t get_pmcr_n(void)
+{
+	return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
+}
+
 /*
  * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}<n>_EL0
  * accessors that test cases will use. Each of the accessors will
@@ -183,6 +188,23 @@ struct pmc_accessor pmc_accessors[] = {
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
 
+struct vpmu_vm {
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+};
+
+enum test_stage {
+	TEST_STAGE_COUNTER_ACCESS = 1,
+};
+
+struct guest_data {
+	enum test_stage test_stage;
+	uint64_t expected_pmcr_n;
+};
+
+static struct guest_data guest_data;
+
 static void guest_sync_handler(struct ex_regs *regs)
 {
 	uint64_t esr, ec;
@@ -295,7 +317,7 @@ static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
 		write_sysreg(test_bit, pmovsset_el0);
 
 		/* The bit will be set only if the counter is implemented */
-		pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
+		pmcr_n = get_pmcr_n();
 		set_expected = (pmc_idx < pmcr_n) ? true : false;
 	} else {
 		write_sysreg(test_bit, pmcntenclr_el0);
@@ -424,15 +446,14 @@ static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
  * if reading/writing PMU registers for implemented or unimplemented
  * counters can work as expected.
  */
-static void guest_code(uint64_t expected_pmcr_n)
+static void guest_counter_access_test(uint64_t expected_pmcr_n)
 {
-	uint64_t pmcr, pmcr_n, unimp_mask;
+	uint64_t pmcr_n, unimp_mask;
 	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
 
-	pmcr = read_sysreg(pmcr_el0);
-	pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
+	pmcr_n = get_pmcr_n();
 
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
@@ -462,6 +483,18 @@ static void guest_code(uint64_t expected_pmcr_n)
 		for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
 			test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
 	}
+}
+
+static void guest_code(void)
+{
+	switch (guest_data.test_stage) {
+	case TEST_STAGE_COUNTER_ACCESS:
+		guest_counter_access_test(guest_data.expected_pmcr_n);
+		break;
+	default:
+		GUEST_ASSERT_1(0, guest_data.test_stage);
+	}
+
 	GUEST_DONE();
 }
 
@@ -469,14 +502,14 @@ static void guest_code(uint64_t expected_pmcr_n)
 #define GICR_BASE_GPA	0x80A0000ULL
 
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
-				     int *gic_fd)
+static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
 	uint8_t pmuver, ec;
 	uint64_t dfr0, irq = 23;
+	struct vpmu_vm *vpmu_vm;
 	struct kvm_device_attr irq_attr = {
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
 		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
@@ -487,7 +520,10 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
 	};
 
-	vm = vm_create(1);
+	vpmu_vm = calloc(1, sizeof(*vpmu_vm));
+	TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm");
+
+	vpmu_vm->vm = vm = vm_create(1);
 	vm_init_descriptor_tables(vm);
 	/* Catch exceptions for easier debugging */
 	for (ec = 0; ec < ESR_EC_NUM; ec++) {
@@ -498,9 +534,9 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	/* Create vCPU with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
-	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	vpmu_vm->vcpu = vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
 	vcpu_init_descriptor_tables(vcpu);
-	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+	vpmu_vm->gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
 	/* Make sure that PMUv3 support is indicated in the ID register */
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
@@ -513,15 +549,21 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
 
-	*vcpup = vcpu;
-	return vm;
+	return vpmu_vm;
+}
+
+static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
+{
+	close(vpmu_vm->gic_fd);
+	kvm_vm_free(vpmu_vm->vm);
+	free(vpmu_vm);
 }
 
-static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+static void run_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct ucall uc;
 
-	vcpu_args_set(vcpu, 1, pmcr_n);
+	sync_global_to_guest(vcpu->vm, guest_data);
 	vcpu_run(vcpu);
 	switch (get_ucall(vcpu, &uc)) {
 	case UCALL_ABORT:
@@ -539,16 +581,18 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
  * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
  * and run the test.
  */
-static void run_test(uint64_t pmcr_n)
+static void run_counter_access_test(uint64_t pmcr_n)
 {
-	struct kvm_vm *vm;
+	struct vpmu_vm *vpmu_vm;
 	struct kvm_vcpu *vcpu;
-	int gic_fd;
 	uint64_t sp, pmcr, pmcr_orig;
 	struct kvm_vcpu_init init;
 
+	guest_data.expected_pmcr_n = pmcr_n;
+
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vpmu_vm = create_vpmu_vm(guest_code);
+	vcpu = vpmu_vm->vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
 	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
@@ -559,23 +603,22 @@ static void run_test(uint64_t pmcr_n)
 	pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
 	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
 
-	run_vcpu(vcpu, pmcr_n);
+	run_vcpu(vcpu);
 
 	/*
 	 * Reset and re-initialize the vCPU, and run the guest code again to
 	 * check if PMCR_EL0.N is preserved.
 	 */
-	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	vm_ioctl(vpmu_vm->vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	aarch64_vcpu_setup(vcpu, &init);
 	vcpu_init_descriptor_tables(vcpu);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
 
-	run_vcpu(vcpu, pmcr_n);
+	run_vcpu(vcpu);
 
-	close(gic_fd);
-	kvm_vm_free(vm);
+	destroy_vpmu_vm(vpmu_vm);
 }
 
 /*
@@ -583,15 +626,18 @@ static void run_test(uint64_t pmcr_n)
  * the vCPU to @pmcr_n, which is larger than the host value.
  * The attempt should fail as @pmcr_n is too big to set for the vCPU.
  */
-static void run_error_test(uint64_t pmcr_n)
+static void run_counter_access_error_test(uint64_t pmcr_n)
 {
-	struct kvm_vm *vm;
+	struct vpmu_vm *vpmu_vm;
 	struct kvm_vcpu *vcpu;
-	int gic_fd, ret;
+	int ret;
 	uint64_t pmcr, pmcr_orig;
 
+	guest_data.expected_pmcr_n = pmcr_n;
+
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vpmu_vm = create_vpmu_vm(guest_code);
+	vcpu = vpmu_vm->vcpu;
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
@@ -603,8 +649,25 @@ static void run_error_test(uint64_t pmcr_n)
 	TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail",
 		    pmcr, pmcr_orig);
 
-	close(gic_fd);
-	kvm_vm_free(vm);
+	destroy_vpmu_vm(vpmu_vm);
+}
+
+static void run_counter_access_tests(uint64_t pmcr_n)
+{
+	uint64_t i;
+
+	guest_data.test_stage = TEST_STAGE_COUNTER_ACCESS;
+
+	for (i = 0; i <= pmcr_n; i++)
+		run_counter_access_test(i);
+
+	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
+		run_counter_access_error_test(i);
+}
+
+static void run_tests(uint64_t pmcr_n)
+{
+	run_counter_access_tests(pmcr_n);
 }
 
 /*
@@ -613,30 +676,23 @@ static void run_error_test(uint64_t pmcr_n)
  */
 static uint64_t get_pmcr_n_limit(void)
 {
-	struct kvm_vm *vm;
-	struct kvm_vcpu *vcpu;
-	int gic_fd;
+	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
-	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
-	close(gic_fd);
-	kvm_vm_free(vm);
+	vpmu_vm = create_vpmu_vm(guest_code);
+	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	destroy_vpmu_vm(vpmu_vm);
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
 
 int main(void)
 {
-	uint64_t i, pmcr_n;
+	uint64_t pmcr_n;
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
 
 	pmcr_n = get_pmcr_n_limit();
-	for (i = 0; i <= pmcr_n; i++)
-		run_test(i);
-
-	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
-		run_error_test(i);
+	run_tests(pmcr_n);
 
 	return 0;
 }
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 06/16] tools: arm64: perf_event: Define Cycle counter enable/overflow bits
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
                   ` (4 preceding siblings ...)
  2023-02-15  1:07 ` [REPOST PATCH 05/16] selftests: KVM: aarch64: Refactor the vPMU counter access tests Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-03-03  0:46   ` Reiji Watanabe
  2023-02-15  1:07 ` [REPOST PATCH 07/16] selftests: KVM: aarch64: Add PMU cycle counter helpers Raghavendra Rao Ananta
                   ` (9 subsequent siblings)
  15 siblings, 1 reply; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Add the definitions of ARMV8_PMU_CNTOVS_C (Cycle counter overflow
bit) for overflow status registers and ARMV8_PMU_CNTENSET_C (Cycle
counter enable bit) for PMCNTENSET_EL0 register.
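
For illustration, a minimal guest-side sketch of how these bits are meant to
be used (assuming the KVM selftests' read_sysreg()/write_sysreg()/isb()
wrappers; illustrative only, not part of the patch):

static inline void demo_enable_cycle_counter(void)
{
	uint64_t v = read_sysreg(pmcntenset_el0);

	/* Set bit 31 (C) of PMCNTENSET_EL0 to enable the cycle counter */
	write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenset_el0);
	isb();
}

static inline bool demo_cycle_counter_overflowed(void)
{
	/* Bit 31 (C) of the overflow status register flags a cycle counter overflow */
	return read_sysreg(pmovsclr_el0) & ARMV8_PMU_CNTOVS_C;
}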

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 tools/arch/arm64/include/asm/perf_event.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h
index 97e49a4d4969f..8ce23aabf6fe6 100644
--- a/tools/arch/arm64/include/asm/perf_event.h
+++ b/tools/arch/arm64/include/asm/perf_event.h
@@ -222,9 +222,11 @@
 /*
  * PMOVSR: counters overflow flag status reg
  */
+#define ARMV8_PMU_CNTOVS_C      (1 << 31) /* Cycle counter overflow bit */
 #define	ARMV8_PMU_OVSR_MASK		0xffffffff	/* Mask for writable bits */
 #define	ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_OVSR_MASK
 
+
 /*
  * PMXEVTYPER: Event selection reg
  */
@@ -247,6 +249,11 @@
 #define ARMV8_PMU_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
 #define ARMV8_PMU_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
 
+/*
+ * PMCNTENSET: Count Enable set reg
+ */
+#define ARMV8_PMU_CNTENSET_C    (1 << 31) /* Cycle counter enable bit */
+
 /* PMMIR_EL1.SLOTS mask */
 #define ARMV8_PMU_SLOTS_MASK	0xff
 
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 07/16] selftests: KVM: aarch64: Add PMU cycle counter helpers
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
                   ` (5 preceding siblings ...)
  2023-02-15  1:07 ` [REPOST PATCH 06/16] tools: arm64: perf_event: Define Cycle counter enable/overflow bits Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-03-03  3:06   ` Reiji Watanabe
  2023-02-15  1:07 ` [REPOST PATCH 08/16] selftests: KVM: aarch64: Consider PMU event filters for VM creation Raghavendra Rao Ananta
                   ` (8 subsequent siblings)
  15 siblings, 1 reply; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Add basic helpers for the test to access the cycle counter
registers. The helpers will be used in the upcoming patches
to run the tests related to cycle counter.

No functional change intended.
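
A rough usage sketch of the new helpers (illustrative only; pmu_enable() and
GUEST_ASSERT() are assumed from the existing test and the selftest
framework):

static void demo_count_some_cycles(void)
{
	volatile int i;
	uint64_t cycles;

	pmu_enable();
	reset_cycle_counter();
	write_pmccfiltr(0);		/* count cycles in EL0 and EL1 */
	enable_cycle_counter();

	for (i = 0; i < 1000; i++)
		;			/* some guest work to be counted */

	cycles = read_cycle_counter();
	disable_cycle_counter();

	/* Something must have been counted while the counter was enabled */
	GUEST_ASSERT(cycles > 0);
}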

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 40 +++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index d72c3c9b9c39f..15aebc7d7dc94 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -147,6 +147,46 @@ static inline void disable_counter(int idx)
 	isb();
 }
 
+static inline uint64_t read_cycle_counter(void)
+{
+	return read_sysreg(pmccntr_el0);
+}
+
+static inline void reset_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcr_el0);
+
+	write_sysreg(ARMV8_PMU_PMCR_C | v, pmcr_el0);
+	isb();
+}
+
+static inline void enable_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenset_el0);
+	isb();
+}
+
+static inline void disable_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenclr_el0);
+	isb();
+}
+
+static inline void write_pmccfiltr(unsigned long val)
+{
+	write_sysreg(val, pmccfiltr_el0);
+	isb();
+}
+
+static inline uint64_t read_pmccfiltr(void)
+{
+	return read_sysreg(pmccfiltr_el0);
+}
+
 static inline uint64_t get_pmcr_n(void)
 {
 	return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 08/16] selftests: KVM: aarch64: Consider PMU event filters for VM creation
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
                   ` (6 preceding siblings ...)
  2023-02-15  1:07 ` [REPOST PATCH 07/16] selftests: KVM: aarch64: Add PMU cycle counter helpers Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-03-03  4:30   ` Reiji Watanabe
  2023-02-15  1:07 ` [REPOST PATCH 09/16] selftests: KVM: aarch64: Add KVM PMU event filter test Raghavendra Rao Ananta
                   ` (7 subsequent siblings)
  15 siblings, 1 reply; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Accept a list of KVM PMU event filters as an argument while creating
a VM via create_vpmu_vm(). Upcoming patches will leverage this to
test the event filters' functionality.

No functional change intended.
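
For reference, a hedged sketch (not part of the patch) of how a caller is
expected to pass filters; set_event_filters() stops at the first entry whose
nevents is zero, so the zero-initialized tail of the array terminates the
list:

static void demo_create_filtered_vm(void)
{
	struct vpmu_vm *vpmu_vm;
	struct kvm_pmu_event_filter filters[MAX_EVENT_FILTERS_PER_VM] = {
		{
			.base_event	= ARMV8_PMUV3_PERFCTR_INST_RETIRED,
			.nevents	= 1,
			.action		= KVM_PMU_EVENT_ALLOW,
		},
	};

	/* Only INST_RETIRED is allowed; all other events default to deny */
	vpmu_vm = create_vpmu_vm(guest_code, filters);

	/* ... run the guest ... */

	destroy_vpmu_vm(vpmu_vm);
}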

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 64 +++++++++++++++++--
 1 file changed, 60 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 15aebc7d7dc94..2b3a4fa3afa9c 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -15,10 +15,14 @@
 #include <vgic.h>
 #include <asm/perf_event.h>
 #include <linux/bitfield.h>
+#include <linux/bitmap.h>
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+/* The max number of event numbers that's supported */
+#define ARMV8_PMU_MAX_EVENTS		64
+
 /*
  * The macros and functions below for reading/writing PMEV{CNTR,TYPER}<n>_EL0
  * were basically copied from arch/arm64/kernel/perf_event.c.
@@ -224,6 +228,8 @@ struct pmc_accessor pmc_accessors[] = {
 	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
 };
 
+#define MAX_EVENT_FILTERS_PER_VM 10
+
 #define INVALID_EC	(-1ul)
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
@@ -232,6 +238,7 @@ struct vpmu_vm {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	int gic_fd;
+	unsigned long *pmu_filter;
 };
 
 enum test_stage {
@@ -541,8 +548,51 @@ static void guest_code(void)
 #define GICD_BASE_GPA	0x8000000ULL
 #define GICR_BASE_GPA	0x80A0000ULL
 
+static unsigned long *
+set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
+{
+	int j;
+	unsigned long *pmu_filter;
+	struct kvm_device_attr filter_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_FILTER,
+	};
+
+	/*
+	 * Setting up of the bitmap is similar to what KVM does.
+	 * If the first filter denies an event, default all the others to allow, and vice-versa.
+	 */
+	pmu_filter = bitmap_zalloc(ARMV8_PMU_MAX_EVENTS);
+	TEST_ASSERT(pmu_filter, "Failed to allocate the pmu_filter");
+
+	if (pmu_event_filters[0].action == KVM_PMU_EVENT_DENY)
+		bitmap_fill(pmu_filter, ARMV8_PMU_MAX_EVENTS);
+
+	for (j = 0; j < MAX_EVENT_FILTERS_PER_VM; j++) {
+		struct kvm_pmu_event_filter *pmu_event_filter = &pmu_event_filters[j];
+
+		if (!pmu_event_filter->nevents)
+			break;
+
+		pr_debug("Applying event filter:: event: 0x%x; action: %s\n",
+				pmu_event_filter->base_event,
+				pmu_event_filter->action == KVM_PMU_EVENT_ALLOW ? "ALLOW" : "DENY");
+
+		filter_attr.addr = (uint64_t) pmu_event_filter;
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+
+		if (pmu_event_filter->action == KVM_PMU_EVENT_ALLOW)
+			__set_bit(pmu_event_filter->base_event, pmu_filter);
+		else
+			__clear_bit(pmu_event_filter->base_event, pmu_filter);
+	}
+
+	return pmu_filter;
+}
+
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static struct vpmu_vm *create_vpmu_vm(void *guest_code)
+static struct vpmu_vm *
+create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
@@ -586,6 +636,9 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
 
 	/* Initialize vPMU */
+	if (pmu_event_filters)
+		vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
 
@@ -594,6 +647,8 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 
 static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
 {
+	if (vpmu_vm->pmu_filter)
+		bitmap_free(vpmu_vm->pmu_filter);
 	close(vpmu_vm->gic_fd);
 	kvm_vm_free(vpmu_vm->vm);
 	free(vpmu_vm);
@@ -631,7 +686,7 @@ static void run_counter_access_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu = vpmu_vm->vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
@@ -676,7 +731,7 @@ static void run_counter_access_error_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu = vpmu_vm->vcpu;
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
@@ -719,9 +774,10 @@ static uint64_t get_pmcr_n_limit(void)
 	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
 	destroy_vpmu_vm(vpmu_vm);
+
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
 
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 09/16] selftests: KVM: aarch64: Add KVM PMU event filter test
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
                   ` (7 preceding siblings ...)
  2023-02-15  1:07 ` [REPOST PATCH 08/16] selftests: KVM: aarch64: Consider PMU event filters for VM creation Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-03-04 20:28   ` Reiji Watanabe
  2023-02-15  1:07 ` [REPOST PATCH 10/16] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test Raghavendra Rao Ananta
                   ` (6 subsequent siblings)
  15 siblings, 1 reply; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Add tests to validate KVM's KVM_ARM_VCPU_PMU_V3_FILTER
attribute by applying a series of filters from userspace
to allow or deny events. The guest validates each
configuration by checking that it can count only the
events that are allowed.

The workload to execute a precise number of instructions
(execute_precise_instrs() and precise_instrs_loop()) is taken
from the kvm-unit-tests' arm/pmu.c.
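
A quick standalone check of the instruction-count arithmetic behind that
workload (2 fixed instructions plus 2 per loop iteration), illustrative only:

#include <assert.h>

int main(void)
{
	int num = 100;			/* requested instruction count      */
	int loop = (num - 2) / 2;	/* iterations of the subs/b.gt pair */

	/* isb + nop + 2 instructions per loop iteration == requested count */
	assert(num >= 4 && (num - 2) % 2 == 0);
	assert(2 + 2 * loop == num);
	return 0;
}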

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 261 +++++++++++++++++-
 1 file changed, 258 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 2b3a4fa3afa9c..3dfb770b538e9 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -2,12 +2,21 @@
 /*
  * vpmu_test - Test the vPMU
  *
- * Copyright (c) 2022 Google LLC.
+ * The test suite contains a series of checks to validate the vPMU
+ * functionality. This test runs only when KVM_CAP_ARM_PMU_V3 is
+ * supported on the host. The tests include:
  *
- * This test checks if the guest can see the same number of the PMU event
+ * 1. Check if the guest can see the same number of the PMU event
  * counters (PMCR_EL0.N) that userspace sets, if the guest can access
  * those counters, and if the guest cannot access any other counters.
- * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ *
+ * 2. Test the functionality of KVM's KVM_ARM_VCPU_PMU_V3_FILTER
+ * attribute by applying a series of filters in various combinations
+ * of allowing or denying the events. The guest validates it by
+ * checking if it's able to count only the events that are allowed.
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
  */
 #include <kvm_util.h>
 #include <processor.h>
@@ -230,6 +239,12 @@ struct pmc_accessor pmc_accessors[] = {
 
 #define MAX_EVENT_FILTERS_PER_VM 10
 
+#define EVENT_ALLOW(ev) \
+	{.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_ALLOW}
+
+#define EVENT_DENY(ev) \
+	{.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_DENY}
+
 #define INVALID_EC	(-1ul)
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
@@ -243,11 +258,13 @@ struct vpmu_vm {
 
 enum test_stage {
 	TEST_STAGE_COUNTER_ACCESS = 1,
+	TEST_STAGE_KVM_EVENT_FILTER,
 };
 
 struct guest_data {
 	enum test_stage test_stage;
 	uint64_t expected_pmcr_n;
+	unsigned long *pmu_filter;
 };
 
 static struct guest_data guest_data;
@@ -329,6 +346,113 @@ static bool pmu_event_is_supported(uint64_t event)
 		GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
 }
 
+
+/*
+ * Extra instructions inserted by the compiler would be difficult to compensate
+ * for, so hand assemble everything between, and including, the PMCR accesses
+ * to start and stop counting. isb instructions are inserted to make sure
+ * pmccntr read after this function returns the exact instructions executed
+ * in the controlled block. Total instrs = isb + nop + 2*loop = 2 + 2*loop.
+ */
+static inline void precise_instrs_loop(int loop, uint32_t pmcr)
+{
+	uint64_t pmcr64 = pmcr;
+
+	asm volatile(
+	"	msr	pmcr_el0, %[pmcr]\n"
+	"	isb\n"
+	"1:	subs	%w[loop], %w[loop], #1\n"
+	"	b.gt	1b\n"
+	"	nop\n"
+	"	msr	pmcr_el0, xzr\n"
+	"	isb\n"
+	: [loop] "+r" (loop)
+	: [pmcr] "r" (pmcr64)
+	: "cc");
+}
+
+/*
+ * Execute a known number of guest instructions. Only even instruction counts
+ * greater than or equal to 4 are supported by the in-line assembly code. The
+ * control register (PMCR_EL0) is initialized with the provided value (allowing
+ * for example for the cycle counter or event counters to be reset). At the end
+ * of the exact instruction loop, zero is written to PMCR_EL0 to disable
+ * counting, allowing the cycle counter or event counters to be read at the
+ * leisure of the calling code.
+ */
+static void execute_precise_instrs(int num, uint32_t pmcr)
+{
+	int loop = (num - 2) / 2;
+
+	GUEST_ASSERT_2(num >= 4 && ((num - 2) % 2 == 0), num, loop);
+	precise_instrs_loop(loop, pmcr);
+}
+
+static void test_instructions_count(int pmc_idx, bool expect_count)
+{
+	int i;
+	struct pmc_accessor *acc;
+	uint64_t cnt;
+	int instrs_count = 100;
+
+	enable_counter(pmc_idx);
+
+	/* Test the event using all the possible ways to configure the event */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		acc = &pmc_accessors[i];
+
+		pmu_disable_reset();
+
+		acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+
+		/* Enable the PMU and execute a precise number of instructions as a workload */
+		execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
+
+		/* If a count is expected, the counter should be increased by 'instrs_count' */
+		cnt = acc->read_cntr(pmc_idx);
+		GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
+				i, expect_count, cnt, instrs_count);
+	}
+
+	disable_counter(pmc_idx);
+}
+
+static void test_cycles_count(bool expect_count)
+{
+	uint64_t cnt;
+
+	pmu_enable();
+	reset_cycle_counter();
+
+	/* Count cycles in EL0 and EL1 */
+	write_pmccfiltr(0);
+	enable_cycle_counter();
+
+	cnt = read_cycle_counter();
+
+	/*
+	 * If a count is expected by the test, the cycle counter should be increased by
+	 * at least 1, as there is at least one instruction between enabling the
+	 * counter and reading the counter.
+	 */
+	GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
+
+	disable_cycle_counter();
+	pmu_disable_reset();
+}
+
+static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
+{
+	switch (event) {
+	case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
+		test_instructions_count(pmc_idx, expect_count);
+		break;
+	case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
+		test_cycles_count(expect_count);
+		break;
+	}
+}
+
 /*
  * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
  * are set or cleared as specified in @set_expected.
@@ -532,12 +656,37 @@ static void guest_counter_access_test(uint64_t expected_pmcr_n)
 	}
 }
 
+static void guest_event_filter_test(unsigned long *pmu_filter)
+{
+	uint64_t event;
+
+	/*
+	 * Check if PMCEIDx_EL0 is advertised as configured by the userspace.
+	 * It's possible that even though the userspace allowed it, it may not be supported
+	 * by the hardware and could be advertised as 'disabled'. Hence, only validate against
+	 * the events that are advertised.
+	 *
+	 * Furthermore, check that the event is in fact counting when allowed, and not otherwise.
+	 */
+	for (event = 0; event < ARMV8_PMU_MAX_EVENTS - 1; event++) {
+		if (pmu_event_is_supported(event)) {
+			GUEST_ASSERT_1(test_bit(event, pmu_filter), event);
+			test_event_count(event, 0, true);
+		} else {
+			test_event_count(event, 0, false);
+		}
+	}
+}
+
 static void guest_code(void)
 {
 	switch (guest_data.test_stage) {
 	case TEST_STAGE_COUNTER_ACCESS:
 		guest_counter_access_test(guest_data.expected_pmcr_n);
 		break;
+	case TEST_STAGE_KVM_EVENT_FILTER:
+		guest_event_filter_test(guest_data.pmu_filter);
+		break;
 	default:
 		GUEST_ASSERT_1(0, guest_data.test_stage);
 	}
@@ -760,9 +909,115 @@ static void run_counter_access_tests(uint64_t pmcr_n)
 		run_counter_access_error_test(i);
 }
 
+static struct kvm_pmu_event_filter pmu_event_filters[][MAX_EVENT_FILTERS_PER_VM] = {
+	/*
+	 * Each set of events denotes a filter configuration for that VM.
+	 * During VM creation, the filters will be applied in the sequence mentioned here.
+	 */
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+	},
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+};
+
+static void run_kvm_event_filter_error_tests(void)
+{
+	int ret;
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct vpmu_vm *vpmu_vm;
+	struct kvm_vcpu_init init;
+	struct kvm_pmu_event_filter pmu_event_filter = EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
+	struct kvm_device_attr filter_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_FILTER,
+		.addr = (uint64_t) &pmu_event_filter,
+	};
+
+	/* KVM should not allow configuring filters after the PMU is initialized */
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
+	ret = __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+	TEST_ASSERT(ret == -1 && errno == EBUSY,
+			"Failed to disallow setting an event filter after PMU init");
+	destroy_vpmu_vm(vpmu_vm);
+
+	/* Check for invalid event filter setting */
+	vm = vm_create(1);
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+
+	pmu_event_filter.base_event = UINT16_MAX;
+	pmu_event_filter.nevents = 5;
+	ret = __vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+	TEST_ASSERT(ret == -1 && errno == EINVAL, "Failed check for invalid filter configuration");
+	kvm_vm_free(vm);
+}
+
+static void run_kvm_event_filter_test(void)
+{
+	int i;
+	struct vpmu_vm *vpmu_vm;
+	struct kvm_vm *vm;
+	vm_vaddr_t pmu_filter_gva;
+	size_t pmu_filter_bmap_sz = BITS_TO_LONGS(ARMV8_PMU_MAX_EVENTS) * sizeof(unsigned long);
+
+	guest_data.test_stage = TEST_STAGE_KVM_EVENT_FILTER;
+
+	/* Test for valid filter configurations */
+	for (i = 0; i < ARRAY_SIZE(pmu_event_filters); i++) {
+		vpmu_vm = create_vpmu_vm(guest_code, pmu_event_filters[i]);
+		vm = vpmu_vm->vm;
+
+		pmu_filter_gva = vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_VADDR);
+		memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter_bmap_sz);
+		guest_data.pmu_filter = (unsigned long *) pmu_filter_gva;
+
+		run_vcpu(vpmu_vm->vcpu);
+
+		destroy_vpmu_vm(vpmu_vm);
+	}
+
+	/* Check if KVM is handling the errors correctly */
+	run_kvm_event_filter_error_tests();
+}
+
 static void run_tests(uint64_t pmcr_n)
 {
 	run_counter_access_tests(pmcr_n);
+	run_kvm_event_filter_test();
 }
 
 /*
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 10/16] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
                   ` (8 preceding siblings ...)
  2023-02-15  1:07 ` [REPOST PATCH 09/16] selftests: KVM: aarch64: Add KVM PMU event filter test Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-03-07  1:19   ` Reiji Watanabe
  2023-02-15  1:07 ` [REPOST PATCH 11/16] selftests: KVM: aarch64: Add vCPU migration test for PMU Raghavendra Rao Ananta
                   ` (5 subsequent siblings)
  15 siblings, 1 reply; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

KVM doesn't allow the guest to modify the filter types,
such as counting events in non-secure/secure EL2, EL3,
and so on. Validate this by force-configuring the bits
in the PMXEVTYPER_EL0, PMEVTYPERn_EL0, and PMCCFILTR_EL0
registers.

The test goes further by creating an event that counts
only in EL2 and verifying that the counter does not
advance.
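
For readers less familiar with the EVTYPER filter layout exercised here, a
short illustrative sketch of the value the test programs (bit assignments per
the Arm ARM: P is bit 31, U is bit 30, NSH is bit 27; not part of the patch):

/*
 * "Count INST_RETIRED in (non-secure) EL2 only":
 *   ARMV8_PMU_EXCLUDE_EL1 (P, bit 31)   - don't count at EL1
 *   ARMV8_PMU_EXCLUDE_EL0 (U, bit 30)   - don't count at EL0
 *   ARMV8_PMU_INCLUDE_EL2 (NSH, bit 27) - do count at non-secure EL2
 */
uint64_t typer = ARMV8_PMUV3_PERFCTR_INST_RETIRED |
		 ARMV8_PMU_INCLUDE_EL2 |
		 ARMV8_PMU_EXCLUDE_EL1 |
		 ARMV8_PMU_EXCLUDE_EL0;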

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 85 +++++++++++++++++++
 1 file changed, 85 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 3dfb770b538e9..5c166df245589 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -15,6 +15,10 @@
  * of allowing or denying the events. The guest validates it by
  * checking if it's able to count only the events that are allowed.
  *
+ * 3. KVM doesn't allow the guest to count the events attributed with
+ * higher exception levels (EL2, EL3). Verify this functionality by
+ * configuring and trying to count the events for EL2 in the guest.
+ *
  * Copyright (c) 2022 Google LLC.
  *
  */
@@ -23,6 +27,7 @@
 #include <test_util.h>
 #include <vgic.h>
 #include <asm/perf_event.h>
+#include <linux/arm-smccc.h>
 #include <linux/bitfield.h>
 #include <linux/bitmap.h>
 
@@ -259,6 +264,7 @@ struct vpmu_vm {
 enum test_stage {
 	TEST_STAGE_COUNTER_ACCESS = 1,
 	TEST_STAGE_KVM_EVENT_FILTER,
+	TEST_STAGE_KVM_EVTYPE_FILTER,
 };
 
 struct guest_data {
@@ -678,6 +684,70 @@ static void guest_event_filter_test(unsigned long *pmu_filter)
 	}
 }
 
+static void guest_evtype_filter_test(void)
+{
+	int i;
+	struct pmc_accessor *acc;
+	uint64_t typer, cnt;
+	struct arm_smccc_res res;
+
+	pmu_enable();
+
+	/*
+	 * KVM blocks the guests from creating events for counting in Secure/Non-Secure Hyp (EL2),
+	 * Monitor (EL3), and Multithreading configuration. It applies the mask
+	 * ARMV8_PMU_EVTYPE_MASK against guest accesses to PMXEVTYPER_EL0, PMEVTYPERn_EL0,
+	 * and PMCCFILTR_EL0 registers to prevent this. Check if KVM honors this using all possible
+	 * ways to configure the EVTYPER.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		acc = &pmc_accessors[i];
+
+		/* Set all filter bits (31-24), readback, and check against the mask */
+		acc->write_typer(0, 0xff000000);
+		typer = acc->read_typer(0);
+
+		GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
+				typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);
+
+		/*
+		 * Regardless of ARMV8_PMU_EVTYPE_MASK, KVM sets perf attr.exclude_hv
+		 * to not count NS-EL2 events. Verify this functionality by configuring
+		 * a NS-EL2 event, for which the count shouldn't increment.
+		 */
+		typer = ARMV8_PMUV3_PERFCTR_INST_RETIRED;
+		typer |= ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
+		acc->write_typer(0, typer);
+		acc->write_cntr(0, 0);
+		enable_counter(0);
+
+		/* Issue a hypercall to enter EL2 and return */
+		memset(&res, 0, sizeof(res));
+		smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
+
+		cnt = acc->read_cntr(0);
+		GUEST_ASSERT_3(cnt == 0, cnt, typer, i);
+	}
+
+	/* Check the same sequence for the Cycle counter */
+	write_pmccfiltr(0xff000000);
+	typer = read_pmccfiltr();
+	GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
+				typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);
+
+	typer = ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
+	write_pmccfiltr(typer);
+	reset_cycle_counter();
+	enable_cycle_counter();
+
+	/* Issue a hypercall to enter EL2 and return */
+	memset(&res, 0, sizeof(res));
+	smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
+
+	cnt = read_cycle_counter();
+	GUEST_ASSERT_2(cnt == 0, cnt, typer);
+}
+
 static void guest_code(void)
 {
 	switch (guest_data.test_stage) {
@@ -687,6 +757,9 @@ static void guest_code(void)
 	case TEST_STAGE_KVM_EVENT_FILTER:
 		guest_event_filter_test(guest_data.pmu_filter);
 		break;
+	case TEST_STAGE_KVM_EVTYPE_FILTER:
+		guest_evtype_filter_test();
+		break;
 	default:
 		GUEST_ASSERT_1(0, guest_data.test_stage);
 	}
@@ -1014,10 +1087,22 @@ static void run_kvm_event_filter_test(void)
 	run_kvm_event_filter_error_tests();
 }
 
+static void run_kvm_evtype_filter_test(void)
+{
+	struct vpmu_vm *vpmu_vm;
+
+	guest_data.test_stage = TEST_STAGE_KVM_EVTYPE_FILTER;
+
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
+	run_vcpu(vpmu_vm->vcpu);
+	destroy_vpmu_vm(vpmu_vm);
+}
+
 static void run_tests(uint64_t pmcr_n)
 {
 	run_counter_access_tests(pmcr_n);
 	run_kvm_event_filter_test();
+	run_kvm_evtype_filter_test();
 }
 
 /*
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 11/16] selftests: KVM: aarch64: Add vCPU migration test for PMU
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
                   ` (9 preceding siblings ...)
  2023-02-15  1:07 ` [REPOST PATCH 10/16] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-03-07  3:43   ` Reiji Watanabe
  2023-02-15  1:07 ` [REPOST PATCH 12/16] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality Raghavendra Rao Ananta
                   ` (4 subsequent siblings)
  15 siblings, 1 reply; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Implement a stress test for KVM by frequently force-migrating the
vCPU to random pCPUs in the system. This validates KVM's
save/restore functionality and the starting/stopping of
PMU counters as necessary.
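
The migration mechanic itself is plain CPU-affinity pinning. A minimal
standalone illustration of the same idea (not from the patch; it pins the
calling thread rather than a vCPU thread):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/sysinfo.h>

/* Pin the calling thread to a random online CPU, the way the test's
 * scheduler thread re-pins the vCPU thread. */
static void migrate_self_once(void)
{
	cpu_set_t cpuset, online;
	unsigned int cpu;

	sched_getaffinity(0, sizeof(online), &online);
	do {
		cpu = rand() % get_nprocs_conf();
	} while (!CPU_ISSET(cpu, &online));

	CPU_ZERO(&cpuset);
	CPU_SET(cpu, &cpuset);
	if (pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset))
		perror("pthread_setaffinity_np");
	else
		printf("now pinned to CPU %u\n", cpu);
}

int main(void)
{
	int i;

	srand(time(NULL));
	for (i = 0; i < 10; i++) {
		migrate_self_once();
		usleep(2000);	/* ~2ms, matching the test's default interval */
	}
	return 0;
}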

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 195 +++++++++++++++++-
 1 file changed, 193 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 5c166df245589..0c9d801f4e602 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -19,9 +19,15 @@
  * higher exception levels (EL2, EL3). Verify this functionality by
  * configuring and trying to count the events for EL2 in the guest.
  *
+ * 4. Since the PMU registers are per-cpu, stress KVM by frequently
+ * migrating the guest vCPU to random pCPUs in the system, and check
+ * if the vPMU is still behaving as expected.
+ *
  * Copyright (c) 2022 Google LLC.
  *
  */
+#define _GNU_SOURCE
+
 #include <kvm_util.h>
 #include <processor.h>
 #include <test_util.h>
@@ -30,6 +36,11 @@
 #include <linux/arm-smccc.h>
 #include <linux/bitfield.h>
 #include <linux/bitmap.h>
+#include <stdlib.h>
+#include <pthread.h>
+#include <sys/sysinfo.h>
+
+#include "delay.h"
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
@@ -37,6 +48,8 @@
 /* The max number of event numbers that's supported */
 #define ARMV8_PMU_MAX_EVENTS		64
 
+#define msecs_to_usecs(msec)		((msec) * 1000LL)
+
 /*
  * The macros and functions below for reading/writing PMEV{CNTR,TYPER}<n>_EL0
  * were basically copied from arch/arm64/kernel/perf_event.c.
@@ -265,6 +278,7 @@ enum test_stage {
 	TEST_STAGE_COUNTER_ACCESS = 1,
 	TEST_STAGE_KVM_EVENT_FILTER,
 	TEST_STAGE_KVM_EVTYPE_FILTER,
+	TEST_STAGE_VCPU_MIGRATION,
 };
 
 struct guest_data {
@@ -275,6 +289,19 @@ struct guest_data {
 
 static struct guest_data guest_data;
 
+#define VCPU_MIGRATIONS_TEST_ITERS_DEF		1000
+#define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS	2
+
+struct test_args {
+	int vcpu_migration_test_iter;
+	int vcpu_migration_test_migrate_freq_ms;
+};
+
+static struct test_args test_args = {
+	.vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF,
+	.vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS,
+};
+
 static void guest_sync_handler(struct ex_regs *regs)
 {
 	uint64_t esr, ec;
@@ -352,7 +379,6 @@ static bool pmu_event_is_supported(uint64_t event)
 		GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
 }
 
-
 /*
  * Extra instructions inserted by the compiler would be difficult to compensate
  * for, so hand assemble everything between, and including, the PMCR accesses
@@ -459,6 +485,13 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 	}
 }
 
+static void test_basic_pmu_functionality(void)
+{
+	/* Test events on generic and cycle counters */
+	test_instructions_count(0, true);
+	test_cycles_count(true);
+}
+
 /*
  * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
  * are set or cleared as specified in @set_expected.
@@ -748,6 +781,16 @@ static void guest_evtype_filter_test(void)
 	GUEST_ASSERT_2(cnt == 0, cnt, typer);
 }
 
+static void guest_vcpu_migration_test(void)
+{
+	/*
+	 * While the userspace continuously migrates this vCPU to random pCPUs,
+	 * run basic PMU functionalities and verify the results.
+	 */
+	while (test_args.vcpu_migration_test_iter--)
+		test_basic_pmu_functionality();
+}
+
 static void guest_code(void)
 {
 	switch (guest_data.test_stage) {
@@ -760,6 +803,9 @@ static void guest_code(void)
 	case TEST_STAGE_KVM_EVTYPE_FILTER:
 		guest_evtype_filter_test();
 		break;
+	case TEST_STAGE_VCPU_MIGRATION:
+		guest_vcpu_migration_test();
+		break;
 	default:
 		GUEST_ASSERT_1(0, guest_data.test_stage);
 	}
@@ -837,6 +883,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 
 	vpmu_vm->vm = vm = vm_create(1);
 	vm_init_descriptor_tables(vm);
+
 	/* Catch exceptions for easier debugging */
 	for (ec = 0; ec < ESR_EC_NUM; ec++) {
 		vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec,
@@ -881,6 +928,8 @@ static void run_vcpu(struct kvm_vcpu *vcpu)
 	struct ucall uc;
 
 	sync_global_to_guest(vcpu->vm, guest_data);
+	sync_global_to_guest(vcpu->vm, test_args);
+
 	vcpu_run(vcpu);
 	switch (get_ucall(vcpu, &uc)) {
 	case UCALL_ABORT:
@@ -1098,11 +1147,112 @@ static void run_kvm_evtype_filter_test(void)
 	destroy_vpmu_vm(vpmu_vm);
 }
 
+struct vcpu_migrate_data {
+	struct vpmu_vm *vpmu_vm;
+	pthread_t *pt_vcpu;
+	bool vcpu_done;
+};
+
+static void *run_vcpus_migrate_test_func(void *arg)
+{
+	struct vcpu_migrate_data *migrate_data = arg;
+	struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
+
+	run_vcpu(vpmu_vm->vcpu);
+	migrate_data->vcpu_done = true;
+
+	return NULL;
+}
+
+static uint32_t get_pcpu(void)
+{
+	uint32_t pcpu;
+	unsigned int nproc_conf;
+	cpu_set_t online_cpuset;
+
+	nproc_conf = get_nprocs_conf();
+	sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset);
+
+	/* Randomly find an available pCPU to place the vCPU on */
+	do {
+		pcpu = rand() % nproc_conf;
+	} while (!CPU_ISSET(pcpu, &online_cpuset));
+
+	return pcpu;
+}
+
+static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
+{
+	int ret;
+	cpu_set_t cpuset;
+	uint32_t new_pcpu = get_pcpu();
+
+	CPU_ZERO(&cpuset);
+	CPU_SET(new_pcpu, &cpuset);
+
+	pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu);
+
+	ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset);
+
+	/* Allow the error where the vCPU thread is already finished */
+	TEST_ASSERT(ret == 0 || ret == ESRCH,
+		    "Failed to migrate the vCPU to pCPU: %u; ret: %d\n", new_pcpu, ret);
+
+	return ret;
+}
+
+static void *vcpus_migrate_func(void *arg)
+{
+	struct vcpu_migrate_data *migrate_data = arg;
+
+	while (!migrate_data->vcpu_done) {
+		usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms));
+		migrate_vcpu(migrate_data);
+	}
+
+	return NULL;
+}
+
+static void run_vcpu_migration_test(uint64_t pmcr_n)
+{
+	int ret;
+	struct vpmu_vm *vpmu_vm;
+	pthread_t pt_vcpu, pt_sched;
+	struct vcpu_migrate_data migrate_data = {
+		.pt_vcpu = &pt_vcpu,
+		.vcpu_done = false,
+	};
+
+	__TEST_REQUIRE(get_nprocs() >= 2, "At least two pCPUs needed for vCPU migration test");
+
+	guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
+	guest_data.expected_pmcr_n = pmcr_n;
+
+	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(guest_code, NULL);
+
+	/* Initialize random number generation for migrating vCPUs to random pCPUs */
+	srand(time(NULL));
+
+	/* Spawn a vCPU thread */
+	ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data);
+	TEST_ASSERT(!ret, "Failed to create the vCPU thread");
+
+	/* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */
+	ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data);
+	TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs");
+
+	pthread_join(pt_sched, NULL);
+	pthread_join(pt_vcpu, NULL);
+
+	destroy_vpmu_vm(vpmu_vm);
+}
+
 static void run_tests(uint64_t pmcr_n)
 {
 	run_counter_access_tests(pmcr_n);
 	run_kvm_event_filter_test();
 	run_kvm_evtype_filter_test();
+	run_vcpu_migration_test(pmcr_n);
 }
 
 /*
@@ -1121,12 +1271,53 @@ static uint64_t get_pmcr_n_limit(void)
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
 
-int main(void)
+static void print_help(char *name)
+{
+	pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n",
+		name);
+	pr_info("\t-i: Number of iterations of vCPU migrations test (default: %u)\n",
+		VCPU_MIGRATIONS_TEST_ITERS_DEF);
+	pr_info("\t-m: Interval (in ms) between vCPU migrations to a different pCPU (default: %u)\n",
+		VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS);
+	pr_info("\t-h: print this help screen\n");
+}
+
+static bool parse_args(int argc, char *argv[])
+{
+	int opt;
+
+	while ((opt = getopt(argc, argv, "hi:m:")) != -1) {
+		switch (opt) {
+		case 'i':
+			test_args.vcpu_migration_test_iter =
+				atoi_positive("Nr vCPU migration iterations", optarg);
+			break;
+		case 'm':
+			test_args.vcpu_migration_test_migrate_freq_ms =
+				atoi_positive("vCPU migration frequency", optarg);
+			break;
+		case 'h':
+		default:
+			goto err;
+		}
+	}
+
+	return true;
+
+err:
+	print_help(argv[0]);
+	return false;
+}
+
+int main(int argc, char *argv[])
 {
 	uint64_t pmcr_n;
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
 
+	if (!parse_args(argc, argv))
+		exit(KSFT_SKIP);
+
 	pmcr_n = get_pmcr_n_limit();
 	run_tests(pmcr_n);
 
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 12/16] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
                   ` (10 preceding siblings ...)
  2023-02-15  1:07 ` [REPOST PATCH 11/16] selftests: KVM: aarch64: Add vCPU migration test for PMU Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-03-07  6:09   ` Reiji Watanabe
  2023-02-15  1:07 ` [REPOST PATCH 13/16] selftests: KVM: aarch64: Test chained events for PMU Raghavendra Rao Ananta
                   ` (3 subsequent siblings)
  15 siblings, 1 reply; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Extend the vCPU migration test to also validate the vPMU's
functionality when set up for overflow conditions.
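
A quick standalone check of the pre-overflow arithmetic the patch adds
(PRE_OVERFLOW_32 = GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1, i.e. the counter
overflows after exactly COUNT_TO_OVERFLOW increments); illustrative only:

#include <assert.h>
#include <stdint.h>

int main(void)
{
	const uint64_t count_to_overflow = 0xf;
	uint32_t cnt = UINT32_MAX - count_to_overflow + 1;	/* PRE_OVERFLOW_32 */

	/* After exactly 'count_to_overflow' increments the 32-bit counter
	 * wraps to 0, which is when the overflow IRQ is expected to fire. */
	cnt += count_to_overflow;
	assert(cnt == 0);
	return 0;
}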

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 223 ++++++++++++++++--
 1 file changed, 198 insertions(+), 25 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 0c9d801f4e602..066dc17fa3906 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -21,7 +21,9 @@
  *
  * 4. Since the PMU registers are per-cpu, stress KVM by frequently
  * migrating the guest vCPU to random pCPUs in the system, and check
- * if the vPMU is still behaving as expected.
+ * if the vPMU is still behaving as expected. The sub-tests include
+ * testing basic functionalities such as basic counters behavior,
+ * overflow, and overflow interrupts.
  *
  * Copyright (c) 2022 Google LLC.
  *
@@ -41,13 +43,27 @@
 #include <sys/sysinfo.h>
 
 #include "delay.h"
+#include "gic.h"
+#include "spinlock.h"
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+/* The cycle counter bit position that's common among the PMU registers */
+#define ARMV8_PMU_CYCLE_COUNTER_IDX	31
+
 /* The max number of event numbers that's supported */
 #define ARMV8_PMU_MAX_EVENTS		64
 
+#define PMU_IRQ				23
+
+#define COUNT_TO_OVERFLOW	0xFULL
+#define PRE_OVERFLOW_32		(GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1)
+#define PRE_OVERFLOW_64		(GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1)
+
+#define GICD_BASE_GPA	0x8000000ULL
+#define GICR_BASE_GPA	0x80A0000ULL
+
 #define msecs_to_usecs(msec)		((msec) * 1000LL)
 
 /*
@@ -162,6 +178,17 @@ static inline void write_sel_evtyper(int sel, unsigned long val)
 	isb();
 }
 
+static inline void write_pmovsclr(unsigned long val)
+{
+	write_sysreg(val, pmovsclr_el0);
+	isb();
+}
+
+static unsigned long read_pmovsclr(void)
+{
+	return read_sysreg(pmovsclr_el0);
+}
+
 static inline void enable_counter(int idx)
 {
 	uint64_t v = read_sysreg(pmcntenset_el0);
@@ -178,11 +205,33 @@ static inline void disable_counter(int idx)
 	isb();
 }
 
+static inline void enable_irq(int idx)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(BIT(idx) | v, pmintenset_el1);
+	isb();
+}
+
+static inline void disable_irq(int idx)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(BIT(idx) | v, pmintenclr_el1);
+	isb();
+}
+
 static inline uint64_t read_cycle_counter(void)
 {
 	return read_sysreg(pmccntr_el0);
 }
 
+static inline void write_cycle_counter(uint64_t v)
+{
+	write_sysreg(v, pmccntr_el0);
+	isb();
+}
+
 static inline void reset_cycle_counter(void)
 {
 	uint64_t v = read_sysreg(pmcr_el0);
@@ -289,6 +338,15 @@ struct guest_data {
 
 static struct guest_data guest_data;
 
+/* Data to communicate among guest threads */
+struct guest_irq_data {
+	uint32_t pmc_idx_bmap;
+	uint32_t irq_received_bmap;
+	struct spinlock lock;
+};
+
+static struct guest_irq_data guest_irq_data;
+
 #define VCPU_MIGRATIONS_TEST_ITERS_DEF		1000
 #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS	2
 
@@ -322,6 +380,79 @@ static void guest_sync_handler(struct ex_regs *regs)
 	expected_ec = INVALID_EC;
 }
 
+static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_bmap)
+{
+	/*
+	 * Fail if there's an interrupt from unexpected PMCs.
+	 * All the expected events' IRQs may not arrive at the same time.
+	 * Hence, check if the interrupt is valid only if it's expected.
+	 */
+	if (pmovsclr & BIT(pmc_idx)) {
+		GUEST_ASSERT_3(pmc_idx_bmap & BIT(pmc_idx), pmc_idx, pmovsclr, pmc_idx_bmap);
+		write_pmovsclr(BIT(pmc_idx));
+	}
+}
+
+static void guest_irq_handler(struct ex_regs *regs)
+{
+	uint32_t pmc_idx_bmap;
+	uint64_t i, pmcr_n = get_pmcr_n();
+	uint32_t pmovsclr = read_pmovsclr();
+	unsigned int intid = gic_get_and_ack_irq();
+
+	/* No other IRQ apart from the PMU IRQ is expected */
+	GUEST_ASSERT_1(intid == PMU_IRQ, intid);
+
+	spin_lock(&guest_irq_data.lock);
+	pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
+
+	for (i = 0; i < pmcr_n; i++)
+		guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
+	guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);
+
+	/* Mark IRQ as received for the corresponding PMCs */
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
+	spin_unlock(&guest_irq_data.lock);
+
+	gic_set_eoi(intid);
+}
+
+static int pmu_irq_received(int pmc_idx)
+{
+	bool irq_received;
+
+	spin_lock(&guest_irq_data.lock);
+	irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	return irq_received;
+}
+
+static void pmu_irq_init(int pmc_idx)
+{
+	write_pmovsclr(BIT(pmc_idx));
+
+	spin_lock(&guest_irq_data.lock);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	enable_irq(pmc_idx);
+}
+
+static void pmu_irq_exit(int pmc_idx)
+{
+	write_pmovsclr(BIT(pmc_idx));
+
+	spin_lock(&guest_irq_data.lock);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	disable_irq(pmc_idx);
+}
+
 /*
  * Run the given operation that should trigger an exception with the
  * given exception class. The exception handler (guest_sync_handler)
@@ -420,12 +551,20 @@ static void execute_precise_instrs(int num, uint32_t pmcr)
 	precise_instrs_loop(loop, pmcr);
 }
 
-static void test_instructions_count(int pmc_idx, bool expect_count)
+static void test_instructions_count(int pmc_idx, bool expect_count, bool test_overflow)
 {
 	int i;
 	struct pmc_accessor *acc;
-	uint64_t cnt;
-	int instrs_count = 100;
+	uint64_t cntr_val = 0;
+	int instrs_count = 500;
+
+	if (test_overflow) {
+		/* Overflow scenarios can only be tested when a count is expected */
+		GUEST_ASSERT_1(expect_count, pmc_idx);
+
+		cntr_val = PRE_OVERFLOW_32;
+		pmu_irq_init(pmc_idx);
+	}
 
 	enable_counter(pmc_idx);
 
@@ -433,41 +572,68 @@ static void test_instructions_count(int pmc_idx, bool expect_count)
 	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
 		acc = &pmc_accessors[i];
 
-		pmu_disable_reset();
-
+		acc->write_cntr(pmc_idx, cntr_val);
 		acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
 
-		/* Enable the PMU and execute precisely number of instructions as a workload */
-		execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
+		/*
+		 * Enable the PMU and execute a precise number of instructions as a workload.
+		 * Since execute_precise_instrs() disables the PMU at the end, 'instrs_count'
+		 * should have enough instructions to raise an IRQ.
+		 */
+		execute_precise_instrs(instrs_count, ARMV8_PMU_PMCR_E);
 
-		/* If a count is expected, the counter should be increased by 'instrs_count' */
-		cnt = acc->read_cntr(pmc_idx);
-		GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
-				i, expect_count, cnt, instrs_count);
+		/*
+		 * If an overflow is expected, only check for the overflow flag.
+		 * As the overflow interrupt is enabled, the interrupt would add additional
+		 * instructions and mess up the precise instruction count. Hence, measure
+		 * the instructions count only when the test is not set up for an overflow.
+		 */
+		if (test_overflow) {
+			GUEST_ASSERT_2(pmu_irq_received(pmc_idx), pmc_idx, i);
+		} else {
+			uint64_t cnt = acc->read_cntr(pmc_idx);
+
+			GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
+					pmc_idx, i, cnt, expect_count);
+		}
 	}
 
-	disable_counter(pmc_idx);
+	if (test_overflow)
+		pmu_irq_exit(pmc_idx);
 }
 
-static void test_cycles_count(bool expect_count)
+static void test_cycles_count(bool expect_count, bool test_overflow)
 {
 	uint64_t cnt;
 
-	pmu_enable();
-	reset_cycle_counter();
+	if (test_overflow) {
+		/* Overflow scenarios can only be tested when a count is expected */
+		GUEST_ASSERT(expect_count);
+
+		write_cycle_counter(PRE_OVERFLOW_64);
+		pmu_irq_init(ARMV8_PMU_CYCLE_COUNTER_IDX);
+	} else {
+		reset_cycle_counter();
+	}
 
 	/* Count cycles in EL0 and EL1 */
 	write_pmccfiltr(0);
 	enable_cycle_counter();
 
+	/* Enable the PMU and execute a precise number of instructions as a workload */
+	execute_precise_instrs(500, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
 	cnt = read_cycle_counter();
 
 	/*
 	 * If a count is expected by the test, the cycle counter should be increased by
-	 * at least 1, as there is at least one instruction between enabling the
+	 * at least 1, as there are a number of instructions between enabling the
 	 * counter and reading the counter.
 	 */
 	GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
+	if (test_overflow) {
+		GUEST_ASSERT_2(pmu_irq_received(ARMV8_PMU_CYCLE_COUNTER_IDX), cnt, expect_count);
+		pmu_irq_exit(ARMV8_PMU_CYCLE_COUNTER_IDX);
+	}
 
 	disable_cycle_counter();
 	pmu_disable_reset();
@@ -477,19 +643,28 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 {
 	switch (event) {
 	case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
-		test_instructions_count(pmc_idx, expect_count);
+		test_instructions_count(pmc_idx, expect_count, false);
 		break;
 	case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
-		test_cycles_count(expect_count);
+		test_cycles_count(expect_count, false);
 		break;
 	}
 }
 
 static void test_basic_pmu_functionality(void)
 {
+	local_irq_disable();
+	gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
+	gic_irq_enable(PMU_IRQ);
+	local_irq_enable();
+
 	/* Test events on generic and cycle counters */
-	test_instructions_count(0, true);
-	test_cycles_count(true);
+	test_instructions_count(0, true, false);
+	test_cycles_count(true, false);
+
+	/* Test overflow with interrupts on generic and cycle counters */
+	test_instructions_count(0, true, true);
+	test_cycles_count(true, true);
 }
 
 /*
@@ -813,9 +988,6 @@ static void guest_code(void)
 	GUEST_DONE();
 }
 
-#define GICD_BASE_GPA	0x8000000ULL
-#define GICR_BASE_GPA	0x80A0000ULL
-
 static unsigned long *
 set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
 {
@@ -866,7 +1038,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
 	uint8_t pmuver, ec;
-	uint64_t dfr0, irq = 23;
+	uint64_t dfr0, irq = PMU_IRQ;
 	struct vpmu_vm *vpmu_vm;
 	struct kvm_device_attr irq_attr = {
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
@@ -883,6 +1055,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 
 	vpmu_vm->vm = vm = vm_create(1);
 	vm_init_descriptor_tables(vm);
+	vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
 
 	/* Catch exceptions for easier debugging */
 	for (ec = 0; ec < ESR_EC_NUM; ec++) {
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 13/16] selftests: KVM: aarch64: Test chained events for PMU
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
                   ` (11 preceding siblings ...)
  2023-02-15  1:07 ` [REPOST PATCH 12/16] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-03-08  3:15   ` Reiji Watanabe
  2023-02-15  1:07 ` [REPOST PATCH 14/16] selftests: KVM: aarch64: Add PMU test to chain all the counters Raghavendra Rao Ananta
                   ` (2 subsequent siblings)
  15 siblings, 1 reply; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Extend the vPMU's vCPU migration test to validate
chained events and their overflow conditions.
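
A condensed guest-side sketch of the chaining setup being validated
(illustrative only; it assumes the test's existing helpers such as
write_pmevtypern(), write_pmevcntrn(), read_sel_evcntr(), pmu_enable(), and
the PRE_OVERFLOW_32 definition from the previous patch):

static void demo_chained_pair(void)
{
	pmu_enable();

	/* Even counter counts instructions, the next odd one counts its overflows */
	write_pmevtypern(0, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
	write_pmevtypern(1, ARMV8_PMUV3_PERFCTR_CHAIN);

	write_pmevcntrn(0, PRE_OVERFLOW_32);	/* overflows after a handful of instructions */
	write_pmevcntrn(1, 0);

	enable_counter(1);
	enable_counter(0);

	/* ... run a workload retiring more than COUNT_TO_OVERFLOW instructions ... */

	/* Counter 0 wrapped once, so the CHAIN counter should have ticked once */
	GUEST_ASSERT(read_sel_evcntr(1) == 1);
}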

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 76 ++++++++++++++++++-
 1 file changed, 75 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 066dc17fa3906..de725f4339ad5 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -23,7 +23,7 @@
  * migrating the guest vCPU to random pCPUs in the system, and check
  * if the vPMU is still behaving as expected. The sub-tests include
  * testing basic functionalities such as basic counters behavior,
- * overflow, and overflow interrupts.
+ * overflow, overflow interrupts, and chained events.
  *
  * Copyright (c) 2022 Google LLC.
  *
@@ -61,6 +61,8 @@
 #define PRE_OVERFLOW_32		(GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1)
 #define PRE_OVERFLOW_64		(GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1)
 
+#define ALL_SET_64		GENMASK(63, 0)
+
 #define GICD_BASE_GPA	0x8000000ULL
 #define GICR_BASE_GPA	0x80A0000ULL
 
@@ -639,6 +641,75 @@ static void test_cycles_count(bool expect_count, bool test_overflow)
 	pmu_disable_reset();
 }
 
+static void test_chained_count(int pmc_idx)
+{
+	int i, chained_pmc_idx;
+	struct pmc_accessor *acc;
+	uint64_t pmcr_n, cnt, cntr_val;
+
+	/* The test needs at least two PMCs */
+	pmcr_n = get_pmcr_n();
+	GUEST_ASSERT_1(pmcr_n >= 2, pmcr_n);
+
+	/*
+	 * The chained counter's idx is always (pmc_idx + 1).
+	 * pmc_idx should be even, as the CHAIN event only counts
+	 * on odd numbered counters.
+	 */
+	GUEST_ASSERT_1(pmc_idx % 2 == 0, pmc_idx);
+
+	/*
+	 * The max counter idx that the chained counter can occupy is
+	 * (pmcr_n - 1), while the actual event sits on (pmcr_n - 2).
+	 */
+	chained_pmc_idx = pmc_idx + 1;
+	GUEST_ASSERT(chained_pmc_idx < pmcr_n);
+
+	enable_counter(chained_pmc_idx);
+	pmu_irq_init(chained_pmc_idx);
+
+	/* Configure the chained event using all the possible ways */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		acc = &pmc_accessors[i];
+
+		/* Test if the chained counter increments when the base event overflows */
+
+		cntr_val = 1;
+		acc->write_cntr(chained_pmc_idx, cntr_val);
+		acc->write_typer(chained_pmc_idx, ARMV8_PMUV3_PERFCTR_CHAIN);
+
+		/* Chain the counter with pmc_idx that's configured for an overflow */
+		test_instructions_count(pmc_idx, true, true);
+
+		/*
+		 * pmc_idx is also configured to run for all the ARRAY_SIZE(pmc_accessors)
+		 * combinations. Hence, the chained counter (chained_pmc_idx) is expected
+		 * to read cntr_val + ARRAY_SIZE(pmc_accessors).
+		 */
+		cnt = acc->read_cntr(chained_pmc_idx);
+		GUEST_ASSERT_4(cnt == cntr_val + ARRAY_SIZE(pmc_accessors),
+				pmc_idx, i, cnt, cntr_val + ARRAY_SIZE(pmc_accessors));
+
+		/* Test for the overflow of the chained counter itself */
+
+		cntr_val = ALL_SET_64;
+		acc->write_cntr(chained_pmc_idx, cntr_val);
+
+		test_instructions_count(pmc_idx, true, true);
+
+		/*
+		 * At this point, an interrupt should've been fired for the chained
+		 * counter (which validates the overflow bit), and the counter should've
+		 * wrapped around to ARRAY_SIZE(pmc_accessors) - 1.
+		 */
+		cnt = acc->read_cntr(chained_pmc_idx);
+		GUEST_ASSERT_4(cnt == ARRAY_SIZE(pmc_accessors) - 1,
+				pmc_idx, i, cnt, ARRAY_SIZE(pmc_accessors));
+	}
+
+	pmu_irq_exit(chained_pmc_idx);
+}
+
 static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 {
 	switch (event) {
@@ -665,6 +736,9 @@ static void test_basic_pmu_functionality(void)
 	/* Test overflow with interrupts on generic and cycle counters */
 	test_instructions_count(0, true, true);
 	test_cycles_count(true, true);
+
+	/* Test chained events */
+	test_chained_count(0);
 }
 
 /*
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 14/16] selftests: KVM: aarch64: Add PMU test to chain all the counters
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
                   ` (12 preceding siblings ...)
  2023-02-15  1:07 ` [REPOST PATCH 13/16] selftests: KVM: aarch64: Test chained events for PMU Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-03-08  3:40   ` Reiji Watanabe
  2023-02-15  1:07 ` [REPOST PATCH 15/16] selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation Raghavendra Rao Ananta
  2023-02-15  1:07 ` [REPOST PATCH 16/16] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs Raghavendra Rao Ananta
  15 siblings, 1 reply; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Extend the vCPU migration test to occupy all the vPMU counters by
configuring chained events on the odd numbered counters, each chained
with its preceding even numbered counter, and verify the expected
behavior.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 60 +++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index de725f4339ad5..fd00acb9391c8 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -710,6 +710,63 @@ static void test_chained_count(int pmc_idx)
 	pmu_irq_exit(chained_pmc_idx);
 }
 
+static void test_chain_all_counters(void)
+{
+	int i;
+	uint64_t cnt, pmcr_n = get_pmcr_n();
+	struct pmc_accessor *acc = &pmc_accessors[0];
+
+	/*
+	 * Test the occupancy of all the event counters, by chaining the
+	 * alternate counters. The test assumes that the host hasn't
+	 * occupied any counters. Hence, if the test fails, it could be
+	 * because all the counters weren't available to the guest or
+	 * there's actually a bug in KVM.
+	 */
+
+	/*
+	 * Configure even numbered counters to count cpu-cycles, and chain
+	 * each of them with its odd numbered counter.
+	 */
+	for (i = 0; i < pmcr_n; i++) {
+		if (i % 2) {
+			acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CHAIN);
+			acc->write_cntr(i, 1);
+		} else {
+			pmu_irq_init(i);
+			acc->write_cntr(i, PRE_OVERFLOW_32);
+			acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
+		}
+		enable_counter(i);
+	}
+
+	/* Introduce some cycles */
+	execute_precise_instrs(500, ARMV8_PMU_PMCR_E);
+
+	/*
+	 * An overflow interrupt should've arrived for all the even numbered
+	 * counters but none for the odd numbered ones. The odd numbered ones
+	 * should've incremented exactly by 1.
+	 */
+	for (i = 0; i < pmcr_n; i++) {
+		if (i % 2) {
+			GUEST_ASSERT_1(!pmu_irq_received(i), i);
+
+			cnt = acc->read_cntr(i);
+			GUEST_ASSERT_2(cnt == 2, i, cnt);
+		} else {
+			GUEST_ASSERT_1(pmu_irq_received(i), i);
+		}
+	}
+
+	/* Cleanup the states */
+	for (i = 0; i < pmcr_n; i++) {
+		if (i % 2 == 0)
+			pmu_irq_exit(i);
+		disable_counter(i);
+	}
+}
+
 static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 {
 	switch (event) {
@@ -739,6 +796,9 @@ static void test_basic_pmu_functionality(void)
 
 	/* Test chained events */
 	test_chained_count(0);
+
+	/* Test running chained events on all the implemented counters */
+	test_chain_all_counters();
 }
 
 /*
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 15/16] selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
                   ` (13 preceding siblings ...)
  2023-02-15  1:07 ` [REPOST PATCH 14/16] selftests: KVM: aarch64: Add PMU test to chain all the counters Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-02-15  1:07 ` [REPOST PATCH 16/16] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs Raghavendra Rao Ananta
  15 siblings, 0 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

The PMU test's create_vpmu_vm() currently creates a VM with only
one vCPU. Extend this to accept the number of vCPUs as an argument
to create a multi-vCPU VM. This will help the upcoming patches to
test the vPMU context across multiple vCPUs.

No functional change intended.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 82 +++++++++++--------
 1 file changed, 49 insertions(+), 33 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index fd00acb9391c8..239fc7e06b3b9 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -320,7 +320,8 @@ uint64_t op_end_addr;
 
 struct vpmu_vm {
 	struct kvm_vm *vm;
-	struct kvm_vcpu *vcpu;
+	int nr_vcpus;
+	struct kvm_vcpu **vcpus;
 	int gic_fd;
 	unsigned long *pmu_filter;
 };
@@ -1164,10 +1165,11 @@ set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_
 	return pmu_filter;
 }
 
-/* Create a VM that has one vCPU with PMUv3 configured. */
+/* Create a VM with PMUv3 configured. */
 static struct vpmu_vm *
-create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
+create_vpmu_vm(int nr_vcpus, void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 {
+	int i;
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
@@ -1187,7 +1189,11 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 	vpmu_vm = calloc(1, sizeof(*vpmu_vm));
 	TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm");
 
-	vpmu_vm->vm = vm = vm_create(1);
+	vpmu_vm->vcpus = calloc(nr_vcpus, sizeof(struct kvm_vcpu *));
+	TEST_ASSERT(vpmu_vm->vcpus, "Failed to allocate kvm_vcpus");
+	vpmu_vm->nr_vcpus = nr_vcpus;
+
+	vpmu_vm->vm = vm = vm_create(nr_vcpus);
 	vm_init_descriptor_tables(vm);
 	vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
 
@@ -1197,26 +1203,35 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 					guest_sync_handler);
 	}
 
-	/* Create vCPU with PMUv3 */
+	/* Create vCPUs with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
-	vpmu_vm->vcpu = vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
-	vcpu_init_descriptor_tables(vcpu);
-	vpmu_vm->gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
-	/* Make sure that PMUv3 support is indicated in the ID register */
-	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
-	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
-	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
-		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
-		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+	for (i = 0; i < nr_vcpus; i++) {
+		vpmu_vm->vcpus[i] = vcpu = aarch64_vcpu_add(vm, i, &init, guest_code);
+		vcpu_init_descriptor_tables(vcpu);
+	}
 
-	/* Initialize vPMU */
-	if (pmu_event_filters)
-		vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+	/* vGIC setup is expected after the vCPUs are created but before the vPMU is initialized */
+	vpmu_vm->gic_fd = vgic_v3_setup(vm, nr_vcpus, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
-	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
-	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+	for (i = 0; i < nr_vcpus; i++) {
+		vcpu = vpmu_vm->vcpus[i];
+
+		/* Make sure that PMUv3 support is indicated in the ID register */
+		vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+		pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+		TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+			pmuver >= ID_AA64DFR0_PMUVER_8_0,
+			"Unexpected PMUVER (0x%x) on the vCPU %d with PMUv3", i, pmuver);
+
+		/* Initialize vPMU */
+		if (pmu_event_filters)
+			vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+	}
 
 	return vpmu_vm;
 }
@@ -1227,6 +1242,7 @@ static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
 		bitmap_free(vpmu_vm->pmu_filter);
 	close(vpmu_vm->gic_fd);
 	kvm_vm_free(vpmu_vm->vm);
+	free(vpmu_vm->vcpus);
 	free(vpmu_vm);
 }
 
@@ -1264,8 +1280,8 @@ static void run_counter_access_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu = vpmu_vm->vcpu;
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu = vpmu_vm->vcpus[0];
 
 	/* Save the initial sp to restore them later to run the guest again */
 	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
@@ -1309,8 +1325,8 @@ static void run_counter_access_error_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu = vpmu_vm->vcpu;
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu = vpmu_vm->vcpus[0];
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
@@ -1396,8 +1412,8 @@ static void run_kvm_event_filter_error_tests(void)
 	};
 
 	/* KVM should not allow configuring filters after the PMU is initialized */
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	ret = __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	ret = __vcpu_ioctl(vpmu_vm->vcpus[0], KVM_SET_DEVICE_ATTR, &filter_attr);
 	TEST_ASSERT(ret == -1 && errno == EBUSY,
 			"Failed to disallow setting an event filter after PMU init");
 	destroy_vpmu_vm(vpmu_vm);
@@ -1427,14 +1443,14 @@ static void run_kvm_event_filter_test(void)
 
 	/* Test for valid filter configurations */
 	for (i = 0; i < ARRAY_SIZE(pmu_event_filters); i++) {
-		vpmu_vm = create_vpmu_vm(guest_code, pmu_event_filters[i]);
+		vpmu_vm = create_vpmu_vm(1, guest_code, pmu_event_filters[i]);
 		vm = vpmu_vm->vm;
 
 		pmu_filter_gva = vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_VADDR);
 		memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter_bmap_sz);
 		guest_data.pmu_filter = (unsigned long *) pmu_filter_gva;
 
-		run_vcpu(vpmu_vm->vcpu);
+		run_vcpu(vpmu_vm->vcpus[0]);
 
 		destroy_vpmu_vm(vpmu_vm);
 	}
@@ -1449,8 +1465,8 @@ static void run_kvm_evtype_filter_test(void)
 
 	guest_data.test_stage = TEST_STAGE_KVM_EVTYPE_FILTER;
 
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	run_vcpu(vpmu_vm->vcpu);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	run_vcpu(vpmu_vm->vcpus[0]);
 	destroy_vpmu_vm(vpmu_vm);
 }
 
@@ -1465,7 +1481,7 @@ static void *run_vcpus_migrate_test_func(void *arg)
 	struct vcpu_migrate_data *migrate_data = arg;
 	struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
 
-	run_vcpu(vpmu_vm->vcpu);
+	run_vcpu(vpmu_vm->vcpus[0]);
 	migrate_data->vcpu_done = true;
 
 	return NULL;
@@ -1535,7 +1551,7 @@ static void run_vcpu_migration_test(uint64_t pmcr_n)
 	guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
 	guest_data.expected_pmcr_n = pmcr_n;
 
-	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(guest_code, NULL);
+	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
 
 	/* Initialize random number generation for migrating vCPUs to random pCPUs */
 	srand(time(NULL));
@@ -1571,8 +1587,8 @@ static uint64_t get_pmcr_n_limit(void)
 	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu_get_reg(vpmu_vm->vcpus[0], KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
 	destroy_vpmu_vm(vpmu_vm);
 
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
-- 
2.39.1.581.gbfd45094c4-goog



* [REPOST PATCH 16/16] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs
  2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
                   ` (14 preceding siblings ...)
  2023-02-15  1:07 ` [REPOST PATCH 15/16] selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation Raghavendra Rao Ananta
@ 2023-02-15  1:07 ` Raghavendra Rao Ananta
  2023-03-08  4:44   ` Reiji Watanabe
  15 siblings, 1 reply; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-15  1:07 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

To test KVM's handling of multiple vCPU contexts that are frequently
migrated across random pCPUs in the system, extend the test to create
a VM with multiple vCPUs and validate the behavior.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 166 ++++++++++++------
 1 file changed, 114 insertions(+), 52 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 239fc7e06b3b9..c9d8e5f9a22ab 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -19,11 +19,12 @@
  * higher exception levels (EL2, EL3). Verify this functionality by
  * configuring and trying to count the events for EL2 in the guest.
  *
- * 4. Since the PMU registers are per-cpu, stress KVM by frequently
- * migrating the guest vCPU to random pCPUs in the system, and check
- * if the vPMU is still behaving as expected. The sub-tests include
- * testing basic functionalities such as basic counters behavior,
- * overflow, overflow interrupts, and chained events.
+ * 4. Since the PMU registers are per-cpu, stress KVM by creating a
+ * multi-vCPU VM, then frequently migrate the guest vCPUs to random
+ * pCPUs in the system, and check if the vPMU is still behaving as
+ * expected. The sub-tests include testing basic functionalities such
+ * as basic counters behavior, overflow, overflow interrupts, and
+ * chained events.
  *
  * Copyright (c) 2022 Google LLC.
  *
@@ -348,19 +349,22 @@ struct guest_irq_data {
 	struct spinlock lock;
 };
 
-static struct guest_irq_data guest_irq_data;
+static struct guest_irq_data guest_irq_data[KVM_MAX_VCPUS];
 
 #define VCPU_MIGRATIONS_TEST_ITERS_DEF		1000
 #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS	2
+#define VCPU_MIGRATIONS_TEST_NR_VPUS_DEF	2
 
 struct test_args {
 	int vcpu_migration_test_iter;
 	int vcpu_migration_test_migrate_freq_ms;
+	int vcpu_migration_test_nr_vcpus;
 };
 
 static struct test_args test_args = {
 	.vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF,
 	.vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS,
+	.vcpu_migration_test_nr_vcpus = VCPU_MIGRATIONS_TEST_NR_VPUS_DEF,
 };
 
 static void guest_sync_handler(struct ex_regs *regs)
@@ -396,26 +400,34 @@ static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_
 	}
 }
 
+static struct guest_irq_data *get_irq_data(void)
+{
+	uint32_t cpu = guest_get_vcpuid();
+
+	return &guest_irq_data[cpu];
+}
+
 static void guest_irq_handler(struct ex_regs *regs)
 {
 	uint32_t pmc_idx_bmap;
 	uint64_t i, pmcr_n = get_pmcr_n();
 	uint32_t pmovsclr = read_pmovsclr();
 	unsigned int intid = gic_get_and_ack_irq();
+	struct guest_irq_data *irq_data = get_irq_data();
 
 	/* No other IRQ apart from the PMU IRQ is expected */
 	GUEST_ASSERT_1(intid == PMU_IRQ, intid);
 
-	spin_lock(&guest_irq_data.lock);
-	pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
+	spin_lock(&irq_data->lock);
+	pmc_idx_bmap = READ_ONCE(irq_data->pmc_idx_bmap);
 
 	for (i = 0; i < pmcr_n; i++)
 		guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
 	guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);
 
 	/* Mark IRQ as recived for the corresponding PMCs */
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
-	spin_unlock(&guest_irq_data.lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, pmovsclr);
+	spin_unlock(&irq_data->lock);
 
 	gic_set_eoi(intid);
 }
@@ -423,35 +435,40 @@ static void guest_irq_handler(struct ex_regs *regs)
 static int pmu_irq_received(int pmc_idx)
 {
 	bool irq_received;
+	struct guest_irq_data *irq_data = get_irq_data();
 
-	spin_lock(&guest_irq_data.lock);
-	irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	irq_received = READ_ONCE(irq_data->irq_received_bmap) & BIT(pmc_idx);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);
 
 	return irq_received;
 }
 
 static void pmu_irq_init(int pmc_idx)
 {
+	struct guest_irq_data *irq_data = get_irq_data();
+
 	write_pmovsclr(BIT(pmc_idx));
 
-	spin_lock(&guest_irq_data.lock);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap | BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);
 
 	enable_irq(pmc_idx);
 }
 
 static void pmu_irq_exit(int pmc_idx)
 {
+	struct guest_irq_data *irq_data = get_irq_data();
+
 	write_pmovsclr(BIT(pmc_idx));
 
-	spin_lock(&guest_irq_data.lock);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);
 
 	disable_irq(pmc_idx);
 }
@@ -783,7 +800,8 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 static void test_basic_pmu_functionality(void)
 {
 	local_irq_disable();
-	gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
+	gic_init(GIC_V3, test_args.vcpu_migration_test_nr_vcpus,
+			(void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
 	gic_irq_enable(PMU_IRQ);
 	local_irq_enable();
 
@@ -1093,11 +1111,13 @@ static void guest_evtype_filter_test(void)
 
 static void guest_vcpu_migration_test(void)
 {
+	int iter = test_args.vcpu_migration_test_iter;
+
 	/*
 	 * While the userspace continuously migrates this vCPU to random pCPUs,
 	 * run basic PMU functionalities and verify the results.
 	 */
-	while (test_args.vcpu_migration_test_iter--)
+	while (iter--)
 		test_basic_pmu_functionality();
 }
 
@@ -1472,17 +1492,23 @@ static void run_kvm_evtype_filter_test(void)
 
 struct vcpu_migrate_data {
 	struct vpmu_vm *vpmu_vm;
-	pthread_t *pt_vcpu;
-	bool vcpu_done;
+	pthread_t *pt_vcpus;
+	unsigned long *vcpu_done_map;
+	pthread_mutex_t vcpu_done_map_lock;
 };
 
+struct vcpu_migrate_data migrate_data;
+
 static void *run_vcpus_migrate_test_func(void *arg)
 {
-	struct vcpu_migrate_data *migrate_data = arg;
-	struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
+	struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm;
+	unsigned int vcpu_idx = (unsigned long)arg;
 
-	run_vcpu(vpmu_vm->vcpus[0]);
-	migrate_data->vcpu_done = true;
+	run_vcpu(vpmu_vm->vcpus[vcpu_idx]);
+
+	pthread_mutex_lock(&migrate_data.vcpu_done_map_lock);
+	__set_bit(vcpu_idx, migrate_data.vcpu_done_map);
+	pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock);
 
 	return NULL;
 }
@@ -1504,7 +1530,7 @@ static uint32_t get_pcpu(void)
 	return pcpu;
 }
 
-static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
+static int migrate_vcpu(int vcpu_idx)
 {
 	int ret;
 	cpu_set_t cpuset;
@@ -1513,9 +1539,9 @@ static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
 	CPU_ZERO(&cpuset);
 	CPU_SET(new_pcpu, &cpuset);
 
-	pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu);
+	pr_debug("Migrating vCPU %d to pCPU: %u\n", vcpu_idx, new_pcpu);
 
-	ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset);
+	ret = pthread_setaffinity_np(migrate_data.pt_vcpus[vcpu_idx], sizeof(cpuset), &cpuset);
 
 	/* Allow the error where the vCPU thread is already finished */
 	TEST_ASSERT(ret == 0 || ret == ESRCH,
@@ -1526,48 +1552,74 @@ static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
 
 static void *vcpus_migrate_func(void *arg)
 {
-	struct vcpu_migrate_data *migrate_data = arg;
+	struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm;
+	int i, n_done, nr_vcpus = vpmu_vm->nr_vcpus;
+	bool vcpu_done;
 
-	while (!migrate_data->vcpu_done) {
+	do {
 		usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms));
-		migrate_vcpu(migrate_data);
-	}
+		for (n_done = 0, i = 0; i < nr_vcpus; i++) {
+			pthread_mutex_lock(&migrate_data.vcpu_done_map_lock);
+			vcpu_done = test_bit(i, migrate_data.vcpu_done_map);
+			pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock);
+
+			if (vcpu_done) {
+				n_done++;
+				continue;
+			}
+
+			migrate_vcpu(i);
+		}
+
+	} while (nr_vcpus != n_done);
 
 	return NULL;
 }
 
 static void run_vcpu_migration_test(uint64_t pmcr_n)
 {
-	int ret;
+	int i, nr_vcpus, ret;
 	struct vpmu_vm *vpmu_vm;
-	pthread_t pt_vcpu, pt_sched;
-	struct vcpu_migrate_data migrate_data = {
-		.pt_vcpu = &pt_vcpu,
-		.vcpu_done = false,
-	};
+	pthread_t pt_sched, *pt_vcpus;
 
 	__TEST_REQUIRE(get_nprocs() >= 2, "At least two pCPUs needed for vCPU migration test");
 
 	guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
 	guest_data.expected_pmcr_n = pmcr_n;
 
-	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	nr_vcpus = test_args.vcpu_migration_test_nr_vcpus;
+
+	migrate_data.vcpu_done_map = bitmap_zalloc(nr_vcpus);
+	TEST_ASSERT(migrate_data.vcpu_done_map, "Failed to create vCPU done bitmap");
+	pthread_mutex_init(&migrate_data.vcpu_done_map_lock, NULL);
+
+	migrate_data.pt_vcpus = pt_vcpus = calloc(nr_vcpus, sizeof(*pt_vcpus));
+	TEST_ASSERT(pt_vcpus, "Failed to create vCPU thread pointers");
+
+	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(nr_vcpus, guest_code, NULL);
 
 	/* Initialize random number generation for migrating vCPUs to random pCPUs */
 	srand(time(NULL));
 
-	/* Spawn a vCPU thread */
-	ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data);
-	TEST_ASSERT(!ret, "Failed to create the vCPU thread");
+	/* Spawn vCPU threads */
+	for (i = 0; i < nr_vcpus; i++) {
+		ret = pthread_create(&pt_vcpus[i], NULL,
+					run_vcpus_migrate_test_func,  (void *)(unsigned long)i);
+		TEST_ASSERT(!ret, "Failed to create the vCPU thread: %d", i);
+	}
 
 	/* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */
-	ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data);
+	ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, NULL);
 	TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs");
 
 	pthread_join(pt_sched, NULL);
-	pthread_join(pt_vcpu, NULL);
+
+	for (i = 0; i < nr_vcpus; i++)
+		pthread_join(pt_vcpus[i], NULL);
 
 	destroy_vpmu_vm(vpmu_vm);
+	free(pt_vcpus);
+	bitmap_free(migrate_data.vcpu_done_map);
 }
 
 static void run_tests(uint64_t pmcr_n)
@@ -1596,12 +1648,14 @@ static uint64_t get_pmcr_n_limit(void)
 
 static void print_help(char *name)
 {
-	pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n",
-		name);
+	pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]"
+		"[-n vcpu_migration_nr_vcpus]\n", name);
 	pr_info("\t-i: Number of iterations of vCPU migrations test (default: %u)\n",
 		VCPU_MIGRATIONS_TEST_ITERS_DEF);
 	pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. (default: %u)\n",
 		VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS);
+	pr_info("\t-n: Number of vCPUs for vCPU migrations test. (default: %u)\n",
+		VCPU_MIGRATIONS_TEST_NR_VPUS_DEF);
 	pr_info("\t-h: print this help screen\n");
 }
 
@@ -1609,7 +1663,7 @@ static bool parse_args(int argc, char *argv[])
 {
 	int opt;
 
-	while ((opt = getopt(argc, argv, "hi:m:")) != -1) {
+	while ((opt = getopt(argc, argv, "hi:m:n:")) != -1) {
 		switch (opt) {
 		case 'i':
 			test_args.vcpu_migration_test_iter =
@@ -1619,6 +1673,14 @@ static bool parse_args(int argc, char *argv[])
 			test_args.vcpu_migration_test_migrate_freq_ms =
 				atoi_positive("vCPU migration frequency", optarg);
 			break;
+		case 'n':
+			test_args.vcpu_migration_test_nr_vcpus =
+				atoi_positive("Nr vCPUs for vCPU migrations", optarg);
+			if (test_args.vcpu_migration_test_nr_vcpus > KVM_MAX_VCPUS) {
+				pr_info("Max allowed vCPUs: %u\n", KVM_MAX_VCPUS);
+				goto err;
+			}
+			break;
 		case 'h':
 		default:
 			goto err;
-- 
2.39.1.581.gbfd45094c4-goog



* Re: [REPOST PATCH 06/16] tools: arm64: perf_event: Define Cycle counter enable/overflow bits
  2023-02-15  1:07 ` [REPOST PATCH 06/16] tools: arm64: perf_event: Define Cycle counter enable/overflow bits Raghavendra Rao Ananta
@ 2023-03-03  0:46   ` Reiji Watanabe
  2023-03-09 22:14     ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 36+ messages in thread
From: Reiji Watanabe @ 2023-03-03  0:46 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghu,

On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> Add the definitions of ARMV8_PMU_CNTOVS_C (Cycle counter overflow
> bit) for overflow status registers and ARMV8_PMU_CNTENSET_C (Cycle
> counter enable bit) for PMCNTENSET_EL0 register.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  tools/arch/arm64/include/asm/perf_event.h | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h
> index 97e49a4d4969f..8ce23aabf6fe6 100644
> --- a/tools/arch/arm64/include/asm/perf_event.h
> +++ b/tools/arch/arm64/include/asm/perf_event.h
> @@ -222,9 +222,11 @@
>  /*
>   * PMOVSR: counters overflow flag status reg
>   */
> +#define ARMV8_PMU_CNTOVS_C      (1 << 31) /* Cycle counter overflow bit */

Nit: This macro doesn't seem to be used in any of the patches.
Do we need this ?

Thank you,
Reiji


>  #define        ARMV8_PMU_OVSR_MASK             0xffffffff      /* Mask for writable bits */
>  #define        ARMV8_PMU_OVERFLOWED_MASK       ARMV8_PMU_OVSR_MASK
>
> +
>  /*
>   * PMXEVTYPER: Event selection reg
>   */
> @@ -247,6 +249,11 @@
>  #define ARMV8_PMU_USERENR_CR   (1 << 2) /* Cycle counter can be read at EL0 */
>  #define ARMV8_PMU_USERENR_ER   (1 << 3) /* Event counter can be read at EL0 */
>
> +/*
> + * PMCNTENSET: Count Enable set reg
> + */
> +#define ARMV8_PMU_CNTENSET_C    (1 << 31) /* Cycle counter enable bit */
> +
>  /* PMMIR_EL1.SLOTS mask */
>  #define ARMV8_PMU_SLOTS_MASK   0xff
>
> --
> 2.39.1.581.gbfd45094c4-goog
>


* Re: [REPOST PATCH 07/16] selftests: KVM: aarch64: Add PMU cycle counter helpers
  2023-02-15  1:07 ` [REPOST PATCH 07/16] selftests: KVM: aarch64: Add PMU cycle counter helpers Raghavendra Rao Ananta
@ 2023-03-03  3:06   ` Reiji Watanabe
  2023-03-09 22:19     ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 36+ messages in thread
From: Reiji Watanabe @ 2023-03-03  3:06 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghu,

On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> Add basic helpers for the test to access the cycle counter
> registers. The helpers will be used in the upcoming patches
> to run the tests related to cycle counter.
>
> No functional change intended.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  .../testing/selftests/kvm/aarch64/vpmu_test.c | 40 +++++++++++++++++++
>  1 file changed, 40 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> index d72c3c9b9c39f..15aebc7d7dc94 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> @@ -147,6 +147,46 @@ static inline void disable_counter(int idx)
>         isb();
>  }
>
> +static inline uint64_t read_cycle_counter(void)
> +{
> +       return read_sysreg(pmccntr_el0);
> +}
> +
> +static inline void reset_cycle_counter(void)
> +{
> +       uint64_t v = read_sysreg(pmcr_el0);
> +
> +       write_sysreg(ARMV8_PMU_PMCR_C | v, pmcr_el0);
> +       isb();
> +}
> +
> +static inline void enable_cycle_counter(void)
> +{
> +       uint64_t v = read_sysreg(pmcntenset_el0);
> +
> +       write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenset_el0);
> +       isb();
> +}

You might want to use enable_counter() and disable_counter()
from enable_cycle_counter() and disable_cycle_counter() respectively?
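
For instance, a minimal sketch of that (assuming enable_counter() and
disable_counter() take the counter index, and that a cycle counter index
macro such as ARMV8_PMU_CYCLE_COUNTER_IDX (31) is visible here):

static inline void enable_cycle_counter(void)
{
	/* BIT(31) of PMCNTENSET_EL0 is the cycle counter enable bit */
	enable_counter(ARMV8_PMU_CYCLE_COUNTER_IDX);
}

static inline void disable_cycle_counter(void)
{
	disable_counter(ARMV8_PMU_CYCLE_COUNTER_IDX);
}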

Thank you,
Reiji

> +
> +static inline void disable_cycle_counter(void)
> +{
> +       uint64_t v = read_sysreg(pmcntenset_el0);
> +
> +       write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenclr_el0);
> +       isb();
> +}
> +
> +static inline void write_pmccfiltr(unsigned long val)
> +{
> +       write_sysreg(val, pmccfiltr_el0);
> +       isb();
> +}
> +
> +static inline uint64_t read_pmccfiltr(void)
> +{
> +       return read_sysreg(pmccfiltr_el0);
> +}
> +
>  static inline uint64_t get_pmcr_n(void)
>  {
>         return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
> --
> 2.39.1.581.gbfd45094c4-goog
>


* Re: [REPOST PATCH 08/16] selftests: KVM: aarch64: Consider PMU event filters for VM creation
  2023-02-15  1:07 ` [REPOST PATCH 08/16] selftests: KVM: aarch64: Consider PMU event filters for VM creation Raghavendra Rao Ananta
@ 2023-03-03  4:30   ` Reiji Watanabe
  2023-03-09 22:45     ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 36+ messages in thread
From: Reiji Watanabe @ 2023-03-03  4:30 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghu,

On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> Accept a list of KVM PMU event filters as an argument while creating
> a VM via create_vpmu_vm(). Upcoming patches would leverage this to
> test the event filters' functionality.
>
> No functional change intended.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  .../testing/selftests/kvm/aarch64/vpmu_test.c | 64 +++++++++++++++++--
>  1 file changed, 60 insertions(+), 4 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> index 15aebc7d7dc94..2b3a4fa3afa9c 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> @@ -15,10 +15,14 @@
>  #include <vgic.h>
>  #include <asm/perf_event.h>
>  #include <linux/bitfield.h>
> +#include <linux/bitmap.h>
>
>  /* The max number of the PMU event counters (excluding the cycle counter) */
>  #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
>
> +/* The max number of event numbers that's supported */
> +#define ARMV8_PMU_MAX_EVENTS           64

The name and the comment would be a bit misleading.
(This sounds like a max number of events that are supported by ARMv8)

Perhaps 'MAX_EVENT_FILTER_BITS' would be more clear ?


> +
>  /*
>   * The macros and functions below for reading/writing PMEV{CNTR,TYPER}<n>_EL0
>   * were basically copied from arch/arm64/kernel/perf_event.c.
> @@ -224,6 +228,8 @@ struct pmc_accessor pmc_accessors[] = {
>         { read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
>  };
>
> +#define MAX_EVENT_FILTERS_PER_VM 10

(Looking at just this patch,) it appears 'PER_VM' in the name
might be rather misleading ?

> +
>  #define INVALID_EC     (-1ul)
>  uint64_t expected_ec = INVALID_EC;
>  uint64_t op_end_addr;
> @@ -232,6 +238,7 @@ struct vpmu_vm {
>         struct kvm_vm *vm;
>         struct kvm_vcpu *vcpu;
>         int gic_fd;
> +       unsigned long *pmu_filter;
>  };
>
>  enum test_stage {
> @@ -541,8 +548,51 @@ static void guest_code(void)
>  #define GICD_BASE_GPA  0x8000000ULL
>  #define GICR_BASE_GPA  0x80A0000ULL
>
> +static unsigned long *
> +set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)

Can you add a comment that explains the function ?
(especially for @pmu_event_filters and the return value ?)
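
For example, something along these lines (just a suggestion; adjust the
wording to whatever the final semantics end up being):

/*
 * Apply the event filters in @pmu_event_filters (an array of up to
 * MAX_EVENT_FILTERS_PER_VM entries, terminated by an entry with
 * .nevents == 0) to @vcpu via KVM_ARM_VCPU_PMU_V3_FILTER, and return
 * a bitmap of ARMV8_PMU_MAX_EVENTS bits indicating which events the
 * resulting configuration allows for the guest.
 */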

> +{
> +       int j;
> +       unsigned long *pmu_filter;
> +       struct kvm_device_attr filter_attr = {
> +               .group = KVM_ARM_VCPU_PMU_V3_CTRL,
> +               .attr = KVM_ARM_VCPU_PMU_V3_FILTER,
> +       };
> +
> +       /*
> +        * Setting up of the bitmap is similar to what KVM does.
> +        * If the first filter denies an event, default all the others to allow, and vice-versa.
> +        */
> +       pmu_filter = bitmap_zalloc(ARMV8_PMU_MAX_EVENTS);
> +       TEST_ASSERT(pmu_filter, "Failed to allocate the pmu_filter");
> +
> +       if (pmu_event_filters[0].action == KVM_PMU_EVENT_DENY)
> +               bitmap_fill(pmu_filter, ARMV8_PMU_MAX_EVENTS);
> +
> +       for (j = 0; j < MAX_EVENT_FILTERS_PER_VM; j++) {
> +               struct kvm_pmu_event_filter *pmu_event_filter = &pmu_event_filters[j];
> +
> +               if (!pmu_event_filter->nevents)

What does this mean ? (the end of the valid entry in the array ?)


> +                       break;
> +
> +               pr_debug("Applying event filter:: event: 0x%x; action: %s\n",
> +                               pmu_event_filter->base_event,
> +                               pmu_event_filter->action == KVM_PMU_EVENT_ALLOW ? "ALLOW" : "DENY");
> +
> +               filter_attr.addr = (uint64_t) pmu_event_filter;
> +               vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
> +
> +               if (pmu_event_filter->action == KVM_PMU_EVENT_ALLOW)
> +                       __set_bit(pmu_event_filter->base_event, pmu_filter);
> +               else
> +                       __clear_bit(pmu_event_filter->base_event, pmu_filter);
> +       }
> +
> +       return pmu_filter;
> +}
> +
>  /* Create a VM that has one vCPU with PMUv3 configured. */
> -static struct vpmu_vm *create_vpmu_vm(void *guest_code)
> +static struct vpmu_vm *
> +create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
>  {
>         struct kvm_vm *vm;
>         struct kvm_vcpu *vcpu;
> @@ -586,6 +636,9 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
>                     "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
>
>         /* Initialize vPMU */
> +       if (pmu_event_filters)
> +               vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
> +
>         vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
>         vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
>
> @@ -594,6 +647,8 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
>
>  static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
>  {
> +       if (vpmu_vm->pmu_filter)
> +               bitmap_free(vpmu_vm->pmu_filter);
>         close(vpmu_vm->gic_fd);
>         kvm_vm_free(vpmu_vm->vm);
>         free(vpmu_vm);
> @@ -631,7 +686,7 @@ static void run_counter_access_test(uint64_t pmcr_n)
>         guest_data.expected_pmcr_n = pmcr_n;
>
>         pr_debug("Test with pmcr_n %lu\n", pmcr_n);
> -       vpmu_vm = create_vpmu_vm(guest_code);
> +       vpmu_vm = create_vpmu_vm(guest_code, NULL);
>         vcpu = vpmu_vm->vcpu;
>
>         /* Save the initial sp to restore them later to run the guest again */
> @@ -676,7 +731,7 @@ static void run_counter_access_error_test(uint64_t pmcr_n)
>         guest_data.expected_pmcr_n = pmcr_n;
>
>         pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
> -       vpmu_vm = create_vpmu_vm(guest_code);
> +       vpmu_vm = create_vpmu_vm(guest_code, NULL);
>         vcpu = vpmu_vm->vcpu;
>
>         /* Update the PMCR_EL0.N with @pmcr_n */
> @@ -719,9 +774,10 @@ static uint64_t get_pmcr_n_limit(void)
>         struct vpmu_vm *vpmu_vm;
>         uint64_t pmcr;
>
> -       vpmu_vm = create_vpmu_vm(guest_code);
> +       vpmu_vm = create_vpmu_vm(guest_code, NULL);
>         vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
>         destroy_vpmu_vm(vpmu_vm);
> +
>         return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
>  }

Thank you,
Reiji


>
> --
> 2.39.1.581.gbfd45094c4-goog
>


* Re: [REPOST PATCH 09/16] selftests: KVM: aarch64: Add KVM PMU event filter test
  2023-02-15  1:07 ` [REPOST PATCH 09/16] selftests: KVM: aarch64: Add KVM PMU event filter test Raghavendra Rao Ananta
@ 2023-03-04 20:28   ` Reiji Watanabe
  2023-03-09 23:17     ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 36+ messages in thread
From: Reiji Watanabe @ 2023-03-04 20:28 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghu,

On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> Add tests to validate KVM's KVM_ARM_VCPU_PMU_V3_FILTER
> attribute by applying a series of filters to allow or
> deny events from the userspace. Validation is done by
> the guest in a way that it should be able to count
> only the events that are allowed.
>
> The workload to execute a precise number of instructions
> (execute_precise_instrs() and precise_instrs_loop()) is taken
> from the kvm-unit-tests' arm/pmu.c.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  .../testing/selftests/kvm/aarch64/vpmu_test.c | 261 +++++++++++++++++-
>  1 file changed, 258 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> index 2b3a4fa3afa9c..3dfb770b538e9 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> @@ -2,12 +2,21 @@
>  /*
>   * vpmu_test - Test the vPMU
>   *
> - * Copyright (c) 2022 Google LLC.
> + * The test suite contains a series of checks to validate the vPMU
> + * functionality. This test runs only when KVM_CAP_ARM_PMU_V3 is
> + * supported on the host. The tests include:
>   *
> - * This test checks if the guest can see the same number of the PMU event
> + * 1. Check if the guest can see the same number of the PMU event
>   * counters (PMCR_EL0.N) that userspace sets, if the guest can access
>   * those counters, and if the guest cannot access any other counters.
> - * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
> + *
> + * 2. Test the functionality of KVM's KVM_ARM_VCPU_PMU_V3_FILTER
> + * attribute by applying a series of filters in various combinations
> + * of allowing or denying the events. The guest validates it by
> + * checking if it's able to count only the events that are allowed.
> + *
> + * Copyright (c) 2022 Google LLC.
> + *
>   */
>  #include <kvm_util.h>
>  #include <processor.h>
> @@ -230,6 +239,12 @@ struct pmc_accessor pmc_accessors[] = {
>
>  #define MAX_EVENT_FILTERS_PER_VM 10
>
> +#define EVENT_ALLOW(ev) \
> +       {.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_ALLOW}
> +
> +#define EVENT_DENY(ev) \
> +       {.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_DENY}
> +
>  #define INVALID_EC     (-1ul)
>  uint64_t expected_ec = INVALID_EC;
>  uint64_t op_end_addr;
> @@ -243,11 +258,13 @@ struct vpmu_vm {
>
>  enum test_stage {
>         TEST_STAGE_COUNTER_ACCESS = 1,
> +       TEST_STAGE_KVM_EVENT_FILTER,
>  };
>
>  struct guest_data {
>         enum test_stage test_stage;
>         uint64_t expected_pmcr_n;
> +       unsigned long *pmu_filter;
>  };
>
>  static struct guest_data guest_data;
> @@ -329,6 +346,113 @@ static bool pmu_event_is_supported(uint64_t event)
>                 GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
>  }
>
> +
> +/*
> + * Extra instructions inserted by the compiler would be difficult to compensate
> + * for, so hand assemble everything between, and including, the PMCR accesses
> + * to start and stop counting. isb instructions are inserted to make sure
> + * pmccntr read after this function returns the exact instructions executed
> + * in the controlled block. Total instrs = isb + nop + 2*loop = 2 + 2*loop.
> + */
> +static inline void precise_instrs_loop(int loop, uint32_t pmcr)
> +{
> +       uint64_t pmcr64 = pmcr;
> +
> +       asm volatile(
> +       "       msr     pmcr_el0, %[pmcr]\n"
> +       "       isb\n"
> +       "1:     subs    %w[loop], %w[loop], #1\n"
> +       "       b.gt    1b\n"
> +       "       nop\n"
> +       "       msr     pmcr_el0, xzr\n"
> +       "       isb\n"
> +       : [loop] "+r" (loop)
> +       : [pmcr] "r" (pmcr64)
> +       : "cc");
> +}
> +
> +/*
> + * Execute a known number of guest instructions. Only even instruction counts
> + * greater than or equal to 4 are supported by the in-line assembly code. The
> + * control register (PMCR_EL0) is initialized with the provided value (allowing
> + * for example for the cycle counter or event counters to be reset). At the end
> + * of the exact instruction loop, zero is written to PMCR_EL0 to disable
> + * counting, allowing the cycle counter or event counters to be read at the
> + * leisure of the calling code.
> + */
> +static void execute_precise_instrs(int num, uint32_t pmcr)
> +{
> +       int loop = (num - 2) / 2;
> +
> +       GUEST_ASSERT_2(num >= 4 && ((num - 2) % 2 == 0), num, loop);
> +       precise_instrs_loop(loop, pmcr);
> +}
> +
> +static void test_instructions_count(int pmc_idx, bool expect_count)
> +{
> +       int i;
> +       struct pmc_accessor *acc;
> +       uint64_t cnt;
> +       int instrs_count = 100;
> +
> +       enable_counter(pmc_idx);
> +
> +       /* Test the event using all the possible way to configure the event */
> +       for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> +               acc = &pmc_accessors[i];
> +
> +               pmu_disable_reset();
> +
> +               acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> +
> +               /* Enable the PMU and execute precisely number of instructions as a workload */
> +               execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
> +
> +               /* If a count is expected, the counter should be increased by 'instrs_count' */
> +               cnt = acc->read_cntr(pmc_idx);
> +               GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
> +                               i, expect_count, cnt, instrs_count);
> +       }
> +
> +       disable_counter(pmc_idx);
> +}
> +
> +static void test_cycles_count(bool expect_count)
> +{
> +       uint64_t cnt;
> +
> +       pmu_enable();
> +       reset_cycle_counter();
> +
> +       /* Count cycles in EL0 and EL1 */
> +       write_pmccfiltr(0);
> +       enable_cycle_counter();
> +
> +       cnt = read_cycle_counter();
> +
> +       /*
> +        * If a count is expected by the test, the cycle counter should be increased by
> +        * at least 1, as there is at least one instruction between enabling the
> +        * counter and reading the counter.
> +        */
> +       GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
> +
> +       disable_cycle_counter();

It would be nicer to also test using a generic PMC with
ARMV8_PMUV3_PERFCTR_CPU_CYCLES (not just with a cycle counter),
as the filter should be applied to both.
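
For example, a rough sketch using the existing helpers (e.g., inside a
test_cycles_count()-like helper that also takes a pmc_idx; the exact
workload doesn't matter, only whether the generic counter moves):

	acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
	acc->write_cntr(pmc_idx, 0);
	enable_counter(pmc_idx);

	/* Run something with the PMU enabled so that cycles accumulate */
	execute_precise_instrs(100, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);

	cnt = acc->read_cntr(pmc_idx);
	GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
	disable_counter(pmc_idx);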

> +       pmu_disable_reset();
> +}
> +
> +static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
> +{
> +       switch (event) {
> +       case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
> +               test_instructions_count(pmc_idx, expect_count);
> +               break;
> +       case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
> +               test_cycles_count(expect_count);
> +               break;
> +       }
> +}
> +
>  /*
>   * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
>   * are set or cleared as specified in @set_expected.
> @@ -532,12 +656,37 @@ static void guest_counter_access_test(uint64_t expected_pmcr_n)
>         }
>  }
>
> +static void guest_event_filter_test(unsigned long *pmu_filter)
> +{
> +       uint64_t event;
> +
> +       /*
> +        * Check if PMCEIDx_EL0 is advertized as configured by the userspace.
> +        * It's possible that even though the userspace allowed it, it may not be supported
> +        * by the hardware and could be advertized as 'disabled'. Hence, only validate against
> +        * the events that are advertized.

How about checking events that are supported by the hardware
initially (without setting the event filter) ?
Then, we can test that events that userspace tried to hide are
indeed not exposed to the guests.

Can we also add a case for events that we can test both upper
32bits and lower 32 bits of both of PMCEID{0,1}_EL0 registers ?
(pmu_event_is_supported() needs to be fixed as well)
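
For reference, a rough sketch of how pmu_event_is_supported() could
cover both halves of PMCEID{0,1}_EL0 (the upper 32 bits advertise the
0x4000-0x403F common event range):

static bool pmu_event_is_supported(uint64_t event)
{
	uint64_t pmceid, bit;

	if (event >= 0x4000) {
		/* Higher event range: advertised in bits [63:32] */
		bit = ((event - 0x4000) % 32) + 32;
		pmceid = (event < 0x4020) ? read_sysreg(pmceid0_el0) :
					    read_sysreg(pmceid1_el0);
	} else {
		bit = event % 32;
		pmceid = (event < 0x20) ? read_sysreg(pmceid0_el0) :
					  read_sysreg(pmceid1_el0);
	}

	return pmceid & BIT(bit);
}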



> +        *
> +        * Furthermore, check if the event is in fact counting if enabled, or vice-versa.
> +        */
> +       for (event = 0; event < ARMV8_PMU_MAX_EVENTS - 1; event++) {
> +               if (pmu_event_is_supported(event)) {
> +                       GUEST_ASSERT_1(test_bit(event, pmu_filter), event);
> +                       test_event_count(event, 0, true);
> +               } else {
> +                       test_event_count(event, 0, false);
> +               }
> +       }
> +}
> +
>  static void guest_code(void)
>  {
>         switch (guest_data.test_stage) {
>         case TEST_STAGE_COUNTER_ACCESS:
>                 guest_counter_access_test(guest_data.expected_pmcr_n);
>                 break;
> +       case TEST_STAGE_KVM_EVENT_FILTER:
> +               guest_event_filter_test(guest_data.pmu_filter);
> +               break;
>         default:
>                 GUEST_ASSERT_1(0, guest_data.test_stage);
>         }

IMHO running a guest from a different guest_code_xxx might be more
straightforward rather than controlling through the test_stage,
as it appears each test 'stage' is a different test case rather than
a test stage, and the test creates a new guest for each test 'stage'.
I don't find any reason to share the guest_code for those test
cases (Unless we are going to run some common guest codes for test
cases in the following patches)
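
For instance (just to illustrate the idea; the name is arbitrary):

static void guest_event_filter_code(void)
{
	guest_event_filter_test(guest_data.pmu_filter);
	GUEST_DONE();
}

and then run_kvm_event_filter_test() would pass guest_event_filter_code
to create_vpmu_vm() instead of the shared guest_code.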


> @@ -760,9 +909,115 @@ static void run_counter_access_tests(uint64_t pmcr_n)
>                 run_counter_access_error_test(i);
>  }
>
> +static struct kvm_pmu_event_filter pmu_event_filters[][MAX_EVENT_FILTERS_PER_VM] = {

It looks like KVM_ARM_VCPU_PMU_V3_FILTER is always used with
one entry in the filter (.nevents == 1).
Could we also test with .nevents > 1 ?
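
For example, a single entry spanning a small range, something like
(the base event and range below are arbitrary, just for illustration):

	{
		.base_event	= ARMV8_PMUV3_PERFCTR_INST_RETIRED,
		.nevents	= 2,
		.action		= KVM_PMU_EVENT_DENY,
	},

which should apply the action to INST_RETIRED and the next event
number as well, and the guest side could then verify both.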

> +       /*
> +        * Each set of events denotes a filter configuration for that VM.
> +        * During VM creation, the filters will be applied in the sequence mentioned here.
> +        */
> +       {
> +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +       },
> +       {
> +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> +       },
> +       {
> +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +       },
> +       {
> +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> +       },
> +       {
> +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +       },
> +       {
> +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +       },
> +       {
> +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> +       },
> +       {
> +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> +       },
> +};
> +
> +static void run_kvm_event_filter_error_tests(void)
> +{
> +       int ret;
> +       struct kvm_vm *vm;
> +       struct kvm_vcpu *vcpu;
> +       struct vpmu_vm *vpmu_vm;
> +       struct kvm_vcpu_init init;
> +       struct kvm_pmu_event_filter pmu_event_filter = EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
> +       struct kvm_device_attr filter_attr = {
> +               .group = KVM_ARM_VCPU_PMU_V3_CTRL,
> +               .attr = KVM_ARM_VCPU_PMU_V3_FILTER,
> +               .addr = (uint64_t) &pmu_event_filter,
> +       };
> +
> +       /* KVM should not allow configuring filters after the PMU is initialized */
> +       vpmu_vm = create_vpmu_vm(guest_code, NULL);
> +       ret = __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
> +       TEST_ASSERT(ret == -1 && errno == EBUSY,
> +                       "Failed to disallow setting an event filter after PMU init");
> +       destroy_vpmu_vm(vpmu_vm);
> +
> +       /* Check for invalid event filter setting */
> +       vm = vm_create(1);
> +       vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
> +       init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
> +       vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
> +
> +       pmu_event_filter.base_event = UINT16_MAX;
> +       pmu_event_filter.nevents = 5;
> +       ret = __vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
> +       TEST_ASSERT(ret == -1 && errno == EINVAL, "Failed check for invalid filter configuration");
> +       kvm_vm_free(vm);
> +}
> +
> +static void run_kvm_event_filter_test(void)
> +{
> +       int i;
> +       struct vpmu_vm *vpmu_vm;
> +       struct kvm_vm *vm;
> +       vm_vaddr_t pmu_filter_gva;
> +       size_t pmu_filter_bmap_sz = BITS_TO_LONGS(ARMV8_PMU_MAX_EVENTS) * sizeof(unsigned long);
> +
> +       guest_data.test_stage = TEST_STAGE_KVM_EVENT_FILTER;
> +
> +       /* Test for valid filter configurations */
> +       for (i = 0; i < ARRAY_SIZE(pmu_event_filters); i++) {
> +               vpmu_vm = create_vpmu_vm(guest_code, pmu_event_filters[i]);
> +               vm = vpmu_vm->vm;
> +
> +               pmu_filter_gva = vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_VADDR);
> +               memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter_bmap_sz);
> +               guest_data.pmu_filter = (unsigned long *) pmu_filter_gva;
> +
> +               run_vcpu(vpmu_vm->vcpu);
> +
> +               destroy_vpmu_vm(vpmu_vm);
> +       }
> +
> +       /* Check if KVM is handling the errors correctly */
> +       run_kvm_event_filter_error_tests();
> +}
> +
>  static void run_tests(uint64_t pmcr_n)
>  {
>         run_counter_access_tests(pmcr_n);
> +       run_kvm_event_filter_test();
>  }
>
>  /*
> --
> 2.39.1.581.gbfd45094c4-goog
>

Thank you,
Reiji


* Re: [REPOST PATCH 10/16] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test
  2023-02-15  1:07 ` [REPOST PATCH 10/16] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test Raghavendra Rao Ananta
@ 2023-03-07  1:19   ` Reiji Watanabe
  2023-03-07 16:09     ` Sean Christopherson
  2023-03-10 21:57     ` Raghavendra Rao Ananta
  0 siblings, 2 replies; 36+ messages in thread
From: Reiji Watanabe @ 2023-03-07  1:19 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghu,

On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> KVM doesn't allow the guests to modify the filter types,
> such as counting events in nonsecure/secure-EL2, EL3, and
> so on. Validate the same by force-configuring the bits
> in PMXEVTYPER_EL0, PMEVTYPERn_EL0, and PMCCFILTR_EL0
> registers.
>
> The test extends further by trying to create an event
> for counting only in EL2 and validates if the counter
> is not progressing.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  .../testing/selftests/kvm/aarch64/vpmu_test.c | 85 +++++++++++++++++++
>  1 file changed, 85 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> index 3dfb770b538e9..5c166df245589 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> @@ -15,6 +15,10 @@
>   * of allowing or denying the events. The guest validates it by
>   * checking if it's able to count only the events that are allowed.
>   *
> + * 3. KVM doesn't allow the guest to count the events attributed with
> + * higher exception levels (EL2, EL3). Verify this functionality by
> + * configuring and trying to count the events for EL2 in the guest.
> + *
>   * Copyright (c) 2022 Google LLC.
>   *
>   */
> @@ -23,6 +27,7 @@
>  #include <test_util.h>
>  #include <vgic.h>
>  #include <asm/perf_event.h>
> +#include <linux/arm-smccc.h>
>  #include <linux/bitfield.h>
>  #include <linux/bitmap.h>
>
> @@ -259,6 +264,7 @@ struct vpmu_vm {
>  enum test_stage {
>         TEST_STAGE_COUNTER_ACCESS = 1,
>         TEST_STAGE_KVM_EVENT_FILTER,
> +       TEST_STAGE_KVM_EVTYPE_FILTER,
>  };
>
>  struct guest_data {
> @@ -678,6 +684,70 @@ static void guest_event_filter_test(unsigned long *pmu_filter)
>         }
>  }
>
> +static void guest_evtype_filter_test(void)
> +{
> +       int i;
> +       struct pmc_accessor *acc;
> +       uint64_t typer, cnt;
> +       struct arm_smccc_res res;
> +
> +       pmu_enable();
> +
> +       /*
> +        * KVM blocks the guests from creating events for counting in Secure/Non-Secure Hyp (EL2),
> +        * Monitor (EL3), and Multithreading configuration. It applies the mask
> +        * ARMV8_PMU_EVTYPE_MASK against guest accesses to PMXEVTYPER_EL0, PMEVTYPERn_EL0,
> +        * and PMCCFILTR_EL0 registers to prevent this. Check if KVM honors this using all possible
> +        * ways to configure the EVTYPER.
> +        */

I would prefer to break long lines into multiple lines for these comments
(or other comments in these patches), as "Linux kernel coding style"
suggests.
---
[https://www.kernel.org/doc/html/latest/process/coding-style.html#breaking-long-lines-and-strings]

The preferred limit on the length of a single line is 80 columns.

Statements longer than 80 columns should be broken into sensible
chunks, unless exceeding 80 columns significantly increases
readability and does not hide information.
---

> +       for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> +               acc = &pmc_accessors[i];
> +
> +               /* Set all filter bits (31-24), read back, and check against the mask */
> +               acc->write_typer(0, 0xff000000);
> +               typer = acc->read_typer(0);
> +
> +               GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
> +                               typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);

It appears that bits[29:26] don't have to be zero depending on
feature availability to the guest (those bits need to be zero
only when relevant features are not available on the guest).
So, the expected value must be changed depending on the feature
availability if the test checks those bits.
I have the same comment for the cycle counter.

> +
> +               /*
> +                * Regardless of ARMV8_PMU_EVTYPE_MASK, KVM sets perf attr.exclude_hv
> +                * to not count NS-EL2 events. Verify this functionality by configuring
> +                * a NS-EL2 event, for which the count shouldn't increment.
> +                */
> +               typer = ARMV8_PMUV3_PERFCTR_INST_RETIRED;
> +               typer |= ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
> +               acc->write_typer(0, typer);
> +               acc->write_cntr(0, 0);
> +               enable_counter(0);
> +
> +               /* Issue a hypercall to enter EL2 and return */
> +               memset(&res, 0, sizeof(res));
> +               smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
> +
> +               cnt = acc->read_cntr(0);
> +               GUEST_ASSERT_3(cnt == 0, cnt, typer, i);
> +       }
> +
> +       /* Check the same sequence for the Cycle counter */
> +       write_pmccfiltr(0xff000000);
> +       typer = read_pmccfiltr();
> +       GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
> +                               typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);
> +
> +       typer = ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
> +       write_pmccfiltr(typer);
> +       reset_cycle_counter();
> +       enable_cycle_counter();
> +
> +       /* Issue a hypercall to enter EL2 and return */
> +       memset(&res, 0, sizeof(res));
> +       smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
> +
> +       cnt = read_cycle_counter();

Perhaps it's worth considering having the helpers for PMC registers
(e.g. write_cntr()) accept the cycle counter as index == 31
to simplify the test code implementation ?
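
Something along these lines, perhaps (untested sketch; the wrapper names
and the ARMV8_PMU_CYCLE_COUNTER_IDX (== 31) definition are placeholders):

	static void pmu_write_cntr(struct pmc_accessor *acc, int pmc_idx, uint64_t val)
	{
		if (pmc_idx == ARMV8_PMU_CYCLE_COUNTER_IDX) {
			write_sysreg(val, pmccntr_el0);
			isb();
		} else {
			acc->write_cntr(pmc_idx, val);
		}
	}

	static uint64_t pmu_read_cntr(struct pmc_accessor *acc, int pmc_idx)
	{
		if (pmc_idx == ARMV8_PMU_CYCLE_COUNTER_IDX)
			return read_sysreg(pmccntr_el0);

		return acc->read_cntr(pmc_idx);
	}

That would let the EVTYPER/counter checks above be shared between the
event counters and the cycle counter.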

Thank you,
Reiji

> +       GUEST_ASSERT_2(cnt == 0, cnt, typer);
> +}
> +
>  static void guest_code(void)
>  {
>         switch (guest_data.test_stage) {
> @@ -687,6 +757,9 @@ static void guest_code(void)
>         case TEST_STAGE_KVM_EVENT_FILTER:
>                 guest_event_filter_test(guest_data.pmu_filter);
>                 break;
> +       case TEST_STAGE_KVM_EVTYPE_FILTER:
> +               guest_evtype_filter_test();
> +               break;
>         default:
>                 GUEST_ASSERT_1(0, guest_data.test_stage);
>         }
> @@ -1014,10 +1087,22 @@ static void run_kvm_event_filter_test(void)
>         run_kvm_event_filter_error_tests();
>  }
>
> +static void run_kvm_evtype_filter_test(void)
> +{
> +       struct vpmu_vm *vpmu_vm;
> +
> +       guest_data.test_stage = TEST_STAGE_KVM_EVTYPE_FILTER;
> +
> +       vpmu_vm = create_vpmu_vm(guest_code, NULL);
> +       run_vcpu(vpmu_vm->vcpu);
> +       destroy_vpmu_vm(vpmu_vm);
> +}
> +
>  static void run_tests(uint64_t pmcr_n)
>  {
>         run_counter_access_tests(pmcr_n);
>         run_kvm_event_filter_test();
> +       run_kvm_evtype_filter_test();
>  }
>
>  /*
> --
> 2.39.1.581.gbfd45094c4-goog
>


* Re: [REPOST PATCH 11/16] selftests: KVM: aarch64: Add vCPU migration test for PMU
  2023-02-15  1:07 ` [REPOST PATCH 11/16] selftests: KVM: aarch64: Add vCPU migration test for PMU Raghavendra Rao Ananta
@ 2023-03-07  3:43   ` Reiji Watanabe
  2023-03-10  2:28     ` Raghavendra Rao Ananta
  0 siblings, 1 reply; 36+ messages in thread
From: Reiji Watanabe @ 2023-03-07  3:43 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghu,

On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> Implement a stress test for KVM by frequently force-migrating the
> vCPU to random pCPUs in the system. This would validate the
> save/restore functionality of KVM and starting/stopping of
> PMU counters as necessary.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  .../testing/selftests/kvm/aarch64/vpmu_test.c | 195 +++++++++++++++++-
>  1 file changed, 193 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> index 5c166df245589..0c9d801f4e602 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> @@ -19,9 +19,15 @@
>   * higher exception levels (EL2, EL3). Verify this functionality by
>   * configuring and trying to count the events for EL2 in the guest.
>   *
> + * 4. Since the PMU registers are per-cpu, stress KVM by frequently
> + * migrating the guest vCPU to random pCPUs in the system, and check
> + * if the vPMU is still behaving as expected.
> + *
>   * Copyright (c) 2022 Google LLC.
>   *
>   */
> +#define _GNU_SOURCE
> +
>  #include <kvm_util.h>
>  #include <processor.h>
>  #include <test_util.h>
> @@ -30,6 +36,11 @@
>  #include <linux/arm-smccc.h>
>  #include <linux/bitfield.h>
>  #include <linux/bitmap.h>
> +#include <stdlib.h>
> +#include <pthread.h>
> +#include <sys/sysinfo.h>
> +
> +#include "delay.h"
>
>  /* The max number of the PMU event counters (excluding the cycle counter) */
>  #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
> @@ -37,6 +48,8 @@
>  /* The max number of event numbers that's supported */
>  #define ARMV8_PMU_MAX_EVENTS           64
>
> +#define msecs_to_usecs(msec)           ((msec) * 1000LL)
> +
>  /*
>   * The macros and functions below for reading/writing PMEV{CNTR,TYPER}<n>_EL0
>   * were basically copied from arch/arm64/kernel/perf_event.c.
> @@ -265,6 +278,7 @@ enum test_stage {
>         TEST_STAGE_COUNTER_ACCESS = 1,
>         TEST_STAGE_KVM_EVENT_FILTER,
>         TEST_STAGE_KVM_EVTYPE_FILTER,
> +       TEST_STAGE_VCPU_MIGRATION,
>  };
>
>  struct guest_data {
> @@ -275,6 +289,19 @@ struct guest_data {
>
>  static struct guest_data guest_data;
>
> +#define VCPU_MIGRATIONS_TEST_ITERS_DEF         1000
> +#define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2
> +
> +struct test_args {
> +       int vcpu_migration_test_iter;
> +       int vcpu_migration_test_migrate_freq_ms;
> +};
> +
> +static struct test_args test_args = {
> +       .vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF,
> +       .vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS,
> +};
> +
>  static void guest_sync_handler(struct ex_regs *regs)
>  {
>         uint64_t esr, ec;
> @@ -352,7 +379,6 @@ static bool pmu_event_is_supported(uint64_t event)
>                 GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
>  }
>
> -
>  /*
>   * Extra instructions inserted by the compiler would be difficult to compensate
>   * for, so hand assemble everything between, and including, the PMCR accesses
> @@ -459,6 +485,13 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
>         }
>  }
>
> +static void test_basic_pmu_functionality(void)
> +{
> +       /* Test events on generic and cycle counters */
> +       test_instructions_count(0, true);
> +       test_cycles_count(true);
> +}
> +
>  /*
>   * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
>   * are set or cleared as specified in @set_expected.
> @@ -748,6 +781,16 @@ static void guest_evtype_filter_test(void)
>         GUEST_ASSERT_2(cnt == 0, cnt, typer);
>  }
>
> +static void guest_vcpu_migration_test(void)
> +{
> +       /*
> +        * While the userspace continuously migrates this vCPU to random pCPUs,
> +        * run basic PMU functionalities and verify the results.
> +        */
> +       while (test_args.vcpu_migration_test_iter--)
> +               test_basic_pmu_functionality();
> +}
> +
>  static void guest_code(void)
>  {
>         switch (guest_data.test_stage) {
> @@ -760,6 +803,9 @@ static void guest_code(void)
>         case TEST_STAGE_KVM_EVTYPE_FILTER:
>                 guest_evtype_filter_test();
>                 break;
> +       case TEST_STAGE_VCPU_MIGRATION:
> +               guest_vcpu_migration_test();
> +               break;
>         default:
>                 GUEST_ASSERT_1(0, guest_data.test_stage);
>         }
> @@ -837,6 +883,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
>
>         vpmu_vm->vm = vm = vm_create(1);
>         vm_init_descriptor_tables(vm);
> +
>         /* Catch exceptions for easier debugging */
>         for (ec = 0; ec < ESR_EC_NUM; ec++) {
>                 vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec,
> @@ -881,6 +928,8 @@ static void run_vcpu(struct kvm_vcpu *vcpu)
>         struct ucall uc;
>
>         sync_global_to_guest(vcpu->vm, guest_data);
> +       sync_global_to_guest(vcpu->vm, test_args);
> +
>         vcpu_run(vcpu);
>         switch (get_ucall(vcpu, &uc)) {
>         case UCALL_ABORT:
> @@ -1098,11 +1147,112 @@ static void run_kvm_evtype_filter_test(void)
>         destroy_vpmu_vm(vpmu_vm);
>  }
>
> +struct vcpu_migrate_data {
> +       struct vpmu_vm *vpmu_vm;
> +       pthread_t *pt_vcpu;

Nit: Originally, I wasn't sure what 'pt' stands for.
Also, the 'pt_vcpu' made me think this would be a pointer to a vCPU.
Perhaps renaming this to 'vcpu_pthread' might be more clear ?


> +       bool vcpu_done;
> +};
> +
> +static void *run_vcpus_migrate_test_func(void *arg)
> +{
> +       struct vcpu_migrate_data *migrate_data = arg;
> +       struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
> +
> +       run_vcpu(vpmu_vm->vcpu);
> +       migrate_data->vcpu_done = true;
> +
> +       return NULL;
> +}
> +
> +static uint32_t get_pcpu(void)
> +{
> +       uint32_t pcpu;
> +       unsigned int nproc_conf;
> +       cpu_set_t online_cpuset;
> +
> +       nproc_conf = get_nprocs_conf();
> +       sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset);
> +
> +       /* Randomly find an available pCPU to place the vCPU on */
> +       do {
> +               pcpu = rand() % nproc_conf;
> +       } while (!CPU_ISSET(pcpu, &online_cpuset));
> +
> +       return pcpu;
> +}
> +
> +static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)

Nit: You might want to pass a pthread_t rather than migrate_data
unless the function uses some more fields of the data in the
following patches.
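
i.e. something like (untested; this also picks up the 'vcpu_pthread'
naming suggestion above):

	static int migrate_vcpu(pthread_t vcpu_pthread)
	{
		int ret;
		cpu_set_t cpuset;
		uint32_t new_pcpu = get_pcpu();

		CPU_ZERO(&cpuset);
		CPU_SET(new_pcpu, &cpuset);

		pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu);

		ret = pthread_setaffinity_np(vcpu_pthread, sizeof(cpuset), &cpuset);

		/* Allow the error where the vCPU thread is already finished */
		TEST_ASSERT(ret == 0 || ret == ESRCH,
			    "Failed to migrate the vCPU to pCPU: %u; ret: %d\n", new_pcpu, ret);

		return ret;
	}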

> +{
> +       int ret;
> +       cpu_set_t cpuset;
> +       uint32_t new_pcpu = get_pcpu();
> +
> +       CPU_ZERO(&cpuset);
> +       CPU_SET(new_pcpu, &cpuset);
> +
> +       pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu);
> +
> +       ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset);
> +
> +       /* Allow the error where the vCPU thread is already finished */
> +       TEST_ASSERT(ret == 0 || ret == ESRCH,
> +                   "Failed to migrate the vCPU to pCPU: %u; ret: %d\n", new_pcpu, ret);
> +
> +       return ret;
> +}
> +
> +static void *vcpus_migrate_func(void *arg)
> +{
> +       struct vcpu_migrate_data *migrate_data = arg;
> +
> +       while (!migrate_data->vcpu_done) {
> +               usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms));
> +               migrate_vcpu(migrate_data);
> +       }
> +
> +       return NULL;
> +}
> +
> +static void run_vcpu_migration_test(uint64_t pmcr_n)
> +{
> +       int ret;
> +       struct vpmu_vm *vpmu_vm;
> +       pthread_t pt_vcpu, pt_sched;
> +       struct vcpu_migrate_data migrate_data = {
> +               .pt_vcpu = &pt_vcpu,
> +               .vcpu_done = false,
> +       };
> +
> +       __TEST_REQUIRE(get_nprocs() >= 2, "At least two pCPUs needed for vCPU migration test");

Considering that get_pcpu() chooses the target CPU from CPUs returned
from sched_getaffinity(), I would think the test should use the number of
the bits set in the returned cpu_set_t from sched_getaffinity() here
instead of get_nprocs(), as those numbers could be different (e.g.  if the
test runs with taskset with a subset of the CPUs on the system).
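
e.g. something like (untested; CPU_COUNT() needs _GNU_SOURCE, which the
test already defines):

	cpu_set_t online_cpuset;

	sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset);
	__TEST_REQUIRE(CPU_COUNT(&online_cpuset) >= 2,
		       "At least two usable pCPUs needed for vCPU migration test");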


> +
> +       guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
> +       guest_data.expected_pmcr_n = pmcr_n;
> +
> +       migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(guest_code, NULL);
> +
> +       /* Initialize random number generation for migrating vCPUs to random pCPUs */
> +       srand(time(NULL));
> +
> +       /* Spawn a vCPU thread */
> +       ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data);
> +       TEST_ASSERT(!ret, "Failed to create the vCPU thread");
> +
> +       /* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */
> +       ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data);

Why do you want to spawn another thread to run vcpus_migrate_func(),
rather than calling that from the current thread ?
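
i.e. something like (untested):

	/* Spawn only the vCPU thread... */
	ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data);
	TEST_ASSERT(!ret, "Failed to create the vCPU thread");

	/* ...and drive the migrations from the current thread */
	vcpus_migrate_func(&migrate_data);

	pthread_join(pt_vcpu, NULL);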


> +       TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs");
> +
> +       pthread_join(pt_sched, NULL);
> +       pthread_join(pt_vcpu, NULL);
> +
> +       destroy_vpmu_vm(vpmu_vm);
> +}
> +
>  static void run_tests(uint64_t pmcr_n)
>  {
>         run_counter_access_tests(pmcr_n);
>         run_kvm_event_filter_test();
>         run_kvm_evtype_filter_test();
> +       run_vcpu_migration_test(pmcr_n);
>  }
>
>  /*
> @@ -1121,12 +1271,53 @@ static uint64_t get_pmcr_n_limit(void)
>         return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
>  }
>
> -int main(void)
> +static void print_help(char *name)
> +{
> +       pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n",
> +               name);
> +       pr_info("\t-i: Number of iterations of vCPU migrations test (default: %u)\n",
> +               VCPU_MIGRATIONS_TEST_ITERS_DEF);
> +       pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. (default: %u)\n",
> +               VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS);
> +       pr_info("\t-h: print this help screen\n");
> +}
> +
> +static bool parse_args(int argc, char *argv[])
> +{
> +       int opt;
> +
> +       while ((opt = getopt(argc, argv, "hi:m:")) != -1) {
> +               switch (opt) {
> +               case 'i':
> +                       test_args.vcpu_migration_test_iter =
> +                               atoi_positive("Nr vCPU migration iterations", optarg);
> +                       break;
> +               case 'm':
> +                       test_args.vcpu_migration_test_migrate_freq_ms =
> +                               atoi_positive("vCPU migration frequency", optarg);
> +                       break;
> +               case 'h':
> +               default:
> +                       goto err;
> +               }
> +       }
> +
> +       return true;
> +
> +err:
> +       print_help(argv[0]);
> +       return false;
> +}
> +
> +int main(int argc, char *argv[])
>  {
>         uint64_t pmcr_n;
>
>         TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
>
> +       if (!parse_args(argc, argv))
> +               exit(KSFT_SKIP);
> +
>         pmcr_n = get_pmcr_n_limit();
>         run_tests(pmcr_n);
>
> --
> 2.39.1.581.gbfd45094c4-goog
>

Thanks,
Reiji


* Re: [REPOST PATCH 12/16] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality
  2023-02-15  1:07 ` [REPOST PATCH 12/16] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality Raghavendra Rao Ananta
@ 2023-03-07  6:09   ` Reiji Watanabe
  2023-03-08  1:19     ` Reiji Watanabe
  2023-03-10 23:58     ` Raghavendra Rao Ananta
  0 siblings, 2 replies; 36+ messages in thread
From: Reiji Watanabe @ 2023-03-07  6:09 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghu,

On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> Extend the vCPU migration test to also validate the vPMU's
> functionality when set up for overflow conditions.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  .../testing/selftests/kvm/aarch64/vpmu_test.c | 223 ++++++++++++++++--
>  1 file changed, 198 insertions(+), 25 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> index 0c9d801f4e602..066dc17fa3906 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> @@ -21,7 +21,9 @@
>   *
>   * 4. Since the PMU registers are per-cpu, stress KVM by frequently
>   * migrating the guest vCPU to random pCPUs in the system, and check
> - * if the vPMU is still behaving as expected.
> + * if the vPMU is still behaving as expected. The sub-tests include
> + * testing basic functionalities such as basic counters behavior,
> + * overflow, and overflow interrupts.
>   *
>   * Copyright (c) 2022 Google LLC.
>   *
> @@ -41,13 +43,27 @@
>  #include <sys/sysinfo.h>
>
>  #include "delay.h"
> +#include "gic.h"
> +#include "spinlock.h"
>
>  /* The max number of the PMU event counters (excluding the cycle counter) */
>  #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
>
> +/* The cycle counter bit position that's common among the PMU registers */
> +#define ARMV8_PMU_CYCLE_COUNTER_IDX    31
> +
>  /* The max number of event numbers that's supported */
>  #define ARMV8_PMU_MAX_EVENTS           64
>
> +#define PMU_IRQ                                23
> +
> +#define COUNT_TO_OVERFLOW      0xFULL
> +#define PRE_OVERFLOW_32                (GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1)
> +#define PRE_OVERFLOW_64                (GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1)
> +
> +#define GICD_BASE_GPA  0x8000000ULL
> +#define GICR_BASE_GPA  0x80A0000ULL
> +
>  #define msecs_to_usecs(msec)           ((msec) * 1000LL)
>
>  /*
> @@ -162,6 +178,17 @@ static inline void write_sel_evtyper(int sel, unsigned long val)
>         isb();
>  }
>
> +static inline void write_pmovsclr(unsigned long val)
> +{
> +       write_sysreg(val, pmovsclr_el0);
> +       isb();
> +}
> +
> +static unsigned long read_pmovsclr(void)
> +{
> +       return read_sysreg(pmovsclr_el0);
> +}
> +
>  static inline void enable_counter(int idx)
>  {
>         uint64_t v = read_sysreg(pmcntenset_el0);
> @@ -178,11 +205,33 @@ static inline void disable_counter(int idx)
>         isb();
>  }
>
> +static inline void enable_irq(int idx)
> +{
> +       uint64_t v = read_sysreg(pmcntenset_el0);
> +
> +       write_sysreg(BIT(idx) | v, pmintenset_el1);
> +       isb();
> +}
> +
> +static inline void disable_irq(int idx)
> +{
> +       uint64_t v = read_sysreg(pmcntenset_el0);
> +
> +       write_sysreg(BIT(idx) | v, pmintenclr_el1);
> +       isb();
> +}
> +
>  static inline uint64_t read_cycle_counter(void)
>  {
>         return read_sysreg(pmccntr_el0);
>  }
>
> +static inline void write_cycle_counter(uint64_t v)
> +{
> +       write_sysreg(v, pmccntr_el0);
> +       isb();
> +}
> +
>  static inline void reset_cycle_counter(void)
>  {
>         uint64_t v = read_sysreg(pmcr_el0);
> @@ -289,6 +338,15 @@ struct guest_data {
>
>  static struct guest_data guest_data;
>
> +/* Data to communicate among guest threads */
> +struct guest_irq_data {
> +       uint32_t pmc_idx_bmap;
> +       uint32_t irq_received_bmap;
> +       struct spinlock lock;
> +};
> +
> +static struct guest_irq_data guest_irq_data;
> +
>  #define VCPU_MIGRATIONS_TEST_ITERS_DEF         1000
>  #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2
>
> @@ -322,6 +380,79 @@ static void guest_sync_handler(struct ex_regs *regs)
>         expected_ec = INVALID_EC;
>  }
>
> +static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_bmap)

Can you please add a comment about what is pmc_idx_bmap ?
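
If I'm reading the code right, something like:

	/*
	 * pmc_idx_bmap: bitmap of the PMC indices (bit 31 == cycle counter)
	 * for which the test currently expects an overflow interrupt.
	 */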


> +{
> +       /*
> +        * Fail if there's an interrupt from unexpected PMCs.
> +        * All the expected events' IRQs may not arrive at the same time.
> +        * Hence, check if the interrupt is valid only if it's expected.
> +        */
> +       if (pmovsclr & BIT(pmc_idx)) {
> +               GUEST_ASSERT_3(pmc_idx_bmap & BIT(pmc_idx), pmc_idx, pmovsclr, pmc_idx_bmap);
> +               write_pmovsclr(BIT(pmc_idx));
> +       }
> +}
> +
> +static void guest_irq_handler(struct ex_regs *regs)
> +{
> +       uint32_t pmc_idx_bmap;
> +       uint64_t i, pmcr_n = get_pmcr_n();
> +       uint32_t pmovsclr = read_pmovsclr();
> +       unsigned int intid = gic_get_and_ack_irq();
> +
> +       /* No other IRQ apart from the PMU IRQ is expected */
> +       GUEST_ASSERT_1(intid == PMU_IRQ, intid);
> +
> +       spin_lock(&guest_irq_data.lock);

Could you explain why this lock is required in this patch ??
If this is used to serialize the interrupt context code and
the normal (non-interrupt) context code, you might want to
disable the IRQ ?  Using the spin lock won't work well for
that if the interrupt handler is invoked while the normal
context code grabs the lock.
Having said that, since execute_precise_instrs() disables the PMU
 via PMCR, and does isb after that, I don't think the overflow
interrupt is delivered while the normal context code is in
pmu_irq_*() anyway.
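
If the serialization does turn out to be necessary, something like the
following (untested) might be simpler than the spinlock, e.g. for
pmu_irq_init():

	static void pmu_irq_init(int pmc_idx)
	{
		write_pmovsclr(BIT(pmc_idx));

		/* Keep the PMU IRQ handler from running while the bitmaps are updated */
		local_irq_disable();
		guest_irq_data.irq_received_bmap &= ~BIT(pmc_idx);
		guest_irq_data.pmc_idx_bmap |= BIT(pmc_idx);
		local_irq_enable();

		enable_irq(pmc_idx);
	}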

> +       pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
> +
> +       for (i = 0; i < pmcr_n; i++)
> +               guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
> +       guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);
> +
> +       /* Mark IRQ as received for the corresponding PMCs */
> +       WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
> +       spin_unlock(&guest_irq_data.lock);
> +
> +       gic_set_eoi(intid);
> +}
> +
> +static int pmu_irq_received(int pmc_idx)
> +{
> +       bool irq_received;
> +
> +       spin_lock(&guest_irq_data.lock);
> +       irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
> +       WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> +       spin_unlock(&guest_irq_data.lock);
> +
> +       return irq_received;
> +}
> +
> +static void pmu_irq_init(int pmc_idx)
> +{
> +       write_pmovsclr(BIT(pmc_idx));
> +
> +       spin_lock(&guest_irq_data.lock);
> +       WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> +       WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
> +       spin_unlock(&guest_irq_data.lock);
> +
> +       enable_irq(pmc_idx);
> +}
> +
> +static void pmu_irq_exit(int pmc_idx)
> +{
> +       write_pmovsclr(BIT(pmc_idx));
> +
> +       spin_lock(&guest_irq_data.lock);
> +       WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> +       WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> +       spin_unlock(&guest_irq_data.lock);
> +
> +       disable_irq(pmc_idx);
> +}
> +
>  /*
>   * Run the given operation that should trigger an exception with the
>   * given exception class. The exception handler (guest_sync_handler)
> @@ -420,12 +551,20 @@ static void execute_precise_instrs(int num, uint32_t pmcr)
>         precise_instrs_loop(loop, pmcr);
>  }
>
> -static void test_instructions_count(int pmc_idx, bool expect_count)
> +static void test_instructions_count(int pmc_idx, bool expect_count, bool test_overflow)
>  {
>         int i;
>         struct pmc_accessor *acc;
> -       uint64_t cnt;
> -       int instrs_count = 100;
> +       uint64_t cntr_val = 0;
> +       int instrs_count = 500;

Can we set instrs_count based on the value we set for cntr_val?
(so that instrs_count can be adjusted automatically when we change the
value of cntr_val ?)
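
Maybe something like (untested; the '100' margin is arbitrary):

	uint64_t cntr_val = 0;
	int instrs_count = 100;

	if (test_overflow) {
		/* Overflow scenarios can only be tested when a count is expected */
		GUEST_ASSERT_1(expect_count, pmc_idx);

		cntr_val = PRE_OVERFLOW_32;
		/* Enough to take the counter from cntr_val past the 32-bit boundary */
		instrs_count = COUNT_TO_OVERFLOW + 100;
		pmu_irq_init(pmc_idx);
	}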

> +
> +       if (test_overflow) {
> +               /* Overflow scenarios can only be tested when a count is expected */
> +               GUEST_ASSERT_1(expect_count, pmc_idx);
> +
> +               cntr_val = PRE_OVERFLOW_32;
> +               pmu_irq_init(pmc_idx);
> +       }
>
>         enable_counter(pmc_idx);
>
> @@ -433,41 +572,68 @@ static void test_instructions_count(int pmc_idx, bool expect_count)
>         for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
>                 acc = &pmc_accessors[i];
>
> -               pmu_disable_reset();
> -
> +               acc->write_cntr(pmc_idx, cntr_val);
>                 acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
>
> -               /* Enable the PMU and execute precisely number of instructions as a workload */
> -               execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
> +               /*
> +                * Enable the PMU and execute a precise number of instructions as a workload.
> +                * Since execute_precise_instrs() disables the PMU at the end, 'instrs_count'
> +                * should have enough instructions to raise an IRQ.
> +                */
> +               execute_precise_instrs(instrs_count, ARMV8_PMU_PMCR_E);
>
> -               /* If a count is expected, the counter should be increased by 'instrs_count' */
> -               cnt = acc->read_cntr(pmc_idx);
> -               GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
> -                               i, expect_count, cnt, instrs_count);
> +               /*
> +                * If an overflow is expected, only check for the overflow flag.
> +                * As overflow interrupt is enabled, the interrupt would add additional
> +                * instructions and mess up the precise instruction count. Hence, measure
> +                * the instructions count only when the test is not set up for an overflow.
> +                */
> +               if (test_overflow) {
> +                       GUEST_ASSERT_2(pmu_irq_received(pmc_idx), pmc_idx, i);
> +               } else {
> +                       uint64_t cnt = acc->read_cntr(pmc_idx);
> +
> +                       GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
> +                                       pmc_idx, i, cnt, expect_count);
> +               }
>         }
>
> -       disable_counter(pmc_idx);
> +       if (test_overflow)
> +               pmu_irq_exit(pmc_idx);
>  }
>
> -static void test_cycles_count(bool expect_count)
> +static void test_cycles_count(bool expect_count, bool test_overflow)
>  {
>         uint64_t cnt;
>
> -       pmu_enable();
> -       reset_cycle_counter();
> +       if (test_overflow) {
> +               /* Overflow scenarios can only be tested when a count is expected */
> +               GUEST_ASSERT(expect_count);
> +
> +               write_cycle_counter(PRE_OVERFLOW_64);
> +               pmu_irq_init(ARMV8_PMU_CYCLE_COUNTER_IDX);
> +       } else {
> +               reset_cycle_counter();
> +       }
>
>         /* Count cycles in EL0 and EL1 */
>         write_pmccfiltr(0);
>         enable_cycle_counter();
>
> +       /* Enable the PMU and execute a precise number of instructions as a workload */

Can you please add a comment on why we do these 500 iterations ?
Can we set the iteration number based on the initial value of the
cycle counter ?

> +       execute_precise_instrs(500, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
>         cnt = read_cycle_counter();
  >
>         /*
>          * If a count is expected by the test, the cycle counter should be increased by
> -        * at least 1, as there is at least one instruction between enabling the
> +        * at least 1, as there are a number of instructions between enabling the
>          * counter and reading the counter.
>          */

"at least 1" doesn't seem to be consistent with the GUEST_ASSERT_2 below
when test_overflow is true, considering the initial value of the cycle counter.
Shouldn't this GUEST_ASSERT_2 be executed only if test_overflow is false ?
(Or do you want to adjust the comment ?)
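
i.e. the first option would look something like (untested):

	if (test_overflow) {
		GUEST_ASSERT_2(pmu_irq_received(ARMV8_PMU_CYCLE_COUNTER_IDX), cnt, expect_count);
		pmu_irq_exit(ARMV8_PMU_CYCLE_COUNTER_IDX);
	} else {
		GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
	}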

>         GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
> +       if (test_overflow) {
> +               GUEST_ASSERT_2(pmu_irq_received(ARMV8_PMU_CYCLE_COUNTER_IDX), cnt, expect_count);
> +               pmu_irq_exit(ARMV8_PMU_CYCLE_COUNTER_IDX);
> +       }
>
>         disable_cycle_counter();
>         pmu_disable_reset();
> @@ -477,19 +643,28 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
>  {
>         switch (event) {
>         case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
> -               test_instructions_count(pmc_idx, expect_count);
> +               test_instructions_count(pmc_idx, expect_count, false);
>                 break;
>         case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
> -               test_cycles_count(expect_count);
> +               test_cycles_count(expect_count, false);
>                 break;
>         }
>  }
>
>  static void test_basic_pmu_functionality(void)
>  {
> +       local_irq_disable();
> +       gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
> +       gic_irq_enable(PMU_IRQ);
> +       local_irq_enable();
> +
>         /* Test events on generic and cycle counters */
> -       test_instructions_count(0, true);
> -       test_cycles_count(true);
> +       test_instructions_count(0, true, false);
> +       test_cycles_count(true, false);
> +
> +       /* Test overflow with interrupts on generic and cycle counters */
> +       test_instructions_count(0, true, true);
> +       test_cycles_count(true, true);
>  }
>
>  /*
> @@ -813,9 +988,6 @@ static void guest_code(void)
>         GUEST_DONE();
>  }
>
> -#define GICD_BASE_GPA  0x8000000ULL
> -#define GICR_BASE_GPA  0x80A0000ULL
> -
>  static unsigned long *
>  set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
>  {
> @@ -866,7 +1038,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
>         struct kvm_vcpu *vcpu;
>         struct kvm_vcpu_init init;
>         uint8_t pmuver, ec;
> -       uint64_t dfr0, irq = 23;
> +       uint64_t dfr0, irq = PMU_IRQ;
>         struct vpmu_vm *vpmu_vm;
>         struct kvm_device_attr irq_attr = {
>                 .group = KVM_ARM_VCPU_PMU_V3_CTRL,
> @@ -883,6 +1055,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
>
>         vpmu_vm->vm = vm = vm_create(1);
>         vm_init_descriptor_tables(vm);
> +       vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
>
>         /* Catch exceptions for easier debugging */
>         for (ec = 0; ec < ESR_EC_NUM; ec++) {
> --
> 2.39.1.581.gbfd45094c4-goog
>

Thanks,
Reiji


* Re: [REPOST PATCH 10/16] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test
  2023-03-07  1:19   ` Reiji Watanabe
@ 2023-03-07 16:09     ` Sean Christopherson
  2023-03-10 21:57     ` Raghavendra Rao Ananta
  1 sibling, 0 replies; 36+ messages in thread
From: Sean Christopherson @ 2023-03-07 16:09 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Raghavendra Rao Ananta, Oliver Upton, Marc Zyngier,
	Ricardo Koller, James Morse, Suzuki K Poulose, Paolo Bonzini,
	Jing Zhang, Colton Lewis, linux-arm-kernel, kvmarm, linux-kernel,
	kvm

RESEND is the "standard" tag, not REPOST.

On Mon, Mar 06, 2023, Reiji Watanabe wrote:
> Hi Raghu,
> 
> On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
> <rananta@google.com> wrote:
> >
> > KVM doesn't allow the guests to modify the filter types,
> > such as counting events in nonsecure/secure-EL2, EL3, and
> > so on. Validate the same by force-configuring the bits
> > in PMXEVTYPER_EL0, PMEVTYPERn_EL0, and PMCCFILTR_EL0
> > registers.
> >
> > The test extends further by trying to create an event
> > for counting only in EL2 and validates if the counter
> > is not progressing.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> > +static void guest_evtype_filter_test(void)
> > +{
> > +       int i;
> > +       struct pmc_accessor *acc;
> > +       uint64_t typer, cnt;
> > +       struct arm_smccc_res res;
> > +
> > +       pmu_enable();
> > +
> > +       /*
> > +        * KVM blocks the guests from creating events for counting in Secure/Non-Secure Hyp (EL2),
> > +        * Monitor (EL3), and Multithreading configuration. It applies the mask
> > +        * ARMV8_PMU_EVTYPE_MASK against guest accesses to PMXEVTYPER_EL0, PMEVTYPERn_EL0,
> > +        * and PMCCFILTR_EL0 registers to prevent this. Check if KVM honors this using all possible
> > +        * ways to configure the EVTYPER.
> > +        */
> 
> I would prefer to break long lines into multiple lines for these comments
> (or other comments in these patches), as "Linux kernel coding style"
> suggests.

+1.  And on the other side of the coin, wrap the changelog closer to ~75 chars,
~54 chars is waaay too aggressive.


* Re: [REPOST PATCH 12/16] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality
  2023-03-07  6:09   ` Reiji Watanabe
@ 2023-03-08  1:19     ` Reiji Watanabe
  2023-03-10 23:58     ` Raghavendra Rao Ananta
  1 sibling, 0 replies; 36+ messages in thread
From: Reiji Watanabe @ 2023-03-08  1:19 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

On Mon, Mar 6, 2023 at 10:09 PM Reiji Watanabe <reijiw@google.com> wrote:
>
> Hi Raghu,
>
> On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
> <rananta@google.com> wrote:
> >
> > Extend the vCPU migration test to also validate the vPMU's
> > functionality when set up for overflow conditions.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  .../testing/selftests/kvm/aarch64/vpmu_test.c | 223 ++++++++++++++++--
> >  1 file changed, 198 insertions(+), 25 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > index 0c9d801f4e602..066dc17fa3906 100644
> > --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > @@ -21,7 +21,9 @@
> >   *
> >   * 4. Since the PMU registers are per-cpu, stress KVM by frequently
> >   * migrating the guest vCPU to random pCPUs in the system, and check
> > - * if the vPMU is still behaving as expected.
> > + * if the vPMU is still behaving as expected. The sub-tests include
> > + * testing basic functionalities such as basic counters behavior,
> > + * overflow, and overflow interrupts.
> >   *
> >   * Copyright (c) 2022 Google LLC.
> >   *
> > @@ -41,13 +43,27 @@
> >  #include <sys/sysinfo.h>
> >
> >  #include "delay.h"
> > +#include "gic.h"
> > +#include "spinlock.h"
> >
> >  /* The max number of the PMU event counters (excluding the cycle counter) */
> >  #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
> >
> > +/* The cycle counter bit position that's common among the PMU registers */
> > +#define ARMV8_PMU_CYCLE_COUNTER_IDX    31
> > +
> >  /* The max number of event numbers that's supported */
> >  #define ARMV8_PMU_MAX_EVENTS           64
> >
> > +#define PMU_IRQ                                23
> > +
> > +#define COUNT_TO_OVERFLOW      0xFULL
> > +#define PRE_OVERFLOW_32                (GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1)
> > +#define PRE_OVERFLOW_64                (GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1)

Reset values of PMCR_EL0.LP and PMCR_EL0.LC are UNKNOWN.
As the test seems to expect a 64-bit overflow on the cycle counter
and a 32-bit overflow on the other counters, the guest code should
explicitly clear PMCR_EL0.LP and set PMCR_EL0.LC before the test.
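
e.g. something like this at the start of the overflow tests (untested;
assuming the imported perf_event.h defines ARMV8_PMU_PMCR_LC and
ARMV8_PMU_PMCR_LP):

	uint64_t pmcr = read_sysreg(pmcr_el0);

	pmcr |= ARMV8_PMU_PMCR_LC;	/* 64-bit overflow for the cycle counter */
	pmcr &= ~ARMV8_PMU_PMCR_LP;	/* 32-bit overflow for the event counters */
	write_sysreg(pmcr, pmcr_el0);
	isb();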

Thanks,
Reiji


> > +
> > +#define GICD_BASE_GPA  0x8000000ULL
> > +#define GICR_BASE_GPA  0x80A0000ULL
> > +
> >  #define msecs_to_usecs(msec)           ((msec) * 1000LL)
> >
> >  /*
> > @@ -162,6 +178,17 @@ static inline void write_sel_evtyper(int sel, unsigned long val)
> >         isb();
> >  }
> >
> > +static inline void write_pmovsclr(unsigned long val)
> > +{
> > +       write_sysreg(val, pmovsclr_el0);
> > +       isb();
> > +}
> > +
> > +static unsigned long read_pmovsclr(void)
> > +{
> > +       return read_sysreg(pmovsclr_el0);
> > +}
> > +
> >  static inline void enable_counter(int idx)
> >  {
> >         uint64_t v = read_sysreg(pmcntenset_el0);
> > @@ -178,11 +205,33 @@ static inline void disable_counter(int idx)
> >         isb();
> >  }
> >
> > +static inline void enable_irq(int idx)
> > +{
> > +       uint64_t v = read_sysreg(pmcntenset_el0);
> > +
> > +       write_sysreg(BIT(idx) | v, pmintenset_el1);
> > +       isb();
> > +}
> > +
> > +static inline void disable_irq(int idx)
> > +{
> > +       uint64_t v = read_sysreg(pmcntenset_el0);
> > +
> > +       write_sysreg(BIT(idx) | v, pmintenclr_el1);
> > +       isb();
> > +}
> > +
> >  static inline uint64_t read_cycle_counter(void)
> >  {
> >         return read_sysreg(pmccntr_el0);
> >  }
> >
> > +static inline void write_cycle_counter(uint64_t v)
> > +{
> > +       write_sysreg(v, pmccntr_el0);
> > +       isb();
> > +}
> > +
> >  static inline void reset_cycle_counter(void)
> >  {
> >         uint64_t v = read_sysreg(pmcr_el0);
> > @@ -289,6 +338,15 @@ struct guest_data {
> >
> >  static struct guest_data guest_data;
> >
> > +/* Data to communicate among guest threads */
> > +struct guest_irq_data {
> > +       uint32_t pmc_idx_bmap;
> > +       uint32_t irq_received_bmap;
> > +       struct spinlock lock;
> > +};
> > +
> > +static struct guest_irq_data guest_irq_data;
> > +
> >  #define VCPU_MIGRATIONS_TEST_ITERS_DEF         1000
> >  #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2
> >
> > @@ -322,6 +380,79 @@ static void guest_sync_handler(struct ex_regs *regs)
> >         expected_ec = INVALID_EC;
> >  }
> >
> > +static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_bmap)
>
> Can you please add a comment about what is pmc_idx_bmap ?
>
>
> > +{
> > +       /*
> > +        * Fail if there's an interrupt from unexpected PMCs.
> > +        * All the expected events' IRQs may not arrive at the same time.
> > +        * Hence, check if the interrupt is valid only if it's expected.
> > +        */
> > +       if (pmovsclr & BIT(pmc_idx)) {
> > +               GUEST_ASSERT_3(pmc_idx_bmap & BIT(pmc_idx), pmc_idx, pmovsclr, pmc_idx_bmap);
> > +               write_pmovsclr(BIT(pmc_idx));
> > +       }
> > +}
> > +
> > +static void guest_irq_handler(struct ex_regs *regs)
> > +{
> > +       uint32_t pmc_idx_bmap;
> > +       uint64_t i, pmcr_n = get_pmcr_n();
> > +       uint32_t pmovsclr = read_pmovsclr();
> > +       unsigned int intid = gic_get_and_ack_irq();
> > +
> > +       /* No other IRQ apart from the PMU IRQ is expected */
> > +       GUEST_ASSERT_1(intid == PMU_IRQ, intid);
> > +
> > +       spin_lock(&guest_irq_data.lock);
>
> Could you explain why this lock is required in this patch ??
> If this is used to serialize the interrupt context code and
> the normal (non-interrupt) context code, you might want to
> disable the IRQ ?  Using the spin lock won't work well for
> that if the interrupt handler is invoked while the normal
> context code grabs the lock.
> Having said that, since execute_precise_instrs() disables the PMU
>  via PMCR, and does isb after that, I don't think the overflow
> interrupt is delivered while the normal context code is in
> pmu_irq_*() anyway.
>
> > +       pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
> > +
> > +       for (i = 0; i < pmcr_n; i++)
> > +               guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
> > +       guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);
> > +
> > +       /* Mark IRQ as received for the corresponding PMCs */
> > +       WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
> > +       spin_unlock(&guest_irq_data.lock);
> > +
> > +       gic_set_eoi(intid);
> > +}
> > +
> > +static int pmu_irq_received(int pmc_idx)
> > +{
> > +       bool irq_received;
> > +
> > +       spin_lock(&guest_irq_data.lock);
> > +       irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
> > +       WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> > +       spin_unlock(&guest_irq_data.lock);
> > +
> > +       return irq_received;
> > +}
> > +
> > +static void pmu_irq_init(int pmc_idx)
> > +{
> > +       write_pmovsclr(BIT(pmc_idx));
> > +
> > +       spin_lock(&guest_irq_data.lock);
> > +       WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> > +       WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
> > +       spin_unlock(&guest_irq_data.lock);
> > +
> > +       enable_irq(pmc_idx);
> > +}
> > +
> > +static void pmu_irq_exit(int pmc_idx)
> > +{
> > +       write_pmovsclr(BIT(pmc_idx));
> > +
> > +       spin_lock(&guest_irq_data.lock);
> > +       WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> > +       WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> > +       spin_unlock(&guest_irq_data.lock);
> > +
> > +       disable_irq(pmc_idx);
> > +}
> > +
> >  /*
> >   * Run the given operation that should trigger an exception with the
> >   * given exception class. The exception handler (guest_sync_handler)
> > @@ -420,12 +551,20 @@ static void execute_precise_instrs(int num, uint32_t pmcr)
> >         precise_instrs_loop(loop, pmcr);
> >  }
> >
> > -static void test_instructions_count(int pmc_idx, bool expect_count)
> > +static void test_instructions_count(int pmc_idx, bool expect_count, bool test_overflow)
> >  {
> >         int i;
> >         struct pmc_accessor *acc;
> > -       uint64_t cnt;
> > -       int instrs_count = 100;
> > +       uint64_t cntr_val = 0;
> > +       int instrs_count = 500;
>
> Can we set instrs_count based on the value we set for cntr_val?
> (so that instrs_count can be adjusted automatically when we change the
> value of cntr_val ?)
>
> > +
> > +       if (test_overflow) {



> > +               /* Overflow scenarios can only be tested when a count is expected */
> > +               GUEST_ASSERT_1(expect_count, pmc_idx);
> > +
> > +               cntr_val = PRE_OVERFLOW_32;
> > +               pmu_irq_init(pmc_idx);
> > +       }
> >
> >         enable_counter(pmc_idx);
> >
> > @@ -433,41 +572,68 @@ static void test_instructions_count(int pmc_idx, bool expect_count)
> >         for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> >                 acc = &pmc_accessors[i];
> >
> > -               pmu_disable_reset();
> > -
> > +               acc->write_cntr(pmc_idx, cntr_val);
> >                 acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> >
> > -               /* Enable the PMU and execute precisely number of instructions as a workload */
> > -               execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
> > +               /*
> > +                * Enable the PMU and execute a precise number of instructions as a workload.
> > +                * Since execute_precise_instrs() disables the PMU at the end, 'instrs_count'
> > +                * should have enough instructions to raise an IRQ.
> > +                */
> > +               execute_precise_instrs(instrs_count, ARMV8_PMU_PMCR_E);
> >
> > -               /* If a count is expected, the counter should be increased by 'instrs_count' */
> > -               cnt = acc->read_cntr(pmc_idx);
> > -               GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
> > -                               i, expect_count, cnt, instrs_count);
> > +               /*
> > +                * If an overflow is expected, only check for the overflow flag.
> > +                * As overflow interrupt is enabled, the interrupt would add additional
> > +                * instructions and mess up the precise instruction count. Hence, measure
> > +                * the instructions count only when the test is not set up for an overflow.
> > +                */
> > +               if (test_overflow) {
> > +                       GUEST_ASSERT_2(pmu_irq_received(pmc_idx), pmc_idx, i);
> > +               } else {
> > +                       uint64_t cnt = acc->read_cntr(pmc_idx);
> > +
> > +                       GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
> > +                                       pmc_idx, i, cnt, expect_count);
> > +               }
> >         }
> >
> > -       disable_counter(pmc_idx);
> > +       if (test_overflow)
> > +               pmu_irq_exit(pmc_idx);
> >  }
> >
> > -static void test_cycles_count(bool expect_count)
> > +static void test_cycles_count(bool expect_count, bool test_overflow)
> >  {
> >         uint64_t cnt;
> >
> > -       pmu_enable();
> > -       reset_cycle_counter();
> > +       if (test_overflow) {
> > +               /* Overflow scenarios can only be tested when a count is expected */
> > +               GUEST_ASSERT(expect_count);
> > +
> > +               write_cycle_counter(PRE_OVERFLOW_64);
> > +               pmu_irq_init(ARMV8_PMU_CYCLE_COUNTER_IDX);
> > +       } else {
> > +               reset_cycle_counter();
> > +       }
> >
> >         /* Count cycles in EL0 and EL1 */
> >         write_pmccfiltr(0);
> >         enable_cycle_counter();
> >
> > +       /* Enable the PMU and execute a precise number of instructions as a workload */
>
> Can you please add a comment on why we do these 500 iterations ?
> Can we set the iteration number based on the initial value of the
> cycle counter ?
>
> > +       execute_precise_instrs(500, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
> >         cnt = read_cycle_counter();
>   >
> >         /*
> >          * If a count is expected by the test, the cycle counter should be increased by
> > -        * at least 1, as there is at least one instruction between enabling the
> > +        * at least 1, as there are a number of instructions between enabling the
> >          * counter and reading the counter.
> >          */
>
> "at least 1" doesn't seem to be consistent with the GUEST_ASSERT_2 below
> when test_overflow is true, considering the initial value of the cycle counter.
> Shouldn't this GUEST_ASSERT_2 be executed only if test_overflow is false ?
> (Or do you want to adjust the comment ?)
>
> >         GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
> > +       if (test_overflow) {
> > +               GUEST_ASSERT_2(pmu_irq_received(ARMV8_PMU_CYCLE_COUNTER_IDX), cnt, expect_count);
> > +               pmu_irq_exit(ARMV8_PMU_CYCLE_COUNTER_IDX);
> > +       }
> >
> >         disable_cycle_counter();
> >         pmu_disable_reset();
> > @@ -477,19 +643,28 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
> >  {
> >         switch (event) {
> >         case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
> > -               test_instructions_count(pmc_idx, expect_count);
> > +               test_instructions_count(pmc_idx, expect_count, false);
> >                 break;
> >         case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
> > -               test_cycles_count(expect_count);
> > +               test_cycles_count(expect_count, false);
> >                 break;
> >         }
> >  }
> >
> >  static void test_basic_pmu_functionality(void)
> >  {
> > +       local_irq_disable();
> > +       gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
> > +       gic_irq_enable(PMU_IRQ);
> > +       local_irq_enable();
> > +
> >         /* Test events on generic and cycle counters */
> > -       test_instructions_count(0, true);
> > -       test_cycles_count(true);
> > +       test_instructions_count(0, true, false);
> > +       test_cycles_count(true, false);
> > +
> > +       /* Test overflow with interrupts on generic and cycle counters */
> > +       test_instructions_count(0, true, true);
> > +       test_cycles_count(true, true);
> >  }
> >
> >  /*
> > @@ -813,9 +988,6 @@ static void guest_code(void)
> >         GUEST_DONE();
> >  }
> >
> > -#define GICD_BASE_GPA  0x8000000ULL
> > -#define GICR_BASE_GPA  0x80A0000ULL
> > -
> >  static unsigned long *
> >  set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
> >  {
> > @@ -866,7 +1038,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
> >         struct kvm_vcpu *vcpu;
> >         struct kvm_vcpu_init init;
> >         uint8_t pmuver, ec;
> > -       uint64_t dfr0, irq = 23;
> > +       uint64_t dfr0, irq = PMU_IRQ;
> >         struct vpmu_vm *vpmu_vm;
> >         struct kvm_device_attr irq_attr = {
> >                 .group = KVM_ARM_VCPU_PMU_V3_CTRL,
> > @@ -883,6 +1055,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
> >
> >         vpmu_vm->vm = vm = vm_create(1);
> >         vm_init_descriptor_tables(vm);
> > +       vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
> >
> >         /* Catch exceptions for easier debugging */
> >         for (ec = 0; ec < ESR_EC_NUM; ec++) {
> > --
> > 2.39.1.581.gbfd45094c4-goog
> >
>
> Thanks,
> Reiji


* Re: [REPOST PATCH 13/16] selftests: KVM: aarch64: Test chained events for PMU
  2023-02-15  1:07 ` [REPOST PATCH 13/16] selftests: KVM: aarch64: Test chained events for PMU Raghavendra Rao Ananta
@ 2023-03-08  3:15   ` Reiji Watanabe
  0 siblings, 0 replies; 36+ messages in thread
From: Reiji Watanabe @ 2023-03-08  3:15 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

HI Raghu,


On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> Extend the vPMU's vCPU migration test to validate
> chained events, and their overflow conditions.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  .../testing/selftests/kvm/aarch64/vpmu_test.c | 76 ++++++++++++++++++-
>  1 file changed, 75 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> index 066dc17fa3906..de725f4339ad5 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> @@ -23,7 +23,7 @@
>   * migrating the guest vCPU to random pCPUs in the system, and check
>   * if the vPMU is still behaving as expected. The sub-tests include
>   * testing basic functionalities such as basic counters behavior,
> - * overflow, and overflow interrupts.
> + * overflow, overflow interrupts, and chained events.
>   *
>   * Copyright (c) 2022 Google LLC.
>   *
> @@ -61,6 +61,8 @@
>  #define PRE_OVERFLOW_32                (GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1)
>  #define PRE_OVERFLOW_64                (GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1)
>
> +#define ALL_SET_64             GENMASK(63, 0)
> +
>  #define GICD_BASE_GPA  0x8000000ULL
>  #define GICR_BASE_GPA  0x80A0000ULL
>
> @@ -639,6 +641,75 @@ static void test_cycles_count(bool expect_count, bool test_overflow)
>         pmu_disable_reset();
>  }
>
> +static void test_chained_count(int pmc_idx)
> +{
> +       int i, chained_pmc_idx;
> +       struct pmc_accessor *acc;
> +       uint64_t pmcr_n, cnt, cntr_val;
> +
> +       /* The test needs at least two PMCs */
> +       pmcr_n = get_pmcr_n();
> +       GUEST_ASSERT_1(pmcr_n >= 2, pmcr_n);

Nit: As the architecture doesn't require this, rather than causing a
test failure, I would suggest gracefully skipping this test case or
making this a requirement of the test.
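
e.g. skip in the guest (untested):

	/* Chained events need at least two event counters; skip otherwise */
	if (get_pmcr_n() < 2)
		return;

or make it a host-side requirement before running the migration test:

	__TEST_REQUIRE(pmcr_n >= 2,
		       "The chained-event test requires at least two PMCs");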

Thanks,
Reiji


> +
> +       /*
> +        * The chained counter's idx is always chained with (pmc_idx + 1).
> +        * pmc_idx should be even as the chained event doesn't count on
> +        * odd numbered counters.
> +        */
> +       GUEST_ASSERT_1(pmc_idx % 2 == 0, pmc_idx);
> +
> +       /*
> +        * The max counter idx that the chained counter can occupy is
> +        * (pmcr_n - 1), while the actual event sits on (pmcr_n - 2).
> +        */
> +       chained_pmc_idx = pmc_idx + 1;
> +       GUEST_ASSERT(chained_pmc_idx < pmcr_n);
> +
> +       enable_counter(chained_pmc_idx);
> +       pmu_irq_init(chained_pmc_idx);
> +
> +       /* Configure the chained event using all the possible ways */
> +       for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> +               acc = &pmc_accessors[i];
> +
> +               /* Test if the chained counter increments when the base event overflows */
> +
> +               cntr_val = 1;
> +               acc->write_cntr(chained_pmc_idx, cntr_val);
> +               acc->write_typer(chained_pmc_idx, ARMV8_PMUV3_PERFCTR_CHAIN);
> +
> +               /* Chain the counter with pmc_idx that's configured for an overflow */
> +               test_instructions_count(pmc_idx, true, true);
> +
> +               /*
> +                * pmc_idx is also configured to run for all the ARRAY_SIZE(pmc_accessors)
> +                * combinations. Hence, the chained counter (chained_pmc_idx) is expected to be
> +                * cntr_val + ARRAY_SIZE(pmc_accessors).
> +                */
> +               cnt = acc->read_cntr(chained_pmc_idx);
> +               GUEST_ASSERT_4(cnt == cntr_val + ARRAY_SIZE(pmc_accessors),
> +                               pmc_idx, i, cnt, cntr_val + ARRAY_SIZE(pmc_accessors));
> +
> +               /* Test for the overflow of the chained counter itself */
> +
> +               cntr_val = ALL_SET_64;
> +               acc->write_cntr(chained_pmc_idx, cntr_val);
> +
> +               test_instructions_count(pmc_idx, true, true);
> +
> +               /*
> +                * At this point, an interrupt should've been fired for the chained
> +                * counter (which validates the overflow bit), and the counter should've
> +                * wrapped around to ARRAY_SIZE(pmc_accessors) - 1.
> +                */
> +               cnt = acc->read_cntr(chained_pmc_idx);
> +               GUEST_ASSERT_4(cnt == ARRAY_SIZE(pmc_accessors) - 1,
> +                               pmc_idx, i, cnt, ARRAY_SIZE(pmc_accessors));
> +       }
> +
> +       pmu_irq_exit(chained_pmc_idx);
> +}
> +
>  static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
>  {
>         switch (event) {
> @@ -665,6 +736,9 @@ static void test_basic_pmu_functionality(void)
>         /* Test overflow with interrupts on generic and cycle counters */
>         test_instructions_count(0, true, true);
>         test_cycles_count(true, true);
> +
> +       /* Test chained events */
> +       test_chained_count(0);
>  }
>
>  /*
> --
> 2.39.1.581.gbfd45094c4-goog
>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [REPOST PATCH 14/16] selftests: KVM: aarch64: Add PMU test to chain all the counters
  2023-02-15  1:07 ` [REPOST PATCH 14/16] selftests: KVM: aarch64: Add PMU test to chain all the counters Raghavendra Rao Ananta
@ 2023-03-08  3:40   ` Reiji Watanabe
  0 siblings, 0 replies; 36+ messages in thread
From: Reiji Watanabe @ 2023-03-08  3:40 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghu,

On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> Extend the vCPU migration test to occupy all the vPMU counters,
> by configuring chained events on alternate counter-ids and chaining
> them with its corresponding predecessor counter, and verify against
> the extended behavior.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  .../testing/selftests/kvm/aarch64/vpmu_test.c | 60 +++++++++++++++++++
>  1 file changed, 60 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> index de725f4339ad5..fd00acb9391c8 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> @@ -710,6 +710,63 @@ static void test_chained_count(int pmc_idx)
>         pmu_irq_exit(chained_pmc_idx);
>  }
>
> +static void test_chain_all_counters(void)
> +{
> +       int i;
> +       uint64_t cnt, pmcr_n = get_pmcr_n();
> +       struct pmc_accessor *acc = &pmc_accessors[0];

How do you decide whether to test with all accessors ?
Perhaps it might be simpler and more consistent to implement each
test case with a specified accessor as an argument, and run those
tests with each of the accessors?
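For instance (an untested sketch; the wrapper name is made up, and it
assumes test_chain_all_counters() is changed to take the accessor as
its argument instead of hard-coding &pmc_accessors[0]):

        static void run_chain_all_counters_tests(void)
        {
                int i;

                /* Run the same test body once per accessor */
                for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++)
                        test_chain_all_counters(&pmc_accessors[i]);
        }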


> +
> +       /*
> +        * Test the occupancy of all the event counters, by chaining the
> +        * alternate counters. The test assumes that the host hasn't
> +        * occupied any counters. Hence, if the test fails, it could be
> +        * because all the counters weren't available to the guest or
> +        * there's actually a bug in KVM.
> +        */
> +
> +       /*
> +        * Configure even numbered counters to count cpu-cycles, and chain
> +        * each of them with its odd numbered counter.
> +        */

You might want to use the cycle counter as well ?

Thank you,
Reiji

> +       for (i = 0; i < pmcr_n; i++) {
> +               if (i % 2) {
> +                       acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CHAIN);
> +                       acc->write_cntr(i, 1);
> +               } else {
> +                       pmu_irq_init(i);
> +                       acc->write_cntr(i, PRE_OVERFLOW_32);
> +                       acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
> +               }
> +               enable_counter(i);
> +       }
> +
> +       /* Introduce some cycles */
> +       execute_precise_instrs(500, ARMV8_PMU_PMCR_E);
> +
> +       /*
> +        * An overflow interrupt should've arrived for all the even numbered
> +        * counters but none for the odd numbered ones. The odd numbered ones
> +        * should've incremented exactly by 1.
> +        */
> +       for (i = 0; i < pmcr_n; i++) {
> +               if (i % 2) {
> +                       GUEST_ASSERT_1(!pmu_irq_received(i), i);
> +
> +                       cnt = acc->read_cntr(i);
> +                       GUEST_ASSERT_2(cnt == 2, i, cnt);
> +               } else {
> +                       GUEST_ASSERT_1(pmu_irq_received(i), i);
> +               }
> +       }
> +
> +       /* Cleanup the states */
> +       for (i = 0; i < pmcr_n; i++) {
> +               if (i % 2 == 0)
> +                       pmu_irq_exit(i);
> +               disable_counter(i);
> +       }
> +}
> +
>  static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
>  {
>         switch (event) {
> @@ -739,6 +796,9 @@ static void test_basic_pmu_functionality(void)
>
>         /* Test chained events */
>         test_chained_count(0);
> +
> +       /* Test running chained events on all the implemented counters */
> +       test_chain_all_counters();
>  }
>
>  /*
> --
> 2.39.1.581.gbfd45094c4-goog
>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [REPOST PATCH 16/16] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs
  2023-02-15  1:07 ` [REPOST PATCH 16/16] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs Raghavendra Rao Ananta
@ 2023-03-08  4:44   ` Reiji Watanabe
  0 siblings, 0 replies; 36+ messages in thread
From: Reiji Watanabe @ 2023-03-08  4:44 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghu,

On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> To test KVM's handling of multiple vCPU contexts together, that are
> frequently migrating across random pCPUs in the system, extend the test
> to create a VM with multiple vCPUs and validate the behavior.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  .../testing/selftests/kvm/aarch64/vpmu_test.c | 166 ++++++++++++------
>  1 file changed, 114 insertions(+), 52 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> index 239fc7e06b3b9..c9d8e5f9a22ab 100644
> --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> @@ -19,11 +19,12 @@
>   * higher exception levels (EL2, EL3). Verify this functionality by
>   * configuring and trying to count the events for EL2 in the guest.
>   *
> - * 4. Since the PMU registers are per-cpu, stress KVM by frequently
> - * migrating the guest vCPU to random pCPUs in the system, and check
> - * if the vPMU is still behaving as expected. The sub-tests include
> - * testing basic functionalities such as basic counters behavior,
> - * overflow, overflow interrupts, and chained events.
> + * 4. Since the PMU registers are per-cpu, stress KVM by creating a
> + * multi-vCPU VM, then frequently migrate the guest vCPUs to random
> + * pCPUs in the system, and check if the vPMU is still behaving as
> + * expected. The sub-tests include testing basic functionalities such
> + * as basic counters behavior, overflow, overflow interrupts, and
> + * chained events.
>   *
>   * Copyright (c) 2022 Google LLC.
>   *
> @@ -348,19 +349,22 @@ struct guest_irq_data {
>         struct spinlock lock;
>  };
>
> -static struct guest_irq_data guest_irq_data;
> +static struct guest_irq_data guest_irq_data[KVM_MAX_VCPUS];
>
>  #define VCPU_MIGRATIONS_TEST_ITERS_DEF         1000
>  #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2
> +#define VCPU_MIGRATIONS_TEST_NR_VPUS_DEF       2
>
>  struct test_args {
>         int vcpu_migration_test_iter;
>         int vcpu_migration_test_migrate_freq_ms;
> +       int vcpu_migration_test_nr_vcpus;
>  };
>
>  static struct test_args test_args = {
>         .vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF,
>         .vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS,
> +       .vcpu_migration_test_nr_vcpus = VCPU_MIGRATIONS_TEST_NR_VPUS_DEF,
>  };
>
>  static void guest_sync_handler(struct ex_regs *regs)
> @@ -396,26 +400,34 @@ static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_
>         }
>  }
>
> +static struct guest_irq_data *get_irq_data(void)
> +{
> +       uint32_t cpu = guest_get_vcpuid();
> +
> +       return &guest_irq_data[cpu];
> +}
> +
>  static void guest_irq_handler(struct ex_regs *regs)
>  {
>         uint32_t pmc_idx_bmap;
>         uint64_t i, pmcr_n = get_pmcr_n();
>         uint32_t pmovsclr = read_pmovsclr();
>         unsigned int intid = gic_get_and_ack_irq();
> +       struct guest_irq_data *irq_data = get_irq_data();
>
>         /* No other IRQ apart from the PMU IRQ is expected */
>         GUEST_ASSERT_1(intid == PMU_IRQ, intid);
>
> -       spin_lock(&guest_irq_data.lock);
> -       pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
> +       spin_lock(&irq_data->lock);
> +       pmc_idx_bmap = READ_ONCE(irq_data->pmc_idx_bmap);
>
>         for (i = 0; i < pmcr_n; i++)
>                 guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
>         guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);
>
>         /* Mark IRQ as recived for the corresponding PMCs */
> -       WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
> -       spin_unlock(&guest_irq_data.lock);
> +       WRITE_ONCE(irq_data->irq_received_bmap, pmovsclr);
> +       spin_unlock(&irq_data->lock);
>
>         gic_set_eoi(intid);
>  }
> @@ -423,35 +435,40 @@ static void guest_irq_handler(struct ex_regs *regs)
>  static int pmu_irq_received(int pmc_idx)
>  {
>         bool irq_received;
> +       struct guest_irq_data *irq_data = get_irq_data();
>
> -       spin_lock(&guest_irq_data.lock);
> -       irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
> -       WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> -       spin_unlock(&guest_irq_data.lock);
> +       spin_lock(&irq_data->lock);
> +       irq_received = READ_ONCE(irq_data->irq_received_bmap) & BIT(pmc_idx);
> +       WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
> +       spin_unlock(&irq_data->lock);
>
>         return irq_received;
>  }
>
>  static void pmu_irq_init(int pmc_idx)
>  {
> +       struct guest_irq_data *irq_data = get_irq_data();
> +
>         write_pmovsclr(BIT(pmc_idx));
>
> -       spin_lock(&guest_irq_data.lock);
> -       WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> -       WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
> -       spin_unlock(&guest_irq_data.lock);
> +       spin_lock(&irq_data->lock);
> +       WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
> +       WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap | BIT(pmc_idx));
> +       spin_unlock(&irq_data->lock);
>
>         enable_irq(pmc_idx);
>  }
>
>  static void pmu_irq_exit(int pmc_idx)
>  {
> +       struct guest_irq_data *irq_data = get_irq_data();
> +
>         write_pmovsclr(BIT(pmc_idx));
>
> -       spin_lock(&guest_irq_data.lock);
> -       WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> -       WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> -       spin_unlock(&guest_irq_data.lock);
> +       spin_lock(&irq_data->lock);
> +       WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
> +       WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
> +       spin_unlock(&irq_data->lock);
>
>         disable_irq(pmc_idx);
>  }
> @@ -783,7 +800,8 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
>  static void test_basic_pmu_functionality(void)
>  {
>         local_irq_disable();
> -       gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
> +       gic_init(GIC_V3, test_args.vcpu_migration_test_nr_vcpus,
> +                       (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
>         gic_irq_enable(PMU_IRQ);
>         local_irq_enable();
>
> @@ -1093,11 +1111,13 @@ static void guest_evtype_filter_test(void)
>
>  static void guest_vcpu_migration_test(void)
>  {
> +       int iter = test_args.vcpu_migration_test_iter;
> +
>         /*
>          * While the userspace continuously migrates this vCPU to random pCPUs,
>          * run basic PMU functionalities and verify the results.
>          */
> -       while (test_args.vcpu_migration_test_iter--)
> +       while (iter--)
>                 test_basic_pmu_functionality();
>  }
>
> @@ -1472,17 +1492,23 @@ static void run_kvm_evtype_filter_test(void)
>
>  struct vcpu_migrate_data {
>         struct vpmu_vm *vpmu_vm;
> -       pthread_t *pt_vcpu;
> -       bool vcpu_done;
> +       pthread_t *pt_vcpus;
> +       unsigned long *vcpu_done_map;
> +       pthread_mutex_t vcpu_done_map_lock;
>  };
>
> +struct vcpu_migrate_data migrate_data;
> +
>  static void *run_vcpus_migrate_test_func(void *arg)
>  {
> -       struct vcpu_migrate_data *migrate_data = arg;
> -       struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
> +       struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm;
> +       unsigned int vcpu_idx = (unsigned long)arg;
>
> -       run_vcpu(vpmu_vm->vcpus[0]);
> -       migrate_data->vcpu_done = true;
> +       run_vcpu(vpmu_vm->vcpus[vcpu_idx]);
> +
> +       pthread_mutex_lock(&migrate_data.vcpu_done_map_lock);
> +       __set_bit(vcpu_idx, migrate_data.vcpu_done_map);
> +       pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock);
>
>         return NULL;
>  }
> @@ -1504,7 +1530,7 @@ static uint32_t get_pcpu(void)
>         return pcpu;
>  }
>
> -static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
> +static int migrate_vcpu(int vcpu_idx)
>  {
>         int ret;
>         cpu_set_t cpuset;
> @@ -1513,9 +1539,9 @@ static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
>         CPU_ZERO(&cpuset);
>         CPU_SET(new_pcpu, &cpuset);
>
> -       pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu);
> +       pr_debug("Migrating vCPU %d to pCPU: %u\n", vcpu_idx, new_pcpu);
>
> -       ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset);
> +       ret = pthread_setaffinity_np(migrate_data.pt_vcpus[vcpu_idx], sizeof(cpuset), &cpuset);
>
>         /* Allow the error where the vCPU thread is already finished */
>         TEST_ASSERT(ret == 0 || ret == ESRCH,
> @@ -1526,48 +1552,74 @@ static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
>
>  static void *vcpus_migrate_func(void *arg)
>  {
> -       struct vcpu_migrate_data *migrate_data = arg;
> +       struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm;
> +       int i, n_done, nr_vcpus = vpmu_vm->nr_vcpus;
> +       bool vcpu_done;
>
> -       while (!migrate_data->vcpu_done) {
> +       do {
>                 usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms));
> -               migrate_vcpu(migrate_data);
> -       }
> +               for (n_done = 0, i = 0; i < nr_vcpus; i++) {
> +                       pthread_mutex_lock(&migrate_data.vcpu_done_map_lock);
> +                       vcpu_done = test_bit(i, migrate_data.vcpu_done_map);
> +                       pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock);

Do we need to hold the lock here ?


> +
> +                       if (vcpu_done) {
> +                               n_done++;
> +                               continue;
> +                       }
> +
> +                       migrate_vcpu(i);
> +               }
> +
> +       } while (nr_vcpus != n_done);
>
>         return NULL;
>  }
>
>  static void run_vcpu_migration_test(uint64_t pmcr_n)
>  {
> -       int ret;
> +       int i, nr_vcpus, ret;
>         struct vpmu_vm *vpmu_vm;
> -       pthread_t pt_vcpu, pt_sched;
> -       struct vcpu_migrate_data migrate_data = {
> -               .pt_vcpu = &pt_vcpu,
> -               .vcpu_done = false,
> -       };
> +       pthread_t pt_sched, *pt_vcpus;
>
>         __TEST_REQUIRE(get_nprocs() >= 2, "At least two pCPUs needed for vCPU migration test");
>
>         guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
>         guest_data.expected_pmcr_n = pmcr_n;
>
> -       migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
> +       nr_vcpus = test_args.vcpu_migration_test_nr_vcpus;
> +
> +       migrate_data.vcpu_done_map = bitmap_zalloc(nr_vcpus);
> +       TEST_ASSERT(migrate_data.vcpu_done_map, "Failed to create vCPU done bitmap");
> +       pthread_mutex_init(&migrate_data.vcpu_done_map_lock, NULL);
> +
> +       migrate_data.pt_vcpus = pt_vcpus = calloc(nr_vcpus, sizeof(*pt_vcpus));
> +       TEST_ASSERT(pt_vcpus, "Failed to create vCPU thread pointers");
> +
> +       migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(nr_vcpus, guest_code, NULL);
>
>         /* Initialize random number generation for migrating vCPUs to random pCPUs */
>         srand(time(NULL));
>
> -       /* Spawn a vCPU thread */
> -       ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data);
> -       TEST_ASSERT(!ret, "Failed to create the vCPU thread");
> +       /* Spawn vCPU threads */
> +       for (i = 0; i < nr_vcpus; i++) {
> +               ret = pthread_create(&pt_vcpus[i], NULL,
> +                                       run_vcpus_migrate_test_func,  (void *)(unsigned long)i);
> +               TEST_ASSERT(!ret, "Failed to create the vCPU thread: %d", i);
> +       }
>
>         /* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */
> -       ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data);
> +       ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, NULL);
>         TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs");
>
>         pthread_join(pt_sched, NULL);
> -       pthread_join(pt_vcpu, NULL);
> +
> +       for (i = 0; i < nr_vcpus; i++)
> +               pthread_join(pt_vcpus[i], NULL);
>
>         destroy_vpmu_vm(vpmu_vm);
> +       free(pt_vcpus);
> +       bitmap_free(migrate_data.vcpu_done_map);
>  }
>
>  static void run_tests(uint64_t pmcr_n)
> @@ -1596,12 +1648,14 @@ static uint64_t get_pmcr_n_limit(void)
>
>  static void print_help(char *name)
>  {
> -       pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n",
> -               name);
> +       pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]"
> +               "[-n vcpu_migration_nr_vcpus]\n", name);
>         pr_info("\t-i: Number of iterations of vCPU migrations test (default: %u)\n",
>                 VCPU_MIGRATIONS_TEST_ITERS_DEF);
>         pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. (default: %u)\n",
>                 VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS);
> +       pr_info("\t-n: Number of vCPUs for vCPU migrations test. (default: %u)\n",
> +               VCPU_MIGRATIONS_TEST_NR_VPUS_DEF);
>         pr_info("\t-h: print this help screen\n");
>  }
>
> @@ -1609,7 +1663,7 @@ static bool parse_args(int argc, char *argv[])
>  {
>         int opt;
>
> -       while ((opt = getopt(argc, argv, "hi:m:")) != -1) {
> +       while ((opt = getopt(argc, argv, "hi:m:n:")) != -1) {
>                 switch (opt) {
>                 case 'i':
>                         test_args.vcpu_migration_test_iter =
> @@ -1619,6 +1673,14 @@ static bool parse_args(int argc, char *argv[])
>                         test_args.vcpu_migration_test_migrate_freq_ms =
>                                 atoi_positive("vCPU migration frequency", optarg);
>                         break;
> +               case 'n':
> +                       test_args.vcpu_migration_test_nr_vcpus =
> +                               atoi_positive("Nr vCPUs for vCPU migrations", optarg);
> +                       if (test_args.vcpu_migration_test_nr_vcpus > KVM_MAX_VCPUS) {
> +                               pr_info("Max allowed vCPUs: %u\n", KVM_MAX_VCPUS);
> +                               goto err;
> +                       }
> +                       break;
>                 case 'h':
>                 default:
>                         goto err;
> --
> 2.39.1.581.gbfd45094c4-goog
>
>

Thanks,
Reiji

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [REPOST PATCH 06/16] tools: arm64: perf_event: Define Cycle counter enable/overflow bits
  2023-03-03  0:46   ` Reiji Watanabe
@ 2023-03-09 22:14     ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-03-09 22:14 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Reiji,

On Thu, Mar 2, 2023 at 4:47 PM Reiji Watanabe <reijiw@google.com> wrote:
>
> Hi Raghu,
>
> On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
> <rananta@google.com> wrote:
> >
> > Add the definitions of ARMV8_PMU_CNTOVS_C (Cycle counter overflow
> > bit) for overflow status registers and ARMV8_PMU_CNTENSET_C (Cycle
> > counter enable bit) for PMCNTENSET_EL0 register.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  tools/arch/arm64/include/asm/perf_event.h | 7 +++++++
> >  1 file changed, 7 insertions(+)
> >
> > diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h
> > index 97e49a4d4969f..8ce23aabf6fe6 100644
> > --- a/tools/arch/arm64/include/asm/perf_event.h
> > +++ b/tools/arch/arm64/include/asm/perf_event.h
> > @@ -222,9 +222,11 @@
> >  /*
> >   * PMOVSR: counters overflow flag status reg
> >   */
> > +#define ARMV8_PMU_CNTOVS_C      (1 << 31) /* Cycle counter overflow bit */
>
> Nit: This macro doesn't seem to be used in any of the patches.
> Do we need this ?
>
Ah, I originally intended to use this instead of defining my own
ARMV8_PMU_CYCLE_COUNTER_IDX, to align with the other PMC indices. But I
think the latter is better, so I'll remove ARMV8_PMU_CNTOVS_C.

Thank you.
Raghavendra

> Thank you,
> Reiji
>
>
> >  #define        ARMV8_PMU_OVSR_MASK             0xffffffff      /* Mask for writable bits */
> >  #define        ARMV8_PMU_OVERFLOWED_MASK       ARMV8_PMU_OVSR_MASK
> >
> > +
> >  /*
> >   * PMXEVTYPER: Event selection reg
> >   */
> > @@ -247,6 +249,11 @@
> >  #define ARMV8_PMU_USERENR_CR   (1 << 2) /* Cycle counter can be read at EL0 */
> >  #define ARMV8_PMU_USERENR_ER   (1 << 3) /* Event counter can be read at EL0 */
> >
> > +/*
> > + * PMCNTENSET: Count Enable set reg
> > + */
> > +#define ARMV8_PMU_CNTENSET_C    (1 << 31) /* Cycle counter enable bit */
> > +
> >  /* PMMIR_EL1.SLOTS mask */
> >  #define ARMV8_PMU_SLOTS_MASK   0xff
> >
> > --
> > 2.39.1.581.gbfd45094c4-goog
> >

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [REPOST PATCH 07/16] selftests: KVM: aarch64: Add PMU cycle counter helpers
  2023-03-03  3:06   ` Reiji Watanabe
@ 2023-03-09 22:19     ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-03-09 22:19 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Reiji,

On Thu, Mar 2, 2023 at 7:06 PM Reiji Watanabe <reijiw@google.com> wrote:
>
> Hi Raghu,
>
> On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
> <rananta@google.com> wrote:
> >
> > Add basic helpers for the test to access the cycle counter
> > registers. The helpers will be used in the upcoming patches
> > to run the tests related to cycle counter.
> >
> > No functional change intended.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  .../testing/selftests/kvm/aarch64/vpmu_test.c | 40 +++++++++++++++++++
> >  1 file changed, 40 insertions(+)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > index d72c3c9b9c39f..15aebc7d7dc94 100644
> > --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > @@ -147,6 +147,46 @@ static inline void disable_counter(int idx)
> >         isb();
> >  }
> >
> > +static inline uint64_t read_cycle_counter(void)
> > +{
> > +       return read_sysreg(pmccntr_el0);
> > +}
> > +
> > +static inline void reset_cycle_counter(void)
> > +{
> > +       uint64_t v = read_sysreg(pmcr_el0);
> > +
> > +       write_sysreg(ARMV8_PMU_PMCR_C | v, pmcr_el0);
> > +       isb();
> > +}
> > +
> > +static inline void enable_cycle_counter(void)
> > +{
> > +       uint64_t v = read_sysreg(pmcntenset_el0);
> > +
> > +       write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenset_el0);
> > +       isb();
> > +}
>
> You might want to use enable_counter() and disable_counter()
> from enable_cycle_counter() and disable_cycle_counter() respectively?
>
Yes, that should work. I'll do that.
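Roughly like this, I think (untested, and assuming
ARMV8_PMU_CYCLE_COUNTER_IDX is available at this point in the series):

        static inline void enable_cycle_counter(void)
        {
                enable_counter(ARMV8_PMU_CYCLE_COUNTER_IDX);
        }

        static inline void disable_cycle_counter(void)
        {
                disable_counter(ARMV8_PMU_CYCLE_COUNTER_IDX);
        }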

Thank you.
Raghavendra

> Thank you,
> Reiji
>
> > +
> > +static inline void disable_cycle_counter(void)
> > +{
> > +       uint64_t v = read_sysreg(pmcntenset_el0);
> > +
> > +       write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenclr_el0);
> > +       isb();
> > +}
> > +
> > +static inline void write_pmccfiltr(unsigned long val)
> > +{
> > +       write_sysreg(val, pmccfiltr_el0);
> > +       isb();
> > +}
> > +
> > +static inline uint64_t read_pmccfiltr(void)
> > +{
> > +       return read_sysreg(pmccfiltr_el0);
> > +}
> > +
> >  static inline uint64_t get_pmcr_n(void)
> >  {
> >         return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
> > --
> > 2.39.1.581.gbfd45094c4-goog
> >

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [REPOST PATCH 08/16] selftests: KVM: aarch64: Consider PMU event filters for VM creation
  2023-03-03  4:30   ` Reiji Watanabe
@ 2023-03-09 22:45     ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-03-09 22:45 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Reiji,

On Thu, Mar 2, 2023 at 8:31 PM Reiji Watanabe <reijiw@google.com> wrote:
>
> Hi Raghu,
>
> On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
> <rananta@google.com> wrote:
> >
> > Accept a list of KVM PMU event filters as an argument while creating
> > a VM via create_vpmu_vm(). Upcoming patches would leverage this to
> > test the event filters' functionality.
> >
> > No functional change intended.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  .../testing/selftests/kvm/aarch64/vpmu_test.c | 64 +++++++++++++++++--
> >  1 file changed, 60 insertions(+), 4 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > index 15aebc7d7dc94..2b3a4fa3afa9c 100644
> > --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > @@ -15,10 +15,14 @@
> >  #include <vgic.h>
> >  #include <asm/perf_event.h>
> >  #include <linux/bitfield.h>
> > +#include <linux/bitmap.h>
> >
> >  /* The max number of the PMU event counters (excluding the cycle counter) */
> >  #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
> >
> > +/* The max number of event numbers that's supported */
> > +#define ARMV8_PMU_MAX_EVENTS           64
>
> The name and the comment would be a bit misleading.
> (This sounds like a max number of events that are supported by ARMv8)
>
> Perhaps 'MAX_EVENT_FILTER_BITS' would be more clear ?
>
>
You are right. It should actually represent the number of event filter
bits. The value is also incorrect: it should be 16, which would also
change the loop iteration logic in guest_event_filter_test(). Thanks
for catching this!
> > +
> >  /*
> >   * The macros and functions below for reading/writing PMEV{CNTR,TYPER}<n>_EL0
> >   * were basically copied from arch/arm64/kernel/perf_event.c.
> > @@ -224,6 +228,8 @@ struct pmc_accessor pmc_accessors[] = {
> >         { read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
> >  };
> >
> > +#define MAX_EVENT_FILTERS_PER_VM 10
>
> (Looking at just this patch,) it appears 'PER_VM' in the name
> might be rather misleading ?
>
Probably it's not clear. It should represent the max number of event
filter configurations that can be applied to a VM. Would a comment
help?
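Maybe something like:

        /*
         * Maximum number of KVM_ARM_VCPU_PMU_V3_FILTER configurations
         * (kvm_pmu_event_filter entries) that the test applies to a VM.
         */
        #define MAX_EVENT_FILTERS_PER_VM 10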

> > +
> >  #define INVALID_EC     (-1ul)
> >  uint64_t expected_ec = INVALID_EC;
> >  uint64_t op_end_addr;
> > @@ -232,6 +238,7 @@ struct vpmu_vm {
> >         struct kvm_vm *vm;
> >         struct kvm_vcpu *vcpu;
> >         int gic_fd;
> > +       unsigned long *pmu_filter;
> >  };
> >
> >  enum test_stage {
> > @@ -541,8 +548,51 @@ static void guest_code(void)
> >  #define GICD_BASE_GPA  0x8000000ULL
> >  #define GICR_BASE_GPA  0x80A0000ULL
> >
> > +static unsigned long *
> > +set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
>
> Can you add a comment that explains the function ?
> (especially for @pmu_event_filters and the return value ?)
>
Yes, I'll add a comment
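Roughly along these lines (wording to be polished):

        /*
         * Apply the event filters in @pmu_event_filters (up to
         * MAX_EVENT_FILTERS_PER_VM entries; an entry with nevents == 0
         * terminates the list early) to @vcpu via KVM_ARM_VCPU_PMU_V3_FILTER.
         * Return a bitmap tracking which base events the applied filters
         * leave allowed for the guest.
         */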
> > +{
> > +       int j;
> > +       unsigned long *pmu_filter;
> > +       struct kvm_device_attr filter_attr = {
> > +               .group = KVM_ARM_VCPU_PMU_V3_CTRL,
> > +               .attr = KVM_ARM_VCPU_PMU_V3_FILTER,
> > +       };
> > +
> > +       /*
> > +        * Setting up of the bitmap is similar to what KVM does.
> > +        * If the first filter denies an event, default all the others to allow, and vice-versa.
> > +        */
> > +       pmu_filter = bitmap_zalloc(ARMV8_PMU_MAX_EVENTS);
> > +       TEST_ASSERT(pmu_filter, "Failed to allocate the pmu_filter");
> > +
> > +       if (pmu_event_filters[0].action == KVM_PMU_EVENT_DENY)
> > +               bitmap_fill(pmu_filter, ARMV8_PMU_MAX_EVENTS);
> > +
> > +       for (j = 0; j < MAX_EVENT_FILTERS_PER_VM; j++) {
> > +               struct kvm_pmu_event_filter *pmu_event_filter = &pmu_event_filters[j];
> > +
> > +               if (!pmu_event_filter->nevents)
>
> What does this mean ? (the end of the valid entry in the array ?)
>
Yes, it marks the end of the array. I can add a comment if it's
unclear.
>
> > +                       break;
> > +
> > +               pr_debug("Applying event filter:: event: 0x%x; action: %s\n",
> > +                               pmu_event_filter->base_event,
> > +                               pmu_event_filter->action == KVM_PMU_EVENT_ALLOW ? "ALLOW" : "DENY");
> > +
> > +               filter_attr.addr = (uint64_t) pmu_event_filter;
> > +               vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
> > +
> > +               if (pmu_event_filter->action == KVM_PMU_EVENT_ALLOW)
> > +                       __set_bit(pmu_event_filter->base_event, pmu_filter);
> > +               else
> > +                       __clear_bit(pmu_event_filter->base_event, pmu_filter);
> > +       }
> > +
> > +       return pmu_filter;
> > +}
> > +
> >  /* Create a VM that has one vCPU with PMUv3 configured. */
> > -static struct vpmu_vm *create_vpmu_vm(void *guest_code)
> > +static struct vpmu_vm *
> > +create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
> >  {
> >         struct kvm_vm *vm;
> >         struct kvm_vcpu *vcpu;
> > @@ -586,6 +636,9 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
> >                     "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
> >
> >         /* Initialize vPMU */
> > +       if (pmu_event_filters)
> > +               vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
> > +
> >         vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
> >         vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
> >
> > @@ -594,6 +647,8 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
> >
> >  static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
> >  {
> > +       if (vpmu_vm->pmu_filter)
> > +               bitmap_free(vpmu_vm->pmu_filter);
> >         close(vpmu_vm->gic_fd);
> >         kvm_vm_free(vpmu_vm->vm);
> >         free(vpmu_vm);
> > @@ -631,7 +686,7 @@ static void run_counter_access_test(uint64_t pmcr_n)
> >         guest_data.expected_pmcr_n = pmcr_n;
> >
> >         pr_debug("Test with pmcr_n %lu\n", pmcr_n);
> > -       vpmu_vm = create_vpmu_vm(guest_code);
> > +       vpmu_vm = create_vpmu_vm(guest_code, NULL);
> >         vcpu = vpmu_vm->vcpu;
> >
> >         /* Save the initial sp to restore them later to run the guest again */
> > @@ -676,7 +731,7 @@ static void run_counter_access_error_test(uint64_t pmcr_n)
> >         guest_data.expected_pmcr_n = pmcr_n;
> >
> >         pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
> > -       vpmu_vm = create_vpmu_vm(guest_code);
> > +       vpmu_vm = create_vpmu_vm(guest_code, NULL);
> >         vcpu = vpmu_vm->vcpu;
> >
> >         /* Update the PMCR_EL0.N with @pmcr_n */
> > @@ -719,9 +774,10 @@ static uint64_t get_pmcr_n_limit(void)
> >         struct vpmu_vm *vpmu_vm;
> >         uint64_t pmcr;
> >
> > -       vpmu_vm = create_vpmu_vm(guest_code);
> > +       vpmu_vm = create_vpmu_vm(guest_code, NULL);
> >         vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
> >         destroy_vpmu_vm(vpmu_vm);
> > +
> >         return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
> >  }
>
> Thank you,
> Reiji
>
>
> >
> > --
> > 2.39.1.581.gbfd45094c4-goog
> >

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [REPOST PATCH 09/16] selftests: KVM: aarch64: Add KVM PMU event filter test
  2023-03-04 20:28   ` Reiji Watanabe
@ 2023-03-09 23:17     ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-03-09 23:17 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Reiji,

On Sat, Mar 4, 2023 at 12:28 PM Reiji Watanabe <reijiw@google.com> wrote:
>
> Hi Raghu,
>
> On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
> <rananta@google.com> wrote:
> >
> > Add tests to validate KVM's KVM_ARM_VCPU_PMU_V3_FILTER
> > attribute by applying a series of filters to allow or
> > deny events from the userspace. Validation is done by
> > the guest in a way that it should be able to count
> > only the events that are allowed.
> >
> > The workload to execute a precise number of instructions
> > (execute_precise_instrs() and precise_instrs_loop()) is taken
> > from the kvm-unit-tests' arm/pmu.c.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  .../testing/selftests/kvm/aarch64/vpmu_test.c | 261 +++++++++++++++++-
> >  1 file changed, 258 insertions(+), 3 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > index 2b3a4fa3afa9c..3dfb770b538e9 100644
> > --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > @@ -2,12 +2,21 @@
> >  /*
> >   * vpmu_test - Test the vPMU
> >   *
> > - * Copyright (c) 2022 Google LLC.
> > + * The test suit contains a series of checks to validate the vPMU
> > + * functionality. This test runs only when KVM_CAP_ARM_PMU_V3 is
> > + * supported on the host. The tests include:
> >   *
> > - * This test checks if the guest can see the same number of the PMU event
> > + * 1. Check if the guest can see the same number of the PMU event
> >   * counters (PMCR_EL0.N) that userspace sets, if the guest can access
> >   * those counters, and if the guest cannot access any other counters.
> > - * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
> > + *
> > + * 2. Test the functionality of KVM's KVM_ARM_VCPU_PMU_V3_FILTER
> > + * attribute by applying a series of filters in various combinations
> > + * of allowing or denying the events. The guest validates it by
> > + * checking if it's able to count only the events that are allowed.
> > + *
> > + * Copyright (c) 2022 Google LLC.
> > + *
> >   */
> >  #include <kvm_util.h>
> >  #include <processor.h>
> > @@ -230,6 +239,12 @@ struct pmc_accessor pmc_accessors[] = {
> >
> >  #define MAX_EVENT_FILTERS_PER_VM 10
> >
> > +#define EVENT_ALLOW(ev) \
> > +       {.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_ALLOW}
> > +
> > +#define EVENT_DENY(ev) \
> > +       {.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_DENY}
> > +
> >  #define INVALID_EC     (-1ul)
> >  uint64_t expected_ec = INVALID_EC;
> >  uint64_t op_end_addr;
> > @@ -243,11 +258,13 @@ struct vpmu_vm {
> >
> >  enum test_stage {
> >         TEST_STAGE_COUNTER_ACCESS = 1,
> > +       TEST_STAGE_KVM_EVENT_FILTER,
> >  };
> >
> >  struct guest_data {
> >         enum test_stage test_stage;
> >         uint64_t expected_pmcr_n;
> > +       unsigned long *pmu_filter;
> >  };
> >
> >  static struct guest_data guest_data;
> > @@ -329,6 +346,113 @@ static bool pmu_event_is_supported(uint64_t event)
> >                 GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
> >  }
> >
> > +
> > +/*
> > + * Extra instructions inserted by the compiler would be difficult to compensate
> > + * for, so hand assemble everything between, and including, the PMCR accesses
> > + * to start and stop counting. isb instructions are inserted to make sure
> > + * pmccntr read after this function returns the exact instructions executed
> > + * in the controlled block. Total instrs = isb + nop + 2*loop = 2 + 2*loop.
> > + */
> > +static inline void precise_instrs_loop(int loop, uint32_t pmcr)
> > +{
> > +       uint64_t pmcr64 = pmcr;
> > +
> > +       asm volatile(
> > +       "       msr     pmcr_el0, %[pmcr]\n"
> > +       "       isb\n"
> > +       "1:     subs    %w[loop], %w[loop], #1\n"
> > +       "       b.gt    1b\n"
> > +       "       nop\n"
> > +       "       msr     pmcr_el0, xzr\n"
> > +       "       isb\n"
> > +       : [loop] "+r" (loop)
> > +       : [pmcr] "r" (pmcr64)
> > +       : "cc");
> > +}
> > +
> > +/*
> > + * Execute a known number of guest instructions. Only even instruction counts
> > + * greater than or equal to 4 are supported by the in-line assembly code. The
> > + * control register (PMCR_EL0) is initialized with the provided value (allowing
> > + * for example for the cycle counter or event counters to be reset). At the end
> > + * of the exact instruction loop, zero is written to PMCR_EL0 to disable
> > + * counting, allowing the cycle counter or event counters to be read at the
> > + * leisure of the calling code.
> > + */
> > +static void execute_precise_instrs(int num, uint32_t pmcr)
> > +{
> > +       int loop = (num - 2) / 2;
> > +
> > +       GUEST_ASSERT_2(num >= 4 && ((num - 2) % 2 == 0), num, loop);
> > +       precise_instrs_loop(loop, pmcr);
> > +}
> > +
> > +static void test_instructions_count(int pmc_idx, bool expect_count)
> > +{
> > +       int i;
> > +       struct pmc_accessor *acc;
> > +       uint64_t cnt;
> > +       int instrs_count = 100;
> > +
> > +       enable_counter(pmc_idx);
> > +
> > +       /* Test the event using all the possible way to configure the event */
> > +       for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> > +               acc = &pmc_accessors[i];
> > +
> > +               pmu_disable_reset();
> > +
> > +               acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> > +
> > +               /* Enable the PMU and execute precisely number of instructions as a workload */
> > +               execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
> > +
> > +               /* If a count is expected, the counter should be increased by 'instrs_count' */
> > +               cnt = acc->read_cntr(pmc_idx);
> > +               GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
> > +                               i, expect_count, cnt, instrs_count);
> > +       }
> > +
> > +       disable_counter(pmc_idx);
> > +}
> > +
> > +static void test_cycles_count(bool expect_count)
> > +{
> > +       uint64_t cnt;
> > +
> > +       pmu_enable();
> > +       reset_cycle_counter();
> > +
> > +       /* Count cycles in EL0 and EL1 */
> > +       write_pmccfiltr(0);
> > +       enable_cycle_counter();
> > +
> > +       cnt = read_cycle_counter();
> > +
> > +       /*
> > +        * If a count is expected by the test, the cycle counter should be increased by
> > +        * at least 1, as there is at least one instruction between enabling the
> > +        * counter and reading the counter.
> > +        */
> > +       GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
> > +
> > +       disable_cycle_counter();
>
> It would be nicer to also test using a generic PMC with
> ARMV8_PMUV3_PERFCTR_CPU_CYCLES (not just with a cycle counter),
> as the filter should be applied to both.
>
Actually, my original intention was to check whether the filters are
being applied to the generic PMCs and the cycle counter, irrespective
of the event type. Hence, I did not focus too much on any other events.
But I understand that the cycles event is a special case, so I'll check
the filter with the cycles event on a generic counter as well.
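A rough, untested sketch of what that extra check could look like,
reusing the helpers from this patch:

        static void test_cycles_count_on_pmc(int pmc_idx, bool expect_count)
        {
                uint64_t cnt;
                struct pmc_accessor *acc = &pmc_accessors[0];

                pmu_disable_reset();

                /* Count CPU_CYCLES on a generic counter, not the cycle counter */
                acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
                enable_counter(pmc_idx);

                /* Run a small workload with the PMU enabled */
                execute_precise_instrs(100, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);

                /* Some cycles have elapsed, so expect a non-zero count if allowed */
                cnt = acc->read_cntr(pmc_idx);
                GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);

                disable_counter(pmc_idx);
                pmu_disable_reset();
        }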

> > +       pmu_disable_reset();
> > +}
> > +
> > +static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
> > +{
> > +       switch (event) {
> > +       case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
> > +               test_instructions_count(pmc_idx, expect_count);
> > +               break;
> > +       case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
> > +               test_cycles_count(expect_count);
> > +               break;
> > +       }
> > +}
> > +
> >  /*
> >   * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
> >   * are set or cleared as specified in @set_expected.
> > @@ -532,12 +656,37 @@ static void guest_counter_access_test(uint64_t expected_pmcr_n)
> >         }
> >  }
> >
> > +static void guest_event_filter_test(unsigned long *pmu_filter)
> > +{
> > +       uint64_t event;
> > +
> > +       /*
> > +        * Check if PMCEIDx_EL0 is advertized as configured by the userspace.
> > +        * It's possible that even though the userspace allowed it, it may not be supported
> > +        * by the hardware and could be advertized as 'disabled'. Hence, only validate against
> > +        * the events that are advertized.
>
> How about checking which events are supported by the hardware
> initially (without setting the event filter) ?
> Then, we can test that events which userspace tried to hide are
> indeed not exposed to the guest.
>
Yes, that would be a way to go.

> Can we also add cases for events that exercise both the upper
> 32 bits and the lower 32 bits of both PMCEID{0,1}_EL0 registers ?
> (pmu_event_is_supported() needs to be fixed as well)
>
Of course, I'll cherry-pick some events.
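For the pmu_event_is_supported() part, I'm thinking of something along
these lines (untested; the PMCEID{0,1}_EL0 bit mapping below is from my
reading of the Arm ARM, so please double-check it):

        static bool pmu_event_is_supported(uint64_t event)
        {
                uint64_t pmceid0 = read_sysreg(pmceid0_el0);
                uint64_t pmceid1 = read_sysreg(pmceid1_el0);

                /* Common events 0x0-0x3f live in the lower 32 bits... */
                if (event < 0x20)
                        return pmceid0 & BIT(event);
                if (event < 0x40)
                        return pmceid1 & BIT(event - 0x20);

                /* ...and common events 0x4000-0x403f in the upper 32 bits */
                if (event >= 0x4000 && event < 0x4020)
                        return pmceid0 & BIT(32 + (event - 0x4000));
                if (event >= 0x4020 && event < 0x4040)
                        return pmceid1 & BIT(32 + (event - 0x4020));

                return false;
        }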
>
>
> > +        *
> > +        * Furthermore, check if the event is in fact counting if enabled, or vice-versa.
> > +        */
> > +       for (event = 0; event < ARMV8_PMU_MAX_EVENTS - 1; event++) {
> > +               if (pmu_event_is_supported(event)) {
> > +                       GUEST_ASSERT_1(test_bit(event, pmu_filter), event);
> > +                       test_event_count(event, 0, true);
> > +               } else {
> > +                       test_event_count(event, 0, false);
> > +               }
> > +       }
> > +}
> > +
> >  static void guest_code(void)
> >  {
> >         switch (guest_data.test_stage) {
> >         case TEST_STAGE_COUNTER_ACCESS:
> >                 guest_counter_access_test(guest_data.expected_pmcr_n);
> >                 break;
> > +       case TEST_STAGE_KVM_EVENT_FILTER:
> > +               guest_event_filter_test(guest_data.pmu_filter);
> > +               break;
> >         default:
> >                 GUEST_ASSERT_1(0, guest_data.test_stage);
> >         }
>
> IMHO running each test from its own guest_code_xxx might be more
> straightforward than controlling the flow through test_stage,
> as each test 'stage' appears to be a different test case rather than
> a test stage, and the test creates a new guest for each test 'stage'.
> I don't see any reason to share the guest_code for those test
> cases (unless we are going to run some common guest code for the test
> cases in the following patches).
>
Yes, I guess it should be okay to split the cases into independent
guest_code_xxx().
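e.g. something like this (hypothetical names, just to illustrate the
split):

        static void guest_code_counter_access(void)
        {
                guest_counter_access_test(guest_data.expected_pmcr_n);
                GUEST_DONE();
        }

        static void guest_code_event_filter(void)
        {
                guest_event_filter_test(guest_data.pmu_filter);
                GUEST_DONE();
        }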
>
> > @@ -760,9 +909,115 @@ static void run_counter_access_tests(uint64_t pmcr_n)
> >                 run_counter_access_error_test(i);
> >  }
> >
> > +static struct kvm_pmu_event_filter pmu_event_filters[][MAX_EVENT_FILTERS_PER_VM] = {
>
> It looks like KVM_ARM_VCPU_PMU_V3_FILTER is always used with
> one entry in the filter (.nevents == 1).
> Could we also test with .nevents > 1 ?
>
The only reason I went with 1 is that I wanted to test the cycles and
instructions events with a workload, and those two aren't neighbours
when it comes to event numbers.
Anyway, I can also pick another supported event, plus its neighbours,
and test them only to the extent of checking pmu_event_is_supported().
This way, I can also test .nevents > 2.
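e.g. an extra row for pmu_event_filters[][] along these lines
(hypothetical; the exact base event is still to be decided):

        {
                /* Allow a base event and the two event numbers that follow it */
                {
                        .base_event = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
                        .nevents    = 3,
                        .action     = KVM_PMU_EVENT_ALLOW,
                },
        },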

Thank you.
Raghavendra
> > +       /*
> > +        * Each set of events denotes a filter configuration for that VM.
> > +        * During VM creation, the filters will be applied in the sequence mentioned here.
> > +        */
> > +       {
> > +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> > +       },
> > +       {
> > +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> > +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> > +       },
> > +       {
> > +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> > +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> > +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> > +       },
> > +       {
> > +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> > +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> > +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> > +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> > +       },
> > +       {
> > +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> > +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> > +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> > +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> > +       },
> > +       {
> > +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> > +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> > +               EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> > +       },
> > +       {
> > +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> > +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
> > +       },
> > +       {
> > +               EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
> > +       },
> > +};
> > +
> > +static void run_kvm_event_filter_error_tests(void)
> > +{
> > +       int ret;
> > +       struct kvm_vm *vm;
> > +       struct kvm_vcpu *vcpu;
> > +       struct vpmu_vm *vpmu_vm;
> > +       struct kvm_vcpu_init init;
> > +       struct kvm_pmu_event_filter pmu_event_filter = EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
> > +       struct kvm_device_attr filter_attr = {
> > +               .group = KVM_ARM_VCPU_PMU_V3_CTRL,
> > +               .attr = KVM_ARM_VCPU_PMU_V3_FILTER,
> > +               .addr = (uint64_t) &pmu_event_filter,
> > +       };
> > +
> > +       /* KVM should not allow configuring filters after the PMU is initialized */
> > +       vpmu_vm = create_vpmu_vm(guest_code, NULL);
> > +       ret = __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
> > +       TEST_ASSERT(ret == -1 && errno == EBUSY,
> > +                       "Failed to disallow setting an event filter after PMU init");
> > +       destroy_vpmu_vm(vpmu_vm);
> > +
> > +       /* Check for invalid event filter setting */
> > +       vm = vm_create(1);
> > +       vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
> > +       init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
> > +       vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
> > +
> > +       pmu_event_filter.base_event = UINT16_MAX;
> > +       pmu_event_filter.nevents = 5;
> > +       ret = __vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
> > +       TEST_ASSERT(ret == -1 && errno == EINVAL, "Failed check for invalid filter configuration");
> > +       kvm_vm_free(vm);
> > +}
> > +
> > +static void run_kvm_event_filter_test(void)
> > +{
> > +       int i;
> > +       struct vpmu_vm *vpmu_vm;
> > +       struct kvm_vm *vm;
> > +       vm_vaddr_t pmu_filter_gva;
> > +       size_t pmu_filter_bmap_sz = BITS_TO_LONGS(ARMV8_PMU_MAX_EVENTS) * sizeof(unsigned long);
> > +
> > +       guest_data.test_stage = TEST_STAGE_KVM_EVENT_FILTER;
> > +
> > +       /* Test for valid filter configurations */
> > +       for (i = 0; i < ARRAY_SIZE(pmu_event_filters); i++) {
> > +               vpmu_vm = create_vpmu_vm(guest_code, pmu_event_filters[i]);
> > +               vm = vpmu_vm->vm;
> > +
> > +               pmu_filter_gva = vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_VADDR);
> > +               memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter_bmap_sz);
> > +               guest_data.pmu_filter = (unsigned long *) pmu_filter_gva;
> > +
> > +               run_vcpu(vpmu_vm->vcpu);
> > +
> > +               destroy_vpmu_vm(vpmu_vm);
> > +       }
> > +
> > +       /* Check if KVM is handling the errors correctly */
> > +       run_kvm_event_filter_error_tests();
> > +}
> > +
> >  static void run_tests(uint64_t pmcr_n)
> >  {
> >         run_counter_access_tests(pmcr_n);
> > +       run_kvm_event_filter_test();
> >  }
> >
> >  /*
> > --
> > 2.39.1.581.gbfd45094c4-goog
> >
>
> Thank you,
> Reiji

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [REPOST PATCH 11/16] selftests: KVM: aarch64: Add vCPU migration test for PMU
  2023-03-07  3:43   ` Reiji Watanabe
@ 2023-03-10  2:28     ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-03-10  2:28 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

On Mon, Mar 6, 2023 at 7:44 PM Reiji Watanabe <reijiw@google.com> wrote:
>
> Hi Raghu,
>
> On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
> <rananta@google.com> wrote:
> >
> > Implement a stress test for KVM by frequently force-migrating the
> > vCPU to random pCPUs in the system. This would validate the
> > save/restore functionality of KVM and starting/stopping of
> > PMU counters as necessary.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  .../testing/selftests/kvm/aarch64/vpmu_test.c | 195 +++++++++++++++++-
> >  1 file changed, 193 insertions(+), 2 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > index 5c166df245589..0c9d801f4e602 100644
> > --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > @@ -19,9 +19,15 @@
> >   * higher exception levels (EL2, EL3). Verify this functionality by
> >   * configuring and trying to count the events for EL2 in the guest.
> >   *
> > + * 4. Since the PMU registers are per-cpu, stress KVM by frequently
> > + * migrating the guest vCPU to random pCPUs in the system, and check
> > + * if the vPMU is still behaving as expected.
> > + *
> >   * Copyright (c) 2022 Google LLC.
> >   *
> >   */
> > +#define _GNU_SOURCE
> > +
> >  #include <kvm_util.h>
> >  #include <processor.h>
> >  #include <test_util.h>
> > @@ -30,6 +36,11 @@
> >  #include <linux/arm-smccc.h>
> >  #include <linux/bitfield.h>
> >  #include <linux/bitmap.h>
> > +#include <stdlib.h>
> > +#include <pthread.h>
> > +#include <sys/sysinfo.h>
> > +
> > +#include "delay.h"
> >
> >  /* The max number of the PMU event counters (excluding the cycle counter) */
> >  #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
> > @@ -37,6 +48,8 @@
> >  /* The max number of event numbers that's supported */
> >  #define ARMV8_PMU_MAX_EVENTS           64
> >
> > +#define msecs_to_usecs(msec)           ((msec) * 1000LL)
> > +
> >  /*
> >   * The macros and functions below for reading/writing PMEV{CNTR,TYPER}<n>_EL0
> >   * were basically copied from arch/arm64/kernel/perf_event.c.
> > @@ -265,6 +278,7 @@ enum test_stage {
> >         TEST_STAGE_COUNTER_ACCESS = 1,
> >         TEST_STAGE_KVM_EVENT_FILTER,
> >         TEST_STAGE_KVM_EVTYPE_FILTER,
> > +       TEST_STAGE_VCPU_MIGRATION,
> >  };
> >
> >  struct guest_data {
> > @@ -275,6 +289,19 @@ struct guest_data {
> >
> >  static struct guest_data guest_data;
> >
> > +#define VCPU_MIGRATIONS_TEST_ITERS_DEF         1000
> > +#define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2
> > +
> > +struct test_args {
> > +       int vcpu_migration_test_iter;
> > +       int vcpu_migration_test_migrate_freq_ms;
> > +};
> > +
> > +static struct test_args test_args = {
> > +       .vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF,
> > +       .vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS,
> > +};
> > +
> >  static void guest_sync_handler(struct ex_regs *regs)
> >  {
> >         uint64_t esr, ec;
> > @@ -352,7 +379,6 @@ static bool pmu_event_is_supported(uint64_t event)
> >                 GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
> >  }
> >
> > -
> >  /*
> >   * Extra instructions inserted by the compiler would be difficult to compensate
> >   * for, so hand assemble everything between, and including, the PMCR accesses
> > @@ -459,6 +485,13 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
> >         }
> >  }
> >
> > +static void test_basic_pmu_functionality(void)
> > +{
> > +       /* Test events on generic and cycle counters */
> > +       test_instructions_count(0, true);
> > +       test_cycles_count(true);
> > +}
> > +
> >  /*
> >   * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
> >   * are set or cleared as specified in @set_expected.
> > @@ -748,6 +781,16 @@ static void guest_evtype_filter_test(void)
> >         GUEST_ASSERT_2(cnt == 0, cnt, typer);
> >  }
> >
> > +static void guest_vcpu_migration_test(void)
> > +{
> > +       /*
> > +        * While the userspace continuously migrates this vCPU to random pCPUs,
> > +        * run basic PMU functionalities and verify the results.
> > +        */
> > +       while (test_args.vcpu_migration_test_iter--)
> > +               test_basic_pmu_functionality();
> > +}
> > +
> >  static void guest_code(void)
> >  {
> >         switch (guest_data.test_stage) {
> > @@ -760,6 +803,9 @@ static void guest_code(void)
> >         case TEST_STAGE_KVM_EVTYPE_FILTER:
> >                 guest_evtype_filter_test();
> >                 break;
> > +       case TEST_STAGE_VCPU_MIGRATION:
> > +               guest_vcpu_migration_test();
> > +               break;
> >         default:
> >                 GUEST_ASSERT_1(0, guest_data.test_stage);
> >         }
> > @@ -837,6 +883,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
> >
> >         vpmu_vm->vm = vm = vm_create(1);
> >         vm_init_descriptor_tables(vm);
> > +
> >         /* Catch exceptions for easier debugging */
> >         for (ec = 0; ec < ESR_EC_NUM; ec++) {
> >                 vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec,
> > @@ -881,6 +928,8 @@ static void run_vcpu(struct kvm_vcpu *vcpu)
> >         struct ucall uc;
> >
> >         sync_global_to_guest(vcpu->vm, guest_data);
> > +       sync_global_to_guest(vcpu->vm, test_args);
> > +
> >         vcpu_run(vcpu);
> >         switch (get_ucall(vcpu, &uc)) {
> >         case UCALL_ABORT:
> > @@ -1098,11 +1147,112 @@ static void run_kvm_evtype_filter_test(void)
> >         destroy_vpmu_vm(vpmu_vm);
> >  }
> >
> > +struct vcpu_migrate_data {
> > +       struct vpmu_vm *vpmu_vm;
> > +       pthread_t *pt_vcpu;
>
> Nit: Originally, I wasn't sure what 'pt' stands for.
> Also, the 'pt_vcpu' made me think this would be a pointer to a vCPU.
> Perhaps renaming this to 'vcpu_pthread' might be more clear ?
>
Haha, no problem. I'll change it to vcpu_pthread.
>
> > +       bool vcpu_done;
> > +};
> > +
> > +static void *run_vcpus_migrate_test_func(void *arg)
> > +{
> > +       struct vcpu_migrate_data *migrate_data = arg;
> > +       struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
> > +
> > +       run_vcpu(vpmu_vm->vcpu);
> > +       migrate_data->vcpu_done = true;
> > +
> > +       return NULL;
> > +}
> > +
> > +static uint32_t get_pcpu(void)
> > +{
> > +       uint32_t pcpu;
> > +       unsigned int nproc_conf;
> > +       cpu_set_t online_cpuset;
> > +
> > +       nproc_conf = get_nprocs_conf();
> > +       sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset);
> > +
> > +       /* Randomly find an available pCPU to place the vCPU on */
> > +       do {
> > +               pcpu = rand() % nproc_conf;
> > +       } while (!CPU_ISSET(pcpu, &online_cpuset));
> > +
> > +       return pcpu;
> > +}
> > +
> > +static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
>
> Nit: You might want to pass a pthread_t rather than migrate_data
> unless the function uses some more fields of the data in the
> following patches.
>
The upcoming patch, which introduces multiple vCPUs, moves
migrate_data into a global array (one element per vCPU). That patch
passes only the vCPU index as an arg to migrate_vcpu().
I originally thought we would embed more stuff into migrate_data,
which is why I passed the struct. But I guess I can just pass the pthread_t.
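For reference, a rough (untested) sketch of migrate_vcpu() taking the
pthread_t directly; it just drops the migrate_data indirection and keeps
everything else from this patch:

static int migrate_vcpu(pthread_t vcpu_pthread)
{
	int ret;
	cpu_set_t cpuset;
	uint32_t new_pcpu = get_pcpu();

	CPU_ZERO(&cpuset);
	CPU_SET(new_pcpu, &cpuset);

	pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu);

	ret = pthread_setaffinity_np(vcpu_pthread, sizeof(cpuset), &cpuset);

	/* Allow the error where the vCPU thread is already finished */
	TEST_ASSERT(ret == 0 || ret == ESRCH,
		    "Failed to migrate the vCPU to pCPU: %u; ret: %d\n", new_pcpu, ret);

	return ret;
}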

> > +{
> > +       int ret;
> > +       cpu_set_t cpuset;
> > +       uint32_t new_pcpu = get_pcpu();
> > +
> > +       CPU_ZERO(&cpuset);
> > +       CPU_SET(new_pcpu, &cpuset);
> > +
> > +       pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu);
> > +
> > +       ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset);
> > +
> > +       /* Allow the error where the vCPU thread is already finished */
> > +       TEST_ASSERT(ret == 0 || ret == ESRCH,
> > +                   "Failed to migrate the vCPU to pCPU: %u; ret: %d\n", new_pcpu, ret);
> > +
> > +       return ret;
> > +}
> > +
> > +static void *vcpus_migrate_func(void *arg)
> > +{
> > +       struct vcpu_migrate_data *migrate_data = arg;
> > +
> > +       while (!migrate_data->vcpu_done) {
> > +               usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms));
> > +               migrate_vcpu(migrate_data);
> > +       }
> > +
> > +       return NULL;
> > +}
> > +
> > +static void run_vcpu_migration_test(uint64_t pmcr_n)
> > +{
> > +       int ret;
> > +       struct vpmu_vm *vpmu_vm;
> > +       pthread_t pt_vcpu, pt_sched;
> > +       struct vcpu_migrate_data migrate_data = {
> > +               .pt_vcpu = &pt_vcpu,
> > +               .vcpu_done = false,
> > +       };
> > +
> > +       __TEST_REQUIRE(get_nprocs() >= 2, "At least two pCPUs needed for vCPU migration test");
>
> Considering that get_pcpu() chooses the target CPU from CPUs returned
> from sched_getaffinity(), I would think the test should use the number of
> the bits set in the returned cpu_set_t from sched_getaffinity() here
> instead of get_nprocs(), as those numbers could be different (e.g.  if the
> test runs with taskset with a subset of the CPUs on the system).
>
I'm not familiar with taskset, but if you feel the current approach
could cause problems, I'll switch to your suggestion. Thanks.
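Roughly what I'd change it to (untested sketch; nr_available_pcpus() is
just a placeholder helper name):

static int nr_available_pcpus(void)
{
	cpu_set_t cpuset;

	sched_getaffinity(0, sizeof(cpu_set_t), &cpuset);

	/* Number of pCPUs the task is actually allowed to run on */
	return CPU_COUNT(&cpuset);
}

and then in run_vcpu_migration_test():

	__TEST_REQUIRE(nr_available_pcpus() >= 2,
		       "At least two pCPUs needed for vCPU migration test");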
>
> > +
> > +       guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
> > +       guest_data.expected_pmcr_n = pmcr_n;
> > +
> > +       migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(guest_code, NULL);
> > +
> > +       /* Initialize random number generation for migrating vCPUs to random pCPUs */
> > +       srand(time(NULL));
> > +
> > +       /* Spawn a vCPU thread */
> > +       ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data);
> > +       TEST_ASSERT(!ret, "Failed to create the vCPU thread");
> > +
> > +       /* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */
> > +       ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data);
>
> Why do you want to spawn another thread to run vcpus_migrate_func(),
> rather than calling that from the current thread ?
>
>
I suppose it should be fine calling it from the current thread (unless
I'm forgetting the reason I did something similar in the arch_timer
test).
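
Something along these lines, I suppose (untested sketch of
run_vcpu_migration_test() without the scheduler thread):

	/* Spawn a vCPU thread */
	ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data);
	TEST_ASSERT(!ret, "Failed to create the vCPU thread");

	/* Force-migrate the vCPU to random pCPUs from the current thread */
	vcpus_migrate_func(&migrate_data);

	pthread_join(pt_vcpu, NULL);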

Thank you.
Raghavendra
> > +       TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs");
> > +
> > +       pthread_join(pt_sched, NULL);
> > +       pthread_join(pt_vcpu, NULL);
> > +
> > +       destroy_vpmu_vm(vpmu_vm);
> > +}
> > +
> >  static void run_tests(uint64_t pmcr_n)
> >  {
> >         run_counter_access_tests(pmcr_n);
> >         run_kvm_event_filter_test();
> >         run_kvm_evtype_filter_test();
> > +       run_vcpu_migration_test(pmcr_n);
> >  }
> >
> >  /*
> > @@ -1121,12 +1271,53 @@ static uint64_t get_pmcr_n_limit(void)
> >         return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
> >  }
> >
> > -int main(void)
> > +static void print_help(char *name)
> > +{
> > +       pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n",
> > +               name);
> > +       pr_info("\t-i: Number of iterations of vCPU migrations test (default: %u)\n",
> > +               VCPU_MIGRATIONS_TEST_ITERS_DEF);
> > +       pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. (default: %u)\n",
> > +               VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS);
> > +       pr_info("\t-h: print this help screen\n");
> > +}
> > +
> > +static bool parse_args(int argc, char *argv[])
> > +{
> > +       int opt;
> > +
> > +       while ((opt = getopt(argc, argv, "hi:m:")) != -1) {
> > +               switch (opt) {
> > +               case 'i':
> > +                       test_args.vcpu_migration_test_iter =
> > +                               atoi_positive("Nr vCPU migration iterations", optarg);
> > +                       break;
> > +               case 'm':
> > +                       test_args.vcpu_migration_test_migrate_freq_ms =
> > +                               atoi_positive("vCPU migration frequency", optarg);
> > +                       break;
> > +               case 'h':
> > +               default:
> > +                       goto err;
> > +               }
> > +       }
> > +
> > +       return true;
> > +
> > +err:
> > +       print_help(argv[0]);
> > +       return false;
> > +}
> > +
> > +int main(int argc, char *argv[])
> >  {
> >         uint64_t pmcr_n;
> >
> >         TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
> >
> > +       if (!parse_args(argc, argv))
> > +               exit(KSFT_SKIP);
> > +
> >         pmcr_n = get_pmcr_n_limit();
> >         run_tests(pmcr_n);
> >
> > --
> > 2.39.1.581.gbfd45094c4-goog
> >
>
> Thanks,
> Reiji

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [REPOST PATCH 10/16] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test
  2023-03-07  1:19   ` Reiji Watanabe
  2023-03-07 16:09     ` Sean Christopherson
@ 2023-03-10 21:57     ` Raghavendra Rao Ananta
  1 sibling, 0 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-03-10 21:57 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

On Mon, Mar 6, 2023 at 5:19 PM Reiji Watanabe <reijiw@google.com> wrote:
>
> Hi Raghu,
>
> On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
> <rananta@google.com> wrote:
> >
> > KVM doesn't allow the guests to modify the filter types,
> > such as counting events in nonsecure/secure-EL2, EL3, and
> > so on. Validate the same by force-configuring the bits
> > in PMXEVTYPER_EL0, PMEVTYPERn_EL0, and PMCCFILTR_EL0
> > registers.
> >
> > The test extends further by trying to create an event
> > for counting only in EL2 and validates if the counter
> > is not progressing.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  .../testing/selftests/kvm/aarch64/vpmu_test.c | 85 +++++++++++++++++++
> >  1 file changed, 85 insertions(+)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > index 3dfb770b538e9..5c166df245589 100644
> > --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > @@ -15,6 +15,10 @@
> >   * of allowing or denying the events. The guest validates it by
> >   * checking if it's able to count only the events that are allowed.
> >   *
> > + * 3. KVM doesn't allow the guest to count the events attributed with
> > + * higher exception levels (EL2, EL3). Verify this functionality by
> > + * configuring and trying to count the events for EL2 in the guest.
> > + *
> >   * Copyright (c) 2022 Google LLC.
> >   *
> >   */
> > @@ -23,6 +27,7 @@
> >  #include <test_util.h>
> >  #include <vgic.h>
> >  #include <asm/perf_event.h>
> > +#include <linux/arm-smccc.h>
> >  #include <linux/bitfield.h>
> >  #include <linux/bitmap.h>
> >
> > @@ -259,6 +264,7 @@ struct vpmu_vm {
> >  enum test_stage {
> >         TEST_STAGE_COUNTER_ACCESS = 1,
> >         TEST_STAGE_KVM_EVENT_FILTER,
> > +       TEST_STAGE_KVM_EVTYPE_FILTER,
> >  };
> >
> >  struct guest_data {
> > @@ -678,6 +684,70 @@ static void guest_event_filter_test(unsigned long *pmu_filter)
> >         }
> >  }
> >
> > +static void guest_evtype_filter_test(void)
> > +{
> > +       int i;
> > +       struct pmc_accessor *acc;
> > +       uint64_t typer, cnt;
> > +       struct arm_smccc_res res;
> > +
> > +       pmu_enable();
> > +
> > +       /*
> > +        * KVM blocks the guests from creating events for counting in Secure/Non-Secure Hyp (EL2),
> > +        * Monitor (EL3), and Multithreading configuration. It applies the mask
> > +        * ARMV8_PMU_EVTYPE_MASK against guest accesses to PMXEVTYPER_EL0, PMEVTYPERn_EL0,
> > +        * and PMCCFILTR_EL0 registers to prevent this. Check if KVM honors this using all possible
> > +        * ways to configure the EVTYPER.
> > +        */
>
> I would prefer to break long lines into multiple lines for these comments
> (or other comments in these patches), as "Linux kernel coding style"
> suggests.
> ---
> [https://www.kernel.org/doc/html/latest/process/coding-style.html#breaking-long-lines-and-strings]
>
> The preferred limit on the length of a single line is 80 columns.
>
> Statements longer than 80 columns should be broken into sensible
> chunks, unless exceeding 80 columns significantly increases
> readability and does not hide information.
> ---
>
Sure, I'll fix it.
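For example, the comment above would become something like this (just
reflowed to 80 columns, same content):

	/*
	 * KVM blocks the guests from creating events for counting in
	 * Secure/Non-Secure Hyp (EL2), Monitor (EL3), and Multithreading
	 * configuration. It applies the mask ARMV8_PMU_EVTYPE_MASK against
	 * guest accesses to PMXEVTYPER_EL0, PMEVTYPERn_EL0, and PMCCFILTR_EL0
	 * registers to prevent this. Check if KVM honors this using all
	 * possible ways to configure the EVTYPER.
	 */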
> > +       for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> > +               acc = &pmc_accessors[i];
> > +
> > +               /* Set all filter bits (31-24), readback, and check against the mask */
> > +               acc->write_typer(0, 0xff000000);
> > +               typer = acc->read_typer(0);
> > +
> > +               GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
> > +                               typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);
>
> It appears that bits[29:26] don't have to be zero depending on
> feature availability to the guest (Those bits needs to be zero
> only when relevant features are not available on the guest).
> So, the expected value must be changed depending on the feature
> availability if the test checks those bits.
> I have the same comment for the cycle counter.
>
But doesn't KVM (and the ARM PMU driver) ignore these bits upon write
using ARMV8_PMU_EVTYPE_MASK?
> > +
> > +               /*
> > +                * Regardless of ARMV8_PMU_EVTYPE_MASK, KVM sets perf attr.exclude_hv
> > +                * to not count NS-EL2 events. Verify this functionality by configuring
> > +                * an NS-EL2 event, for which the count shouldn't increment.
> > +                */
> > +               typer = ARMV8_PMUV3_PERFCTR_INST_RETIRED;
> > +               typer |= ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
> > +               acc->write_typer(0, typer);
> > +               acc->write_cntr(0, 0);
> > +               enable_counter(0);
> > +
> > +               /* Issue a hypercall to enter EL2 and return */
> > +               memset(&res, 0, sizeof(res));
> > +               smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
> > +
> > +               cnt = acc->read_cntr(0);
> > +               GUEST_ASSERT_3(cnt == 0, cnt, typer, i);
> > +       }
> > +
> > +       /* Check the same sequence for the Cycle counter */
> > +       write_pmccfiltr(0xff000000);
> > +       typer = read_pmccfiltr();
> > +       GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
> > +                               typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);
> > +
> > +       typer = ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
> > +       write_pmccfiltr(typer);
> > +       reset_cycle_counter();
> > +       enable_cycle_counter();
> > +
> > +       /* Issue a hypercall to enter EL2 and return */
> > +       memset(&res, 0, sizeof(res));
> > +       smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
> > +
> > +       cnt = read_cycle_counter();
>
> Perhaps it's worth considering having the helpers for PMC registers
> (e.g. write_cntr()) accepting the cycle counter as the index==31
> to simplify the test code implementation ?
>
> Thank you,
> Reiji
>
> > +       GUEST_ASSERT_2(cnt == 0, cnt, typer);
> > +}
> > +
> >  static void guest_code(void)
> >  {
> >         switch (guest_data.test_stage) {
> > @@ -687,6 +757,9 @@ static void guest_code(void)
> >         case TEST_STAGE_KVM_EVENT_FILTER:
> >                 guest_event_filter_test(guest_data.pmu_filter);
> >                 break;
> > +       case TEST_STAGE_KVM_EVTYPE_FILTER:
> > +               guest_evtype_filter_test();
> > +               break;
> >         default:
> >                 GUEST_ASSERT_1(0, guest_data.test_stage);
> >         }
> > @@ -1014,10 +1087,22 @@ static void run_kvm_event_filter_test(void)
> >         run_kvm_event_filter_error_tests();
> >  }
> >
> > +static void run_kvm_evtype_filter_test(void)
> > +{
> > +       struct vpmu_vm *vpmu_vm;
> > +
> > +       guest_data.test_stage = TEST_STAGE_KVM_EVTYPE_FILTER;
> > +
> > +       vpmu_vm = create_vpmu_vm(guest_code, NULL);
> > +       run_vcpu(vpmu_vm->vcpu);
> > +       destroy_vpmu_vm(vpmu_vm);
> > +}
> > +
> >  static void run_tests(uint64_t pmcr_n)
> >  {
> >         run_counter_access_tests(pmcr_n);
> >         run_kvm_event_filter_test();
> > +       run_kvm_evtype_filter_test();
> >  }
> >
> >  /*
> > --
> > 2.39.1.581.gbfd45094c4-goog
> >

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [REPOST PATCH 12/16] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality
  2023-03-07  6:09   ` Reiji Watanabe
  2023-03-08  1:19     ` Reiji Watanabe
@ 2023-03-10 23:58     ` Raghavendra Rao Ananta
  1 sibling, 0 replies; 36+ messages in thread
From: Raghavendra Rao Ananta @ 2023-03-10 23:58 UTC (permalink / raw)
  To: Reiji Watanabe
  Cc: Oliver Upton, Marc Zyngier, Ricardo Koller, James Morse,
	Suzuki K Poulose, Paolo Bonzini, Jing Zhang, Colton Lewis,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Reiji,

On Mon, Mar 6, 2023 at 10:10 PM Reiji Watanabe <reijiw@google.com> wrote:
>
> Hi Raghu,
>
> On Tue, Feb 14, 2023 at 5:07 PM Raghavendra Rao Ananta
> <rananta@google.com> wrote:
> >
> > Extend the vCPU migration test to also validate the vPMU's
> > functionality when set up for overflow conditions.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> >  .../testing/selftests/kvm/aarch64/vpmu_test.c | 223 ++++++++++++++++--
> >  1 file changed, 198 insertions(+), 25 deletions(-)
> >
> > diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > index 0c9d801f4e602..066dc17fa3906 100644
> > --- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > +++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
> > @@ -21,7 +21,9 @@
> >   *
> >   * 4. Since the PMU registers are per-cpu, stress KVM by frequently
> >   * migrating the guest vCPU to random pCPUs in the system, and check
> > - * if the vPMU is still behaving as expected.
> > + * if the vPMU is still behaving as expected. The sub-tests include
> > + * testing basic functionalities such as basic counters behavior,
> > + * overflow, and overflow interrupts.
> >   *
> >   * Copyright (c) 2022 Google LLC.
> >   *
> > @@ -41,13 +43,27 @@
> >  #include <sys/sysinfo.h>
> >
> >  #include "delay.h"
> > +#include "gic.h"
> > +#include "spinlock.h"
> >
> >  /* The max number of the PMU event counters (excluding the cycle counter) */
> >  #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1)
> >
> > +/* The cycle counter bit position that's common among the PMU registers */
> > +#define ARMV8_PMU_CYCLE_COUNTER_IDX    31
> > +
> >  /* The max number of event numbers that's supported */
> >  #define ARMV8_PMU_MAX_EVENTS           64
> >
> > +#define PMU_IRQ                                23
> > +
> > +#define COUNT_TO_OVERFLOW      0xFULL
> > +#define PRE_OVERFLOW_32                (GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1)
> > +#define PRE_OVERFLOW_64                (GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1)
> > +
> > +#define GICD_BASE_GPA  0x8000000ULL
> > +#define GICR_BASE_GPA  0x80A0000ULL
> > +
> >  #define msecs_to_usecs(msec)           ((msec) * 1000LL)
> >
> >  /*
> > @@ -162,6 +178,17 @@ static inline void write_sel_evtyper(int sel, unsigned long val)
> >         isb();
> >  }
> >
> > +static inline void write_pmovsclr(unsigned long val)
> > +{
> > +       write_sysreg(val, pmovsclr_el0);
> > +       isb();
> > +}
> > +
> > +static unsigned long read_pmovsclr(void)
> > +{
> > +       return read_sysreg(pmovsclr_el0);
> > +}
> > +
> >  static inline void enable_counter(int idx)
> >  {
> >         uint64_t v = read_sysreg(pmcntenset_el0);
> > @@ -178,11 +205,33 @@ static inline void disable_counter(int idx)
> >         isb();
> >  }
> >
> > +static inline void enable_irq(int idx)
> > +{
> > +       uint64_t v = read_sysreg(pmintenset_el1);
> > +
> > +       write_sysreg(BIT(idx) | v, pmintenset_el1);
> > +       isb();
> > +}
> > +
> > +static inline void disable_irq(int idx)
> > +{
> > +       uint64_t v = BIT(idx);
> > +
> > +       write_sysreg(v, pmintenclr_el1);
> > +       isb();
> > +}
> > +
> >  static inline uint64_t read_cycle_counter(void)
> >  {
> >         return read_sysreg(pmccntr_el0);
> >  }
> >
> > +static inline void write_cycle_counter(uint64_t v)
> > +{
> > +       write_sysreg(v, pmccntr_el0);
> > +       isb();
> > +}
> > +
> >  static inline void reset_cycle_counter(void)
> >  {
> >         uint64_t v = read_sysreg(pmcr_el0);
> > @@ -289,6 +338,15 @@ struct guest_data {
> >
> >  static struct guest_data guest_data;
> >
> > +/* Data to communicate among guest threads */
> > +struct guest_irq_data {
> > +       uint32_t pmc_idx_bmap;
> > +       uint32_t irq_received_bmap;
> > +       struct spinlock lock;
> > +};
> > +
> > +static struct guest_irq_data guest_irq_data;
> > +
> >  #define VCPU_MIGRATIONS_TEST_ITERS_DEF         1000
> >  #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS 2
> >
> > @@ -322,6 +380,79 @@ static void guest_sync_handler(struct ex_regs *regs)
> >         expected_ec = INVALID_EC;
> >  }
> >
> > +static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_bmap)
>
> Can you please add a comment about what is pmc_idx_bmap ?
>
Of course! Now that I look at it, it's not that clear. It's actually the
bitmap of the PMC(s) from which we should expect an interrupt. I'll add
a comment in v2.
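Something along these lines, perhaps (wording of the comments is tentative):

/* Data to communicate among guest threads */
struct guest_irq_data {
	/* Bitmap of the PMCs that are expected to raise an overflow interrupt */
	uint32_t pmc_idx_bmap;
	/* Bitmap of the PMCs for which an overflow interrupt was received */
	uint32_t irq_received_bmap;
	struct spinlock lock;
};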
>
> > +{
> > +       /*
> > +        * Fail if there's an interrupt from unexpected PMCs.
> > +        * All the expected events' IRQs may not arrive at the same time.
> > +        * Hence, check if the interrupt is valid only if it's expected.
> > +        */
> > +       if (pmovsclr & BIT(pmc_idx)) {
> > +               GUEST_ASSERT_3(pmc_idx_bmap & BIT(pmc_idx), pmc_idx, pmovsclr, pmc_idx_bmap);
> > +               write_pmovsclr(BIT(pmc_idx));
> > +       }
> > +}
> > +
> > +static void guest_irq_handler(struct ex_regs *regs)
> > +{
> > +       uint32_t pmc_idx_bmap;
> > +       uint64_t i, pmcr_n = get_pmcr_n();
> > +       uint32_t pmovsclr = read_pmovsclr();
> > +       unsigned int intid = gic_get_and_ack_irq();
> > +
> > +       /* No other IRQ apart from the PMU IRQ is expected */
> > +       GUEST_ASSERT_1(intid == PMU_IRQ, intid);
> > +
> > +       spin_lock(&guest_irq_data.lock);
>
> Could you explain why this lock is required in this patch ??
> If this is used to serialize the interrupt context code and
> the normal (non-interrupt) context code, you might want to
> disable the IRQ ?  Using the spin lock won't work well for
> that if the interrupt handler is invoked while the normal
> context code grabs the lock.
> Having said that, since execute_precise_instrs() disables the PMU
>  via PMCR, and does isb after that, I don't think the overflow
> interrupt is delivered while the normal context code is in
> pmu_irq_*() anyway.
>
I think you are right. At least in the current state of the patch, we
don't need this lock, nor do we explicitly have to enable/disable IRQs
to deal with a race. I've checked the later patches as well, and even in
the multi-vCPU config we wouldn't need it, as guest_irq_data is per-cpu.
(I probably introduced it while anticipating future needs.) Thanks for
catching this. I'll remove it in v2.
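For v2 the handler would then reduce to something like this (untested
sketch, with the lock simply dropped and nothing else changed):

static void guest_irq_handler(struct ex_regs *regs)
{
	uint32_t pmc_idx_bmap;
	uint64_t i, pmcr_n = get_pmcr_n();
	uint32_t pmovsclr = read_pmovsclr();
	unsigned int intid = gic_get_and_ack_irq();

	/* No other IRQ apart from the PMU IRQ is expected */
	GUEST_ASSERT_1(intid == PMU_IRQ, intid);

	pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);

	for (i = 0; i < pmcr_n; i++)
		guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
	guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);

	/* Mark IRQ as received for the corresponding PMCs */
	WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);

	gic_set_eoi(intid);
}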

> > +       pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
> > +
> > +       for (i = 0; i < pmcr_n; i++)
> > +               guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
> > +       guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);
> > +
> > +       /* Mark IRQ as received for the corresponding PMCs */
> > +       WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
> > +       spin_unlock(&guest_irq_data.lock);
> > +
> > +       gic_set_eoi(intid);
> > +}
> > +
> > +static int pmu_irq_received(int pmc_idx)
> > +{
> > +       bool irq_received;
> > +
> > +       spin_lock(&guest_irq_data.lock);
> > +       irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
> > +       WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> > +       spin_unlock(&guest_irq_data.lock);
> > +
> > +       return irq_received;
> > +}
> > +
> > +static void pmu_irq_init(int pmc_idx)
> > +{
> > +       write_pmovsclr(BIT(pmc_idx));
> > +
> > +       spin_lock(&guest_irq_data.lock);
> > +       WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> > +       WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
> > +       spin_unlock(&guest_irq_data.lock);
> > +
> > +       enable_irq(pmc_idx);
> > +}
> > +
> > +static void pmu_irq_exit(int pmc_idx)
> > +{
> > +       write_pmovsclr(BIT(pmc_idx));
> > +
> > +       spin_lock(&guest_irq_data.lock);
> > +       WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> > +       WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
> > +       spin_unlock(&guest_irq_data.lock);
> > +
> > +       disable_irq(pmc_idx);
> > +}
> > +
> >  /*
> >   * Run the given operation that should trigger an exception with the
> >   * given exception class. The exception handler (guest_sync_handler)
> > @@ -420,12 +551,20 @@ static void execute_precise_instrs(int num, uint32_t pmcr)
> >         precise_instrs_loop(loop, pmcr);
> >  }
> >
> > -static void test_instructions_count(int pmc_idx, bool expect_count)
> > +static void test_instructions_count(int pmc_idx, bool expect_count, bool test_overflow)
> >  {
> >         int i;
> >         struct pmc_accessor *acc;
> > -       uint64_t cnt;
> > -       int instrs_count = 100;
> > +       uint64_t cntr_val = 0;
> > +       int instrs_count = 500;
>
> Can we set instrs_count based on the value we set for cntr_val?
> (so that instrs_count can be adjusted automatically when we change the
> value of cntr_val ?)
>
Sure, I can do that to keep things safe.
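A possible way to do it (untested sketch; the x32 margin is arbitrary,
the point is only that instrs_count now follows cntr_val automatically):

	uint64_t cntr_val = 0;
	int instrs_count = 100;

	if (test_overflow) {
		/* Overflow scenarios can only be tested when a count is expected */
		GUEST_ASSERT_1(expect_count, pmc_idx);

		cntr_val = PRE_OVERFLOW_32;
		/* Retire comfortably more instructions than needed to overflow from cntr_val */
		instrs_count = (GENMASK(31, 0) - cntr_val + 1) * 32;
		pmu_irq_init(pmc_idx);
	}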

> > +
> > +       if (test_overflow) {
> > +               /* Overflow scenarios can only be tested when a count is expected */
> > +               GUEST_ASSERT_1(expect_count, pmc_idx);
> > +
> > +               cntr_val = PRE_OVERFLOW_32;
> > +               pmu_irq_init(pmc_idx);
> > +       }
> >
> >         enable_counter(pmc_idx);
> >
> > @@ -433,41 +572,68 @@ static void test_instructions_count(int pmc_idx, bool expect_count)
> >         for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
> >                 acc = &pmc_accessors[i];
> >
> > -               pmu_disable_reset();
> > -
> > +               acc->write_cntr(pmc_idx, cntr_val);
> >                 acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
> >
> > -               /* Enable the PMU and execute precisely number of instructions as a workload */
> > -               execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
> > +               /*
> > +                * Enable the PMU and execute a precise number of instructions as a workload.
> > +                * Since execute_precise_instrs() disables the PMU at the end, 'instrs_count'
> > +                * should have enough instructions to raise an IRQ.
> > +                */
> > +               execute_precise_instrs(instrs_count, ARMV8_PMU_PMCR_E);
> >
> > -               /* If a count is expected, the counter should be increased by 'instrs_count' */
> > -               cnt = acc->read_cntr(pmc_idx);
> > -               GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
> > -                               i, expect_count, cnt, instrs_count);
> > +               /*
> > +                * If an overflow is expected, only check for the overflow flag.
> > +                * As overflow interrupt is enabled, the interrupt would add additional
> > +                * instructions and mess up the precise instruction count. Hence, measure
> > +                * the instructions count only when the test is not set up for an overflow.
> > +                */
> > +               if (test_overflow) {
> > +                       GUEST_ASSERT_2(pmu_irq_received(pmc_idx), pmc_idx, i);
> > +               } else {
> > +                       uint64_t cnt = acc->read_cntr(pmc_idx);
> > +
> > +                       GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
> > +                                       pmc_idx, i, cnt, expect_count);
> > +               }
> >         }
> >
> > -       disable_counter(pmc_idx);
> > +       if (test_overflow)
> > +               pmu_irq_exit(pmc_idx);
> >  }
> >
> > -static void test_cycles_count(bool expect_count)
> > +static void test_cycles_count(bool expect_count, bool test_overflow)
> >  {
> >         uint64_t cnt;
> >
> > -       pmu_enable();
> > -       reset_cycle_counter();
> > +       if (test_overflow) {
> > +               /* Overflow scenarios can only be tested when a count is expected */
> > +               GUEST_ASSERT(expect_count);
> > +
> > +               write_cycle_counter(PRE_OVERFLOW_64);
> > +               pmu_irq_init(ARMV8_PMU_CYCLE_COUNTER_IDX);
> > +       } else {
> > +               reset_cycle_counter();
> > +       }
> >
> >         /* Count cycles in EL0 and EL1 */
> >         write_pmccfiltr(0);
> >         enable_cycle_counter();
> >
> > +       /* Enable the PMU and execute a precise number of instructions as a workload */
>
> Can you please add a comment why we do this (500 times) iterations ?
> Can we set the iteration number based on the initial value of the
> cycle counter ?
>
I believe I have a comment explaining it in the upcoming patches.
I should've had it on this one, though. I'll move it in v2.
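Roughly the comment I have in mind (wording is tentative):

	/*
	 * In the overflow case the cycle counter starts at PRE_OVERFLOW_64, so
	 * it only needs COUNT_TO_OVERFLOW more cycles to wrap. 500 instructions
	 * take far more cycles than that, even at a high IPC, so the workload
	 * is guaranteed to advance the counter and, when armed, raise the
	 * overflow interrupt before the PMU is disabled again.
	 */
	execute_precise_instrs(500, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);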

> > +       execute_precise_instrs(500, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
> >         cnt = read_cycle_counter();
>   >
> >         /*
> >          * If a count is expected by the test, the cycle counter should be increased by
> > -        * at least 1, as there is at least one instruction between enabling the
> > +        * at least 1, as there are a number of instructions between enabling the
> >          * counter and reading the counter.
> >          */
>
> "at least 1" doesn't seem to be consistent with the GUEST_ASSERT_2 below
> when test_overflow is true, considering the initial value of the cycle counter.
> Shouldn't this GUEST_ASSERT_2 be executed only if test_overflow is false ?
> (Or do you want to adjust the comment ?)
>
Yes, I may have to tweak the comment to make things clear.
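One way to restructure it (untested sketch, splitting the check as you
suggest so the raw count is only asserted when no overflow was set up):

	if (test_overflow) {
		/* The overflow/IRQ check already covers the wrapped counter */
		GUEST_ASSERT_2(pmu_irq_received(ARMV8_PMU_CYCLE_COUNTER_IDX), cnt, expect_count);
		pmu_irq_exit(ARMV8_PMU_CYCLE_COUNTER_IDX);
	} else {
		/*
		 * Starting from 0, the counter should have advanced by at
		 * least 1, as there are a number of instructions between
		 * enabling the counter and reading the counter.
		 */
		GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
	}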

> >         GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
> > +       if (test_overflow) {
> > +               GUEST_ASSERT_2(pmu_irq_received(ARMV8_PMU_CYCLE_COUNTER_IDX), cnt, expect_count);
> > +               pmu_irq_exit(ARMV8_PMU_CYCLE_COUNTER_IDX);
> > +       }
> >
> >         disable_cycle_counter();
> >         pmu_disable_reset();
> > @@ -477,19 +643,28 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
> >  {
> >         switch (event) {
> >         case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
> > -               test_instructions_count(pmc_idx, expect_count);
> > +               test_instructions_count(pmc_idx, expect_count, false);
> >                 break;
> >         case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
> > -               test_cycles_count(expect_count);
> > +               test_cycles_count(expect_count, false);
> >                 break;
> >         }
> >  }
> >
> >  static void test_basic_pmu_functionality(void)
> >  {
> > +       local_irq_disable();
> > +       gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
> > +       gic_irq_enable(PMU_IRQ);
> > +       local_irq_enable();
> > +
> >         /* Test events on generic and cycle counters */
> > -       test_instructions_count(0, true);
> > -       test_cycles_count(true);
> > +       test_instructions_count(0, true, false);
> > +       test_cycles_count(true, false);
> > +
> > +       /* Test overflow with interrupts on generic and cycle counters */
> > +       test_instructions_count(0, true, true);
> > +       test_cycles_count(true, true);
> >  }
> >
> >  /*
> > @@ -813,9 +988,6 @@ static void guest_code(void)
> >         GUEST_DONE();
> >  }
> >
> > -#define GICD_BASE_GPA  0x8000000ULL
> > -#define GICR_BASE_GPA  0x80A0000ULL
> > -
> >  static unsigned long *
> >  set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
> >  {
> > @@ -866,7 +1038,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
> >         struct kvm_vcpu *vcpu;
> >         struct kvm_vcpu_init init;
> >         uint8_t pmuver, ec;
> > -       uint64_t dfr0, irq = 23;
> > +       uint64_t dfr0, irq = PMU_IRQ;
> >         struct vpmu_vm *vpmu_vm;
> >         struct kvm_device_attr irq_attr = {
> >                 .group = KVM_ARM_VCPU_PMU_V3_CTRL,
> > @@ -883,6 +1055,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
> >
> >         vpmu_vm->vm = vm = vm_create(1);
> >         vm_init_descriptor_tables(vm);
> > +       vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
> >
> >         /* Catch exceptions for easier debugging */
> >         for (ec = 0; ec < ESR_EC_NUM; ec++) {
> > --
> > 2.39.1.581.gbfd45094c4-goog
> >
>
> Thanks,
> Reiji
>

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2023-03-10 23:59 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-02-15  1:07 [REPOST PATCH 00/16] Add support for vPMU selftests Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 01/16] tools: arm64: Import perf_event.h Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 02/16] KVM: selftests: aarch64: Introduce vpmu_counter_access test Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 03/16] KVM: selftests: aarch64: vPMU register test for implemented counters Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 04/16] KVM: selftests: aarch64: vPMU register test for unimplemented counters Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 05/16] selftests: KVM: aarch64: Refactor the vPMU counter access tests Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 06/16] tools: arm64: perf_event: Define Cycle counter enable/overflow bits Raghavendra Rao Ananta
2023-03-03  0:46   ` Reiji Watanabe
2023-03-09 22:14     ` Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 07/16] selftests: KVM: aarch64: Add PMU cycle counter helpers Raghavendra Rao Ananta
2023-03-03  3:06   ` Reiji Watanabe
2023-03-09 22:19     ` Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 08/16] selftests: KVM: aarch64: Consider PMU event filters for VM creation Raghavendra Rao Ananta
2023-03-03  4:30   ` Reiji Watanabe
2023-03-09 22:45     ` Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 09/16] selftests: KVM: aarch64: Add KVM PMU event filter test Raghavendra Rao Ananta
2023-03-04 20:28   ` Reiji Watanabe
2023-03-09 23:17     ` Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 10/16] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test Raghavendra Rao Ananta
2023-03-07  1:19   ` Reiji Watanabe
2023-03-07 16:09     ` Sean Christopherson
2023-03-10 21:57     ` Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 11/16] selftests: KVM: aarch64: Add vCPU migration test for PMU Raghavendra Rao Ananta
2023-03-07  3:43   ` Reiji Watanabe
2023-03-10  2:28     ` Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 12/16] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality Raghavendra Rao Ananta
2023-03-07  6:09   ` Reiji Watanabe
2023-03-08  1:19     ` Reiji Watanabe
2023-03-10 23:58     ` Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 13/16] selftests: KVM: aarch64: Test chained events for PMU Raghavendra Rao Ananta
2023-03-08  3:15   ` Reiji Watanabe
2023-02-15  1:07 ` [REPOST PATCH 14/16] selftests: KVM: aarch64: Add PMU test to chain all the counters Raghavendra Rao Ananta
2023-03-08  3:40   ` Reiji Watanabe
2023-02-15  1:07 ` [REPOST PATCH 15/16] selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation Raghavendra Rao Ananta
2023-02-15  1:07 ` [REPOST PATCH 16/16] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs Raghavendra Rao Ananta
2023-03-08  4:44   ` Reiji Watanabe
