* [PATCH 00/13] Extend the vPMU selftest
@ 2023-02-13 18:02 Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 01/13] selftests: KVM: aarch64: Rename vpmu_counter_access.c to vpmu_test.c Raghavendra Rao Ananta
                   ` (14 more replies)
  0 siblings, 15 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Hello,

This vPMU KVM selftest series is an extension to the selftests
introduced by Reiji Watanabe in his series that allows userspace to
limit the number of PMCs on a vCPU [1].

The idea behind this series is to expand the test coverage to include
tests that validate actions from userspace, such as allowing or
denying certain events via the KVM_ARM_VCPU_PMU_V3_FILTER attribute,
KVM's guarding of the PMU event-type attributes against counting
EL2/EL3 events, and the general KVM behavior that enables PMU
emulation. The last part validates the guest's expectations of the
vPMU by setting up a stress test that frequently force-migrates
multiple vCPUs across random pCPUs in the system, thus ensuring that
KVM manages the vCPU PMU contexts correctly.
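
For reference, the userspace plumbing that the filter tests exercise
boils down to a KVM_SET_DEVICE_ATTR ioctl on the vCPU fd. A minimal
sketch (assuming arm64 uapi headers that carry
KVM_ARM_VCPU_PMU_V3_FILTER, and a hypothetical vcpu_fd created with
the KVM_ARM_VCPU_PMU_V3 feature; error handling trimmed):

  #include <err.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Allow the guest to count only INST_RETIRED (event 0x08) */
  static void allow_inst_retired(int vcpu_fd)
  {
  	struct kvm_pmu_event_filter filter = {
  		.base_event	= 0x08,
  		.nevents	= 1,
  		.action		= KVM_PMU_EVENT_ALLOW,
  	};
  	struct kvm_device_attr attr = {
  		.group	= KVM_ARM_VCPU_PMU_V3_CTRL,
  		.attr	= KVM_ARM_VCPU_PMU_V3_FILTER,
  		.addr	= (__u64)&filter,
  	};

  	/* Filters must be set before KVM_ARM_VCPU_PMU_V3_INIT; later calls get EBUSY */
  	if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr))
  		err(1, "KVM_SET_DEVICE_ATTR(PMU_V3_FILTER)");
  }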

Patch-1 renames the test file to be more generic.

Patch-2 refactors the existing tests so that the upcoming tests can be
plugged in easily.

Patches 3 and 4 add helper macros and functions, respectively, to
interact with the cycle counter.

Patch-5 extends create_vpmu_vm() to accept, as an argument, an array
of event filters to be applied to the VM.

Patch-6 tests the KVM_ARM_VCPU_PMU_V3_FILTER attribute by applying
various combinations of events to be allowed or denied for the guest
and verifying the guest's behavior.
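
One detail worth keeping in mind while reading those combinations: as
in KVM itself, the action of the first filter decides the default for
every event that is not listed (a leading DENY leaves everything else
allowed, and vice versa). A rough illustration:

  /* First action is DENY, so any event not listed defaults to "allowed" */
  struct kvm_pmu_event_filter filters[] = {
  	{ .base_event = 0x08, .nevents = 1, .action = KVM_PMU_EVENT_DENY  },	/* INST_RETIRED */
  	{ .base_event = 0x11, .nevents = 1, .action = KVM_PMU_EVENT_ALLOW },	/* CPU_CYCLES */
  };

The guest then cross-checks both the PMCEID0/1_EL0 advertisement and
the actual counts against the expectation derived from such a list.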

Patch-7 adds a test to validate KVM's handling of guest requests to
count events in EL2/EL3.
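
Conceptually, the guest side of that check is small. A sketch using
the selftest's sysreg helpers and the ARMV8_PMU_* event-type bits
from tools/arch/arm64/include/asm/perf_event.h:

  struct arm_smccc_res res;

  /* Ask for an EL2-only INST_RETIRED event; KVM must keep it from counting */
  write_sysreg(ARMV8_PMUV3_PERFCTR_INST_RETIRED | ARMV8_PMU_INCLUDE_EL2 |
  	     ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0, pmevtyper0_el0);
  write_sysreg(0, pmevcntr0_el0);
  write_sysreg(BIT(0), pmcntenset_el0);
  isb();

  /* Bounce through EL2 with a benign SMCCC call */
  smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);

  GUEST_ASSERT(read_sysreg(pmevcntr0_el0) == 0);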

Patch-8 introduces the vCPU migration stress test, which validates
the behavior of the cycle counter and a general-purpose counter
across vCPU migrations.
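
The migration machinery itself is ordinary pthread affinity juggling.
Roughly (a sketch assuming _GNU_SOURCE, <pthread.h>, <sys/sysinfo.h>,
and a pt_vcpu handle for the vCPU thread the test spawns):

  cpu_set_t cpuset;
  uint32_t pcpu = rand() % get_nprocs_conf();	/* the test also checks the pCPU is online */

  CPU_ZERO(&cpuset);
  CPU_SET(pcpu, &cpuset);

  /* Re-pin the running vCPU thread, forcing KVM to save/restore the vPMU context */
  pthread_setaffinity_np(pt_vcpu, sizeof(cpuset), &cpuset);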

Patches 9, 10, and 11 expand the test in patch-8 to validate
overflow/IRQ functionality, chained events, and occupancy of all the
PMU counters, respectively.
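
For the overflow/IRQ portion, the guest primes a counter a few events
short of the 32-bit boundary, unmasks the overflow interrupt through
PMINTENSET_EL1, and expects the PMU PPI (interrupt 23 in this test)
once the workload pushes the counter past the edge. A condensed
guest-side sketch (run_workload() stands in for the instruction and
cycle loops the test actually uses):

  #define PRE_OVERFLOW_32	(GENMASK(31, 0) - 0xf + 1)	/* 15 events short of wrapping */

  write_sysreg(PRE_OVERFLOW_32, pmevcntr0_el0);
  write_sysreg(BIT(0), pmintenset_el1);	/* overflow IRQ for PMC0 */
  write_sysreg(BIT(0), pmcntenset_el0);
  isb();

  run_workload();	/* retires well over 15 of the programmed events */

  /* The IRQ handler acks the PPI, checks PMOVSSET_EL0 bit 0, and clears it via PMOVSCLR_EL0 */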

Patch-12 extends create_vpmu_vm() to create multiple vCPUs for the VM.

Patch-13 expands the stress tests for multiple vCPUs.

The series has been tested on hardware with PMUv3p1 and PMUv3p5.

Thank you.
Raghavendra

[1]: https://lore.kernel.org/all/20230203040242.1792453-1-reijiw@google.com/


Raghavendra Rao Ananta (13):
  selftests: KVM: aarch64: Rename vpmu_counter_access.c to vpmu_test.c
  selftests: KVM: aarch64: Refactor the vPMU counter access tests
  tools: arm64: perf_event: Define Cycle counter enable/overflow bits
  selftests: KVM: aarch64: Add PMU cycle counter helpers
  selftests: KVM: aarch64: Consider PMU event filters for VM creation
  selftests: KVM: aarch64: Add KVM PMU event filter test
  selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test
  selftests: KVM: aarch64: Add vCPU migration test for PMU
  selftests: KVM: aarch64: Test PMU overflow/IRQ functionality
  selftests: KVM: aarch64: Test chained events for PMU
  selftests: KVM: aarch64: Add PMU test to chain all the counters
  selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation
  selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs

 tools/arch/arm64/include/asm/perf_event.h     |    7 +
 tools/testing/selftests/kvm/Makefile          |    2 +-
 .../kvm/aarch64/vpmu_counter_access.c         |  642 -------
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 1710 +++++++++++++++++
 4 files changed, 1718 insertions(+), 643 deletions(-)
 delete mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
 create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_test.c

-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH 01/13] selftests: KVM: aarch64: Rename vpmu_counter_access.c to vpmu_test.c
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
@ 2023-02-13 18:02 ` Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 02/13] selftests: KVM: aarch64: Refactor the vPMU counter access tests Raghavendra Rao Ananta
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

The upcoming patches will add more vPMU-related tests to the file.
Hence, rename it to something more generic.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 tools/testing/selftests/kvm/Makefile                            | 2 +-
 .../kvm/aarch64/{vpmu_counter_access.c => vpmu_test.c}          | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
 rename tools/testing/selftests/kvm/aarch64/{vpmu_counter_access.c => vpmu_test.c} (99%)

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index b27fea0ce5918..a4d262e139b18 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -143,7 +143,7 @@ TEST_GEN_PROGS_aarch64 += aarch64/psci_test
 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
-TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access
+TEST_GEN_PROGS_aarch64 += aarch64/vpmu_test
 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test
 TEST_GEN_PROGS_aarch64 += demand_paging_test
 TEST_GEN_PROGS_aarch64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
similarity index 99%
rename from tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
rename to tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 453f0dd240f44..581be0c463ad1 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 /*
- * vpmu_counter_access - Test vPMU event counter access
+ * vpmu_test - Test the vPMU
  *
  * Copyright (c) 2022 Google LLC.
  *
-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 02/13] selftests: KVM: aarch64: Refactor the vPMU counter access tests
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 01/13] selftests: KVM: aarch64: Rename vpmu_counter_access.c to vpmu_test.c Raghavendra Rao Ananta
@ 2023-02-13 18:02 ` Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 03/13] tools: arm64: perf_event: Define Cycle counter enable/overflow bits Raghavendra Rao Ananta
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Refactor the existing counter access tests into their own
independent functions, and make the test-running logic generic
to make way for the upcoming tests.

No functional change intended.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 140 ++++++++++++------
 1 file changed, 98 insertions(+), 42 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 581be0c463ad1..d72c3c9b9c39f 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -147,6 +147,11 @@ static inline void disable_counter(int idx)
 	isb();
 }
 
+static inline uint64_t get_pmcr_n(void)
+{
+	return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
+}
+
 /*
  * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}<n>_EL0
  * accessors that test cases will use. Each of the accessors will
@@ -183,6 +188,23 @@ struct pmc_accessor pmc_accessors[] = {
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
 
+struct vpmu_vm {
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	int gic_fd;
+};
+
+enum test_stage {
+	TEST_STAGE_COUNTER_ACCESS = 1,
+};
+
+struct guest_data {
+	enum test_stage test_stage;
+	uint64_t expected_pmcr_n;
+};
+
+static struct guest_data guest_data;
+
 static void guest_sync_handler(struct ex_regs *regs)
 {
 	uint64_t esr, ec;
@@ -295,7 +317,7 @@ static void test_bitmap_pmu_regs(int pmc_idx, bool set_op)
 		write_sysreg(test_bit, pmovsset_el0);
 
 		/* The bit will be set only if the counter is implemented */
-		pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
+		pmcr_n = get_pmcr_n();
 		set_expected = (pmc_idx < pmcr_n) ? true : false;
 	} else {
 		write_sysreg(test_bit, pmcntenclr_el0);
@@ -424,15 +446,14 @@ static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
  * if reading/writing PMU registers for implemented or unimplemented
  * counters can work as expected.
  */
-static void guest_code(uint64_t expected_pmcr_n)
+static void guest_counter_access_test(uint64_t expected_pmcr_n)
 {
-	uint64_t pmcr, pmcr_n, unimp_mask;
+	uint64_t pmcr_n, unimp_mask;
 	int i, pmc;
 
 	GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS);
 
-	pmcr = read_sysreg(pmcr_el0);
-	pmcr_n = FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
+	pmcr_n = get_pmcr_n();
 
 	/* Make sure that PMCR_EL0.N indicates the value userspace set */
 	GUEST_ASSERT_2(pmcr_n == expected_pmcr_n, pmcr_n, expected_pmcr_n);
@@ -462,6 +483,18 @@ static void guest_code(uint64_t expected_pmcr_n)
 		for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
 			test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
 	}
+}
+
+static void guest_code(void)
+{
+	switch (guest_data.test_stage) {
+	case TEST_STAGE_COUNTER_ACCESS:
+		guest_counter_access_test(guest_data.expected_pmcr_n);
+		break;
+	default:
+		GUEST_ASSERT_1(0, guest_data.test_stage);
+	}
+
 	GUEST_DONE();
 }
 
@@ -469,14 +502,14 @@ static void guest_code(uint64_t expected_pmcr_n)
 #define GICR_BASE_GPA	0x80A0000ULL
 
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
-				     int *gic_fd)
+static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
 	uint8_t pmuver, ec;
 	uint64_t dfr0, irq = 23;
+	struct vpmu_vm *vpmu_vm;
 	struct kvm_device_attr irq_attr = {
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
 		.attr = KVM_ARM_VCPU_PMU_V3_IRQ,
@@ -487,7 +520,10 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 		.attr = KVM_ARM_VCPU_PMU_V3_INIT,
 	};
 
-	vm = vm_create(1);
+	vpmu_vm = calloc(1, sizeof(*vpmu_vm));
+	TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm");
+
+	vpmu_vm->vm = vm = vm_create(1);
 	vm_init_descriptor_tables(vm);
 	/* Catch exceptions for easier debugging */
 	for (ec = 0; ec < ESR_EC_NUM; ec++) {
@@ -498,9 +534,9 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	/* Create vCPU with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
-	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+	vpmu_vm->vcpu = vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
 	vcpu_init_descriptor_tables(vcpu);
-	*gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
+	vpmu_vm->gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
 	/* Make sure that PMUv3 support is indicated in the ID register */
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
@@ -513,15 +549,21 @@ static struct kvm_vm *create_vpmu_vm(void *guest_code, struct kvm_vcpu **vcpup,
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
 
-	*vcpup = vcpu;
-	return vm;
+	return vpmu_vm;
+}
+
+static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
+{
+	close(vpmu_vm->gic_fd);
+	kvm_vm_free(vpmu_vm->vm);
+	free(vpmu_vm);
 }
 
-static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
+static void run_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct ucall uc;
 
-	vcpu_args_set(vcpu, 1, pmcr_n);
+	sync_global_to_guest(vcpu->vm, guest_data);
 	vcpu_run(vcpu);
 	switch (get_ucall(vcpu, &uc)) {
 	case UCALL_ABORT:
@@ -539,16 +581,18 @@ static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
  * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
  * and run the test.
  */
-static void run_test(uint64_t pmcr_n)
+static void run_counter_access_test(uint64_t pmcr_n)
 {
-	struct kvm_vm *vm;
+	struct vpmu_vm *vpmu_vm;
 	struct kvm_vcpu *vcpu;
-	int gic_fd;
 	uint64_t sp, pmcr, pmcr_orig;
 	struct kvm_vcpu_init init;
 
+	guest_data.expected_pmcr_n = pmcr_n;
+
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vpmu_vm = create_vpmu_vm(guest_code);
+	vcpu = vpmu_vm->vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
 	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
@@ -559,23 +603,22 @@ static void run_test(uint64_t pmcr_n)
 	pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT);
 	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
 
-	run_vcpu(vcpu, pmcr_n);
+	run_vcpu(vcpu);
 
 	/*
 	 * Reset and re-initialize the vCPU, and run the guest code again to
 	 * check if PMCR_EL0.N is preserved.
 	 */
-	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	vm_ioctl(vpmu_vm->vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
 	aarch64_vcpu_setup(vcpu, &init);
 	vcpu_init_descriptor_tables(vcpu);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
 	vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
 
-	run_vcpu(vcpu, pmcr_n);
+	run_vcpu(vcpu);
 
-	close(gic_fd);
-	kvm_vm_free(vm);
+	destroy_vpmu_vm(vpmu_vm);
 }
 
 /*
@@ -583,15 +626,18 @@ static void run_test(uint64_t pmcr_n)
  * the vCPU to @pmcr_n, which is larger than the host value.
  * The attempt should fail as @pmcr_n is too big to set for the vCPU.
  */
-static void run_error_test(uint64_t pmcr_n)
+static void run_counter_access_error_test(uint64_t pmcr_n)
 {
-	struct kvm_vm *vm;
+	struct vpmu_vm *vpmu_vm;
 	struct kvm_vcpu *vcpu;
-	int gic_fd, ret;
+	int ret;
 	uint64_t pmcr, pmcr_orig;
 
+	guest_data.expected_pmcr_n = pmcr_n;
+
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
+	vpmu_vm = create_vpmu_vm(guest_code);
+	vcpu = vpmu_vm->vcpu;
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
@@ -603,8 +649,25 @@ static void run_error_test(uint64_t pmcr_n)
 	TEST_ASSERT(ret, "Setting PMCR to 0x%lx (orig PMCR 0x%lx) didn't fail",
 		    pmcr, pmcr_orig);
 
-	close(gic_fd);
-	kvm_vm_free(vm);
+	destroy_vpmu_vm(vpmu_vm);
+}
+
+static void run_counter_access_tests(uint64_t pmcr_n)
+{
+	uint64_t i;
+
+	guest_data.test_stage = TEST_STAGE_COUNTER_ACCESS;
+
+	for (i = 0; i <= pmcr_n; i++)
+		run_counter_access_test(i);
+
+	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
+		run_counter_access_error_test(i);
+}
+
+static void run_tests(uint64_t pmcr_n)
+{
+	run_counter_access_tests(pmcr_n);
 }
 
 /*
@@ -613,30 +676,23 @@ static void run_error_test(uint64_t pmcr_n)
  */
 static uint64_t get_pmcr_n_limit(void)
 {
-	struct kvm_vm *vm;
-	struct kvm_vcpu *vcpu;
-	int gic_fd;
+	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vm = create_vpmu_vm(guest_code, &vcpu, &gic_fd);
-	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
-	close(gic_fd);
-	kvm_vm_free(vm);
+	vpmu_vm = create_vpmu_vm(guest_code);
+	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	destroy_vpmu_vm(vpmu_vm);
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
 
 int main(void)
 {
-	uint64_t i, pmcr_n;
+	uint64_t pmcr_n;
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
 
 	pmcr_n = get_pmcr_n_limit();
-	for (i = 0; i <= pmcr_n; i++)
-		run_test(i);
-
-	for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
-		run_error_test(i);
+	run_tests(pmcr_n);
 
 	return 0;
 }
-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 03/13] tools: arm64: perf_event: Define Cycle counter enable/overflow bits
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 01/13] selftests: KVM: aarch64: Rename vpmu_counter_access.c to vpmu_test.c Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 02/13] selftests: KVM: aarch64: Refactor the vPMU counter access tests Raghavendra Rao Ananta
@ 2023-02-13 18:02 ` Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 04/13] selftests: KVM: aarch64: Add PMU cycle counter helpers Raghavendra Rao Ananta
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Add the definitions of ARMV8_PMU_CNTOVS_C (Cycle counter overflow
bit) for the overflow status registers and ARMV8_PMU_CNTENSET_C
(Cycle counter enable bit) for the PMCNTENSET_EL0 register.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 tools/arch/arm64/include/asm/perf_event.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/tools/arch/arm64/include/asm/perf_event.h b/tools/arch/arm64/include/asm/perf_event.h
index 97e49a4d4969f..8ce23aabf6fe6 100644
--- a/tools/arch/arm64/include/asm/perf_event.h
+++ b/tools/arch/arm64/include/asm/perf_event.h
@@ -222,9 +222,11 @@
 /*
  * PMOVSR: counters overflow flag status reg
  */
+#define ARMV8_PMU_CNTOVS_C      (1 << 31) /* Cycle counter overflow bit */
 #define	ARMV8_PMU_OVSR_MASK		0xffffffff	/* Mask for writable bits */
 #define	ARMV8_PMU_OVERFLOWED_MASK	ARMV8_PMU_OVSR_MASK
 
+
 /*
  * PMXEVTYPER: Event selection reg
  */
@@ -247,6 +249,11 @@
 #define ARMV8_PMU_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
 #define ARMV8_PMU_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
 
+/*
+ * PMCNTENSET: Count Enable set reg
+ */
+#define ARMV8_PMU_CNTENSET_C    (1 << 31) /* Cycle counter enable bit */
+
 /* PMMIR_EL1.SLOTS mask */
 #define ARMV8_PMU_SLOTS_MASK	0xff
 
-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 04/13] selftests: KVM: aarch64: Add PMU cycle counter helpers
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
                   ` (2 preceding siblings ...)
  2023-02-13 18:02 ` [PATCH 03/13] tools: arm64: perf_event: Define Cycle counter enable/overflow bits Raghavendra Rao Ananta
@ 2023-02-13 18:02 ` Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 05/13] selftests: KVM: aarch64: Consider PMU event filters for VM creation Raghavendra Rao Ananta
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Add basic helpers for the test to access the cycle counter
registers. The helpers will be used in the upcoming patches
to run the tests related to the cycle counter.

No functional change intended.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 40 +++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index d72c3c9b9c39f..15aebc7d7dc94 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -147,6 +147,46 @@ static inline void disable_counter(int idx)
 	isb();
 }
 
+static inline uint64_t read_cycle_counter(void)
+{
+	return read_sysreg(pmccntr_el0);
+}
+
+static inline void reset_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcr_el0);
+
+	write_sysreg(ARMV8_PMU_PMCR_C | v, pmcr_el0);
+	isb();
+}
+
+static inline void enable_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenset_el0);
+	isb();
+}
+
+static inline void disable_cycle_counter(void)
+{
+	uint64_t v = read_sysreg(pmcntenset_el0);
+
+	write_sysreg(ARMV8_PMU_CNTENSET_C | v, pmcntenclr_el0);
+	isb();
+}
+
+static inline void write_pmccfiltr(unsigned long val)
+{
+	write_sysreg(val, pmccfiltr_el0);
+	isb();
+}
+
+static inline uint64_t read_pmccfiltr(void)
+{
+	return read_sysreg(pmccfiltr_el0);
+}
+
 static inline uint64_t get_pmcr_n(void)
 {
 	return FIELD_GET(ARMV8_PMU_PMCR_N, read_sysreg(pmcr_el0));
-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 05/13] selftests: KVM: aarch64: Consider PMU event filters for VM creation
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
                   ` (3 preceding siblings ...)
  2023-02-13 18:02 ` [PATCH 04/13] selftests: KVM: aarch64: Add PMU cycle counter helpers Raghavendra Rao Ananta
@ 2023-02-13 18:02 ` Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 06/13] selftests: KVM: aarch64: Add KVM PMU event filter test Raghavendra Rao Ananta
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Accept a list of KVM PMU event filters as an argument while creating
a VM via create_vpmu_vm(). Upcoming patches will leverage this to
test the event filters' functionality.

No functional change intended.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 64 +++++++++++++++++--
 1 file changed, 60 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 15aebc7d7dc94..2b3a4fa3afa9c 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -15,10 +15,14 @@
 #include <vgic.h>
 #include <asm/perf_event.h>
 #include <linux/bitfield.h>
+#include <linux/bitmap.h>
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+/* The max number of event numbers that's supported */
+#define ARMV8_PMU_MAX_EVENTS		64
+
 /*
  * The macros and functions below for reading/writing PMEV{CNTR,TYPER}<n>_EL0
  * were basically copied from arch/arm64/kernel/perf_event.c.
@@ -224,6 +228,8 @@ struct pmc_accessor pmc_accessors[] = {
 	{ read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern },
 };
 
+#define MAX_EVENT_FILTERS_PER_VM 10
+
 #define INVALID_EC	(-1ul)
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
@@ -232,6 +238,7 @@ struct vpmu_vm {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	int gic_fd;
+	unsigned long *pmu_filter;
 };
 
 enum test_stage {
@@ -541,8 +548,51 @@ static void guest_code(void)
 #define GICD_BASE_GPA	0x8000000ULL
 #define GICR_BASE_GPA	0x80A0000ULL
 
+static unsigned long *
+set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
+{
+	int j;
+	unsigned long *pmu_filter;
+	struct kvm_device_attr filter_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_FILTER,
+	};
+
+	/*
+	 * Setting up of the bitmap is similar to what KVM does.
+	 * If the first filter denies an event, default all the others to allow, and vice-versa.
+	 */
+	pmu_filter = bitmap_zalloc(ARMV8_PMU_MAX_EVENTS);
+	TEST_ASSERT(pmu_filter, "Failed to allocate the pmu_filter");
+
+	if (pmu_event_filters[0].action == KVM_PMU_EVENT_DENY)
+		bitmap_fill(pmu_filter, ARMV8_PMU_MAX_EVENTS);
+
+	for (j = 0; j < MAX_EVENT_FILTERS_PER_VM; j++) {
+		struct kvm_pmu_event_filter *pmu_event_filter = &pmu_event_filters[j];
+
+		if (!pmu_event_filter->nevents)
+			break;
+
+		pr_debug("Applying event filter:: event: 0x%x; action: %s\n",
+				pmu_event_filter->base_event,
+				pmu_event_filter->action == KVM_PMU_EVENT_ALLOW ? "ALLOW" : "DENY");
+
+		filter_attr.addr = (uint64_t) pmu_event_filter;
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+
+		if (pmu_event_filter->action == KVM_PMU_EVENT_ALLOW)
+			__set_bit(pmu_event_filter->base_event, pmu_filter);
+		else
+			__clear_bit(pmu_event_filter->base_event, pmu_filter);
+	}
+
+	return pmu_filter;
+}
+
 /* Create a VM that has one vCPU with PMUv3 configured. */
-static struct vpmu_vm *create_vpmu_vm(void *guest_code)
+static struct vpmu_vm *
+create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 {
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
@@ -586,6 +636,9 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
 
 	/* Initialize vPMU */
+	if (pmu_event_filters)
+		vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
 	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
 
@@ -594,6 +647,8 @@ static struct vpmu_vm *create_vpmu_vm(void *guest_code)
 
 static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
 {
+	if (vpmu_vm->pmu_filter)
+		bitmap_free(vpmu_vm->pmu_filter);
 	close(vpmu_vm->gic_fd);
 	kvm_vm_free(vpmu_vm->vm);
 	free(vpmu_vm);
@@ -631,7 +686,7 @@ static void run_counter_access_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu = vpmu_vm->vcpu;
 
 	/* Save the initial sp to restore them later to run the guest again */
@@ -676,7 +731,7 @@ static void run_counter_access_error_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu = vpmu_vm->vcpu;
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
@@ -719,9 +774,10 @@ static uint64_t get_pmcr_n_limit(void)
 	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vpmu_vm = create_vpmu_vm(guest_code);
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
 	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
 	destroy_vpmu_vm(vpmu_vm);
+
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
 
-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 06/13] selftests: KVM: aarch64: Add KVM PMU event filter test
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
                   ` (4 preceding siblings ...)
  2023-02-13 18:02 ` [PATCH 05/13] selftests: KVM: aarch64: Consider PMU event filters for VM creation Raghavendra Rao Ananta
@ 2023-02-13 18:02 ` Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 07/13] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test Raghavendra Rao Ananta
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Add tests to validate KVM's KVM_ARM_VCPU_PMU_V3_FILTER
attribute by applying a series of filters from userspace
to allow or deny events. The guest validates the
configuration by checking that it can count only the
events that are allowed.

The workload to execute a precise number of instructions
(execute_precise_instrs() and precise_instrs_loop()) is taken
from the kvm-unit-tests' arm/pmu.c.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 261 +++++++++++++++++-
 1 file changed, 258 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 2b3a4fa3afa9c..3dfb770b538e9 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -2,12 +2,21 @@
 /*
  * vpmu_test - Test the vPMU
  *
- * Copyright (c) 2022 Google LLC.
+ * The test suite contains a series of checks to validate the vPMU
+ * functionality. This test runs only when KVM_CAP_ARM_PMU_V3 is
+ * supported on the host. The tests include:
  *
- * This test checks if the guest can see the same number of the PMU event
+ * 1. Check if the guest can see the same number of the PMU event
  * counters (PMCR_EL0.N) that userspace sets, if the guest can access
  * those counters, and if the guest cannot access any other counters.
- * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host.
+ *
+ * 2. Test the functionality of KVM's KVM_ARM_VCPU_PMU_V3_FILTER
+ * attribute by applying a series of filters in various combinations
+ * of allowing or denying the events. The guest validates it by
+ * checking if it's able to count only the events that are allowed.
+ *
+ * Copyright (c) 2022 Google LLC.
+ *
  */
 #include <kvm_util.h>
 #include <processor.h>
@@ -230,6 +239,12 @@ struct pmc_accessor pmc_accessors[] = {
 
 #define MAX_EVENT_FILTERS_PER_VM 10
 
+#define EVENT_ALLOW(ev) \
+	{.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_ALLOW}
+
+#define EVENT_DENY(ev) \
+	{.base_event = ev, .nevents = 1, .action = KVM_PMU_EVENT_DENY}
+
 #define INVALID_EC	(-1ul)
 uint64_t expected_ec = INVALID_EC;
 uint64_t op_end_addr;
@@ -243,11 +258,13 @@ struct vpmu_vm {
 
 enum test_stage {
 	TEST_STAGE_COUNTER_ACCESS = 1,
+	TEST_STAGE_KVM_EVENT_FILTER,
 };
 
 struct guest_data {
 	enum test_stage test_stage;
 	uint64_t expected_pmcr_n;
+	unsigned long *pmu_filter;
 };
 
 static struct guest_data guest_data;
@@ -329,6 +346,113 @@ static bool pmu_event_is_supported(uint64_t event)
 		GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
 }
 
+
+/*
+ * Extra instructions inserted by the compiler would be difficult to compensate
+ * for, so hand assemble everything between, and including, the PMCR accesses
+ * to start and stop counting. isb instructions are inserted to make sure
+ * pmccntr read after this function returns the exact instructions executed
+ * in the controlled block. Total instrs = isb + nop + 2*loop = 2 + 2*loop.
+ */
+static inline void precise_instrs_loop(int loop, uint32_t pmcr)
+{
+	uint64_t pmcr64 = pmcr;
+
+	asm volatile(
+	"	msr	pmcr_el0, %[pmcr]\n"
+	"	isb\n"
+	"1:	subs	%w[loop], %w[loop], #1\n"
+	"	b.gt	1b\n"
+	"	nop\n"
+	"	msr	pmcr_el0, xzr\n"
+	"	isb\n"
+	: [loop] "+r" (loop)
+	: [pmcr] "r" (pmcr64)
+	: "cc");
+}
+
+/*
+ * Execute a known number of guest instructions. Only even instruction counts
+ * greater than or equal to 4 are supported by the in-line assembly code. The
+ * control register (PMCR_EL0) is initialized with the provided value (allowing
+ * for example for the cycle counter or event counters to be reset). At the end
+ * of the exact instruction loop, zero is written to PMCR_EL0 to disable
+ * counting, allowing the cycle counter or event counters to be read at the
+ * leisure of the calling code.
+ */
+static void execute_precise_instrs(int num, uint32_t pmcr)
+{
+	int loop = (num - 2) / 2;
+
+	GUEST_ASSERT_2(num >= 4 && ((num - 2) % 2 == 0), num, loop);
+	precise_instrs_loop(loop, pmcr);
+}
+
+static void test_instructions_count(int pmc_idx, bool expect_count)
+{
+	int i;
+	struct pmc_accessor *acc;
+	uint64_t cnt;
+	int instrs_count = 100;
+
+	enable_counter(pmc_idx);
+
+	/* Test the event using all the possible way to configure the event */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		acc = &pmc_accessors[i];
+
+		pmu_disable_reset();
+
+		acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
+
+		/* Enable the PMU and execute precisely number of instructions as a workload */
+		execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
+
+		/* If a count is expected, the counter should be increased by 'instrs_count' */
+		cnt = acc->read_cntr(pmc_idx);
+		GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
+				i, expect_count, cnt, instrs_count);
+	}
+
+	disable_counter(pmc_idx);
+}
+
+static void test_cycles_count(bool expect_count)
+{
+	uint64_t cnt;
+
+	pmu_enable();
+	reset_cycle_counter();
+
+	/* Count cycles in EL0 and EL1 */
+	write_pmccfiltr(0);
+	enable_cycle_counter();
+
+	cnt = read_cycle_counter();
+
+	/*
+	 * If a count is expected by the test, the cycle counter should be increased by
+	 * at least 1, as there is at least one instruction between enabling the
+	 * counter and reading the counter.
+	 */
+	GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
+
+	disable_cycle_counter();
+	pmu_disable_reset();
+}
+
+static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
+{
+	switch (event) {
+	case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
+		test_instructions_count(pmc_idx, expect_count);
+		break;
+	case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
+		test_cycles_count(expect_count);
+		break;
+	}
+}
+
 /*
  * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
  * are set or cleared as specified in @set_expected.
@@ -532,12 +656,37 @@ static void guest_counter_access_test(uint64_t expected_pmcr_n)
 	}
 }
 
+static void guest_event_filter_test(unsigned long *pmu_filter)
+{
+	uint64_t event;
+
+	/*
+	 * Check if PMCEIDx_EL0 is advertised as configured by userspace.
+	 * It's possible that even though userspace allowed it, it may not be supported
+	 * by the hardware and could be advertised as 'disabled'. Hence, only validate against
+	 * the events that are advertised.
+	 *
+	 * Furthermore, check if the event is in fact counting if enabled, or vice-versa.
+	 */
+	for (event = 0; event < ARMV8_PMU_MAX_EVENTS - 1; event++) {
+		if (pmu_event_is_supported(event)) {
+			GUEST_ASSERT_1(test_bit(event, pmu_filter), event);
+			test_event_count(event, 0, true);
+		} else {
+			test_event_count(event, 0, false);
+		}
+	}
+}
+
 static void guest_code(void)
 {
 	switch (guest_data.test_stage) {
 	case TEST_STAGE_COUNTER_ACCESS:
 		guest_counter_access_test(guest_data.expected_pmcr_n);
 		break;
+	case TEST_STAGE_KVM_EVENT_FILTER:
+		guest_event_filter_test(guest_data.pmu_filter);
+		break;
 	default:
 		GUEST_ASSERT_1(0, guest_data.test_stage);
 	}
@@ -760,9 +909,115 @@ static void run_counter_access_tests(uint64_t pmcr_n)
 		run_counter_access_error_test(i);
 }
 
+static struct kvm_pmu_event_filter pmu_event_filters[][MAX_EVENT_FILTERS_PER_VM] = {
+	/*
+	 * Each set of events denotes a filter configuration for that VM.
+	 * During VM creation, the filters will be applied in the sequence mentioned here.
+	 */
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+	},
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+		EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_CPU_CYCLES),
+	},
+	{
+		EVENT_DENY(ARMV8_PMUV3_PERFCTR_INST_RETIRED),
+	},
+};
+
+static void run_kvm_event_filter_error_tests(void)
+{
+	int ret;
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct vpmu_vm *vpmu_vm;
+	struct kvm_vcpu_init init;
+	struct kvm_pmu_event_filter pmu_event_filter = EVENT_ALLOW(ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
+	struct kvm_device_attr filter_attr = {
+		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
+		.attr = KVM_ARM_VCPU_PMU_V3_FILTER,
+		.addr = (uint64_t) &pmu_event_filter,
+	};
+
+	/* KVM should not allow configuring filters after the PMU is initialized */
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
+	ret = __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+	TEST_ASSERT(ret == -1 && errno == EBUSY,
+			"Failed to disallow setting an event filter after PMU init");
+	destroy_vpmu_vm(vpmu_vm);
+
+	/* Check for invalid event filter setting */
+	vm = vm_create(1);
+	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
+	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
+	vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
+
+	pmu_event_filter.base_event = UINT16_MAX;
+	pmu_event_filter.nevents = 5;
+	ret = __vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+	TEST_ASSERT(ret == -1 && errno == EINVAL, "Failed check for invalid filter configuration");
+	kvm_vm_free(vm);
+}
+
+static void run_kvm_event_filter_test(void)
+{
+	int i;
+	struct vpmu_vm *vpmu_vm;
+	struct kvm_vm *vm;
+	vm_vaddr_t pmu_filter_gva;
+	size_t pmu_filter_bmap_sz = BITS_TO_LONGS(ARMV8_PMU_MAX_EVENTS) * sizeof(unsigned long);
+
+	guest_data.test_stage = TEST_STAGE_KVM_EVENT_FILTER;
+
+	/* Test for valid filter configurations */
+	for (i = 0; i < ARRAY_SIZE(pmu_event_filters); i++) {
+		vpmu_vm = create_vpmu_vm(guest_code, pmu_event_filters[i]);
+		vm = vpmu_vm->vm;
+
+		pmu_filter_gva = vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_VADDR);
+		memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter_bmap_sz);
+		guest_data.pmu_filter = (unsigned long *) pmu_filter_gva;
+
+		run_vcpu(vpmu_vm->vcpu);
+
+		destroy_vpmu_vm(vpmu_vm);
+	}
+
+	/* Check if KVM is handling the errors correctly */
+	run_kvm_event_filter_error_tests();
+}
+
 static void run_tests(uint64_t pmcr_n)
 {
 	run_counter_access_tests(pmcr_n);
+	run_kvm_event_filter_test();
 }
 
 /*
-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 07/13] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
                   ` (5 preceding siblings ...)
  2023-02-13 18:02 ` [PATCH 06/13] selftests: KVM: aarch64: Add KVM PMU event filter test Raghavendra Rao Ananta
@ 2023-02-13 18:02 ` Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 08/13] selftests: KVM: aarch64: Add vCPU migration test for PMU Raghavendra Rao Ananta
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

KVM doesn't allow the guests to modify the filter types, such as
counting events in Non-secure/Secure EL2, EL3, and so on. Validate
this by force-configuring those bits in the PMXEVTYPER_EL0,
PMEVTYPERn_EL0, and PMCCFILTR_EL0 registers.

The test goes further by trying to create an event that counts
only in EL2 and validating that the counter does not advance.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 85 +++++++++++++++++++
 1 file changed, 85 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 3dfb770b538e9..5c166df245589 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -15,6 +15,10 @@
  * of allowing or denying the events. The guest validates it by
  * checking if it's able to count only the events that are allowed.
  *
+ * 3. KVM doesn't allow the guest to count the events attributed with
+ * higher exception levels (EL2, EL3). Verify this functionality by
+ * configuring and trying to count the events for EL2 in the guest.
+ *
  * Copyright (c) 2022 Google LLC.
  *
  */
@@ -23,6 +27,7 @@
 #include <test_util.h>
 #include <vgic.h>
 #include <asm/perf_event.h>
+#include <linux/arm-smccc.h>
 #include <linux/bitfield.h>
 #include <linux/bitmap.h>
 
@@ -259,6 +264,7 @@ struct vpmu_vm {
 enum test_stage {
 	TEST_STAGE_COUNTER_ACCESS = 1,
 	TEST_STAGE_KVM_EVENT_FILTER,
+	TEST_STAGE_KVM_EVTYPE_FILTER,
 };
 
 struct guest_data {
@@ -678,6 +684,70 @@ static void guest_event_filter_test(unsigned long *pmu_filter)
 	}
 }
 
+static void guest_evtype_filter_test(void)
+{
+	int i;
+	struct pmc_accessor *acc;
+	uint64_t typer, cnt;
+	struct arm_smccc_res res;
+
+	pmu_enable();
+
+	/*
+	 * KVM blocks the guests from creating events for counting in Secure/Non-Secure Hyp (EL2),
+	 * Monitor (EL3), and Multithreading configuration. It applies the mask
+	 * ARMV8_PMU_EVTYPE_MASK against guest accesses to PMXEVTYPER_EL0, PMEVTYPERn_EL0,
+	 * and PMCCFILTR_EL0 registers to prevent this. Check if KVM honors this using all possible
+	 * ways to configure the EVTYPER.
+	 */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		acc = &pmc_accessors[i];
+
+		/* Set all filter bits (31-24), readback, and check against the mask */
+		acc->write_typer(0, 0xff000000);
+		typer = acc->read_typer(0);
+
+		GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
+				typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);
+
+		/*
+		 * Regardless of ARMV8_PMU_EVTYPE_MASK, KVM sets perf attr.exclude_hv
+		 * to not count NS-EL2 events. Verify this functionality by configuring
+		 * a NS-EL2 event, for which the count shouldn't increment.
+		 */
+		typer = ARMV8_PMUV3_PERFCTR_INST_RETIRED;
+		typer |= ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
+		acc->write_typer(0, typer);
+		acc->write_cntr(0, 0);
+		enable_counter(0);
+
+		/* Issue a hypercall to enter EL2 and return */
+		memset(&res, 0, sizeof(res));
+		smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
+
+		cnt = acc->read_cntr(0);
+		GUEST_ASSERT_3(cnt == 0, cnt, typer, i);
+	}
+
+	/* Check the same sequence for the Cycle counter */
+	write_pmccfiltr(0xff000000);
+	typer = read_pmccfiltr();
+	GUEST_ASSERT_2((typer | ARMV8_PMU_EVTYPE_EVENT) == ARMV8_PMU_EVTYPE_MASK,
+				typer | ARMV8_PMU_EVTYPE_EVENT, ARMV8_PMU_EVTYPE_MASK);
+
+	typer = ARMV8_PMU_INCLUDE_EL2 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0;
+	write_pmccfiltr(typer);
+	reset_cycle_counter();
+	enable_cycle_counter();
+
+	/* Issue a hypercall to enter EL2 and return */
+	memset(&res, 0, sizeof(res));
+	smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
+
+	cnt = read_cycle_counter();
+	GUEST_ASSERT_2(cnt == 0, cnt, typer);
+}
+
 static void guest_code(void)
 {
 	switch (guest_data.test_stage) {
@@ -687,6 +757,9 @@ static void guest_code(void)
 	case TEST_STAGE_KVM_EVENT_FILTER:
 		guest_event_filter_test(guest_data.pmu_filter);
 		break;
+	case TEST_STAGE_KVM_EVTYPE_FILTER:
+		guest_evtype_filter_test();
+		break;
 	default:
 		GUEST_ASSERT_1(0, guest_data.test_stage);
 	}
@@ -1014,10 +1087,22 @@ static void run_kvm_event_filter_test(void)
 	run_kvm_event_filter_error_tests();
 }
 
+static void run_kvm_evtype_filter_test(void)
+{
+	struct vpmu_vm *vpmu_vm;
+
+	guest_data.test_stage = TEST_STAGE_KVM_EVTYPE_FILTER;
+
+	vpmu_vm = create_vpmu_vm(guest_code, NULL);
+	run_vcpu(vpmu_vm->vcpu);
+	destroy_vpmu_vm(vpmu_vm);
+}
+
 static void run_tests(uint64_t pmcr_n)
 {
 	run_counter_access_tests(pmcr_n);
 	run_kvm_event_filter_test();
+	run_kvm_evtype_filter_test();
 }
 
 /*
-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 08/13] selftests: KVM: aarch64: Add vCPU migration test for PMU
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
                   ` (6 preceding siblings ...)
  2023-02-13 18:02 ` [PATCH 07/13] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test Raghavendra Rao Ananta
@ 2023-02-13 18:02 ` Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 09/13] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality Raghavendra Rao Ananta
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Implement a stress test for KVM by frequently force-migrating the
vCPU to random pCPUs in the system. This validates KVM's save/restore
of the vPMU context and the starting/stopping of PMU counters as
necessary.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 195 +++++++++++++++++-
 1 file changed, 193 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 5c166df245589..0c9d801f4e602 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -19,9 +19,15 @@
  * higher exception levels (EL2, EL3). Verify this functionality by
  * configuring and trying to count the events for EL2 in the guest.
  *
+ * 4. Since the PMU registers are per-cpu, stress KVM by frequently
+ * migrating the guest vCPU to random pCPUs in the system, and check
+ * if the vPMU is still behaving as expected.
+ *
  * Copyright (c) 2022 Google LLC.
  *
  */
+#define _GNU_SOURCE
+
 #include <kvm_util.h>
 #include <processor.h>
 #include <test_util.h>
@@ -30,6 +36,11 @@
 #include <linux/arm-smccc.h>
 #include <linux/bitfield.h>
 #include <linux/bitmap.h>
+#include <stdlib.h>
+#include <pthread.h>
+#include <sys/sysinfo.h>
+
+#include "delay.h"
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
@@ -37,6 +48,8 @@
 /* The max number of event numbers that's supported */
 #define ARMV8_PMU_MAX_EVENTS		64
 
+#define msecs_to_usecs(msec)		((msec) * 1000LL)
+
 /*
  * The macros and functions below for reading/writing PMEV{CNTR,TYPER}<n>_EL0
  * were basically copied from arch/arm64/kernel/perf_event.c.
@@ -265,6 +278,7 @@ enum test_stage {
 	TEST_STAGE_COUNTER_ACCESS = 1,
 	TEST_STAGE_KVM_EVENT_FILTER,
 	TEST_STAGE_KVM_EVTYPE_FILTER,
+	TEST_STAGE_VCPU_MIGRATION,
 };
 
 struct guest_data {
@@ -275,6 +289,19 @@ struct guest_data {
 
 static struct guest_data guest_data;
 
+#define VCPU_MIGRATIONS_TEST_ITERS_DEF		1000
+#define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS	2
+
+struct test_args {
+	int vcpu_migration_test_iter;
+	int vcpu_migration_test_migrate_freq_ms;
+};
+
+static struct test_args test_args = {
+	.vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF,
+	.vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS,
+};
+
 static void guest_sync_handler(struct ex_regs *regs)
 {
 	uint64_t esr, ec;
@@ -352,7 +379,6 @@ static bool pmu_event_is_supported(uint64_t event)
 		GUEST_ASSERT_3(!(_tval & mask), _tval, mask, set_expected);\
 }
 
-
 /*
  * Extra instructions inserted by the compiler would be difficult to compensate
  * for, so hand assemble everything between, and including, the PMCR accesses
@@ -459,6 +485,13 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 	}
 }
 
+static void test_basic_pmu_functionality(void)
+{
+	/* Test events on generic and cycle counters */
+	test_instructions_count(0, true);
+	test_cycles_count(true);
+}
+
 /*
  * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers
  * are set or cleared as specified in @set_expected.
@@ -748,6 +781,16 @@ static void guest_evtype_filter_test(void)
 	GUEST_ASSERT_2(cnt == 0, cnt, typer);
 }
 
+static void guest_vcpu_migration_test(void)
+{
+	/*
+	 * While the userspace continuously migrates this vCPU to random pCPUs,
+	 * run basic PMU functionalities and verify the results.
+	 */
+	while (test_args.vcpu_migration_test_iter--)
+		test_basic_pmu_functionality();
+}
+
 static void guest_code(void)
 {
 	switch (guest_data.test_stage) {
@@ -760,6 +803,9 @@ static void guest_code(void)
 	case TEST_STAGE_KVM_EVTYPE_FILTER:
 		guest_evtype_filter_test();
 		break;
+	case TEST_STAGE_VCPU_MIGRATION:
+		guest_vcpu_migration_test();
+		break;
 	default:
 		GUEST_ASSERT_1(0, guest_data.test_stage);
 	}
@@ -837,6 +883,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 
 	vpmu_vm->vm = vm = vm_create(1);
 	vm_init_descriptor_tables(vm);
+
 	/* Catch exceptions for easier debugging */
 	for (ec = 0; ec < ESR_EC_NUM; ec++) {
 		vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ec,
@@ -881,6 +928,8 @@ static void run_vcpu(struct kvm_vcpu *vcpu)
 	struct ucall uc;
 
 	sync_global_to_guest(vcpu->vm, guest_data);
+	sync_global_to_guest(vcpu->vm, test_args);
+
 	vcpu_run(vcpu);
 	switch (get_ucall(vcpu, &uc)) {
 	case UCALL_ABORT:
@@ -1098,11 +1147,112 @@ static void run_kvm_evtype_filter_test(void)
 	destroy_vpmu_vm(vpmu_vm);
 }
 
+struct vcpu_migrate_data {
+	struct vpmu_vm *vpmu_vm;
+	pthread_t *pt_vcpu;
+	bool vcpu_done;
+};
+
+static void *run_vcpus_migrate_test_func(void *arg)
+{
+	struct vcpu_migrate_data *migrate_data = arg;
+	struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
+
+	run_vcpu(vpmu_vm->vcpu);
+	migrate_data->vcpu_done = true;
+
+	return NULL;
+}
+
+static uint32_t get_pcpu(void)
+{
+	uint32_t pcpu;
+	unsigned int nproc_conf;
+	cpu_set_t online_cpuset;
+
+	nproc_conf = get_nprocs_conf();
+	sched_getaffinity(0, sizeof(cpu_set_t), &online_cpuset);
+
+	/* Randomly find an available pCPU to place the vCPU on */
+	do {
+		pcpu = rand() % nproc_conf;
+	} while (!CPU_ISSET(pcpu, &online_cpuset));
+
+	return pcpu;
+}
+
+static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
+{
+	int ret;
+	cpu_set_t cpuset;
+	uint32_t new_pcpu = get_pcpu();
+
+	CPU_ZERO(&cpuset);
+	CPU_SET(new_pcpu, &cpuset);
+
+	pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu);
+
+	ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset);
+
+	/* Allow the error where the vCPU thread is already finished */
+	TEST_ASSERT(ret == 0 || ret == ESRCH,
+		    "Failed to migrate the vCPU to pCPU: %u; ret: %d\n", new_pcpu, ret);
+
+	return ret;
+}
+
+static void *vcpus_migrate_func(void *arg)
+{
+	struct vcpu_migrate_data *migrate_data = arg;
+
+	while (!migrate_data->vcpu_done) {
+		usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms));
+		migrate_vcpu(migrate_data);
+	}
+
+	return NULL;
+}
+
+static void run_vcpu_migration_test(uint64_t pmcr_n)
+{
+	int ret;
+	struct vpmu_vm *vpmu_vm;
+	pthread_t pt_vcpu, pt_sched;
+	struct vcpu_migrate_data migrate_data = {
+		.pt_vcpu = &pt_vcpu,
+		.vcpu_done = false,
+	};
+
+	__TEST_REQUIRE(get_nprocs() >= 2, "At least two pCPUs needed for vCPU migration test");
+
+	guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
+	guest_data.expected_pmcr_n = pmcr_n;
+
+	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(guest_code, NULL);
+
+	/* Initialize random number generation for migrating vCPUs to random pCPUs */
+	srand(time(NULL));
+
+	/* Spawn a vCPU thread */
+	ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data);
+	TEST_ASSERT(!ret, "Failed to create the vCPU thread");
+
+	/* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */
+	ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data);
+	TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs");
+
+	pthread_join(pt_sched, NULL);
+	pthread_join(pt_vcpu, NULL);
+
+	destroy_vpmu_vm(vpmu_vm);
+}
+
 static void run_tests(uint64_t pmcr_n)
 {
 	run_counter_access_tests(pmcr_n);
 	run_kvm_event_filter_test();
 	run_kvm_evtype_filter_test();
+	run_vcpu_migration_test(pmcr_n);
 }
 
 /*
@@ -1121,12 +1271,53 @@ static uint64_t get_pmcr_n_limit(void)
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
 }
 
-int main(void)
+static void print_help(char *name)
+{
+	pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n",
+		name);
+	pr_info("\t-i: Number of iterations of vCPU migrations test (default: %u)\n",
+		VCPU_MIGRATIONS_TEST_ITERS_DEF);
+	pr_info("\t-m: Frequency (in ms) of vCPUs to migrate to different pCPU. (default: %u)\n",
+		VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS);
+	pr_info("\t-h: print this help screen\n");
+}
+
+static bool parse_args(int argc, char *argv[])
+{
+	int opt;
+
+	while ((opt = getopt(argc, argv, "hi:m:")) != -1) {
+		switch (opt) {
+		case 'i':
+			test_args.vcpu_migration_test_iter =
+				atoi_positive("Nr vCPU migration iterations", optarg);
+			break;
+		case 'm':
+			test_args.vcpu_migration_test_migrate_freq_ms =
+				atoi_positive("vCPU migration frequency", optarg);
+			break;
+		case 'h':
+		default:
+			goto err;
+		}
+	}
+
+	return true;
+
+err:
+	print_help(argv[0]);
+	return false;
+}
+
+int main(int argc, char *argv[])
 {
 	uint64_t pmcr_n;
 
 	TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
 
+	if (!parse_args(argc, argv))
+		exit(KSFT_SKIP);
+
 	pmcr_n = get_pmcr_n_limit();
 	run_tests(pmcr_n);
 
-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 09/13] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
                   ` (7 preceding siblings ...)
  2023-02-13 18:02 ` [PATCH 08/13] selftests: KVM: aarch64: Add vCPU migration test for PMU Raghavendra Rao Ananta
@ 2023-02-13 18:02 ` Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 10/13] selftests: KVM: aarch64: Test chained events for PMU Raghavendra Rao Ananta
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Extend the vCPU migration test to also validate the vPMU's
functionality when set up for overflow conditions.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 223 ++++++++++++++++--
 1 file changed, 198 insertions(+), 25 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 0c9d801f4e602..066dc17fa3906 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -21,7 +21,9 @@
  *
  * 4. Since the PMU registers are per-cpu, stress KVM by frequently
  * migrating the guest vCPU to random pCPUs in the system, and check
- * if the vPMU is still behaving as expected.
+ * if the vPMU is still behaving as expected. The sub-tests include
+ * testing basic functionalities such as basic counters behavior,
+ * overflow, and overflow interrupts.
  *
  * Copyright (c) 2022 Google LLC.
  *
@@ -41,13 +43,27 @@
 #include <sys/sysinfo.h>
 
 #include "delay.h"
+#include "gic.h"
+#include "spinlock.h"
 
 /* The max number of the PMU event counters (excluding the cycle counter) */
 #define ARMV8_PMU_MAX_GENERAL_COUNTERS	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+/* The cycle counter bit position that's common among the PMU registers */
+#define ARMV8_PMU_CYCLE_COUNTER_IDX	31
+
 /* The max number of event numbers that's supported */
 #define ARMV8_PMU_MAX_EVENTS		64
 
+#define PMU_IRQ				23
+
+#define COUNT_TO_OVERFLOW	0xFULL
+#define PRE_OVERFLOW_32		(GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1)
+#define PRE_OVERFLOW_64		(GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1)
+
+#define GICD_BASE_GPA	0x8000000ULL
+#define GICR_BASE_GPA	0x80A0000ULL
+
 #define msecs_to_usecs(msec)		((msec) * 1000LL)
 
 /*
@@ -162,6 +178,17 @@ static inline void write_sel_evtyper(int sel, unsigned long val)
 	isb();
 }
 
+static inline void write_pmovsclr(unsigned long val)
+{
+	write_sysreg(val, pmovsclr_el0);
+	isb();
+}
+
+static unsigned long read_pmovsclr(void)
+{
+	return read_sysreg(pmovsclr_el0);
+}
+
 static inline void enable_counter(int idx)
 {
 	uint64_t v = read_sysreg(pmcntenset_el0);
@@ -178,11 +205,33 @@ static inline void disable_counter(int idx)
 	isb();
 }
 
+static inline void enable_irq(int idx)
+{
+	uint64_t v = read_sysreg(pmintenset_el1);
+
+	write_sysreg(BIT(idx) | v, pmintenset_el1);
+	isb();
+}
+
+static inline void disable_irq(int idx)
+{
+	uint64_t v = BIT(idx);
+
+	write_sysreg(v, pmintenclr_el1);
+	isb();
+}
+
 static inline uint64_t read_cycle_counter(void)
 {
 	return read_sysreg(pmccntr_el0);
 }
 
+static inline void write_cycle_counter(uint64_t v)
+{
+	write_sysreg(v, pmccntr_el0);
+	isb();
+}
+
 static inline void reset_cycle_counter(void)
 {
 	uint64_t v = read_sysreg(pmcr_el0);
@@ -289,6 +338,15 @@ struct guest_data {
 
 static struct guest_data guest_data;
 
+/* Data to communicate among guest threads */
+struct guest_irq_data {
+	uint32_t pmc_idx_bmap;
+	uint32_t irq_received_bmap;
+	struct spinlock lock;
+};
+
+static struct guest_irq_data guest_irq_data;
+
 #define VCPU_MIGRATIONS_TEST_ITERS_DEF		1000
 #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS	2
 
@@ -322,6 +380,79 @@ static void guest_sync_handler(struct ex_regs *regs)
 	expected_ec = INVALID_EC;
 }
 
+static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_bmap)
+{
+	/*
+	 * Fail if there's an interrupt from unexpected PMCs.
+	 * All the expected events' IRQs may not arrive at the same time.
+	 * Hence, check if the interrupt is valid only if it's expected.
+	 */
+	if (pmovsclr & BIT(pmc_idx)) {
+		GUEST_ASSERT_3(pmc_idx_bmap & BIT(pmc_idx), pmc_idx, pmovsclr, pmc_idx_bmap);
+		write_pmovsclr(BIT(pmc_idx));
+	}
+}
+
+static void guest_irq_handler(struct ex_regs *regs)
+{
+	uint32_t pmc_idx_bmap;
+	uint64_t i, pmcr_n = get_pmcr_n();
+	uint32_t pmovsclr = read_pmovsclr();
+	unsigned int intid = gic_get_and_ack_irq();
+
+	/* No other IRQ apart from the PMU IRQ is expected */
+	GUEST_ASSERT_1(intid == PMU_IRQ, intid);
+
+	spin_lock(&guest_irq_data.lock);
+	pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
+
+	for (i = 0; i < pmcr_n; i++)
+		guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
+	guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);
+
+	/* Mark IRQ as received for the corresponding PMCs */
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
+	spin_unlock(&guest_irq_data.lock);
+
+	gic_set_eoi(intid);
+}
+
+static int pmu_irq_received(int pmc_idx)
+{
+	bool irq_received;
+
+	spin_lock(&guest_irq_data.lock);
+	irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	return irq_received;
+}
+
+static void pmu_irq_init(int pmc_idx)
+{
+	write_pmovsclr(BIT(pmc_idx));
+
+	spin_lock(&guest_irq_data.lock);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	enable_irq(pmc_idx);
+}
+
+static void pmu_irq_exit(int pmc_idx)
+{
+	write_pmovsclr(BIT(pmc_idx));
+
+	spin_lock(&guest_irq_data.lock);
+	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&guest_irq_data.lock);
+
+	disable_irq(pmc_idx);
+}
+
 /*
  * Run the given operation that should trigger an exception with the
  * given exception class. The exception handler (guest_sync_handler)
@@ -420,12 +551,20 @@ static void execute_precise_instrs(int num, uint32_t pmcr)
 	precise_instrs_loop(loop, pmcr);
 }
 
-static void test_instructions_count(int pmc_idx, bool expect_count)
+static void test_instructions_count(int pmc_idx, bool expect_count, bool test_overflow)
 {
 	int i;
 	struct pmc_accessor *acc;
-	uint64_t cnt;
-	int instrs_count = 100;
+	uint64_t cntr_val = 0;
+	int instrs_count = 500;
+
+	if (test_overflow) {
+		/* Overflow scenarios can only be tested when a count is expected */
+		GUEST_ASSERT_1(expect_count, pmc_idx);
+
+		cntr_val = PRE_OVERFLOW_32;
+		pmu_irq_init(pmc_idx);
+	}
 
 	enable_counter(pmc_idx);
 
@@ -433,41 +572,68 @@ static void test_instructions_count(int pmc_idx, bool expect_count)
 	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
 		acc = &pmc_accessors[i];
 
-		pmu_disable_reset();
-
+		acc->write_cntr(pmc_idx, cntr_val);
 		acc->write_typer(pmc_idx, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
 
-		/* Enable the PMU and execute precisely number of instructions as a workload */
-		execute_precise_instrs(instrs_count, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
+		/*
+		 * Enable the PMU and execute a precise number of instructions as a workload.
+		 * Since execute_precise_instrs() disables the PMU at the end, 'instrs_count'
+		 * should have enough instructions to raise an IRQ.
+		 */
+		execute_precise_instrs(instrs_count, ARMV8_PMU_PMCR_E);
 
-		/* If a count is expected, the counter should be increased by 'instrs_count' */
-		cnt = acc->read_cntr(pmc_idx);
-		GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
-				i, expect_count, cnt, instrs_count);
+		/*
+		 * If an overflow is expected, only check that the overflow IRQ was received.
+		 * Since the overflow interrupt is enabled, its handler would add additional
+		 * instructions and mess up the precise instruction count. Hence, measure the
+		 * instruction count only when the test is not set up for an overflow.
+		 */
+		if (test_overflow) {
+			GUEST_ASSERT_2(pmu_irq_received(pmc_idx), pmc_idx, i);
+		} else {
+			uint64_t cnt = acc->read_cntr(pmc_idx);
+
+			GUEST_ASSERT_4(expect_count == (cnt == instrs_count),
+					pmc_idx, i, cnt, expect_count);
+		}
 	}
 
-	disable_counter(pmc_idx);
+	if (test_overflow)
+		pmu_irq_exit(pmc_idx);
 }
 
-static void test_cycles_count(bool expect_count)
+static void test_cycles_count(bool expect_count, bool test_overflow)
 {
 	uint64_t cnt;
 
-	pmu_enable();
-	reset_cycle_counter();
+	if (test_overflow) {
+		/* Overflow scenarios can only be tested when a count is expected */
+		GUEST_ASSERT(expect_count);
+
+		write_cycle_counter(PRE_OVERFLOW_64);
+		pmu_irq_init(ARMV8_PMU_CYCLE_COUNTER_IDX);
+	} else {
+		reset_cycle_counter();
+	}
 
 	/* Count cycles in EL0 and EL1 */
 	write_pmccfiltr(0);
 	enable_cycle_counter();
 
+	/* Enable the PMU and execute a precise number of instructions as a workload */
+	execute_precise_instrs(500, read_sysreg(pmcr_el0) | ARMV8_PMU_PMCR_E);
 	cnt = read_cycle_counter();
 
 	/*
 	 * If a count is expected by the test, the cycle counter should be increased by
-	 * at least 1, as there is at least one instruction between enabling the
+	 * at least 1, as there are a number of instructions between enabling the
 	 * counter and reading the counter.
 	 */
 	GUEST_ASSERT_2(expect_count == (cnt > 0), cnt, expect_count);
+	if (test_overflow) {
+		GUEST_ASSERT_2(pmu_irq_received(ARMV8_PMU_CYCLE_COUNTER_IDX), cnt, expect_count);
+		pmu_irq_exit(ARMV8_PMU_CYCLE_COUNTER_IDX);
+	}
 
 	disable_cycle_counter();
 	pmu_disable_reset();
@@ -477,19 +643,28 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 {
 	switch (event) {
 	case ARMV8_PMUV3_PERFCTR_INST_RETIRED:
-		test_instructions_count(pmc_idx, expect_count);
+		test_instructions_count(pmc_idx, expect_count, false);
 		break;
 	case ARMV8_PMUV3_PERFCTR_CPU_CYCLES:
-		test_cycles_count(expect_count);
+		test_cycles_count(expect_count, false);
 		break;
 	}
 }
 
 static void test_basic_pmu_functionality(void)
 {
+	local_irq_disable();
+	gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
+	gic_irq_enable(PMU_IRQ);
+	local_irq_enable();
+
 	/* Test events on generic and cycle counters */
-	test_instructions_count(0, true);
-	test_cycles_count(true);
+	test_instructions_count(0, true, false);
+	test_cycles_count(true, false);
+
+	/* Test overflow with interrupts on generic and cycle counters */
+	test_instructions_count(0, true, true);
+	test_cycles_count(true, true);
 }
 
 /*
@@ -813,9 +988,6 @@ static void guest_code(void)
 	GUEST_DONE();
 }
 
-#define GICD_BASE_GPA	0x8000000ULL
-#define GICR_BASE_GPA	0x80A0000ULL
-
 static unsigned long *
 set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_filters)
 {
@@ -866,7 +1038,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
 	uint8_t pmuver, ec;
-	uint64_t dfr0, irq = 23;
+	uint64_t dfr0, irq = PMU_IRQ;
 	struct vpmu_vm *vpmu_vm;
 	struct kvm_device_attr irq_attr = {
 		.group = KVM_ARM_VCPU_PMU_V3_CTRL,
@@ -883,6 +1055,7 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 
 	vpmu_vm->vm = vm = vm_create(1);
 	vm_init_descriptor_tables(vm);
+	vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
 
 	/* Catch exceptions for easier debugging */
 	for (ec = 0; ec < ESR_EC_NUM; ec++) {
-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 10/13] selftests: KVM: aarch64: Test chained events for PMU
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
                   ` (8 preceding siblings ...)
  2023-02-13 18:02 ` [PATCH 09/13] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality Raghavendra Rao Ananta
@ 2023-02-13 18:02 ` Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 11/13] selftests: KVM: aarch64: Add PMU test to chain all the counters Raghavendra Rao Ananta
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Extend the vPMU's vCPU migration test to validate
chained events and their overflow conditions.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
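Note for reviewers: a quick host-side model of the CHAIN behaviour the test
below relies on (illustration only, not part of the patch; the guest code
exercises the real counters):

	#include <stdint.h>
	#include <stdio.h>

	/* An even/odd counter pair acting as one wide counter */
	struct chained_ctr {
		uint32_t lo;	/* even counter: counts the programmed event */
		uint32_t hi;	/* odd counter: programmed with ARMV8_PMUV3_PERFCTR_CHAIN */
	};

	static void chained_tick(struct chained_ctr *c)
	{
		if (++c->lo == 0)	/* the low half wrapped around... */
			c->hi++;	/* ...so the chained half ticks once */
	}

	int main(void)
	{
		/* Seed the low half near overflow and the chained half at 1 */
		struct chained_ctr c = { .lo = UINT32_MAX - 0xF + 1, .hi = 1 };
		int i;

		for (i = 0; i < 500; i++)
			chained_tick(&c);

		/* The low half overflowed exactly once, so hi went from 1 to 2 */
		printf("lo=%u hi=%u\n", (unsigned)c.lo, (unsigned)c.hi);
		return 0;
	}

The test applies the same expectation to the real PMU: every overflow of an
even-numbered counter bumps its odd-numbered CHAIN counter by one.
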
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 76 ++++++++++++++++++-
 1 file changed, 75 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 066dc17fa3906..de725f4339ad5 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -23,7 +23,7 @@
  * migrating the guest vCPU to random pCPUs in the system, and check
  * if the vPMU is still behaving as expected. The sub-tests include
  * testing basic functionalities such as basic counters behavior,
- * overflow, and overflow interrupts.
+ * overflow, overflow interrupts, and chained events.
  *
  * Copyright (c) 2022 Google LLC.
  *
@@ -61,6 +61,8 @@
 #define PRE_OVERFLOW_32		(GENMASK(31, 0) - COUNT_TO_OVERFLOW + 1)
 #define PRE_OVERFLOW_64		(GENMASK(63, 0) - COUNT_TO_OVERFLOW + 1)
 
+#define ALL_SET_64		GENMASK(63, 0)
+
 #define GICD_BASE_GPA	0x8000000ULL
 #define GICR_BASE_GPA	0x80A0000ULL
 
@@ -639,6 +641,75 @@ static void test_cycles_count(bool expect_count, bool test_overflow)
 	pmu_disable_reset();
 }
 
+static void test_chained_count(int pmc_idx)
+{
+	int i, chained_pmc_idx;
+	struct pmc_accessor *acc;
+	uint64_t pmcr_n, cnt, cntr_val;
+
+	/* The test needs at least two PMCs */
+	pmcr_n = get_pmcr_n();
+	GUEST_ASSERT_1(pmcr_n >= 2, pmcr_n);
+
+	/*
+	 * The chained counter's idx is always chained with (pmc_idx + 1).
+	 * pmc_idx should be even as the chained event doesn't count on
+	 * odd numbered counters.
+	 */
+	GUEST_ASSERT_1(pmc_idx % 2 == 0, pmc_idx);
+
+	/*
+	 * The max counter idx that the chained counter can occupy is
+	 * (pmcr_n - 1), while the actual event sits on (pmcr_n - 2).
+	 */
+	chained_pmc_idx = pmc_idx + 1;
+	GUEST_ASSERT(chained_pmc_idx < pmcr_n);
+
+	enable_counter(chained_pmc_idx);
+	pmu_irq_init(chained_pmc_idx);
+
+	/* Configure the chained event using all the possible ways */
+	for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
+		acc = &pmc_accessors[i];
+
+		/* Test if the chained counter increments when the base event overflows */
+
+		cntr_val = 1;
+		acc->write_cntr(chained_pmc_idx, cntr_val);
+		acc->write_typer(chained_pmc_idx, ARMV8_PMUV3_PERFCTR_CHAIN);
+
+		/* Chain the counter with pmc_idx that's configured for an overflow */
+		test_instructions_count(pmc_idx, true, true);
+
+		/*
+		 * test_instructions_count() runs pmc_idx through all ARRAY_SIZE(pmc_accessors)
+		 * combinations, overflowing it once each. Hence, the chained counter
+		 * (chained_pmc_idx) is expected to read cntr_val + ARRAY_SIZE(pmc_accessors).
+		 */
+		cnt = acc->read_cntr(chained_pmc_idx);
+		GUEST_ASSERT_4(cnt == cntr_val + ARRAY_SIZE(pmc_accessors),
+				pmc_idx, i, cnt, cntr_val + ARRAY_SIZE(pmc_accessors));
+
+		/* Test for the overflow of the chained counter itself */
+
+		cntr_val = ALL_SET_64;
+		acc->write_cntr(chained_pmc_idx, cntr_val);
+
+		test_instructions_count(pmc_idx, true, true);
+
+		/*
+		 * At this point, an interrupt should've been fired for the chained
+		 * counter (which validates the overflow bit), and the counter should've
+		 * wrapped around to ARRAY_SIZE(pmc_accessors) - 1.
+		 */
+		cnt = acc->read_cntr(chained_pmc_idx);
+		GUEST_ASSERT_4(cnt == ARRAY_SIZE(pmc_accessors) - 1,
+				pmc_idx, i, cnt, ARRAY_SIZE(pmc_accessors));
+	}
+
+	pmu_irq_exit(chained_pmc_idx);
+}
+
 static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 {
 	switch (event) {
@@ -665,6 +736,9 @@ static void test_basic_pmu_functionality(void)
 	/* Test overflow with interrupts on generic and cycle counters */
 	test_instructions_count(0, true, true);
 	test_cycles_count(true, true);
+
+	/* Test chained events */
+	test_chained_count(0);
 }
 
 /*
-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 11/13] selftests: KVM: aarch64: Add PMU test to chain all the counters
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
                   ` (9 preceding siblings ...)
  2023-02-13 18:02 ` [PATCH 10/13] selftests: KVM: aarch64: Test chained events for PMU Raghavendra Rao Ananta
@ 2023-02-13 18:02 ` Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 12/13] selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation Raghavendra Rao Ananta
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

Extend the vCPU migration test to occupy all the vPMU counters,
by configuring chained events on alternate counter-ids, chaining
each with its predecessor counter, and verifying the extended
behavior.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 60 +++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index de725f4339ad5..fd00acb9391c8 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -710,6 +710,63 @@ static void test_chained_count(int pmc_idx)
 	pmu_irq_exit(chained_pmc_idx);
 }
 
+static void test_chain_all_counters(void)
+{
+	int i;
+	uint64_t cnt, pmcr_n = get_pmcr_n();
+	struct pmc_accessor *acc = &pmc_accessors[0];
+
+	/*
+	 * Test the occupancy of all the event counters, by chaining the
+	 * alternate counters. The test assumes that the host hasn't
+	 * occupied any counters. Hence, if the test fails, it could be
+	 * because all the counters weren't available to the guest or
+	 * there's actually a bug in KVM.
+	 */
+
+	/*
+	 * Configure even numbered counters to count cpu-cycles, and chain
+	 * each of them with its odd numbered counter.
+	 */
+	for (i = 0; i < pmcr_n; i++) {
+		if (i % 2) {
+			acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CHAIN);
+			acc->write_cntr(i, 1);
+		} else {
+			pmu_irq_init(i);
+			acc->write_cntr(i, PRE_OVERFLOW_32);
+			acc->write_typer(i, ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
+		}
+		enable_counter(i);
+	}
+
+	/* Introduce some cycles */
+	execute_precise_instrs(500, ARMV8_PMU_PMCR_E);
+
+	/*
+	 * An overflow interrupt should've arrived for all the even numbered
+	 * counters but none for the odd numbered ones. The odd numbered ones
+	 * should've incremented exactly by 1 (i.e. read back 2, having started at 1).
+	 */
+	for (i = 0; i < pmcr_n; i++) {
+		if (i % 2) {
+			GUEST_ASSERT_1(!pmu_irq_received(i), i);
+
+			cnt = acc->read_cntr(i);
+			GUEST_ASSERT_2(cnt == 2, i, cnt);
+		} else {
+			GUEST_ASSERT_1(pmu_irq_received(i), i);
+		}
+	}
+
+	/* Cleanup the states */
+	for (i = 0; i < pmcr_n; i++) {
+		if (i % 2 == 0)
+			pmu_irq_exit(i);
+		disable_counter(i);
+	}
+}
+
 static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 {
 	switch (event) {
@@ -739,6 +796,9 @@ static void test_basic_pmu_functionality(void)
 
 	/* Test chained events */
 	test_chained_count(0);
+
+	/* Test running chained events on all the implemented counters */
+	test_chain_all_counters();
 }
 
 /*
-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 12/13] selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
                   ` (10 preceding siblings ...)
  2023-02-13 18:02 ` [PATCH 11/13] selftests: KVM: aarch64: Add PMU test to chain all the counters Raghavendra Rao Ananta
@ 2023-02-13 18:02 ` Raghavendra Rao Ananta
  2023-02-13 18:02 ` [PATCH 13/13] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs Raghavendra Rao Ananta
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

The PMU test's create_vpmu_vm() currently creates a VM with only
one vCPU. Extend it to accept the number of vCPUs as an argument
and create a multi-vCPU VM. This will help the upcoming patches
test the vPMU context across multiple vCPUs.

No functional change intended.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
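Note for reviewers: with this change the call sites look roughly like the
sketch below (an outline using the names from the diff, not compilable on
its own):

	int nr_vcpus = 2;
	struct vpmu_vm *vpmu_vm;

	/* Creates the VM and its vCPUs, sets up the vGIC, then inits each vPMU */
	vpmu_vm = create_vpmu_vm(nr_vcpus, guest_code, NULL);

	/* Existing single-vCPU users pass 1 and switch ->vcpu to ->vcpus[0] */
	run_vcpu(vpmu_vm->vcpus[0]);

	destroy_vpmu_vm(vpmu_vm);
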
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 82 +++++++++++--------
 1 file changed, 49 insertions(+), 33 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index fd00acb9391c8..239fc7e06b3b9 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -320,7 +320,8 @@ uint64_t op_end_addr;
 
 struct vpmu_vm {
 	struct kvm_vm *vm;
-	struct kvm_vcpu *vcpu;
+	int nr_vcpus;
+	struct kvm_vcpu **vcpus;
 	int gic_fd;
 	unsigned long *pmu_filter;
 };
@@ -1164,10 +1165,11 @@ set_event_filters(struct kvm_vcpu *vcpu, struct kvm_pmu_event_filter *pmu_event_
 	return pmu_filter;
 }
 
-/* Create a VM that has one vCPU with PMUv3 configured. */
+/* Create a VM with PMUv3 configured. */
 static struct vpmu_vm *
-create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
+create_vpmu_vm(int nr_vcpus, void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 {
+	int i;
 	struct kvm_vm *vm;
 	struct kvm_vcpu *vcpu;
 	struct kvm_vcpu_init init;
@@ -1187,7 +1189,11 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 	vpmu_vm = calloc(1, sizeof(*vpmu_vm));
 	TEST_ASSERT(vpmu_vm, "Failed to allocate vpmu_vm");
 
-	vpmu_vm->vm = vm = vm_create(1);
+	vpmu_vm->vcpus = calloc(nr_vcpus, sizeof(struct kvm_vcpu *));
+	TEST_ASSERT(vpmu_vm->vcpus, "Failed to allocate kvm_vcpus");
+	vpmu_vm->nr_vcpus = nr_vcpus;
+
+	vpmu_vm->vm = vm = vm_create(nr_vcpus);
 	vm_init_descriptor_tables(vm);
 	vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, guest_irq_handler);
 
@@ -1197,26 +1203,35 @@ create_vpmu_vm(void *guest_code, struct kvm_pmu_event_filter *pmu_event_filters)
 					guest_sync_handler);
 	}
 
-	/* Create vCPU with PMUv3 */
+	/* Create vCPUs with PMUv3 */
 	vm_ioctl(vm, KVM_ARM_PREFERRED_TARGET, &init);
 	init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
-	vpmu_vm->vcpu = vcpu = aarch64_vcpu_add(vm, 0, &init, guest_code);
-	vcpu_init_descriptor_tables(vcpu);
-	vpmu_vm->gic_fd = vgic_v3_setup(vm, 1, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
-	/* Make sure that PMUv3 support is indicated in the ID register */
-	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
-	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
-	TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
-		    pmuver >= ID_AA64DFR0_PMUVER_8_0,
-		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
+	for (i = 0; i < nr_vcpus; i++) {
+		vpmu_vm->vcpus[i] = vcpu = aarch64_vcpu_add(vm, i, &init, guest_code);
+		vcpu_init_descriptor_tables(vcpu);
+	}
 
-	/* Initialize vPMU */
-	if (pmu_event_filters)
-		vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+	/* vGIC setup is expected after the vCPUs are created but before the vPMU is initialized */
+	vpmu_vm->gic_fd = vgic_v3_setup(vm, nr_vcpus, 64, GICD_BASE_GPA, GICR_BASE_GPA);
 
-	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
-	vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+	for (i = 0; i < nr_vcpus; i++) {
+		vcpu = vpmu_vm->vcpus[i];
+
+		/* Make sure that PMUv3 support is indicated in the ID register */
+		vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
+		pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_PMUVER), dfr0);
+		TEST_ASSERT(pmuver != ID_AA64DFR0_PMUVER_IMP_DEF &&
+			pmuver >= ID_AA64DFR0_PMUVER_8_0,
+			"Unexpected PMUVER (0x%x) on the vCPU %d with PMUv3", pmuver, i);
+
+		/* Initialize vPMU */
+		if (pmu_event_filters)
+			vpmu_vm->pmu_filter = set_event_filters(vcpu, pmu_event_filters);
+
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
+		vcpu_ioctl(vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
+	}
 
 	return vpmu_vm;
 }
@@ -1227,6 +1242,7 @@ static void destroy_vpmu_vm(struct vpmu_vm *vpmu_vm)
 		bitmap_free(vpmu_vm->pmu_filter);
 	close(vpmu_vm->gic_fd);
 	kvm_vm_free(vpmu_vm->vm);
+	free(vpmu_vm->vcpus);
 	free(vpmu_vm);
 }
 
@@ -1264,8 +1280,8 @@ static void run_counter_access_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Test with pmcr_n %lu\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu = vpmu_vm->vcpu;
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu = vpmu_vm->vcpus[0];
 
 	/* Save the initial sp to restore them later to run the guest again */
 	vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
@@ -1309,8 +1325,8 @@ static void run_counter_access_error_test(uint64_t pmcr_n)
 	guest_data.expected_pmcr_n = pmcr_n;
 
 	pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu = vpmu_vm->vcpu;
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu = vpmu_vm->vcpus[0];
 
 	/* Update the PMCR_EL0.N with @pmcr_n */
 	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
@@ -1396,8 +1412,8 @@ static void run_kvm_event_filter_error_tests(void)
 	};
 
 	/* KVM should not allow configuring filters after the PMU is initialized */
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	ret = __vcpu_ioctl(vpmu_vm->vcpu, KVM_SET_DEVICE_ATTR, &filter_attr);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	ret = __vcpu_ioctl(vpmu_vm->vcpus[0], KVM_SET_DEVICE_ATTR, &filter_attr);
 	TEST_ASSERT(ret == -1 && errno == EBUSY,
 			"Failed to disallow setting an event filter after PMU init");
 	destroy_vpmu_vm(vpmu_vm);
@@ -1427,14 +1443,14 @@ static void run_kvm_event_filter_test(void)
 
 	/* Test for valid filter configurations */
 	for (i = 0; i < ARRAY_SIZE(pmu_event_filters); i++) {
-		vpmu_vm = create_vpmu_vm(guest_code, pmu_event_filters[i]);
+		vpmu_vm = create_vpmu_vm(1, guest_code, pmu_event_filters[i]);
 		vm = vpmu_vm->vm;
 
 		pmu_filter_gva = vm_vaddr_alloc(vm, pmu_filter_bmap_sz, KVM_UTIL_MIN_VADDR);
 		memcpy(addr_gva2hva(vm, pmu_filter_gva), vpmu_vm->pmu_filter, pmu_filter_bmap_sz);
 		guest_data.pmu_filter = (unsigned long *) pmu_filter_gva;
 
-		run_vcpu(vpmu_vm->vcpu);
+		run_vcpu(vpmu_vm->vcpus[0]);
 
 		destroy_vpmu_vm(vpmu_vm);
 	}
@@ -1449,8 +1465,8 @@ static void run_kvm_evtype_filter_test(void)
 
 	guest_data.test_stage = TEST_STAGE_KVM_EVTYPE_FILTER;
 
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	run_vcpu(vpmu_vm->vcpu);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	run_vcpu(vpmu_vm->vcpus[0]);
 	destroy_vpmu_vm(vpmu_vm);
 }
 
@@ -1465,7 +1481,7 @@ static void *run_vcpus_migrate_test_func(void *arg)
 	struct vcpu_migrate_data *migrate_data = arg;
 	struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
 
-	run_vcpu(vpmu_vm->vcpu);
+	run_vcpu(vpmu_vm->vcpus[0]);
 	migrate_data->vcpu_done = true;
 
 	return NULL;
@@ -1535,7 +1551,7 @@ static void run_vcpu_migration_test(uint64_t pmcr_n)
 	guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
 	guest_data.expected_pmcr_n = pmcr_n;
 
-	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(guest_code, NULL);
+	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
 
 	/* Initialize random number generation for migrating vCPUs to random pCPUs */
 	srand(time(NULL));
@@ -1571,8 +1587,8 @@ static uint64_t get_pmcr_n_limit(void)
 	struct vpmu_vm *vpmu_vm;
 	uint64_t pmcr;
 
-	vpmu_vm = create_vpmu_vm(guest_code, NULL);
-	vcpu_get_reg(vpmu_vm->vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
+	vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	vcpu_get_reg(vpmu_vm->vcpus[0], KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
 	destroy_vpmu_vm(vpmu_vm);
 
 	return FIELD_GET(ARMV8_PMU_PMCR_N, pmcr);
-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 13/13] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
                   ` (11 preceding siblings ...)
  2023-02-13 18:02 ` [PATCH 12/13] selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation Raghavendra Rao Ananta
@ 2023-02-13 18:02 ` Raghavendra Rao Ananta
  2023-02-13 23:39 ` [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
  2023-02-14  8:19 ` Oliver Upton
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 18:02 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, Raghavendra Rao Anata,
	linux-arm-kernel, kvmarm, linux-kernel, kvm

To test KVM's handling of multiple vCPU contexts that are frequently
migrated across random pCPUs in the system, extend the test to create
a VM with multiple vCPUs and validate the behavior.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
---
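Note: with the new -n option, an invocation of the test would look
something like the line below (the exact binary path depends on how the
selftests are built; -i and -m keep their existing meanings, and -n is
capped at KVM_MAX_VCPUS, defaulting to 2):

	./vpmu_test -i 1000 -m 2 -n 4
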
 .../testing/selftests/kvm/aarch64/vpmu_test.c | 166 ++++++++++++------
 1 file changed, 114 insertions(+), 52 deletions(-)

diff --git a/tools/testing/selftests/kvm/aarch64/vpmu_test.c b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
index 239fc7e06b3b9..c9d8e5f9a22ab 100644
--- a/tools/testing/selftests/kvm/aarch64/vpmu_test.c
+++ b/tools/testing/selftests/kvm/aarch64/vpmu_test.c
@@ -19,11 +19,12 @@
  * higher exception levels (EL2, EL3). Verify this functionality by
  * configuring and trying to count the events for EL2 in the guest.
  *
- * 4. Since the PMU registers are per-cpu, stress KVM by frequently
- * migrating the guest vCPU to random pCPUs in the system, and check
- * if the vPMU is still behaving as expected. The sub-tests include
- * testing basic functionalities such as basic counters behavior,
- * overflow, overflow interrupts, and chained events.
+ * 4. Since the PMU registers are per-cpu, stress KVM by creating a
+ * multi-vCPU VM, then frequently migrate the guest vCPUs to random
+ * pCPUs in the system, and check if the vPMU is still behaving as
+ * expected. The sub-tests include testing basic functionalities such
+ * as basic counters behavior, overflow, overflow interrupts, and
+ * chained events.
  *
  * Copyright (c) 2022 Google LLC.
  *
@@ -348,19 +349,22 @@ struct guest_irq_data {
 	struct spinlock lock;
 };
 
-static struct guest_irq_data guest_irq_data;
+static struct guest_irq_data guest_irq_data[KVM_MAX_VCPUS];
 
 #define VCPU_MIGRATIONS_TEST_ITERS_DEF		1000
 #define VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS	2
+#define VCPU_MIGRATIONS_TEST_NR_VCPUS_DEF	2
 
 struct test_args {
 	int vcpu_migration_test_iter;
 	int vcpu_migration_test_migrate_freq_ms;
+	int vcpu_migration_test_nr_vcpus;
 };
 
 static struct test_args test_args = {
 	.vcpu_migration_test_iter = VCPU_MIGRATIONS_TEST_ITERS_DEF,
 	.vcpu_migration_test_migrate_freq_ms = VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS,
+	.vcpu_migration_test_nr_vcpus = VCPU_MIGRATIONS_TEST_NR_VCPUS_DEF,
 };
 
 static void guest_sync_handler(struct ex_regs *regs)
@@ -396,26 +400,34 @@ static void guest_validate_irq(int pmc_idx, uint32_t pmovsclr, uint32_t pmc_idx_
 	}
 }
 
+static struct guest_irq_data *get_irq_data(void)
+{
+	uint32_t cpu = guest_get_vcpuid();
+
+	return &guest_irq_data[cpu];
+}
+
 static void guest_irq_handler(struct ex_regs *regs)
 {
 	uint32_t pmc_idx_bmap;
 	uint64_t i, pmcr_n = get_pmcr_n();
 	uint32_t pmovsclr = read_pmovsclr();
 	unsigned int intid = gic_get_and_ack_irq();
+	struct guest_irq_data *irq_data = get_irq_data();
 
 	/* No other IRQ apart from the PMU IRQ is expected */
 	GUEST_ASSERT_1(intid == PMU_IRQ, intid);
 
-	spin_lock(&guest_irq_data.lock);
-	pmc_idx_bmap = READ_ONCE(guest_irq_data.pmc_idx_bmap);
+	spin_lock(&irq_data->lock);
+	pmc_idx_bmap = READ_ONCE(irq_data->pmc_idx_bmap);
 
 	for (i = 0; i < pmcr_n; i++)
 		guest_validate_irq(i, pmovsclr, pmc_idx_bmap);
 	guest_validate_irq(ARMV8_PMU_CYCLE_COUNTER_IDX, pmovsclr, pmc_idx_bmap);
 
 	/* Mark IRQ as received for the corresponding PMCs */
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, pmovsclr);
-	spin_unlock(&guest_irq_data.lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, pmovsclr);
+	spin_unlock(&irq_data->lock);
 
 	gic_set_eoi(intid);
 }
@@ -423,35 +435,40 @@ static void guest_irq_handler(struct ex_regs *regs)
 static int pmu_irq_received(int pmc_idx)
 {
 	bool irq_received;
+	struct guest_irq_data *irq_data = get_irq_data();
 
-	spin_lock(&guest_irq_data.lock);
-	irq_received = READ_ONCE(guest_irq_data.irq_received_bmap) & BIT(pmc_idx);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	irq_received = READ_ONCE(irq_data->irq_received_bmap) & BIT(pmc_idx);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);
 
 	return irq_received;
 }
 
 static void pmu_irq_init(int pmc_idx)
 {
+	struct guest_irq_data *irq_data = get_irq_data();
+
 	write_pmovsclr(BIT(pmc_idx));
 
-	spin_lock(&guest_irq_data.lock);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap | BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap | BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);
 
 	enable_irq(pmc_idx);
 }
 
 static void pmu_irq_exit(int pmc_idx)
 {
+	struct guest_irq_data *irq_data = get_irq_data();
+
 	write_pmovsclr(BIT(pmc_idx));
 
-	spin_lock(&guest_irq_data.lock);
-	WRITE_ONCE(guest_irq_data.irq_received_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	WRITE_ONCE(guest_irq_data.pmc_idx_bmap, guest_irq_data.pmc_idx_bmap & ~BIT(pmc_idx));
-	spin_unlock(&guest_irq_data.lock);
+	spin_lock(&irq_data->lock);
+	WRITE_ONCE(irq_data->irq_received_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	WRITE_ONCE(irq_data->pmc_idx_bmap, irq_data->pmc_idx_bmap & ~BIT(pmc_idx));
+	spin_unlock(&irq_data->lock);
 
 	disable_irq(pmc_idx);
 }
@@ -783,7 +800,8 @@ static void test_event_count(uint64_t event, int pmc_idx, bool expect_count)
 static void test_basic_pmu_functionality(void)
 {
 	local_irq_disable();
-	gic_init(GIC_V3, 1, (void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
+	gic_init(GIC_V3, test_args.vcpu_migration_test_nr_vcpus,
+			(void *)GICD_BASE_GPA, (void *)GICR_BASE_GPA);
 	gic_irq_enable(PMU_IRQ);
 	local_irq_enable();
 
@@ -1093,11 +1111,13 @@ static void guest_evtype_filter_test(void)
 
 static void guest_vcpu_migration_test(void)
 {
+	int iter = test_args.vcpu_migration_test_iter;
+
 	/*
 	 * While the userspace continuously migrates this vCPU to random pCPUs,
 	 * run basic PMU functionalities and verify the results.
 	 */
-	while (test_args.vcpu_migration_test_iter--)
+	while (iter--)
 		test_basic_pmu_functionality();
 }
 
@@ -1472,17 +1492,23 @@ static void run_kvm_evtype_filter_test(void)
 
 struct vcpu_migrate_data {
 	struct vpmu_vm *vpmu_vm;
-	pthread_t *pt_vcpu;
-	bool vcpu_done;
+	pthread_t *pt_vcpus;
+	unsigned long *vcpu_done_map;
+	pthread_mutex_t vcpu_done_map_lock;
 };
 
+struct vcpu_migrate_data migrate_data;
+
 static void *run_vcpus_migrate_test_func(void *arg)
 {
-	struct vcpu_migrate_data *migrate_data = arg;
-	struct vpmu_vm *vpmu_vm = migrate_data->vpmu_vm;
+	struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm;
+	unsigned int vcpu_idx = (unsigned long)arg;
 
-	run_vcpu(vpmu_vm->vcpus[0]);
-	migrate_data->vcpu_done = true;
+	run_vcpu(vpmu_vm->vcpus[vcpu_idx]);
+
+	pthread_mutex_lock(&migrate_data.vcpu_done_map_lock);
+	__set_bit(vcpu_idx, migrate_data.vcpu_done_map);
+	pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock);
 
 	return NULL;
 }
@@ -1504,7 +1530,7 @@ static uint32_t get_pcpu(void)
 	return pcpu;
 }
 
-static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
+static int migrate_vcpu(int vcpu_idx)
 {
 	int ret;
 	cpu_set_t cpuset;
@@ -1513,9 +1539,9 @@ static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
 	CPU_ZERO(&cpuset);
 	CPU_SET(new_pcpu, &cpuset);
 
-	pr_debug("Migrating vCPU to pCPU: %u\n", new_pcpu);
+	pr_debug("Migrating vCPU %d to pCPU: %u\n", vcpu_idx, new_pcpu);
 
-	ret = pthread_setaffinity_np(*migrate_data->pt_vcpu, sizeof(cpuset), &cpuset);
+	ret = pthread_setaffinity_np(migrate_data.pt_vcpus[vcpu_idx], sizeof(cpuset), &cpuset);
 
 	/* Allow the error where the vCPU thread is already finished */
 	TEST_ASSERT(ret == 0 || ret == ESRCH,
@@ -1526,48 +1552,74 @@ static int migrate_vcpu(struct vcpu_migrate_data *migrate_data)
 
 static void *vcpus_migrate_func(void *arg)
 {
-	struct vcpu_migrate_data *migrate_data = arg;
+	struct vpmu_vm *vpmu_vm = migrate_data.vpmu_vm;
+	int i, n_done, nr_vcpus = vpmu_vm->nr_vcpus;
+	bool vcpu_done;
 
-	while (!migrate_data->vcpu_done) {
+	do {
 		usleep(msecs_to_usecs(test_args.vcpu_migration_test_migrate_freq_ms));
-		migrate_vcpu(migrate_data);
-	}
+		for (n_done = 0, i = 0; i < nr_vcpus; i++) {
+			pthread_mutex_lock(&migrate_data.vcpu_done_map_lock);
+			vcpu_done = test_bit(i, migrate_data.vcpu_done_map);
+			pthread_mutex_unlock(&migrate_data.vcpu_done_map_lock);
+
+			if (vcpu_done) {
+				n_done++;
+				continue;
+			}
+
+			migrate_vcpu(i);
+		}
+
+	} while (nr_vcpus != n_done);
 
 	return NULL;
 }
 
 static void run_vcpu_migration_test(uint64_t pmcr_n)
 {
-	int ret;
+	int i, nr_vcpus, ret;
 	struct vpmu_vm *vpmu_vm;
-	pthread_t pt_vcpu, pt_sched;
-	struct vcpu_migrate_data migrate_data = {
-		.pt_vcpu = &pt_vcpu,
-		.vcpu_done = false,
-	};
+	pthread_t pt_sched, *pt_vcpus;
 
 	__TEST_REQUIRE(get_nprocs() >= 2, "At least two pCPUs needed for vCPU migration test");
 
 	guest_data.test_stage = TEST_STAGE_VCPU_MIGRATION;
 	guest_data.expected_pmcr_n = pmcr_n;
 
-	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(1, guest_code, NULL);
+	nr_vcpus = test_args.vcpu_migration_test_nr_vcpus;
+
+	migrate_data.vcpu_done_map = bitmap_zalloc(nr_vcpus);
+	TEST_ASSERT(migrate_data.vcpu_done_map, "Failed to create vCPU done bitmap");
+	pthread_mutex_init(&migrate_data.vcpu_done_map_lock, NULL);
+
+	migrate_data.pt_vcpus = pt_vcpus = calloc(nr_vcpus, sizeof(*pt_vcpus));
+	TEST_ASSERT(pt_vcpus, "Failed to create vCPU thread pointers");
+
+	migrate_data.vpmu_vm = vpmu_vm = create_vpmu_vm(nr_vcpus, guest_code, NULL);
 
 	/* Initialize random number generation for migrating vCPUs to random pCPUs */
 	srand(time(NULL));
 
-	/* Spawn a vCPU thread */
-	ret = pthread_create(&pt_vcpu, NULL, run_vcpus_migrate_test_func, &migrate_data);
-	TEST_ASSERT(!ret, "Failed to create the vCPU thread");
+	/* Spawn vCPU threads */
+	for (i = 0; i < nr_vcpus; i++) {
+		ret = pthread_create(&pt_vcpus[i], NULL,
+					run_vcpus_migrate_test_func,  (void *)(unsigned long)i);
+		TEST_ASSERT(!ret, "Failed to create the vCPU thread: %d", i);
+	}
 
 	/* Spawn a scheduler thread to force-migrate vCPUs to various pCPUs */
-	ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, &migrate_data);
+	ret = pthread_create(&pt_sched, NULL, vcpus_migrate_func, NULL);
 	TEST_ASSERT(!ret, "Failed to create the scheduler thread for migrating the vCPUs");
 
 	pthread_join(pt_sched, NULL);
-	pthread_join(pt_vcpu, NULL);
+
+	for (i = 0; i < nr_vcpus; i++)
+		pthread_join(pt_vcpus[i], NULL);
 
 	destroy_vpmu_vm(vpmu_vm);
+	free(pt_vcpus);
+	bitmap_free(migrate_data.vcpu_done_map);
 }
 
 static void run_tests(uint64_t pmcr_n)
@@ -1596,12 +1648,14 @@ static uint64_t get_pmcr_n_limit(void)
 
 static void print_help(char *name)
 {
-	pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]\n",
-		name);
+	pr_info("Usage: %s [-h] [-i vcpu_migration_test_iterations] [-m vcpu_migration_freq_ms]"
+		" [-n vcpu_migration_nr_vcpus]\n", name);
 	pr_info("\t-i: Number of iterations of the vCPU migration test (default: %u)\n",
 		VCPU_MIGRATIONS_TEST_ITERS_DEF);
 	pr_info("\t-m: Frequency (in ms) at which vCPUs are migrated to a different pCPU (default: %u)\n",
 		VCPU_MIGRATIONS_TEST_MIGRATION_FREQ_MS);
+	pr_info("\t-n: Number of vCPUs for the vCPU migration test (default: %u)\n",
+		VCPU_MIGRATIONS_TEST_NR_VCPUS_DEF);
 	pr_info("\t-h: print this help screen\n");
 }
 
@@ -1609,7 +1663,7 @@ static bool parse_args(int argc, char *argv[])
 {
 	int opt;
 
-	while ((opt = getopt(argc, argv, "hi:m:")) != -1) {
+	while ((opt = getopt(argc, argv, "hi:m:n:")) != -1) {
 		switch (opt) {
 		case 'i':
 			test_args.vcpu_migration_test_iter =
@@ -1619,6 +1673,14 @@ static bool parse_args(int argc, char *argv[])
 			test_args.vcpu_migration_test_migrate_freq_ms =
 				atoi_positive("vCPU migration frequency", optarg);
 			break;
+		case 'n':
+			test_args.vcpu_migration_test_nr_vcpus =
+				atoi_positive("Nr vCPUs for vCPU migrations", optarg);
+			if (test_args.vcpu_migration_test_nr_vcpus > KVM_MAX_VCPUS) {
+				pr_info("Max allowed vCPUs: %u\n", KVM_MAX_VCPUS);
+				goto err;
+			}
+			break;
 		case 'h':
 		default:
 			goto err;
-- 
2.39.1.581.gbfd45094c4-goog


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH 00/13] Extend the vPMU selftest
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
                   ` (12 preceding siblings ...)
  2023-02-13 18:02 ` [PATCH 13/13] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs Raghavendra Rao Ananta
@ 2023-02-13 23:39 ` Raghavendra Rao Ananta
  2023-02-14  8:19 ` Oliver Upton
  14 siblings, 0 replies; 16+ messages in thread
From: Raghavendra Rao Ananta @ 2023-02-13 23:39 UTC (permalink / raw)
  To: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose
  Cc: Paolo Bonzini, Jing Zhang, Colton Lewis, linux-arm-kernel,
	kvmarm, linux-kernel, kvm

On Mon, Feb 13, 2023 at 10:02 AM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> Hello,
>
> This vPMU KVM selftest series is an extension to the selftests
> introduced by Reiji Watanabe in his series aims to limit the number
> of PMCs on vCPU from userspace [1].
>
> The idea behind this series is to expand the test coverage to include
> the tests that validates actions from userspace, such as allowing or
> denying certain events via KVM_ARM_VCPU_PMU_V3_FILTER attribute, KVM's
> guarding of the PMU attributes to count EL2/EL3 events, and formal KVM
> behavior that enables PMU emulation. The last part validates the guest
> expectations of the vPMU by setting up a stress test that force-migrates
> multiple vCPUs frequently across random pCPUs in the system, thus
> ensuring KVM's management of vCPU PMU contexts correctly.
>
> Patch-1 renames the test file to be more generic.
>
> Patch-2 refactors the existing tests for plugging-in the upcoming tests
> easily.
>
> Patch-3 and 4 add helper macros and functions respectively to interact
> with the cycle counter.
>
> Patch-5 extends create_vpmu_vm() to accept an array of event filters
> as an argument that are to be applied to the VM.
>
> Patch-6 tests the KVM_ARM_VCPU_PMU_V3_FILTER attribute by scripting
> various combinations of events that are to be allowed or denied to
> the guest and verifying guest's behavior.
>
> Patch-7 adds test to validate KVM's handling of guest requests to count
> events in EL2/EL3.
>
> Patch-8 introduces the vCPU migration stress testing by validating cycle
> counter and general purpose counter's behavior across vCPU migrations.
>
> Patch-9, 10, and 11 expands the tests in patch-8 to validate
> overflow/IRQ functionality, chained events, and occupancy of all the PMU
> counters, respectively.
>
> Patch-12 extends create_vpmu_vm() to create multiple vCPUs for the VM.
>
> Patch-13 expands the stress tests for multiple vCPUs.
>
> The series has been tested on hardwares with PMUv8p1 and PMUvp5.
>
Sorry for the typo (thanks Reiji for pointing it out!). It should be
"PMUv3p1 and
PMUv3p5". And the testing was done on v6.2-rc6 + [1].

Thank you.
Raghavendra
> Thank you.
> Raghavendra
>
> [1]: https://lore.kernel.org/all/20230203040242.1792453-1-reijiw@google.com/
>
>
> Raghavendra Rao Ananta (13):
>   selftests: KVM: aarch64: Rename vpmu_counter_access.c to vpmu_test.c
>   selftests: KVM: aarch64: Refactor the vPMU counter access tests
>   tools: arm64: perf_event: Define Cycle counter enable/overflow bits
>   selftests: KVM: aarch64: Add PMU cycle counter helpers
>   selftests: KVM: aarch64: Consider PMU event filters for VM creation
>   selftests: KVM: aarch64: Add KVM PMU event filter test
>   selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test
>   selftests: KVM: aarch64: Add vCPU migration test for PMU
>   selftests: KVM: aarch64: Test PMU overflow/IRQ functionality
>   selftests: KVM: aarch64: Test chained events for PMU
>   selftests: KVM: aarch64: Add PMU test to chain all the counters
>   selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation
>   selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs
>
>  tools/arch/arm64/include/asm/perf_event.h     |    7 +
>  tools/testing/selftests/kvm/Makefile          |    2 +-
>  .../kvm/aarch64/vpmu_counter_access.c         |  642 -------
>  .../testing/selftests/kvm/aarch64/vpmu_test.c | 1710 +++++++++++++++++
>  4 files changed, 1718 insertions(+), 643 deletions(-)
>  delete mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
>  create mode 100644 tools/testing/selftests/kvm/aarch64/vpmu_test.c
>
> --
> 2.39.1.581.gbfd45094c4-goog
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 00/13] Extend the vPMU selftest
  2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
                   ` (13 preceding siblings ...)
  2023-02-13 23:39 ` [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
@ 2023-02-14  8:19 ` Oliver Upton
  14 siblings, 0 replies; 16+ messages in thread
From: Oliver Upton @ 2023-02-14  8:19 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Oliver Upton, Reiji Watanabe, Marc Zyngier, Ricardo Koller,
	James Morse, Suzuki K Poulose, Paolo Bonzini, Jing Zhang,
	Colton Lewis, linux-arm-kernel, kvmarm, linux-kernel, kvm

Hi Raghavendra,

On Mon, Feb 13, 2023 at 06:02:21PM +0000, Raghavendra Rao Ananta wrote:
> Hello,
> 
> This vPMU KVM selftest series is an extension to the selftests
> introduced by Reiji Watanabe in his series aims to limit the number
> of PMCs on vCPU from userspace [1].

Right off the bat, I'd much prefer it if the patches weren't posted this
way. Building on top of an in flight series requires that reviewers page
in the context from Reiji's selftest patch in another thread then read
what you have.

I imagine this happened organically because you two are developing in
parallel (which is great!), but at this point Reiji's kernel changes are
only tangentially related to the selftest. Given that, is it possible to
split the test + KVM changes into two distinct series that each of you
will own? That way it is possible to get the full picture from one email
thread alone.

> The idea behind this series is to expand the test coverage to include
> the tests that validates actions from userspace, such as allowing or
> denying certain events via KVM_ARM_VCPU_PMU_V3_FILTER attribute, KVM's
> guarding of the PMU attributes to count EL2/EL3 events, and formal KVM
> behavior that enables PMU emulation. The last part validates the guest
> expectations of the vPMU by setting up a stress test that force-migrates
> multiple vCPUs frequently across random pCPUs in the system, thus
> ensuring KVM's management of vCPU PMU contexts correctly.
> 
> Patch-1 renames the test file to be more generic.
> 
> Patch-2 refactors the existing tests for plugging-in the upcoming tests
> easily.

sidenote: if you wind up reposting the complete series these can just be
squashed into the original patch.

> Patch-3 and 4 add helper macros and functions respectively to interact
> with the cycle counter.
> 
> Patch-5 extends create_vpmu_vm() to accept an array of event filters
> as an argument that are to be applied to the VM.
> 
> Patch-6 tests the KVM_ARM_VCPU_PMU_V3_FILTER attribute by scripting
> various combinations of events that are to be allowed or denied to
> the guest and verifying guest's behavior.
> 
> Patch-7 adds test to validate KVM's handling of guest requests to count
> events in EL2/EL3.
> 
> Patch-8 introduces the vCPU migration stress testing by validating cycle
> counter and general purpose counter's behavior across vCPU migrations.
> 
> Patch-9, 10, and 11 expands the tests in patch-8 to validate
> overflow/IRQ functionality, chained events, and occupancy of all the PMU
> counters, respectively.
> 
> Patch-12 extends create_vpmu_vm() to create multiple vCPUs for the VM.
> 
> Patch-13 expands the stress tests for multiple vCPUs.
> 
> The series has been tested on hardwares with PMUv8p1 and PMUvp5.

-- 
Thanks,
Oliver

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2023-02-14  8:19 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-02-13 18:02 [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
2023-02-13 18:02 ` [PATCH 01/13] selftests: KVM: aarch64: Rename vpmu_counter_access.c to vpmu_test.c Raghavendra Rao Ananta
2023-02-13 18:02 ` [PATCH 02/13] selftests: KVM: aarch64: Refactor the vPMU counter access tests Raghavendra Rao Ananta
2023-02-13 18:02 ` [PATCH 03/13] tools: arm64: perf_event: Define Cycle counter enable/overflow bits Raghavendra Rao Ananta
2023-02-13 18:02 ` [PATCH 04/13] selftests: KVM: aarch64: Add PMU cycle counter helpers Raghavendra Rao Ananta
2023-02-13 18:02 ` [PATCH 05/13] selftests: KVM: aarch64: Consider PMU event filters for VM creation Raghavendra Rao Ananta
2023-02-13 18:02 ` [PATCH 06/13] selftests: KVM: aarch64: Add KVM PMU event filter test Raghavendra Rao Ananta
2023-02-13 18:02 ` [PATCH 07/13] selftests: KVM: aarch64: Add KVM EVTYPE filter PMU test Raghavendra Rao Ananta
2023-02-13 18:02 ` [PATCH 08/13] selftests: KVM: aarch64: Add vCPU migration test for PMU Raghavendra Rao Ananta
2023-02-13 18:02 ` [PATCH 09/13] selftests: KVM: aarch64: Test PMU overflow/IRQ functionality Raghavendra Rao Ananta
2023-02-13 18:02 ` [PATCH 10/13] selftests: KVM: aarch64: Test chained events for PMU Raghavendra Rao Ananta
2023-02-13 18:02 ` [PATCH 11/13] selftests: KVM: aarch64: Add PMU test to chain all the counters Raghavendra Rao Ananta
2023-02-13 18:02 ` [PATCH 12/13] selftests: KVM: aarch64: Add multi-vCPU support for vPMU VM creation Raghavendra Rao Ananta
2023-02-13 18:02 ` [PATCH 13/13] selftests: KVM: aarch64: Extend the vCPU migration test to multi-vCPUs Raghavendra Rao Ananta
2023-02-13 23:39 ` [PATCH 00/13] Extend the vPMU selftest Raghavendra Rao Ananta
2023-02-14  8:19 ` Oliver Upton

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).