* [PATCH v2 0/3] KVM: arm64: add more event counters for kvm_stat
From: Yoan Picchi @ 2021-04-20 13:08 UTC (permalink / raw)
  To: maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	linux-arm-kernel, kvmarm
  Cc: catalin.marinas, Yoan Picchi, will

Hi all,

Following the feedback on my first version
(https://lore.kernel.org/linux-arm-kernel/20210319161711.24972-1-yoan.picchi@arm.com/),
I started from scratch and focused only on stage 2 page faults.

For context, as mentioned in the 2019 KVM Forum talk
(https://kvmforum2019.sched.com/event/Tmwf/kvmstat-and-beyond-past-present-and-future-of-performance-monitoring-christian-borntrager-ibm, page 10),
the arm64 side of KVM has only a few event counters for kvm_stat when you
compare it to something like the x86 version.
Those counters are used in kvm_stat by kernel/driver developers to get a
rough idea of the impact of their patches on overall performance.

This patchset introduces 3 counters for kvm_stat. They aim to help a
kernel/driver developer troubleshoot performance issues by showing how
many exits come from stage 2 page faults, and thus whether their changes
added a lot of them. Between the existing mmio_exit_user counter and the
added page-mapping counters, the main causes of stage 2 page faults should
be covered, giving finer granularity when looking for the source of exits.
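
As a usage sketch (not part of the series itself): once applied, the new
counters should show up next to the existing ones under the kvm debugfs
directory, so, assuming a stock kvm_stat and the usual debugfs layout,
something along these lines can be used to watch them (the filter regex
and path below are only illustrative):

	kvm_stat -f 'stage2_abort_exit|page_mapped|irq_exits'
	cat /sys/kernel/debug/kvm/stage2_abort_exit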


Yoan Picchi (3):
  KVM: arm64: Add a stage2 page fault counter for kvm_stat
  KVM: arm64: Add two page mapping counters for kvm_stat
  KVM: arm64: Add irq_exit counter for kvm_stat

 arch/arm64/include/asm/kvm_host.h | 4 ++++
 arch/arm64/kvm/guest.c            | 4 ++++
 arch/arm64/kvm/handle_exit.c      | 1 +
 arch/arm64/kvm/hyp/pgtable.c      | 5 +++++
 arch/arm64/kvm/mmu.c              | 2 ++
 5 files changed, 16 insertions(+)

-- 
2.17.1


* [PATCH v2 1/3] KVM: arm64: Add a stage2 page fault counter for kvm_stat
From: Yoan Picchi @ 2021-04-20 13:08 UTC (permalink / raw)
  To: maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	linux-arm-kernel, kvmarm
  Cc: catalin.marinas, Yoan Picchi, will

This counter counts stage 2 page fault exits. It is meant to measure how
much those page faults influence performance (by comparing this number of
exits to other exit causes).
For now the counter is generic, but more granularity is planned in the
next commits so that one knows how many of the page faults are for memory
allocation or for mmio, for instance. The idea is that someone using this
counter can get a better idea of what is triggering those exits and try
to fix it.

Signed-off-by: Yoan Picchi <yoan.picchi@arm.com>
---
 arch/arm64/include/asm/kvm_host.h | 1 +
 arch/arm64/kvm/guest.c            | 1 +
 arch/arm64/kvm/mmu.c              | 2 ++
 3 files changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3d10e6527..02891ce94 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -561,6 +561,7 @@ struct kvm_vcpu_stat {
 	u64 wfi_exit_stat;
 	u64 mmio_exit_user;
 	u64 mmio_exit_kernel;
+	u64 stage2_abort_exit;
 	u64 exits;
 };
 
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 9bbd30e62..82a4b6275 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -38,6 +38,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	VCPU_STAT("wfi_exit_stat", wfi_exit_stat),
 	VCPU_STAT("mmio_exit_user", mmio_exit_user),
 	VCPU_STAT("mmio_exit_kernel", mmio_exit_kernel),
+	VCPU_STAT("stage2_abort_exit", stage2_abort_exit),
 	VCPU_STAT("exits", exits),
 	VCPU_STAT("halt_poll_success_ns", halt_poll_success_ns),
 	VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 77cb2d28f..c3527ccf6 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -966,6 +966,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
 	is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
 
+
 	/* Synchronous External Abort? */
 	if (kvm_vcpu_abt_issea(vcpu)) {
 		/*
@@ -980,6 +981,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 
 	trace_kvm_guest_fault(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu),
 			      kvm_vcpu_get_hfar(vcpu), fault_ipa);
+	vcpu->stat.stage2_abort_exit++;
 
 	/* Check the stage-2 fault is trans. fault or write fault */
 	if (fault_status != FSC_FAULT && fault_status != FSC_PERM &&
-- 
2.17.1


* [PATCH v2 2/3] KVM: arm64: Add two page mapping counters for kvm_stat
From: Yoan Picchi @ 2021-04-20 13:08 UTC (permalink / raw)
  To: maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	linux-arm-kernel, kvmarm
  Cc: catalin.marinas, Yoan Picchi, will

This patch adds regular_page_mapped and huge_page_mapped.
regular_page_mapped is incremented when a page of the smallest granularity
is mapped, usually a 4k, 16k or 64k page.
huge_page_mapped is incremented when a huge page of any size larger than
the smallest granularity is mapped.
Those counters only count pages mapped for guest data and do not count the
pages/blocks allocated for the page tables themselves, as I don't see a
need to record those.

I can see two use cases for those counters:
	Detecting memory pressure in the host when the guest gets regular
pages instead of huge ones.
	Helping to spot abnormal memory usage, such as recurring
allocations long after the kernel and the first few programs have started.
Together with the previous patch adding stage2_abort_exit, they have the
added benefit of identifying the second main cause of stage 2 page faults
(the other being mmio accesses).

To test this patch I started a guest VM and monitored the page mappings.
By default it only maps huge pages. I then disabled huge pages with:
	echo never > /sys/kernel/mm/transparent_hugepage/enabled
After restarting the VM, it no longer maps any huge pages, only regular
pages.

I can't log into the guest because it doesn't recognize my keyboard. To
work around that I added a command to the init script that needs some
memory:
	cat /dev/zero | head -c 1000m | tail
This takes about 1GiB of memory before finishing.
From memory, it mapped 525 or so huge pages, which is about what I would
expect with 2MB pages (roughly 500 mappings for the 1000MiB of zeroes,
plus the guest kernel and early userspace).

I also checked the relation between stage 2 exits, mmio exits and
mappings. The mmio exits plus mappings account for almost all the stage 2
exits, as expected. Only about 20 exits during the kernel boot were
neither an mmio access nor a mapping. I did not look into what they are,
but they could be memory permission relaxations or page resizes.

My main concern here is the case where a page/block is replaced by
another or a block is resized. I don't fully understand that mechanism
yet, so I don't know whether it should be counted as a mapping or not.
For now I don't account for it.

Signed-off-by: Yoan Picchi <yoan.picchi@arm.com>
---
 arch/arm64/include/asm/kvm_host.h | 2 ++
 arch/arm64/kvm/guest.c            | 2 ++
 arch/arm64/kvm/hyp/pgtable.c      | 5 +++++
 3 files changed, 9 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 02891ce94..8f9d27571 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -547,6 +547,8 @@ static inline bool __vcpu_write_sys_reg_to_cpu(u64 val, int reg)
 
 struct kvm_vm_stat {
 	ulong remote_tlb_flush;
+	ulong regular_page_mapped;
+	ulong huge_page_mapped;
 };
 
 struct kvm_vcpu_stat {
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 82a4b6275..41316b30e 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -42,6 +42,8 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	VCPU_STAT("exits", exits),
 	VCPU_STAT("halt_poll_success_ns", halt_poll_success_ns),
 	VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
+	VM_STAT("regular_page_mapped", regular_page_mapped),
+	VM_STAT("huge_page_mapped", huge_page_mapped),
 	{ NULL }
 };
 
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 4d177ce1d..2aba2b636 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -498,6 +498,11 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
 	smp_store_release(ptep, new);
 	get_page(page);
 	data->phys += granule;
+	if (level == KVM_PGTABLE_MAX_LEVELS - 1)
+		data->mmu->kvm->stat.regular_page_mapped++;
+	else
+		data->mmu->kvm->stat.huge_page_mapped++;
+
 	return 0;
 }
 
-- 
2.17.1


* [PATCH v2 3/3] KVM: arm64: Add irq_exit counter for kvm_stat
From: Yoan Picchi @ 2021-04-20 13:08 UTC (permalink / raw)
  To: maz, james.morse, julien.thierry.kdev, suzuki.poulose,
	linux-arm-kernel, kvmarm
  Cc: catalin.marinas, Yoan Picchi, will

This counter counts when the guest VM exits due to an interrupt. Those
interrupts might be unrelated to the guest VM (say, some network packet
arrived) but they still trigger an exit, which is recorded by the "exits"
counter. The main purpose of this counter is to give more granularity to
that base exit counter, so that one can get a rough idea of where those
exits come from and whether they are caused by the host or by the guest.

Signed-off-by: Yoan Picchi <yoan.picchi@arm.com>
---
 arch/arm64/include/asm/kvm_host.h | 1 +
 arch/arm64/kvm/guest.c            | 1 +
 arch/arm64/kvm/handle_exit.c      | 1 +
 3 files changed, 3 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8f9d27571..185e707fb 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -565,6 +565,7 @@ struct kvm_vcpu_stat {
 	u64 mmio_exit_kernel;
 	u64 stage2_abort_exit;
 	u64 exits;
+	u64 irq_exits;
 };
 
 int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 41316b30e..eb4c24b7a 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -42,6 +42,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 	VCPU_STAT("exits", exits),
 	VCPU_STAT("halt_poll_success_ns", halt_poll_success_ns),
 	VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
+	VCPU_STAT("irq_exits", irq_exits),
 	VM_STAT("regular_page_mapped", regular_page_mapped),
 	VM_STAT("huge_page_mapped", huge_page_mapped),
 	{ NULL }
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index cebe39f3b..7e5dbc5ff 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -244,6 +244,7 @@ int handle_exit(struct kvm_vcpu *vcpu, int exception_index)
 
 	switch (exception_index) {
 	case ARM_EXCEPTION_IRQ:
+		vcpu->stat.irq_exits++;
 		return 1;
 	case ARM_EXCEPTION_EL1_SERROR:
 		return 1;
-- 
2.17.1


* Re: [PATCH v2 3/3] KVM: arm64: Add irq_exit counter for kvm_stat
From: Marc Zyngier @ 2021-04-20 13:26 UTC (permalink / raw)
  To: Yoan Picchi; +Cc: catalin.marinas, will, kvmarm, linux-arm-kernel

On Tue, 20 Apr 2021 14:08:25 +0100,
Yoan Picchi <yoan.picchi@arm.com> wrote:
> 
> This counter counts when the guest VM exits due to an interrupt. Those
> interrupts might be unrelated to the guest VM (say, some network packet
> arrived) but they still trigger an exit, which is recorded by the "exits"
> counter. The main purpose of this counter is to give more granularity to
> that base exit counter, so that one can get a rough idea of where those
> exits come from and whether they are caused by the host or by the guest.

I don't think this makes much sense. Why should we account interrupts
for the guest when the interrupts don't have anything to do with it?
And guess what, they almost *never* do. The only interrupt that could
be accountable to the vcpu is the vPE doorbell, and that one can never
cause an exit, by definition.

The kernel already provides quite a lot of interrupt accounting, and I
don't think we need to add more for the guest. If this made any sense,
it should be applicable to any userspace task, not just vcpus.
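
(For illustration: that existing accounting is already visible without
any KVM change, e.g. via something like

	cat /proc/interrupts
	perf stat -e irq:irq_handler_entry -a -- sleep 10

assuming the generic irq tracepoints are enabled in the host kernel.)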

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v2 1/3] KVM: arm64: Add a stage2 page fault counter for kvm_stat
From: Marc Zyngier @ 2021-04-20 13:31 UTC (permalink / raw)
  To: Yoan Picchi; +Cc: catalin.marinas, will, kvmarm, linux-arm-kernel

On Tue, 20 Apr 2021 14:08:23 +0100,
Yoan Picchi <yoan.picchi@arm.com> wrote:
> 
> This counter counts stage 2 page fault exits. It is meant to measure how
> much those page faults influence performance (by comparing this number of
> exits to other exit causes).
> For now the counter is generic, but more granularity is planned in the
> next commits so that one knows how many of the page faults are for memory
> allocation or for mmio, for instance. The idea is that someone using this
> counter can get a better idea of what is triggering those exits and try
> to fix it.
> 
> Signed-off-by: Yoan Picchi <yoan.picchi@arm.com>
> ---
>  arch/arm64/include/asm/kvm_host.h | 1 +
>  arch/arm64/kvm/guest.c            | 1 +
>  arch/arm64/kvm/mmu.c              | 2 ++
>  3 files changed, 4 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 3d10e6527..02891ce94 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -561,6 +561,7 @@ struct kvm_vcpu_stat {
>  	u64 wfi_exit_stat;
>  	u64 mmio_exit_user;
>  	u64 mmio_exit_kernel;
> +	u64 stage2_abort_exit;
>  	u64 exits;
>  };
>  
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 9bbd30e62..82a4b6275 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -38,6 +38,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
>  	VCPU_STAT("wfi_exit_stat", wfi_exit_stat),
>  	VCPU_STAT("mmio_exit_user", mmio_exit_user),
>  	VCPU_STAT("mmio_exit_kernel", mmio_exit_kernel),
> +	VCPU_STAT("stage2_abort_exit", stage2_abort_exit),
>  	VCPU_STAT("exits", exits),
>  	VCPU_STAT("halt_poll_success_ns", halt_poll_success_ns),
>  	VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 77cb2d28f..c3527ccf6 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -966,6 +966,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>  	fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
>  	is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
>  
> +

Please proofread your patches, and don't add spurious blank lines.

>  	/* Synchronous External Abort? */
>  	if (kvm_vcpu_abt_issea(vcpu)) {
>  		/*
> @@ -980,6 +981,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>  
>  	trace_kvm_guest_fault(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu),
>  			      kvm_vcpu_get_hfar(vcpu), fault_ipa);
> +	vcpu->stat.stage2_abort_exit++;

What is the benefit of this counter over, say, the tracepoint that is
already in place? This tracepoint allows you to count these exits, and
to actually find *why* you are exiting.

If the purpose of this patch is to identify exit reasons, I'd say that
the tracepoint is vastly superior in that respect.
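
(For illustration: the tracepoint in question is the kvm_guest_fault
event behind the trace_kvm_guest_fault() call visible in the hunk above,
so these exits can already be counted from userspace with no new code,
e.g. with something like

	perf stat -e kvm:kvm_guest_fault -a -- sleep 10

which also exposes the fault details (PC, ESR, HFAR, IPA) when the event
is recorded rather than just counted. The kvm:kvm_guest_fault name is
assumed from the tracepoint definition.)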

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v2 2/3] KVM: arm64: Add two page mapping counters for kvm_stat
From: Marc Zyngier @ 2021-04-20 13:52 UTC (permalink / raw)
  To: Yoan Picchi; +Cc: catalin.marinas, will, kvmarm, linux-arm-kernel

On Tue, 20 Apr 2021 14:08:24 +0100,
Yoan Picchi <yoan.picchi@arm.com> wrote:
> 
> This patch adds regular_page_mapped and huge_page_mapped.
> regular_page_mapped is incremented when a page of the smallest granularity
> is mapped, usually a 4k, 16k or 64k page.
> huge_page_mapped is incremented when a huge page of any size larger than
> the smallest granularity is mapped.
> Those counters only count pages mapped for guest data and do not count the
> pages/blocks allocated for the page tables themselves, as I don't see a
> need to record those.
>
> I can see two use cases for those counters:
> 	Detecting memory pressure in the host when the guest gets regular
> pages instead of huge ones.
> 	Helping to spot abnormal memory usage, such as recurring
> allocations long after the kernel and the first few programs have started.
> Together with the previous patch adding stage2_abort_exit, they have the
> added benefit of identifying the second main cause of stage 2 page faults
> (the other being mmio accesses).
>
> To test this patch I started a guest VM and monitored the page mappings.
> By default it only maps huge pages. I then disabled huge pages with:
> 	echo never > /sys/kernel/mm/transparent_hugepage/enabled
> After restarting the VM, it no longer maps any huge pages, only regular
> pages.
>
> I can't log into the guest because it doesn't recognize my keyboard.

Maybe you should consider having a look at what is going wrong
here. If your guest isn't working correctly, trying to account for
page allocation is a bit... irrelevant.

> To
> work around that I added a command to the init script that needs some
> memory:
> 	cat /dev/zero | head -c 1000m | tail
> This takes about 1GiB of memory before finishing.
> From memory, it mapped 525 or so huge pages, which is about what I would
> expect with 2MB pages.
>
> I also checked the relation between stage 2 exits, mmio exits and
> mappings. The mmio exits plus mappings account for almost all the stage 2
> exits, as expected. Only about 20 exits during the kernel boot were
> neither an mmio access nor a mapping. I did not look into what they are,
> but they could be memory permission relaxations or page resizes.
>
> My main concern here is the case where a page/block is replaced by
> another or a block is resized. I don't fully understand that mechanism
> yet, so I don't know whether it should be counted as a mapping or not.
> For now I don't account for it.

None of this discussion belongs in a commit message.

> 
> Signed-off-by: Yoan Picchi <yoan.picchi@arm.com>
> ---
>  arch/arm64/include/asm/kvm_host.h | 2 ++
>  arch/arm64/kvm/guest.c            | 2 ++
>  arch/arm64/kvm/hyp/pgtable.c      | 5 +++++
>  3 files changed, 9 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 02891ce94..8f9d27571 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -547,6 +547,8 @@ static inline bool __vcpu_write_sys_reg_to_cpu(u64 val, int reg)
>  
>  struct kvm_vm_stat {
>  	ulong remote_tlb_flush;
> +	ulong regular_page_mapped;
> +	ulong huge_page_mapped;
>  };
>  
>  struct kvm_vcpu_stat {
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 82a4b6275..41316b30e 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -42,6 +42,8 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
>  	VCPU_STAT("exits", exits),
>  	VCPU_STAT("halt_poll_success_ns", halt_poll_success_ns),
>  	VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
> +	VM_STAT("regular_page_mapped", regular_page_mapped),
> +	VM_STAT("huge_page_mapped", huge_page_mapped),

As pointed out in the previous review, two sizes don't quite fit
all. There is a continuum of page, contiguous pages and block mappings
of various sizes (although KVM doesn't support the contiguous hint at
S2 yet).

For this information to be useful, you'd have to at least distinguish
between 2M and 1G mappings when operating with a 4k base granule
size. Because if I have enough memory to get contiguous 1G chunks, I
surely don't want them to show up as 2M mappings because I messed up
the userspace alignment requirements, and this counter can't
distinguish between these cases.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.