* [PATCH v4 00/21] KVM: ARM64: Add guest PMU support
@ 2015-10-30  6:21 ` Shannon Zhao
  0 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

This patchset adds guest PMU support for KVM on ARM64. It takes a
trap-and-emulate approach: when the guest programs a counter to monitor
an event, the access traps to KVM, which calls the perf_event API to
create a backing perf event and uses the related perf_event APIs to
read the event's count.

perf can be used to exercise this patchset inside the guest. "perf list"
shows the hardware events and hardware cache events that perf supports,
and "perf stat -e EVENT" monitors a given event. For example, use
"perf stat -e cycles" to count CPU cycles and "perf stat -e cache-misses"
to count cache misses.

Below are the outputs of "perf stat -r 5 sleep 5" when run in the host
and in the guest.

Host:
 Performance counter stats for 'sleep 5' (5 runs):

          0.522048      task-clock (msec)         #    0.000 CPUs utilized            ( +-  1.50% )
                 1      context-switches          #    0.002 M/sec
                 0      cpu-migrations            #    0.383 K/sec                    ( +-100.00% )
                48      page-faults               #    0.092 M/sec                    ( +-  0.66% )
           1088597      cycles                    #    2.085 GHz                      ( +-  1.50% )
   <not supported>      stalled-cycles-frontend
   <not supported>      stalled-cycles-backend
            524457      instructions              #    0.48  insns per cycle          ( +-  0.89% )
   <not supported>      branches
              9688      branch-misses             #   18.557 M/sec                    ( +-  1.78% )

       5.000851736 seconds time elapsed                                          ( +-  0.00% )

Guest:
 Performance counter stats for 'sleep 5' (5 runs):

          0.632288      task-clock (msec)         #    0.000 CPUs utilized            ( +-  1.11% )
                 1      context-switches          #    0.002 M/sec
                 0      cpu-migrations            #    0.000 K/sec
                49      page-faults               #    0.078 M/sec                    ( +-  1.19% )
           1119933      cycles                    #    1.771 GHz                      ( +-  1.19% )
   <not supported>      stalled-cycles-frontend
   <not supported>      stalled-cycles-backend
            568318      instructions              #    0.51  insns per cycle          ( +-  0.91% )
   <not supported>      branches
             10227      branch-misses             #   16.175 M/sec                    ( +-  1.71% )

       5.001170616 seconds time elapsed                                          ( +-  0.00% )

The following simple cycle-counter read test was run in both guest and
host:

static void test(void)
{
	/* volatile, initialized: keep the increment between the two
	 * reads from being optimized away */
	volatile unsigned long count = 0;
	unsigned long count1, count2;

	count1 = read_cycles();
	count++;
	count2 = read_cycles();
	pr_info("count1: %lu\ncount2: %lu\ndelta: %lu\n",
		count1, count2, count2 - count1);
}

Host:
count1: 3044948797
count2: 3044948931
delta: 134

Guest:
count1: 5782364731
count2: 5782364885
delta: 154

The gap between guest and host is very small. One likely reason is that
cycles spent in EL2 and in the host are not counted because we set
exclude_hv = 1, so the cycles spent saving/restoring registers at EL2
are excluded.

This patchset can be fetched from [1], and the matching QEMU tree for
testing can be fetched from [2].

The results of 'perf test' can be found at [3][4].
The results of the perf_event_tests test suite can be found at [5][6].

Thanks,
Shannon

[1] https://git.linaro.org/people/shannon.zhao/linux-mainline.git  KVM_ARM64_PMU_v4
[2] https://git.linaro.org/people/shannon.zhao/qemu.git  virtual_PMU
[3] http://people.linaro.org/~shannon.zhao/PMU/perf-test-host.txt
[4] http://people.linaro.org/~shannon.zhao/PMU/perf-test-guest.txt
[5] http://people.linaro.org/~shannon.zhao/PMU/perf_event_tests-host.txt
[6] http://people.linaro.org/~shannon.zhao/PMU/perf_event_tests-guest.txt

Changes since v3:
* Rebase on current mainline kernel
* Use ARMV8_MAX_COUNTERS instead of the literal 32
* Reset PMCR.E to zero
* Trigger overflow for software increment
* Optimize the PMU interrupt injection logic
* Add a handler for the E, C and P bits of PMCR
* Fix the overflow bug found by perf_event_tests
* Run 'perf test', 'perf top' and the perf_event_tests test suite
* Add exclude_hv = 1 so that cycles in EL2 are not counted

Changes since v2:
* Directly use the perf raw event type to create perf_events in KVM
* Add a helper vcpu_sysreg_write
* Remove an unrelated header file

Changes since v1:
* Use switch...case in the register access handlers instead of a
  separate handler for each register
* Store the register values in sys_regs instead of adding new fields
  to struct kvm_pmc
* Fix the handling of the cp15 regs
* Create a new kvm device, vPMU, so userspace can choose whether to
  create a PMU
* Fix the handling of the PMU overflow interrupt

Shannon Zhao (21):
  ARM64: Move PMU register related defines to asm/pmu.h
  KVM: ARM64: Define PMU data structure for each vcpu
  KVM: ARM64: Add offset defines for PMU registers
  KVM: ARM64: Add reset and access handlers for PMCR_EL0 register
  KVM: ARM64: Add reset and access handlers for PMSELR register
  KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1
    register
  KVM: ARM64: PMU: Add perf event map and introduce perf event creating
    function
  KVM: ARM64: Add reset and access handlers for PMXEVTYPER register
  KVM: ARM64: Add reset and access handlers for PMXEVCNTR register
  KVM: ARM64: Add reset and access handlers for PMCCNTR register
  KVM: ARM64: Add reset and access handlers for PMCNTENSET and
    PMCNTENCLR register
  KVM: ARM64: Add reset and access handlers for PMINTENSET and
    PMINTENCLR register
  KVM: ARM64: Add reset and access handlers for PMOVSSET and PMOVSCLR
    register
  KVM: ARM64: Add reset and access handlers for PMUSERENR register
  KVM: ARM64: Add reset and access handlers for PMSWINC register
  KVM: ARM64: Add access handlers for PMEVCNTRn and PMEVTYPERn register
  KVM: ARM64: Add helper to handle PMCR register bits
  KVM: ARM64: Add PMU overflow interrupt routing
  KVM: ARM64: Reset PMU state when resetting vcpu
  KVM: ARM64: Free perf event of PMU when destroying vcpu
  KVM: ARM64: Add a new kvm ARM PMU device

 Documentation/virtual/kvm/devices/arm-pmu.txt |  15 +
 arch/arm/kvm/arm.c                            |   5 +
 arch/arm64/include/asm/kvm_asm.h              |  55 ++-
 arch/arm64/include/asm/kvm_host.h             |   2 +
 arch/arm64/include/asm/pmu.h                  |  47 +++
 arch/arm64/include/uapi/asm/kvm.h             |   3 +
 arch/arm64/kernel/perf_event.c                |  35 --
 arch/arm64/kvm/Kconfig                        |   8 +
 arch/arm64/kvm/Makefile                       |   1 +
 arch/arm64/kvm/reset.c                        |   3 +
 arch/arm64/kvm/sys_regs.c                     | 547 ++++++++++++++++++++++++--
 arch/arm64/kvm/sys_regs.h                     |  16 +
 include/kvm/arm_pmu.h                         |  74 ++++
 include/linux/kvm_host.h                      |   1 +
 include/uapi/linux/kvm.h                      |   2 +
 virt/kvm/arm/pmu.c                            | 510 ++++++++++++++++++++++++
 virt/kvm/arm/vgic.c                           |   8 +
 virt/kvm/arm/vgic.h                           |   1 +
 virt/kvm/kvm_main.c                           |   4 +
 19 files changed, 1269 insertions(+), 68 deletions(-)
 create mode 100644 Documentation/virtual/kvm/devices/arm-pmu.txt
 create mode 100644 include/kvm/arm_pmu.h
 create mode 100644 virt/kvm/arm/pmu.c

-- 
2.0.4




* [PATCH v4 01/21] ARM64: Move PMU register related defines to asm/pmu.h
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

To make the ARMv8 PMU register defines usable from the KVM code, we
move the relevant definitions into the asm/pmu.h header file.

Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/include/asm/pmu.h   | 45 ++++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/perf_event.c | 35 --------------------------------
 2 files changed, 45 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
index b7710a5..b9f394a 100644
--- a/arch/arm64/include/asm/pmu.h
+++ b/arch/arm64/include/asm/pmu.h
@@ -19,6 +19,51 @@
 #ifndef __ASM_PMU_H
 #define __ASM_PMU_H
 
+#define ARMV8_MAX_COUNTERS      32
+#define ARMV8_COUNTER_MASK      (ARMV8_MAX_COUNTERS - 1)
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMCR_E		(1 << 0) /* Enable all counters */
+#define ARMV8_PMCR_P		(1 << 1) /* Reset all counters */
+#define ARMV8_PMCR_C		(1 << 2) /* Cycle counter reset */
+#define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
+#define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
+#define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
+#define	ARMV8_PMCR_N_MASK	0x1f
+#define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
+
+/*
+ * PMCNTEN: counters enable reg
+ */
+#define	ARMV8_CNTEN_MASK	0xffffffff	/* Mask for writable bits */
+
+/*
+ * PMINTEN: counters interrupt enable reg
+ */
+#define	ARMV8_INTEN_MASK	0xffffffff	/* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#define	ARMV8_OVSR_MASK		0xffffffff	/* Mask for writable bits */
+#define	ARMV8_OVERFLOWED_MASK	ARMV8_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#define	ARMV8_EVTYPE_MASK	0xc80003ff	/* Mask for writable bits */
+#define	ARMV8_EVTYPE_EVENT	0x3ff		/* Mask for EVENT bits */
+
+/*
+ * Event filters for PMUv3
+ */
+#define	ARMV8_EXCLUDE_EL1	(1 << 31)
+#define	ARMV8_EXCLUDE_EL0	(1 << 30)
+#define	ARMV8_INCLUDE_EL2	(1 << 27)
+
 #ifdef CONFIG_HW_PERF_EVENTS
 
 /* The events for a given PMU register set. */
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index f9a74d4..534e8ad 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -741,9 +741,6 @@ static const unsigned armv8_pmuv3_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
 #define	ARMV8_IDX_COUNTER0	1
 #define	ARMV8_IDX_COUNTER_LAST	(ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1)
 
-#define	ARMV8_MAX_COUNTERS	32
-#define	ARMV8_COUNTER_MASK	(ARMV8_MAX_COUNTERS - 1)
-
 /*
  * ARMv8 low level PMU access
  */
@@ -754,38 +751,6 @@ static const unsigned armv8_pmuv3_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
 #define	ARMV8_IDX_TO_COUNTER(x)	\
 	(((x) - ARMV8_IDX_COUNTER0) & ARMV8_COUNTER_MASK)
 
-/*
- * Per-CPU PMCR: config reg
- */
-#define ARMV8_PMCR_E		(1 << 0) /* Enable all counters */
-#define ARMV8_PMCR_P		(1 << 1) /* Reset all counters */
-#define ARMV8_PMCR_C		(1 << 2) /* Cycle counter reset */
-#define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
-#define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
-#define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
-#define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
-#define	ARMV8_PMCR_N_MASK	0x1f
-#define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
-
-/*
- * PMOVSR: counters overflow flag status reg
- */
-#define	ARMV8_OVSR_MASK		0xffffffff	/* Mask for writable bits */
-#define	ARMV8_OVERFLOWED_MASK	ARMV8_OVSR_MASK
-
-/*
- * PMXEVTYPER: Event selection reg
- */
-#define	ARMV8_EVTYPE_MASK	0xc80003ff	/* Mask for writable bits */
-#define	ARMV8_EVTYPE_EVENT	0x3ff		/* Mask for EVENT bits */
-
-/*
- * Event filters for PMUv3
- */
-#define	ARMV8_EXCLUDE_EL1	(1 << 31)
-#define	ARMV8_EXCLUDE_EL0	(1 << 30)
-#define	ARMV8_INCLUDE_EL2	(1 << 27)
-
 static inline u32 armv8pmu_pmcr_read(void)
 {
 	u32 val;
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread

* [PATCH v4 01/21] ARM64: Move PMU register related defines to asm/pmu.h
@ 2015-10-30  6:21   ` Shannon Zhao
  0 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

To use the ARMv8 PMU related register defines from the KVM code,
we move the relevant definitions to asm/pmu.h header file.

Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/include/asm/pmu.h   | 45 ++++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/perf_event.c | 35 --------------------------------
 2 files changed, 45 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
index b7710a5..b9f394a 100644
--- a/arch/arm64/include/asm/pmu.h
+++ b/arch/arm64/include/asm/pmu.h
@@ -19,6 +19,51 @@
 #ifndef __ASM_PMU_H
 #define __ASM_PMU_H
 
+#define ARMV8_MAX_COUNTERS      32
+#define ARMV8_COUNTER_MASK      (ARMV8_MAX_COUNTERS - 1)
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMCR_E		(1 << 0) /* Enable all counters */
+#define ARMV8_PMCR_P		(1 << 1) /* Reset all counters */
+#define ARMV8_PMCR_C		(1 << 2) /* Cycle counter reset */
+#define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
+#define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
+#define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
+#define	ARMV8_PMCR_N_MASK	0x1f
+#define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
+
+/*
+ * PMCNTEN: counters enable reg
+ */
+#define	ARMV8_CNTEN_MASK	0xffffffff	/* Mask for writable bits */
+
+/*
+ * PMINTEN: counters interrupt enable reg
+ */

* [PATCH v4 01/21] ARM64: Move PMU register related defines to asm/pmu.h
@ 2015-10-30  6:21   ` Shannon Zhao
  0 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: linux-arm-kernel

From: Shannon Zhao <shannon.zhao@linaro.org>

To use the ARMv8 PMU register defines from the KVM code, move the
relevant definitions to the asm/pmu.h header file.

Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/include/asm/pmu.h   | 45 ++++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/perf_event.c | 35 --------------------------------
 2 files changed, 45 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
index b7710a5..b9f394a 100644
--- a/arch/arm64/include/asm/pmu.h
+++ b/arch/arm64/include/asm/pmu.h
@@ -19,6 +19,51 @@
 #ifndef __ASM_PMU_H
 #define __ASM_PMU_H
 
+#define ARMV8_MAX_COUNTERS      32
+#define ARMV8_COUNTER_MASK      (ARMV8_MAX_COUNTERS - 1)
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMCR_E		(1 << 0) /* Enable all counters */
+#define ARMV8_PMCR_P		(1 << 1) /* Reset all counters */
+#define ARMV8_PMCR_C		(1 << 2) /* Cycle counter reset */
+#define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
+#define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
+#define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
+#define	ARMV8_PMCR_N_MASK	0x1f
+#define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
+
+/*
+ * PMCNTEN: counters enable reg
+ */
+#define	ARMV8_CNTEN_MASK	0xffffffff	/* Mask for writable bits */
+
+/*
+ * PMINTEN: counters interrupt enable reg
+ */
+#define	ARMV8_INTEN_MASK	0xffffffff	/* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#define	ARMV8_OVSR_MASK		0xffffffff	/* Mask for writable bits */
+#define	ARMV8_OVERFLOWED_MASK	ARMV8_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#define	ARMV8_EVTYPE_MASK	0xc80003ff	/* Mask for writable bits */
+#define	ARMV8_EVTYPE_EVENT	0x3ff		/* Mask for EVENT bits */
+
+/*
+ * Event filters for PMUv3
+ */
+#define	ARMV8_EXCLUDE_EL1	(1 << 31)
+#define	ARMV8_EXCLUDE_EL0	(1 << 30)
+#define	ARMV8_INCLUDE_EL2	(1 << 27)
+
 #ifdef CONFIG_HW_PERF_EVENTS
 
 /* The events for a given PMU register set. */
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index f9a74d4..534e8ad 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -741,9 +741,6 @@ static const unsigned armv8_pmuv3_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
 #define	ARMV8_IDX_COUNTER0	1
 #define	ARMV8_IDX_COUNTER_LAST	(ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1)
 
-#define	ARMV8_MAX_COUNTERS	32
-#define	ARMV8_COUNTER_MASK	(ARMV8_MAX_COUNTERS - 1)
-
 /*
  * ARMv8 low level PMU access
  */
@@ -754,38 +751,6 @@ static const unsigned armv8_pmuv3_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
 #define	ARMV8_IDX_TO_COUNTER(x)	\
 	(((x) - ARMV8_IDX_COUNTER0) & ARMV8_COUNTER_MASK)
 
-/*
- * Per-CPU PMCR: config reg
- */
-#define ARMV8_PMCR_E		(1 << 0) /* Enable all counters */
-#define ARMV8_PMCR_P		(1 << 1) /* Reset all counters */
-#define ARMV8_PMCR_C		(1 << 2) /* Cycle counter reset */
-#define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
-#define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
-#define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
-#define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
-#define	ARMV8_PMCR_N_MASK	0x1f
-#define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
-
-/*
- * PMOVSR: counters overflow flag status reg
- */
-#define	ARMV8_OVSR_MASK		0xffffffff	/* Mask for writable bits */
-#define	ARMV8_OVERFLOWED_MASK	ARMV8_OVSR_MASK
-
-/*
- * PMXEVTYPER: Event selection reg
- */
-#define	ARMV8_EVTYPE_MASK	0xc80003ff	/* Mask for writable bits */
-#define	ARMV8_EVTYPE_EVENT	0x3ff		/* Mask for EVENT bits */
-
-/*
- * Event filters for PMUv3
- */
-#define	ARMV8_EXCLUDE_EL1	(1 << 31)
-#define	ARMV8_EXCLUDE_EL0	(1 << 30)
-#define	ARMV8_INCLUDE_EL2	(1 << 27)
-
 static inline u32 armv8pmu_pmcr_read(void)
 {
 	u32 val;
-- 
2.0.4


* [PATCH v4 02/21] KVM: ARM64: Define PMU data structure for each vcpu
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Here we plan to support a virtual PMU for the guest via full software
emulation, so define some basic structs and functions in preparation for
further steps. Define struct kvm_pmc for a performance monitor counter
and struct kvm_pmu for the per-vcpu performance monitor unit. According
to the ARMv8 spec, the PMU contains at most 32 (ARMV8_MAX_COUNTERS)
counters.

Since this only supports ARM64 (or PMUv3), add a separate config symbol
for it.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/kvm/Kconfig            |  8 ++++++++
 include/kvm/arm_pmu.h             | 41 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 51 insertions(+)
 create mode 100644 include/kvm/arm_pmu.h

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ed03968..cc843ca 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -37,6 +37,7 @@
 
 #include <kvm/arm_vgic.h>
 #include <kvm/arm_arch_timer.h>
+#include <kvm/arm_pmu.h>
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
@@ -132,6 +133,7 @@ struct kvm_vcpu_arch {
 	/* VGIC state */
 	struct vgic_cpu vgic_cpu;
 	struct arch_timer_cpu timer_cpu;
+	struct kvm_pmu pmu;
 
 	/*
 	 * Anything that is not used directly from assembly code goes
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 5c7e920..8f321b1 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -31,6 +31,7 @@ config KVM
 	select KVM_VFIO
 	select HAVE_KVM_EVENTFD
 	select HAVE_KVM_IRQFD
+	select KVM_ARM_PMU
 	---help---
 	  Support hosting virtualized guest machines.
 
@@ -41,4 +42,11 @@ config KVM_ARM_HOST
 	---help---
 	  Provides host support for ARM processors.
 
+config KVM_ARM_PMU
+	bool
+	depends on KVM_ARM_HOST
+	---help---
+	  Adds support for a virtual Performance Monitoring Unit (PMU) in
+	  virtual machines.
+
 endif # VIRTUALIZATION
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
new file mode 100644
index 0000000..254d2b4
--- /dev/null
+++ b/include/kvm/arm_pmu.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2015 Linaro Ltd.
+ * Author: Shannon Zhao <shannon.zhao@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_ARM_KVM_PMU_H
+#define __ASM_ARM_KVM_PMU_H
+
+#include <linux/perf_event.h>
+#include <asm/pmu.h>
+
+struct kvm_pmc {
+	u8 idx;/* index into the pmu->pmc array */
+	struct perf_event *perf_event;
+	struct kvm_vcpu *vcpu;
+	u64 bitmask;
+};
+
+struct kvm_pmu {
+#ifdef CONFIG_KVM_ARM_PMU
+	/* PMU IRQ Number per VCPU */
+	int irq_num;
+	/* IRQ pending flag */
+	bool irq_pending;
+	struct kvm_pmc pmc[ARMV8_MAX_COUNTERS];
+#endif
+};
+
+#endif
-- 
2.0.4





* [PATCH v4 03/21] KVM: ARM64: Add offset defines for PMU registers
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

We are about to trap and emulate accesses to each PMU register
individually. This adds the context offsets for the AArch64 PMU
registers and their AArch32 counterparts.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/include/asm/kvm_asm.h | 55 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 50 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 5e37710..4f804c1 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -48,12 +48,34 @@
 #define MDSCR_EL1	22	/* Monitor Debug System Control Register */
 #define MDCCINT_EL1	23	/* Monitor Debug Comms Channel Interrupt Enable Reg */
 
+/* Performance Monitors Registers */
+#define PMCR_EL0	24	/* Control Register */
+#define PMOVSSET_EL0	25	/* Overflow Flag Status Set Register */
+#define PMOVSCLR_EL0	26	/* Overflow Flag Status Clear Register */
+#define PMSELR_EL0	27	/* Event Counter Selection Register */
+#define PMCEID0_EL0	28	/* Common Event Identification Register 0 */
+#define PMCEID1_EL0	29	/* Common Event Identification Register 1 */
+#define PMEVCNTR0_EL0	30	/* Event Counter Register (0-30) */
+#define PMEVCNTR30_EL0	60
+#define PMCCNTR_EL0	61	/* Cycle Counter Register */
+#define PMEVTYPER0_EL0	62	/* Event Type Register (0-30) */
+#define PMEVTYPER30_EL0	92
+#define PMCCFILTR_EL0	93	/* Cycle Count Filter Register */
+#define PMXEVCNTR_EL0	94	/* Selected Event Count Register */
+#define PMXEVTYPER_EL0	95	/* Selected Event Type Register */
+#define PMCNTENSET_EL0	96	/* Count Enable Set Register */
+#define PMCNTENCLR_EL0	97	/* Count Enable Clear Register */
+#define PMINTENSET_EL1	98	/* Interrupt Enable Set Register */
+#define PMINTENCLR_EL1	99	/* Interrupt Enable Clear Register */
+#define PMUSERENR_EL0	100	/* User Enable Register */
+#define PMSWINC_EL0	101	/* Software Increment Register */
+
 /* 32bit specific registers. Keep them at the end of the range */
-#define	DACR32_EL2	24	/* Domain Access Control Register */
-#define	IFSR32_EL2	25	/* Instruction Fault Status Register */
-#define	FPEXC32_EL2	26	/* Floating-Point Exception Control Register */
-#define	DBGVCR32_EL2	27	/* Debug Vector Catch Register */
-#define	NR_SYS_REGS	28
+#define	DACR32_EL2	102	/* Domain Access Control Register */
+#define	IFSR32_EL2	103	/* Instruction Fault Status Register */
+#define	FPEXC32_EL2	104	/* Floating-Point Exception Control Register */
+#define	DBGVCR32_EL2	105	/* Debug Vector Catch Register */
+#define	NR_SYS_REGS	106
 
 /* 32bit mapping */
 #define c0_MPIDR	(MPIDR_EL1 * 2)	/* MultiProcessor ID Register */
@@ -75,6 +97,24 @@
 #define c6_IFAR		(c6_DFAR + 1)	/* Instruction Fault Address Register */
 #define c7_PAR		(PAR_EL1 * 2)	/* Physical Address Register */
 #define c7_PAR_high	(c7_PAR + 1)	/* PAR top 32 bits */
+
+/* Performance Monitors*/
+#define c9_PMCR		(PMCR_EL0 * 2)
+#define c9_PMOVSSET	(PMOVSSET_EL0 * 2)
+#define c9_PMOVSCLR	(PMOVSCLR_EL0 * 2)
+#define c9_PMCCNTR	(PMCCNTR_EL0 * 2)
+#define c9_PMSELR	(PMSELR_EL0 * 2)
+#define c9_PMCEID0	(PMCEID0_EL0 * 2)
+#define c9_PMCEID1	(PMCEID1_EL0 * 2)
+#define c9_PMXEVCNTR	(PMXEVCNTR_EL0 * 2)
+#define c9_PMXEVTYPER	(PMXEVTYPER_EL0 * 2)
+#define c9_PMCNTENSET	(PMCNTENSET_EL0 * 2)
+#define c9_PMCNTENCLR	(PMCNTENCLR_EL0 * 2)
+#define c9_PMINTENSET	(PMINTENSET_EL1 * 2)
+#define c9_PMINTENCLR	(PMINTENCLR_EL1 * 2)
+#define c9_PMUSERENR	(PMUSERENR_EL0 * 2)
+#define c9_PMSWINC	(PMSWINC_EL0 * 2)
+
 #define c10_PRRR	(MAIR_EL1 * 2)	/* Primary Region Remap Register */
 #define c10_NMRR	(c10_PRRR + 1)	/* Normal Memory Remap Register */
 #define c12_VBAR	(VBAR_EL1 * 2)	/* Vector Base Address Register */
@@ -86,6 +126,11 @@
 #define c10_AMAIR1	(c10_AMAIR0 + 1)/* Aux Memory Attr Indirection Reg */
 #define c14_CNTKCTL	(CNTKCTL_EL1 * 2) /* Timer Control Register (PL1) */
 
+/* Performance Monitors*/
+#define c14_PMEVCNTR0	(PMEVCNTR0_EL0 * 2)
+#define c14_PMEVTYPER0	(PMEVTYPER0_EL0 * 2)
+#define c14_PMCCFILTR	(PMCCFILTR_EL0 * 2)
+
 #define cp14_DBGDSCRext	(MDSCR_EL1 * 2)
 #define cp14_DBGBCR0	(DBGBCR0_EL1 * 2)
 #define cp14_DBGBVR0	(DBGBVR0_EL1 * 2)
-- 
2.0.4





* [PATCH v4 04/21] KVM: ARM64: Add reset and access handlers for PMCR_EL0 register
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Add a reset handler that reads the host value of PMCR_EL0 and resets
the writable bits to architecturally UNKNOWN values, except PMCR.E,
which is reset to zero. Add a common access handler for PMU registers
that emulates reads and writes of the register, and add emulation for
PMCR.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 106 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 104 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d03d3af..5b591d6 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -33,6 +33,7 @@
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_host.h>
 #include <asm/kvm_mmu.h>
+#include <asm/pmu.h>
 
 #include <trace/events/kvm.h>
 
@@ -446,6 +447,67 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
 }
 
+static void vcpu_sysreg_write(struct kvm_vcpu *vcpu,
+			      const struct sys_reg_desc *r, u64 val)
+{
+	if (!vcpu_mode_is_32bit(vcpu))
+		vcpu_sys_reg(vcpu, r->reg) = val;
+	else
+		vcpu_cp15(vcpu, r->reg) = lower_32_bits(val);
+}
+
+static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u64 pmcr, val;
+
+	asm volatile("mrs %0, pmcr_el0\n" : "=r" (pmcr));
+	/* Writable bits of PMCR_EL0 (ARMV8_PMCR_MASK) is reset to UNKNOWN
+	 * except PMCR.E resetting to zero.
+	 */
+	val = ((pmcr & ~ARMV8_PMCR_MASK) | (ARMV8_PMCR_MASK & 0xdecafbad))
+	      & (~ARMV8_PMCR_E);
+	vcpu_sysreg_write(vcpu, r, val);
+}
+
+/* PMU registers accessor. */
+static bool access_pmu_regs(struct kvm_vcpu *vcpu,
+			    const struct sys_reg_params *p,
+			    const struct sys_reg_desc *r)
+{
+	unsigned long val;
+
+	if (p->is_write) {
+		switch (r->reg) {
+		case PMCR_EL0: {
+			/* Only update writeable bits of PMCR */
+			val = vcpu_sys_reg(vcpu, r->reg);
+			val &= ~ARMV8_PMCR_MASK;
+			val |= *vcpu_reg(vcpu, p->Rt) & ARMV8_PMCR_MASK;
+			vcpu_sys_reg(vcpu, r->reg) = val;
+			break;
+		}
+		default:
+			vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
+			break;
+		}
+	} else {
+		switch (r->reg) {
+		case PMCR_EL0: {
+			/* PMCR.P & PMCR.C are RAZ */
+			val = vcpu_sys_reg(vcpu, r->reg)
+			      & ~(ARMV8_PMCR_P | ARMV8_PMCR_C);
+			*vcpu_reg(vcpu, p->Rt) = val;
+			break;
+		}
+		default:
+			*vcpu_reg(vcpu, p->Rt) = vcpu_sys_reg(vcpu, r->reg);
+			break;
+		}
+	}
+
+	return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -630,7 +692,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
 	/* PMCR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_pmcr, PMCR_EL0, },
 	/* PMCNTENSET_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
 	  trap_raz_wi },
@@ -864,6 +926,45 @@ static const struct sys_reg_desc cp14_64_regs[] = {
 	{ Op1( 0), CRm( 2), .access = trap_raz_wi },
 };
 
+/* PMU CP15 registers accessor. */
+static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
+				 const struct sys_reg_params *p,
+				 const struct sys_reg_desc *r)
+{
+	unsigned long val;
+
+	if (p->is_write) {
+		switch (r->reg) {
+		case c9_PMCR: {
+			/* Only update writeable bits of PMCR */
+			val = vcpu_cp15(vcpu, r->reg);
+			val &= ~ARMV8_PMCR_MASK;
+			val |= *vcpu_reg(vcpu, p->Rt) & ARMV8_PMCR_MASK;
+			vcpu_cp15(vcpu, r->reg) = val;
+			break;
+		}
+		default:
+			vcpu_cp15(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
+			break;
+		}
+	} else {
+		switch (r->reg) {
+		case c9_PMCR: {
+			/* PMCR.P & PMCR.C are RAZ */
+			val = vcpu_cp15(vcpu, r->reg)
+			      & ~(ARMV8_PMCR_P | ARMV8_PMCR_C);
+			*vcpu_reg(vcpu, p->Rt) = val;
+			break;
+		}
+		default:
+			*vcpu_reg(vcpu, p->Rt) = vcpu_cp15(vcpu, r->reg);
+			break;
+		}
+	}
+
+	return true;
+}
+
 /*
  * Trapped cp15 registers. TTBR0/TTBR1 get a double encoding,
  * depending on the way they are accessed (as a 32bit or a 64bit
@@ -892,7 +993,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 7), CRm(14), Op2( 2), access_dcsw },
 
 	/* PMU */
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmu_cp15_regs,
+	  reset_pmcr, c9_PMCR },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread

* [PATCH v4 05/21] KVM: ARM64: Add reset and access handlers for PMSELR register
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Since the reset value of PMSELR_EL0 is UNKNOWN, use reset_unknown for
its reset handler. As the register needs no special handling on access,
the default case is used to emulate writes and reads of the PMSELR
register.

Add a helper which resets CP15 registers to UNKNOWN.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 5 +++--
 arch/arm64/kvm/sys_regs.h | 8 ++++++++
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 5b591d6..35d232e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -707,7 +707,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  trap_raz_wi },
 	/* PMSELR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_unknown, PMSELR_EL0 },
 	/* PMCEID0_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
 	  trap_raz_wi },
@@ -998,7 +998,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
+	  reset_unknown_cp15, c9_PMSELR },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index eaa324e..8afeff7 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -110,6 +110,14 @@ static inline void reset_unknown(struct kvm_vcpu *vcpu,
 	vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL;
 }
 
+static inline void reset_unknown_cp15(struct kvm_vcpu *vcpu,
+				      const struct sys_reg_desc *r)
+{
+	BUG_ON(!r->reg);
+	BUG_ON(r->reg >= NR_COPRO_REGS);
+	vcpu_cp15(vcpu, r->reg) = 0xdecafbad;
+}
+
 static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	BUG_ON(!r->reg);
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread

* [PATCH v4 06/21] KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1 register
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Add a reset handler which reads the host value of PMCEID0 or PMCEID1.
Since writes to PMCEID0 and PMCEID1 are ignored, add a new case for
this.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 29 +++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 35d232e..cb82b15 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -469,6 +469,19 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	vcpu_sysreg_write(vcpu, r, val);
 }
 
+static void reset_pmceid(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u64 pmceid;
+
+	if (r->reg == PMCEID0_EL0 || r->reg == c9_PMCEID0)
+		asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
+	else
+		/* PMCEID1_EL0 or c9_PMCEID1 */
+		asm volatile("mrs %0, pmceid1_el0\n" : "=r" (pmceid));
+
+	vcpu_sysreg_write(vcpu, r, pmceid);
+}
+
 /* PMU registers accessor. */
 static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 			    const struct sys_reg_params *p,
@@ -486,6 +499,9 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 			vcpu_sys_reg(vcpu, r->reg) = val;
 			break;
 		}
+		case PMCEID0_EL0:
+		case PMCEID1_EL0:
+			return ignore_write(vcpu, p);
 		default:
 			vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
 			break;
@@ -710,10 +726,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  access_pmu_regs, reset_unknown, PMSELR_EL0 },
 	/* PMCEID0_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_pmceid, PMCEID0_EL0 },
 	/* PMCEID1_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_pmceid, PMCEID1_EL0 },
 	/* PMCCNTR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
 	  trap_raz_wi },
@@ -943,6 +959,9 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
 			vcpu_cp15(vcpu, r->reg) = val;
 			break;
 		}
+		case c9_PMCEID0:
+		case c9_PMCEID1:
+			return ignore_write(vcpu, p);
 		default:
 			vcpu_cp15(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
 			break;
@@ -1000,8 +1019,10 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
 	  reset_unknown_cp15, c9_PMSELR },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmu_cp15_regs,
+	  reset_pmceid, c9_PMCEID0 },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmu_cp15_regs,
+	  reset_pmceid, c9_PMCEID1 },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread

* [PATCH v4 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

When a tool like perf is used on the host, perf passes the event type
and the id within that event type category to the kernel, which maps
them to a hardware event number and writes that number to the PMU
PMEVTYPER<n>_EL0 register. When KVM gets the event number, it directly
uses the raw event type to create a perf_event for it.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/include/asm/pmu.h |   2 +
 arch/arm64/kvm/Makefile      |   1 +
 include/kvm/arm_pmu.h        |  13 +++++
 virt/kvm/arm/pmu.c           | 117 +++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 133 insertions(+)
 create mode 100644 virt/kvm/arm/pmu.c

diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
index b9f394a..2c025f2 100644
--- a/arch/arm64/include/asm/pmu.h
+++ b/arch/arm64/include/asm/pmu.h
@@ -31,6 +31,8 @@
 #define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
 #define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
 #define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
+/* Determines whether PMCCNTR_EL0 overflows at bit 31 or, when set, bit 63 */
+#define ARMV8_PMCR_LC		(1 << 6)
 #define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
 #define	ARMV8_PMCR_N_MASK	0x1f
 #define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 1949fe5..18d56d8 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -27,3 +27,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
 kvm-$(CONFIG_KVM_ARM_HOST) += vgic-v3-switch.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
+kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 254d2b4..1908c88 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -38,4 +38,17 @@ struct kvm_pmu {
 #endif
 };
 
+#ifdef CONFIG_KVM_ARM_PMU
+unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
+				    u32 select_idx);
+#else
+static inline unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
+{
+	return 0;
+}
+static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
+						  u32 data, u32 select_idx) {}
+#endif
+
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
new file mode 100644
index 0000000..900a64c
--- /dev/null
+++ b/virt/kvm/arm/pmu.c
@@ -0,0 +1,117 @@
+/*
+ * Copyright (C) 2015 Linaro Ltd.
+ * Author: Shannon Zhao <shannon.zhao@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/cpu.h>
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <linux/perf_event.h>
+#include <asm/kvm_emulate.h>
+#include <kvm/arm_pmu.h>
+
+/**
+ * kvm_pmu_get_counter_value - get PMU counter value
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
+{
+	u64 counter, enabled, running;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	if (!vcpu_mode_is_32bit(vcpu))
+		counter = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + select_idx);
+	else
+		counter = vcpu_cp15(vcpu, c14_PMEVCNTR0 + select_idx);
+
+	if (pmc->perf_event)
+		counter += perf_event_read_value(pmc->perf_event, &enabled,
+						 &running);
+
+	return counter & pmc->bitmask;
+}
+
+/**
+ * kvm_pmu_stop_counter - stop PMU counter
+ * @pmc: The PMU counter pointer
+ *
+ * If this counter has been configured to monitor some event, release it here.
+ */
+static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
+{
+	struct kvm_vcpu *vcpu = pmc->vcpu;
+	u64 counter;
+
+	if (pmc->perf_event) {
+		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
+		if (!vcpu_mode_is_32bit(vcpu))
+			vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + pmc->idx) = counter;
+		else
+			vcpu_cp15(vcpu, c14_PMEVCNTR0 + pmc->idx) = counter;
+
+		perf_event_release_kernel(pmc->perf_event);
+		pmc->perf_event = NULL;
+	}
+}
+
+/**
+ * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * @vcpu: The vcpu pointer
+ * @data: The data guest writes to PMXEVTYPER_EL0
+ * @select_idx: The index of the selected counter
+ *
+ * When the guest OS writes PMXEVTYPER_EL0, it wants a PMC to count an event
+ * with the given hardware event number. Emulate this action by calling the
+ * perf_event API to create a kernel perf event backing the counter.
+ */
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
+				    u32 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	struct perf_event *event;
+	struct perf_event_attr attr;
+	u32 eventsel;
+	u64 counter;
+
+	kvm_pmu_stop_counter(pmc);
+	eventsel = data & ARMV8_EVTYPE_EVENT;
+
+	memset(&attr, 0, sizeof(struct perf_event_attr));
+	attr.type = PERF_TYPE_RAW;
+	attr.size = sizeof(attr);
+	attr.pinned = 1;
+	attr.disabled = 1;
+	attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
+	attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
+	attr.exclude_hv = 1; /* Don't count EL2 events */
+	attr.exclude_host = 1; /* Don't count host events */
+	attr.config = eventsel;
+
+	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
+	/* The initial sample period (overflow count) of an event. */
+	attr.sample_period = (-counter) & pmc->bitmask;
+
+	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+	if (IS_ERR(event)) {
+		pr_err_once("kvm: pmu event creation failed %ld\n",
+			    PTR_ERR(event));
+		return;
+	}
+
+	pmc->perf_event = event;
+}
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread


* [PATCH v4 08/21] KVM: ARM64: Add reset and access handlers for PMXEVTYPER register
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Since the reset value of PMXEVTYPER is UNKNOWN, use reset_unknown or
reset_unknown_cp15 as its reset handler. Add an access handler that
emulates writes and reads of the PMXEVTYPER register. On a write to
PMXEVTYPER, call kvm_pmu_set_counter_event_type to create a perf_event
for the selected event type.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index cb82b15..4e606ea 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -491,6 +491,17 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 
 	if (p->is_write) {
 		switch (r->reg) {
+		case PMXEVTYPER_EL0: {
+			val = vcpu_sys_reg(vcpu, PMSELR_EL0);
+			kvm_pmu_set_counter_event_type(vcpu,
+						       *vcpu_reg(vcpu, p->Rt),
+						       val);
+			vcpu_sys_reg(vcpu, PMXEVTYPER_EL0) =
+							 *vcpu_reg(vcpu, p->Rt);
+			vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + val) =
+							 *vcpu_reg(vcpu, p->Rt);
+			break;
+		}
 		case PMCR_EL0: {
 			/* Only update writeable bits of PMCR */
 			val = vcpu_sys_reg(vcpu, r->reg);
@@ -735,7 +746,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  trap_raz_wi },
 	/* PMXEVTYPER_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_unknown, PMXEVTYPER_EL0 },
 	/* PMXEVCNTR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
 	  trap_raz_wi },
@@ -951,6 +962,16 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
 
 	if (p->is_write) {
 		switch (r->reg) {
+		case c9_PMXEVTYPER: {
+			val = vcpu_cp15(vcpu, c9_PMSELR);
+			kvm_pmu_set_counter_event_type(vcpu,
+						       *vcpu_reg(vcpu, p->Rt),
+						       val);
+			vcpu_cp15(vcpu, c9_PMXEVTYPER) = *vcpu_reg(vcpu, p->Rt);
+			vcpu_cp15(vcpu, c14_PMEVTYPER0 + val) =
+							 *vcpu_reg(vcpu, p->Rt);
+			break;
+		}
 		case c9_PMCR: {
 			/* Only update writeable bits of PMCR */
 			val = vcpu_cp15(vcpu, r->reg);
@@ -1024,7 +1045,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmu_cp15_regs,
 	  reset_pmceid, c9_PMCEID1 },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
-	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_cp15_regs,
+	  reset_unknown_cp15, c9_PMXEVTYPER },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread


* [PATCH v4 09/21] KVM: ARM64: Add reset and access handlers for PMXEVCNTR register
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Since the reset value of PMXEVCNTR is UNKNOWN, use reset_unknown as its
reset handler. Add an access handler that emulates writes and reads of
the PMXEVCNTR register. On a read of PMXEVCNTR, call
kvm_pmu_get_counter_value, which uses perf_event_read_value to obtain
the current count of the backing perf event.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 36 ++++++++++++++++++++++++++++++++++--
 1 file changed, 34 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 4e606ea..b7ca2cd 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -491,6 +491,16 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 
 	if (p->is_write) {
 		switch (r->reg) {
+		case PMXEVCNTR_EL0: {
+			int index = PMEVCNTR0_EL0
+				    + vcpu_sys_reg(vcpu, PMSELR_EL0);
+
+			val = kvm_pmu_get_counter_value(vcpu,
+						vcpu_sys_reg(vcpu, PMSELR_EL0));
+			vcpu_sys_reg(vcpu, index) += (s64)*vcpu_reg(vcpu, p->Rt)
+						     - val;
+			break;
+		}
 		case PMXEVTYPER_EL0: {
 			val = vcpu_sys_reg(vcpu, PMSELR_EL0);
 			kvm_pmu_set_counter_event_type(vcpu,
@@ -519,6 +529,12 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 		}
 	} else {
 		switch (r->reg) {
+		case PMXEVCNTR_EL0: {
+			val = kvm_pmu_get_counter_value(vcpu,
+						vcpu_sys_reg(vcpu, PMSELR_EL0));
+			*vcpu_reg(vcpu, p->Rt) = val;
+			break;
+		}
 		case PMCR_EL0: {
 			/* PMCR.P & PMCR.C are RAZ */
 			val = vcpu_sys_reg(vcpu, r->reg)
@@ -749,7 +765,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  access_pmu_regs, reset_unknown, PMXEVTYPER_EL0 },
 	/* PMXEVCNTR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_unknown, PMXEVCNTR_EL0 },
 	/* PMUSERENR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
 	  trap_raz_wi },
@@ -962,6 +978,15 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
 
 	if (p->is_write) {
 		switch (r->reg) {
+		case c9_PMXEVCNTR: {
+			int index = c14_PMEVCNTR0 + vcpu_cp15(vcpu, c9_PMSELR);
+
+			val = kvm_pmu_get_counter_value(vcpu,
+						    vcpu_cp15(vcpu, c9_PMSELR));
+			vcpu_cp15(vcpu, index) += (s64)*vcpu_reg(vcpu, p->Rt)
+						  - val;
+			break;
+		}
 		case c9_PMXEVTYPER: {
 			val = vcpu_cp15(vcpu, c9_PMSELR);
 			kvm_pmu_set_counter_event_type(vcpu,
@@ -989,6 +1014,12 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
 		}
 	} else {
 		switch (r->reg) {
+		case c9_PMXEVCNTR: {
+			val = kvm_pmu_get_counter_value(vcpu,
+						    vcpu_cp15(vcpu, c9_PMSELR));
+			*vcpu_reg(vcpu, p->Rt) = val;
+			break;
+		}
 		case c9_PMCR: {
 			/* PMCR.P & PMCR.C are RAZ */
 			val = vcpu_cp15(vcpu, r->reg)
@@ -1047,7 +1078,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_cp15_regs,
 	  reset_unknown_cp15, c9_PMXEVTYPER },
-	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_cp15_regs,
+	  reset_unknown_cp15, c9_PMXEVCNTR },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), trap_raz_wi },
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread

* [PATCH v4 10/21] KVM: ARM64: Add reset and access handlers for PMCCNTR register
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Since the reset value of PMCCNTR is UNKNOWN, use reset_unknown for its
reset handler. Add a new case to emulate reading and writing the
PMCCNTR register.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 31 +++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index b7ca2cd..059c84c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -491,6 +491,13 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 
 	if (p->is_write) {
 		switch (r->reg) {
+		case PMCCNTR_EL0: {
+			val = kvm_pmu_get_counter_value(vcpu,
+							ARMV8_MAX_COUNTERS - 1);
+			vcpu_sys_reg(vcpu, r->reg) +=
+					      (s64)*vcpu_reg(vcpu, p->Rt) - val;
+			break;
+		}
 		case PMXEVCNTR_EL0: {
 			int index = PMEVCNTR0_EL0
 				    + vcpu_sys_reg(vcpu, PMSELR_EL0);
@@ -529,6 +536,12 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 		}
 	} else {
 		switch (r->reg) {
+		case PMCCNTR_EL0: {
+			val = kvm_pmu_get_counter_value(vcpu,
+							ARMV8_MAX_COUNTERS - 1);
+			*vcpu_reg(vcpu, p->Rt) = val;
+			break;
+		}
 		case PMXEVCNTR_EL0: {
 			val = kvm_pmu_get_counter_value(vcpu,
 						vcpu_sys_reg(vcpu, PMSELR_EL0));
@@ -759,7 +772,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  access_pmu_regs, reset_pmceid, PMCEID1_EL0 },
 	/* PMCCNTR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_unknown, PMCCNTR_EL0 },
 	/* PMXEVTYPER_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
 	  access_pmu_regs, reset_unknown, PMXEVTYPER_EL0 },
@@ -978,6 +991,13 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
 
 	if (p->is_write) {
 		switch (r->reg) {
+		case c9_PMCCNTR: {
+			val = kvm_pmu_get_counter_value(vcpu,
+							ARMV8_MAX_COUNTERS - 1);
+			vcpu_cp15(vcpu, r->reg) += (s64)*vcpu_reg(vcpu, p->Rt)
+						   - val;
+			break;
+		}
 		case c9_PMXEVCNTR: {
 			int index = c14_PMEVCNTR0 + vcpu_cp15(vcpu, c9_PMSELR);
 
@@ -1014,6 +1034,12 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
 		}
 	} else {
 		switch (r->reg) {
+		case c9_PMCCNTR: {
+			val = kvm_pmu_get_counter_value(vcpu,
+							ARMV8_MAX_COUNTERS - 1);
+			*vcpu_reg(vcpu, p->Rt) = val;
+			break;
+		}
 		case c9_PMXEVCNTR: {
 			val = kvm_pmu_get_counter_value(vcpu,
 						    vcpu_cp15(vcpu, c9_PMSELR));
@@ -1075,7 +1101,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 	  reset_pmceid, c9_PMCEID0 },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmu_cp15_regs,
 	  reset_pmceid, c9_PMCEID1 },
-	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), access_pmu_cp15_regs,
+	  reset_unknown_cp15, c9_PMCCNTR },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_cp15_regs,
 	  reset_unknown_cp15, c9_PMXEVTYPER },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_cp15_regs,
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread
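The write path in the patch above (`vcpu_sys_reg += (s64)written - val`) stores an offset rather than the raw value: assuming kvm_pmu_get_counter_value returns the shadow register plus the backing perf event's count, this makes a subsequent read return exactly what the guest wrote while the event keeps counting underneath. A minimal sketch of the arithmetic under that assumption (names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the PMCCNTR write emulation: the shadow register holds an
 * offset, and the architectural counter value is that offset plus
 * whatever the perf event has counted so far. */
static uint64_t shadow;      /* stands in for vcpu_sys_reg(vcpu, PMCCNTR_EL0) */
static uint64_t perf_count;  /* stands in for the perf event's accumulated count */

uint64_t pmccntr_read(void)
{
	return shadow + perf_count; /* kvm_pmu_get_counter_value() equivalent */
}

void pmccntr_write(uint64_t guest_val)
{
	/* shadow += (s64)guest_val - current_value, as in the patch;
	 * unsigned wraparound makes the offset work for any values. */
	shadow += (uint64_t)((int64_t)guest_val - (int64_t)pmccntr_read());
}
```

After a write, reads resume from the written value and continue advancing as the perf event counts, without the emulation ever having to reset the perf event itself.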

* [PATCH v4 11/21] KVM: ARM64: Add reset and access handlers for PMCNTENSET and PMCNTENCLR register
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Since the reset values of PMCNTENSET and PMCNTENCLR are UNKNOWN, use
reset_unknown for their reset handlers. Add new cases to emulate
writing to the PMCNTENSET and PMCNTENCLR registers.

When writing to PMCNTENSET, call perf_event_enable to enable the perf
event. When writing to PMCNTENCLR, call perf_event_disable to disable
the perf event.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 52 +++++++++++++++++++++++++++++++++++++++++++----
 include/kvm/arm_pmu.h     |  4 ++++
 virt/kvm/arm/pmu.c        | 52 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 104 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 059c84c..c358ae0 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -519,6 +519,27 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 							 *vcpu_reg(vcpu, p->Rt);
 			break;
 		}
+		case PMCNTENSET_EL0: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			kvm_pmu_enable_counter(vcpu, val,
+				   vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E);
+			/* Value 1 of PMCNTENSET_EL0 and PMCNTENCLR_EL0 means
+			 * corresponding counter enabled.
+			 */
+			vcpu_sys_reg(vcpu, r->reg) |= val;
+			vcpu_sys_reg(vcpu, PMCNTENCLR_EL0) |= val;
+			break;
+		}
+		case PMCNTENCLR_EL0: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			kvm_pmu_disable_counter(vcpu, val);
+			/* Value 0 of PMCNTENSET_EL0 and PMCNTENCLR_EL0 means
+			 * corresponding counter disabled.
+			 */
+			vcpu_sys_reg(vcpu, r->reg) &= ~val;
+			vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
+			break;
+		}
 		case PMCR_EL0: {
 			/* Only update writeable bits of PMCR */
 			val = vcpu_sys_reg(vcpu, r->reg);
@@ -751,10 +772,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  access_pmu_regs, reset_pmcr, PMCR_EL0, },
 	/* PMCNTENSET_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_unknown, PMCNTENSET_EL0 },
 	/* PMCNTENCLR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_unknown, PMCNTENCLR_EL0 },
 	/* PMOVSCLR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
 	  trap_raz_wi },
@@ -1017,6 +1038,27 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
 							 *vcpu_reg(vcpu, p->Rt);
 			break;
 		}
+		case c9_PMCNTENSET: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			kvm_pmu_enable_counter(vcpu, val,
+				       vcpu_cp15(vcpu, c9_PMCR) & ARMV8_PMCR_E);
+			/* Value 1 of PMCNTENSET_EL0 and PMCNTENCLR_EL0 means
+			 * corresponding counter enabled.
+			 */
+			vcpu_cp15(vcpu, r->reg) |= val;
+			vcpu_cp15(vcpu, c9_PMCNTENCLR) |= val;
+			break;
+		}
+		case c9_PMCNTENCLR: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			kvm_pmu_disable_counter(vcpu, val);
+			/* Value 0 of PMCNTENSET_EL0 and PMCNTENCLR_EL0 means
+			 * corresponding counter disabled.
+			 */
+			vcpu_cp15(vcpu, r->reg) &= ~val;
+			vcpu_cp15(vcpu, c9_PMCNTENSET) &= ~val;
+			break;
+		}
 		case c9_PMCR: {
 			/* Only update writeable bits of PMCR */
 			val = vcpu_cp15(vcpu, r->reg);
@@ -1092,8 +1134,10 @@ static const struct sys_reg_desc cp15_regs[] = {
 	/* PMU */
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmu_cp15_regs,
 	  reset_pmcr, c9_PMCR },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmu_cp15_regs,
+	  reset_unknown_cp15, c9_PMCNTENSET },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmu_cp15_regs,
+	  reset_unknown_cp15, c9_PMCNTENCLR },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
 	  reset_unknown_cp15, c9_PMSELR },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 1908c88..53d5907 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -40,6 +40,8 @@ struct kvm_pmu {
 
 #ifdef CONFIG_KVM_ARM_PMU
 unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 				    u32 select_idx);
 #else
@@ -47,6 +49,8 @@ unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
 {
 	return 0;
 }
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val) {}
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable) {}
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 				    u32 select_idx) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 900a64c..3d9075e 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -69,6 +69,58 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
 }
 
 /**
+ * kvm_pmu_enable_counter - enable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCNTENSET register
+ * @all_enable: the value of PMCR.E
+ *
+ * Call perf_event_enable to start counting the perf event
+ */
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable)
+{
+	int i;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc;
+
+	if (!all_enable)
+		return;
+
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+		if ((val >> i) & 0x1) {
+			pmc = &pmu->pmc[i];
+			if (pmc->perf_event) {
+				perf_event_enable(pmc->perf_event);
+				if (pmc->perf_event->state
+				    != PERF_EVENT_STATE_ACTIVE)
+					kvm_debug("fail to enable event\n");
+			}
+		}
+	}
+}
+
+/**
+ * kvm_pmu_disable_counter - disable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCNTENCLR register
+ *
+ * Call perf_event_disable to stop counting the perf event
+ */
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val)
+{
+	int i;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc;
+
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+		if ((val >> i) & 0x1) {
+			pmc = &pmu->pmc[i];
+			if (pmc->perf_event)
+				perf_event_disable(pmc->perf_event);
+		}
+	}
+}
+
+/**
  * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
  * @vcpu: The vcpu pointer
  * @data: The data guest writes to PMXEVTYPER_EL0
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread
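PMCNTENSET and PMCNTENCLR in the patch above behave as a set/clear pair over one enable mask: writing 1s to SET enables the corresponding counters, writing 1s to CLR disables them, and both registers reflect the same mask. A minimal sketch of that pairing (names are illustrative; the PMCR.E global-enable gate that the patch checks before enabling is omitted here for brevity):

```c
#include <assert.h>
#include <stdint.h>

#define NR_COUNTERS 32 /* illustrative; the patch uses ARMV8_MAX_COUNTERS */

/* Model of the PMCNTENSET/PMCNTENCLR pair: one enable mask, two
 * write-1-to-act views onto it. */
static uint32_t enable_mask;
static int enabled[NR_COUNTERS]; /* stands in for per-counter perf event state */

void pmcntenset_write(uint32_t val)
{
	for (int i = 0; i < NR_COUNTERS; i++)
		if ((val >> i) & 0x1)
			enabled[i] = 1;  /* perf_event_enable() in the patch */
	enable_mask |= val;              /* visible through both SET and CLR */
}

void pmcntenclr_write(uint32_t val)
{
	for (int i = 0; i < NR_COUNTERS; i++)
		if ((val >> i) & 0x1)
			enabled[i] = 0;  /* perf_event_disable() in the patch */
	enable_mask &= ~val;
}
```

Writing 0 bits to either register is a no-op, which is why the patch only walks set bits of the written value rather than reassigning the whole mask.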

+			vcpu_cp15(vcpu, c9_PMCNTENCLR) |= val;
+			break;
+		}
+		case c9_PMCNTENCLR: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			kvm_pmu_disable_counter(vcpu, val);
+			/* Value 0 of PMCNTENSET_EL0 and PMCNTENCLR_EL0 means
+			 * corresponding counter disabled.
+			 */
+			vcpu_cp15(vcpu, r->reg) &= ~val;
+			vcpu_cp15(vcpu, c9_PMCNTENSET) &= ~val;
+			break;
+		}
 		case c9_PMCR: {
 			/* Only update writeable bits of PMCR */
 			val = vcpu_cp15(vcpu, r->reg);
@@ -1092,8 +1134,10 @@ static const struct sys_reg_desc cp15_regs[] = {
 	/* PMU */
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmu_cp15_regs,
 	  reset_pmcr, c9_PMCR },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmu_cp15_regs,
+	  reset_unknown_cp15, c9_PMCNTENSET },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmu_cp15_regs,
+	  reset_unknown_cp15, c9_PMCNTENCLR },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
 	  reset_unknown_cp15, c9_PMSELR },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 1908c88..53d5907 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -40,6 +40,8 @@ struct kvm_pmu {
 
 #ifdef CONFIG_KVM_ARM_PMU
 unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 				    u32 select_idx);
 #else
@@ -47,6 +49,8 @@ unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
 {
 	return 0;
 }
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val) {}
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable) {}
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 				    u32 select_idx) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 900a64c..3d9075e 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -69,6 +69,58 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
 }
 
 /**
+ * kvm_pmu_enable_counter - enable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCNTENSET register
+ * @all_enable: the value of PMCR.E
+ *
+ * Call perf_event_enable to start counting the perf event
+ */
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable)
+{
+	int i;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc;
+
+	if (!all_enable)
+		return;
+
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+		if ((val >> i) & 0x1) {
+			pmc = &pmu->pmc[i];
+			if (pmc->perf_event) {
+				perf_event_enable(pmc->perf_event);
+				if (pmc->perf_event->state
+				    != PERF_EVENT_STATE_ACTIVE)
+					kvm_debug("failed to enable perf event\n");
+			}
+		}
+	}
+}
+
+/**
+ * kvm_pmu_disable_counter - disable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCNTENCLR register
+ *
+ * Call perf_event_disable to stop counting the perf event
+ */
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val)
+{
+	int i;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc;
+
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+		if ((val >> i) & 0x1) {
+			pmc = &pmu->pmc[i];
+			if (pmc->perf_event)
+				perf_event_disable(pmc->perf_event);
+		}
+	}
+}
+
+/**
  * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
  * @vcpu: The vcpu pointer
  * @data: The data guest writes to PMXEVTYPER_EL0
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 142+ messages in thread

* [PATCH v4 12/21] KVM: ARM64: Add reset and access handlers for PMINTENSET and PMINTENCLR register
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Since the reset values of PMINTENSET and PMINTENCLR are UNKNOWN, use
reset_unknown as their reset handler. Add new cases to emulate writes
to the PMINTENSET and PMINTENCLR registers.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 34 ++++++++++++++++++++++++++++++----
 1 file changed, 30 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c358ae0..6d2febf 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -540,6 +540,18 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 			vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
 			break;
 		}
+		case PMINTENSET_EL1: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			vcpu_sys_reg(vcpu, r->reg) |= val;
+			vcpu_sys_reg(vcpu, PMINTENCLR_EL1) |= val;
+			break;
+		}
+		case PMINTENCLR_EL1: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			vcpu_sys_reg(vcpu, r->reg) &= ~val;
+			vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val;
+			break;
+		}
 		case PMCR_EL0: {
 			/* Only update writeable bits of PMCR */
 			val = vcpu_sys_reg(vcpu, r->reg);
@@ -729,10 +741,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
 	/* PMINTENSET_EL1 */
 	{ Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b001),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_unknown, PMINTENSET_EL1 },
 	/* PMINTENCLR_EL1 */
 	{ Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b010),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_unknown, PMINTENCLR_EL1 },
 
 	/* MAIR_EL1 */
 	{ Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
@@ -1059,6 +1071,18 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
 			vcpu_cp15(vcpu, c9_PMCNTENSET) &= ~val;
 			break;
 		}
+		case c9_PMINTENSET: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			vcpu_cp15(vcpu, r->reg) |= val;
+			vcpu_cp15(vcpu, c9_PMINTENCLR) |= val;
+			break;
+		}
+		case c9_PMINTENCLR: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			vcpu_cp15(vcpu, r->reg) &= ~val;
+			vcpu_cp15(vcpu, c9_PMINTENSET) &= ~val;
+			break;
+		}
 		case c9_PMCR: {
 			/* Only update writeable bits of PMCR */
 			val = vcpu_cp15(vcpu, r->reg);
@@ -1152,8 +1176,10 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_cp15_regs,
 	  reset_unknown_cp15, c9_PMXEVCNTR },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
-	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },
-	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pmu_cp15_regs,
+	  reset_unknown_cp15, c9_PMINTENSET },
+	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pmu_cp15_regs,
+	  reset_unknown_cp15, c9_PMINTENCLR },
 
 	{ Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
 	{ Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread


* [PATCH v4 13/21] KVM: ARM64: Add reset and access handlers for PMOVSSET and PMOVSCLR register
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Since the reset values of PMOVSSET and PMOVSCLR are UNKNOWN, use
reset_unknown as their reset handler. Add new cases to emulate writes
to the PMOVSSET and PMOVSCLR registers.

When a non-zero value is written to PMOVSSET, pend the PMU interrupt.
When the value written to PMOVSCLR equals the current register value,
meaning all overflow bits are cleared, clear the pending PMU interrupt.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 39 ++++++++++++++++++++++++++++++++++++---
 include/kvm/arm_pmu.h     |  4 ++++
 virt/kvm/arm/pmu.c        | 30 ++++++++++++++++++++++++++++++
 3 files changed, 70 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 6d2febf..e03d3b8d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -552,6 +552,21 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 			vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val;
 			break;
 		}
+		case PMOVSSET_EL0: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			kvm_pmu_overflow_set(vcpu, val);
+			vcpu_sys_reg(vcpu, r->reg) |= val;
+			vcpu_sys_reg(vcpu, PMOVSCLR_EL0) |= val;
+			break;
+		}
+		case PMOVSCLR_EL0: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			kvm_pmu_overflow_clear(vcpu, val,
+					       vcpu_sys_reg(vcpu, r->reg));
+			vcpu_sys_reg(vcpu, r->reg) &= ~val;
+			vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~val;
+			break;
+		}
 		case PMCR_EL0: {
 			/* Only update writeable bits of PMCR */
 			val = vcpu_sys_reg(vcpu, r->reg);
@@ -790,7 +805,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  access_pmu_regs, reset_unknown, PMCNTENCLR_EL0 },
 	/* PMOVSCLR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_unknown, PMOVSCLR_EL0 },
 	/* PMSWINC_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
 	  trap_raz_wi },
@@ -817,7 +832,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  trap_raz_wi },
 	/* PMOVSSET_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_unknown, PMOVSSET_EL0 },
 
 	/* TPIDR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b010),
@@ -1083,6 +1098,21 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
 			vcpu_cp15(vcpu, c9_PMINTENSET) &= ~val;
 			break;
 		}
+		case c9_PMOVSSET: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			kvm_pmu_overflow_set(vcpu, val);
+			vcpu_cp15(vcpu, r->reg) |= val;
+			vcpu_cp15(vcpu, c9_PMOVSCLR) |= val;
+			break;
+		}
+		case c9_PMOVSCLR: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			kvm_pmu_overflow_clear(vcpu, val,
+					       vcpu_cp15(vcpu, r->reg));
+			vcpu_cp15(vcpu, r->reg) &= ~val;
+			vcpu_cp15(vcpu, c9_PMOVSSET) &= ~val;
+			break;
+		}
 		case c9_PMCR: {
 			/* Only update writeable bits of PMCR */
 			val = vcpu_cp15(vcpu, r->reg);
@@ -1162,7 +1192,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 	  reset_unknown_cp15, c9_PMCNTENSET },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmu_cp15_regs,
 	  reset_unknown_cp15, c9_PMCNTENCLR },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), access_pmu_cp15_regs,
+	  reset_unknown_cp15, c9_PMOVSCLR },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
 	  reset_unknown_cp15, c9_PMSELR },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmu_cp15_regs,
@@ -1180,6 +1211,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 	  reset_unknown_cp15, c9_PMINTENSET },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pmu_cp15_regs,
 	  reset_unknown_cp15, c9_PMINTENCLR },
+	{ Op1( 0), CRn( 9), CRm(14), Op2( 3), access_pmu_cp15_regs,
+	  reset_unknown_cp15, c9_PMOVSSET },
 
 	{ Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
 	{ Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 53d5907..ff17578 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -42,6 +42,8 @@ struct kvm_pmu {
 unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
+void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val, u32 reg);
+void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 				    u32 select_idx);
 #else
@@ -51,6 +53,8 @@ unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
 }
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val) {}
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable) {}
+void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val, u32 reg) {}
+void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val) {}
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 				    u32 select_idx) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 3d9075e..5761386 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -121,6 +121,36 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val)
 }
 
 /**
+ * kvm_pmu_overflow_clear - clear PMU overflow interrupt
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMOVSCLR register
+ * @reg: the current value of PMOVSCLR register
+ */
+void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val, u32 reg)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+	/* If all overflow bits are cleared, clear interrupt pending status */
+	if (val == reg)
+		pmu->irq_pending = false;
+}
+
+/**
+ * kvm_pmu_overflow_set - set PMU overflow interrupt
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMOVSSET register
+ */
+void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+	if (val != 0) {
+		pmu->irq_pending = true;
+		kvm_vcpu_kick(vcpu);
+	}
+}
+
+/**
  * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
  * @vcpu: The vcpu pointer
  * @data: The data guest writes to PMXEVTYPER_EL0
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread

 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 				    u32 select_idx) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 3d9075e..5761386 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -121,6 +121,36 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val)
 }
 
 /**
+ * kvm_pmu_overflow_clear - clear PMU overflow interrupt
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMOVSCLR register
+ * @reg: the current value of PMOVSCLR register
+ */
+void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val, u32 reg)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+	/* If all overflow bits are cleared, clear interrupt pending status */
+	if (val == reg)
+		pmu->irq_pending = false;
+}
+
+/**
+ * kvm_pmu_overflow_set - set PMU overflow interrupt
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMOVSSET register
+ */
+void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+	if (val != 0) {
+		pmu->irq_pending = true;
+		kvm_vcpu_kick(vcpu);
+	}
+}
+
+/**
  * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
  * @vcpu: The vcpu pointer
  * @data: The data guest writes to PMXEVTYPER_EL0
-- 
2.0.4
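The PMOVSSET/PMOVSCLR handling in the hunks above reduces to a small piece of bookkeeping: any write of a non-zero value to PMOVSSET marks the overflow interrupt pending, and a write to PMOVSCLR drops the pending status only when it clears every bit that is currently set. A minimal standalone model of that logic (hypothetical, not the KVM code itself; `struct pmu_irq_model` and the function names are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of the pending-interrupt bookkeeping done by
 * kvm_pmu_overflow_set()/kvm_pmu_overflow_clear() in this patch. */
struct pmu_irq_model {
	uint32_t ovsset;   /* mirrors the PMOVSSET_EL0 shadow register */
	bool irq_pending;
};

/* Write to PMOVSSET: set overflow bits and raise the pending interrupt. */
static void overflow_set(struct pmu_irq_model *pmu, uint32_t val)
{
	pmu->ovsset |= val;
	if (val != 0)
		pmu->irq_pending = true;  /* the real code also kicks the vcpu */
}

/* Write to PMOVSCLR: the interrupt stops pending only when this write
 * clears every overflow bit that is currently set (val == reg). */
static void overflow_clear(struct pmu_irq_model *pmu, uint32_t val)
{
	if (val == pmu->ovsset)
		pmu->irq_pending = false;
	pmu->ovsset &= ~val;
}
```

Clearing only a subset of the set bits leaves the interrupt pending, which matches the `val == reg` test in kvm_pmu_overflow_clear().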

^ permalink raw reply related	[flat|nested] 142+ messages in thread

* [PATCH v4 14/21] KVM: ARM64: Add reset and access handlers for PMUSERENR register
  2015-10-30  6:21 ` Shannon Zhao
  (?)
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Since the reset value of PMUSERENR_EL0 is UNKNOWN, use reset_unknown.
The reset value of the 32-bit PMUSERENR, however, is zero, so use
reset_val_cp15 with zero as its reset handler.

Add a helper to reset CP15 registers to a specified value.
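The reset-handler mechanism this helper plugs into is simple: each table entry carries a register index and a fixed value, and the reset handler copies that value into the vcpu's register file. A toy model (the array size, index, and names are invented placeholders, not the kernel's constants):

```c
#include <assert.h>
#include <stdint.h>

#define NR_CP15_REGS 128  /* placeholder bound, stands in for the real limit */

/* Toy model of a sys_reg_desc reset handler: reset_val_cp15 copies the
 * descriptor's fixed value into the vcpu's cp15 file, so a descriptor
 * with val == 0 gives PMUSERENR its architectural reset value of zero. */
struct reg_desc {
	int reg;        /* index into the cp15 register file */
	uint32_t val;   /* value to load on vcpu reset */
};

static void reset_val_cp15(uint32_t *cp15, const struct reg_desc *r)
{
	assert(r->reg > 0 && r->reg < NR_CP15_REGS);
	cp15[r->reg] = r->val;
}
```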

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 5 +++--
 arch/arm64/kvm/sys_regs.h | 8 ++++++++
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e03d3b8d..c44c8e1 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -829,7 +829,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  access_pmu_regs, reset_unknown, PMXEVCNTR_EL0 },
 	/* PMUSERENR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_unknown, PMUSERENR_EL0 },
 	/* PMOVSSET_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
 	  access_pmu_regs, reset_unknown, PMOVSSET_EL0 },
@@ -1206,7 +1206,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 	  reset_unknown_cp15, c9_PMXEVTYPER },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_cp15_regs,
 	  reset_unknown_cp15, c9_PMXEVCNTR },
-	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), access_pmu_cp15_regs,
+	  reset_val_cp15, c9_PMUSERENR, 0 },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pmu_cp15_regs,
 	  reset_unknown_cp15, c9_PMINTENSET },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pmu_cp15_regs,
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index 8afeff7..aba997d 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -125,6 +125,14 @@ static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r
 	vcpu_sys_reg(vcpu, r->reg) = r->val;
 }
 
+static inline void reset_val_cp15(struct kvm_vcpu *vcpu,
+				  const struct sys_reg_desc *r)
+{
+	BUG_ON(!r->reg);
+	BUG_ON(r->reg >= NR_SYS_REGS);
+	vcpu_cp15(vcpu, r->reg) = r->val;
+}
+
 static inline int cmp_sys_reg(const struct sys_reg_desc *i1,
 			      const struct sys_reg_desc *i2)
 {
-- 
2.0.4





* [PATCH v4 15/21] KVM: ARM64: Add reset and access handlers for PMSWINC register
  2015-10-30  6:21 ` Shannon Zhao
  (?)
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Add an access handler which emulates writing and reading the PMSWINC
register, and add support for the software increment event.
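The behaviour this patch wires up can be modelled in isolation: for each bit set in the value written to PMSWINC, the corresponding counter is incremented only if it is enabled and programmed with event 0x0 (software increment), and a wrap of the 32-bit counter marks an overflow. A minimal sketch under those assumptions (all names are invented; this is not the KVM code):

```c
#include <assert.h>
#include <stdint.h>

#define NR_COUNTERS   31  /* ARMv8 event counters */
#define SW_INCR_EVENT 0   /* event 0x0 is the software increment event */

/* Hypothetical standalone model of the PMSWINC write handler. */
struct pmu_model {
	uint32_t evtype[NR_COUNTERS]; /* PMEVTYPERn event number */
	uint32_t evcntr[NR_COUNTERS]; /* PMEVCNTRn, 32-bit counters */
	uint32_t cntenset;            /* PMCNTENSET bit mask */
	uint32_t ovsset;              /* PMOVSSET bit mask */
};

static void pmu_software_increment(struct pmu_model *pmu, uint32_t val)
{
	for (int i = 0; i < NR_COUNTERS; i++) {
		if (!((val >> i) & 1))
			continue;
		/* Only counters programmed with the SW_INCR event and
		 * currently enabled are incremented. */
		if (pmu->evtype[i] != SW_INCR_EVENT ||
		    !((pmu->cntenset >> i) & 1))
			continue;
		pmu->evcntr[i]++;
		if (pmu->evcntr[i] == 0)        /* 32-bit wrap-around */
			pmu->ovsset |= 1u << i; /* overflow becomes pending */
	}
}
```

Since no hardware event is counted, the patch also skips creating a backing perf event for event 0x0 in kvm_pmu_set_counter_event_type().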

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 18 +++++++++++++++-
 include/kvm/arm_pmu.h     |  2 ++
 virt/kvm/arm/pmu.c        | 55 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 74 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c44c8e1..c86f8dd 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -567,6 +567,11 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 			vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~val;
 			break;
 		}
+		case PMSWINC_EL0: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			kvm_pmu_software_increment(vcpu, val);
+			break;
+		}
 		case PMCR_EL0: {
 			/* Only update writeable bits of PMCR */
 			val = vcpu_sys_reg(vcpu, r->reg);
@@ -596,6 +601,8 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 			*vcpu_reg(vcpu, p->Rt) = val;
 			break;
 		}
+		case PMSWINC_EL0:
+			return read_zero(vcpu, p);
 		case PMCR_EL0: {
 			/* PMCR.P & PMCR.C are RAZ */
 			val = vcpu_sys_reg(vcpu, r->reg)
@@ -808,7 +815,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  access_pmu_regs, reset_unknown, PMOVSCLR_EL0 },
 	/* PMSWINC_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
-	  trap_raz_wi },
+	  access_pmu_regs, reset_unknown, PMSWINC_EL0 },
 	/* PMSELR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
 	  access_pmu_regs, reset_unknown, PMSELR_EL0 },
@@ -1113,6 +1120,11 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
 			vcpu_cp15(vcpu, c9_PMOVSSET) &= ~val;
 			break;
 		}
+		case c9_PMSWINC: {
+			val = *vcpu_reg(vcpu, p->Rt);
+			kvm_pmu_software_increment(vcpu, val);
+			break;
+		}
 		case c9_PMCR: {
 			/* Only update writeable bits of PMCR */
 			val = vcpu_cp15(vcpu, r->reg);
@@ -1142,6 +1154,8 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
 			*vcpu_reg(vcpu, p->Rt) = val;
 			break;
 		}
+		case c9_PMSWINC:
+			return read_zero(vcpu, p);
 		case c9_PMCR: {
 			/* PMCR.P & PMCR.C are RAZ */
 			val = vcpu_cp15(vcpu, r->reg)
@@ -1194,6 +1208,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 	  reset_unknown_cp15, c9_PMCNTENCLR },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), access_pmu_cp15_regs,
 	  reset_unknown_cp15, c9_PMOVSCLR },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 4), access_pmu_cp15_regs,
+	  reset_unknown_cp15, c9_PMSWINC },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
 	  reset_unknown_cp15, c9_PMSELR },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmu_cp15_regs,
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index ff17578..d7de7f1 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -44,6 +44,7 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
 void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val, u32 reg);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val);
+void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u32 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 				    u32 select_idx);
 #else
@@ -55,6 +56,7 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val) {}
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable) {}
 void kvm_pmu_overflow_clear(struct kvm_vcpu *vcpu, u32 val, u32 reg) {}
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val) {}
+void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u32 val) {}
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 				    u32 select_idx) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 5761386..ae21089 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -151,6 +151,57 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val)
 }
 
 /**
+ * kvm_pmu_software_increment - do software increment
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMSWINC register
+ */
+void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u32 val)
+{
+	int i;
+	u32 type, enable, reg;
+
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+		if ((val >> i) & 0x1) {
+			if (!vcpu_mode_is_32bit(vcpu)) {
+				type = vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
+				       & ARMV8_EVTYPE_EVENT;
+				enable = vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+				if ((type == 0) && ((enable >> i) & 0x1)) {
+					vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i)++;
+					reg = vcpu_sys_reg(vcpu,
+							   PMEVCNTR0_EL0 + i);
+					if ((reg & 0xFFFFFFFF) == 0) {
+						__set_bit(i,
+			    (unsigned long *)&vcpu_sys_reg(vcpu, PMOVSSET_EL0));
+						__set_bit(i,
+			    (unsigned long *)&vcpu_sys_reg(vcpu, PMOVSCLR_EL0));
+						kvm_pmu_overflow_set(vcpu,
+					      vcpu_sys_reg(vcpu, PMOVSSET_EL0));
+					}
+				}
+			} else {
+				type = vcpu_cp15(vcpu, c14_PMEVTYPER0 + i)
+				       & ARMV8_EVTYPE_EVENT;
+				enable = vcpu_cp15(vcpu, c9_PMCNTENSET);
+				if ((type == 0) && ((enable >> i) & 0x1)) {
+					vcpu_cp15(vcpu, c14_PMEVCNTR0 + i)++;
+					reg = vcpu_cp15(vcpu,
+							c14_PMEVCNTR0 + i);
+					if ((reg & 0xFFFFFFFF) == 0) {
+						__set_bit(i,
+				(unsigned long *)&vcpu_cp15(vcpu, c9_PMOVSSET));
+						__set_bit(i,
+				(unsigned long *)&vcpu_cp15(vcpu, c9_PMOVSCLR));
+						kvm_pmu_overflow_set(vcpu,
+						  vcpu_cp15(vcpu, c9_PMOVSSET));
+					}
+				}
+			}
+		}
+	}
+}
+
+/**
  * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
  * @vcpu: The vcpu pointer
  * @data: The data guest writes to PMXEVTYPER_EL0
@@ -173,6 +224,10 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 	kvm_pmu_stop_counter(pmc);
 	eventsel = data & ARMV8_EVTYPE_EVENT;
 
+	/* The software increment event doesn't need a backing perf event */
+	if (eventsel == 0)
+		return;
+
 	memset(&attr, 0, sizeof(struct perf_event_attr));
 	attr.type = PERF_TYPE_RAW;
 	attr.size = sizeof(attr);
-- 
2.0.4





* [PATCH v4 16/21] KVM: ARM64: Add access handlers for PMEVCNTRn and PMEVTYPERn register
  2015-10-30  6:21 ` Shannon Zhao
  (?)
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Add access handlers which emulate writing and reading the PMEVCNTRn
and PMEVTYPERn registers.
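The PMU_PMEVCNTR_EL0(n)/PMU_PMEVTYPER_EL0(n) macros in this patch pack the counter index n into the system-register encoding: Op2 takes the low three bits of n and CRm[1:0] the next two bits, on top of a fixed CRm base (0b1000 for the counters, 0b1100 for the type registers). A small sketch of that arithmetic for the counter registers (helper names are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* CRm/Op2 packing used by the PMU_PMEVCNTR_EL0(n) macro:
 * CRm = 0b10nn from the top two bits of n, Op2 = low three bits of n. */
static uint8_t pmevcntr_crm(int n) { return 0x8 | ((n >> 3) & 0x3); }
static uint8_t pmevcntr_op2(int n) { return n & 0x7; }

/* Inverse mapping: recover the counter index from (CRm, Op2). */
static int pmevcntr_index(uint8_t crm, uint8_t op2)
{
	return ((crm & 0x3) << 3) | op2;
}
```

With 31 event counters (n = 0..30), CRm ranges over 0b1000..0b1011 and the encoding round-trips, which is why the table can be generated mechanically by the macro.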

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 164 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 164 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c86f8dd..50bf3fb 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -634,6 +634,20 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 	{ Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b111),	\
 	  trap_wcr, reset_wcr, n, 0,  get_wcr, set_wcr }
 
+/* Macro to expand the PMEVCNTRn_EL0 register */
+#define PMU_PMEVCNTR_EL0(n)						\
+	/* PMEVCNTRn_EL0 */						\
+	{ Op0(0b11), Op1(0b011), CRn(0b1110),				\
+	  CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)),		\
+	  access_pmu_regs, reset_unknown, (PMEVCNTR0_EL0 + n), }
+
+/* Macro to expand the PMEVTYPERn_EL0 register */
+#define PMU_PMEVTYPER_EL0(n)						\
+	/* PMEVTYPERn_EL0 */						\
+	{ Op0(0b11), Op1(0b011), CRn(0b1110),				\
+	  CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)),		\
+	  access_pmu_regs, reset_unknown, (PMEVTYPER0_EL0 + n), }
+
 /*
  * Architected system registers.
  * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -848,6 +862,74 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
 	  NULL, reset_unknown, TPIDRRO_EL0 },
 
+	/* PMEVCNTRn_EL0 */
+	PMU_PMEVCNTR_EL0(0),
+	PMU_PMEVCNTR_EL0(1),
+	PMU_PMEVCNTR_EL0(2),
+	PMU_PMEVCNTR_EL0(3),
+	PMU_PMEVCNTR_EL0(4),
+	PMU_PMEVCNTR_EL0(5),
+	PMU_PMEVCNTR_EL0(6),
+	PMU_PMEVCNTR_EL0(7),
+	PMU_PMEVCNTR_EL0(8),
+	PMU_PMEVCNTR_EL0(9),
+	PMU_PMEVCNTR_EL0(10),
+	PMU_PMEVCNTR_EL0(11),
+	PMU_PMEVCNTR_EL0(12),
+	PMU_PMEVCNTR_EL0(13),
+	PMU_PMEVCNTR_EL0(14),
+	PMU_PMEVCNTR_EL0(15),
+	PMU_PMEVCNTR_EL0(16),
+	PMU_PMEVCNTR_EL0(17),
+	PMU_PMEVCNTR_EL0(18),
+	PMU_PMEVCNTR_EL0(19),
+	PMU_PMEVCNTR_EL0(20),
+	PMU_PMEVCNTR_EL0(21),
+	PMU_PMEVCNTR_EL0(22),
+	PMU_PMEVCNTR_EL0(23),
+	PMU_PMEVCNTR_EL0(24),
+	PMU_PMEVCNTR_EL0(25),
+	PMU_PMEVCNTR_EL0(26),
+	PMU_PMEVCNTR_EL0(27),
+	PMU_PMEVCNTR_EL0(28),
+	PMU_PMEVCNTR_EL0(29),
+	PMU_PMEVCNTR_EL0(30),
+	/* PMEVTYPERn_EL0 */
+	PMU_PMEVTYPER_EL0(0),
+	PMU_PMEVTYPER_EL0(1),
+	PMU_PMEVTYPER_EL0(2),
+	PMU_PMEVTYPER_EL0(3),
+	PMU_PMEVTYPER_EL0(4),
+	PMU_PMEVTYPER_EL0(5),
+	PMU_PMEVTYPER_EL0(6),
+	PMU_PMEVTYPER_EL0(7),
+	PMU_PMEVTYPER_EL0(8),
+	PMU_PMEVTYPER_EL0(9),
+	PMU_PMEVTYPER_EL0(10),
+	PMU_PMEVTYPER_EL0(11),
+	PMU_PMEVTYPER_EL0(12),
+	PMU_PMEVTYPER_EL0(13),
+	PMU_PMEVTYPER_EL0(14),
+	PMU_PMEVTYPER_EL0(15),
+	PMU_PMEVTYPER_EL0(16),
+	PMU_PMEVTYPER_EL0(17),
+	PMU_PMEVTYPER_EL0(18),
+	PMU_PMEVTYPER_EL0(19),
+	PMU_PMEVTYPER_EL0(20),
+	PMU_PMEVTYPER_EL0(21),
+	PMU_PMEVTYPER_EL0(22),
+	PMU_PMEVTYPER_EL0(23),
+	PMU_PMEVTYPER_EL0(24),
+	PMU_PMEVTYPER_EL0(25),
+	PMU_PMEVTYPER_EL0(26),
+	PMU_PMEVTYPER_EL0(27),
+	PMU_PMEVTYPER_EL0(28),
+	PMU_PMEVTYPER_EL0(29),
+	PMU_PMEVTYPER_EL0(30),
+	/* PMCCFILTR_EL0 */
+	{ Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b1111), Op2(0b111),
+	  access_pmu_regs, reset_unknown, PMCCFILTR_EL0, },
+
 	/* DACR32_EL2 */
 	{ Op0(0b11), Op1(0b100), CRn(0b0011), CRm(0b0000), Op2(0b000),
 	  NULL, reset_unknown, DACR32_EL2 },
@@ -1172,6 +1254,20 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+/* Macro to expand the PMEVCNTRn register */
+#define PMU_PMEVCNTR(n)							\
+	/* PMEVCNTRn */							\
+	{ Op1(0), CRn(0b1110),						\
+	  CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)),		\
+	  access_pmu_cp15_regs, reset_unknown_cp15, (c14_PMEVCNTR0 + n), }
+
+/* Macro to expand the PMEVTYPERn register */
+#define PMU_PMEVTYPER(n)						\
+	/* PMEVTYPERn */						\
+	{ Op1(0), CRn(0b1110),						\
+	  CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)),		\
+	  access_pmu_cp15_regs, reset_unknown_cp15, (c14_PMEVTYPER0 + n), }
+
 /*
  * Trapped cp15 registers. TTBR0/TTBR1 get a double encoding,
  * depending on the way they are accessed (as a 32bit or a 64bit
@@ -1240,6 +1336,74 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn(12), CRm(12), Op2( 5), trap_raz_wi },
 
 	{ Op1( 0), CRn(13), CRm( 0), Op2( 1), access_vm_reg, NULL, c13_CID },
+
+	/* PMEVCNTRn */
+	PMU_PMEVCNTR(0),
+	PMU_PMEVCNTR(1),
+	PMU_PMEVCNTR(2),
+	PMU_PMEVCNTR(3),
+	PMU_PMEVCNTR(4),
+	PMU_PMEVCNTR(5),
+	PMU_PMEVCNTR(6),
+	PMU_PMEVCNTR(7),
+	PMU_PMEVCNTR(8),
+	PMU_PMEVCNTR(9),
+	PMU_PMEVCNTR(10),
+	PMU_PMEVCNTR(11),
+	PMU_PMEVCNTR(12),
+	PMU_PMEVCNTR(13),
+	PMU_PMEVCNTR(14),
+	PMU_PMEVCNTR(15),
+	PMU_PMEVCNTR(16),
+	PMU_PMEVCNTR(17),
+	PMU_PMEVCNTR(18),
+	PMU_PMEVCNTR(19),
+	PMU_PMEVCNTR(20),
+	PMU_PMEVCNTR(21),
+	PMU_PMEVCNTR(22),
+	PMU_PMEVCNTR(23),
+	PMU_PMEVCNTR(24),
+	PMU_PMEVCNTR(25),
+	PMU_PMEVCNTR(26),
+	PMU_PMEVCNTR(27),
+	PMU_PMEVCNTR(28),
+	PMU_PMEVCNTR(29),
+	PMU_PMEVCNTR(30),
+	/* PMEVTYPERn */
+	PMU_PMEVTYPER(0),
+	PMU_PMEVTYPER(1),
+	PMU_PMEVTYPER(2),
+	PMU_PMEVTYPER(3),
+	PMU_PMEVTYPER(4),
+	PMU_PMEVTYPER(5),
+	PMU_PMEVTYPER(6),
+	PMU_PMEVTYPER(7),
+	PMU_PMEVTYPER(8),
+	PMU_PMEVTYPER(9),
+	PMU_PMEVTYPER(10),
+	PMU_PMEVTYPER(11),
+	PMU_PMEVTYPER(12),
+	PMU_PMEVTYPER(13),
+	PMU_PMEVTYPER(14),
+	PMU_PMEVTYPER(15),
+	PMU_PMEVTYPER(16),
+	PMU_PMEVTYPER(17),
+	PMU_PMEVTYPER(18),
+	PMU_PMEVTYPER(19),
+	PMU_PMEVTYPER(20),
+	PMU_PMEVTYPER(21),
+	PMU_PMEVTYPER(22),
+	PMU_PMEVTYPER(23),
+	PMU_PMEVTYPER(24),
+	PMU_PMEVTYPER(25),
+	PMU_PMEVTYPER(26),
+	PMU_PMEVTYPER(27),
+	PMU_PMEVTYPER(28),
+	PMU_PMEVTYPER(29),
+	PMU_PMEVTYPER(30),
+	/* PMCCFILTR */
+	{ Op1(0), CRn(14), CRm(15), Op2(7), access_pmu_cp15_regs,
+	  reset_val_cp15, c14_PMCCFILTR, 0 },
 };
 
 static const struct sys_reg_desc cp15_64_regs[] = {
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread

* [PATCH v4 17/21] KVM: ARM64: Add helper to handle PMCR register bits
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:21   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:21 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

According to the ARMv8 spec, writing 1 to PMCR.E enables all counters
selected by PMCNTENSET, while writing 0 to PMCR.E disables all
counters. Writing 1 to PMCR.P resets all event counters, excluding
PMCCNTR, to zero, and writing 1 to PMCR.C resets PMCCNTR to zero.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c |  2 ++
 include/kvm/arm_pmu.h     |  2 ++
 virt/kvm/arm/pmu.c        | 50 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 54 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 50bf3fb..a0bb9d2 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -578,6 +578,7 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
 			val &= ~ARMV8_PMCR_MASK;
 			val |= *vcpu_reg(vcpu, p->Rt) & ARMV8_PMCR_MASK;
 			vcpu_sys_reg(vcpu, r->reg) = val;
+			kvm_pmu_handle_pmcr(vcpu, val);
 			break;
 		}
 		case PMCEID0_EL0:
@@ -1213,6 +1214,7 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
 			val &= ~ARMV8_PMCR_MASK;
 			val |= *vcpu_reg(vcpu, p->Rt) & ARMV8_PMCR_MASK;
 			vcpu_cp15(vcpu, r->reg) = val;
+			kvm_pmu_handle_pmcr(vcpu, val);
 			break;
 		}
 		case c9_PMCEID0:
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index d7de7f1..acd025a 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -47,6 +47,7 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u32 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 				    u32 select_idx);
+void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
 #else
 unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
 {
@@ -59,6 +60,7 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u32 val) {}
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u32 val) {}
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 				    u32 select_idx) {}
+void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val) {}
 #endif
 
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index ae21089..11d1bfb 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -121,6 +121,56 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val)
 }
 
 /**
+ * kvm_pmu_handle_pmcr - handle PMCR register
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCR register
+ */
+void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc;
+	u32 enable;
+	int i;
+
+	if (val & ARMV8_PMCR_E) {
+		if (!vcpu_mode_is_32bit(vcpu))
+			enable = vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+		else
+			enable = vcpu_cp15(vcpu, c9_PMCNTENSET);
+
+		kvm_pmu_enable_counter(vcpu, enable, true);
+	} else
+		kvm_pmu_disable_counter(vcpu, 0xffffffffUL);
+
+	if (val & ARMV8_PMCR_C) {
+		pmc = &pmu->pmc[ARMV8_MAX_COUNTERS - 1];
+		if (pmc->perf_event)
+			local64_set(&pmc->perf_event->count, 0);
+		if (!vcpu_mode_is_32bit(vcpu))
+			vcpu_sys_reg(vcpu, PMCCNTR_EL0) = 0;
+		else
+			vcpu_cp15(vcpu, c9_PMCCNTR) = 0;
+	}
+
+	if (val & ARMV8_PMCR_P) {
+		for (i = 0; i < ARMV8_MAX_COUNTERS - 1; i++) {
+			pmc = &pmu->pmc[i];
+			if (pmc->perf_event)
+				local64_set(&pmc->perf_event->count, 0);
+			if (!vcpu_mode_is_32bit(vcpu))
+				vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = 0;
+			else
+				vcpu_cp15(vcpu, c14_PMEVCNTR0 + i) = 0;
+		}
+	}
+
+	if (val & ARMV8_PMCR_LC) {
+		pmc = &pmu->pmc[ARMV8_MAX_COUNTERS - 1];
+		pmc->bitmask = 0xffffffffffffffffUL;
+	}
+}
+
+/**
  * kvm_pmu_overflow_clear - clear PMU overflow interrupt
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMOVSCLR register
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread

* [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:22   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:22 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

When calling perf_event_create_kernel_counter() to create a perf
event, assign an overflow handler. Then when the perf event overflows,
set irq_pending and call kvm_vcpu_kick() to sync the interrupt.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm/kvm/arm.c    |  4 +++
 include/kvm/arm_pmu.h |  4 +++
 virt/kvm/arm/pmu.c    | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 83 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 78b2869..9c0fec4 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -28,6 +28,7 @@
 #include <linux/sched.h>
 #include <linux/kvm.h>
 #include <trace/events/kvm.h>
+#include <kvm/arm_pmu.h>
 
 #define CREATE_TRACE_POINTS
 #include "trace.h"
@@ -551,6 +552,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm)) {
 			local_irq_enable();
+			kvm_pmu_sync_hwstate(vcpu);
 			kvm_vgic_sync_hwstate(vcpu);
 			preempt_enable();
 			kvm_timer_sync_hwstate(vcpu);
@@ -598,6 +600,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		kvm_guest_exit();
 		trace_kvm_exit(kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
 
+		kvm_pmu_post_sync_hwstate(vcpu);
+
 		kvm_vgic_sync_hwstate(vcpu);
 
 		preempt_enable();
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index acd025a..5e7f943 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -39,6 +39,8 @@ struct kvm_pmu {
 };
 
 #ifdef CONFIG_KVM_ARM_PMU
+void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
+void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu);
 unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
@@ -49,6 +51,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 				    u32 select_idx);
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
 #else
+void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
+void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu) {}
 unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
 {
 	return 0;
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 11d1bfb..6d48d9a 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -21,6 +21,7 @@
 #include <linux/perf_event.h>
 #include <asm/kvm_emulate.h>
 #include <kvm/arm_pmu.h>
+#include <kvm/arm_vgic.h>
 
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
@@ -69,6 +70,78 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
 }
 
 /**
+ * kvm_pmu_sync_hwstate - sync pmu state for cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Inject virtual PMU IRQ if IRQ is pending for this cpu.
+ */
+void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	u32 overflow;
+
+	if (!vcpu_mode_is_32bit(vcpu))
+		overflow = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+	else
+		overflow = vcpu_cp15(vcpu, c9_PMOVSSET);
+
+	if ((pmu->irq_pending || overflow != 0) && (pmu->irq_num != -1))
+		kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num, 1);
+
+	pmu->irq_pending = false;
+}
+
+/**
+ * kvm_pmu_post_sync_hwstate - post sync pmu state for cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Inject virtual PMU IRQ if IRQ is pending for this cpu when back from guest.
+ */
+void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+	if (pmu->irq_pending && (pmu->irq_num != -1))
+		kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num, 1);
+
+	pmu->irq_pending = false;
+}
+
+/*
+ * When the perf event overflows, set irq_pending and call kvm_vcpu_kick() to
+ * inject the interrupt.
+ */
+static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
+				  struct perf_sample_data *data,
+				  struct pt_regs *regs)
+{
+	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
+	struct kvm_vcpu *vcpu = pmc->vcpu;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	int idx = pmc->idx;
+
+	if (!vcpu_mode_is_32bit(vcpu)) {
+		if ((vcpu_sys_reg(vcpu, PMINTENSET_EL1) >> idx) & 0x1) {
+			__set_bit(idx,
+			    (unsigned long *)&vcpu_sys_reg(vcpu, PMOVSSET_EL0));
+			__set_bit(idx,
+			    (unsigned long *)&vcpu_sys_reg(vcpu, PMOVSCLR_EL0));
+			pmu->irq_pending = true;
+			kvm_vcpu_kick(vcpu);
+		}
+	} else {
+		if ((vcpu_cp15(vcpu, c9_PMINTENSET) >> idx) & 0x1) {
+			__set_bit(idx,
+				(unsigned long *)&vcpu_cp15(vcpu, c9_PMOVSSET));
+			__set_bit(idx,
+				(unsigned long *)&vcpu_cp15(vcpu, c9_PMOVSCLR));
+			pmu->irq_pending = true;
+			kvm_vcpu_kick(vcpu);
+		}
+	}
+}
+
+/**
  * kvm_pmu_enable_counter - enable selected PMU counter
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMCNTENSET register
@@ -293,7 +366,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 	/* The initial sample period (overflow count) of an event. */
 	attr.sample_period = (-counter) & pmc->bitmask;
 
-	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+	event = perf_event_create_kernel_counter(&attr, -1, current,
+						 kvm_pmu_perf_overflow, pmc);
 	if (IS_ERR(event)) {
 		printk_once("kvm: pmu event creation failed %ld\n",
 			    PTR_ERR(event));
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread
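The two-stage flow in this patch (overflow handler marks the counter and sets irq_pending; the sync path injects the IRQ on the next entry/exit) can be modelled in a few lines. This is a minimal userspace sketch, not kernel code: `toy_pmu`, `toy_overflow` and `toy_sync` are invented stand-ins, and `kvm_vgic_inject_irq()` is replaced by a simple counter.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the vcpu PMU state touched by this patch. */
struct toy_pmu {
	uint32_t pmovsset;   /* overflow status bits (PMOVSSET) */
	uint32_t pmintenset; /* per-counter overflow IRQ enable (PMINTENSET) */
	bool irq_pending;
	int injected;        /* stand-in for kvm_vgic_inject_irq() calls */
};

/* Mirrors kvm_pmu_perf_overflow(): only record the overflow and request an
 * injection if the guest enabled the overflow interrupt for this counter. */
static void toy_overflow(struct toy_pmu *pmu, int idx)
{
	if ((pmu->pmintenset >> idx) & 0x1) {
		pmu->pmovsset |= 1u << idx;
		pmu->irq_pending = true; /* kvm_vcpu_kick() would follow */
	}
}

/* Mirrors kvm_pmu_sync_hwstate(): inject when an injection was requested or
 * any overflow status bit is still set, then clear the pending flag. */
static void toy_sync(struct toy_pmu *pmu)
{
	if (pmu->irq_pending || pmu->pmovsset != 0)
		pmu->injected++;
	pmu->irq_pending = false;
}
```

Note that because PMOVSSET stays set until the guest clears it, `toy_sync` keeps injecting on every call, matching the level-style behaviour of the patch.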

* [PATCH v4 19/21] KVM: ARM64: Reset PMU state when resetting vcpu
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:22   ` Shannon Zhao
  0 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:22 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

When resetting the vcpu, the PMU state needs to be reset to its initial status.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/reset.c |  3 +++
 include/kvm/arm_pmu.h  |  2 ++
 virt/kvm/arm/pmu.c     | 19 +++++++++++++++++++
 3 files changed, 24 insertions(+)

diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 91cf535..4da7f6c 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -120,6 +120,9 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	/* Reset system registers */
 	kvm_reset_sys_regs(vcpu);
 
+	/* Reset PMU */
+	kvm_pmu_vcpu_reset(vcpu);
+
 	/* Reset timer */
 	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
 }
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 5e7f943..e708c49 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -39,6 +39,7 @@ struct kvm_pmu {
 };
 
 #ifdef CONFIG_KVM_ARM_PMU
+void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
 void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu);
 unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
@@ -51,6 +52,7 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 				    u32 select_idx);
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
 #else
+void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
 void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
 void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu) {}
 unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 6d48d9a..84720a2 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -70,6 +70,25 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
 }
 
 /**
+ * kvm_pmu_vcpu_reset - reset pmu state for cpu
+ * @vcpu: The vcpu pointer
+ *
+ */
+void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
+{
+	int i;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+		kvm_pmu_stop_counter(&pmu->pmc[i]);
+		pmu->pmc[i].idx = i;
+		pmu->pmc[i].vcpu = vcpu;
+		pmu->pmc[i].bitmask = 0xffffffffUL;
+	}
+	pmu->irq_pending = false;
+}
+
+/**
  * kvm_pmu_sync_hwstate - sync pmu state for cpu
  * @vcpu: The vcpu pointer
  *
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread
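The reset loop above is small enough to model directly. A minimal userspace sketch, assuming `kvm_pmu_stop_counter()` is reduced to clearing a running flag and with `TOY_MAX_COUNTERS` standing in for `ARMV8_MAX_COUNTERS`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_MAX_COUNTERS 32 /* stand-in for ARMV8_MAX_COUNTERS */

/* Simplified per-counter state established by the reset path. */
struct toy_pmc {
	int idx;
	uint64_t bitmask;
	bool running;
};

struct toy_pmu {
	struct toy_pmc pmc[TOY_MAX_COUNTERS];
	bool irq_pending;
};

/* Mirrors kvm_pmu_vcpu_reset(): stop every counter, rebind its index and
 * restore the 32-bit wrap mask, then drop any pending overflow interrupt. */
static void toy_pmu_vcpu_reset(struct toy_pmu *pmu)
{
	int i;

	for (i = 0; i < TOY_MAX_COUNTERS; i++) {
		pmu->pmc[i].running = false;        /* kvm_pmu_stop_counter() */
		pmu->pmc[i].idx = i;
		pmu->pmc[i].bitmask = 0xffffffffUL; /* counters are 32-bit at reset */
	}
	pmu->irq_pending = false;
}
```

The `0xffffffffUL` mask matches the patch: after reset every counter wraps as a 32-bit counter until the guest reconfigures it.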

* [PATCH v4 20/21] KVM: ARM64: Free perf event of PMU when destroying vcpu
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:22   ` Shannon Zhao
  0 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:22 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

When KVM frees a VCPU, it also needs to free the PMU's perf_events.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm/kvm/arm.c    |  1 +
 include/kvm/arm_pmu.h |  2 ++
 virt/kvm/arm/pmu.c    | 21 +++++++++++++++++++++
 3 files changed, 24 insertions(+)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 9c0fec4..90ddb93 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -259,6 +259,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
 	kvm_mmu_free_memory_caches(vcpu);
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_vgic_vcpu_destroy(vcpu);
+	kvm_pmu_vcpu_destroy(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, vcpu);
 }
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index e708c49..f2cd8d9 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -40,6 +40,7 @@ struct kvm_pmu {
 
 #ifdef CONFIG_KVM_ARM_PMU
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
 void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu);
 unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
@@ -53,6 +54,7 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
 #else
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {}
 void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
 void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu) {}
 unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 84720a2..d78ce7b 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -89,6 +89,27 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 }
 
 /**
+ * kvm_pmu_vcpu_destroy - free perf event of PMU for cpu
+ * @vcpu: The vcpu pointer
+ *
+ */
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
+{
+	int i;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+		struct kvm_pmc *pmc = &pmu->pmc[i];
+
+		if (pmc->perf_event) {
+			perf_event_disable(pmc->perf_event);
+			perf_event_release_kernel(pmc->perf_event);
+			pmc->perf_event = NULL;
+		}
+	}
+}
+
+/**
  * kvm_pmu_sync_hwstate - sync pmu state for cpu
  * @vcpu: The vcpu pointer
  *
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread
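The destroy path's key property is that it releases each counter's backing event exactly once and NULLs the pointer, so a repeated destroy is a harmless no-op. A userspace sketch of that pattern, with `free()` standing in for the `perf_event_disable()`/`perf_event_release_kernel()` pair (all `toy_` names are invented):

```c
#include <assert.h>
#include <stdlib.h>

#define TOY_MAX_COUNTERS 32 /* stand-in for ARMV8_MAX_COUNTERS */

struct toy_pmc {
	int *event; /* stand-in for pmc->perf_event */
};

struct toy_pmu {
	struct toy_pmc pmc[TOY_MAX_COUNTERS];
};

/* Mirrors kvm_pmu_vcpu_destroy(): release each live event and clear the
 * pointer so a second destroy cannot double-free. */
static void toy_pmu_vcpu_destroy(struct toy_pmu *pmu)
{
	int i;

	for (i = 0; i < TOY_MAX_COUNTERS; i++) {
		if (pmu->pmc[i].event) {
			free(pmu->pmc[i].event);
			pmu->pmc[i].event = NULL; /* guard against double free */
		}
	}
}
```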

* [PATCH v4 21/21] KVM: ARM64: Add a new kvm ARM PMU device
  2015-10-30  6:21 ` Shannon Zhao
@ 2015-10-30  6:22   ` Shannon Zhao
  0 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-30  6:22 UTC (permalink / raw)
  To: kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, shannon.zhao,
	peter.huangpeng, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Add a new kvm device type KVM_DEV_TYPE_ARM_PMU_V3 for ARM PMU. Implement
the kvm_device_ops for it.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 Documentation/virtual/kvm/devices/arm-pmu.txt | 15 +++++
 arch/arm64/include/uapi/asm/kvm.h             |  3 +
 include/linux/kvm_host.h                      |  1 +
 include/uapi/linux/kvm.h                      |  2 +
 virt/kvm/arm/pmu.c                            | 92 +++++++++++++++++++++++++++
 virt/kvm/arm/vgic.c                           |  8 +++
 virt/kvm/arm/vgic.h                           |  1 +
 virt/kvm/kvm_main.c                           |  4 ++
 8 files changed, 126 insertions(+)
 create mode 100644 Documentation/virtual/kvm/devices/arm-pmu.txt

diff --git a/Documentation/virtual/kvm/devices/arm-pmu.txt b/Documentation/virtual/kvm/devices/arm-pmu.txt
new file mode 100644
index 0000000..49481c4
--- /dev/null
+++ b/Documentation/virtual/kvm/devices/arm-pmu.txt
@@ -0,0 +1,15 @@
+ARM Virtual Performance Monitor Unit (vPMU)
+===========================================
+
+Device types supported:
+  KVM_DEV_TYPE_ARM_PMU_V3         ARM Performance Monitor Unit v3
+
+One PMU instance is instantiated per VCPU through this API.
+
+Groups:
+  KVM_DEV_ARM_PMU_GRP_IRQ
+  Attributes:
+    A value describing the interrupt number of the PMU overflow interrupt.
+
+  Errors:
+    -EINVAL: Value set is out of the expected range
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 0cd7b59..1309a93 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -204,6 +204,9 @@ struct kvm_arch_memory_slot {
 #define KVM_DEV_ARM_VGIC_GRP_CTRL	4
 #define   KVM_DEV_ARM_VGIC_CTRL_INIT	0
 
+/* Device Control API: ARM PMU */
+#define KVM_DEV_ARM_PMU_GRP_IRQ		0
+
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_TYPE_SHIFT		24
 #define KVM_ARM_IRQ_TYPE_MASK		0xff
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 1bef9e2..f6be696 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1122,6 +1122,7 @@ extern struct kvm_device_ops kvm_mpic_ops;
 extern struct kvm_device_ops kvm_xics_ops;
 extern struct kvm_device_ops kvm_arm_vgic_v2_ops;
 extern struct kvm_device_ops kvm_arm_vgic_v3_ops;
+extern struct kvm_device_ops kvm_arm_pmu_ops;
 
 #ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
 
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index a9256f0..f41e6b6 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1025,6 +1025,8 @@ enum kvm_device_type {
 #define KVM_DEV_TYPE_FLIC		KVM_DEV_TYPE_FLIC
 	KVM_DEV_TYPE_ARM_VGIC_V3,
 #define KVM_DEV_TYPE_ARM_VGIC_V3	KVM_DEV_TYPE_ARM_VGIC_V3
+	KVM_DEV_TYPE_ARM_PMU_V3,
+#define	KVM_DEV_TYPE_ARM_PMU_V3		KVM_DEV_TYPE_ARM_PMU_V3
 	KVM_DEV_TYPE_MAX,
 };
 
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index d78ce7b..0a00d04 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -19,10 +19,13 @@
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
 #include <linux/perf_event.h>
+#include <linux/uaccess.h>
 #include <asm/kvm_emulate.h>
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
 
+#include "vgic.h"
+
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
  * @vcpu: The vcpu pointer
@@ -416,3 +419,92 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
 
 	pmc->perf_event = event;
 }
+
+static int kvm_arm_pmu_set_irq(struct kvm *kvm, int irq)
+{
+	int j;
+	struct kvm_vcpu *vcpu;
+
+	kvm_for_each_vcpu(j, vcpu, kvm) {
+		struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+		kvm_debug("Set kvm ARM PMU irq: %d\n", irq);
+		pmu->irq_num = irq;
+		vgic_dist_irq_set_cfg(vcpu, irq, true);
+	}
+
+	return 0;
+}
+
+static int kvm_arm_pmu_create(struct kvm_device *dev, u32 type)
+{
+	int i, j;
+	struct kvm_vcpu *vcpu;
+	struct kvm *kvm = dev->kvm;
+
+	kvm_for_each_vcpu(j, vcpu, kvm) {
+		struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+		memset(pmu, 0, sizeof(*pmu));
+		for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+			pmu->pmc[i].idx = i;
+			pmu->pmc[i].vcpu = vcpu;
+			pmu->pmc[i].bitmask = 0xffffffffUL;
+		}
+		pmu->irq_num = -1;
+	}
+
+	return 0;
+}
+
+static void kvm_arm_pmu_destroy(struct kvm_device *dev)
+{
+	kfree(dev);
+}
+
+static int kvm_arm_pmu_set_attr(struct kvm_device *dev,
+				struct kvm_device_attr *attr)
+{
+	switch (attr->group) {
+	case KVM_DEV_ARM_PMU_GRP_IRQ: {
+		int __user *uaddr = (int __user *)(long)attr->addr;
+		int reg;
+
+		if (get_user(reg, uaddr))
+			return -EFAULT;
+
+		if (reg < VGIC_NR_SGIS || reg > dev->kvm->arch.vgic.nr_irqs)
+			return -EINVAL;
+
+		return kvm_arm_pmu_set_irq(dev->kvm, reg);
+	}
+	}
+
+	return -ENXIO;
+}
+
+static int kvm_arm_pmu_get_attr(struct kvm_device *dev,
+				struct kvm_device_attr *attr)
+{
+	return 0;
+}
+
+static int kvm_arm_pmu_has_attr(struct kvm_device *dev,
+				struct kvm_device_attr *attr)
+{
+	switch (attr->group) {
+	case KVM_DEV_ARM_PMU_GRP_IRQ:
+		return 0;
+	}
+
+	return -ENXIO;
+}
+
+struct kvm_device_ops kvm_arm_pmu_ops = {
+	.name = "kvm-arm-pmu",
+	.create = kvm_arm_pmu_create,
+	.destroy = kvm_arm_pmu_destroy,
+	.set_attr = kvm_arm_pmu_set_attr,
+	.get_attr = kvm_arm_pmu_get_attr,
+	.has_attr = kvm_arm_pmu_has_attr,
+};
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 66c6616..8e00987 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -380,6 +380,14 @@ void vgic_dist_irq_clear_pending(struct kvm_vcpu *vcpu, int irq)
 	vgic_bitmap_set_irq_val(&dist->irq_pending, vcpu->vcpu_id, irq, 0);
 }
 
+void vgic_dist_irq_set_cfg(struct kvm_vcpu *vcpu, int irq, bool level)
+{
+	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
+
+	vgic_bitmap_set_irq_val(&dist->irq_cfg, vcpu->vcpu_id, irq,
+				level ? VGIC_CFG_LEVEL : VGIC_CFG_EDGE);
+}
+
 static void vgic_cpu_irq_set(struct kvm_vcpu *vcpu, int irq)
 {
 	if (irq < VGIC_NR_PRIVATE_IRQS)
diff --git a/virt/kvm/arm/vgic.h b/virt/kvm/arm/vgic.h
index 0df74cb..eb814f5 100644
--- a/virt/kvm/arm/vgic.h
+++ b/virt/kvm/arm/vgic.h
@@ -49,6 +49,7 @@ u32 *vgic_bytemap_get_reg(struct vgic_bytemap *x, int cpuid, u32 offset);
 
 void vgic_dist_irq_set_pending(struct kvm_vcpu *vcpu, int irq);
 void vgic_dist_irq_clear_pending(struct kvm_vcpu *vcpu, int irq);
+void vgic_dist_irq_set_cfg(struct kvm_vcpu *vcpu, int irq, bool level);
 void vgic_cpu_irq_clear(struct kvm_vcpu *vcpu, int irq);
 void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
 			     int irq, int val);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8db1d93..5decfb5 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2641,6 +2641,10 @@ static struct kvm_device_ops *kvm_device_ops_table[KVM_DEV_TYPE_MAX] = {
 #ifdef CONFIG_KVM_XICS
 	[KVM_DEV_TYPE_XICS]		= &kvm_xics_ops,
 #endif
+
+#ifdef CONFIG_KVM_ARM_PMU
+	[KVM_DEV_TYPE_ARM_PMU_V3]	= &kvm_arm_pmu_ops,
+#endif
 };
 
 int kvm_register_device_ops(struct kvm_device_ops *ops, u32 type)
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-10-30  6:22   ` Shannon Zhao
  (?)
@ 2015-10-30 12:08     ` kbuild test robot
  -1 siblings, 0 replies; 142+ messages in thread
From: kbuild test robot @ 2015-10-30 12:08 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kbuild-all, kvmarm, linux-arm-kernel, kvm, marc.zyngier,
	christoffer.dall, will.deacon, alex.bennee, wei, cov,
	shannon.zhao, peter.huangpeng, zhaoshenglong

[-- Attachment #1: Type: text/plain, Size: 1810 bytes --]

Hi Shannon,

[auto build test ERROR on kvm/linux-next -- if it's inappropriate base, please suggest rules for selecting the more suitable base]

url:    https://github.com/0day-ci/linux/commits/Shannon-Zhao/KVM-ARM64-Add-guest-PMU-support/20151030-143148
config: arm-axm55xx_defconfig (attached as .config)
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=arm 

All errors (new ones prefixed by >>):

   In file included from arch/arm/kvm/arm.c:31:0:
>> include/kvm/arm_pmu.h:22:21: fatal error: asm/pmu.h: No such file or directory
    #include <asm/pmu.h>
                        ^
   compilation terminated.

vim +22 include/kvm/arm_pmu.h

219856f5 Shannon Zhao 2015-10-30  16   */
219856f5 Shannon Zhao 2015-10-30  17  
219856f5 Shannon Zhao 2015-10-30  18  #ifndef __ASM_ARM_KVM_PMU_H
219856f5 Shannon Zhao 2015-10-30  19  #define __ASM_ARM_KVM_PMU_H
219856f5 Shannon Zhao 2015-10-30  20  
219856f5 Shannon Zhao 2015-10-30  21  #include <linux/perf_event.h>
219856f5 Shannon Zhao 2015-10-30 @22  #include <asm/pmu.h>
219856f5 Shannon Zhao 2015-10-30  23  
219856f5 Shannon Zhao 2015-10-30  24  struct kvm_pmc {
219856f5 Shannon Zhao 2015-10-30  25  	u8 idx;/* index into the pmu->pmc array */

:::::: The code at line 22 was first introduced by commit
:::::: 219856f54d23298fff48e6e20e7e87fc45e42798 KVM: ARM64: Define PMU data structure for each vcpu

:::::: TO: Shannon Zhao <shannon.zhao@linaro.org>
:::::: CC: 0day robot <fengguang.wu@intel.com>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 18309 bytes --]

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-10-30 12:08     ` kbuild test robot
@ 2015-10-31  2:06       ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-10-31  2:06 UTC (permalink / raw)
  To: kbuild test robot
  Cc: linux-arm-kernel, kvm, marc.zyngier, shannon.zhao, will.deacon,
	kbuild-all, kvmarm

Hi,

Thanks for your test :)
The build fails because arch/arm/include/asm/pmu.h does not exist, while
arch/arm64/include/asm/pmu.h does. Will fix this in the next version.

On 2015/10/30 20:08, kbuild test robot wrote:
> Hi Shannon,
> 
> [auto build test ERROR on kvm/linux-next -- if it's inappropriate base, please suggest rules for selecting the more suitable base]
> 
> url:    https://github.com/0day-ci/linux/commits/Shannon-Zhao/KVM-ARM64-Add-guest-PMU-support/20151030-143148
> config: arm-axm55xx_defconfig (attached as .config)
> reproduce:
>         wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # save the attached .config to linux build tree
>         make.cross ARCH=arm 
> 
> All errors (new ones prefixed by >>):
> 
>    In file included from arch/arm/kvm/arm.c:31:0:
>>> include/kvm/arm_pmu.h:22:21: fatal error: asm/pmu.h: No such file or directory
>     #include <asm/pmu.h>
>                         ^
>    compilation terminated.
> 
> vim +22 include/kvm/arm_pmu.h
> 
> 219856f5 Shannon Zhao 2015-10-30  16   */
> 219856f5 Shannon Zhao 2015-10-30  17  
> 219856f5 Shannon Zhao 2015-10-30  18  #ifndef __ASM_ARM_KVM_PMU_H
> 219856f5 Shannon Zhao 2015-10-30  19  #define __ASM_ARM_KVM_PMU_H
> 219856f5 Shannon Zhao 2015-10-30  20  
> 219856f5 Shannon Zhao 2015-10-30  21  #include <linux/perf_event.h>
> 219856f5 Shannon Zhao 2015-10-30 @22  #include <asm/pmu.h>
> 219856f5 Shannon Zhao 2015-10-30  23  
> 219856f5 Shannon Zhao 2015-10-30  24  struct kvm_pmc {
> 219856f5 Shannon Zhao 2015-10-30  25  	u8 idx;/* index into the pmu->pmc array */
> 
> :::::: The code at line 22 was first introduced by commit
> :::::: 219856f54d23298fff48e6e20e7e87fc45e42798 KVM: ARM64: Define PMU data structure for each vcpu
> 
> :::::: TO: Shannon Zhao <shannon.zhao@linaro.org>
> :::::: CC: 0day robot <fengguang.wu@intel.com>
> 
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
> 

-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread


* Re: [PATCH v4 05/21] KVM: ARM64: Add reset and access handlers for PMSELR register
  2015-10-30  6:21   ` Shannon Zhao
@ 2015-11-02 20:06     ` Christopher Covington
  -1 siblings, 0 replies; 142+ messages in thread
From: Christopher Covington @ 2015-11-02 20:06 UTC (permalink / raw)
  To: Shannon Zhao, kvmarm
  Cc: kvm, marc.zyngier, will.deacon, linux-arm-kernel, shannon.zhao

On 10/30/2015 02:21 AM, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Since the reset value of PMSELR_EL0 is UNKNOWN, use reset_unknown for
> its reset handler. As it doesn't need to deal with the acsessing action

Nit: accessing

> specially, it uses default case to emulate writing and reading PMSELR
> register.
> 
> Add a helper for CP15 registers reset to UNKNOWN.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>

-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
  2015-10-30  6:21   ` Shannon Zhao
@ 2015-11-02 20:13     ` Christopher Covington
  -1 siblings, 0 replies; 142+ messages in thread
From: Christopher Covington @ 2015-11-02 20:13 UTC (permalink / raw)
  To: Shannon Zhao, kvmarm
  Cc: kvm, marc.zyngier, will.deacon, linux-arm-kernel, shannon.zhao

On 10/30/2015 02:21 AM, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> When we use tools like perf on host, perf passes the event type and the
> id of this event type category to kernel, then kernel will map them to
> hardware event number and write this number to PMU PMEVTYPER<n>_EL0
> register. When getting the event number in KVM, directly use raw event
> type to create a perf_event for it.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/include/asm/pmu.h |   2 +
>  arch/arm64/kvm/Makefile      |   1 +
>  include/kvm/arm_pmu.h        |  13 +++++
>  virt/kvm/arm/pmu.c           | 117 +++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 133 insertions(+)
>  create mode 100644 virt/kvm/arm/pmu.c
> 
> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
> index b9f394a..2c025f2 100644
> --- a/arch/arm64/include/asm/pmu.h
> +++ b/arch/arm64/include/asm/pmu.h
> @@ -31,6 +31,8 @@
>  #define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
>  #define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
>  #define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
> +/* Determines which PMCCNTR_EL0 bit generates an overflow */
> +#define ARMV8_PMCR_LC		(1 << 6)
>  #define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
>  #define	ARMV8_PMCR_N_MASK	0x1f
>  #define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
> diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
> index 1949fe5..18d56d8 100644
> --- a/arch/arm64/kvm/Makefile
> +++ b/arch/arm64/kvm/Makefile
> @@ -27,3 +27,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
>  kvm-$(CONFIG_KVM_ARM_HOST) += vgic-v3-switch.o
>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
> +kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index 254d2b4..1908c88 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -38,4 +38,17 @@ struct kvm_pmu {
>  #endif
>  };
>  
> +#ifdef CONFIG_KVM_ARM_PMU
> +unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> +				    u32 select_idx);
> +#else
> +unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
> +{
> +	return 0;
> +}
> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> +				    u32 select_idx) {}
> +#endif
> +
>  #endif
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> new file mode 100644
> index 0000000..900a64c
> --- /dev/null
> +++ b/virt/kvm/arm/pmu.c
> @@ -0,0 +1,117 @@
> +/*
> + * Copyright (C) 2015 Linaro Ltd.
> + * Author: Shannon Zhao <shannon.zhao@linaro.org>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/cpu.h>
> +#include <linux/kvm.h>
> +#include <linux/kvm_host.h>
> +#include <linux/perf_event.h>
> +#include <asm/kvm_emulate.h>
> +#include <kvm/arm_pmu.h>
> +
> +/**
> + * kvm_pmu_get_counter_value - get PMU counter value
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
> +{
> +	u64 counter, enabled, running;
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +	if (!vcpu_mode_is_32bit(vcpu))
> +		counter = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + select_idx);
> +	else
> +		counter = vcpu_cp15(vcpu, c14_PMEVCNTR0 + select_idx);
> +
> +	if (pmc->perf_event)
> +		counter += perf_event_read_value(pmc->perf_event, &enabled,
> +						 &running);
> +
> +	return counter & pmc->bitmask;
> +}
> +
> +/**
> + * kvm_pmu_stop_counter - stop PMU counter
> + * @pmc: The PMU counter pointer
> + *
> + * If this counter has been configured to monitor some event, release it here.
> + */
> +static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
> +{
> +	struct kvm_vcpu *vcpu = pmc->vcpu;
> +	u64 counter;
> +
> +	if (pmc->perf_event) {
> +		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
> +		if (!vcpu_mode_is_32bit(vcpu))
> +			vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + pmc->idx) = counter;
> +		else
> +			vcpu_cp15(vcpu, c14_PMEVCNTR0 + pmc->idx) = counter;
> +
> +		perf_event_release_kernel(pmc->perf_event);
> +		pmc->perf_event = NULL;
> +	}
> +}
> +
> +/**
> + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> + * @vcpu: The vcpu pointer
> + * @data: The data guest writes to PMXEVTYPER_EL0
> + * @select_idx: The number of selected counter
> + *
> + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> + * event with given hardware event number. Here we call perf_event API to
> + * emulate this action and create a kernel perf event for it.
> + */
> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
> +				    u32 select_idx)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +	struct perf_event *event;
> +	struct perf_event_attr attr;
> +	u32 eventsel;
> +	u64 counter;
> +
> +	kvm_pmu_stop_counter(pmc);
> +	eventsel = data & ARMV8_EVTYPE_EVENT;
> +
> +	memset(&attr, 0, sizeof(struct perf_event_attr));
> +	attr.type = PERF_TYPE_RAW;
> +	attr.size = sizeof(attr);
> +	attr.pinned = 1;
> +	attr.disabled = 1;

Should this value be calculated from PMCR.E and PMCNTENSET/CLR state?

> +	attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
> +	attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
> +	attr.exclude_hv = 1; /* Don't count EL2 events */

Should this be calculated from PMXEVTYPER.NSH?

> +	attr.exclude_host = 1; /* Don't count host events */
> +	attr.config = eventsel;
> +
> +	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
> +	/* The initial sample period (overflow count) of an event. */
> +	attr.sample_period = (-counter) & pmc->bitmask;
> +
> +	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
> +	if (IS_ERR(event)) {
> +		printk_once("kvm: pmu event creation failed %ld\n",
> +			    PTR_ERR(event));
> +		return;
> +	}
> +
> +	pmc->perf_event = event;
> +}
> 


-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 08/21] KVM: ARM64: Add reset and access handlers for PMXEVTYPER register
  2015-10-30  6:21   ` Shannon Zhao
@ 2015-11-02 20:54     ` Christopher Covington
  -1 siblings, 0 replies; 142+ messages in thread
From: Christopher Covington @ 2015-11-02 20:54 UTC (permalink / raw)
  To: Shannon Zhao, kvmarm
  Cc: linux-arm-kernel, kvm, marc.zyngier, christoffer.dall,
	will.deacon, alex.bennee, wei, shannon.zhao, peter.huangpeng

Hi Shannon,

On 10/30/2015 02:21 AM, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Since the reset value of PMXEVTYPER is UNKNOWN, use reset_unknown or
> reset_unknown_cp15 for its reset handler. Add access handler which
> emulates writing and reading PMXEVTYPER register. When writing to
> PMXEVTYPER, call kvm_pmu_set_counter_event_type to create a perf_event
> for the selected event type.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 26 ++++++++++++++++++++++++--
>  1 file changed, 24 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index cb82b15..4e606ea 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -491,6 +491,17 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>  
>  	if (p->is_write) {
>  		switch (r->reg) {
> +		case PMXEVTYPER_EL0: {
> +			val = vcpu_sys_reg(vcpu, PMSELR_EL0);
> +			kvm_pmu_set_counter_event_type(vcpu,
> +						       *vcpu_reg(vcpu, p->Rt),
> +						       val);
> +			vcpu_sys_reg(vcpu, PMXEVTYPER_EL0) =
> +							 *vcpu_reg(vcpu, p->Rt);

Why does PMXEVTYPER get set directly? It seems like it could have an accessor
that redirected to PMEVTYPER<n>.

> +			vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + val) =
> +							 *vcpu_reg(vcpu, p->Rt);

I tried to look around briefly but couldn't find counter number range checking
in the PMSELR handler or here. Should there be some here and in PMXEVCNTR?

Thanks,
Christopher Covington

-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 17/21] KVM: ARM64: Add helper to handle PMCR register bits
  2015-10-30  6:21   ` Shannon Zhao
@ 2015-11-02 21:20     ` Christopher Covington
  -1 siblings, 0 replies; 142+ messages in thread
From: Christopher Covington @ 2015-11-02 21:20 UTC (permalink / raw)
  To: Shannon Zhao, kvmarm
  Cc: kvm, marc.zyngier, will.deacon, linux-arm-kernel, shannon.zhao

On 10/30/2015 02:21 AM, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> According to ARMv8 spec, when writing 1 to PMCR.E, all counters are
> enabled by PMCNTENSET, while writing 0 to PMCR.E, all counters are
> disabled. When writing 1 to PMCR.P, reset all event counters, not
> including PMCCNTR, to zero. When writing 1 to PMCR.C, reset PMCCNTR to
> zero.

> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index ae21089..11d1bfb 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -121,6 +121,56 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val)
>  }
>  
>  /**
> + * kvm_pmu_handle_pmcr - handle PMCR register
> + * @vcpu: The vcpu pointer
> + * @val: the value guest writes to PMCR register
> + */
> +void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc;
> +	u32 enable;
> +	int i;
> +
> +	if (val & ARMV8_PMCR_E) {
> +		if (!vcpu_mode_is_32bit(vcpu))
> +			enable = vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
> +		else
> +			enable = vcpu_cp15(vcpu, c9_PMCNTENSET);
> +
> +		kvm_pmu_enable_counter(vcpu, enable, true);
> +	} else
> +		kvm_pmu_disable_counter(vcpu, 0xffffffffUL);

Nit: If using braces on one side of if-else, please use them on the other.
(Search for "braces in both branches" in Documentation/CodingStyle.)

> +
> +	if (val & ARMV8_PMCR_C) {
> +		pmc = &pmu->pmc[ARMV8_MAX_COUNTERS - 1];
> +		if (pmc->perf_event)
> +			local64_set(&pmc->perf_event->count, 0);
> +		if (!vcpu_mode_is_32bit(vcpu))
> +			vcpu_sys_reg(vcpu, PMCCNTR_EL0) = 0;
> +		else
> +			vcpu_cp15(vcpu, c9_PMCCNTR) = 0;
> +	}
> +
> +	if (val & ARMV8_PMCR_P) {
> +		for (i = 0; i < ARMV8_MAX_COUNTERS - 1; i++) {
> +			pmc = &pmu->pmc[i];
> +			if (pmc->perf_event)
> +				local64_set(&pmc->perf_event->count, 0);
> +			if (!vcpu_mode_is_32bit(vcpu))
> +				vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = 0;
> +			else
> +				vcpu_cp15(vcpu, c14_PMEVCNTR0 + i) = 0;
> +		}
> +	}
> +
> +	if (val & ARMV8_PMCR_LC) {
> +		pmc = &pmu->pmc[ARMV8_MAX_COUNTERS - 1];
> +		pmc->bitmask = 0xffffffffffffffffUL;
> +	}
> +}
> +
> +/**
>   * kvm_pmu_overflow_clear - clear PMU overflow interrupt
>   * @vcpu: The vcpu pointer
>   * @val: the value guest writes to PMOVSCLR register
> 


-- 
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
  2015-11-02 20:13     ` Christopher Covington
  (?)
@ 2015-11-03  2:33       ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-11-03  2:33 UTC (permalink / raw)
  To: Christopher Covington, kvmarm
  Cc: kvm, marc.zyngier, will.deacon, linux-arm-kernel, shannon.zhao



On 2015/11/3 4:13, Christopher Covington wrote:
> On 10/30/2015 02:21 AM, Shannon Zhao wrote:
>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>
>> When we use tools like perf on host, perf passes the event type and the
>> id of this event type category to kernel, then kernel will map them to
>> hardware event number and write this number to PMU PMEVTYPER<n>_EL0
>> register. When getting the event number in KVM, directly use raw event
>> type to create a perf_event for it.
>>
>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> ---
>>  arch/arm64/include/asm/pmu.h |   2 +
>>  arch/arm64/kvm/Makefile      |   1 +
>>  include/kvm/arm_pmu.h        |  13 +++++
>>  virt/kvm/arm/pmu.c           | 117 +++++++++++++++++++++++++++++++++++++++++++
>>  4 files changed, 133 insertions(+)
>>  create mode 100644 virt/kvm/arm/pmu.c
>>
>> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
>> index b9f394a..2c025f2 100644
>> --- a/arch/arm64/include/asm/pmu.h
>> +++ b/arch/arm64/include/asm/pmu.h
>> @@ -31,6 +31,8 @@
>>  #define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
>>  #define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
>>  #define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
>> +/* Determines which PMCCNTR_EL0 bit generates an overflow */
>> +#define ARMV8_PMCR_LC		(1 << 6)
>>  #define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
>>  #define	ARMV8_PMCR_N_MASK	0x1f
>>  #define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
>> diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
>> index 1949fe5..18d56d8 100644
>> --- a/arch/arm64/kvm/Makefile
>> +++ b/arch/arm64/kvm/Makefile
>> @@ -27,3 +27,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
>>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
>>  kvm-$(CONFIG_KVM_ARM_HOST) += vgic-v3-switch.o
>>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
>> +kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>> index 254d2b4..1908c88 100644
>> --- a/include/kvm/arm_pmu.h
>> +++ b/include/kvm/arm_pmu.h
>> @@ -38,4 +38,17 @@ struct kvm_pmu {
>>  #endif
>>  };
>>  
>> +#ifdef CONFIG_KVM_ARM_PMU
>> +unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
>> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>> +				    u32 select_idx);
>> +#else
>> +unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
>> +{
>> +	return 0;
>> +}
>> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>> +				    u32 select_idx) {}
>> +#endif
>> +
>>  #endif
>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>> new file mode 100644
>> index 0000000..900a64c
>> --- /dev/null
>> +++ b/virt/kvm/arm/pmu.c
>> @@ -0,0 +1,117 @@
>> +/*
>> + * Copyright (C) 2015 Linaro Ltd.
>> + * Author: Shannon Zhao <shannon.zhao@linaro.org>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <linux/cpu.h>
>> +#include <linux/kvm.h>
>> +#include <linux/kvm_host.h>
>> +#include <linux/perf_event.h>
>> +#include <asm/kvm_emulate.h>
>> +#include <kvm/arm_pmu.h>
>> +
>> +/**
>> + * kvm_pmu_get_counter_value - get PMU counter value
>> + * @vcpu: The vcpu pointer
>> + * @select_idx: The counter index
>> + */
>> +unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
>> +{
>> +	u64 counter, enabled, running;
>> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>> +
>> +	if (!vcpu_mode_is_32bit(vcpu))
>> +		counter = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + select_idx);
>> +	else
>> +		counter = vcpu_cp15(vcpu, c14_PMEVCNTR0 + select_idx);
>> +
>> +	if (pmc->perf_event)
>> +		counter += perf_event_read_value(pmc->perf_event, &enabled,
>> +						 &running);
>> +
>> +	return counter & pmc->bitmask;
>> +}
>> +
>> +/**
>> + * kvm_pmu_stop_counter - stop PMU counter
>> + * @pmc: The PMU counter pointer
>> + *
>> + * If this counter has been configured to monitor some event, release it here.
>> + */
>> +static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
>> +{
>> +	struct kvm_vcpu *vcpu = pmc->vcpu;
>> +	u64 counter;
>> +
>> +	if (pmc->perf_event) {
>> +		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
>> +		if (!vcpu_mode_is_32bit(vcpu))
>> +			vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + pmc->idx) = counter;
>> +		else
>> +			vcpu_cp15(vcpu, c14_PMEVCNTR0 + pmc->idx) = counter;
>> +
>> +		perf_event_release_kernel(pmc->perf_event);
>> +		pmc->perf_event = NULL;
>> +	}
>> +}
>> +
>> +/**
>> + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
>> + * @vcpu: The vcpu pointer
>> + * @data: The data guest writes to PMXEVTYPER_EL0
>> + * @select_idx: The number of selected counter
>> + *
>> + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
>> + * event with given hardware event number. Here we call perf_event API to
>> + * emulate this action and create a kernel perf event for it.
>> + */
>> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>> +				    u32 select_idx)
>> +{
>> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>> +	struct perf_event *event;
>> +	struct perf_event_attr attr;
>> +	u32 eventsel;
>> +	u64 counter;
>> +
>> +	kvm_pmu_stop_counter(pmc);
>> +	eventsel = data & ARMV8_EVTYPE_EVENT;
>> +
>> +	memset(&attr, 0, sizeof(struct perf_event_attr));
>> +	attr.type = PERF_TYPE_RAW;
>> +	attr.size = sizeof(attr);
>> +	attr.pinned = 1;
>> +	attr.disabled = 1;
> 
> Should this value be calculated from PMCR.E and PMCNTENSET/CLR state?
> 
Sure.

>> +	attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
>> +	attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
>> +	attr.exclude_hv = 1; /* Don't count EL2 events */
> 
> Should this be calculated from PMXEVTYPER.NSH?
> 
As discussed with Christoffer before, it's unlikely that we'll support
nested virtualization on ARMv8, so the guest will not see EL2.
Therefore we don't need to take care of the value of PMXEVTYPER.NSH,
since it should not count EL2 events.

>> +	attr.exclude_host = 1; /* Don't count host events */
>> +	attr.config = eventsel;
>> +
>> +	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
>> +	/* The initial sample period (overflow count) of an event. */
>> +	attr.sample_period = (-counter) & pmc->bitmask;
>> +
>> +	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
>> +	if (IS_ERR(event)) {
>> +		printk_once("kvm: pmu event creation failed %ld\n",
>> +			    PTR_ERR(event));
>> +		return;
>> +	}
>> +
>> +	pmc->perf_event = event;
>> +}
>>
> 
> 

-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
@ 2015-11-03  2:33       ` Shannon Zhao
  0 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-11-03  2:33 UTC (permalink / raw)
  To: Christopher Covington, kvmarm
  Cc: kvm, marc.zyngier, will.deacon, linux-arm-kernel, shannon.zhao



On 2015/11/3 4:13, Christopher Covington wrote:
> On 10/30/2015 02:21 AM, Shannon Zhao wrote:
>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>
>> When we use tools like perf on host, perf passes the event type and the
>> id of this event type category to kernel, then kernel will map them to
>> hardware event number and write this number to PMU PMEVTYPER<n>_EL0
>> register. When getting the event number in KVM, directly use raw event
>> type to create a perf_event for it.
>>
>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> ---
>>  arch/arm64/include/asm/pmu.h |   2 +
>>  arch/arm64/kvm/Makefile      |   1 +
>>  include/kvm/arm_pmu.h        |  13 +++++
>>  virt/kvm/arm/pmu.c           | 117 +++++++++++++++++++++++++++++++++++++++++++
>>  4 files changed, 133 insertions(+)
>>  create mode 100644 virt/kvm/arm/pmu.c
>>
>> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
>> index b9f394a..2c025f2 100644
>> --- a/arch/arm64/include/asm/pmu.h
>> +++ b/arch/arm64/include/asm/pmu.h
>> @@ -31,6 +31,8 @@
>>  #define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
>>  #define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
>>  #define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
>> +/* Determines which PMCCNTR_EL0 bit generates an overflow */
>> +#define ARMV8_PMCR_LC		(1 << 6)
>>  #define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
>>  #define	ARMV8_PMCR_N_MASK	0x1f
>>  #define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
>> diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
>> index 1949fe5..18d56d8 100644
>> --- a/arch/arm64/kvm/Makefile
>> +++ b/arch/arm64/kvm/Makefile
>> @@ -27,3 +27,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
>>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
>>  kvm-$(CONFIG_KVM_ARM_HOST) += vgic-v3-switch.o
>>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
>> +kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>> index 254d2b4..1908c88 100644
>> --- a/include/kvm/arm_pmu.h
>> +++ b/include/kvm/arm_pmu.h
>> @@ -38,4 +38,17 @@ struct kvm_pmu {
>>  #endif
>>  };
>>  
>> +#ifdef CONFIG_KVM_ARM_PMU
>> +unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
>> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>> +				    u32 select_idx);
>> +#else
>> +unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
>> +{
>> +	return 0;
>> +}
>> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>> +				    u32 select_idx) {}
>> +#endif
>> +
>>  #endif
>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>> new file mode 100644
>> index 0000000..900a64c
>> --- /dev/null
>> +++ b/virt/kvm/arm/pmu.c
>> @@ -0,0 +1,117 @@
>> +/*
>> + * Copyright (C) 2015 Linaro Ltd.
>> + * Author: Shannon Zhao <shannon.zhao@linaro.org>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <linux/cpu.h>
>> +#include <linux/kvm.h>
>> +#include <linux/kvm_host.h>
>> +#include <linux/perf_event.h>
>> +#include <asm/kvm_emulate.h>
>> +#include <kvm/arm_pmu.h>
>> +
>> +/**
>> + * kvm_pmu_get_counter_value - get PMU counter value
>> + * @vcpu: The vcpu pointer
>> + * @select_idx: The counter index
>> + */
>> +unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
>> +{
>> +	u64 counter, enabled, running;
>> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>> +
>> +	if (!vcpu_mode_is_32bit(vcpu))
>> +		counter = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + select_idx);
>> +	else
>> +		counter = vcpu_cp15(vcpu, c14_PMEVCNTR0 + select_idx);
>> +
>> +	if (pmc->perf_event)
>> +		counter += perf_event_read_value(pmc->perf_event, &enabled,
>> +						 &running);
>> +
>> +	return counter & pmc->bitmask;
>> +}
>> +
>> +/**
>> + * kvm_pmu_stop_counter - stop PMU counter
>> + * @pmc: The PMU counter pointer
>> + *
>> + * If this counter has been configured to monitor some event, release it here.
>> + */
>> +static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
>> +{
>> +	struct kvm_vcpu *vcpu = pmc->vcpu;
>> +	u64 counter;
>> +
>> +	if (pmc->perf_event) {
>> +		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
>> +		if (!vcpu_mode_is_32bit(vcpu))
>> +			vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + pmc->idx) = counter;
>> +		else
>> +			vcpu_cp15(vcpu, c14_PMEVCNTR0 + pmc->idx) = counter;
>> +
>> +		perf_event_release_kernel(pmc->perf_event);
>> +		pmc->perf_event = NULL;
>> +	}
>> +}
>> +
>> +/**
>> + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
>> + * @vcpu: The vcpu pointer
>> + * @data: The data guest writes to PMXEVTYPER_EL0
>> + * @select_idx: The number of selected counter
>> + *
>> + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
>> + * event with given hardware event number. Here we call perf_event API to
>> + * emulate this action and create a kernel perf event for it.
>> + */
>> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>> +				    u32 select_idx)
>> +{
>> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>> +	struct perf_event *event;
>> +	struct perf_event_attr attr;
>> +	u32 eventsel;
>> +	u64 counter;
>> +
>> +	kvm_pmu_stop_counter(pmc);
>> +	eventsel = data & ARMV8_EVTYPE_EVENT;
>> +
>> +	memset(&attr, 0, sizeof(struct perf_event_attr));
>> +	attr.type = PERF_TYPE_RAW;
>> +	attr.size = sizeof(attr);
>> +	attr.pinned = 1;
>> +	attr.disabled = 1;
> 
> Should this value be calculated from PMCR.E and PMCNTENSET/CLR state?
> 
Sure.

>> +	attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
>> +	attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
>> +	attr.exclude_hv = 1; /* Don't count EL2 events */
> 
> Should this be calculated from PMXEVTYPER.NSH?
> 
As discussed with Christoffer before, it's unlikely that we'll support
nested virtualization on ARMv8, so the guest will not see EL2.
Therefore we don't need to take care of the value of PMXEVTYPER.NSH,
since it should not count EL2 events.

>> +	attr.exclude_host = 1; /* Don't count host events */
>> +	attr.config = eventsel;
>> +
>> +	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
>> +	/* The initial sample period (overflow count) of an event. */
>> +	attr.sample_period = (-counter) & pmc->bitmask;
>> +
>> +	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
>> +	if (IS_ERR(event)) {
>> +		printk_once("kvm: pmu event creation failed %ld\n",
>> +			    PTR_ERR(event));
>> +		return;
>> +	}
>> +
>> +	pmc->perf_event = event;
>> +}
>>
> 
> 

-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* [PATCH v4 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
@ 2015-11-03  2:33       ` Shannon Zhao
  0 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-11-03  2:33 UTC (permalink / raw)
  To: linux-arm-kernel



On 2015/11/3 4:13, Christopher Covington wrote:
> On 10/30/2015 02:21 AM, Shannon Zhao wrote:
>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>
>> When we use tools like perf on host, perf passes the event type and the
>> id of this event type category to kernel, then kernel will map them to
>> hardware event number and write this number to PMU PMEVTYPER<n>_EL0
>> register. When getting the event number in KVM, directly use raw event
>> type to create a perf_event for it.
>>
>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> ---
>>  arch/arm64/include/asm/pmu.h |   2 +
>>  arch/arm64/kvm/Makefile      |   1 +
>>  include/kvm/arm_pmu.h        |  13 +++++
>>  virt/kvm/arm/pmu.c           | 117 +++++++++++++++++++++++++++++++++++++++++++
>>  4 files changed, 133 insertions(+)
>>  create mode 100644 virt/kvm/arm/pmu.c
>>
>> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
>> index b9f394a..2c025f2 100644
>> --- a/arch/arm64/include/asm/pmu.h
>> +++ b/arch/arm64/include/asm/pmu.h
>> @@ -31,6 +31,8 @@
>>  #define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
>>  #define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
>>  #define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
>> +/* Determines which PMCCNTR_EL0 bit generates an overflow */
>> +#define ARMV8_PMCR_LC		(1 << 6)
>>  #define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
>>  #define	ARMV8_PMCR_N_MASK	0x1f
>>  #define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
>> diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
>> index 1949fe5..18d56d8 100644
>> --- a/arch/arm64/kvm/Makefile
>> +++ b/arch/arm64/kvm/Makefile
>> @@ -27,3 +27,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
>>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
>>  kvm-$(CONFIG_KVM_ARM_HOST) += vgic-v3-switch.o
>>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
>> +kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>> index 254d2b4..1908c88 100644
>> --- a/include/kvm/arm_pmu.h
>> +++ b/include/kvm/arm_pmu.h
>> @@ -38,4 +38,17 @@ struct kvm_pmu {
>>  #endif
>>  };
>>  
>> +#ifdef CONFIG_KVM_ARM_PMU
>> +unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
>> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>> +				    u32 select_idx);
>> +#else
>> +unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
>> +{
>> +	return 0;
>> +}
>> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>> +				    u32 select_idx) {}
>> +#endif
>> +
>>  #endif
>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>> new file mode 100644
>> index 0000000..900a64c
>> --- /dev/null
>> +++ b/virt/kvm/arm/pmu.c
>> @@ -0,0 +1,117 @@
>> +/*
>> + * Copyright (C) 2015 Linaro Ltd.
>> + * Author: Shannon Zhao <shannon.zhao@linaro.org>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <linux/cpu.h>
>> +#include <linux/kvm.h>
>> +#include <linux/kvm_host.h>
>> +#include <linux/perf_event.h>
>> +#include <asm/kvm_emulate.h>
>> +#include <kvm/arm_pmu.h>
>> +
>> +/**
>> + * kvm_pmu_get_counter_value - get PMU counter value
>> + * @vcpu: The vcpu pointer
>> + * @select_idx: The counter index
>> + */
>> +unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
>> +{
>> +	u64 counter, enabled, running;
>> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>> +
>> +	if (!vcpu_mode_is_32bit(vcpu))
>> +		counter = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + select_idx);
>> +	else
>> +		counter = vcpu_cp15(vcpu, c14_PMEVCNTR0 + select_idx);
>> +
>> +	if (pmc->perf_event)
>> +		counter += perf_event_read_value(pmc->perf_event, &enabled,
>> +						 &running);
>> +
>> +	return counter & pmc->bitmask;
>> +}
>> +
>> +/**
>> + * kvm_pmu_stop_counter - stop PMU counter
>> + * @pmc: The PMU counter pointer
>> + *
>> + * If this counter has been configured to monitor some event, release it here.
>> + */
>> +static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
>> +{
>> +	struct kvm_vcpu *vcpu = pmc->vcpu;
>> +	u64 counter;
>> +
>> +	if (pmc->perf_event) {
>> +		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
>> +		if (!vcpu_mode_is_32bit(vcpu))
>> +			vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + pmc->idx) = counter;
>> +		else
>> +			vcpu_cp15(vcpu, c14_PMEVCNTR0 + pmc->idx) = counter;
>> +
>> +		perf_event_release_kernel(pmc->perf_event);
>> +		pmc->perf_event = NULL;
>> +	}
>> +}
>> +
>> +/**
>> + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
>> + * @vcpu: The vcpu pointer
>> + * @data: The data guest writes to PMXEVTYPER_EL0
>> + * @select_idx: The number of selected counter
>> + *
>> + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
>> + * event with given hardware event number. Here we call perf_event API to
>> + * emulate this action and create a kernel perf event for it.
>> + */
>> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>> +				    u32 select_idx)
>> +{
>> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
>> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
>> +	struct perf_event *event;
>> +	struct perf_event_attr attr;
>> +	u32 eventsel;
>> +	u64 counter;
>> +
>> +	kvm_pmu_stop_counter(pmc);
>> +	eventsel = data & ARMV8_EVTYPE_EVENT;
>> +
>> +	memset(&attr, 0, sizeof(struct perf_event_attr));
>> +	attr.type = PERF_TYPE_RAW;
>> +	attr.size = sizeof(attr);
>> +	attr.pinned = 1;
>> +	attr.disabled = 1;
> 
> Should this value be calculated from PMCR.E and PMCNTENSET/CLR state?
> 
Sure.

>> +	attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
>> +	attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
>> +	attr.exclude_hv = 1; /* Don't count EL2 events */
> 
> Should this be calculated from PMXEVTYPER.NSH?
> 
As discussed with Christoffer before, it's unlikely that we'll support
nested virtualization on ARMv8, so the guest will not see EL2.
Therefore we don't need to take care of the value of PMXEVTYPER.NSH,
since it should not count EL2 events.

>> +	attr.exclude_host = 1; /* Don't count host events */
>> +	attr.config = eventsel;
>> +
>> +	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
>> +	/* The initial sample period (overflow count) of an event. */
>> +	attr.sample_period = (-counter) & pmc->bitmask;
>> +
>> +	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
>> +	if (IS_ERR(event)) {
>> +		printk_once("kvm: pmu event creation failed %ld\n",
>> +			    PTR_ERR(event));
>> +		return;
>> +	}
>> +
>> +	pmc->perf_event = event;
>> +}
>>
> 
> 

-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 08/21] KVM: ARM64: Add reset and access handlers for PMXEVTYPER register
  2015-11-02 20:54     ` Christopher Covington
  (?)
@ 2015-11-03  2:41       ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-11-03  2:41 UTC (permalink / raw)
  To: Christopher Covington, kvmarm
  Cc: kvm, marc.zyngier, will.deacon, linux-arm-kernel, shannon.zhao



On 2015/11/3 4:54, Christopher Covington wrote:
> Hi Shannon,
> 
> On 10/30/2015 02:21 AM, Shannon Zhao wrote:
>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>
>> Since the reset value of PMXEVTYPER is UNKNOWN, use reset_unknown or
>> reset_unknown_cp15 for its reset handler. Add access handler which
>> emulates writing and reading PMXEVTYPER register. When writing to
>> PMXEVTYPER, call kvm_pmu_set_counter_event_type to create a perf_event
>> for the selected event type.
>>
>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> ---
>>  arch/arm64/kvm/sys_regs.c | 26 ++++++++++++++++++++++++--
>>  1 file changed, 24 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index cb82b15..4e606ea 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -491,6 +491,17 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>>  
>>  	if (p->is_write) {
>>  		switch (r->reg) {
>> +		case PMXEVTYPER_EL0: {
>> +			val = vcpu_sys_reg(vcpu, PMSELR_EL0);
>> +			kvm_pmu_set_counter_event_type(vcpu,
>> +						       *vcpu_reg(vcpu, p->Rt),
>> +						       val);
>> +			vcpu_sys_reg(vcpu, PMXEVTYPER_EL0) =
>> +							 *vcpu_reg(vcpu, p->Rt);
> 
> Why does PMXEVTYPER get set directly? It seems like it could have an accessor
> that redirected to PMEVTYPER<n>.
> 
Yeah, that's what this patch does. It gets the counter index from the
PMSELR_EL0 register, then sets the event type, creates a perf_event,
stores the event type to PMEVTYPER<n>, etc.

>> +			vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + val) =
>> +							 *vcpu_reg(vcpu, p->Rt);
> 
> I tried to look around briefly but couldn't find counter number range checking
> in the PMSELR handler or here. Should there be some here and in PMXEVCNTR?
> 
Ok, will fix this. Thanks.

-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 08/21] KVM: ARM64: Add reset and access handlers for PMXEVTYPER register
@ 2015-11-03  2:41       ` Shannon Zhao
  0 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-11-03  2:41 UTC (permalink / raw)
  To: Christopher Covington, kvmarm
  Cc: kvm, marc.zyngier, will.deacon, linux-arm-kernel, shannon.zhao



On 2015/11/3 4:54, Christopher Covington wrote:
> Hi Shannon,
> 
> On 10/30/2015 02:21 AM, Shannon Zhao wrote:
>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>
>> Since the reset value of PMXEVTYPER is UNKNOWN, use reset_unknown or
>> reset_unknown_cp15 for its reset handler. Add access handler which
>> emulates writing and reading PMXEVTYPER register. When writing to
>> PMXEVTYPER, call kvm_pmu_set_counter_event_type to create a perf_event
>> for the selected event type.
>>
>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> ---
>>  arch/arm64/kvm/sys_regs.c | 26 ++++++++++++++++++++++++--
>>  1 file changed, 24 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index cb82b15..4e606ea 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -491,6 +491,17 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>>  
>>  	if (p->is_write) {
>>  		switch (r->reg) {
>> +		case PMXEVTYPER_EL0: {
>> +			val = vcpu_sys_reg(vcpu, PMSELR_EL0);
>> +			kvm_pmu_set_counter_event_type(vcpu,
>> +						       *vcpu_reg(vcpu, p->Rt),
>> +						       val);
>> +			vcpu_sys_reg(vcpu, PMXEVTYPER_EL0) =
>> +							 *vcpu_reg(vcpu, p->Rt);
> 
> Why does PMXEVTYPER get set directly? It seems like it could have an accessor
> that redirected to PMEVTYPER<n>.
> 
Yeah, that's what this patch does. It gets the counter index from the
PMSELR_EL0 register, then sets the event type, creates a perf_event,
stores the event type to PMEVTYPER<n>, etc.

>> +			vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + val) =
>> +							 *vcpu_reg(vcpu, p->Rt);
> 
> I tried to look around briefly but couldn't find counter number range checking
> in the PMSELR handler or here. Should there be some here and in PMXEVCNTR?
> 
Ok, will fix this. Thanks.

-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* [PATCH v4 08/21] KVM: ARM64: Add reset and access handlers for PMXEVTYPER register
@ 2015-11-03  2:41       ` Shannon Zhao
  0 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-11-03  2:41 UTC (permalink / raw)
  To: linux-arm-kernel



On 2015/11/3 4:54, Christopher Covington wrote:
> Hi Shannon,
> 
> On 10/30/2015 02:21 AM, Shannon Zhao wrote:
>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>
>> Since the reset value of PMXEVTYPER is UNKNOWN, use reset_unknown or
>> reset_unknown_cp15 for its reset handler. Add access handler which
>> emulates writing and reading PMXEVTYPER register. When writing to
>> PMXEVTYPER, call kvm_pmu_set_counter_event_type to create a perf_event
>> for the selected event type.
>>
>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> ---
>>  arch/arm64/kvm/sys_regs.c | 26 ++++++++++++++++++++++++--
>>  1 file changed, 24 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index cb82b15..4e606ea 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -491,6 +491,17 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>>  
>>  	if (p->is_write) {
>>  		switch (r->reg) {
>> +		case PMXEVTYPER_EL0: {
>> +			val = vcpu_sys_reg(vcpu, PMSELR_EL0);
>> +			kvm_pmu_set_counter_event_type(vcpu,
>> +						       *vcpu_reg(vcpu, p->Rt),
>> +						       val);
>> +			vcpu_sys_reg(vcpu, PMXEVTYPER_EL0) =
>> +							 *vcpu_reg(vcpu, p->Rt);
> 
> Why does PMXEVTYPER get set directly? It seems like it could have an accessor
> that redirected to PMEVTYPER<n>.
> 
Yeah, that's what this patch does. It gets the counter index from the
PMSELR_EL0 register, then sets the event type, creates a perf_event,
stores the event type to PMEVTYPER<n>, etc.

>> +			vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + val) =
>> +							 *vcpu_reg(vcpu, p->Rt);
> 
> I tried to look around briefly but couldn't find counter number range checking
> in the PMSELR handler or here. Should there be some here and in PMXEVCNTR?
> 
Ok, will fix this. Thanks.

-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 06/21] KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1 register
  2015-10-30  6:21   ` Shannon Zhao
@ 2015-11-30 11:42     ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-11-30 11:42 UTC (permalink / raw)
  To: Shannon Zhao; +Cc: kvm, shannon.zhao, will.deacon, linux-arm-kernel, kvmarm

On Fri, 30 Oct 2015 14:21:48 +0800
Shannon Zhao <zhaoshenglong@huawei.com> wrote:

> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Add reset handler which gets host value of PMCEID0 or PMCEID1. Since
> write action to PMCEID0 or PMCEID1 is ignored, add a new case for this.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 29 +++++++++++++++++++++++++----
>  1 file changed, 25 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 35d232e..cb82b15 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -469,6 +469,19 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  	vcpu_sysreg_write(vcpu, r, val);
>  }
>  
> +static void reset_pmceid(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 pmceid;
> +
> +	if (r->reg == PMCEID0_EL0 || r->reg == c9_PMCEID0)

That feels wrong. We should only reset the 64bit view of the sysregs,
as the 32bit view is directly mapped to it.

> +		asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
> +	else
> +		/* PMCEID1_EL0 or c9_PMCEID1 */
> +		asm volatile("mrs %0, pmceid1_el0\n" : "=r" (pmceid));
> +
> +	vcpu_sysreg_write(vcpu, r, pmceid);
> +}
> +
>  /* PMU registers accessor. */
>  static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>  			    const struct sys_reg_params *p,
> @@ -486,6 +499,9 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>  			vcpu_sys_reg(vcpu, r->reg) = val;
>  			break;
>  		}
> +		case PMCEID0_EL0:
> +		case PMCEID1_EL0:
> +			return ignore_write(vcpu, p);
>  		default:
>  			vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
>  			break;
> @@ -710,10 +726,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	  access_pmu_regs, reset_unknown, PMSELR_EL0 },
>  	/* PMCEID0_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
> -	  trap_raz_wi },
> +	  access_pmu_regs, reset_pmceid, PMCEID0_EL0 },
>  	/* PMCEID1_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
> -	  trap_raz_wi },
> +	  access_pmu_regs, reset_pmceid, PMCEID1_EL0 },
>  	/* PMCCNTR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
>  	  trap_raz_wi },
> @@ -943,6 +959,9 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
>  			vcpu_cp15(vcpu, r->reg) = val;
>  			break;
>  		}
> +		case c9_PMCEID0:
> +		case c9_PMCEID1:
> +			return ignore_write(vcpu, p);
>  		default:
>  			vcpu_cp15(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
>  			break;
> @@ -1000,8 +1019,10 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
>  	  reset_unknown_cp15, c9_PMSELR },
> -	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
> -	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
> +	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmu_cp15_regs,
> +	  reset_pmceid, c9_PMCEID0 },
> +	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmu_cp15_regs,
> +	  reset_pmceid, c9_PMCEID1 },

and as a consequence, this hunk should be reworked.

>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 142+ messages in thread

* [PATCH v4 06/21] KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1 register
@ 2015-11-30 11:42     ` Marc Zyngier
  0 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-11-30 11:42 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, 30 Oct 2015 14:21:48 +0800
Shannon Zhao <zhaoshenglong@huawei.com> wrote:

> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Add reset handler which gets host value of PMCEID0 or PMCEID1. Since
> write action to PMCEID0 or PMCEID1 is ignored, add a new case for this.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 29 +++++++++++++++++++++++++----
>  1 file changed, 25 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 35d232e..cb82b15 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -469,6 +469,19 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  	vcpu_sysreg_write(vcpu, r, val);
>  }
>  
> +static void reset_pmceid(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 pmceid;
> +
> +	if (r->reg == PMCEID0_EL0 || r->reg == c9_PMCEID0)

That feels wrong. We should only reset the 64bit view of the sysregs,
as the 32bit view is directly mapped to it.

> +		asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
> +	else
> +		/* PMCEID1_EL0 or c9_PMCEID1 */
> +		asm volatile("mrs %0, pmceid1_el0\n" : "=r" (pmceid));
> +
> +	vcpu_sysreg_write(vcpu, r, pmceid);
> +}
> +
>  /* PMU registers accessor. */
>  static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>  			    const struct sys_reg_params *p,
> @@ -486,6 +499,9 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>  			vcpu_sys_reg(vcpu, r->reg) = val;
>  			break;
>  		}
> +		case PMCEID0_EL0:
> +		case PMCEID1_EL0:
> +			return ignore_write(vcpu, p);
>  		default:
>  			vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
>  			break;
> @@ -710,10 +726,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	  access_pmu_regs, reset_unknown, PMSELR_EL0 },
>  	/* PMCEID0_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
> -	  trap_raz_wi },
> +	  access_pmu_regs, reset_pmceid, PMCEID0_EL0 },
>  	/* PMCEID1_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
> -	  trap_raz_wi },
> +	  access_pmu_regs, reset_pmceid, PMCEID1_EL0 },
>  	/* PMCCNTR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
>  	  trap_raz_wi },
> @@ -943,6 +959,9 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
>  			vcpu_cp15(vcpu, r->reg) = val;
>  			break;
>  		}
> +		case c9_PMCEID0:
> +		case c9_PMCEID1:
> +			return ignore_write(vcpu, p);
>  		default:
>  			vcpu_cp15(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
>  			break;
> @@ -1000,8 +1019,10 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
>  	  reset_unknown_cp15, c9_PMSELR },
> -	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
> -	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
> +	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmu_cp15_regs,
> +	  reset_pmceid, c9_PMCEID0 },
> +	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmu_cp15_regs,
> +	  reset_pmceid, c9_PMCEID1 },

and as a consequence, this hunk should be reworked.

>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 06/21] KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1 register
  2015-11-30 11:42     ` Marc Zyngier
@ 2015-11-30 11:59       ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-11-30 11:59 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: wei, kvm, shannon.zhao, will.deacon, peter.huangpeng,
	linux-arm-kernel, alex.bennee, kvmarm, christoffer.dall, cov

Hi Marc,

On 2015/11/30 19:42, Marc Zyngier wrote:
>> +static void reset_pmceid(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>> > +{
>> > +	u64 pmceid;
>> > +
>> > +	if (r->reg == PMCEID0_EL0 || r->reg == c9_PMCEID0)
> That feels wrong. We should only reset the 64bit view of the sysregs,
> as the 32bit view is directly mapped to it.
> 
Just to confirm: if a guest accesses c9_PMCEID0, will KVM trap this
register with the register index PMCEID0_EL0, or still c9_PMCEID0?

-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 06/21] KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1 register
  2015-11-30 11:59       ` Shannon Zhao
@ 2015-11-30 13:19         ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-11-30 13:19 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, linux-arm-kernel, kvm, christoffer.dall, will.deacon,
	alex.bennee, wei, cov, shannon.zhao, peter.huangpeng

On Mon, 30 Nov 2015 19:59:53 +0800
Shannon Zhao <zhaoshenglong@huawei.com> wrote:

> Hi Marc,
> 
> On 2015/11/30 19:42, Marc Zyngier wrote:
> >> +static void reset_pmceid(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> >> > +{
> >> > +	u64 pmceid;
> >> > +
> >> > +	if (r->reg == PMCEID0_EL0 || r->reg == c9_PMCEID0)
> > That feels wrong. We should only reset the 64bit view of the sysregs,
> > as the 32bit view is directly mapped to it.
> > 
> Just to confirm: if a guest accesses c9_PMCEID0, will KVM trap this
> register with the register index PMCEID0_EL0, or still c9_PMCEID0?

The traps are per execution mode (you'll get c9_PMCEID0 with a 32bit
guest). But the reset function is only concerned with the 64bit view.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 05/21] KVM: ARM64: Add reset and access handlers for PMSELR register
  2015-10-30  6:21   ` Shannon Zhao
@ 2015-11-30 17:56     ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-11-30 17:56 UTC (permalink / raw)
  To: Shannon Zhao; +Cc: kvm, shannon.zhao, will.deacon, linux-arm-kernel, kvmarm

On Fri, 30 Oct 2015 14:21:47 +0800
Shannon Zhao <zhaoshenglong@huawei.com> wrote:

> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Since the reset value of PMSELR_EL0 is UNKNOWN, use reset_unknown for
> its reset handler. As it doesn't need any special handling for
> accesses, it uses the default case to emulate writing and reading the
> PMSELR register.
> 
> Add a helper for CP15 registers reset to UNKNOWN.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 5 +++--
>  arch/arm64/kvm/sys_regs.h | 8 ++++++++
>  2 files changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 5b591d6..35d232e 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -707,7 +707,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	  trap_raz_wi },
>  	/* PMSELR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
> -	  trap_raz_wi },
> +	  access_pmu_regs, reset_unknown, PMSELR_EL0 },
>  	/* PMCEID0_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
>  	  trap_raz_wi },
> @@ -998,7 +998,8 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
> -	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), trap_raz_wi },
> +	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmu_cp15_regs,
> +	  reset_unknown_cp15, c9_PMSELR },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
> diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
> index eaa324e..8afeff7 100644
> --- a/arch/arm64/kvm/sys_regs.h
> +++ b/arch/arm64/kvm/sys_regs.h
> @@ -110,6 +110,14 @@ static inline void reset_unknown(struct kvm_vcpu *vcpu,
>  	vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL;
>  }
>  
> +static inline void reset_unknown_cp15(struct kvm_vcpu *vcpu,
> +				      const struct sys_reg_desc *r)
> +{
> +	BUG_ON(!r->reg);
> +	BUG_ON(r->reg >= NR_COPRO_REGS);
> +	vcpu_cp15(vcpu, r->reg) = 0xdecafbad;
> +}
> +
>  static inline void reset_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  {
>  	BUG_ON(!r->reg);


Same remark here as the one I made earlier. I'm pretty sure we don't
call any CP15 reset because they are all shared with their 64bit
counterparts. The same thing goes for the whole series.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 04/21] KVM: ARM64: Add reset and access handlers for PMCR_EL0 register
  2015-10-30  6:21   ` Shannon Zhao
@ 2015-11-30 18:11     ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-11-30 18:11 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, linux-arm-kernel, kvm, christoffer.dall, will.deacon,
	alex.bennee, wei, cov, shannon.zhao, peter.huangpeng

On Fri, 30 Oct 2015 14:21:46 +0800
Shannon Zhao <zhaoshenglong@huawei.com> wrote:

> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Add a reset handler which reads the host value of PMCR_EL0 and resets
> the writable bits to architecturally UNKNOWN values, except PMCR.E,
> which resets to zero. Add a common access handler for PMU registers
> which emulates register reads and writes, and add emulation for PMCR.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 106 +++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 104 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index d03d3af..5b591d6 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -33,6 +33,7 @@
>  #include <asm/kvm_emulate.h>
>  #include <asm/kvm_host.h>
>  #include <asm/kvm_mmu.h>
> +#include <asm/pmu.h>
>  
>  #include <trace/events/kvm.h>
>  
> @@ -446,6 +447,67 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
>  }
>  
> +static void vcpu_sysreg_write(struct kvm_vcpu *vcpu,
> +			      const struct sys_reg_desc *r, u64 val)
> +{
> +	if (!vcpu_mode_is_32bit(vcpu))
> +		vcpu_sys_reg(vcpu, r->reg) = val;
> +	else
> +		vcpu_cp15(vcpu, r->reg) = lower_32_bits(val);
> +}
> +
> +static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 pmcr, val;
> +
> +	asm volatile("mrs %0, pmcr_el0\n" : "=r" (pmcr));
> +	/* Writable bits of PMCR_EL0 (ARMV8_PMCR_MASK) is reset to UNKNOWN
> +	 * except PMCR.E resetting to zero.
> +	 */
> +	val = ((pmcr & ~ARMV8_PMCR_MASK) | (ARMV8_PMCR_MASK & 0xdecafbad))
> +	      & (~ARMV8_PMCR_E);
> +	vcpu_sysreg_write(vcpu, r, val);
> +}
> +
> +/* PMU registers accessor. */
> +static bool access_pmu_regs(struct kvm_vcpu *vcpu,
> +			    const struct sys_reg_params *p,
> +			    const struct sys_reg_desc *r)
> +{
> +	unsigned long val;

I'd feel a lot more comfortable if this was a u64...

> +
> +	if (p->is_write) {
> +		switch (r->reg) {
> +		case PMCR_EL0: {
> +			/* Only update writeable bits of PMCR */
> +			val = vcpu_sys_reg(vcpu, r->reg);
> +			val &= ~ARMV8_PMCR_MASK;
> +			val |= *vcpu_reg(vcpu, p->Rt) & ARMV8_PMCR_MASK;
> +			vcpu_sys_reg(vcpu, r->reg) = val;
> +			break;
> +		}
> +		default:
> +			vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
> +			break;
> +		}
> +	} else {
> +		switch (r->reg) {
> +		case PMCR_EL0: {
> +			/* PMCR.P & PMCR.C are RAZ */
> +			val = vcpu_sys_reg(vcpu, r->reg)
> +			      & ~(ARMV8_PMCR_P | ARMV8_PMCR_C);
> +			*vcpu_reg(vcpu, p->Rt) = val;
> +			break;
> +		}
> +		default:
> +			*vcpu_reg(vcpu, p->Rt) = vcpu_sys_reg(vcpu, r->reg);
> +			break;
> +		}
> +	}
> +
> +	return true;
> +}
> +
>  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
>  	/* DBGBVRn_EL1 */						\
> @@ -630,7 +692,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  
>  	/* PMCR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000),
> -	  trap_raz_wi },
> +	  access_pmu_regs, reset_pmcr, PMCR_EL0, },
>  	/* PMCNTENSET_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
>  	  trap_raz_wi },
> @@ -864,6 +926,45 @@ static const struct sys_reg_desc cp14_64_regs[] = {
>  	{ Op1( 0), CRm( 2), .access = trap_raz_wi },
>  };
>  
> +/* PMU CP15 registers accessor. */
> +static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
> +				 const struct sys_reg_params *p,
> +				 const struct sys_reg_desc *r)
> +{
> +	unsigned long val;

... and this a u32.

> +
> +	if (p->is_write) {
> +		switch (r->reg) {
> +		case c9_PMCR: {
> +			/* Only update writeable bits of PMCR */
> +			val = vcpu_cp15(vcpu, r->reg);
> +			val &= ~ARMV8_PMCR_MASK;
> +			val |= *vcpu_reg(vcpu, p->Rt) & ARMV8_PMCR_MASK;
> +			vcpu_cp15(vcpu, r->reg) = val;
> +			break;
> +		}
> +		default:
> +			vcpu_cp15(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
> +			break;
> +		}
> +	} else {
> +		switch (r->reg) {
> +		case c9_PMCR: {
> +			/* PMCR.P & PMCR.C are RAZ */
> +			val = vcpu_cp15(vcpu, r->reg)
> +			      & ~(ARMV8_PMCR_P | ARMV8_PMCR_C);
> +			*vcpu_reg(vcpu, p->Rt) = val;
> +			break;
> +		}
> +		default:
> +			*vcpu_reg(vcpu, p->Rt) = vcpu_cp15(vcpu, r->reg);
> +			break;
> +		}
> +	}
> +
> +	return true;
> +}
> +
>  /*
>   * Trapped cp15 registers. TTBR0/TTBR1 get a double encoding,
>   * depending on the way they are accessed (as a 32bit or a 64bit
> @@ -892,7 +993,8 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn( 7), CRm(14), Op2( 2), access_dcsw },
>  
>  	/* PMU */
> -	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), trap_raz_wi },
> +	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmu_cp15_regs,
> +	  reset_pmcr, c9_PMCR },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },


Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 08/21] KVM: ARM64: Add reset and access handlers for PMXEVTYPER register
  2015-10-30  6:21   ` Shannon Zhao
@ 2015-11-30 18:12     ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-11-30 18:12 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, linux-arm-kernel, kvm, christoffer.dall, will.deacon,
	alex.bennee, wei, cov, shannon.zhao, peter.huangpeng

On Fri, 30 Oct 2015 14:21:50 +0800
Shannon Zhao <zhaoshenglong@huawei.com> wrote:

> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Since the reset value of PMXEVTYPER is UNKNOWN, use reset_unknown or
> reset_unknown_cp15 for its reset handler. Add an access handler which
> emulates writes and reads of the PMXEVTYPER register. When writing to
> PMXEVTYPER, call kvm_pmu_set_counter_event_type to create a perf_event
> for the selected event type.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 26 ++++++++++++++++++++++++--
>  1 file changed, 24 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index cb82b15..4e606ea 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -491,6 +491,17 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>  
>  	if (p->is_write) {
>  		switch (r->reg) {
> +		case PMXEVTYPER_EL0: {
> +			val = vcpu_sys_reg(vcpu, PMSELR_EL0);
> +			kvm_pmu_set_counter_event_type(vcpu,
> +						       *vcpu_reg(vcpu, p->Rt),
> +						       val);

You are blindly truncating 64bit values to u32. Is that intentional?

> +			vcpu_sys_reg(vcpu, PMXEVTYPER_EL0) =
> +							 *vcpu_reg(vcpu, p->Rt);
> +			vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + val) =
> +							 *vcpu_reg(vcpu, p->Rt);

Please do not break assignments like this, it makes the code
unreadable. I don't care what the 80-character police says... ;-)

> +			break;
> +		}
>  		case PMCR_EL0: {
>  			/* Only update writeable bits of PMCR */
>  			val = vcpu_sys_reg(vcpu, r->reg);
> @@ -735,7 +746,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	  trap_raz_wi },
>  	/* PMXEVTYPER_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
> -	  trap_raz_wi },
> +	  access_pmu_regs, reset_unknown, PMXEVTYPER_EL0 },
>  	/* PMXEVCNTR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
>  	  trap_raz_wi },
> @@ -951,6 +962,16 @@ static bool access_pmu_cp15_regs(struct kvm_vcpu *vcpu,
>  
>  	if (p->is_write) {
>  		switch (r->reg) {
> +		case c9_PMXEVTYPER: {
> +			val = vcpu_cp15(vcpu, c9_PMSELR);
> +			kvm_pmu_set_counter_event_type(vcpu,
> +						       *vcpu_reg(vcpu, p->Rt),
> +						       val);
> +			vcpu_cp15(vcpu, c9_PMXEVTYPER) = *vcpu_reg(vcpu, p->Rt);
> +			vcpu_cp15(vcpu, c14_PMEVTYPER0 + val) =
> +							 *vcpu_reg(vcpu, p->Rt);
> +			break;
> +		}
>  		case c9_PMCR: {
>  			/* Only update writeable bits of PMCR */
>  			val = vcpu_cp15(vcpu, r->reg);
> @@ -1024,7 +1045,8 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmu_cp15_regs,
>  	  reset_pmceid, c9_PMCEID1 },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
> -	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
> +	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_cp15_regs,
> +	  reset_unknown_cp15, c9_PMXEVTYPER },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-10-30  6:22   ` Shannon Zhao
  (?)
@ 2015-11-30 18:22     ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-11-30 18:22 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, linux-arm-kernel, kvm, christoffer.dall, will.deacon,
	alex.bennee, wei, cov, shannon.zhao, peter.huangpeng

On Fri, 30 Oct 2015 14:22:00 +0800
Shannon Zhao <zhaoshenglong@huawei.com> wrote:

> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> When calling perf_event_create_kernel_counter to create perf_event,
> assign an overflow handler. Then when the perf event overflows, set
> irq_pending and call kvm_vcpu_kick() to sync the interrupt.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm/kvm/arm.c    |  4 +++
>  include/kvm/arm_pmu.h |  4 +++
>  virt/kvm/arm/pmu.c    | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++-
>  3 files changed, 83 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 78b2869..9c0fec4 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -28,6 +28,7 @@
>  #include <linux/sched.h>
>  #include <linux/kvm.h>
>  #include <trace/events/kvm.h>
> +#include <kvm/arm_pmu.h>
>  
>  #define CREATE_TRACE_POINTS
>  #include "trace.h"
> @@ -551,6 +552,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  
>  		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm)) {
>  			local_irq_enable();
> +			kvm_pmu_sync_hwstate(vcpu);

This is very weird. Are you only injecting interrupts when a signal is
pending? I don't understand how this works...

>  			kvm_vgic_sync_hwstate(vcpu);
>  			preempt_enable();
>  			kvm_timer_sync_hwstate(vcpu);
> @@ -598,6 +600,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  		kvm_guest_exit();
>  		trace_kvm_exit(kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
>  
> +		kvm_pmu_post_sync_hwstate(vcpu);
> +
>  		kvm_vgic_sync_hwstate(vcpu);
>  
>  		preempt_enable();
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index acd025a..5e7f943 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -39,6 +39,8 @@ struct kvm_pmu {
>  };
>  
>  #ifdef CONFIG_KVM_ARM_PMU
> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
> +void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu);

Please follow the current terminology: _flush_ on VM entry, _sync_ on
VM exit.

>  unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
>  void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
>  void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
> @@ -49,6 +51,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>  				    u32 select_idx);
>  void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
>  #else
> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
> +void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu) {}
>  unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
>  {
>  	return 0;
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 11d1bfb..6d48d9a 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -21,6 +21,7 @@
>  #include <linux/perf_event.h>
>  #include <asm/kvm_emulate.h>
>  #include <kvm/arm_pmu.h>
> +#include <kvm/arm_vgic.h>
>  
>  /**
>   * kvm_pmu_get_counter_value - get PMU counter value
> @@ -69,6 +70,78 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
>  }
>  
>  /**
> + * kvm_pmu_sync_hwstate - sync pmu state for cpu
> + * @vcpu: The vcpu pointer
> + *
> + * Inject virtual PMU IRQ if IRQ is pending for this cpu.
> + */
> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	u32 overflow;
> +
> +	if (!vcpu_mode_is_32bit(vcpu))
> +		overflow = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
> +	else
> +		overflow = vcpu_cp15(vcpu, c9_PMOVSSET);
> +
> +	if ((pmu->irq_pending || overflow != 0) && (pmu->irq_num != -1))
> +		kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num, 1);
> +
> +	pmu->irq_pending = false;
> +}
> +
> +/**
> + * kvm_pmu_post_sync_hwstate - post sync pmu state for cpu
> + * @vcpu: The vcpu pointer
> + *
> + * Inject virtual PMU IRQ if IRQ is pending for this cpu when back from guest.
> + */
> +void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +
> +	if (pmu->irq_pending && (pmu->irq_num != -1))
> +		kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num, 1);
> +
> +	pmu->irq_pending = false;
> +}
> +
> +/**
> + * When perf event overflows, set irq_pending and call kvm_vcpu_kick() to inject
> + * the interrupt.
> + */
> +static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
> +				  struct perf_sample_data *data,
> +				  struct pt_regs *regs)
> +{
> +	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
> +	struct kvm_vcpu *vcpu = pmc->vcpu;
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	int idx = pmc->idx;
> +
> +	if (!vcpu_mode_is_32bit(vcpu)) {
> +		if ((vcpu_sys_reg(vcpu, PMINTENSET_EL1) >> idx) & 0x1) {
> +			__set_bit(idx,
> +			    (unsigned long *)&vcpu_sys_reg(vcpu, PMOVSSET_EL0));
> +			__set_bit(idx,
> +			    (unsigned long *)&vcpu_sys_reg(vcpu, PMOVSCLR_EL0));
> +			pmu->irq_pending = true;
> +			kvm_vcpu_kick(vcpu);
> +		}
> +	} else {
> +		if ((vcpu_cp15(vcpu, c9_PMINTENSET) >> idx) & 0x1) {
> +			__set_bit(idx,
> +				(unsigned long *)&vcpu_cp15(vcpu, c9_PMOVSSET));
> +			__set_bit(idx,
> +				(unsigned long *)&vcpu_cp15(vcpu, c9_PMOVSCLR));
> +			pmu->irq_pending = true;
> +			kvm_vcpu_kick(vcpu);

There is some obvious code factorization that can be done here.

> +		}
> +	}
> +}
> +
> +/**
>   * kvm_pmu_enable_counter - enable selected PMU counter
>   * @vcpu: The vcpu pointer
>   * @val: the value guest writes to PMCNTENSET register
> @@ -293,7 +366,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>  	/* The initial sample period (overflow count) of an event. */
>  	attr.sample_period = (-counter) & pmc->bitmask;
>  
> -	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
> +	event = perf_event_create_kernel_counter(&attr, -1, current,
> +						 kvm_pmu_perf_overflow, pmc);
>  	if (IS_ERR(event)) {
>  		printk_once("kvm: pmu event creation failed %ld\n",
>  			    PTR_ERR(event));

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

* Re: [PATCH v4 21/21] KVM: ARM64: Add a new kvm ARM PMU device
  2015-10-30  6:22   ` Shannon Zhao
  (?)
@ 2015-11-30 18:31     ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-11-30 18:31 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, linux-arm-kernel, kvm, christoffer.dall, will.deacon,
	alex.bennee, wei, cov, shannon.zhao, peter.huangpeng

On Fri, 30 Oct 2015 14:22:03 +0800
Shannon Zhao <zhaoshenglong@huawei.com> wrote:

> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Add a new kvm device type KVM_DEV_TYPE_ARM_PMU_V3 for ARM PMU. Implement
> the kvm_device_ops for it.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  Documentation/virtual/kvm/devices/arm-pmu.txt | 15 +++++
>  arch/arm64/include/uapi/asm/kvm.h             |  3 +
>  include/linux/kvm_host.h                      |  1 +
>  include/uapi/linux/kvm.h                      |  2 +
>  virt/kvm/arm/pmu.c                            | 92 +++++++++++++++++++++++++++
>  virt/kvm/arm/vgic.c                           |  8 +++
>  virt/kvm/arm/vgic.h                           |  1 +
>  virt/kvm/kvm_main.c                           |  4 ++
>  8 files changed, 126 insertions(+)
>  create mode 100644 Documentation/virtual/kvm/devices/arm-pmu.txt
> 
> diff --git a/Documentation/virtual/kvm/devices/arm-pmu.txt b/Documentation/virtual/kvm/devices/arm-pmu.txt
> new file mode 100644
> index 0000000..49481c4
> --- /dev/null
> +++ b/Documentation/virtual/kvm/devices/arm-pmu.txt
> @@ -0,0 +1,15 @@
> +ARM Virtual Performance Monitor Unit (vPMU)
> +===========================================
> +
> +Device types supported:
> +  KVM_DEV_TYPE_ARM_PMU_V3         ARM Performance Monitor Unit v3
> +
> +Instantiate one PMU instance per VCPU through this API.
> +
> +Groups:
> +  KVM_DEV_ARM_PMU_GRP_IRQ
> +  Attributes:
> +    A value describing the interrupt number of PMU overflow interrupt.
> +
> +  Errors:
> +    -EINVAL: Value set is out of the expected range

What is the expected range?

> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index 0cd7b59..1309a93 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -204,6 +204,9 @@ struct kvm_arch_memory_slot {
>  #define KVM_DEV_ARM_VGIC_GRP_CTRL	4
>  #define   KVM_DEV_ARM_VGIC_CTRL_INIT	0
>  
> +/* Device Control API: ARM PMU */
> +#define KVM_DEV_ARM_PMU_GRP_IRQ		0
> +
>  /* KVM_IRQ_LINE irq field index values */
>  #define KVM_ARM_IRQ_TYPE_SHIFT		24
>  #define KVM_ARM_IRQ_TYPE_MASK		0xff
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 1bef9e2..f6be696 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1122,6 +1122,7 @@ extern struct kvm_device_ops kvm_mpic_ops;
>  extern struct kvm_device_ops kvm_xics_ops;
>  extern struct kvm_device_ops kvm_arm_vgic_v2_ops;
>  extern struct kvm_device_ops kvm_arm_vgic_v3_ops;
> +extern struct kvm_device_ops kvm_arm_pmu_ops;
>  
>  #ifdef CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT
>  
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index a9256f0..f41e6b6 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1025,6 +1025,8 @@ enum kvm_device_type {
>  #define KVM_DEV_TYPE_FLIC		KVM_DEV_TYPE_FLIC
>  	KVM_DEV_TYPE_ARM_VGIC_V3,
>  #define KVM_DEV_TYPE_ARM_VGIC_V3	KVM_DEV_TYPE_ARM_VGIC_V3
> +	KVM_DEV_TYPE_ARM_PMU_V3,
> +#define	KVM_DEV_TYPE_ARM_PMU_V3		KVM_DEV_TYPE_ARM_PMU_V3
>  	KVM_DEV_TYPE_MAX,
>  };
>  
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index d78ce7b..0a00d04 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -19,10 +19,13 @@
>  #include <linux/kvm.h>
>  #include <linux/kvm_host.h>
>  #include <linux/perf_event.h>
> +#include <linux/uaccess.h>
>  #include <asm/kvm_emulate.h>
>  #include <kvm/arm_pmu.h>
>  #include <kvm/arm_vgic.h>
>  
> +#include "vgic.h"
> +
>  /**
>   * kvm_pmu_get_counter_value - get PMU counter value
>   * @vcpu: The vcpu pointer
> @@ -416,3 +419,92 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>  
>  	pmc->perf_event = event;
>  }
> +
> +static int kvm_arm_pmu_set_irq(struct kvm *kvm, int irq)
> +{
> +	int j;
> +	struct kvm_vcpu *vcpu;
> +
> +	kvm_for_each_vcpu(j, vcpu, kvm) {
> +		struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +
> +		kvm_debug("Set kvm ARM PMU irq: %d\n", irq);
> +		pmu->irq_num = irq;
> +		vgic_dist_irq_set_cfg(vcpu, irq, true);
> +	}

So obviously, the irq must be a PPI, since all vcpus are getting the
same one. Worth documenting.

> +
> +	return 0;
> +}
> +
> +static int kvm_arm_pmu_create(struct kvm_device *dev, u32 type)
> +{
> +	int i, j;
> +	struct kvm_vcpu *vcpu;
> +	struct kvm *kvm = dev->kvm;
> +
> +	kvm_for_each_vcpu(j, vcpu, kvm) {
> +		struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +
> +		memset(pmu, 0, sizeof(*pmu));
> +		for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
> +			pmu->pmc[i].idx = i;
> +			pmu->pmc[i].vcpu = vcpu;
> +			pmu->pmc[i].bitmask = 0xffffffffUL;
> +		}
> +		pmu->irq_num = -1;
> +	}

Surely this can be shared with the reset code?

> +
> +	return 0;
> +}
> +
> +static void kvm_arm_pmu_destroy(struct kvm_device *dev)
> +{
> +	kfree(dev);
> +}
> +
> +static int kvm_arm_pmu_set_attr(struct kvm_device *dev,
> +				struct kvm_device_attr *attr)
> +{
> +	switch (attr->group) {
> +	case KVM_DEV_ARM_PMU_GRP_IRQ: {
> +		int __user *uaddr = (int __user *)(long)attr->addr;
> +		int reg;
> +
> +		if (get_user(reg, uaddr))
> +			return -EFAULT;
> +
> +		if (reg < VGIC_NR_SGIS || reg > dev->kvm->arch.vgic.nr_irqs)
> +			return -EINVAL;

On the other hand, this doesn't prevent an SPI from being used.
Something is wrong.

> +
> +		return kvm_arm_pmu_set_irq(dev->kvm, reg);
> +	}
> +	}
> +
> +	return -ENXIO;
> +}
> +
> +static int kvm_arm_pmu_get_attr(struct kvm_device *dev,
> +				struct kvm_device_attr *attr)
> +{
> +	return 0;
> +}
> +
> +static int kvm_arm_pmu_has_attr(struct kvm_device *dev,
> +				struct kvm_device_attr *attr)
> +{
> +	switch (attr->group) {
> +	case KVM_DEV_ARM_PMU_GRP_IRQ:
> +		return 0;
> +	}
> +
> +	return -ENXIO;
> +}
> +
> +struct kvm_device_ops kvm_arm_pmu_ops = {
> +	.name = "kvm-arm-pmu",
> +	.create = kvm_arm_pmu_create,
> +	.destroy = kvm_arm_pmu_destroy,
> +	.set_attr = kvm_arm_pmu_set_attr,
> +	.get_attr = kvm_arm_pmu_get_attr,
> +	.has_attr = kvm_arm_pmu_has_attr,
> +};
> diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
> index 66c6616..8e00987 100644
> --- a/virt/kvm/arm/vgic.c
> +++ b/virt/kvm/arm/vgic.c
> @@ -380,6 +380,14 @@ void vgic_dist_irq_clear_pending(struct kvm_vcpu *vcpu, int irq)
>  	vgic_bitmap_set_irq_val(&dist->irq_pending, vcpu->vcpu_id, irq, 0);
>  }
>  
> +void vgic_dist_irq_set_cfg(struct kvm_vcpu *vcpu, int irq, bool level)
> +{
> +	struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
> +
> +	vgic_bitmap_set_irq_val(&dist->irq_cfg, vcpu->vcpu_id, irq,
> +				level ? VGIC_CFG_LEVEL : VGIC_CFG_EDGE);
> +}
> +

This has nothing to do here. If the interrupt must be configured, it
should be explicit, not hidden here.

>  static void vgic_cpu_irq_set(struct kvm_vcpu *vcpu, int irq)
>  {
>  	if (irq < VGIC_NR_PRIVATE_IRQS)
> diff --git a/virt/kvm/arm/vgic.h b/virt/kvm/arm/vgic.h
> index 0df74cb..eb814f5 100644
> --- a/virt/kvm/arm/vgic.h
> +++ b/virt/kvm/arm/vgic.h
> @@ -49,6 +49,7 @@ u32 *vgic_bytemap_get_reg(struct vgic_bytemap *x, int cpuid, u32 offset);
>  
>  void vgic_dist_irq_set_pending(struct kvm_vcpu *vcpu, int irq);
>  void vgic_dist_irq_clear_pending(struct kvm_vcpu *vcpu, int irq);
> +void vgic_dist_irq_set_cfg(struct kvm_vcpu *vcpu, int irq, bool level);
>  void vgic_cpu_irq_clear(struct kvm_vcpu *vcpu, int irq);
>  void vgic_bitmap_set_irq_val(struct vgic_bitmap *x, int cpuid,
>  			     int irq, int val);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 8db1d93..5decfb5 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2641,6 +2641,10 @@ static struct kvm_device_ops *kvm_device_ops_table[KVM_DEV_TYPE_MAX] = {
>  #ifdef CONFIG_KVM_XICS
>  	[KVM_DEV_TYPE_XICS]		= &kvm_xics_ops,
>  #endif
> +
> +#ifdef CONFIG_KVM_ARM_PMU
> +	[KVM_DEV_TYPE_ARM_PMU_V3]	= &kvm_arm_pmu_ops,
> +#endif
>  };
>  
>  int kvm_register_device_ops(struct kvm_device_ops *ops, u32 type)

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 00/21] KVM: ARM64: Add guest PMU support
  2015-10-30  6:21 ` Shannon Zhao
  (?)
@ 2015-11-30 18:34   ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-11-30 18:34 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, linux-arm-kernel, kvm, christoffer.dall, will.deacon,
	alex.bennee, wei, cov, shannon.zhao, peter.huangpeng

On Fri, 30 Oct 2015 14:21:42 +0800
Shannon Zhao <zhaoshenglong@huawei.com> wrote:

Hi Shannon,

> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> This patchset adds guest PMU support for KVM on ARM64. It takes
> trap-and-emulate approach. When guest wants to monitor one event, it
> will be trapped by KVM and KVM will call perf_event API to create a perf
> event and call relevant perf_event APIs to get the count value of event.

I've been through this whole series, and while this is shaping nicely,
there is still a number of things that are a bit odd (interrupt
injection is one, the whole CP15 reset is another).

Can you please respin this soon? I'd really like to have this in for
4.5...

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 05/21] KVM: ARM64: Add reset and access handlers for PMSELR register
  2015-11-30 17:56     ` Marc Zyngier
@ 2015-12-01  1:51       ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-12-01  1:51 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvm, shannon.zhao, will.deacon, linux-arm-kernel, kvmarm

Hi Marc,

On 2015/12/1 1:56, Marc Zyngier wrote:
> Same remark here as the one I made earlier. I'm pretty sure we don't
> call any CP15 reset because they are all shared with their 64bit
> counterparts. The same thing goes for the whole series.
Ok, I see. But within the 64bit reset function, it needs to update the
32bit register value, right? When accessing these 32bit registers, the
code uses the offset c9_PMXXXX.

Thanks,
-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 00/21] KVM: ARM64: Add guest PMU support
  2015-11-30 18:34   ` Marc Zyngier
  (?)
@ 2015-12-01  1:52     ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-12-01  1:52 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, christoffer.dall, will.deacon,
	alex.bennee, wei, cov, shannon.zhao, peter.huangpeng

Hi Marc,

On 2015/12/1 2:34, Marc Zyngier wrote:
> On Fri, 30 Oct 2015 14:21:42 +0800
> Shannon Zhao <zhaoshenglong@huawei.com> wrote:
> 
> Hi Shannon,
> 
>> > From: Shannon Zhao <shannon.zhao@linaro.org>
>> > 
>> > This patchset adds guest PMU support for KVM on ARM64. It takes
>> > trap-and-emulate approach. When guest wants to monitor one event, it
>> > will be trapped by KVM and KVM will call perf_event API to create a perf
>> > event and call relevant perf_event APIs to get the count value of event.
> I've been through this whole series, and while this is shaping nicely,
> there is still a number of things that are a bit odd (interrupt
> injection is one, the whole CP15 reset is another).
> 
> Can you please respin this soon? I'd really like to have this in for
> 4.5...

Thanks! I will respin it soon.

-- 
Shannon


^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 08/21] KVM: ARM64: Add reset and access handlers for PMXEVTYPER register
  2015-11-30 18:12     ` Marc Zyngier
@ 2015-12-01  2:42       ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-12-01  2:42 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvm, shannon.zhao, will.deacon, linux-arm-kernel, kvmarm



On 2015/12/1 2:12, Marc Zyngier wrote:
> On Fri, 30 Oct 2015 14:21:50 +0800
> Shannon Zhao <zhaoshenglong@huawei.com> wrote:
> 
>> > From: Shannon Zhao <shannon.zhao@linaro.org>
>> > 
>> > Since the reset value of PMXEVTYPER is UNKNOWN, use reset_unknown or
>> > reset_unknown_cp15 for its reset handler. Add access handler which
>> > emulates writing and reading PMXEVTYPER register. When writing to
>> > PMXEVTYPER, call kvm_pmu_set_counter_event_type to create a perf_event
>> > for the selected event type.
>> > 
>> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> > ---
>> >  arch/arm64/kvm/sys_regs.c | 26 ++++++++++++++++++++++++--
>> >  1 file changed, 24 insertions(+), 2 deletions(-)
>> > 
>> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> > index cb82b15..4e606ea 100644
>> > --- a/arch/arm64/kvm/sys_regs.c
>> > +++ b/arch/arm64/kvm/sys_regs.c
>> > @@ -491,6 +491,17 @@ static bool access_pmu_regs(struct kvm_vcpu *vcpu,
>> >  
>> >  	if (p->is_write) {
>> >  		switch (r->reg) {
>> > +		case PMXEVTYPER_EL0: {
>> > +			val = vcpu_sys_reg(vcpu, PMSELR_EL0);
>> > +			kvm_pmu_set_counter_event_type(vcpu,
>> > +						       *vcpu_reg(vcpu, p->Rt),
>> > +						       val);
> You are blindly truncating 64bit values to u32. Is that intentional?
> 
Yeah, the register PMXEVTYPER_EL0 and PMSELR_EL0 are all 32bit.

-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 05/21] KVM: ARM64: Add reset and access handlers for PMSELR register
  2015-12-01  1:51       ` Shannon Zhao
@ 2015-12-01  8:49         ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-12-01  8:49 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, linux-arm-kernel, kvm, christoffer.dall, will.deacon,
	alex.bennee, wei, cov, shannon.zhao, peter.huangpeng

On 01/12/15 01:51, Shannon Zhao wrote:
> Hi Marc,
> 
> On 2015/12/1 1:56, Marc Zyngier wrote:
>> Same remark here as the one I made earlier. I'm pretty sure we don't
>> call any CP15 reset because they are all shared with their 64bit
>> counterparts. The same thing goes for the whole series.
> Ok, I see. But within the 64bit reset function, it needs to update the
> 32bit register value, right? Since when accessing these 32bit registers,
> it uses the offset c9_PMXXXX.

It shouldn't, because the 64-bit and 32-bit views share the same storage. From
your own patch:

+/* Performance Monitors*/
+#define c9_PMCR		(PMCR_EL0 * 2)
+#define c9_PMOVSSET	(PMOVSSET_EL0 * 2)
+#define c9_PMOVSCLR	(PMOVSCLR_EL0 * 2)
+#define c9_PMCCNTR	(PMCCNTR_EL0 * 2)
+#define c9_PMSELR	(PMSELR_EL0 * 2)
+#define c9_PMCEID0	(PMCEID0_EL0 * 2)
+#define c9_PMCEID1	(PMCEID1_EL0 * 2)
+#define c9_PMXEVCNTR	(PMXEVCNTR_EL0 * 2)
+#define c9_PMXEVTYPER	(PMXEVTYPER_EL0 * 2)
+#define c9_PMCNTENSET	(PMCNTENSET_EL0 * 2)
+#define c9_PMCNTENCLR	(PMCNTENCLR_EL0 * 2)
+#define c9_PMINTENSET	(PMINTENSET_EL1 * 2)
+#define c9_PMINTENCLR	(PMINTENCLR_EL1 * 2)
+#define c9_PMUSERENR	(PMUSERENR_EL0 * 2)
+#define c9_PMSWINC	(PMSWINC_EL0 * 2)

These are indexes in the copro array:

struct kvm_cpu_context {
	struct kvm_regs	gp_regs;
	union {
		u64 sys_regs[NR_SYS_REGS];
		u32 copro[NR_COPRO_REGS];
	};
};

which is in a union with the sys_reg array. So anything that affects one
affects the other because:
- there is only one state in the physical CPU, no matter which mode
you're in,
- the guest EL1 is either 32bit or 64bit, and never changes over time.

Hope this helps,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 05/21] KVM: ARM64: Add reset and access handlers for PMSELR register
  2015-12-01  8:49         ` Marc Zyngier
@ 2015-12-01 12:46           ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-12-01 12:46 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: wei, kvm, shannon.zhao, will.deacon, peter.huangpeng,
	linux-arm-kernel, alex.bennee, kvmarm, christoffer.dall, cov



On 2015/12/1 16:49, Marc Zyngier wrote:
> On 01/12/15 01:51, Shannon Zhao wrote:
>> Hi Marc,
>>
>> On 2015/12/1 1:56, Marc Zyngier wrote:
>>> Same remark here as the one I made earlier. I'm pretty sure we don't
>>> call any CP15 reset because they are all shared with their 64bit
>>> counterparts. The same thing goes for the whole series.
>> Ok, I see. But within the 64bit reset function, it needs to update the
>> 32bit register value, right? Since when accessing these 32bit registers,
>> it uses the offset c9_PMXXXX.
> 
> It shouldn't,  because the 64bit and 32bit share the same storage. From
> your own patch:
> 
> +/* Performance Monitors*/
> +#define c9_PMCR		(PMCR_EL0 * 2)
> +#define c9_PMOVSSET	(PMOVSSET_EL0 * 2)
> +#define c9_PMOVSCLR	(PMOVSCLR_EL0 * 2)
> +#define c9_PMCCNTR	(PMCCNTR_EL0 * 2)
> +#define c9_PMSELR	(PMSELR_EL0 * 2)
> +#define c9_PMCEID0	(PMCEID0_EL0 * 2)
> +#define c9_PMCEID1	(PMCEID1_EL0 * 2)
> +#define c9_PMXEVCNTR	(PMXEVCNTR_EL0 * 2)
> +#define c9_PMXEVTYPER	(PMXEVTYPER_EL0 * 2)
> +#define c9_PMCNTENSET	(PMCNTENSET_EL0 * 2)
> +#define c9_PMCNTENCLR	(PMCNTENCLR_EL0 * 2)
> +#define c9_PMINTENSET	(PMINTENSET_EL1 * 2)
> +#define c9_PMINTENCLR	(PMINTENCLR_EL1 * 2)
> +#define c9_PMUSERENR	(PMUSERENR_EL0 * 2)
> +#define c9_PMSWINC	(PMSWINC_EL0 * 2)
> 
> These are indexes in the copro array:
> 
> struct kvm_cpu_context {
> 	struct kvm_regs	gp_regs;
> 	union {
> 		u64 sys_regs[NR_SYS_REGS];
> 		u32 copro[NR_COPRO_REGS];
> 	};
> };
> 
> which is in a union with the sys_reg array. So anything that affects one
> affects the other because:
> - there is only one state in the physical CPU, no matter which mode
> you're in,
> - the guest EL1 is either 32bit or 64bit, and never changes over time.
> 
> Hope this helps,
> 
Ok, I see. Thanks for the explanation. :)

-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-11-30 18:22     ` Marc Zyngier
@ 2015-12-01 14:35       ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-12-01 14:35 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvm, shannon.zhao, will.deacon, linux-arm-kernel, kvmarm



On 2015/12/1 2:22, Marc Zyngier wrote:
> On Fri, 30 Oct 2015 14:22:00 +0800
> Shannon Zhao <zhaoshenglong@huawei.com> wrote:
> 
>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>
>> When calling perf_event_create_kernel_counter to create perf_event,
>> assign an overflow handler. Then when the perf event overflows, set
>> irq_pending and call kvm_vcpu_kick() to sync the interrupt.
>>
>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> ---
>>  arch/arm/kvm/arm.c    |  4 +++
>>  include/kvm/arm_pmu.h |  4 +++
>>  virt/kvm/arm/pmu.c    | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++-
>>  3 files changed, 83 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>> index 78b2869..9c0fec4 100644
>> --- a/arch/arm/kvm/arm.c
>> +++ b/arch/arm/kvm/arm.c
>> @@ -28,6 +28,7 @@
>>  #include <linux/sched.h>
>>  #include <linux/kvm.h>
>>  #include <trace/events/kvm.h>
>> +#include <kvm/arm_pmu.h>
>>  
>>  #define CREATE_TRACE_POINTS
>>  #include "trace.h"
>> @@ -551,6 +552,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>  
>>  		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm)) {
>>  			local_irq_enable();
>> +			kvm_pmu_sync_hwstate(vcpu);
> 
> This is very weird. Are you only injecting interrupts when a signal is
> pending? I don't understand how this works...
> 
>>  			kvm_vgic_sync_hwstate(vcpu);
>>  			preempt_enable();
>>  			kvm_timer_sync_hwstate(vcpu);
>> @@ -598,6 +600,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>  		kvm_guest_exit();
>>  		trace_kvm_exit(kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
>>  
>> +		kvm_pmu_post_sync_hwstate(vcpu);
>> +
>>  		kvm_vgic_sync_hwstate(vcpu);
>>  
>>  		preempt_enable();
>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>> index acd025a..5e7f943 100644
>> --- a/include/kvm/arm_pmu.h
>> +++ b/include/kvm/arm_pmu.h
>> @@ -39,6 +39,8 @@ struct kvm_pmu {
>>  };
>>  
>>  #ifdef CONFIG_KVM_ARM_PMU
>> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
>> +void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu);
> 
> Please follow the current terminology: _flush_ on VM entry, _sync_ on
> VM exit.
> 

Hi Marc,

Is below patch the right way for this?

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 78b2869..84008d1 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -28,6 +28,7 @@
 #include <linux/sched.h>
 #include <linux/kvm.h>
 #include <trace/events/kvm.h>
+#include <kvm/arm_pmu.h>

 #define CREATE_TRACE_POINTS
 #include "trace.h"
@@ -531,6 +532,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
                 */
                kvm_timer_flush_hwstate(vcpu);

+               kvm_pmu_flush_hwstate(vcpu);
+
                /*
                 * Preparing the interrupts to be injected also
                 * involves poking the GIC, which must be done in a
@@ -554,6 +557,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
                        kvm_vgic_sync_hwstate(vcpu);
                        preempt_enable();
                        kvm_timer_sync_hwstate(vcpu);
+                       kvm_pmu_sync_hwstate(vcpu);
                        continue;
                }

@@ -604,6 +608,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)

                kvm_timer_sync_hwstate(vcpu);

+               kvm_pmu_sync_hwstate(vcpu);
+
                ret = handle_exit(vcpu, run, ret);
        }

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 47bbd43..edfe4e5 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -41,6 +41,8 @@ struct kvm_pmu {
 };

 #ifdef CONFIG_KVM_ARM_PMU
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
+void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
 unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
@@ -51,6 +53,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
                                    u32 select_idx);
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
 #else
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
+void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
 unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
 {
        return 0;
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 15cac45..9aad2f7 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -21,6 +21,7 @@
 #include <linux/perf_event.h>
 #include <asm/kvm_emulate.h>
 #include <kvm/arm_pmu.h>
+#include <kvm/arm_vgic.h>

 /**
  * kvm_pmu_get_counter_value - get PMU counter value
@@ -79,6 +80,78 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
 }

 /**
+ * kvm_pmu_flush_hwstate - flush pmu state to cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Inject virtual PMU IRQ if IRQ is pending for this cpu.
+ */
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
+{
+       struct kvm_pmu *pmu = &vcpu->arch.pmu;
+       u32 overflow;
+
+       if (!vcpu_mode_is_32bit(vcpu))
+               overflow = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+       else
+               overflow = vcpu_cp15(vcpu, c9_PMOVSSET);
+
+       if ((pmu->irq_pending || overflow != 0) && (pmu->irq_num != -1))
+       kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num, 1);
+
+       pmu->irq_pending = false;
+}
+
+/**
+ * kvm_pmu_sync_hwstate - sync pmu state for cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Inject virtual PMU IRQ if IRQ is pending for this cpu when back from guest.
+ */
+void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
+{
+       struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+       if (pmu->irq_pending && (pmu->irq_num != -1))
+       kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num, 1);
+
+       pmu->irq_pending = false;
+}

-- 
Shannon

^ permalink raw reply related	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-12-01 14:35       ` Shannon Zhao
@ 2015-12-01 14:50         ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-12-01 14:50 UTC (permalink / raw)
  To: Shannon Zhao; +Cc: kvm, shannon.zhao, will.deacon, linux-arm-kernel, kvmarm

On 01/12/15 14:35, Shannon Zhao wrote:
> 
> 
> On 2015/12/1 2:22, Marc Zyngier wrote:
>> On Fri, 30 Oct 2015 14:22:00 +0800
>> Shannon Zhao <zhaoshenglong@huawei.com> wrote:
>>
>>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>>
>>> When calling perf_event_create_kernel_counter to create perf_event,
>>> assign an overflow handler. Then when the perf event overflows, set
>>> irq_pending and call kvm_vcpu_kick() to sync the interrupt.
>>>
>>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>>> ---
>>>  arch/arm/kvm/arm.c    |  4 +++
>>>  include/kvm/arm_pmu.h |  4 +++
>>>  virt/kvm/arm/pmu.c    | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++-
>>>  3 files changed, 83 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>>> index 78b2869..9c0fec4 100644
>>> --- a/arch/arm/kvm/arm.c
>>> +++ b/arch/arm/kvm/arm.c
>>> @@ -28,6 +28,7 @@
>>>  #include <linux/sched.h>
>>>  #include <linux/kvm.h>
>>>  #include <trace/events/kvm.h>
>>> +#include <kvm/arm_pmu.h>
>>>  
>>>  #define CREATE_TRACE_POINTS
>>>  #include "trace.h"
>>> @@ -551,6 +552,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>>  
>>>  		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm)) {
>>>  			local_irq_enable();
>>> +			kvm_pmu_sync_hwstate(vcpu);
>>
>> This is very weird. Are you only injecting interrupts when a signal is
>> pending? I don't understand how this works...
>>
>>>  			kvm_vgic_sync_hwstate(vcpu);
>>>  			preempt_enable();
>>>  			kvm_timer_sync_hwstate(vcpu);
>>> @@ -598,6 +600,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>>  		kvm_guest_exit();
>>>  		trace_kvm_exit(kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
>>>  
>>> +		kvm_pmu_post_sync_hwstate(vcpu);
>>> +
>>>  		kvm_vgic_sync_hwstate(vcpu);
>>>  
>>>  		preempt_enable();
>>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>>> index acd025a..5e7f943 100644
>>> --- a/include/kvm/arm_pmu.h
>>> +++ b/include/kvm/arm_pmu.h
>>> @@ -39,6 +39,8 @@ struct kvm_pmu {
>>>  };
>>>  
>>>  #ifdef CONFIG_KVM_ARM_PMU
>>> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
>>> +void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu);
>>
>> Please follow the current terminology: _flush_ on VM entry, _sync_ on
>> VM exit.
>>
> 
> Hi Marc,
> 
> Is below patch the right way for this?
> 
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 78b2869..84008d1 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -28,6 +28,7 @@
>  #include <linux/sched.h>
>  #include <linux/kvm.h>
>  #include <trace/events/kvm.h>
> +#include <kvm/arm_pmu.h>
> 
>  #define CREATE_TRACE_POINTS
>  #include "trace.h"
> @@ -531,6 +532,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>                  */
>                 kvm_timer_flush_hwstate(vcpu);
> 
> +               kvm_pmu_flush_hwstate(vcpu);
> +
>                 /*
>                  * Preparing the interrupts to be injected also
>                  * involves poking the GIC, which must be done in a
> @@ -554,6 +557,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>                         kvm_vgic_sync_hwstate(vcpu);
>                         preempt_enable();
>                         kvm_timer_sync_hwstate(vcpu);
> +                       kvm_pmu_sync_hwstate(vcpu);
>                         continue;
>                 }
> 
> @@ -604,6 +608,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
> 
>                 kvm_timer_sync_hwstate(vcpu);
> 
> +               kvm_pmu_sync_hwstate(vcpu);
> +
>                 ret = handle_exit(vcpu, run, ret);
>         }

yeah, that's more like it!

> 
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index 47bbd43..edfe4e5 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -41,6 +41,8 @@ struct kvm_pmu {
>  };
> 
>  #ifdef CONFIG_KVM_ARM_PMU
> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
>  unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx);
>  void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
>  void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool all_enable);
> @@ -51,6 +53,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u32 data,
>                                     u32 select_idx);
>  void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
>  #else
> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
>  unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32 select_idx)
>  {
>         return 0;
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 15cac45..9aad2f7 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -21,6 +21,7 @@
>  #include <linux/perf_event.h>
>  #include <asm/kvm_emulate.h>
>  #include <kvm/arm_pmu.h>
> +#include <kvm/arm_vgic.h>
> 
>  /**
>   * kvm_pmu_get_counter_value - get PMU counter value
> @@ -79,6 +80,78 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
>  }
> 
>  /**
> + * kvm_pmu_flush_hwstate - flush pmu state to cpu
> + * @vcpu: The vcpu pointer
> + *
> + * Inject virtual PMU IRQ if IRQ is pending for this cpu.
> + */
> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
> +{
> +       struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +       u32 overflow;
> +
> +       if (!vcpu_mode_is_32bit(vcpu))
> +               overflow = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
> +       else
> +               overflow = vcpu_cp15(vcpu, c9_PMOVSSET);
> +
> +       if ((pmu->irq_pending || overflow != 0) && (pmu->irq_num != -1))
> +               kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num, 1);
> +
> +       pmu->irq_pending = false;

Now, we get to the critical point. Why do you need to keep this shadow
state for the interrupt?

The way I see it, you should set the line high when the overflow has
been registered, and set it low when the overflow condition has been
cleared by the guest. And nothing else.

> +}
> +
> +/**
> + * kvm_pmu_sync_hwstate - sync pmu state for cpu
> + * @vcpu: The vcpu pointer
> + *
> + * Inject virtual PMU IRQ if IRQ is pending for this cpu when back from guest.
> + */
> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
> +{
> +       struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +
> +       if (pmu->irq_pending && (pmu->irq_num != -1))
> +               kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num, 1);
> +
> +       pmu->irq_pending = false;
> +}
> 

Why do you have to do it twice??

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 142+ messages in thread

> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -21,6 +21,7 @@
>  #include <linux/perf_event.h>
>  #include <asm/kvm_emulate.h>
>  #include <kvm/arm_pmu.h>
> +#include <kvm/arm_vgic.h>
> 
>  /**
>   * kvm_pmu_get_counter_value - get PMU counter value
> @@ -79,6 +80,78 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
>  }
> 
>  /**
> + * kvm_pmu_flush_hwstate - flush pmu state to cpu
> + * @vcpu: The vcpu pointer
> + *
> + * Inject virtual PMU IRQ if IRQ is pending for this cpu.
> + */
> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
> +{
> +       struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +       u32 overflow;
> +
> +       if (!vcpu_mode_is_32bit(vcpu))
> +               overflow = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
> +       else
> +               overflow = vcpu_cp15(vcpu, c9_PMOVSSET);
> +
> +       if ((pmu->irq_pending || overflow != 0) && (pmu->irq_num != -1))
> +               kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
> pmu->irq_num, 1);
> +
> +       pmu->irq_pending = false;

Now, we get to the critical point. Why do you need to keep this shadow
state for the interrupt?

The way I see it, you should set the line high when the overflow has
been registered, and set it low when the overflow condition has been
cleared by the guest. And nothing else.

> +}
> +
> +/**
> + * kvm_pmu_sync_hwstate - sync pmu state for cpu
> + * @vcpu: The vcpu pointer
> + *
> + * Inject virtual PMU IRQ if IRQ is pending for this cpu when back from
> guest.
> + */
> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
> +{
> +       struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +
> +       if (pmu->irq_pending && (pmu->irq_num != -1))
> +               kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
> pmu->irq_num, 1);
> +
> +       pmu->irq_pending = false;
> +}
> 

Why do you have to do it twice??

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-12-01 14:50         ` Marc Zyngier
@ 2015-12-01 15:13           ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-12-01 15:13 UTC (permalink / raw)
  To: Marc Zyngier, Shannon Zhao
  Cc: kvmarm, linux-arm-kernel, kvm, christoffer.dall, will.deacon,
	alex.bennee, wei, cov, peter.huangpeng



On 2015/12/1 22:50, Marc Zyngier wrote:
> On 01/12/15 14:35, Shannon Zhao wrote:
>>
>>
>> On 2015/12/1 2:22, Marc Zyngier wrote:
>>> On Fri, 30 Oct 2015 14:22:00 +0800
>>> Shannon Zhao <zhaoshenglong@huawei.com> wrote:
>>>
>>>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>>>
>>>> When calling perf_event_create_kernel_counter to create perf_event,
>>>> assign a overflow handler. Then when perf event overflows, set
>>>> irq_pending and call kvm_vcpu_kick() to sync the interrupt.
>>>>
>>>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>>>> ---
>>>>   arch/arm/kvm/arm.c    |  4 +++
>>>>   include/kvm/arm_pmu.h |  4 +++
>>>>   virt/kvm/arm/pmu.c    | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++-
>>>>   3 files changed, 83 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>>>> index 78b2869..9c0fec4 100644
>>>> --- a/arch/arm/kvm/arm.c
>>>> +++ b/arch/arm/kvm/arm.c
>>>> @@ -28,6 +28,7 @@
>>>>   #include <linux/sched.h>
>>>>   #include <linux/kvm.h>
>>>>   #include <trace/events/kvm.h>
>>>> +#include <kvm/arm_pmu.h>
>>>>
>>>>   #define CREATE_TRACE_POINTS
>>>>   #include "trace.h"
>>>> @@ -551,6 +552,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>>>
>>>>   		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm)) {
>>>>   			local_irq_enable();
>>>> +			kvm_pmu_sync_hwstate(vcpu);
>>>
>>> This is very weird. Are you only injecting interrupts when a signal is
>>> pending? I don't understand how this works...
>>>
>>>>   			kvm_vgic_sync_hwstate(vcpu);
>>>>   			preempt_enable();
>>>>   			kvm_timer_sync_hwstate(vcpu);
>>>> @@ -598,6 +600,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>>>   		kvm_guest_exit();
>>>>   		trace_kvm_exit(kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
>>>>
>>>> +		kvm_pmu_post_sync_hwstate(vcpu);
>>>> +
>>>>   		kvm_vgic_sync_hwstate(vcpu);
>>>>
>>>>   		preempt_enable();
>>>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>>>> index acd025a..5e7f943 100644
>>>> --- a/include/kvm/arm_pmu.h
>>>> +++ b/include/kvm/arm_pmu.h
>>>> @@ -39,6 +39,8 @@ struct kvm_pmu {
>>>>   };
>>>>
>>>>   #ifdef CONFIG_KVM_ARM_PMU
>>>> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
>>>> +void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu);
>>>
>>> Please follow the current terminology: _flush_ on VM entry, _sync_ on
>>> VM exit.
>>>
>>
>> Hi Marc,
>>
>> Is below patch the right way for this?
>>
>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>> index 78b2869..84008d1 100644
>> --- a/arch/arm/kvm/arm.c
>> +++ b/arch/arm/kvm/arm.c
>> @@ -28,6 +28,7 @@
>>   #include <linux/sched.h>
>>   #include <linux/kvm.h>
>>   #include <trace/events/kvm.h>
>> +#include <kvm/arm_pmu.h>
>>
>>   #define CREATE_TRACE_POINTS
>>   #include "trace.h"
>> @@ -531,6 +532,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu,
>> struct kvm_run *run)
>>                   */
>>                  kvm_timer_flush_hwstate(vcpu);
>>
>> +               kvm_pmu_flush_hwstate(vcpu);
>> +
>>                  /*
>>                   * Preparing the interrupts to be injected also
>>                   * involves poking the GIC, which must be done in a
>> @@ -554,6 +557,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu,
>> struct kvm_run *run)
>>                          kvm_vgic_sync_hwstate(vcpu);
>>                          preempt_enable();
>>                          kvm_timer_sync_hwstate(vcpu);
>> +                       kvm_pmu_sync_hwstate(vcpu);
>>                          continue;
>>                  }
>>
>> @@ -604,6 +608,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu,
>> struct kvm_run *run)
>>
>>                  kvm_timer_sync_hwstate(vcpu);
>>
>> +               kvm_pmu_sync_hwstate(vcpu);
>> +
>>                  ret = handle_exit(vcpu, run, ret);
>>          }
>
> yeah, that's more like it!
>
>>
>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>> index 47bbd43..edfe4e5 100644
>> --- a/include/kvm/arm_pmu.h
>> +++ b/include/kvm/arm_pmu.h
>> @@ -41,6 +41,8 @@ struct kvm_pmu {
>>   };
>>
>>   #ifdef CONFIG_KVM_ARM_PMU
>> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
>> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
>>   unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32
>> select_idx);
>>   void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
>>   void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool
>> all_enable);
>> @@ -51,6 +53,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu
>> *vcpu, u32 data,
>>                                      u32 select_idx);
>>   void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
>>   #else
>> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
>> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
>>   unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32
>> select_idx)
>>   {
>>          return 0;
>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>> index 15cac45..9aad2f7 100644
>> --- a/virt/kvm/arm/pmu.c
>> +++ b/virt/kvm/arm/pmu.c
>> @@ -21,6 +21,7 @@
>>   #include <linux/perf_event.h>
>>   #include <asm/kvm_emulate.h>
>>   #include <kvm/arm_pmu.h>
>> +#include <kvm/arm_vgic.h>
>>
>>   /**
>>    * kvm_pmu_get_counter_value - get PMU counter value
>> @@ -79,6 +80,78 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
>>   }
>>
>>   /**
>> + * kvm_pmu_flush_hwstate - flush pmu state to cpu
>> + * @vcpu: The vcpu pointer
>> + *
>> + * Inject virtual PMU IRQ if IRQ is pending for this cpu.
>> + */
>> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
>> +{
>> +       struct kvm_pmu *pmu = &vcpu->arch.pmu;
>> +       u32 overflow;
>> +
>> +       if (!vcpu_mode_is_32bit(vcpu))
>> +               overflow = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
>> +       else
>> +               overflow = vcpu_cp15(vcpu, c9_PMOVSSET);
>> +
>> +       if ((pmu->irq_pending || overflow != 0) && (pmu->irq_num != -1))
>> +               kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
>> pmu->irq_num, 1);
>> +
>> +       pmu->irq_pending = false;
>
> Now, we get to the critical point. Why do you need to keep this shadow
> state for the interrupt?
>
The reason is that when the guest clears the overflow register, it will trap 
to kvm and call kvm_pmu_sync_hwstate() as you see above. At this moment, 
the overflow register is still overflowed (that is, some bit is still 1). 
So we need to use some flag to mark that we already injected this interrupt. 
And if, while the guest is handling the overflow, a new overflow happens, 
pmu->irq_pending will be set true by 
kvm_pmu_perf_overflow(), and then it needs to inject this new interrupt, right?

> The way I see it, you should set the line high when the overflow has
> been registered, and set it low when the overflow condition has been
> cleared by the guest. And nothing else.
>
>> +}
>> +
>> +/**
>> + * kvm_pmu_sync_hwstate - sync pmu state for cpu
>> + * @vcpu: The vcpu pointer
>> + *
>> + * Inject virtual PMU IRQ if IRQ is pending for this cpu when back from
>> guest.
>> + */
>> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu)
>> +{
>> +       struct kvm_pmu *pmu = &vcpu->arch.pmu;
>> +
>> +       if (pmu->irq_pending && (pmu->irq_num != -1))
>> +               kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
>> pmu->irq_num, 1);
>> +
>> +       pmu->irq_pending = false;
>> +}
>>
>
> Why do you have to do it twice??
>
> Thanks,
>
> 	M.
>

-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-12-01 15:13           ` Shannon Zhao
@ 2015-12-01 15:41             ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-12-01 15:41 UTC (permalink / raw)
  To: Shannon Zhao, Shannon Zhao
  Cc: kvmarm, linux-arm-kernel, kvm, christoffer.dall, will.deacon,
	alex.bennee, wei, cov, peter.huangpeng

On 01/12/15 15:13, Shannon Zhao wrote:
> 
> 
> On 2015/12/1 22:50, Marc Zyngier wrote:
>> On 01/12/15 14:35, Shannon Zhao wrote:
>>>
>>>
>>> On 2015/12/1 2:22, Marc Zyngier wrote:
>>>> On Fri, 30 Oct 2015 14:22:00 +0800
>>>> Shannon Zhao <zhaoshenglong@huawei.com> wrote:
>>>>
>>>>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>>>>
>>>>> When calling perf_event_create_kernel_counter to create perf_event,
>>>>> assign an overflow handler. Then when the perf event overflows, set
>>>>> irq_pending and call kvm_vcpu_kick() to sync the interrupt.
>>>>>
>>>>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>>>>> ---
>>>>>   arch/arm/kvm/arm.c    |  4 +++
>>>>>   include/kvm/arm_pmu.h |  4 +++
>>>>>   virt/kvm/arm/pmu.c    | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++-
>>>>>   3 files changed, 83 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>>>>> index 78b2869..9c0fec4 100644
>>>>> --- a/arch/arm/kvm/arm.c
>>>>> +++ b/arch/arm/kvm/arm.c
>>>>> @@ -28,6 +28,7 @@
>>>>>   #include <linux/sched.h>
>>>>>   #include <linux/kvm.h>
>>>>>   #include <trace/events/kvm.h>
>>>>> +#include <kvm/arm_pmu.h>
>>>>>
>>>>>   #define CREATE_TRACE_POINTS
>>>>>   #include "trace.h"
>>>>> @@ -551,6 +552,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>>>>
>>>>>   		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm)) {
>>>>>   			local_irq_enable();
>>>>> +			kvm_pmu_sync_hwstate(vcpu);
>>>>
>>>> This is very weird. Are you only injecting interrupts when a signal is
>>>> pending? I don't understand how this works...
>>>>
>>>>>   			kvm_vgic_sync_hwstate(vcpu);
>>>>>   			preempt_enable();
>>>>>   			kvm_timer_sync_hwstate(vcpu);
>>>>> @@ -598,6 +600,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>>>>   		kvm_guest_exit();
>>>>>   		trace_kvm_exit(kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
>>>>>
>>>>> +		kvm_pmu_post_sync_hwstate(vcpu);
>>>>> +
>>>>>   		kvm_vgic_sync_hwstate(vcpu);
>>>>>
>>>>>   		preempt_enable();
>>>>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>>>>> index acd025a..5e7f943 100644
>>>>> --- a/include/kvm/arm_pmu.h
>>>>> +++ b/include/kvm/arm_pmu.h
>>>>> @@ -39,6 +39,8 @@ struct kvm_pmu {
>>>>>   };
>>>>>
>>>>>   #ifdef CONFIG_KVM_ARM_PMU
>>>>> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
>>>>> +void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu);
>>>>
>>>> Please follow the current terminology: _flush_ on VM entry, _sync_ on
>>>> VM exit.
>>>>
>>>
>>> Hi Marc,
>>>
>>> Is below patch the right way for this?
>>>
>>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>>> index 78b2869..84008d1 100644
>>> --- a/arch/arm/kvm/arm.c
>>> +++ b/arch/arm/kvm/arm.c
>>> @@ -28,6 +28,7 @@
>>>   #include <linux/sched.h>
>>>   #include <linux/kvm.h>
>>>   #include <trace/events/kvm.h>
>>> +#include <kvm/arm_pmu.h>
>>>
>>>   #define CREATE_TRACE_POINTS
>>>   #include "trace.h"
>>> @@ -531,6 +532,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu,
>>> struct kvm_run *run)
>>>                   */
>>>                  kvm_timer_flush_hwstate(vcpu);
>>>
>>> +               kvm_pmu_flush_hwstate(vcpu);
>>> +
>>>                  /*
>>>                   * Preparing the interrupts to be injected also
>>>                   * involves poking the GIC, which must be done in a
>>> @@ -554,6 +557,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu,
>>> struct kvm_run *run)
>>>                          kvm_vgic_sync_hwstate(vcpu);
>>>                          preempt_enable();
>>>                          kvm_timer_sync_hwstate(vcpu);
>>> +                       kvm_pmu_sync_hwstate(vcpu);
>>>                          continue;
>>>                  }
>>>
>>> @@ -604,6 +608,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu,
>>> struct kvm_run *run)
>>>
>>>                  kvm_timer_sync_hwstate(vcpu);
>>>
>>> +               kvm_pmu_sync_hwstate(vcpu);
>>> +
>>>                  ret = handle_exit(vcpu, run, ret);
>>>          }
>>
>> yeah, that's more like it!
>>
>>>
>>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>>> index 47bbd43..edfe4e5 100644
>>> --- a/include/kvm/arm_pmu.h
>>> +++ b/include/kvm/arm_pmu.h
>>> @@ -41,6 +41,8 @@ struct kvm_pmu {
>>>   };
>>>
>>>   #ifdef CONFIG_KVM_ARM_PMU
>>> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
>>> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
>>>   unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32
>>> select_idx);
>>>   void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
>>>   void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool
>>> all_enable);
>>> @@ -51,6 +53,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu
>>> *vcpu, u32 data,
>>>                                      u32 select_idx);
>>>   void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
>>>   #else
>>> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
>>> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
>>>   unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32
>>> select_idx)
>>>   {
>>>          return 0;
>>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>>> index 15cac45..9aad2f7 100644
>>> --- a/virt/kvm/arm/pmu.c
>>> +++ b/virt/kvm/arm/pmu.c
>>> @@ -21,6 +21,7 @@
>>>   #include <linux/perf_event.h>
>>>   #include <asm/kvm_emulate.h>
>>>   #include <kvm/arm_pmu.h>
>>> +#include <kvm/arm_vgic.h>
>>>
>>>   /**
>>>    * kvm_pmu_get_counter_value - get PMU counter value
>>> @@ -79,6 +80,78 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
>>>   }
>>>
>>>   /**
>>> + * kvm_pmu_flush_hwstate - flush pmu state to cpu
>>> + * @vcpu: The vcpu pointer
>>> + *
>>> + * Inject virtual PMU IRQ if IRQ is pending for this cpu.
>>> + */
>>> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
>>> +{
>>> +       struct kvm_pmu *pmu = &vcpu->arch.pmu;
>>> +       u32 overflow;
>>> +
>>> +       if (!vcpu_mode_is_32bit(vcpu))
>>> +               overflow = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
>>> +       else
>>> +               overflow = vcpu_cp15(vcpu, c9_PMOVSSET);
>>> +
>>> +       if ((pmu->irq_pending || overflow != 0) && (pmu->irq_num != -1))
>>> +               kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
>>> pmu->irq_num, 1);
>>> +
>>> +       pmu->irq_pending = false;
>>
>> Now, we get to the critical point. Why do you need to keep this shadow
>> state for the interrupt?
>>
> The reason is that when the guest clears the overflow register, it will trap 
> to kvm and call kvm_pmu_sync_hwstate() as you see above. At this moment, 
> the overflow register is still overflowed (that is, some bit is still 1). 
> So we need to use some flag to mark that we already injected this interrupt. 
> And if, while the guest is handling the overflow, a new overflow happens, 
> pmu->irq_pending will be set true by 
> kvm_pmu_perf_overflow(), and then it needs to inject this new interrupt, right?

I don't think so. This is a level interrupt, so the level should stay
high as long as the guest hasn't cleared all possible sources for that
interrupt.

For your example, the guest writes to PMOVSCLR to clear the overflow
caused by a given counter. If the status is now 0, the interrupt line
drops. If the status is still non zero, the line stays high. And I
believe that writing a 1 to PMOVSSET would actually trigger an
interrupt, or keep it high if it was already high.

In essence, do not try to maintain side state. I've been bitten.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 142+ messages in thread

* [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
@ 2015-12-01 15:41             ` Marc Zyngier
  0 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-12-01 15:41 UTC (permalink / raw)
  To: linux-arm-kernel

On 01/12/15 15:13, Shannon Zhao wrote:
> 
> 
> On 2015/12/1 22:50, Marc Zyngier wrote:
>> On 01/12/15 14:35, Shannon Zhao wrote:
>>>
>>>
>>> On 2015/12/1 2:22, Marc Zyngier wrote:
>>>> On Fri, 30 Oct 2015 14:22:00 +0800
>>>> Shannon Zhao <zhaoshenglong@huawei.com> wrote:
>>>>
>>>>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>>>>
>>>>> When calling perf_event_create_kernel_counter to create perf_event,
>>>>> assign a overflow handler. Then when perf event overflows, set
>>>>> irq_pending and call kvm_vcpu_kick() to sync the interrupt.
>>>>>
>>>>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>>>>> ---
>>>>>   arch/arm/kvm/arm.c    |  4 +++
>>>>>   include/kvm/arm_pmu.h |  4 +++
>>>>>   virt/kvm/arm/pmu.c    | 76 ++++++++++++++++++++++++++++++++++++++++++++++++++-
>>>>>   3 files changed, 83 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>>>>> index 78b2869..9c0fec4 100644
>>>>> --- a/arch/arm/kvm/arm.c
>>>>> +++ b/arch/arm/kvm/arm.c
>>>>> @@ -28,6 +28,7 @@
>>>>>   #include <linux/sched.h>
>>>>>   #include <linux/kvm.h>
>>>>>   #include <trace/events/kvm.h>
>>>>> +#include <kvm/arm_pmu.h>
>>>>>
>>>>>   #define CREATE_TRACE_POINTS
>>>>>   #include "trace.h"
>>>>> @@ -551,6 +552,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>>>>
>>>>>   		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm)) {
>>>>>   			local_irq_enable();
>>>>> +			kvm_pmu_sync_hwstate(vcpu);
>>>>
>>>> This is very weird. Are you only injecting interrupts when a signal is
>>>> pending? I don't understand how this works...
>>>>
>>>>>   			kvm_vgic_sync_hwstate(vcpu);
>>>>>   			preempt_enable();
>>>>>   			kvm_timer_sync_hwstate(vcpu);
>>>>> @@ -598,6 +600,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>>>>   		kvm_guest_exit();
>>>>>   		trace_kvm_exit(kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
>>>>>
>>>>> +		kvm_pmu_post_sync_hwstate(vcpu);
>>>>> +
>>>>>   		kvm_vgic_sync_hwstate(vcpu);
>>>>>
>>>>>   		preempt_enable();
>>>>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>>>>> index acd025a..5e7f943 100644
>>>>> --- a/include/kvm/arm_pmu.h
>>>>> +++ b/include/kvm/arm_pmu.h
>>>>> @@ -39,6 +39,8 @@ struct kvm_pmu {
>>>>>   };
>>>>>
>>>>>   #ifdef CONFIG_KVM_ARM_PMU
>>>>> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
>>>>> +void kvm_pmu_post_sync_hwstate(struct kvm_vcpu *vcpu);
>>>>
>>>> Please follow the current terminology: _flush_ on VM entry, _sync_ on
>>>> VM exit.
>>>>
>>>
>>> Hi Marc,
>>>
>>> Is below patch the right way for this?
>>>
>>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>>> index 78b2869..84008d1 100644
>>> --- a/arch/arm/kvm/arm.c
>>> +++ b/arch/arm/kvm/arm.c
>>> @@ -28,6 +28,7 @@
>>>   #include <linux/sched.h>
>>>   #include <linux/kvm.h>
>>>   #include <trace/events/kvm.h>
>>> +#include <kvm/arm_pmu.h>
>>>
>>>   #define CREATE_TRACE_POINTS
>>>   #include "trace.h"
>>> @@ -531,6 +532,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu,
>>> struct kvm_run *run)
>>>                   */
>>>                  kvm_timer_flush_hwstate(vcpu);
>>>
>>> +               kvm_pmu_flush_hwstate(vcpu);
>>> +
>>>                  /*
>>>                   * Preparing the interrupts to be injected also
>>>                   * involves poking the GIC, which must be done in a
>>> @@ -554,6 +557,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu,
>>> struct kvm_run *run)
>>>                          kvm_vgic_sync_hwstate(vcpu);
>>>                          preempt_enable();
>>>                          kvm_timer_sync_hwstate(vcpu);
>>> +                       kvm_pmu_sync_hwstate(vcpu);
>>>                          continue;
>>>                  }
>>>
>>> @@ -604,6 +608,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu,
>>> struct kvm_run *run)
>>>
>>>                  kvm_timer_sync_hwstate(vcpu);
>>>
>>> +               kvm_pmu_sync_hwstate(vcpu);
>>> +
>>>                  ret = handle_exit(vcpu, run, ret);
>>>          }
>>
>> yeah, that's more like it!
>>
>>>
>>> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
>>> index 47bbd43..edfe4e5 100644
>>> --- a/include/kvm/arm_pmu.h
>>> +++ b/include/kvm/arm_pmu.h
>>> @@ -41,6 +41,8 @@ struct kvm_pmu {
>>>   };
>>>
>>>   #ifdef CONFIG_KVM_ARM_PMU
>>> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
>>> +void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu);
>>>   unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32
>>> select_idx);
>>>   void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u32 val);
>>>   void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u32 val, bool
>>> all_enable);
>>> @@ -51,6 +53,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu
>>> *vcpu, u32 data,
>>>                                      u32 select_idx);
>>>   void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u32 val);
>>>   #else
>>> +static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
>>> +static inline void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {}
>>>   unsigned long kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u32
>>> select_idx)
>>>   {
>>>          return 0;
>>> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
>>> index 15cac45..9aad2f7 100644
>>> --- a/virt/kvm/arm/pmu.c
>>> +++ b/virt/kvm/arm/pmu.c
>>> @@ -21,6 +21,7 @@
>>>   #include <linux/perf_event.h>
>>>   #include <asm/kvm_emulate.h>
>>>   #include <kvm/arm_pmu.h>
>>> +#include <kvm/arm_vgic.h>
>>>
>>>   /**
>>>    * kvm_pmu_get_counter_value - get PMU counter value
>>> @@ -79,6 +80,78 @@ static void kvm_pmu_stop_counter(struct kvm_pmc *pmc)
>>>   }
>>>
>>>   /**
>>> + * kvm_pmu_flush_hwstate - flush pmu state to cpu
>>> + * @vcpu: The vcpu pointer
>>> + *
>>> + * Inject virtual PMU IRQ if IRQ is pending for this cpu.
>>> + */
>>> +void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
>>> +{
>>> +       struct kvm_pmu *pmu = &vcpu->arch.pmu;
>>> +       u32 overflow;
>>> +
>>> +       if (!vcpu_mode_is_32bit(vcpu))
>>> +               overflow = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
>>> +       else
>>> +               overflow = vcpu_cp15(vcpu, c9_PMOVSSET);
>>> +
>>> +       if ((pmu->irq_pending || overflow != 0) && (pmu->irq_num != -1))
>>> +               kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
>>> pmu->irq_num, 1);
>>> +
>>> +       pmu->irq_pending = false;
>>
>> Now, we get to the critical point. Why do you need to keep this shadow
>> state for the interrupt?
>>
> The reason is that when the guest clears the overflow register, it will 
> trap to KVM and call kvm_pmu_sync_hwstate() as you see above. At this 
> moment, the overflow register is still overflowed (that is, some bit is 
> still 1), so we need some flag to mark that we have already injected 
> this interrupt. And if a new overflow happens while the guest is 
> handling the current one, pmu->irq_pending will be set true by 
> kvm_pmu_perf_overflow(), and then it needs to inject this new interrupt, right?

I don't think so. This is a level interrupt, so the level should stay
high as long as the guest hasn't cleared all possible sources for that
interrupt.

For your example, the guest writes to PMOVSCLR to clear the overflow
caused by a given counter. If the status is now 0, the interrupt line
drops. If the status is still non-zero, the line stays high. And I
believe that writing a 1 to PMOVSSET would actually trigger an
interrupt, or keep it high if it is already high.

In essence, do not try to maintain side state. I've been bitten.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-12-01 15:41             ` Marc Zyngier
@ 2015-12-01 16:26               ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-12-01 16:26 UTC (permalink / raw)
  To: Marc Zyngier, Shannon Zhao; +Cc: kvm, will.deacon, linux-arm-kernel, kvmarm



On 2015/12/1 23:41, Marc Zyngier wrote:
>> The reason is that when guest clear the overflow register, it will trap
>> >to kvm and call kvm_pmu_sync_hwstate() as you see above. At this moment,
>> >the overflow register is still overflowed(that is some bit is still 1).
>> >So We need to use some flag to mark we already inject this interrupt.
>> >And if during guest handling the overflow, there is a new overflow
>> >happening, the pmu->irq_pending will be set ture by
>> >kvm_pmu_perf_overflow(), then it needs to inject this new interrupt, right?
> I don't think so. This is a level interrupt, so the level should stay
> high as long as the guest hasn't cleared all possible sources for that
> interrupt.
>
> For your example, the guest writes to PMOVSCLR to clear the overflow
> caused by a given counter. If the status is now 0, the interrupt line
> drops. If the status is still non zero, the line stays high. And I
> believe that writing a 1 to PMOVSSET would actually trigger an
> interrupt, or keep it high if it has already high.
>
Right, writing 1 to PMOVSSET will trigger an interrupt.

> In essence, do not try to maintain side state. I've been bitten.

So on VM entry, it checks whether PMOVSSET is zero. If not, it calls
kvm_vgic_inject_irq to set the level high; if so, it sets the level low.
On VM exit, it seems there is nothing to do.

-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-12-01 16:26               ` Shannon Zhao
@ 2015-12-01 16:57                 ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-12-01 16:57 UTC (permalink / raw)
  To: Shannon Zhao, Shannon Zhao
  Cc: kvmarm, linux-arm-kernel, kvm, christoffer.dall, will.deacon,
	alex.bennee, wei, cov, peter.huangpeng

On 01/12/15 16:26, Shannon Zhao wrote:
> 
> 
> On 2015/12/1 23:41, Marc Zyngier wrote:
>>> The reason is that when guest clear the overflow register, it will trap
>>>> to kvm and call kvm_pmu_sync_hwstate() as you see above. At this moment,
>>>> the overflow register is still overflowed(that is some bit is still 1).
>>>> So We need to use some flag to mark we already inject this interrupt.
>>>> And if during guest handling the overflow, there is a new overflow
>>>> happening, the pmu->irq_pending will be set ture by
>>>> kvm_pmu_perf_overflow(), then it needs to inject this new interrupt, right?
>> I don't think so. This is a level interrupt, so the level should stay
>> high as long as the guest hasn't cleared all possible sources for that
>> interrupt.
>>
>> For your example, the guest writes to PMOVSCLR to clear the overflow
>> caused by a given counter. If the status is now 0, the interrupt line
>> drops. If the status is still non zero, the line stays high. And I
>> believe that writing a 1 to PMOVSSET would actually trigger an
>> interrupt, or keep it high if it has already high.
>>
> Right, writing 1 to PMOVSSET will trigger an interrupt.
> 
>> In essence, do not try to maintain side state. I've been bitten.
> 
> So on VM entry, it check if PMOVSSET is zero. If not, call 
> kvm_vgic_inject_irq to set the level high. If so, set the level low.
> On VM exit, it seems there is nothing to do.

It is even simpler than that:

- When you get an overflow, you inject an interrupt with the level set to 1.
- When the overflow register gets cleared, you inject the same interrupt
with the level set to 0.

I don't think you need to do anything else, and the world switch should
be left untouched.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-12-01 16:57                 ` Marc Zyngier
@ 2015-12-02  2:40                   ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-12-02  2:40 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, will.deacon, Shannon Zhao, kvmarm



On 2015/12/2 0:57, Marc Zyngier wrote:
> On 01/12/15 16:26, Shannon Zhao wrote:
>>
>>
>> On 2015/12/1 23:41, Marc Zyngier wrote:
>>>> The reason is that when guest clear the overflow register, it will trap
>>>>> to kvm and call kvm_pmu_sync_hwstate() as you see above. At this moment,
>>>>> the overflow register is still overflowed(that is some bit is still 1).
>>>>> So We need to use some flag to mark we already inject this interrupt.
>>>>> And if during guest handling the overflow, there is a new overflow
>>>>> happening, the pmu->irq_pending will be set ture by
>>>>> kvm_pmu_perf_overflow(), then it needs to inject this new interrupt, right?
>>> I don't think so. This is a level interrupt, so the level should stay
>>> high as long as the guest hasn't cleared all possible sources for that
>>> interrupt.
>>>
>>> For your example, the guest writes to PMOVSCLR to clear the overflow
>>> caused by a given counter. If the status is now 0, the interrupt line
>>> drops. If the status is still non zero, the line stays high. And I
>>> believe that writing a 1 to PMOVSSET would actually trigger an
>>> interrupt, or keep it high if it has already high.
>>>
>> Right, writing 1 to PMOVSSET will trigger an interrupt.
>>
>>> In essence, do not try to maintain side state. I've been bitten.
>>
>> So on VM entry, it check if PMOVSSET is zero. If not, call 
>> kvm_vgic_inject_irq to set the level high. If so, set the level low.
>> On VM exit, it seems there is nothing to do.
> 
> It is even simpler than that:
> 
> - When you get an overflow, you inject an interrupt with the level set to 1.
> - When the overflow register gets cleared, you inject the same interrupt
> with the level set to 0.
> 
> I don't think you need to do anything else, and the world switch should
> be left untouched.
> 

On 2015/7/17 23:28, Christoffer Dall wrote:
>> > +		kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
>> > +					    pmu->irq_num, 1);
> what context is this overflow handler function?  kvm_vgic_inject_irq
> grabs a mutex, so it can sleep...
>
> from a quick glance at the perf core code, it looks like this is in
> interrupt context, so that call to kvm_vgic_inject_irq looks bad.
>

But as Christoffer said before, it's not good to call
kvm_vgic_inject_irq directly in interrupt context. So if we just kick
the vcpu here and call kvm_vgic_inject_irq on VM entry, is this fine?

Thanks,
-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-12-02  2:40                   ` Shannon Zhao
@ 2015-12-02  8:45                     ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-12-02  8:45 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: Shannon Zhao, kvmarm, linux-arm-kernel, kvm, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, peter.huangpeng

On 02/12/15 02:40, Shannon Zhao wrote:
> 
> 
> On 2015/12/2 0:57, Marc Zyngier wrote:
>> On 01/12/15 16:26, Shannon Zhao wrote:
>>>
>>>
>>> On 2015/12/1 23:41, Marc Zyngier wrote:
>>>>> The reason is that when guest clear the overflow register, it will trap
>>>>>> to kvm and call kvm_pmu_sync_hwstate() as you see above. At this moment,
>>>>>> the overflow register is still overflowed(that is some bit is still 1).
>>>>>> So We need to use some flag to mark we already inject this interrupt.
>>>>>> And if during guest handling the overflow, there is a new overflow
>>>>>> happening, the pmu->irq_pending will be set ture by
>>>>>> kvm_pmu_perf_overflow(), then it needs to inject this new interrupt, right?
>>>> I don't think so. This is a level interrupt, so the level should stay
>>>> high as long as the guest hasn't cleared all possible sources for that
>>>> interrupt.
>>>>
>>>> For your example, the guest writes to PMOVSCLR to clear the overflow
>>>> caused by a given counter. If the status is now 0, the interrupt line
>>>> drops. If the status is still non zero, the line stays high. And I
>>>> believe that writing a 1 to PMOVSSET would actually trigger an
>>>> interrupt, or keep it high if it has already high.
>>>>
>>> Right, writing 1 to PMOVSSET will trigger an interrupt.
>>>
>>>> In essence, do not try to maintain side state. I've been bitten.
>>>
>>> So on VM entry, it check if PMOVSSET is zero. If not, call 
>>> kvm_vgic_inject_irq to set the level high. If so, set the level low.
>>> On VM exit, it seems there is nothing to do.
>>
>> It is even simpler than that:
>>
>> - When you get an overflow, you inject an interrupt with the level set to 1.
>> - When the overflow register gets cleared, you inject the same interrupt
>> with the level set to 0.
>>
>> I don't think you need to do anything else, and the world switch should
>> be left untouched.
>>
> 
> On 2015/7/17 23:28, Christoffer Dall wrote:>> > +		
> kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
>>>> +					    pmu->irq_num, 1);
>> what context is this overflow handler function?  kvm_vgic_inject_irq
>> grabs a mutex, so it can sleep...
>>
>> from a quick glance at the perf core code, it looks like this is in
>> interrupt context, so that call to kvm_vgic_inject_irq looks bad.
>>
> 
> But as Christoffer said before, it's not good to call
> kvm_vgic_inject_irq directly in interrupt context. So if we just kick
> the vcpu here and call kvm_vgic_inject_irq on VM entry, is this fine?

Possibly. I'm slightly worried that inject_irq itself is going to kick
the vcpu again for no good reason. I guess we'll find out (and maybe
we'll add a kvm_vgic_inject_irq_no_kick_please() helper...).

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-12-02  8:45                     ` Marc Zyngier
@ 2015-12-02  9:49                       ` Shannon Zhao
  -1 siblings, 0 replies; 142+ messages in thread
From: Shannon Zhao @ 2015-12-02  9:49 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, will.deacon, Shannon Zhao, kvmarm



On 2015/12/2 16:45, Marc Zyngier wrote:
> On 02/12/15 02:40, Shannon Zhao wrote:
>> > 
>> > 
>> > On 2015/12/2 0:57, Marc Zyngier wrote:
>>> >> On 01/12/15 16:26, Shannon Zhao wrote:
>>>> >>>
>>>> >>>
>>>> >>> On 2015/12/1 23:41, Marc Zyngier wrote:
>>>>>> >>>>> The reason is that when guest clear the overflow register, it will trap
>>>>>>> >>>>>> to kvm and call kvm_pmu_sync_hwstate() as you see above. At this moment,
>>>>>>> >>>>>> the overflow register is still overflowed(that is some bit is still 1).
>>>>>>> >>>>>> So We need to use some flag to mark we already inject this interrupt.
>>>>>>> >>>>>> And if during guest handling the overflow, there is a new overflow
>>>>>>> >>>>>> happening, the pmu->irq_pending will be set ture by
>>>>>>> >>>>>> kvm_pmu_perf_overflow(), then it needs to inject this new interrupt, right?
>>>>> >>>> I don't think so. This is a level interrupt, so the level should stay
>>>>> >>>> high as long as the guest hasn't cleared all possible sources for that
>>>>> >>>> interrupt.
>>>>> >>>>
>>>>> >>>> For your example, the guest writes to PMOVSCLR to clear the overflow
>>>>> >>>> caused by a given counter. If the status is now 0, the interrupt line
>>>>> >>>> drops. If the status is still non zero, the line stays high. And I
>>>>> >>>> believe that writing a 1 to PMOVSSET would actually trigger an
>>>>> >>>> interrupt, or keep it high if it has already high.
>>>>> >>>>
>>>> >>> Right, writing 1 to PMOVSSET will trigger an interrupt.
>>>> >>>
>>>>> >>>> In essence, do not try to maintain side state. I've been bitten.
>>>> >>>
>>>> >>> So on VM entry, it check if PMOVSSET is zero. If not, call 
>>>> >>> kvm_vgic_inject_irq to set the level high. If so, set the level low.
>>>> >>> On VM exit, it seems there is nothing to do.
>>> >>
>>> >> It is even simpler than that:
>>> >>
>>> >> - When you get an overflow, you inject an interrupt with the level set to 1.
>>> >> - When the overflow register gets cleared, you inject the same interrupt
>>> >> with the level set to 0.
>>> >>
>>> >> I don't think you need to do anything else, and the world switch should
>>> >> be left untouched.
>>> >>
>> > 
>> > On 2015/7/17 23:28, Christoffer Dall wrote:>> > +		
>> > kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
>>>>> >>>> +					    pmu->irq_num, 1);
>>> >> what context is this overflow handler function?  kvm_vgic_inject_irq
>>> >> grabs a mutex, so it can sleep...
>>> >>
>>> >> from a quick glance at the perf core code, it looks like this is in
>>> >> interrupt context, so that call to kvm_vgic_inject_irq looks bad.
>>> >>
>> > 
>> > But as Christoffer said before, it's not good to call
>> > kvm_vgic_inject_irq directly in interrupt context. So if we just kick
>> > the vcpu here and call kvm_vgic_inject_irq on VM entry, is this fine?
> Possibly. I'm slightly worried that inject_irq itself is going to kick
> the vcpu again for no good reason. 
Yes, this will introduce an extra kick. What's the impact of kicking an
already-kicked vcpu?

> I guess we'll find out (and maybe
> we'll add a kvm_vgic_inject_irq_no_kick_please() helper...).
And add a "bool kick" parameter to vgic_update_irq_pending?

-- 
Shannon

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-12-02  9:49                       ` Shannon Zhao
@ 2015-12-02 10:22                         ` Marc Zyngier
  -1 siblings, 0 replies; 142+ messages in thread
From: Marc Zyngier @ 2015-12-02 10:22 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: Shannon Zhao, kvmarm, linux-arm-kernel, kvm, christoffer.dall,
	will.deacon, alex.bennee, wei, cov, peter.huangpeng

On 02/12/15 09:49, Shannon Zhao wrote:
> 
> 
> On 2015/12/2 16:45, Marc Zyngier wrote:
>> On 02/12/15 02:40, Shannon Zhao wrote:
>>>>
>>>>
>>>> On 2015/12/2 0:57, Marc Zyngier wrote:
>>>>>> On 01/12/15 16:26, Shannon Zhao wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2015/12/1 23:41, Marc Zyngier wrote:
>>>>>>>>>>>> The reason is that when guest clear the overflow register, it will trap
>>>>>>>>>>>>>> to kvm and call kvm_pmu_sync_hwstate() as you see above. At this moment,
>>>>>>>>>>>>>> the overflow register is still overflowed(that is some bit is still 1).
>>>>>>>>>>>>>> So We need to use some flag to mark we already inject this interrupt.
>>>>>>>>>>>>>> And if during guest handling the overflow, there is a new overflow
>>>>>>>>>>>>>> happening, the pmu->irq_pending will be set ture by
>>>>>>>>>>>>>> kvm_pmu_perf_overflow(), then it needs to inject this new interrupt, right?
>>>>>>>>>> I don't think so. This is a level interrupt, so the level should stay
>>>>>>>>>> high as long as the guest hasn't cleared all possible sources for that
>>>>>>>>>> interrupt.
>>>>>>>>>>
>>>>>>>>>> For your example, the guest writes to PMOVSCLR to clear the overflow
>>>>>>>>>> caused by a given counter. If the status is now 0, the interrupt line
>>>>>>>>>> drops. If the status is still non zero, the line stays high. And I
>>>>>>>>>> believe that writing a 1 to PMOVSSET would actually trigger an
>>>>>>>>>> interrupt, or keep it high if it has already high.
>>>>>>>>>>
>>>>>>>> Right, writing 1 to PMOVSSET will trigger an interrupt.
>>>>>>>>
>>>>>>>>>> In essence, do not try to maintain side state. I've been bitten.
>>>>>>>>
>>>>>>>> So on VM entry, it checks whether PMOVSSET is zero. If not, it calls
>>>>>>>> kvm_vgic_inject_irq to set the level high; if so, it sets the level low.
>>>>>>>> On VM exit, it seems there is nothing to do.
>>>>>>
>>>>>> It is even simpler than that:
>>>>>>
>>>>>> - When you get an overflow, you inject an interrupt with the level set to 1.
>>>>>> - When the overflow register gets cleared, you inject the same interrupt
>>>>>> with the level set to 0.
>>>>>>
>>>>>> I don't think you need to do anything else, and the world switch should
>>>>>> be left untouched.
>>>>>>
>>>>
>>>> On 2015/7/17 23:28, Christoffer Dall wrote:
>>>> > +		kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
>>>> > +					    pmu->irq_num, 1);
>>>>>> what context is this overflow handler function?  kvm_vgic_inject_irq
>>>>>> grabs a mutex, so it can sleep...
>>>>>>
>>>>>> from a quick glance at the perf core code, it looks like this is in
>>>>>> interrupt context, so that call to kvm_vgic_inject_irq looks bad.
>>>>>>
>>>>
>>>> But as Christoffer said before, it's not good to call
>>>> kvm_vgic_inject_irq directly in interrupt context. So if we just kick
>>>> the vcpu here and call kvm_vgic_inject_irq on VM entry, is this fine?
>> Possibly. I'm slightly worried that inject_irq itself is going to kick
>> the vcpu again for no good reason. 
> Yes, this will introduce an extra kick. What's the impact of kicking an
> already-kicked vcpu?

As long as you only kick yourself, it shouldn't be much (trying to
decipher vcpu_kick).

>> I guess we'll find out (and maybe
>> we'll add a kvm_vgic_inject_irq_no_kick_please() helper...).
> And add a parameter "bool kick" for vgic_update_irq_pending ?

Given that we're completely rewriting the thing, I'd rather not add more
hacks to it if we can avoid it.

Give it a go, and we'll find out!

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 142+ messages in thread

* Re: [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing
  2015-12-02 10:22                         ` Marc Zyngier
@ 2015-12-02 16:27                           ` Christoffer Dall
  -1 siblings, 0 replies; 142+ messages in thread
From: Christoffer Dall @ 2015-12-02 16:27 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Shannon Zhao, Shannon Zhao, kvmarm, linux-arm-kernel, kvm,
	will.deacon, alex.bennee, wei, cov, peter.huangpeng

On Wed, Dec 02, 2015 at 10:22:04AM +0000, Marc Zyngier wrote:
> On 02/12/15 09:49, Shannon Zhao wrote:
> > 
> > 
> > On 2015/12/2 16:45, Marc Zyngier wrote:
> >> On 02/12/15 02:40, Shannon Zhao wrote:
> >>>>
> >>>>
> >>>> On 2015/12/2 0:57, Marc Zyngier wrote:
> >>>>>> On 01/12/15 16:26, Shannon Zhao wrote:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On 2015/12/1 23:41, Marc Zyngier wrote:
> >>>>>>>>>>>> The reason is that when the guest clears the overflow register, it will trap
> >>>>>>>>>>>>>> to kvm and call kvm_pmu_sync_hwstate() as you see above. At this moment,
> >>>>>>>>>>>>>> the overflow register is still overflowed (that is, some bit is still 1).
> >>>>>>>>>>>>>> So we need to use some flag to mark that we already injected this interrupt.
> >>>>>>>>>>>>>> And if, while the guest is handling the overflow, a new overflow
> >>>>>>>>>>>>>> happens, pmu->irq_pending will be set true by
> >>>>>>>>>>>>>> kvm_pmu_perf_overflow(), and then it needs to inject this new interrupt, right?
> >>>>>>>>>> I don't think so. This is a level interrupt, so the level should stay
> >>>>>>>>>> high as long as the guest hasn't cleared all possible sources for that
> >>>>>>>>>> interrupt.
> >>>>>>>>>>
> >>>>>>>>>> For your example, the guest writes to PMOVSCLR to clear the overflow
> >>>>>>>>>> caused by a given counter. If the status is now 0, the interrupt line
> >>>>>>>>>> drops. If the status is still non zero, the line stays high. And I
> >>>>>>>>>> believe that writing a 1 to PMOVSSET would actually trigger an
> >>>>>>>>>> interrupt, or keep it high if it is already high.
> >>>>>>>>>>
> >>>>>>>> Right, writing 1 to PMOVSSET will trigger an interrupt.
> >>>>>>>>
> >>>>>>>>>> In essence, do not try to maintain side state. I've been bitten.
> >>>>>>>>
> >>>>>>>> So on VM entry, it checks whether PMOVSSET is zero. If not, it calls
> >>>>>>>> kvm_vgic_inject_irq to set the level high; if so, it sets the level low.
> >>>>>>>> On VM exit, it seems there is nothing to do.
> >>>>>>
> >>>>>> It is even simpler than that:
> >>>>>>
> >>>>>> - When you get an overflow, you inject an interrupt with the level set to 1.
> >>>>>> - When the overflow register gets cleared, you inject the same interrupt
> >>>>>> with the level set to 0.
> >>>>>>
> >>>>>> I don't think you need to do anything else, and the world switch should
> >>>>>> be left untouched.
> >>>>>>
> >>>>
> >>>> On 2015/7/17 23:28, Christoffer Dall wrote:
> >>>> > +		kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id,
> >>>> > +					    pmu->irq_num, 1);
> >>>>>> what context is this overflow handler function?  kvm_vgic_inject_irq
> >>>>>> grabs a mutex, so it can sleep...
> >>>>>>
> >>>>>> from a quick glance at the perf core code, it looks like this is in
> >>>>>> interrupt context, so that call to kvm_vgic_inject_irq looks bad.
> >>>>>>
> >>>>
> >>>> But as Christoffer said before, it's not good to call
> >>>> kvm_vgic_inject_irq directly in interrupt context. So if we just kick
> >>>> the vcpu here and call kvm_vgic_inject_irq on VM entry, is this fine?
> >> Possibly. I'm slightly worried that inject_irq itself is going to kick
> >> the vcpu again for no good reason. 
> > Yes, this will introduce an extra kick. What's the impact of kicking an
> > already-kicked vcpu?
> 
> As long as you only kick yourself, it shouldn't be much (trying to
> decipher vcpu_kick).
> 

The behavior of vcpu_kick really depends on a number of things:

 - If you're kicking yourself, nothing happens.
 - If you're kicking a sleeping vcpu, it is woken up.
 - If you're kicking a running vcpu, it is sent a physical IPI.
 - If the vcpu is not running and not sleeping (so still runnable),
   nothing happens; it just waits until it gets scheduled.

-Christoffer

^ permalink raw reply	[flat|nested] 142+ messages in thread

end of thread, other threads:[~2015-12-02 16:27 UTC | newest]

Thread overview: 142+ messages
2015-10-30  6:21 [PATCH v4 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
2015-10-30  6:21 ` Shannon Zhao
2015-10-30  6:21 ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 01/21] ARM64: Move PMU register related defines to asm/pmu.h Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 02/21] KVM: ARM64: Define PMU data structure for each vcpu Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 03/21] KVM: ARM64: Add offset defines for PMU registers Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 04/21] KVM: ARM64: Add reset and access handlers for PMCR_EL0 register Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-11-30 18:11   ` Marc Zyngier
2015-11-30 18:11     ` Marc Zyngier
2015-11-30 18:11     ` Marc Zyngier
2015-10-30  6:21 ` [PATCH v4 05/21] KVM: ARM64: Add reset and access handlers for PMSELR register Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-11-02 20:06   ` Christopher Covington
2015-11-02 20:06     ` Christopher Covington
2015-11-30 17:56   ` Marc Zyngier
2015-11-30 17:56     ` Marc Zyngier
2015-12-01  1:51     ` Shannon Zhao
2015-12-01  1:51       ` Shannon Zhao
2015-12-01  8:49       ` Marc Zyngier
2015-12-01  8:49         ` Marc Zyngier
2015-12-01 12:46         ` Shannon Zhao
2015-12-01 12:46           ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 06/21] KVM: ARM64: Add reset and access handlers for PMCEID0 and PMCEID1 register Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-11-30 11:42   ` Marc Zyngier
2015-11-30 11:42     ` Marc Zyngier
2015-11-30 11:59     ` Shannon Zhao
2015-11-30 11:59       ` Shannon Zhao
2015-11-30 13:19       ` Marc Zyngier
2015-11-30 13:19         ` Marc Zyngier
2015-11-30 13:19         ` Marc Zyngier
2015-10-30  6:21 ` [PATCH v4 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-11-02 20:13   ` Christopher Covington
2015-11-02 20:13     ` Christopher Covington
2015-11-03  2:33     ` Shannon Zhao
2015-11-03  2:33       ` Shannon Zhao
2015-11-03  2:33       ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 08/21] KVM: ARM64: Add reset and access handlers for PMXEVTYPER register Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-11-02 20:54   ` Christopher Covington
2015-11-02 20:54     ` Christopher Covington
2015-11-03  2:41     ` Shannon Zhao
2015-11-03  2:41       ` Shannon Zhao
2015-11-03  2:41       ` Shannon Zhao
2015-11-30 18:12   ` Marc Zyngier
2015-11-30 18:12     ` Marc Zyngier
2015-11-30 18:12     ` Marc Zyngier
2015-12-01  2:42     ` Shannon Zhao
2015-12-01  2:42       ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 09/21] KVM: ARM64: Add reset and access handlers for PMXEVCNTR register Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 10/21] KVM: ARM64: Add reset and access handlers for PMCCNTR register Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 11/21] KVM: ARM64: Add reset and access handlers for PMCNTENSET and PMCNTENCLR register Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 12/21] KVM: ARM64: Add reset and access handlers for PMINTENSET and PMINTENCLR register Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 13/21] KVM: ARM64: Add reset and access handlers for PMOVSSET and PMOVSCLR register Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 14/21] KVM: ARM64: Add reset and access handlers for PMUSERENR register Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 15/21] KVM: ARM64: Add reset and access handlers for PMSWINC register Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 16/21] KVM: ARM64: Add access handlers for PMEVCNTRn and PMEVTYPERn register Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21 ` [PATCH v4 17/21] KVM: ARM64: Add helper to handle PMCR register bits Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-10-30  6:21   ` Shannon Zhao
2015-11-02 21:20   ` Christopher Covington
2015-11-02 21:20     ` Christopher Covington
2015-10-30  6:22 ` [PATCH v4 18/21] KVM: ARM64: Add PMU overflow interrupt routing Shannon Zhao
2015-10-30  6:22   ` Shannon Zhao
2015-10-30  6:22   ` Shannon Zhao
2015-10-30 12:08   ` kbuild test robot
2015-10-30 12:08     ` kbuild test robot
2015-10-30 12:08     ` kbuild test robot
2015-10-31  2:06     ` Shannon Zhao
2015-10-31  2:06       ` Shannon Zhao
2015-11-30 18:22   ` Marc Zyngier
2015-11-30 18:22     ` Marc Zyngier
2015-11-30 18:22     ` Marc Zyngier
2015-12-01 14:35     ` Shannon Zhao
2015-12-01 14:35       ` Shannon Zhao
2015-12-01 14:50       ` Marc Zyngier
2015-12-01 14:50         ` Marc Zyngier
2015-12-01 15:13         ` Shannon Zhao
2015-12-01 15:13           ` Shannon Zhao
2015-12-01 15:41           ` Marc Zyngier
2015-12-01 15:41             ` Marc Zyngier
2015-12-01 16:26             ` Shannon Zhao
2015-12-01 16:26               ` Shannon Zhao
2015-12-01 16:57               ` Marc Zyngier
2015-12-01 16:57                 ` Marc Zyngier
2015-12-02  2:40                 ` Shannon Zhao
2015-12-02  2:40                   ` Shannon Zhao
2015-12-02  8:45                   ` Marc Zyngier
2015-12-02  8:45                     ` Marc Zyngier
2015-12-02  9:49                     ` Shannon Zhao
2015-12-02  9:49                       ` Shannon Zhao
2015-12-02 10:22                       ` Marc Zyngier
2015-12-02 10:22                         ` Marc Zyngier
2015-12-02 16:27                         ` Christoffer Dall
2015-12-02 16:27                           ` Christoffer Dall
2015-10-30  6:22 ` [PATCH v4 19/21] KVM: ARM64: Reset PMU state when resetting vcpu Shannon Zhao
2015-10-30  6:22   ` Shannon Zhao
2015-10-30  6:22   ` Shannon Zhao
2015-10-30  6:22 ` [PATCH v4 20/21] KVM: ARM64: Free perf event of PMU when destroying vcpu Shannon Zhao
2015-10-30  6:22   ` Shannon Zhao
2015-10-30  6:22   ` Shannon Zhao
2015-10-30  6:22 ` [PATCH v4 21/21] KVM: ARM64: Add a new kvm ARM PMU device Shannon Zhao
2015-10-30  6:22   ` Shannon Zhao
2015-10-30  6:22   ` Shannon Zhao
2015-11-30 18:31   ` Marc Zyngier
2015-11-30 18:31     ` Marc Zyngier
2015-11-30 18:31     ` Marc Zyngier
2015-11-30 18:34 ` [PATCH v4 00/21] KVM: ARM64: Add guest PMU support Marc Zyngier
2015-11-30 18:34   ` Marc Zyngier
2015-11-30 18:34   ` Marc Zyngier
2015-12-01  1:52   ` Shannon Zhao
2015-12-01  1:52     ` Shannon Zhao
2015-12-01  1:52     ` Shannon Zhao
