* [PATCH v10 00/21] KVM: ARM64: Add guest PMU support
@ 2016-01-27  3:51 ` Shannon Zhao
  0 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

This patchset adds guest PMU support for KVM on ARM64. It takes a
trap-and-emulate approach: when the guest accesses a PMU register to
monitor an event, the access traps to KVM, which calls the perf_event
API to create a corresponding perf event and uses the relevant
perf_event APIs to read the event's count value.

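As a rough sketch of the idea (illustrative only, not the patch code; the
helper name and attribute choices here are assumptions), the trap handler
turns the guest's event selection into a host perf event, with counting
at EL2 excluded:

#include <linux/perf_event.h>

/* Illustrative sketch: back one guest counter with a host perf event. */
static struct perf_event *pmu_create_guest_counter(u64 eventsel)
{
	struct perf_event_attr attr = {
		.type		= PERF_TYPE_RAW,
		.size		= sizeof(attr),
		.pinned		= 1,
		.exclude_hv	= 1,	/* don't count cycles spent at EL2 */
		.config		= eventsel,
	};

	/* Count for the current (vcpu) thread; no overflow handler here. */
	return perf_event_create_kernel_counter(&attr, -1, current, NULL, NULL);
}
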
perf can be used to test this patchset inside the guest. "perf list"
shows the hardware events and hardware cache events perf supports, and
"perf stat -e EVENT" monitors a given event. For example, "perf stat -e
cycles" counts CPU cycles and "perf stat -e cache-misses" counts cache
misses.

Below are the outputs of "perf stat -r 5 sleep 5" when running in host
and guest.

Host:
 Performance counter stats for 'sleep 5' (5 runs):

          0.529248      task-clock (msec)         #    0.000 CPUs utilized            ( +-  1.65% )
                 1      context-switches          #    0.002 M/sec
                 0      cpu-migrations            #    0.000 K/sec
                49      page-faults               #    0.092 M/sec                    ( +-  1.05% )
           1104279      cycles                    #    2.087 GHz                      ( +-  1.65% )
   <not supported>      stalled-cycles-frontend
   <not supported>      stalled-cycles-backend
            528112      instructions              #    0.48  insns per cycle          ( +-  1.12% )
   <not supported>      branches
              9579      branch-misses             #   18.099 M/sec                    ( +-  2.40% )

       5.000851904 seconds time elapsed                                          ( +-  0.00% )

Guest:
 Performance counter stats for 'sleep 5' (5 runs):

          0.695412      task-clock (msec)         #    0.000 CPUs utilized            ( +-  1.26% )
                 1      context-switches          #    0.001 M/sec
                 0      cpu-migrations            #    0.000 K/sec
                49      page-faults               #    0.070 M/sec                    ( +-  1.29% )
           1430471      cycles                    #    2.057 GHz                      ( +-  1.25% )
   <not supported>      stalled-cycles-frontend
   <not supported>      stalled-cycles-backend
            659173      instructions              #    0.46  insns per cycle          ( +-  2.64% )
   <not supported>      branches
             10893      branch-misses             #   15.664 M/sec                    ( +-  1.23% )

       5.001277044 seconds time elapsed                                          ( +-  0.00% )

A cycle counter read test like the one below was run in both guest and host:

static void test(void)
{
	unsigned long count = 0, count1, count2;

	count1 = read_cycles();
	count++;		/* one trivial instruction between the two reads */
	count2 = read_cycles();
	/* count1, count2 and delta = count2 - count1 are then printed */
}

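read_cycles() is not shown here; on ARM64 it could simply read the PMU
cycle counter directly, assuming EL0 access to it has been enabled (e.g.
via PMUSERENR_EL0):

static inline unsigned long read_cycles(void)
{
	unsigned long cval;

	/* PMCCNTR_EL0 is the PMU cycle counter. */
	asm volatile("mrs %0, pmccntr_el0" : "=r" (cval));
	return cval;
}
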
Host:
count1: 3046505444
count2: 3046505575
delta: 131

Guest:
count1: 5932773531
count2: 5932773668
delta: 137

The gap between guest and host is very small. One likely reason is that
cycles spent in EL2 and in the host are not counted, since we set
exclude_hv = 1; the cycles spent saving/restoring registers at EL2 are
therefore not included.

This patchset can be fetched from [1], and the matching QEMU version for
testing can be fetched from [2].

The results of 'perf test' can be found at [3][4].
The results of the perf_event_tests test suite can be found at [5][6].

I have also run "perf top" in two VMs and the host at the same time; it
works well.

Thanks,
Shannon

[1] https://git.linaro.org/people/shannon.zhao/linux-mainline.git  KVM_ARM64_PMU_v10
[2] https://git.linaro.org/people/shannon.zhao/qemu.git  PMU
[3] http://people.linaro.org/~shannon.zhao/PMU/perf-test-host.txt
[4] http://people.linaro.org/~shannon.zhao/PMU/perf-test-guest.txt
[5] http://people.linaro.org/~shannon.zhao/PMU/perf_event_tests-host.txt
[6] http://people.linaro.org/~shannon.zhao/PMU/perf_event_tests-guest.txt

Changes since v9:
* Change kvm_arm_support_pmu_v3 to a bool function [PATCH 19/21]
* Fix several typos, change the checking logic of kvm_arm_pmu_v3_init and
  rename irq_is_invalid to irq_is_valid [PATCH 21/21]
* Add Acked-by and Reviewed-by tags from Peter and Andrew, thanks a lot

Changes since v8:
* Fix the wrong use of r->reg in register accessors for 32bit part
* Rewrite the handling of PMUSERENR based on the new UND injection patch
* Drop the inline attribute
* Introduce SET/GET/HAS_DEVICE_ATTR for the vcpu ioctl and set the PMU
  overflow interrupt via this API
* Use one feature bit for PMUv3

Changes since v7:
* Rebase on kvm-arm next
* Fix the handler of PMUSERENR and add a helper to forward traps to the
  guest's EL1
* Fix some small bugs found by Marc

Changes since v6:
* Rebase on v4.4-rc5
* Drop access_pmu_cp15_regs() so that the same handler can be used for
  both AArch64 and AArch32. This also drops the definitions of the CP15
  register offsets and avoids adding the same code twice
* Use vcpu_sys_reg() when accessing PMU registers to avoid endianness
  issues
* Add handler for PMUSERENR and some checkers for other registers
* Add kvm_arm_pmu_get_attr()

Changes since v5:
* Rebase on new linux kernel mainline
* Remove state duplications and drop PMOVSCLR, PMCNTENCLR, PMINTENCLR,
  PMXEVCNTR, PMXEVTYPER
* Add a helper to check if vPMU is already initialized
* Remove kvm_vcpu from kvm_pmc

Changes since v4:
* Rebase on new linux kernel mainline 
* Drop the reset handler of CP15 registers
* Fix a compile failure on arch ARM due to lack of asm/pmu.h
* Refactor the interrupt injecting flow according to Marc's suggestion
* Check the value of PMSELR register
* Calculate the attr.disabled according to PMCR.E and PMCNTENSET/CLR
* Fix some coding style issues
* Document the vPMU irq range

Changes since v3:
* Rebase on new linux kernel mainline 
* Use ARMV8_MAX_COUNTERS instead of 32
* Reset PMCR.E to zero
* Trigger overflow for software increment
* Optimize the PMU interrupt injection logic
* Add handlers for the E, C and P bits of PMCR
* Fix the overflow bug found by perf_event_tests
* Run 'perf test', 'perf top' and perf_event_tests test suite
* Set exclude_hv = 1 so that cycles spent in EL2 are not counted

Changes since v2:
* Directly use perf raw event type to create perf_event in KVM
* Add a helper vcpu_sysreg_write
* Remove an unrelated header file

Changes since v1:
* Use switch...case in the register access handler instead of adding a
  separate handler for each register
* Use the sys_regs array to store the register values instead of adding
  new variables to struct kvm_pmc
* Fix the handling of CP15 regs
* Create a new kvm device, vPMU, so that userspace can choose whether to
  create a PMU
* Fix the handling of the PMU overflow interrupt

Shannon Zhao (21):
  ARM64: Move PMU register related defines to asm/pmu.h
  KVM: ARM64: Define PMU data structure for each vcpu
  KVM: ARM64: Add offset defines for PMU registers
  KVM: ARM64: Add access handler for PMCR register
  KVM: ARM64: Add access handler for PMSELR register
  KVM: ARM64: Add access handler for PMCEID0 and PMCEID1 register
  KVM: ARM64: PMU: Add perf event map and introduce perf event creating
    function
  KVM: ARM64: Add access handler for event type register
  KVM: ARM64: Add access handler for event counter register
  KVM: ARM64: Add access handler for PMCNTENSET and PMCNTENCLR register
  KVM: ARM64: Add access handler for PMINTENSET and PMINTENCLR register
  KVM: ARM64: Add access handler for PMOVSSET and PMOVSCLR register
  KVM: ARM64: Add access handler for PMSWINC register
  KVM: ARM64: Add helper to handle PMCR register bits
  KVM: ARM64: Add access handler for PMUSERENR register
  KVM: ARM64: Add PMU overflow interrupt routing
  KVM: ARM64: Reset PMU state when resetting vcpu
  KVM: ARM64: Free perf event of PMU when destroying vcpu
  KVM: ARM64: Add a new feature bit for PMUv3
  KVM: ARM: Introduce per-vcpu kvm device controls
  KVM: ARM64: Add a new vcpu device control group for PMUv3

 Documentation/virtual/kvm/api.txt          |  12 +-
 Documentation/virtual/kvm/devices/vcpu.txt |  32 ++
 arch/arm/include/asm/kvm_host.h            |  15 +
 arch/arm/kvm/arm.c                         |  61 +++
 arch/arm64/include/asm/kvm_host.h          |  25 +-
 arch/arm64/include/asm/pmu.h               |  81 ++++
 arch/arm64/include/uapi/asm/kvm.h          |   6 +
 arch/arm64/kernel/perf_event.c             |  36 +-
 arch/arm64/kvm/Kconfig                     |   7 +
 arch/arm64/kvm/Makefile                    |   1 +
 arch/arm64/kvm/guest.c                     |  51 +++
 arch/arm64/kvm/hyp/hyp.h                   |   1 +
 arch/arm64/kvm/hyp/switch.c                |   3 +
 arch/arm64/kvm/reset.c                     |   7 +
 arch/arm64/kvm/sys_regs.c                  | 570 +++++++++++++++++++++++++++--
 include/kvm/arm_pmu.h                      | 102 ++++++
 include/uapi/linux/kvm.h                   |   2 +
 virt/kvm/arm/pmu.c                         | 513 ++++++++++++++++++++++++++
 18 files changed, 1456 insertions(+), 69 deletions(-)
 create mode 100644 Documentation/virtual/kvm/devices/vcpu.txt
 create mode 100644 arch/arm64/include/asm/pmu.h
 create mode 100644 include/kvm/arm_pmu.h
 create mode 100644 virt/kvm/arm/pmu.c

-- 
2.0.4

* [PATCH v10 01/21] ARM64: Move PMU register related defines to asm/pmu.h
  2016-01-27  3:51 ` Shannon Zhao
@ 2016-01-27  3:51   ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: Anup Patel, kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

To use the ARMv8 PMU-related register defines from the KVM code,
move the relevant definitions to the asm/pmu.h header file.

Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/pmu.h   | 67 ++++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/perf_event.c | 36 +----------------------
 2 files changed, 68 insertions(+), 35 deletions(-)
 create mode 100644 arch/arm64/include/asm/pmu.h

diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
new file mode 100644
index 0000000..4406184
--- /dev/null
+++ b/arch/arm64/include/asm/pmu.h
@@ -0,0 +1,67 @@
+/*
+ * PMU support
+ *
+ * Copyright (C) 2012 ARM Limited
+ * Author: Will Deacon <will.deacon@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_PMU_H
+#define __ASM_PMU_H
+
+#define ARMV8_MAX_COUNTERS      32
+#define ARMV8_COUNTER_MASK      (ARMV8_MAX_COUNTERS - 1)
+
+/*
+ * Per-CPU PMCR: config reg
+ */
+#define ARMV8_PMCR_E		(1 << 0) /* Enable all counters */
+#define ARMV8_PMCR_P		(1 << 1) /* Reset all counters */
+#define ARMV8_PMCR_C		(1 << 2) /* Cycle counter reset */
+#define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
+#define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
+#define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
+#define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
+#define	ARMV8_PMCR_N_MASK	0x1f
+#define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
+
+/*
+ * PMCNTEN: counters enable reg
+ */
+#define	ARMV8_CNTEN_MASK	0xffffffff	/* Mask for writable bits */
+
+/*
+ * PMINTEN: counters interrupt enable reg
+ */
+#define	ARMV8_INTEN_MASK	0xffffffff	/* Mask for writable bits */
+
+/*
+ * PMOVSR: counters overflow flag status reg
+ */
+#define	ARMV8_OVSR_MASK		0xffffffff	/* Mask for writable bits */
+#define	ARMV8_OVERFLOWED_MASK	ARMV8_OVSR_MASK
+
+/*
+ * PMXEVTYPER: Event selection reg
+ */
+#define	ARMV8_EVTYPE_MASK	0xc80003ff	/* Mask for writable bits */
+#define	ARMV8_EVTYPE_EVENT	0x3ff		/* Mask for EVENT bits */
+
+/*
+ * Event filters for PMUv3
+ */
+#define	ARMV8_EXCLUDE_EL1	(1 << 31)
+#define	ARMV8_EXCLUDE_EL0	(1 << 30)
+#define	ARMV8_INCLUDE_EL2	(1 << 27)
+
+#endif /* __ASM_PMU_H */
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index f7ab14c..8fad83d 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -24,6 +24,7 @@
 #include <linux/of.h>
 #include <linux/perf/arm_pmu.h>
 #include <linux/platform_device.h>
+#include <asm/pmu.h>
 
 /*
  * ARMv8 PMUv3 Performance Events handling code.
@@ -333,9 +334,6 @@ static const struct attribute_group *armv8_pmuv3_attr_groups[] = {
 #define	ARMV8_IDX_COUNTER_LAST(cpu_pmu) \
 	(ARMV8_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1)
 
-#define	ARMV8_MAX_COUNTERS	32
-#define	ARMV8_COUNTER_MASK	(ARMV8_MAX_COUNTERS - 1)
-
 /*
  * ARMv8 low level PMU access
  */
@@ -346,38 +344,6 @@ static const struct attribute_group *armv8_pmuv3_attr_groups[] = {
 #define	ARMV8_IDX_TO_COUNTER(x)	\
 	(((x) - ARMV8_IDX_COUNTER0) & ARMV8_COUNTER_MASK)
 
-/*
- * Per-CPU PMCR: config reg
- */
-#define ARMV8_PMCR_E		(1 << 0) /* Enable all counters */
-#define ARMV8_PMCR_P		(1 << 1) /* Reset all counters */
-#define ARMV8_PMCR_C		(1 << 2) /* Cycle counter reset */
-#define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
-#define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
-#define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
-#define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
-#define	ARMV8_PMCR_N_MASK	0x1f
-#define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
-
-/*
- * PMOVSR: counters overflow flag status reg
- */
-#define	ARMV8_OVSR_MASK		0xffffffff	/* Mask for writable bits */
-#define	ARMV8_OVERFLOWED_MASK	ARMV8_OVSR_MASK
-
-/*
- * PMXEVTYPER: Event selection reg
- */
-#define	ARMV8_EVTYPE_MASK	0xc80003ff	/* Mask for writable bits */
-#define	ARMV8_EVTYPE_EVENT	0x3ff		/* Mask for EVENT bits */
-
-/*
- * Event filters for PMUv3
- */
-#define	ARMV8_EXCLUDE_EL1	(1 << 31)
-#define	ARMV8_EXCLUDE_EL0	(1 << 30)
-#define	ARMV8_INCLUDE_EL2	(1 << 27)
-
 static inline u32 armv8pmu_pmcr_read(void)
 {
 	u32 val;
-- 
2.0.4

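As a hedged illustration of what sharing these defines enables (this
helper is not part of the patch), KVM-side code can now derive the number
of implemented event counters from a PMCR value using the masks from
asm/pmu.h:

#include <asm/pmu.h>

/* Illustrative helper, not from the patch. */
static u32 pmcr_num_event_counters(u64 pmcr)
{
	return (pmcr >> ARMV8_PMCR_N_SHIFT) & ARMV8_PMCR_N_MASK;
}
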
* [PATCH v10 02/21] KVM: ARM64: Define PMU data structure for each vcpu
  2016-01-27  3:51 ` Shannon Zhao
@ 2016-01-27  3:51   ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: linux-arm-kernel, kvm, will.deacon, wei, drjones, cov,
	shannon.zhao, peter.huangpeng, hangaohuai, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

Here we plan to support a virtual PMU for the guest by full software
emulation, so define some basic structs and functions in preparation
for further steps. Define struct kvm_pmc for a performance monitor
counter and struct kvm_pmu for the per-vcpu performance monitor unit.
According to the ARMv8 spec, the PMU contains at most 32
(ARMV8_MAX_COUNTERS) counters.

Since this only supports ARM64 (or PMUv3), add a separate config symbol
for it.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/kvm/Kconfig            |  7 +++++++
 include/kvm/arm_pmu.h             | 42 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 51 insertions(+)
 create mode 100644 include/kvm/arm_pmu.h

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 689d4c9..6f0241f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -36,6 +36,7 @@
 
 #include <kvm/arm_vgic.h>
 #include <kvm/arm_arch_timer.h>
+#include <kvm/arm_pmu.h>
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
@@ -211,6 +212,7 @@ struct kvm_vcpu_arch {
 	/* VGIC state */
 	struct vgic_cpu vgic_cpu;
 	struct arch_timer_cpu timer_cpu;
+	struct kvm_pmu pmu;
 
 	/*
 	 * Anything that is not used directly from assembly code goes
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index a5272c0..de7450d 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -36,6 +36,7 @@ config KVM
 	select HAVE_KVM_EVENTFD
 	select HAVE_KVM_IRQFD
 	select KVM_ARM_VGIC_V3
+	select KVM_ARM_PMU if HW_PERF_EVENTS
 	---help---
 	  Support hosting virtualized guest machines.
 	  We don't support KVM with 16K page tables yet, due to the multiple
@@ -48,6 +49,12 @@ config KVM_ARM_HOST
 	---help---
 	  Provides host support for ARM processors.
 
+config KVM_ARM_PMU
+	bool
+	---help---
+	  Adds support for a virtual Performance Monitoring Unit (PMU) in
+	  virtual machines.
+
 source drivers/vhost/Kconfig
 
 endif # VIRTUALIZATION
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
new file mode 100644
index 0000000..be220ee
--- /dev/null
+++ b/include/kvm/arm_pmu.h
@@ -0,0 +1,42 @@
+/*
+ * Copyright (C) 2015 Linaro Ltd.
+ * Author: Shannon Zhao <shannon.zhao@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_ARM_KVM_PMU_H
+#define __ASM_ARM_KVM_PMU_H
+
+#ifdef CONFIG_KVM_ARM_PMU
+
+#include <linux/perf_event.h>
+#include <asm/pmu.h>
+
+struct kvm_pmc {
+	u8 idx;/* index into the pmu->pmc array */
+	struct perf_event *perf_event;
+	u64 bitmask;
+};
+
+struct kvm_pmu {
+	int irq_num;
+	struct kvm_pmc pmc[ARMV8_MAX_COUNTERS];
+	bool ready;
+};
+#else
+struct kvm_pmu {
+};
+#endif
+
+#endif
-- 
2.0.4

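As a small usage sketch of the structures above (the helper itself is
hypothetical, not part of the patch), emulation code can reach a
counter's state through the per-vcpu PMU embedded in struct kvm_vcpu_arch:

#include <kvm/arm_pmu.h>

/* Hypothetical helper: map a counter index to its kvm_pmc. */
static struct kvm_pmc *vcpu_to_pmc(struct kvm_vcpu *vcpu, u8 idx)
{
	return &vcpu->arch.pmu.pmc[idx];
}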


* [PATCH v10 03/21] KVM: ARM64: Add offset defines for PMU registers
  2016-01-27  3:51 ` Shannon Zhao
@ 2016-01-27  3:51   ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: linux-arm-kernel, kvm, will.deacon, wei, drjones, cov,
	shannon.zhao, peter.huangpeng, hangaohuai, zhaoshenglong

From: Shannon Zhao <shannon.zhao@linaro.org>

We are about to trap and emulate accesses to each PMU register
individually. This adds the context offsets for the AArch64 PMU
registers.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_host.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 6f0241f..6bab7fb 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -115,6 +115,21 @@ enum vcpu_sysreg {
 	MDSCR_EL1,	/* Monitor Debug System Control Register */
 	MDCCINT_EL1,	/* Monitor Debug Comms Channel Interrupt Enable Reg */
 
+	/* Performance Monitors Registers */
+	PMCR_EL0,	/* Control Register */
+	PMOVSSET_EL0,	/* Overflow Flag Status Set Register */
+	PMSELR_EL0,	/* Event Counter Selection Register */
+	PMEVCNTR0_EL0,	/* Event Counter Register (0-30) */
+	PMEVCNTR30_EL0 = PMEVCNTR0_EL0 + 30,
+	PMCCNTR_EL0,	/* Cycle Counter Register */
+	PMEVTYPER0_EL0,	/* Event Type Register (0-30) */
+	PMEVTYPER30_EL0 = PMEVTYPER0_EL0 + 30,
+	PMCCFILTR_EL0,	/* Cycle Count Filter Register */
+	PMCNTENSET_EL0,	/* Count Enable Set Register */
+	PMINTENSET_EL1,	/* Interrupt Enable Set Register */
+	PMUSERENR_EL0,	/* User Enable Register */
+	PMSWINC_EL0,	/* Software Increment Register */
+
 	/* 32bit specific registers. Keep them at the end of the range */
 	DACR32_EL2,	/* Domain Access Control Register */
 	IFSR32_EL2,	/* Instruction Fault Status Register */
-- 
2.0.4



^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 04/21] KVM: ARM64: Add access handler for PMCR register
@ 2016-01-27  3:51   ` Shannon Zhao
  0 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

Add a reset handler which reads the host value of PMCR_EL0 and makes
the writable bits architecturally UNKNOWN, except for PMCR.E, which
resets to zero. Also add an access handler for PMCR.
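
As a worked sketch of the reset computation (using the existing masks
from asm/pmu.h, where ARMV8_PMCR_MASK covers the writable bits):

	val  = pmcr & ~ARMV8_PMCR_MASK;		/* keep RO bits (e.g. PMCR.N) */
	val |= ARMV8_PMCR_MASK & 0xdecafbad;	/* writable bits -> UNKNOWN   */
	val &= ~ARMV8_PMCR_E;			/* PMCR.E resets to zero      */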

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 42 ++++++++++++++++++++++++++++++++++++++++--
 include/kvm/arm_pmu.h     |  4 ++++
 2 files changed, 44 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index eec3598..97fea84 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -34,6 +34,7 @@
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_host.h>
 #include <asm/kvm_mmu.h>
+#include <asm/pmu.h>
 
 #include <trace/events/kvm.h>
 
@@ -439,6 +440,43 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
 }
 
+static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u64 pmcr, val;
+
+	asm volatile("mrs %0, pmcr_el0\n" : "=r" (pmcr));
+	/* Writable bits of PMCR_EL0 (ARMV8_PMCR_MASK) are reset to UNKNOWN,
+	 * except PMCR.E, which resets to zero.
+	 */
+	val = ((pmcr & ~ARMV8_PMCR_MASK) | (ARMV8_PMCR_MASK & 0xdecafbad))
+	      & (~ARMV8_PMCR_E);
+	vcpu_sys_reg(vcpu, PMCR_EL0) = val;
+}
+
+static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			const struct sys_reg_desc *r)
+{
+	u64 val;
+
+	if (!kvm_arm_pmu_v3_ready(vcpu))
+		return trap_raz_wi(vcpu, p, r);
+
+	if (p->is_write) {
+		/* Only update writeable bits of PMCR */
+		val = vcpu_sys_reg(vcpu, PMCR_EL0);
+		val &= ~ARMV8_PMCR_MASK;
+		val |= p->regval & ARMV8_PMCR_MASK;
+		vcpu_sys_reg(vcpu, PMCR_EL0) = val;
+	} else {
+		/* PMCR.P & PMCR.C are RAZ */
+		val = vcpu_sys_reg(vcpu, PMCR_EL0)
+		      & ~(ARMV8_PMCR_P | ARMV8_PMCR_C);
+		p->regval = val;
+	}
+
+	return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -623,7 +661,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
 	/* PMCR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000),
-	  trap_raz_wi },
+	  access_pmcr, reset_pmcr, },
 	/* PMCNTENSET_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
 	  trap_raz_wi },
@@ -885,7 +923,7 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 7), CRm(14), Op2( 2), access_dcsw },
 
 	/* PMU */
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index be220ee..32fee2d 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -34,9 +34,13 @@ struct kvm_pmu {
 	struct kvm_pmc pmc[ARMV8_MAX_COUNTERS];
 	bool ready;
 };
+
+#define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
 #else
 struct kvm_pmu {
 };
+
+#define kvm_arm_pmu_v3_ready(v)		(false)
 #endif
 
 #endif
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 05/21] KVM: ARM64: Add access handler for PMSELR register
@ 2016-01-27  3:51   ` Shannon Zhao
  0 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

Since the reset value of PMSELR_EL0 is UNKNOWN, use reset_unknown for
its reset handler. When reading PMSELR, return only the PMSELR.SEL
field to the guest.
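
Since PMSELR_EL0.SEL is only five bits wide, masking the shadow value
with ARMV8_COUNTER_MASK (31) is enough to isolate the SEL field on
reads. A quick sketch of the effect:

	vcpu_sys_reg(vcpu, PMSELR_EL0) = 0xffffffe1;	/* guest write */
	sel = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
	/* sel == 1: only the low five bits are returned */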

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 97fea84..fc60041 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -477,6 +477,21 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			  const struct sys_reg_desc *r)
+{
+	if (!kvm_arm_pmu_v3_ready(vcpu))
+		return trap_raz_wi(vcpu, p, r);
+
+	if (p->is_write)
+		vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
+	else
+		/* return PMSELR.SEL field */
+		p->regval = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
+
+	return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -676,7 +691,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  trap_raz_wi },
 	/* PMSELR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
-	  trap_raz_wi },
+	  access_pmselr, reset_unknown, PMSELR_EL0 },
 	/* PMCEID0_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
 	  trap_raz_wi },
@@ -927,7 +942,7 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 06/21] KVM: ARM64: Add access handler for PMCEID0 and PMCEID1 register
@ 2016-01-27  3:51   ` Shannon Zhao
  0 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

Add an access handler which reads the host value of PMCEID0 or PMCEID1
when the guest accesses these registers. Writing to PMCEID0 or PMCEID1
is UNDEFINED.
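
Both registers share a single handler; bit 0 of the trapped Op2
encoding tells them apart (PMCEID0_EL0 is Op2 0b110, PMCEID1_EL0 is
Op2 0b111). A sketch of the selection logic the handler below relies
on:

	if (!(p->Op2 & 1))	/* even Op2: PMCEID0_EL0 */
		asm volatile("mrs %0, pmceid0_el0" : "=r" (pmceid));
	else			/* odd Op2: PMCEID1_EL0 */
		asm volatile("mrs %0, pmceid1_el0" : "=r" (pmceid));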

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 29 +++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index fc60041..06257e2 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -492,6 +492,27 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			  const struct sys_reg_desc *r)
+{
+	u64 pmceid;
+
+	if (!kvm_arm_pmu_v3_ready(vcpu))
+		return trap_raz_wi(vcpu, p, r);
+
+	if (p->is_write)
+		return false;
+
+	if (!(p->Op2 & 1))
+		asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
+	else
+		asm volatile("mrs %0, pmceid1_el0\n" : "=r" (pmceid));
+
+	p->regval = pmceid;
+
+	return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -694,10 +715,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  access_pmselr, reset_unknown, PMSELR_EL0 },
 	/* PMCEID0_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
-	  trap_raz_wi },
+	  access_pmceid },
 	/* PMCEID1_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
-	  trap_raz_wi },
+	  access_pmceid },
 	/* PMCCNTR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
 	  trap_raz_wi },
@@ -943,8 +964,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
@ 2016-01-27  3:51   ` Shannon Zhao
  0 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

When we use tools like perf on the host, perf passes an event type and
the id of the event within that type category to the kernel, and the
kernel maps them to a hardware event number which it writes to the PMU
PMEVTYPER<n>_EL0 register. When KVM traps the event number written by
the guest, it is therefore already a raw hardware event number, so KVM
can use it directly to create a perf_event for it.
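
One subtlety in kvm_pmu_set_counter_event_type() below is the sample
period: a perf event should fire when the emulated counter wraps, so
the period is the distance from the current value to the wrap point.
A worked sketch for a 32-bit counter (bitmask == 0xffffffff):

	counter = 0xfffffff0;			/* current counter value  */
	sample_period = (-counter) & bitmask;	/* 0x10 counts until wrap */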

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/pmu.h |   3 ++
 arch/arm64/kvm/Makefile      |   1 +
 include/kvm/arm_pmu.h        |  10 ++++
 virt/kvm/arm/pmu.c           | 122 +++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 136 insertions(+)
 create mode 100644 virt/kvm/arm/pmu.c

diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
index 4406184..2588f9c 100644
--- a/arch/arm64/include/asm/pmu.h
+++ b/arch/arm64/include/asm/pmu.h
@@ -21,6 +21,7 @@
 
 #define ARMV8_MAX_COUNTERS      32
 #define ARMV8_COUNTER_MASK      (ARMV8_MAX_COUNTERS - 1)
+#define ARMV8_CYCLE_IDX         (ARMV8_MAX_COUNTERS - 1)
 
 /*
  * Per-CPU PMCR: config reg
@@ -31,6 +32,8 @@
 #define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
 #define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
 #define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
+/* Determines which PMCCNTR_EL0 bit generates an overflow */
+#define ARMV8_PMCR_LC		(1 << 6)
 #define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
 #define	ARMV8_PMCR_N_MASK	0x1f
 #define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index caee9ee..122cff4 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -26,3 +26,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v2-emul.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
+kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 32fee2d..ee4b15c 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -36,11 +36,21 @@ struct kvm_pmu {
 };
 
 #define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
+u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
+				    u64 select_idx);
 #else
 struct kvm_pmu {
 };
 
 #define kvm_arm_pmu_v3_ready(v)		(false)
+static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
+					    u64 select_idx)
+{
+	return 0;
+}
+static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
+						  u64 data, u64 select_idx) {}
 #endif
 
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
new file mode 100644
index 0000000..673ec55
--- /dev/null
+++ b/virt/kvm/arm/pmu.c
@@ -0,0 +1,122 @@
+/*
+ * Copyright (C) 2015 Linaro Ltd.
+ * Author: Shannon Zhao <shannon.zhao@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/cpu.h>
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <linux/perf_event.h>
+#include <asm/kvm_emulate.h>
+#include <kvm/arm_pmu.h>
+
+/**
+ * kvm_pmu_get_counter_value - get PMU counter value
+ * @vcpu: The vcpu pointer
+ * @select_idx: The counter index
+ */
+u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+	u64 counter, reg, enabled, running;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+
+	reg = (select_idx == ARMV8_CYCLE_IDX)
+	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
+	counter = vcpu_sys_reg(vcpu, reg);
+
+	/* The real counter value is equal to the value of the counter
+	 * register plus the value counted so far by the perf event.
+	 */
+	if (pmc->perf_event)
+		counter += perf_event_read_value(pmc->perf_event, &enabled,
+						 &running);
+
+	return counter & pmc->bitmask;
+}
+
+/**
+ * kvm_pmu_stop_counter - stop PMU counter
+ * @pmc: The PMU counter pointer
+ *
+ * If this counter has been configured to monitor some event, release it here.
+ */
+static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
+{
+	u64 counter, reg;
+
+	if (pmc->perf_event) {
+		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
+		reg = (pmc->idx == ARMV8_CYCLE_IDX)
+		       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;
+		vcpu_sys_reg(vcpu, reg) = counter;
+		perf_event_disable(pmc->perf_event);
+		perf_event_release_kernel(pmc->perf_event);
+		pmc->perf_event = NULL;
+	}
+}
+
+static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
+{
+	return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &&
+	       (vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
+}
+
+/**
+ * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
+ * @vcpu: The vcpu pointer
+ * @data: The data guest writes to PMXEVTYPER_EL0
+ * @select_idx: The number of selected counter
+ *
+ * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
+ * event with given hardware event number. Here we call perf_event API to
+ * emulate this action and create a kernel perf event for it.
+ */
+void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
+				    u64 select_idx)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
+	struct perf_event *event;
+	struct perf_event_attr attr;
+	u64 eventsel, counter;
+
+	kvm_pmu_stop_counter(vcpu, pmc);
+	eventsel = data & ARMV8_EVTYPE_EVENT;
+
+	memset(&attr, 0, sizeof(struct perf_event_attr));
+	attr.type = PERF_TYPE_RAW;
+	attr.size = sizeof(attr);
+	attr.pinned = 1;
+	attr.disabled = kvm_pmu_counter_is_enabled(vcpu, select_idx);
+	attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
+	attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
+	attr.exclude_hv = 1; /* Don't count EL2 events */
+	attr.exclude_host = 1; /* Don't count host events */
+	attr.config = eventsel;
+
+	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
+	/* The initial sample period (overflow count) of an event. */
+	attr.sample_period = (-counter) & pmc->bitmask;
+
+	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+	if (IS_ERR(event)) {
+		pr_err_once("kvm: pmu event creation failed %ld\n",
+			    PTR_ERR(event));
+		return;
+	}
+
+	pmc->perf_event = event;
+}
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 08/21] KVM: ARM64: Add access handler for event type register
@ 2016-01-27  3:51   ` Shannon Zhao
  0 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

These kinds of registers include PMEVTYPERn, PMCCFILTR and PMXEVTYPER,
the last of which maps onto PMEVTYPERn or PMCCFILTR.

The access handler translates all AArch32 register offsets to AArch64
ones and uses vcpu_sys_reg() to access their values, so it does not
have to care about endianness.

When the guest writes to these registers, create a perf_event for the
selected event type.
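
For the PMEVTYPER<n> range the handler recovers the counter index from
the trapped encoding: the low two bits of CRm pick a group of eight
registers and Op2 picks the register within the group. A sketch of the
decode, mirroring the handler below:

	idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
	reg = (idx == ARMV8_CYCLE_IDX)
	      ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + idx;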

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 140 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 138 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 06257e2..298ae94 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -513,6 +513,54 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
+{
+	u64 pmcr, val;
+
+	pmcr = vcpu_sys_reg(vcpu, PMCR_EL0);
+	val = (pmcr >> ARMV8_PMCR_N_SHIFT) & ARMV8_PMCR_N_MASK;
+	if (idx >= val && idx != ARMV8_CYCLE_IDX)
+		return false;
+
+	return true;
+}
+
+static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			       const struct sys_reg_desc *r)
+{
+	u64 idx, reg;
+
+	if (!kvm_arm_pmu_v3_ready(vcpu))
+		return trap_raz_wi(vcpu, p, r);
+
+	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
+		/* PMXEVTYPER_EL0 */
+		idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
+		reg = PMEVTYPER0_EL0 + idx;
+	} else if (r->CRn == 14 && (r->CRm & 12) == 12) {
+		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
+		if (idx == ARMV8_CYCLE_IDX)
+			reg = PMCCFILTR_EL0;
+		else
+			/* PMEVTYPERn_EL0 */
+			reg = PMEVTYPER0_EL0 + idx;
+	} else {
+		BUG();
+	}
+
+	if (!pmu_counter_idx_valid(vcpu, idx))
+		return false;
+
+	if (p->is_write) {
+		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
+		vcpu_sys_reg(vcpu, reg) = p->regval & ARMV8_EVTYPE_MASK;
+	} else {
+		p->regval = vcpu_sys_reg(vcpu, reg) & ARMV8_EVTYPE_MASK;
+	}
+
+	return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -528,6 +576,13 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b111),	\
 	  trap_wcr, reset_wcr, n, 0,  get_wcr, set_wcr }
 
+/* Macro to expand the PMEVTYPERn_EL0 register */
+#define PMU_PMEVTYPER_EL0(n)						\
+	/* PMEVTYPERn_EL0 */						\
+	{ Op0(0b11), Op1(0b011), CRn(0b1110),				\
+	  CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)),		\
+	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
+
 /*
  * Architected system registers.
  * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -724,7 +779,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  trap_raz_wi },
 	/* PMXEVTYPER_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
-	  trap_raz_wi },
+	  access_pmu_evtyper },
 	/* PMXEVCNTR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
 	  trap_raz_wi },
@@ -742,6 +797,45 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
 	  NULL, reset_unknown, TPIDRRO_EL0 },
 
+	/* PMEVTYPERn_EL0 */
+	PMU_PMEVTYPER_EL0(0),
+	PMU_PMEVTYPER_EL0(1),
+	PMU_PMEVTYPER_EL0(2),
+	PMU_PMEVTYPER_EL0(3),
+	PMU_PMEVTYPER_EL0(4),
+	PMU_PMEVTYPER_EL0(5),
+	PMU_PMEVTYPER_EL0(6),
+	PMU_PMEVTYPER_EL0(7),
+	PMU_PMEVTYPER_EL0(8),
+	PMU_PMEVTYPER_EL0(9),
+	PMU_PMEVTYPER_EL0(10),
+	PMU_PMEVTYPER_EL0(11),
+	PMU_PMEVTYPER_EL0(12),
+	PMU_PMEVTYPER_EL0(13),
+	PMU_PMEVTYPER_EL0(14),
+	PMU_PMEVTYPER_EL0(15),
+	PMU_PMEVTYPER_EL0(16),
+	PMU_PMEVTYPER_EL0(17),
+	PMU_PMEVTYPER_EL0(18),
+	PMU_PMEVTYPER_EL0(19),
+	PMU_PMEVTYPER_EL0(20),
+	PMU_PMEVTYPER_EL0(21),
+	PMU_PMEVTYPER_EL0(22),
+	PMU_PMEVTYPER_EL0(23),
+	PMU_PMEVTYPER_EL0(24),
+	PMU_PMEVTYPER_EL0(25),
+	PMU_PMEVTYPER_EL0(26),
+	PMU_PMEVTYPER_EL0(27),
+	PMU_PMEVTYPER_EL0(28),
+	PMU_PMEVTYPER_EL0(29),
+	PMU_PMEVTYPER_EL0(30),
+	/* PMCCFILTR_EL0
+	 * This register resets as unknown in 64bit mode while it resets as zero
+	 * in 32bit mode. Here we choose to reset it as zero for consistency.
+	 */
+	{ Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b1111), Op2(0b111),
+	  access_pmu_evtyper, reset_val, PMCCFILTR_EL0, 0 },
+
 	/* DACR32_EL2 */
 	{ Op0(0b11), Op1(0b100), CRn(0b0011), CRm(0b0000), Op2(0b000),
 	  NULL, reset_unknown, DACR32_EL2 },
@@ -931,6 +1025,13 @@ static const struct sys_reg_desc cp14_64_regs[] = {
 	{ Op1( 0), CRm( 2), .access = trap_raz_wi },
 };
 
+/* Macro to expand the PMEVTYPERn register */
+#define PMU_PMEVTYPER(n)						\
+	/* PMEVTYPERn */						\
+	{ Op1(0), CRn(0b1110),						\
+	  CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)),		\
+	  access_pmu_evtyper }
+
 /*
  * Trapped cp15 registers. TTBR0/TTBR1 get a double encoding,
  * depending on the way they are accessed (as a 32bit or a 64bit
@@ -967,7 +1068,7 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
-	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_evtyper },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },
@@ -982,6 +1083,41 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn(12), CRm(12), Op2( 5), trap_raz_wi },
 
 	{ Op1( 0), CRn(13), CRm( 0), Op2( 1), access_vm_reg, NULL, c13_CID },
+
+	/* PMEVTYPERn */
+	PMU_PMEVTYPER(0),
+	PMU_PMEVTYPER(1),
+	PMU_PMEVTYPER(2),
+	PMU_PMEVTYPER(3),
+	PMU_PMEVTYPER(4),
+	PMU_PMEVTYPER(5),
+	PMU_PMEVTYPER(6),
+	PMU_PMEVTYPER(7),
+	PMU_PMEVTYPER(8),
+	PMU_PMEVTYPER(9),
+	PMU_PMEVTYPER(10),
+	PMU_PMEVTYPER(11),
+	PMU_PMEVTYPER(12),
+	PMU_PMEVTYPER(13),
+	PMU_PMEVTYPER(14),
+	PMU_PMEVTYPER(15),
+	PMU_PMEVTYPER(16),
+	PMU_PMEVTYPER(17),
+	PMU_PMEVTYPER(18),
+	PMU_PMEVTYPER(19),
+	PMU_PMEVTYPER(20),
+	PMU_PMEVTYPER(21),
+	PMU_PMEVTYPER(22),
+	PMU_PMEVTYPER(23),
+	PMU_PMEVTYPER(24),
+	PMU_PMEVTYPER(25),
+	PMU_PMEVTYPER(26),
+	PMU_PMEVTYPER(27),
+	PMU_PMEVTYPER(28),
+	PMU_PMEVTYPER(29),
+	PMU_PMEVTYPER(30),
+	/* PMCCFILTR */
+	{ Op1(0), CRn(14), CRm(15), Op2(7), access_pmu_evtyper },
 };
 
 static const struct sys_reg_desc cp15_64_regs[] = {
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 09/21] KVM: ARM64: Add access handler for event counter register
@ 2016-01-27  3:51   ` Shannon Zhao
  0 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

These registers include PMEVCNTRn, PMCCNTR and PMXEVCNTR, the last of
which is mapped onto one of the PMEVCNTRn registers.

The access handler translates all AArch32 register offsets to their
AArch64 counterparts and uses vcpu_sys_reg() to access the values, which
avoids any special handling for endianness.

When reading these registers, return the sum of the saved register value
and the count accumulated by the perf event.
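
A plausible sketch of the read side, assuming the
kvm_pmu_get_counter_value() helper from earlier in this series and the
standard perf_event_read_value() kernel API (details such as counter
width masking are omitted here):

	u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
	{
		u64 enabled, running;
		struct kvm_pmc *pmc = &vcpu->arch.pmu.pmc[select_idx];
		u64 reg = (select_idx == ARMV8_CYCLE_IDX) ?
			  PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
		u64 counter = vcpu_sys_reg(vcpu, reg);

		/* Add whatever the backing perf event has counted so far. */
		if (pmc->perf_event)
			counter += perf_event_read_value(pmc->perf_event,
							 &enabled, &running);
		return counter;
	}

On a write, the handler below stores p->regval minus this sum into the
saved register, so the saved value plus the live perf count reads back as
exactly the value the guest wrote. For example, with a saved value of 100
and a perf count of 40, a guest write of 50 updates the saved value to
10, and the next read returns 10 + 40 = 50.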

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 129 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 125 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 298ae94..6a50262 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -561,6 +561,48 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
+			      struct sys_reg_params *p,
+			      const struct sys_reg_desc *r)
+{
+	u64 idx, reg, val;
+
+	if (!kvm_arm_pmu_v3_ready(vcpu))
+		return trap_raz_wi(vcpu, p, r);
+
+	if (r->CRn == 9 && r->CRm == 13) {
+		if (r->Op2 == 2) {
+			/* PMXEVCNTR_EL0 */
+			idx = vcpu_sys_reg(vcpu, PMSELR_EL0)
+			      & ARMV8_COUNTER_MASK;
+			reg = PMEVCNTR0_EL0 + idx;
+		} else if (r->Op2 == 0) {
+			/* PMCCNTR_EL0 */
+			idx = ARMV8_CYCLE_IDX;
+			reg = PMCCNTR_EL0;
+		} else {
+			BUG();
+		}
+	} else if (r->CRn == 14 && (r->CRm & 12) == 8) {
+		/* PMEVCNTRn_EL0 */
+		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
+		reg = PMEVCNTR0_EL0 + idx;
+	} else {
+		BUG();
+	}
+
+	if (!pmu_counter_idx_valid(vcpu, idx))
+		return false;
+
+	val = kvm_pmu_get_counter_value(vcpu, idx);
+	if (p->is_write)
+		vcpu_sys_reg(vcpu, reg) += (s64)p->regval - val;
+	else
+		p->regval = val;
+
+	return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -576,6 +618,13 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	{ Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b111),	\
 	  trap_wcr, reset_wcr, n, 0,  get_wcr, set_wcr }
 
+/* Macro to expand the PMEVCNTRn_EL0 register */
+#define PMU_PMEVCNTR_EL0(n)						\
+	/* PMEVCNTRn_EL0 */						\
+	{ Op0(0b11), Op1(0b011), CRn(0b1110),				\
+	  CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)),		\
+	  access_pmu_evcntr, reset_unknown, (PMEVCNTR0_EL0 + n), }
+
 /* Macro to expand the PMEVTYPERn_EL0 register */
 #define PMU_PMEVTYPER_EL0(n)						\
 	/* PMEVTYPERn_EL0 */						\
@@ -776,13 +825,13 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  access_pmceid },
 	/* PMCCNTR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
-	  trap_raz_wi },
+	  access_pmu_evcntr, reset_unknown, PMCCNTR_EL0 },
 	/* PMXEVTYPER_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
 	  access_pmu_evtyper },
 	/* PMXEVCNTR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
-	  trap_raz_wi },
+	  access_pmu_evcntr },
 	/* PMUSERENR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
 	  trap_raz_wi },
@@ -797,6 +846,38 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
 	  NULL, reset_unknown, TPIDRRO_EL0 },
 
+	/* PMEVCNTRn_EL0 */
+	PMU_PMEVCNTR_EL0(0),
+	PMU_PMEVCNTR_EL0(1),
+	PMU_PMEVCNTR_EL0(2),
+	PMU_PMEVCNTR_EL0(3),
+	PMU_PMEVCNTR_EL0(4),
+	PMU_PMEVCNTR_EL0(5),
+	PMU_PMEVCNTR_EL0(6),
+	PMU_PMEVCNTR_EL0(7),
+	PMU_PMEVCNTR_EL0(8),
+	PMU_PMEVCNTR_EL0(9),
+	PMU_PMEVCNTR_EL0(10),
+	PMU_PMEVCNTR_EL0(11),
+	PMU_PMEVCNTR_EL0(12),
+	PMU_PMEVCNTR_EL0(13),
+	PMU_PMEVCNTR_EL0(14),
+	PMU_PMEVCNTR_EL0(15),
+	PMU_PMEVCNTR_EL0(16),
+	PMU_PMEVCNTR_EL0(17),
+	PMU_PMEVCNTR_EL0(18),
+	PMU_PMEVCNTR_EL0(19),
+	PMU_PMEVCNTR_EL0(20),
+	PMU_PMEVCNTR_EL0(21),
+	PMU_PMEVCNTR_EL0(22),
+	PMU_PMEVCNTR_EL0(23),
+	PMU_PMEVCNTR_EL0(24),
+	PMU_PMEVCNTR_EL0(25),
+	PMU_PMEVCNTR_EL0(26),
+	PMU_PMEVCNTR_EL0(27),
+	PMU_PMEVCNTR_EL0(28),
+	PMU_PMEVCNTR_EL0(29),
+	PMU_PMEVCNTR_EL0(30),
 	/* PMEVTYPERn_EL0 */
 	PMU_PMEVTYPER_EL0(0),
 	PMU_PMEVTYPER_EL0(1),
@@ -1025,6 +1106,13 @@ static const struct sys_reg_desc cp14_64_regs[] = {
 	{ Op1( 0), CRm( 2), .access = trap_raz_wi },
 };
 
+/* Macro to expand the PMEVCNTRn register */
+#define PMU_PMEVCNTR(n)							\
+	/* PMEVCNTRn */							\
+	{ Op1(0), CRn(0b1110),						\
+	  CRm((0b1000 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)),		\
+	  access_pmu_evcntr }
+
 /* Macro to expand the PMEVTYPERn register */
 #define PMU_PMEVTYPER(n)						\
 	/* PMEVTYPERn */						\
@@ -1067,9 +1155,9 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
-	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), access_pmu_evcntr },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_evtyper },
-	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_evcntr },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), trap_raz_wi },
@@ -1084,6 +1172,38 @@ static const struct sys_reg_desc cp15_regs[] = {
 
 	{ Op1( 0), CRn(13), CRm( 0), Op2( 1), access_vm_reg, NULL, c13_CID },
 
+	/* PMEVCNTRn */
+	PMU_PMEVCNTR(0),
+	PMU_PMEVCNTR(1),
+	PMU_PMEVCNTR(2),
+	PMU_PMEVCNTR(3),
+	PMU_PMEVCNTR(4),
+	PMU_PMEVCNTR(5),
+	PMU_PMEVCNTR(6),
+	PMU_PMEVCNTR(7),
+	PMU_PMEVCNTR(8),
+	PMU_PMEVCNTR(9),
+	PMU_PMEVCNTR(10),
+	PMU_PMEVCNTR(11),
+	PMU_PMEVCNTR(12),
+	PMU_PMEVCNTR(13),
+	PMU_PMEVCNTR(14),
+	PMU_PMEVCNTR(15),
+	PMU_PMEVCNTR(16),
+	PMU_PMEVCNTR(17),
+	PMU_PMEVCNTR(18),
+	PMU_PMEVCNTR(19),
+	PMU_PMEVCNTR(20),
+	PMU_PMEVCNTR(21),
+	PMU_PMEVCNTR(22),
+	PMU_PMEVCNTR(23),
+	PMU_PMEVCNTR(24),
+	PMU_PMEVCNTR(25),
+	PMU_PMEVCNTR(26),
+	PMU_PMEVCNTR(27),
+	PMU_PMEVCNTR(28),
+	PMU_PMEVCNTR(29),
+	PMU_PMEVCNTR(30),
 	/* PMEVTYPERn */
 	PMU_PMEVTYPER(0),
 	PMU_PMEVTYPER(1),
@@ -1122,6 +1242,7 @@ static const struct sys_reg_desc cp15_regs[] = {
 
 static const struct sys_reg_desc cp15_64_regs[] = {
 	{ Op1( 0), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR0 },
+	{ Op1( 0), CRn( 0), CRm( 9), Op2( 0), access_pmu_evcntr },
 	{ Op1( 0), CRn( 0), CRm(12), Op2( 0), access_gic_sgi },
 	{ Op1( 1), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR1 },
 };
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 10/21] KVM: ARM64: Add access handler for PMCNTENSET and PMCNTENCLR register
@ 2016-01-27  3:51   ` Shannon Zhao
  0 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

Since the reset value of PMCNTENSET and PMCNTENCLR is UNKNOWN, use
reset_unknown as their reset handler. Add a handler to emulate writes to
the PMCNTENSET and PMCNTENCLR registers.

When writing to PMCNTENSET, call perf_event_enable to enable the perf
event. When writing to PMCNTENCLR, call perf_event_disable to disable
the perf event.
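
Both registers are emulated as views of a single enable mask, which is
why the PMCNTENCLR descriptor below reuses PMCNTENSET_EL0 as its backing
register. A minimal sketch of the idiom (illustrative only):

	/* One backing mask, two register views. */
	static u64 cnten_mask;			/* PMCNTENSET_EL0 state */

	static void pmcntenset_write(u64 val)	{ cnten_mask |=  val; }
	static void pmcntenclr_write(u64 val)	{ cnten_mask &= ~val; }
	static u64  pmcnten_read(void)		{ return cnten_mask; }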

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 35 +++++++++++++++++++++++---
 include/kvm/arm_pmu.h     |  9 +++++++
 virt/kvm/arm/pmu.c        | 63 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 103 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 6a50262..d43a9a4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -603,6 +603,33 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			   const struct sys_reg_desc *r)
+{
+	u64 val, mask;
+
+	if (!kvm_arm_pmu_v3_ready(vcpu))
+		return trap_raz_wi(vcpu, p, r);
+
+	mask = kvm_pmu_valid_counter_mask(vcpu);
+	if (p->is_write) {
+		val = p->regval & mask;
+		if (r->Op2 & 0x1) {
+			/* accessing PMCNTENSET_EL0 */
+			vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
+			kvm_pmu_enable_counter(vcpu, val);
+		} else {
+			/* accessing PMCNTENCLR_EL0 */
+			vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
+			kvm_pmu_disable_counter(vcpu, val);
+		}
+	} else {
+		p->regval = vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
+	}
+
+	return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -804,10 +831,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  access_pmcr, reset_pmcr, },
 	/* PMCNTENSET_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
-	  trap_raz_wi },
+	  access_pmcnten, reset_unknown, PMCNTENSET_EL0 },
 	/* PMCNTENCLR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
-	  trap_raz_wi },
+	  access_pmcnten, NULL, PMCNTENSET_EL0 },
 	/* PMOVSCLR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
 	  trap_raz_wi },
@@ -1149,8 +1176,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 
 	/* PMU */
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmcnten },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmcnten },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index ee4b15c..a7e5485 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -37,6 +37,9 @@ struct kvm_pmu {
 
 #define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
+u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx);
 #else
@@ -49,6 +52,12 @@ static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
 {
 	return 0;
 }
+static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
 						  u64 data, u64 select_idx) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 673ec55..0873977 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -68,6 +68,69 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 	}
 }
 
+u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
+{
+	u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMCR_N_SHIFT;
+
+	val &= ARMV8_PMCR_N_MASK;
+	return GENMASK(val - 1, 0) | BIT(ARMV8_CYCLE_IDX);
+}
+
+/**
+ * kvm_pmu_enable_counter - enable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCNTENSET register
+ *
+ * Call perf_event_enable to start counting the perf event
+ */
+void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
+{
+	int i;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc;
+
+	if (!(vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) || !val)
+		return;
+
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+		if (!(val & BIT(i)))
+			continue;
+
+		pmc = &pmu->pmc[i];
+		if (pmc->perf_event) {
+			perf_event_enable(pmc->perf_event);
+			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
+				kvm_debug("fail to enable perf event\n");
+		}
+	}
+}
+
+/**
+ * kvm_pmu_disable_counter - disable selected PMU counter
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCNTENCLR register
+ *
+ * Call perf_event_disable to stop counting the perf event
+ */
+void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
+{
+	int i;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc;
+
+	if (!val)
+		return;
+
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+		if (!(val & BIT(i)))
+			continue;
+
+		pmc = &pmu->pmc[i];
+		if (pmc->perf_event)
+			perf_event_disable(pmc->perf_event);
+	}
+}
+
 static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
 {
 	return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &&
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 11/21] KVM: ARM64: Add access handler for PMINTENSET and PMINTENCLR register
@ 2016-01-27  3:51   ` Shannon Zhao
  0 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

Since the reset value of PMINTENSET and PMINTENCLR is UNKNOWN, use
reset_unknown as their reset handler. Add a handler to emulate writes to
the PMINTENSET and PMINTENCLR registers.
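
As with the PMCNTEN pair, writes are filtered through
kvm_pmu_valid_counter_mask() so that bits for unimplemented counters are
ignored. A hypothetical worked example, assuming PMCR_EL0.N reports 4
event counters and taking ARMV8_CYCLE_IDX as 31:

	mask = GENMASK(3, 0) | BIT(31)	/* 0x0000000f | 0x80000000 */
	     = 0x8000000f
	/* A guest write of 0xffffffff to PMINTENSET_EL1 then only
	 * latches bits 0-3 and bit 31. */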

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 32 ++++++++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index d43a9a4..41d4bcd 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -630,6 +630,30 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			   const struct sys_reg_desc *r)
+{
+	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+
+	if (!kvm_arm_pmu_v3_ready(vcpu))
+		return trap_raz_wi(vcpu, p, r);
+
+	if (p->is_write) {
+		if (r->Op2 & 0x1)
+			/* accessing PMINTENSET_EL1 */
+			vcpu_sys_reg(vcpu, PMINTENSET_EL1) |= (p->regval
+							       & mask);
+		else
+			/* accessing PMINTENCLR_EL1 */
+			vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~(p->regval
+								& mask);
+	} else {
+		p->regval = vcpu_sys_reg(vcpu, PMINTENSET_EL1) & mask;
+	}
+
+	return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -788,10 +812,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
 	/* PMINTENSET_EL1 */
 	{ Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b001),
-	  trap_raz_wi },
+	  access_pminten, reset_unknown, PMINTENSET_EL1 },
 	/* PMINTENCLR_EL1 */
 	{ Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b010),
-	  trap_raz_wi },
+	  access_pminten, NULL, PMINTENSET_EL1 },
 
 	/* MAIR_EL1 */
 	{ Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
@@ -1186,8 +1210,8 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_evtyper },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_evcntr },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
-	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },
-	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pminten },
+	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pminten },
 
 	{ Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
 	{ Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 12/21] KVM: ARM64: Add access handler for PMOVSSET and PMOVSCLR register
  2016-01-27  3:51 ` Shannon Zhao
@ 2016-01-27  3:51   ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

Since the reset values of PMOVSSET and PMOVSCLR are UNKNOWN, use
reset_unknown as their reset handler. Add a handler to emulate writes to
the PMOVSSET and PMOVSCLR registers.

When a non-zero value is written to PMOVSSET and the corresponding
counter and its interrupt are enabled, kick this vcpu to synchronize the
PMU interrupt.
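
For illustration, the overflow interrupt is only considered pending for
a counter when its overflow bit, its enable bit in PMCNTENSET and its
interrupt enable bit in PMINTENSET are all set. A condensed sketch of
that condition (names assumed, not taken from the patch):

	/* Pending iff overflow, counter enable and irq enable agree. */
	static bool overflow_irq_pending(u64 ovs, u64 cnten, u64 inten,
					 u64 valid_mask)
	{
		return (ovs & cnten & inten & valid_mask) != 0;
	}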

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 29 ++++++++++++++++++++++++++---
 include/kvm/arm_pmu.h     |  2 ++
 virt/kvm/arm/pmu.c        | 30 ++++++++++++++++++++++++++++++
 3 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 41d4bcd..60b24ea 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -654,6 +654,28 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			 const struct sys_reg_desc *r)
+{
+	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
+
+	if (!kvm_arm_pmu_v3_ready(vcpu))
+		return trap_raz_wi(vcpu, p, r);
+
+	if (p->is_write) {
+		if (r->CRm & 0x2)
+			/* accessing PMOVSSET_EL0 */
+			kvm_pmu_overflow_set(vcpu, p->regval & mask);
+		else
+			/* accessing PMOVSCLR_EL0 */
+			vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= ~(p->regval & mask);
+	} else {
+		p->regval = vcpu_sys_reg(vcpu, PMOVSSET_EL0) & mask;
+	}
+
+	return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -861,7 +883,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  access_pmcnten, NULL, PMCNTENSET_EL0 },
 	/* PMOVSCLR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
-	  trap_raz_wi },
+	  access_pmovs, NULL, PMOVSSET_EL0 },
 	/* PMSWINC_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
 	  trap_raz_wi },
@@ -888,7 +910,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  trap_raz_wi },
 	/* PMOVSSET_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
-	  trap_raz_wi },
+	  access_pmovs, reset_unknown, PMOVSSET_EL0 },
 
 	/* TPIDR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b010),
@@ -1202,7 +1224,7 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmcnten },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmcnten },
-	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), access_pmovs },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
@@ -1212,6 +1234,7 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pminten },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pminten },
+	{ Op1( 0), CRn( 9), CRm(14), Op2( 3), access_pmovs },
 
 	{ Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
 	{ Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index a7e5485..4f8409d 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -40,6 +40,7 @@ u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx);
 #else
@@ -58,6 +59,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 }
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
 						  u64 data, u64 select_idx) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 0873977..ee75fac 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -131,6 +131,36 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
 	}
 }
 
+static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+{
+	u64 reg;
+
+	reg = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+	reg &= vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+	reg &= vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+	reg &= kvm_pmu_valid_counter_mask(vcpu);
+
+	return reg;
+}
+
+/**
+ * kvm_pmu_overflow_set - set PMU overflow interrupt
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMOVSSET register
+ */
+void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
+{
+	u64 reg;
+
+	if (val == 0)
+		return;
+
+	vcpu_sys_reg(vcpu, PMOVSSET_EL0) |= val;
+	reg = kvm_pmu_overflow_status(vcpu);
+	if (reg != 0)
+		kvm_vcpu_kick(vcpu);
+}
+
 static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
 {
 	return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &&
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 13/21] KVM: ARM64: Add access handler for PMSWINC register
  2016-01-27  3:51 ` Shannon Zhao
@ 2016-01-27  3:51   ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

Add an access handler which emulates writing and reading the PMSWINC
register, and add support for creating the software increment event.
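
To see how a guest would exercise this, a hedged guest-side sketch using
the kernel's write_sysreg()/read_sysreg() helpers (the programming order
is illustrative, not mandated by the patch):

	/* Program event counter 0 as SW_INCR (event 0x0), enable it,
	 * then bump it once by writing PMSWINC.
	 */
	write_sysreg(0, pmevtyper0_el0);		/* SW_INCR event    */
	write_sysreg(BIT(0), pmcntenset_el0);		/* enable counter 0 */
	write_sysreg(read_sysreg(pmcr_el0) | ARMV8_PMCR_E, pmcr_el0);
	write_sysreg(BIT(0), pmswinc_el0);		/* counter 0 += 1   */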

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/include/asm/pmu.h |  2 ++
 arch/arm64/kvm/sys_regs.c    | 20 +++++++++++++++++++-
 include/kvm/arm_pmu.h        |  2 ++
 virt/kvm/arm/pmu.c           | 33 +++++++++++++++++++++++++++++++++
 4 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
index 2588f9c..6f14a01 100644
--- a/arch/arm64/include/asm/pmu.h
+++ b/arch/arm64/include/asm/pmu.h
@@ -60,6 +60,8 @@
 #define	ARMV8_EVTYPE_MASK	0xc80003ff	/* Mask for writable bits */
 #define	ARMV8_EVTYPE_EVENT	0x3ff		/* Mask for EVENT bits */
 
+#define ARMV8_EVTYPE_EVENT_SW_INCR	0	/* Software increment event */
+
 /*
  * Event filters for PMUv3
  */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 60b24ea..f45c227 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -676,6 +676,23 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return true;
 }
 
+static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			   const struct sys_reg_desc *r)
+{
+	u64 mask;
+
+	if (!kvm_arm_pmu_v3_ready(vcpu))
+		return trap_raz_wi(vcpu, p, r);
+
+	if (p->is_write) {
+		mask = kvm_pmu_valid_counter_mask(vcpu);
+		kvm_pmu_software_increment(vcpu, p->regval & mask);
+		return true;
+	}
+
+	return false;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -886,7 +903,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	  access_pmovs, NULL, PMOVSSET_EL0 },
 	/* PMSWINC_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
-	  trap_raz_wi },
+	  access_pmswinc, reset_unknown, PMSWINC_EL0 },
 	/* PMSELR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
 	  access_pmselr, reset_unknown, PMSELR_EL0 },
@@ -1225,6 +1242,7 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmcnten },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmcnten },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), access_pmovs },
+	{ Op1( 0), CRn( 9), CRm(12), Op2( 4), access_pmswinc },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
 	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 4f8409d..caa706e 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -41,6 +41,7 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx);
 #else
@@ -60,6 +61,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
 						  u64 data, u64 select_idx) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index ee75fac..706c935 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -161,6 +161,35 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
 		kvm_vcpu_kick(vcpu);
 }
 
+/**
+ * kvm_pmu_software_increment - do software increment
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMSWINC register
+ */
+void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
+{
+	int i;
+	u64 type, enable, reg;
+
+	if (val == 0)
+		return;
+
+	for (i = 0; i < ARMV8_CYCLE_IDX; i++) {
+		if (!(val & BIT(i)))
+			continue;
+		type = vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
+		       & ARMV8_EVTYPE_EVENT;
+		enable = vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+		if ((type == ARMV8_EVTYPE_EVENT_SW_INCR) && (enable & BIT(i))) {
+			reg = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
+			reg = lower_32_bits(reg);
+			vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
+			if (!reg)
+				kvm_pmu_overflow_set(vcpu, BIT(i));
+		}
+	}
+}
+
 static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
 {
 	return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &&
@@ -189,6 +218,10 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	kvm_pmu_stop_counter(vcpu, pmc);
 	eventsel = data & ARMV8_EVTYPE_EVENT;
 
+	/* Software increment event doesn't need to be backed by a perf event */
+	if (eventsel == ARMV8_EVTYPE_EVENT_SW_INCR)
+		return;
+
 	memset(&attr, 0, sizeof(struct perf_event_attr));
 	attr.type = PERF_TYPE_RAW;
 	attr.size = sizeof(attr);
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 14/21] KVM: ARM64: Add helper to handle PMCR register bits
  2016-01-27  3:51 ` Shannon Zhao
@ 2016-01-27  3:51   ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

According to the ARMv8 spec, writing 1 to PMCR.E enables all counters
that are set in PMCNTENSET, while writing 0 to PMCR.E disables all
counters. Writing 1 to PMCR.P resets all event counters, excluding
PMCCNTR, to zero, and writing 1 to PMCR.C resets PMCCNTR to zero.
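
For instance, a guest that wants to zero all of its counters before a
measurement run could set PMCR.P and PMCR.C in a single write (a hedged
guest-side sketch):

	u64 pmcr = read_sysreg(pmcr_el0);
	/* P zeroes the event counters, C zeroes the cycle counter. */
	write_sysreg(pmcr | ARMV8_PMCR_P | ARMV8_PMCR_C, pmcr_el0);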

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/sys_regs.c |  1 +
 include/kvm/arm_pmu.h     |  2 ++
 virt/kvm/arm/pmu.c        | 42 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 45 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f45c227..eefc60a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -467,6 +467,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 		val &= ~ARMV8_PMCR_MASK;
 		val |= p->regval & ARMV8_PMCR_MASK;
 		vcpu_sys_reg(vcpu, PMCR_EL0) = val;
+		kvm_pmu_handle_pmcr(vcpu, val);
 	} else {
 		/* PMCR.P & PMCR.C are RAZ */
 		val = vcpu_sys_reg(vcpu, PMCR_EL0)
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index caa706e..5bed00c 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -42,6 +42,7 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx);
 #else
@@ -62,6 +63,7 @@ static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
 						  u64 data, u64 select_idx) {}
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 706c935..d411f3f 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -190,6 +190,48 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
 	}
 }
 
+/**
+ * kvm_pmu_handle_pmcr - handle PMCR register
+ * @vcpu: The vcpu pointer
+ * @val: the value guest writes to PMCR register
+ */
+void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	struct kvm_pmc *pmc;
+	u64 mask;
+	int i;
+
+	mask = kvm_pmu_valid_counter_mask(vcpu);
+	if (val & ARMV8_PMCR_E) {
+		kvm_pmu_enable_counter(vcpu,
+				     vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);
+	} else {
+		kvm_pmu_disable_counter(vcpu, mask);
+	}
+
+	if (val & ARMV8_PMCR_C) {
+		pmc = &pmu->pmc[ARMV8_CYCLE_IDX];
+		if (pmc->perf_event)
+			local64_set(&pmc->perf_event->count, 0);
+		vcpu_sys_reg(vcpu, PMCCNTR_EL0) = 0;
+	}
+
+	if (val & ARMV8_PMCR_P) {
+		for (i = 0; i < ARMV8_CYCLE_IDX; i++) {
+			pmc = &pmu->pmc[i];
+			if (pmc->perf_event)
+				local64_set(&pmc->perf_event->count, 0);
+			vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = 0;
+		}
+	}
+
+	if (val & ARMV8_PMCR_LC) {
+		pmc = &pmu->pmc[ARMV8_CYCLE_IDX];
+		pmc->bitmask = 0xffffffffffffffffUL;
+	}
+}
+
 static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
 {
 	return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &&
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 15/21] KVM: ARM64: Add access handler for PMUSERENR register
  2016-01-27  3:51 ` Shannon Zhao
@ 2016-01-27  3:51   ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

This register resets as UNKNOWN in 64-bit mode while it resets as zero
in 32-bit mode. Here we choose to reset it to zero for consistency.

PMUSERENR_EL0 holds bits which decide whether PMU registers can be
accessed from EL0. Add some check helpers to handle such accesses.

When these bits are zero, only a read of PMUSERENR traps to EL2;
a write of PMUSERENR or a read/write of any other PMU register traps to
EL1 rather than EL2 when HCR.TGE==0. With the current KVM configuration
(HCR.TGE==0) there is no way to receive these traps. Here we write 0xf
to the physical PMUSERENR register on VM entry, so that all PMU accesses
from EL0 trap to EL2. Within the register access handler we check the
real value of the guest's PMUSERENR register to decide whether the
access is allowed. If not, return false to inject an UND into the guest.
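
The check helpers all share the same shape: an access from EL0 is
allowed when either the relevant PMUSERENR bit or the global EN bit is
set, or the vcpu is running privileged. A condensed sketch (illustrative
only; 'priv' stands in for vcpu_mode_priv()):

	static bool el0_access_allowed(u64 pmuserenr, u64 bits, bool priv)
	{
		/* e.g. bits == ARMV8_USERENR_CR for the cycle counter */
		return priv || (pmuserenr & (bits | ARMV8_USERENR_EN));
	}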

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/include/asm/pmu.h |   9 ++++
 arch/arm64/kvm/hyp/hyp.h     |   1 +
 arch/arm64/kvm/hyp/switch.c  |   3 ++
 arch/arm64/kvm/sys_regs.c    | 100 ++++++++++++++++++++++++++++++++++++++++---
 4 files changed, 107 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
index 6f14a01..eb3dc88 100644
--- a/arch/arm64/include/asm/pmu.h
+++ b/arch/arm64/include/asm/pmu.h
@@ -69,4 +69,13 @@
 #define	ARMV8_EXCLUDE_EL0	(1 << 30)
 #define	ARMV8_INCLUDE_EL2	(1 << 27)
 
+/*
+ * PMUSERENR: user enable reg
+ */
+#define ARMV8_USERENR_MASK	0xf		/* Mask for writable bits */
+#define ARMV8_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
+#define ARMV8_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
+#define ARMV8_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
+#define ARMV8_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
+
 #endif /* __ASM_PMU_H */
diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
index fb27517..9a28b7bd8 100644
--- a/arch/arm64/kvm/hyp/hyp.h
+++ b/arch/arm64/kvm/hyp/hyp.h
@@ -22,6 +22,7 @@
 #include <linux/kvm_host.h>
 #include <asm/kvm_mmu.h>
 #include <asm/sysreg.h>
+#include <asm/pmu.h>
 
 #define __hyp_text __section(.hyp.text) notrace
 
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index ca8f5a5..1a7d679 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -37,6 +37,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
 	write_sysreg(1 << 15, hstr_el2);
 	write_sysreg(CPTR_EL2_TTA | CPTR_EL2_TFP, cptr_el2);
+	/* Make sure we trap PMU access from EL0 to EL2 */
+	write_sysreg(ARMV8_USERENR_MASK, pmuserenr_el0);
 	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
 }
 
@@ -45,6 +47,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
 	write_sysreg(HCR_RW, hcr_el2);
 	write_sysreg(0, hstr_el2);
 	write_sysreg(read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK, mdcr_el2);
+	write_sysreg(0, pmuserenr_el0);
 	write_sysreg(0, cptr_el2);
 }
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index eefc60a..084e527 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -453,6 +453,37 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	vcpu_sys_reg(vcpu, PMCR_EL0) = val;
 }
 
+static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
+{
+	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+	return !((reg & ARMV8_USERENR_EN) || vcpu_mode_priv(vcpu));
+}
+
+static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
+{
+	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+	return !((reg & (ARMV8_USERENR_SW | ARMV8_USERENR_EN))
+		 || vcpu_mode_priv(vcpu));
+}
+
+static bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu)
+{
+	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+	return !((reg & (ARMV8_USERENR_CR | ARMV8_USERENR_EN))
+		 || vcpu_mode_priv(vcpu));
+}
+
+static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
+{
+	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
+
+	return !((reg & (ARMV8_USERENR_ER | ARMV8_USERENR_EN))
+		 || vcpu_mode_priv(vcpu));
+}
+
 static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 			const struct sys_reg_desc *r)
 {
@@ -461,6 +492,9 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (!kvm_arm_pmu_v3_ready(vcpu))
 		return trap_raz_wi(vcpu, p, r);
 
+	if (pmu_access_el0_disabled(vcpu))
+		return false;
+
 	if (p->is_write) {
 		/* Only update writeable bits of PMCR */
 		val = vcpu_sys_reg(vcpu, PMCR_EL0);
@@ -484,6 +518,9 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (!kvm_arm_pmu_v3_ready(vcpu))
 		return trap_raz_wi(vcpu, p, r);
 
+	if (pmu_access_event_counter_el0_disabled(vcpu))
+		return false;
+
 	if (p->is_write)
 		vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
 	else
@@ -501,7 +538,7 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (!kvm_arm_pmu_v3_ready(vcpu))
 		return trap_raz_wi(vcpu, p, r);
 
-	if (p->is_write)
+	if (p->is_write || pmu_access_el0_disabled(vcpu))
 		return false;
 
 	if (!(p->Op2 & 1))
@@ -534,6 +571,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (!kvm_arm_pmu_v3_ready(vcpu))
 		return trap_raz_wi(vcpu, p, r);
 
+	if (pmu_access_el0_disabled(vcpu))
+		return false;
+
 	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
 		/* PMXEVTYPER_EL0 */
 		idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
@@ -574,11 +614,17 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 	if (r->CRn == 9 && r->CRm == 13) {
 		if (r->Op2 == 2) {
 			/* PMXEVCNTR_EL0 */
+			if (pmu_access_event_counter_el0_disabled(vcpu))
+				return false;
+
 			idx = vcpu_sys_reg(vcpu, PMSELR_EL0)
 			      & ARMV8_COUNTER_MASK;
 			reg = PMEVCNTR0_EL0 + idx;
 		} else if (r->Op2 == 0) {
 			/* PMCCNTR_EL0 */
+			if (pmu_access_cycle_counter_el0_disabled(vcpu))
+				return false;
+
 			idx = ARMV8_CYCLE_IDX;
 			reg = PMCCNTR_EL0;
 		} else {
@@ -586,6 +632,9 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 		}
 	} else if (r->CRn == 14 && (r->CRm & 12) == 8) {
 		/* PMEVCNTRn_EL0 */
+		if (pmu_access_event_counter_el0_disabled(vcpu))
+			return false;
+
 		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
 		reg = PMEVCNTR0_EL0 + idx;
 	} else {
@@ -596,10 +645,14 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
 		return false;
 
 	val = kvm_pmu_get_counter_value(vcpu, idx);
-	if (p->is_write)
+	if (p->is_write) {
+		if (pmu_access_el0_disabled(vcpu))
+			return false;
+
 		vcpu_sys_reg(vcpu, reg) += (s64)p->regval - val;
-	else
+	} else {
 		p->regval = val;
+	}
 
 	return true;
 }
@@ -612,6 +665,9 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (!kvm_arm_pmu_v3_ready(vcpu))
 		return trap_raz_wi(vcpu, p, r);
 
+	if (pmu_access_el0_disabled(vcpu))
+		return false;
+
 	mask = kvm_pmu_valid_counter_mask(vcpu);
 	if (p->is_write) {
 		val = p->regval & mask;
@@ -639,6 +695,9 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (!kvm_arm_pmu_v3_ready(vcpu))
 		return trap_raz_wi(vcpu, p, r);
 
+	if (!vcpu_mode_priv(vcpu))
+		return false;
+
 	if (p->is_write) {
 		if (r->Op2 & 0x1)
 			/* accessing PMINTENSET_EL1 */
@@ -663,6 +722,9 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (!kvm_arm_pmu_v3_ready(vcpu))
 		return trap_raz_wi(vcpu, p, r);
 
+	if (pmu_access_el0_disabled(vcpu))
+		return false;
+
 	if (p->is_write) {
 		if (r->CRm & 0x2)
 			/* accessing PMOVSSET_EL0 */
@@ -685,6 +747,9 @@ static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	if (!kvm_arm_pmu_v3_ready(vcpu))
 		return trap_raz_wi(vcpu, p, r);
 
+	if (pmu_write_swinc_el0_disabled(vcpu))
+		return false;
+
 	if (p->is_write) {
 		mask = kvm_pmu_valid_counter_mask(vcpu);
 		kvm_pmu_software_increment(vcpu, p->regval & mask);
@@ -694,6 +759,26 @@ static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
 	return false;
 }
 
+static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+			     const struct sys_reg_desc *r)
+{
+	if (!kvm_arm_pmu_v3_ready(vcpu))
+		return trap_raz_wi(vcpu, p, r);
+
+	if (p->is_write) {
+		if (!vcpu_mode_priv(vcpu))
+			return false;
+
+		vcpu_sys_reg(vcpu, PMUSERENR_EL0) = p->regval
+						    & ARMV8_USERENR_MASK;
+	} else {
+		p->regval = vcpu_sys_reg(vcpu, PMUSERENR_EL0)
+			    & ARMV8_USERENR_MASK;
+	}
+
+	return true;
+}
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -923,9 +1008,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	/* PMXEVCNTR_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
 	  access_pmu_evcntr },
-	/* PMUSERENR_EL0 */
+	/* PMUSERENR_EL0
+	 * This register resets as unknown in 64bit mode while it resets as zero
+	 * in 32bit mode. Here we choose to reset it as zero for consistency.
+	 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
-	  trap_raz_wi },
+	  access_pmuserenr, reset_val, PMUSERENR_EL0, 0 },
 	/* PMOVSSET_EL0 */
 	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
 	  access_pmovs, reset_unknown, PMOVSSET_EL0 },
@@ -1250,7 +1338,7 @@ static const struct sys_reg_desc cp15_regs[] = {
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), access_pmu_evcntr },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_evtyper },
 	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_evcntr },
-	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
+	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), access_pmuserenr },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pminten },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pminten },
 	{ Op1( 0), CRn( 9), CRm(14), Op2( 3), access_pmovs },
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 16/21] KVM: ARM64: Add PMU overflow interrupt routing
  2016-01-27  3:51 ` Shannon Zhao
@ 2016-01-27  3:51   ` Shannon Zhao
  0 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

When calling perf_event_create_kernel_counter to create a perf_event,
assign an overflow handler. Then, when the perf event overflows, set
the corresponding bit of the guest PMOVSSET register. If this counter
is enabled and its interrupt is enabled as well, kick the vcpu to sync
the interrupt.
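
The overflow handler only receives the kvm_pmc as its context, so the
vcpu has to be recovered from it. This works because pmc->idx matches
the counter's position in the pmu->pmc[] array; the pointer walk in
kvm_pmc_to_vcpu() reduces to the commented sketch below:

	/* pmc sits at pmu->pmc[pmc->idx], so stepping back pmc->idx
	 * elements yields &pmu->pmc[0]; container_of() then recovers the
	 * pmu, the enclosing vcpu_arch, and finally the vcpu itself.
	 */
	pmc -= pmc->idx;				/* -> &pmu->pmc[0] */
	pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
	vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
	vcpu = container_of(vcpu_arch, struct kvm_vcpu, arch);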

On VM entry, if a counter has overflowed, inject the interrupt with the
level set to 1; otherwise, inject it with the level set to 0.
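
The level passed to kvm_vgic_inject_irq() comes from
kvm_pmu_overflow_status(), added earlier in the series with the
overflow registers. Assuming that helper follows the architectural rule
that an overflow is only pending while the counter, its interrupt and
the PMU as a whole are all enabled, it reduces to roughly the following
sketch (not a verbatim copy):

	static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
	{
		u64 reg = 0;

		/* Nothing is pending while PMCR_EL0.E is clear. */
		if (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) {
			reg = vcpu_sys_reg(vcpu, PMOVSSET_EL0);
			reg &= vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
			reg &= vcpu_sys_reg(vcpu, PMINTENSET_EL1);
		}

		return reg;
	}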

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/kvm/arm.c    |  2 ++
 include/kvm/arm_pmu.h |  2 ++
 virt/kvm/arm/pmu.c    | 50 +++++++++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index dda1959..f54264c 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -28,6 +28,7 @@
 #include <linux/sched.h>
 #include <linux/kvm.h>
 #include <trace/events/kvm.h>
+#include <kvm/arm_pmu.h>
 
 #define CREATE_TRACE_POINTS
 #include "trace.h"
@@ -577,6 +578,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 * non-preemptible context.
 		 */
 		preempt_disable();
+		kvm_pmu_flush_hwstate(vcpu);
 		kvm_timer_flush_hwstate(vcpu);
 		kvm_vgic_flush_hwstate(vcpu);
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 5bed00c..fbc42fd 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -41,6 +41,7 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu);
 void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
@@ -62,6 +63,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
+static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index d411f3f..644f2dc 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -21,6 +21,7 @@
 #include <linux/perf_event.h>
 #include <asm/kvm_emulate.h>
 #include <kvm/arm_pmu.h>
+#include <kvm/arm_vgic.h>
 
 /**
  * kvm_pmu_get_counter_value - get PMU counter value
@@ -162,6 +163,52 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
 }
 
 /**
+ * kvm_pmu_flush_hwstate - flush pmu state to cpu
+ * @vcpu: The vcpu pointer
+ *
+ * Inject virtual PMU IRQ if IRQ is pending for this cpu.
+ */
+void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu)
+{
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+	u64 overflow;
+
+	if (!kvm_arm_pmu_v3_ready(vcpu))
+		return;
+
+	if (!(vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E))
+		return;
+
+	overflow = kvm_pmu_overflow_status(vcpu);
+	kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, pmu->irq_num, !!overflow);
+}
+
+static inline struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
+{
+	struct kvm_pmu *pmu;
+	struct kvm_vcpu_arch *vcpu_arch;
+
+	pmc -= pmc->idx;
+	pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
+	vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
+	return container_of(vcpu_arch, struct kvm_vcpu, arch);
+}
+
+/**
+ * When perf event overflows, call kvm_pmu_overflow_set to set overflow status.
+ */
+static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
+				  struct perf_sample_data *data,
+				  struct pt_regs *regs)
+{
+	struct kvm_pmc *pmc = perf_event->overflow_handler_context;
+	struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
+	int idx = pmc->idx;
+
+	kvm_pmu_overflow_set(vcpu, BIT(idx));
+}
+
+/**
  * kvm_pmu_software_increment - do software increment
  * @vcpu: The vcpu pointer
  * @val: the value guest writes to PMSWINC register
@@ -279,7 +326,8 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 	/* The initial sample period (overflow count) of an event. */
 	attr.sample_period = (-counter) & pmc->bitmask;
 
-	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
+	event = perf_event_create_kernel_counter(&attr, -1, current,
+						 kvm_pmu_perf_overflow, pmc);
 	if (IS_ERR(event)) {
 		pr_err_once("kvm: pmu event creation failed %ld\n",
 			    PTR_ERR(event));
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 17/21] KVM: ARM64: Reset PMU state when resetting vcpu
  2016-01-27  3:51 ` Shannon Zhao
@ 2016-01-27  3:51   ` Shannon Zhao
  0 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

When resetting a vcpu, KVM needs to reset the PMU state to its initial status.
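
For reference, this relies on the per-counter state added earlier in
the series; the kvm_pmc fields touched here are assumed to look roughly
like:

	struct kvm_pmc {
		u8 idx;		/* index into pmu->pmc[] */
		u64 bitmask;	/* counter width mask; 32 bits after reset */
		struct perf_event *perf_event;	/* backing event, if any */
	};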

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kvm/reset.c |  3 +++
 include/kvm/arm_pmu.h  |  2 ++
 virt/kvm/arm/pmu.c     | 17 +++++++++++++++++
 3 files changed, 22 insertions(+)

diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index f34745c..dfbce78 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -120,6 +120,9 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	/* Reset system registers */
 	kvm_reset_sys_regs(vcpu);
 
+	/* Reset PMU */
+	kvm_pmu_vcpu_reset(vcpu);
+
 	/* Reset timer */
 	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
 }
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index fbc42fd..4394e0c 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -38,6 +38,7 @@ struct kvm_pmu {
 #define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
+void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
@@ -60,6 +61,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
 	return 0;
 }
+static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 644f2dc..8142921 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -69,6 +69,23 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
 	}
 }
 
+/**
+ * kvm_pmu_vcpu_reset - reset pmu state for cpu
+ * @vcpu: The vcpu pointer
+ *
+ */
+void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
+{
+	int i;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+		kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
+		pmu->pmc[i].idx = i;
+		pmu->pmc[i].bitmask = 0xffffffffUL;
+	}
+}
+
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
 	u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMCR_N_SHIFT;
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 18/21] KVM: ARM64: Free perf event of PMU when destroying vcpu
  2016-01-27  3:51 ` Shannon Zhao
@ 2016-01-27  3:51   ` Shannon Zhao
  0 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

When KVM frees a VCPU, it needs to free the perf_events of its PMU.
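
For illustration only (not part of this patch), the disable/release/
clear sequence used below could be factored into a small helper; the
only assumption is that pmc->perf_event is NULL whenever no event is
active:

	/* Hypothetical helper mirroring the loop body in this patch. */
	static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
	{
		if (pmc->perf_event) {
			perf_event_disable(pmc->perf_event);
			perf_event_release_kernel(pmc->perf_event);
			pmc->perf_event = NULL;
		}
	}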

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/kvm/arm.c    |  1 +
 include/kvm/arm_pmu.h |  2 ++
 virt/kvm/arm/pmu.c    | 21 +++++++++++++++++++++
 3 files changed, 24 insertions(+)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index f54264c..d2c2cc3 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -266,6 +266,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
 	kvm_mmu_free_memory_caches(vcpu);
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_vgic_vcpu_destroy(vcpu);
+	kvm_pmu_vcpu_destroy(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, vcpu);
 }
 
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 4394e0c..d90fc65 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -39,6 +39,7 @@ struct kvm_pmu {
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu);
 void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
@@ -62,6 +63,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 	return 0;
 }
 static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {}
+static inline void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {}
 static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 8142921..45d4d91 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -86,6 +86,27 @@ void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu)
 	}
 }
 
+/**
+ * kvm_pmu_vcpu_destroy - free perf event of PMU for cpu
+ * @vcpu: The vcpu pointer
+ *
+ */
+void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu)
+{
+	int i;
+	struct kvm_pmu *pmu = &vcpu->arch.pmu;
+
+	for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
+		struct kvm_pmc *pmc = &pmu->pmc[i];
+
+		if (pmc->perf_event) {
+			perf_event_disable(pmc->perf_event);
+			perf_event_release_kernel(pmc->perf_event);
+			pmc->perf_event = NULL;
+		}
+	}
+}
+
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
 {
 	u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMCR_N_SHIFT;
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 19/21] KVM: ARM64: Add a new feature bit for PMUv3
  2016-01-27  3:51 ` Shannon Zhao
  (?)
@ 2016-01-27  3:51   ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

To support guest PMUv3, use one bit of the VCPU INIT feature array.
Initialize the PMU when initializing the vcpu, provided that bit is set
and the PMU overflow interrupt has been configured.
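
For example (a minimal userspace sketch; the kvmfd/vmfd/vcpufd names are
placeholders and error handling is omitted), the feature can be requested
through the usual vcpu init path once the capability is advertised:

	struct kvm_vcpu_init init;

	if (ioctl(kvmfd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PMU_V3) <= 0)
		return;	/* PMUv3 not supported on this host */

	memset(&init, 0, sizeof(init));
	ioctl(vmfd, KVM_ARM_PREFERRED_TARGET, &init);	/* pick a target */
	init.features[0] |= 1 << KVM_ARM_VCPU_PMU_V3;	/* request PMUv3 */
	ioctl(vcpufd, KVM_ARM_VCPU_INIT, &init);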

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Acked-by: Peter Maydell <peter.maydell@linaro.org>
---
CC: Peter Maydell <peter.maydell@linaro.org>
---
 Documentation/virtual/kvm/api.txt | 2 ++
 arch/arm64/include/asm/kvm_host.h | 2 +-
 arch/arm64/include/uapi/asm/kvm.h | 1 +
 arch/arm64/kvm/reset.c            | 3 +++
 include/kvm/arm_pmu.h             | 2 ++
 include/uapi/linux/kvm.h          | 1 +
 virt/kvm/arm/pmu.c                | 9 +++++++++
 7 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 053f613..e51fa04 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2577,6 +2577,8 @@ Possible features:
 	  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
 	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
 	  Depends on KVM_CAP_ARM_PSCI_0_2.
+	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
+	  Depends on KVM_CAP_ARM_PMU_V3.
 
 
 4.83 KVM_ARM_PREFERRED_TARGET
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 6bab7fb..cb220b7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -40,7 +40,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 3
+#define KVM_VCPU_MAX_FEATURES 4
 
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 2d4ca4b..6aedbe3 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -94,6 +94,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_POWER_OFF		0 /* CPU is started in OFF state */
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
+#define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index dfbce78..cf4f28a 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -77,6 +77,9 @@ int kvm_arch_dev_ioctl_check_extension(long ext)
 	case KVM_CAP_GUEST_DEBUG_HW_WPS:
 		r = get_num_wrps();
 		break;
+	case KVM_CAP_ARM_PMU_V3:
+		r = kvm_arm_support_pmu_v3();
+		break;
 	case KVM_CAP_SET_GUEST_DEBUG:
 		r = 1;
 		break;
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index d90fc65..fee86eb 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -48,6 +48,7 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx);
+bool kvm_arm_support_pmu_v3(void);
 #else
 struct kvm_pmu {
 };
@@ -72,6 +73,7 @@ static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
 						  u64 data, u64 select_idx) {}
+static inline bool kvm_arm_support_pmu_v3(void) { return false; }
 #endif
 
 #endif
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 9da9051..dc16d30 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -850,6 +850,7 @@ struct kvm_ppc_smmu_info {
 #define KVM_CAP_IOEVENTFD_ANY_LENGTH 122
 #define KVM_CAP_HYPERV_SYNIC 123
 #define KVM_CAP_S390_RI 124
+#define KVM_CAP_ARM_PMU_V3 125
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 45d4d91..05e9d7e 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -374,3 +374,12 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 
 	pmc->perf_event = event;
 }
+
+bool kvm_arm_support_pmu_v3(void)
+{
+	/* Check if HW_PERF_EVENTS are supported by checking the number of
+	 * hardware performance counters. This ensures that both a physical
+	 * PMU and the perf_event driver are present.
+	 */
+	return (perf_num_counters() > 0);
+}
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 20/21] KVM: ARM: Introduce per-vcpu kvm device controls
  2016-01-27  3:51 ` Shannon Zhao
  (?)
@ 2016-01-27  3:51   ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

In some cases we need to get/set attributes specific to a vcpu, and so
we need something other than ONE_REG.

Let's copy the KVM_DEVICE approach, and define the respective ioctls
for the vcpu file descriptor.
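
A rough sketch of the resulting userspace usage (the group/attr values
here are placeholders for whatever an architecture defines; error
handling omitted):

	int value = 0;			/* attribute payload, if any */
	struct kvm_device_attr attr = {
		.group	= 0,		/* arch-specific group */
		.attr	= 0,		/* attribute within that group */
		.addr	= (__u64)(unsigned long)&value,
	};

	if (!ioctl(vcpufd, KVM_HAS_DEVICE_ATTR, &attr))
		ioctl(vcpufd, KVM_SET_DEVICE_ATTR, &attr);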

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Acked-by: Peter Maydell <peter.maydell@linaro.org>
---
CC: Peter Maydell <peter.maydell@linaro.org>
---
 Documentation/virtual/kvm/api.txt          | 10 +++---
 Documentation/virtual/kvm/devices/vcpu.txt |  8 +++++
 arch/arm/kvm/arm.c                         | 55 ++++++++++++++++++++++++++++++
 arch/arm64/kvm/reset.c                     |  1 +
 include/uapi/linux/kvm.h                   |  1 +
 5 files changed, 71 insertions(+), 4 deletions(-)
 create mode 100644 Documentation/virtual/kvm/devices/vcpu.txt

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index e51fa04..3976645 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2507,8 +2507,9 @@ struct kvm_create_device {
 
 4.80 KVM_SET_DEVICE_ATTR/KVM_GET_DEVICE_ATTR
 
-Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device
-Type: device ioctl, vm ioctl
+Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device,
+  KVM_CAP_VCPU_ATTRIBUTES for vcpu device
+Type: device ioctl, vm ioctl, vcpu ioctl
 Parameters: struct kvm_device_attr
 Returns: 0 on success, -1 on error
 Errors:
@@ -2533,8 +2534,9 @@ struct kvm_device_attr {
 
 4.81 KVM_HAS_DEVICE_ATTR
 
-Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device
-Type: device ioctl, vm ioctl
+Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device,
+  KVM_CAP_VCPU_ATTRIBUTES for vcpu device
+Type: device ioctl, vm ioctl, vcpu ioctl
 Parameters: struct kvm_device_attr
 Returns: 0 on success, -1 on error
 Errors:
diff --git a/Documentation/virtual/kvm/devices/vcpu.txt b/Documentation/virtual/kvm/devices/vcpu.txt
new file mode 100644
index 0000000..3cc59c5
--- /dev/null
+++ b/Documentation/virtual/kvm/devices/vcpu.txt
@@ -0,0 +1,8 @@
+Generic vcpu interface
+====================================
+
+The virtual cpu "device" also accepts the ioctls KVM_SET_DEVICE_ATTR,
+KVM_GET_DEVICE_ATTR, and KVM_HAS_DEVICE_ATTR. The interface uses the same struct
+kvm_device_attr as other devices, but targets VCPU-wide settings and controls.
+
+The groups and attributes per virtual cpu, if any, are architecture specific.
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index d2c2cc3..34d7395 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -826,11 +826,51 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	return 0;
 }
 
+static int kvm_arm_vcpu_set_attr(struct kvm_vcpu *vcpu,
+				 struct kvm_device_attr *attr)
+{
+	int ret = -ENXIO;
+
+	switch (attr->group) {
+	default:
+		break;
+	}
+
+	return ret;
+}
+
+static int kvm_arm_vcpu_get_attr(struct kvm_vcpu *vcpu,
+				 struct kvm_device_attr *attr)
+{
+	int ret = -ENXIO;
+
+	switch (attr->group) {
+	default:
+		break;
+	}
+
+	return ret;
+}
+
+static int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu,
+				 struct kvm_device_attr *attr)
+{
+	int ret = -ENXIO;
+
+	switch (attr->group) {
+	default:
+		break;
+	}
+
+	return ret;
+}
+
 long kvm_arch_vcpu_ioctl(struct file *filp,
 			 unsigned int ioctl, unsigned long arg)
 {
 	struct kvm_vcpu *vcpu = filp->private_data;
 	void __user *argp = (void __user *)arg;
+	struct kvm_device_attr attr;
 
 	switch (ioctl) {
 	case KVM_ARM_VCPU_INIT: {
@@ -873,6 +913,21 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 			return -E2BIG;
 		return kvm_arm_copy_reg_indices(vcpu, user_list->reg);
 	}
+	case KVM_SET_DEVICE_ATTR: {
+		if (copy_from_user(&attr, argp, sizeof(attr)))
+			return -EFAULT;
+		return kvm_arm_vcpu_set_attr(vcpu, &attr);
+	}
+	case KVM_GET_DEVICE_ATTR: {
+		if (copy_from_user(&attr, argp, sizeof(attr)))
+			return -EFAULT;
+		return kvm_arm_vcpu_get_attr(vcpu, &attr);
+	}
+	case KVM_HAS_DEVICE_ATTR: {
+		if (copy_from_user(&attr, argp, sizeof(attr)))
+			return -EFAULT;
+		return kvm_arm_vcpu_has_attr(vcpu, &attr);
+	}
 	default:
 		return -EINVAL;
 	}
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index cf4f28a..9677bf0 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -81,6 +81,7 @@ int kvm_arch_dev_ioctl_check_extension(long ext)
 		r = kvm_arm_support_pmu_v3();
 		break;
 	case KVM_CAP_SET_GUEST_DEBUG:
+	case KVM_CAP_VCPU_ATTRIBUTES:
 		r = 1;
 		break;
 	default:
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index dc16d30..50f44a2 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -851,6 +851,7 @@ struct kvm_ppc_smmu_info {
 #define KVM_CAP_HYPERV_SYNIC 123
 #define KVM_CAP_S390_RI 124
 #define KVM_CAP_ARM_PMU_V3 125
+#define KVM_CAP_VCPU_ATTRIBUTES 126
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 21/21] KVM: ARM64: Add a new vcpu device control group for PMUv3
  2016-01-27  3:51 ` Shannon Zhao
  (?)
@ 2016-01-27  3:51   ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

To configure the virtual PMUv3 overflow interrupt number, we use the
vcpu kvm_device ioctl, encapsulating the KVM_ARM_VCPU_PMU_V3_IRQ
attribute within the KVM_ARM_VCPU_PMU_V3_CTRL group.

After configuring the PMUv3, call the vcpu ioctl with attribute
KVM_ARM_VCPU_PMU_V3_INIT to initialize the PMUv3.
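
Put together, userspace configuration could look roughly like this (the
irq value and vcpufd are placeholders; error handling omitted):

	int irq = 23;	/* PPI example; must fit the VM's GIC layout */
	struct kvm_device_attr attr = {
		.group	= KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr	= KVM_ARM_VCPU_PMU_V3_IRQ,
		.addr	= (__u64)(unsigned long)&irq,
	};

	ioctl(vcpufd, KVM_SET_DEVICE_ATTR, &attr);	/* set overflow irq */

	attr.attr = KVM_ARM_VCPU_PMU_V3_INIT;
	attr.addr = 0;
	ioctl(vcpufd, KVM_SET_DEVICE_ATTR, &attr);	/* then init the PMU */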

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Acked-by: Peter Maydell <peter.maydell@linaro.org>
---
CC: Peter Maydell <peter.maydell@linaro.org>
---
 Documentation/virtual/kvm/devices/vcpu.txt |  24 ++++++
 arch/arm/include/asm/kvm_host.h            |  15 ++++
 arch/arm/kvm/arm.c                         |   3 +
 arch/arm64/include/asm/kvm_host.h          |   6 ++
 arch/arm64/include/uapi/asm/kvm.h          |   5 ++
 arch/arm64/kvm/guest.c                     |  51 ++++++++++++
 include/kvm/arm_pmu.h                      |  23 ++++++
 virt/kvm/arm/pmu.c                         | 128 +++++++++++++++++++++++++++++
 8 files changed, 255 insertions(+)

diff --git a/Documentation/virtual/kvm/devices/vcpu.txt b/Documentation/virtual/kvm/devices/vcpu.txt
index 3cc59c5..d626237 100644
--- a/Documentation/virtual/kvm/devices/vcpu.txt
+++ b/Documentation/virtual/kvm/devices/vcpu.txt
@@ -6,3 +6,27 @@ KVM_GET_DEVICE_ATTR, and KVM_HAS_DEVICE_ATTR. The interface uses the same struct
 kvm_device_attr as other devices, but targets VCPU-wide settings and controls.
 
 The groups and attributes per virtual cpu, if any, are architecture specific.
+
+1. GROUP: KVM_ARM_VCPU_PMU_V3_CTRL
+Architectures: ARM64
+
+1.1. ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_IRQ
+Parameters: in kvm_device_attr.addr the address for PMU overflow interrupt
+Returns: -EBUSY: The PMU overflow interrupt is already set
+         -ENXIO: The overflow interrupt is not set when attempting to get it
+         -ENODEV: PMUv3 not supported
+         -EINVAL: Invalid PMU overflow interrupt number supplied
+
+A value describing the PMUv3 (Performance Monitor Unit v3) overflow interrupt
+number for this vcpu. This interrupt can be a PPI or an SPI, but the interrupt
+type must be the same for all vcpus of a VM. As a PPI, the interrupt number is
+the same for all vcpus, while as an SPI it must be different for each vcpu.
+
+1.2 ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_INIT
+Parameters: no additional parameter in kvm_device_attr.addr
+Returns: -ENODEV: PMUv3 not supported
+         -ENXIO: PMUv3 not properly configured as required prior to calling this
+                 attribute
+         -EBUSY: PMUv3 already initialized
+
+Request the initialization of the PMUv3.
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index f9f2779..6dd0992 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -242,5 +242,20 @@ static inline void kvm_arm_init_debug(void) {}
 static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
+static inline int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
+					     struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+static inline int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
+					     struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+static inline int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
+					     struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
 
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 34d7395..dc8644f 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -833,6 +833,7 @@ static int kvm_arm_vcpu_set_attr(struct kvm_vcpu *vcpu,
 
 	switch (attr->group) {
 	default:
+		ret = kvm_arm_vcpu_arch_set_attr(vcpu, attr);
 		break;
 	}
 
@@ -846,6 +847,7 @@ static int kvm_arm_vcpu_get_attr(struct kvm_vcpu *vcpu,
 
 	switch (attr->group) {
 	default:
+		ret = kvm_arm_vcpu_arch_get_attr(vcpu, attr);
 		break;
 	}
 
@@ -859,6 +861,7 @@ static int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu,
 
 	switch (attr->group) {
 	default:
+		ret = kvm_arm_vcpu_arch_has_attr(vcpu, attr);
 		break;
 	}
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index cb220b7..a855a30 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -359,5 +359,11 @@ void kvm_arm_init_debug(void);
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
+int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
+			       struct kvm_device_attr *attr);
+int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
+			       struct kvm_device_attr *attr);
+int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
+			       struct kvm_device_attr *attr);
 
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 6aedbe3..f209ea1 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -205,6 +205,11 @@ struct kvm_arch_memory_slot {
 #define KVM_DEV_ARM_VGIC_GRP_CTRL	4
 #define   KVM_DEV_ARM_VGIC_CTRL_INIT	0
 
+/* Device Control API on vcpu fd */
+#define KVM_ARM_VCPU_PMU_V3_CTRL	0
+#define   KVM_ARM_VCPU_PMU_V3_IRQ	0
+#define   KVM_ARM_VCPU_PMU_V3_INIT	1
+
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_TYPE_SHIFT		24
 #define KVM_ARM_IRQ_TYPE_MASK		0xff
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index fcb7788..dbe45c3 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -380,3 +380,54 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 	}
 	return 0;
 }
+
+int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
+			       struct kvm_device_attr *attr)
+{
+	int ret;
+
+	switch (attr->group) {
+	case KVM_ARM_VCPU_PMU_V3_CTRL:
+		ret = kvm_arm_pmu_v3_set_attr(vcpu, attr);
+		break;
+	default:
+		ret = -ENXIO;
+		break;
+	}
+
+	return ret;
+}
+
+int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
+			       struct kvm_device_attr *attr)
+{
+	int ret;
+
+	switch (attr->group) {
+	case KVM_ARM_VCPU_PMU_V3_CTRL:
+		ret = kvm_arm_pmu_v3_get_attr(vcpu, attr);
+		break;
+	default:
+		ret = -ENXIO;
+		break;
+	}
+
+	return ret;
+}
+
+int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
+			       struct kvm_device_attr *attr)
+{
+	int ret;
+
+	switch (attr->group) {
+	case KVM_ARM_VCPU_PMU_V3_CTRL:
+		ret = kvm_arm_pmu_v3_has_attr(vcpu, attr);
+		break;
+	default:
+		ret = -ENXIO;
+		break;
+	}
+
+	return ret;
+}
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index fee86eb..3890c94 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -36,6 +36,7 @@ struct kvm_pmu {
 };
 
 #define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
+#define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
@@ -49,11 +50,18 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx);
 bool kvm_arm_support_pmu_v3(void);
+int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu,
+			    struct kvm_device_attr *attr);
+int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu,
+			    struct kvm_device_attr *attr);
+int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
+			    struct kvm_device_attr *attr);
 #else
 struct kvm_pmu {
 };
 
 #define kvm_arm_pmu_v3_ready(v)		(false)
+#define kvm_arm_pmu_irq_initialized(v)	(false)
 static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
 					    u64 select_idx)
 {
@@ -74,6 +82,21 @@ static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
 						  u64 data, u64 select_idx) {}
 static inline bool kvm_arm_support_pmu_v3(void) { return false; }
+static inline int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu,
+					  struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+static inline int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu,
+					  struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+static inline int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
+					  struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
 #endif
 
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 05e9d7e..37f6100 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -19,6 +19,7 @@
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
 #include <linux/perf_event.h>
+#include <linux/uaccess.h>
 #include <asm/kvm_emulate.h>
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
@@ -383,3 +384,130 @@ bool kvm_arm_support_pmu_v3(void)
 	 */
 	return (perf_num_counters() > 0);
 }
+
+static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
+{
+	if (!kvm_arm_support_pmu_v3())
+		return -ENODEV;
+
+	if (!test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features) ||
+	    !kvm_arm_pmu_irq_initialized(vcpu))
+		return -ENXIO;
+
+	if (kvm_arm_pmu_v3_ready(vcpu))
+		return -EBUSY;
+
+	kvm_pmu_vcpu_reset(vcpu);
+	vcpu->arch.pmu.ready = true;
+
+	return 0;
+}
+
+static int kvm_arm_pmu_irq_access(struct kvm_vcpu *vcpu,
+				  struct kvm_device_attr *attr,
+				  int *irq, bool is_set)
+{
+	if (!is_set) {
+		if (!kvm_arm_pmu_irq_initialized(vcpu))
+			return -ENXIO;
+
+		*irq = vcpu->arch.pmu.irq_num;
+	} else {
+		if (kvm_arm_pmu_irq_initialized(vcpu))
+			return -EBUSY;
+
+		kvm_debug("Set kvm ARM PMU irq: %d\n", *irq);
+		vcpu->arch.pmu.irq_num = *irq;
+	}
+
+	return 0;
+}
+
+static bool irq_is_valid(struct kvm *kvm, int irq, bool is_ppi)
+{
+	int i;
+	struct kvm_vcpu *vcpu;
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (!kvm_arm_pmu_irq_initialized(vcpu))
+			continue;
+
+		if (is_ppi) {
+			if (vcpu->arch.pmu.irq_num != irq)
+				return false;
+		} else {
+			if (vcpu->arch.pmu.irq_num == irq)
+				return false;
+		}
+	}
+
+	return true;
+}
+
+
+int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+{
+	switch (attr->attr) {
+	case KVM_ARM_VCPU_PMU_V3_IRQ: {
+		int __user *uaddr = (int __user *)(long)attr->addr;
+		int reg;
+
+		if (!test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features))
+			return -ENODEV;
+
+		if (get_user(reg, uaddr))
+			return -EFAULT;
+
+		/*
+		 * The PMU overflow interrupt can be a PPI or an SPI, but within
+		 * one VM the interrupt type must be the same for all vcpus. As
+		 * a PPI, the interrupt number is the same for all vcpus, while
+		 * as an SPI it must be different for each vcpu.
+		 */
+		if (reg < VGIC_NR_SGIS || reg >= vcpu->kvm->arch.vgic.nr_irqs ||
+		    !irq_is_valid(vcpu->kvm, reg, reg < VGIC_NR_PRIVATE_IRQS))
+			return -EINVAL;
+
+		return kvm_arm_pmu_irq_access(vcpu, attr, &reg, true);
+	}
+	case KVM_ARM_VCPU_PMU_V3_INIT:
+		return kvm_arm_pmu_v3_init(vcpu);
+	}
+
+	return -ENXIO;
+}
+
+int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+{
+	int ret;
+
+	switch (attr->attr) {
+	case KVM_ARM_VCPU_PMU_V3_IRQ: {
+		int __user *uaddr = (int __user *)(long)attr->addr;
+		int reg = -1;
+
+		if (!test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features))
+			return -ENODEV;
+
+		ret = kvm_arm_pmu_irq_access(vcpu, attr, &reg, false);
+		if (ret)
+			return ret;
+		return put_user(reg, uaddr);
+	}
+	}
+
+	return -ENXIO;
+}
+
+int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+{
+	switch (attr->attr) {
+	case KVM_ARM_VCPU_PMU_V3_IRQ:
+	case KVM_ARM_VCPU_PMU_V3_INIT:
+		if (kvm_arm_support_pmu_v3() &&
+		    test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features))
+			return 0;
+	}
+
+	return -ENXIO;
+}
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* [PATCH v10 21/21] KVM: ARM64: Add a new vcpu device control group for PMUv3
@ 2016-01-27  3:51   ` Shannon Zhao
  0 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-27  3:51 UTC (permalink / raw)
  To: kvmarm, marc.zyngier, christoffer.dall
  Cc: kvm, will.deacon, linux-arm-kernel, shannon.zhao

From: Shannon Zhao <shannon.zhao@linaro.org>

To configure the virtual PMUv3 overflow interrupt number, we use the
vcpu kvm_device ioctl, encapsulating the KVM_ARM_VCPU_PMU_V3_IRQ
attribute within the KVM_ARM_VCPU_PMU_V3_CTRL group.

After configuring the PMUv3, call the vcpu ioctl with attribute
KVM_ARM_VCPU_PMU_V3_INIT to initialize the PMUv3.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Acked-by: Peter Maydell <peter.maydell@linaro.org>
---
CC: Peter Maydell <peter.maydell@linaro.org>
---
 Documentation/virtual/kvm/devices/vcpu.txt |  24 ++++++
 arch/arm/include/asm/kvm_host.h            |  15 ++++
 arch/arm/kvm/arm.c                         |   3 +
 arch/arm64/include/asm/kvm_host.h          |   6 ++
 arch/arm64/include/uapi/asm/kvm.h          |   5 ++
 arch/arm64/kvm/guest.c                     |  51 ++++++++++++
 include/kvm/arm_pmu.h                      |  23 ++++++
 virt/kvm/arm/pmu.c                         | 128 +++++++++++++++++++++++++++++
 8 files changed, 255 insertions(+)

diff --git a/Documentation/virtual/kvm/devices/vcpu.txt b/Documentation/virtual/kvm/devices/vcpu.txt
index 3cc59c5..d626237 100644
--- a/Documentation/virtual/kvm/devices/vcpu.txt
+++ b/Documentation/virtual/kvm/devices/vcpu.txt
@@ -6,3 +6,27 @@ KVM_GET_DEVICE_ATTR, and KVM_HAS_DEVICE_ATTR. The interface uses the same struct
 kvm_device_attr as other devices, but targets VCPU-wide settings and controls.
 
 The groups and attributes per virtual cpu, if any, are architecture specific.
+
+1. GROUP: KVM_ARM_VCPU_PMU_V3_CTRL
+Architectures: ARM64
+
+1.1. ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_IRQ
+Parameters: in kvm_device_attr.addr the address for PMU overflow interrupt
+Returns: -EBUSY: The PMU overflow interrupt is already set
+         -ENXIO: The overflow interrupt not set when attempting to get it
+         -ENODEV: PMUv3 not supported
+         -EINVAL: Invalid PMU overflow interrupt number supplied
+
+A value describing the PMUv3 (Performance Monitor Unit v3) overflow interrupt
+number for this vcpu. This interrupt could be a PPI or SPI, but the interrupt
+type must be same for each vcpu. As a PPI, the interrupt number is same for all
+vcpus, while as an SPI it must be different for each vcpu.
+
+1.2 ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_INIT
+Parameters: no additional parameter in kvm_device_attr.addr
+Returns: -ENODEV: PMUv3 not supported
+         -ENXIO: PMUv3 not properly configured as required prior to calling this
+                 attribute
+         -EBUSY: PMUv3 already initialized
+
+Request the initialization of the PMUv3.
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index f9f2779..6dd0992 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -242,5 +242,20 @@ static inline void kvm_arm_init_debug(void) {}
 static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
+static inline int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
+					     struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+static inline int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
+					     struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+static inline int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
+					     struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
 
 #endif /* __ARM_KVM_HOST_H__ */
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 34d7395..dc8644f 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -833,6 +833,7 @@ static int kvm_arm_vcpu_set_attr(struct kvm_vcpu *vcpu,
 
 	switch (attr->group) {
 	default:
+		ret = kvm_arm_vcpu_arch_set_attr(vcpu, attr);
 		break;
 	}
 
@@ -846,6 +847,7 @@ static int kvm_arm_vcpu_get_attr(struct kvm_vcpu *vcpu,
 
 	switch (attr->group) {
 	default:
+		ret = kvm_arm_vcpu_arch_get_attr(vcpu, attr);
 		break;
 	}
 
@@ -859,6 +861,7 @@ static int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu,
 
 	switch (attr->group) {
 	default:
+		ret = kvm_arm_vcpu_arch_has_attr(vcpu, attr);
 		break;
 	}
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index cb220b7..a855a30 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -359,5 +359,11 @@ void kvm_arm_init_debug(void);
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
+int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
+			       struct kvm_device_attr *attr);
+int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
+			       struct kvm_device_attr *attr);
+int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
+			       struct kvm_device_attr *attr);
 
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 6aedbe3..f209ea1 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -205,6 +205,11 @@ struct kvm_arch_memory_slot {
 #define KVM_DEV_ARM_VGIC_GRP_CTRL	4
 #define   KVM_DEV_ARM_VGIC_CTRL_INIT	0
 
+/* Device Control API on vcpu fd */
+#define KVM_ARM_VCPU_PMU_V3_CTRL	0
+#define   KVM_ARM_VCPU_PMU_V3_IRQ	0
+#define   KVM_ARM_VCPU_PMU_V3_INIT	1
+
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_TYPE_SHIFT		24
 #define KVM_ARM_IRQ_TYPE_MASK		0xff
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index fcb7788..dbe45c3 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -380,3 +380,54 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
 	}
 	return 0;
 }
+
+int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
+			       struct kvm_device_attr *attr)
+{
+	int ret;
+
+	switch (attr->group) {
+	case KVM_ARM_VCPU_PMU_V3_CTRL:
+		ret = kvm_arm_pmu_v3_set_attr(vcpu, attr);
+		break;
+	default:
+		ret = -ENXIO;
+		break;
+	}
+
+	return ret;
+}
+
+int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
+			       struct kvm_device_attr *attr)
+{
+	int ret;
+
+	switch (attr->group) {
+	case KVM_ARM_VCPU_PMU_V3_CTRL:
+		ret = kvm_arm_pmu_v3_get_attr(vcpu, attr);
+		break;
+	default:
+		ret = -ENXIO;
+		break;
+	}
+
+	return ret;
+}
+
+int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
+			       struct kvm_device_attr *attr)
+{
+	int ret;
+
+	switch (attr->group) {
+	case KVM_ARM_VCPU_PMU_V3_CTRL:
+		ret = kvm_arm_pmu_v3_has_attr(vcpu, attr);
+		break;
+	default:
+		ret = -ENXIO;
+		break;
+	}
+
+	return ret;
+}
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index fee86eb..3890c94 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -36,6 +36,7 @@ struct kvm_pmu {
 };
 
 #define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
+#define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
 u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
@@ -49,11 +50,18 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
 				    u64 select_idx);
 bool kvm_arm_support_pmu_v3(void);
+int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu,
+			    struct kvm_device_attr *attr);
+int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu,
+			    struct kvm_device_attr *attr);
+int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
+			    struct kvm_device_attr *attr);
 #else
 struct kvm_pmu {
 };
 
 #define kvm_arm_pmu_v3_ready(v)		(false)
+#define kvm_arm_pmu_irq_initialized(v)	(false)
 static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
 					    u64 select_idx)
 {
@@ -74,6 +82,21 @@ static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
 static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
 						  u64 data, u64 select_idx) {}
 static inline bool kvm_arm_support_pmu_v3(void) { return false; }
+static inline int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu,
+					  struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+static inline int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu,
+					  struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
+static inline int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
+					  struct kvm_device_attr *attr)
+{
+	return -ENXIO;
+}
 #endif
 
 #endif
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 05e9d7e..37f6100 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -19,6 +19,7 @@
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
 #include <linux/perf_event.h>
+#include <linux/uaccess.h>
 #include <asm/kvm_emulate.h>
 #include <kvm/arm_pmu.h>
 #include <kvm/arm_vgic.h>
@@ -383,3 +384,130 @@ bool kvm_arm_support_pmu_v3(void)
 	 */
 	return (perf_num_counters() > 0);
 }
+
+static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
+{
+	if (!kvm_arm_support_pmu_v3())
+		return -ENODEV;
+
+	if (!test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features) ||
+	    !kvm_arm_pmu_irq_initialized(vcpu))
+		return -ENXIO;
+
+	if (kvm_arm_pmu_v3_ready(vcpu))
+		return -EBUSY;
+
+	kvm_pmu_vcpu_reset(vcpu);
+	vcpu->arch.pmu.ready = true;
+
+	return 0;
+}
+
+static int kvm_arm_pmu_irq_access(struct kvm_vcpu *vcpu,
+				  struct kvm_device_attr *attr,
+				  int *irq, bool is_set)
+{
+	if (!is_set) {
+		if (!kvm_arm_pmu_irq_initialized(vcpu))
+			return -ENXIO;
+
+		*irq = vcpu->arch.pmu.irq_num;
+	} else {
+		if (kvm_arm_pmu_irq_initialized(vcpu))
+			return -EBUSY;
+
+		kvm_debug("Set kvm ARM PMU irq: %d\n", *irq);
+		vcpu->arch.pmu.irq_num = *irq;
+	}
+
+	return 0;
+}
+
+static bool irq_is_valid(struct kvm *kvm, int irq, bool is_ppi)
+{
+	int i;
+	struct kvm_vcpu *vcpu;
+
+	kvm_for_each_vcpu(i, vcpu, kvm) {
+		if (!kvm_arm_pmu_irq_initialized(vcpu))
+			continue;
+
+		if (is_ppi) {
+			if (vcpu->arch.pmu.irq_num != irq)
+				return false;
+		} else {
+			if (vcpu->arch.pmu.irq_num == irq)
+				return false;
+		}
+	}
+
+	return true;
+}
+
+
+int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+{
+	switch (attr->attr) {
+	case KVM_ARM_VCPU_PMU_V3_IRQ: {
+		int __user *uaddr = (int __user *)(long)attr->addr;
+		int reg;
+
+		if (!test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features))
+			return -ENODEV;
+
+		if (get_user(reg, uaddr))
+			return -EFAULT;
+
+		/*
+		 * The PMU overflow interrupt could be a PPI or SPI, but for one
+		 * VM the interrupt type must be same for each vcpu. As a PPI,
+		 * the interrupt number is same for all vcpus, while as an SPI
+		 * it must be different for each vcpu.
+		 */
+		if (reg < VGIC_NR_SGIS || reg >= vcpu->kvm->arch.vgic.nr_irqs ||
+		    !irq_is_valid(vcpu->kvm, reg, reg < VGIC_NR_PRIVATE_IRQS))
+			return -EINVAL;
+
+		return kvm_arm_pmu_irq_access(vcpu, attr, &reg, true);
+	}
+	case KVM_ARM_VCPU_PMU_V3_INIT:
+		return kvm_arm_pmu_v3_init(vcpu);
+	}
+
+	return -ENXIO;
+}
+
+int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+{
+	int ret;
+
+	switch (attr->attr) {
+	case KVM_ARM_VCPU_PMU_V3_IRQ: {
+		int __user *uaddr = (int __user *)(long)attr->addr;
+		int reg = -1;
+
+		if (!test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features))
+			return -ENODEV;
+
+		ret = kvm_arm_pmu_irq_access(vcpu, attr, &reg, false);
+		if (ret)
+			return ret;
+		return put_user(reg, uaddr);
+	}
+	}
+
+	return -ENXIO;
+}
+
+int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
+{
+	switch (attr->attr) {
+	case KVM_ARM_VCPU_PMU_V3_IRQ:
+	case KVM_ARM_VCPU_PMU_V3_INIT:
+		if (kvm_arm_support_pmu_v3() &&
+		    test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features))
+			return 0;
+	}
+
+	return -ENXIO;
+}
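
[Not part of the patch: a minimal userspace sketch of driving the new
attributes, for reviewers who want to try the interface. It assumes an
open vcpu fd ("vcpu_fd") and that PPI 23 is free to use as the overflow
interrupt; both names are illustrative only.]

#include <err.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void set_up_vcpu_pmu(int vcpu_fd)
{
	int irq = 23;	/* a PPI, so the same number on every vcpu */
	struct kvm_device_attr attr = {
		.group	= KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr	= KVM_ARM_VCPU_PMU_V3_IRQ,
		.addr	= (uint64_t)(unsigned long)&irq,
	};

	/* Set the overflow interrupt first; fails with EBUSY if already set */
	if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr))
		err(1, "KVM_ARM_VCPU_PMU_V3_IRQ");

	/* Then request initialization; _INIT takes no payload */
	attr.attr = KVM_ARM_VCPU_PMU_V3_INIT;
	attr.addr = 0;
	if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr))
		err(1, "KVM_ARM_VCPU_PMU_V3_INIT");
}
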
-- 
2.0.4

^ permalink raw reply related	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 04/21] KVM: ARM64: Add access handler for PMCR register
  2016-01-27  3:51   ` Shannon Zhao
@ 2016-01-28 15:36     ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 15:36 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvm, marc.zyngier, shannon.zhao, will.deacon, linux-arm-kernel, kvmarm

On Wed, Jan 27, 2016 at 11:51:32AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Add reset handler which gets host value of PMCR_EL0 and make writable
> bits architecturally UNKNOWN except PMCR.E which is zero. Add an access
> handler for PMCR.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 42 ++++++++++++++++++++++++++++++++++++++++--
>  include/kvm/arm_pmu.h     |  4 ++++
>  2 files changed, 44 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index eec3598..97fea84 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -34,6 +34,7 @@
>  #include <asm/kvm_emulate.h>
>  #include <asm/kvm_host.h>
>  #include <asm/kvm_mmu.h>
> +#include <asm/pmu.h>
>  
>  #include <trace/events/kvm.h>
>  
> @@ -439,6 +440,43 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
>  }
>  
> +static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 pmcr, val;
> +
> +	asm volatile("mrs %0, pmcr_el0\n" : "=r" (pmcr));
> +	/* Writable bits of PMCR_EL0 (ARMV8_PMCR_MASK) is reset to UNKNOWN
> +	 * except PMCR.E resetting to zero.
> +	 */
> +	val = ((pmcr & ~ARMV8_PMCR_MASK) | (ARMV8_PMCR_MASK & 0xdecafbad))
> +	      & (~ARMV8_PMCR_E);
> +	vcpu_sys_reg(vcpu, PMCR_EL0) = val;
> +}
> +
> +static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> +			const struct sys_reg_desc *r)
> +{
> +	u64 val;
> +
> +	if (!kvm_arm_pmu_v3_ready(vcpu))
> +		return trap_raz_wi(vcpu, p, r);
> +
> +	if (p->is_write) {
> +		/* Only update writeable bits of PMCR */
> +		val = vcpu_sys_reg(vcpu, PMCR_EL0);
> +		val &= ~ARMV8_PMCR_MASK;
> +		val |= p->regval & ARMV8_PMCR_MASK;
> +		vcpu_sys_reg(vcpu, PMCR_EL0) = val;
> +	} else {
> +		/* PMCR.P & PMCR.C are RAZ */
> +		val = vcpu_sys_reg(vcpu, PMCR_EL0)
> +		      & ~(ARMV8_PMCR_P | ARMV8_PMCR_C);
> +		p->regval = val;

Should we also be setting the IMP, IDCODE, and N fields here to the
values of the host PE?
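
If so, one sketch (illustration only, reusing the mrs read from
reset_pmcr; IMP, IDCODE and N all sit above ARMV8_PMCR_MASK):

	u64 host_pmcr;

	asm volatile("mrs %0, pmcr_el0" : "=r" (host_pmcr));
	/* guest's writable bits, host's read-only ID fields */
	p->regval = (val & ARMV8_PMCR_MASK) |
		    (host_pmcr & ~(u64)ARMV8_PMCR_MASK);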

> +	}
> +
> +	return true;
> +}
> +
>  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
>  	/* DBGBVRn_EL1 */						\
> @@ -623,7 +661,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  
>  	/* PMCR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000),
> -	  trap_raz_wi },
> +	  access_pmcr, reset_pmcr, },
>  	/* PMCNTENSET_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
>  	  trap_raz_wi },
> @@ -885,7 +923,7 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn( 7), CRm(14), Op2( 2), access_dcsw },
>  
>  	/* PMU */
> -	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), trap_raz_wi },
> +	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index be220ee..32fee2d 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -34,9 +34,13 @@ struct kvm_pmu {
>  	struct kvm_pmc pmc[ARMV8_MAX_COUNTERS];
>  	bool ready;
>  };
> +
> +#define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
>  #else
>  struct kvm_pmu {
>  };
> +
> +#define kvm_arm_pmu_v3_ready(v)		(false)
>  #endif
>  
>  #endif
> -- 
> 2.0.4
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
  2016-01-27  3:51   ` Shannon Zhao
@ 2016-01-28 16:31     ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 16:31 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai

On Wed, Jan 27, 2016 at 11:51:35AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> When we use tools like perf on host, perf passes the event type and the
> id of this event type category to kernel, then kernel will map them to
> hardware event number and write this number to PMU PMEVTYPER<n>_EL0
> register. When getting the event number in KVM, directly use raw event
> type to create a perf_event for it.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/include/asm/pmu.h |   3 ++
>  arch/arm64/kvm/Makefile      |   1 +
>  include/kvm/arm_pmu.h        |  10 ++++
>  virt/kvm/arm/pmu.c           | 122 +++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 136 insertions(+)
>  create mode 100644 virt/kvm/arm/pmu.c
> 
> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
> index 4406184..2588f9c 100644
> --- a/arch/arm64/include/asm/pmu.h
> +++ b/arch/arm64/include/asm/pmu.h
> @@ -21,6 +21,7 @@
>  
>  #define ARMV8_MAX_COUNTERS      32
>  #define ARMV8_COUNTER_MASK      (ARMV8_MAX_COUNTERS - 1)
> +#define ARMV8_CYCLE_IDX         (ARMV8_MAX_COUNTERS - 1)

I'm not sure we want to add this. Its name is wrong, as it's really
PMCNTENSET_EL0.C, and just a few lines above we have the idx defined
already (ARMV8_IDX_CYCLE_COUNTER), but as zero, because
arch/arm64/kernel/perf_event.c maps it that way.

I think we should do the same with the pmc array, i.e. map the cycle
counter to idx zero.
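
For reference, the zero-based mapping in perf_event.c is roughly (from
memory):

	#define ARMV8_IDX_CYCLE_COUNTER	0
	#define ARMV8_IDX_COUNTER0	1	/* first event counter */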

>  
>  /*
>   * Per-CPU PMCR: config reg
> @@ -31,6 +32,8 @@
>  #define ARMV8_PMCR_D		(1 << 3) /* CCNT counts every 64th cpu cycle */
>  #define ARMV8_PMCR_X		(1 << 4) /* Export to ETM */
>  #define ARMV8_PMCR_DP		(1 << 5) /* Disable CCNT if non-invasive debug*/
> +/* Determines which PMCCNTR_EL0 bit generates an overflow */
> +#define ARMV8_PMCR_LC		(1 << 6)
>  #define	ARMV8_PMCR_N_SHIFT	11	 /* Number of counters supported */
>  #define	ARMV8_PMCR_N_MASK	0x1f
>  #define	ARMV8_PMCR_MASK		0x3f	 /* Mask for writable bits */
> diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
> index caee9ee..122cff4 100644
> --- a/arch/arm64/kvm/Makefile
> +++ b/arch/arm64/kvm/Makefile
> @@ -26,3 +26,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v2-emul.o
>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
>  kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
> +kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index 32fee2d..ee4b15c 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -36,11 +36,21 @@ struct kvm_pmu {
>  };
>  
>  #define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
> +u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> +				    u64 select_idx);
>  #else
>  struct kvm_pmu {
>  };
>  
>  #define kvm_arm_pmu_v3_ready(v)		(false)
> +static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
> +					    u64 select_idx)
> +{
> +	return 0;
> +}
> +static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
> +						  u64 data, u64 select_idx) {}
>  #endif
>  
>  #endif
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> new file mode 100644
> index 0000000..673ec55
> --- /dev/null
> +++ b/virt/kvm/arm/pmu.c
> @@ -0,0 +1,122 @@
> +/*
> + * Copyright (C) 2015 Linaro Ltd.
> + * Author: Shannon Zhao <shannon.zhao@linaro.org>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <linux/cpu.h>
> +#include <linux/kvm.h>
> +#include <linux/kvm_host.h>
> +#include <linux/perf_event.h>
> +#include <asm/kvm_emulate.h>
> +#include <kvm/arm_pmu.h>
> +
> +/**
> + * kvm_pmu_get_counter_value - get PMU counter value
> + * @vcpu: The vcpu pointer
> + * @select_idx: The counter index
> + */
> +u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> +{
> +	u64 counter, reg, enabled, running;
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +
> +	reg = (select_idx == ARMV8_CYCLE_IDX)
> +	      ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;

reg = select_idx == 0 ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx - 1;

> +	counter = vcpu_sys_reg(vcpu, reg);
> +
> +	/* The real counter value is equal to the value of counter register plus
> +	 * the value perf event counts.
> +	 */
> +	if (pmc->perf_event)
> +		counter += perf_event_read_value(pmc->perf_event, &enabled,
> +						 &running);
> +
> +	return counter & pmc->bitmask;
> +}
> +
> +/**
> + * kvm_pmu_stop_counter - stop PMU counter
> + * @pmc: The PMU counter pointer
> + *
> + * If this counter has been configured to monitor some event, release it here.
> + */
> +static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
> +{
> +	u64 counter, reg;
> +
> +	if (pmc->perf_event) {
> +		counter = kvm_pmu_get_counter_value(vcpu, pmc->idx);
> +		reg = (pmc->idx == ARMV8_CYCLE_IDX)
> +		       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + pmc->idx;

Second need for this reg selection. We should probably create an idx to
reg function.
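
Something like this, say (sketch, keeping the series' current
numbering):

static u64 counter_idx_to_reg(u64 select_idx)
{
	return (select_idx == ARMV8_CYCLE_IDX)
	       ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
}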

> +		vcpu_sys_reg(vcpu, reg) = counter;
> +		perf_event_disable(pmc->perf_event);
> +		perf_event_release_kernel(pmc->perf_event);
> +		pmc->perf_event = NULL;
> +	}
> +}
> +
> +static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
> +{
> +	return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &&
> +	       (vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(select_idx));
> +}
> +
> +/**
> + * kvm_pmu_set_counter_event_type - set selected counter to monitor some event
> + * @vcpu: The vcpu pointer
> + * @data: The data guest writes to PMXEVTYPER_EL0
> + * @select_idx: The number of selected counter
> + *
> + * When OS accesses PMXEVTYPER_EL0, that means it wants to set a PMC to count an
> + * event with given hardware event number. Here we call perf_event API to
> + * emulate this action and create a kernel perf event for it.
> + */
> +void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
> +				    u64 select_idx)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> +	struct perf_event *event;
> +	struct perf_event_attr attr;
> +	u64 eventsel, counter;
> +
> +	kvm_pmu_stop_counter(vcpu, pmc);
> +	eventsel = data & ARMV8_EVTYPE_EVENT;
> +
> +	memset(&attr, 0, sizeof(struct perf_event_attr));
> +	attr.type = PERF_TYPE_RAW;
> +	attr.size = sizeof(attr);

nit: the memset sizeof could also just use attr to save characters.
Or why not avoid the memset using struct perf_event_attr attr = {...},
like arch/x86/kvm/pmu.c does?
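
i.e. something like (sketch; the data-dependent fields would still be
assigned below it):

	struct perf_event_attr attr = {
		.type		= PERF_TYPE_RAW,
		.size		= sizeof(attr),
		.pinned		= 1,
		.exclude_hv	= 1,	/* Don't count EL2 events */
		.exclude_host	= 1,	/* Don't count host events */
	};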


> +	attr.pinned = 1;
> +	attr.disabled = kvm_pmu_counter_is_enabled(vcpu, select_idx);

hmm... disabled = enabled? That looks inverted. If we always want it off
at set time, then it should just be '= 1', right?

> +	attr.exclude_user = data & ARMV8_EXCLUDE_EL0 ? 1 : 0;
> +	attr.exclude_kernel = data & ARMV8_EXCLUDE_EL1 ? 1 : 0;
> +	attr.exclude_hv = 1; /* Don't count EL2 events */
> +	attr.exclude_host = 1; /* Don't count host events */
> +	attr.config = eventsel;
> +
> +	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
> +	/* The initial sample period (overflow count) of an event. */
> +	attr.sample_period = (-counter) & pmc->bitmask;
> +
> +	event = perf_event_create_kernel_counter(&attr, -1, current, NULL, pmc);
> +	if (IS_ERR(event)) {
> +		pr_err_once("kvm: pmu event creation failed %ld\n",
> +			    PTR_ERR(event));
> +		return;
> +	}
> +
> +	pmc->perf_event = event;
> +}
> -- 
> 2.0.4
> 
> 

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
  2016-01-28 16:31     ` Andrew Jones
@ 2016-01-28 16:45       ` Marc Zyngier
  -1 siblings, 0 replies; 127+ messages in thread
From: Marc Zyngier @ 2016-01-28 16:45 UTC (permalink / raw)
  To: Andrew Jones, Shannon Zhao, will.deacon
  Cc: kvmarm, christoffer.dall, linux-arm-kernel, kvm, wei, cov,
	shannon.zhao, peter.huangpeng, hangaohuai

On 28/01/16 16:31, Andrew Jones wrote:
> On Wed, Jan 27, 2016 at 11:51:35AM +0800, Shannon Zhao wrote:
>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>
>> When we use tools like perf on host, perf passes the event type and the
>> id of this event type category to kernel, then kernel will map them to
>> hardware event number and write this number to PMU PMEVTYPER<n>_EL0
>> register. When getting the event number in KVM, directly use raw event
>> type to create a perf_event for it.
>>
>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>>  arch/arm64/include/asm/pmu.h |   3 ++
>>  arch/arm64/kvm/Makefile      |   1 +
>>  include/kvm/arm_pmu.h        |  10 ++++
>>  virt/kvm/arm/pmu.c           | 122 +++++++++++++++++++++++++++++++++++++++++++
>>  4 files changed, 136 insertions(+)
>>  create mode 100644 virt/kvm/arm/pmu.c
>>
>> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
>> index 4406184..2588f9c 100644
>> --- a/arch/arm64/include/asm/pmu.h
>> +++ b/arch/arm64/include/asm/pmu.h
>> @@ -21,6 +21,7 @@
>>  
>>  #define ARMV8_MAX_COUNTERS      32
>>  #define ARMV8_COUNTER_MASK      (ARMV8_MAX_COUNTERS - 1)
>> +#define ARMV8_CYCLE_IDX         (ARMV8_MAX_COUNTERS - 1)
> 
> I'm not sure we want to add this. It's name is wrong, as it's really
> PMCNTENSET_EL0.C, and just a few lines above we have the idx defined
> already (ARMV8_IDX_CYCLE_COUNTER), but as zero, because
> arch/arm64/kernel/perf_event.c maps it that way.
> 
> I think we should do the same with the pmc array, i.e. map the cycle
> counter to idx zero.

I tend to have the opposite view. Not for the sake of it, but because I
find it helpful to directly map the code to the architecture
documentation without having to bend another handful of neurons.

Will probably had some good reasons to structure it that way, but I
don't know the rationale. Will?

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
  2016-01-28 16:45       ` Marc Zyngier
@ 2016-01-28 18:06         ` Will Deacon
  -1 siblings, 0 replies; 127+ messages in thread
From: Will Deacon @ 2016-01-28 18:06 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Andrew Jones, Shannon Zhao, kvmarm, christoffer.dall,
	linux-arm-kernel, kvm, wei, cov, shannon.zhao, peter.huangpeng,
	hangaohuai

On Thu, Jan 28, 2016 at 04:45:36PM +0000, Marc Zyngier wrote:
> On 28/01/16 16:31, Andrew Jones wrote:
> > On Wed, Jan 27, 2016 at 11:51:35AM +0800, Shannon Zhao wrote:
> >> From: Shannon Zhao <shannon.zhao@linaro.org>
> >>
> >> When we use tools like perf on host, perf passes the event type and the
> >> id of this event type category to kernel, then kernel will map them to
> >> hardware event number and write this number to PMU PMEVTYPER<n>_EL0
> >> register. When getting the event number in KVM, directly use raw event
> >> type to create a perf_event for it.
> >>
> >> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> >> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> >> ---
> >>  arch/arm64/include/asm/pmu.h |   3 ++
> >>  arch/arm64/kvm/Makefile      |   1 +
> >>  include/kvm/arm_pmu.h        |  10 ++++
> >>  virt/kvm/arm/pmu.c           | 122 +++++++++++++++++++++++++++++++++++++++++++
> >>  4 files changed, 136 insertions(+)
> >>  create mode 100644 virt/kvm/arm/pmu.c
> >>
> >> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
> >> index 4406184..2588f9c 100644
> >> --- a/arch/arm64/include/asm/pmu.h
> >> +++ b/arch/arm64/include/asm/pmu.h
> >> @@ -21,6 +21,7 @@
> >>  
> >>  #define ARMV8_MAX_COUNTERS      32
> >>  #define ARMV8_COUNTER_MASK      (ARMV8_MAX_COUNTERS - 1)
> >> +#define ARMV8_CYCLE_IDX         (ARMV8_MAX_COUNTERS - 1)
> > 
> > I'm not sure we want to add this. It's name is wrong, as it's really
> > PMCNTENSET_EL0.C, and just a few lines above we have the idx defined
> > already (ARMV8_IDX_CYCLE_COUNTER), but as zero, because
> > arch/arm64/kernel/perf_event.c maps it that way.
> > 
> > I think we should do the same with the pmc array, i.e. map the cycle
> > counter to idx zero.
> 
> I tend to have the opposite view. Not for the sake of it, but because I
> find it helpful to directly map the code to the architecture
> documentation without having to bend another handful of neurons.
> 
> Will probably had some good reasons to structure it that way, but I
> don't know the rational. Will?

It was years ago, but I suspect that the cycle counter is index zero
because it's mandated, whilst the number of event counters is IMPDEF.

Will

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 10/21] KVM: ARM64: Add access handler for PMCNTENSET and PMCNTENCLR register
  2016-01-27  3:51   ` Shannon Zhao
@ 2016-01-28 18:08     ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 18:08 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvm, marc.zyngier, shannon.zhao, will.deacon, linux-arm-kernel, kvmarm

On Wed, Jan 27, 2016 at 11:51:38AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Since the reset value of PMCNTENSET and PMCNTENCLR is UNKNOWN, use
> reset_unknown for its reset handler. Add a handler to emulate writing
> PMCNTENSET or PMCNTENCLR register.
> 
> When writing to PMCNTENSET, call perf_event_enable to enable the perf
> event. When writing to PMCNTENCLR, call perf_event_disable to disable
> the perf event.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 35 +++++++++++++++++++++++---
>  include/kvm/arm_pmu.h     |  9 +++++++
>  virt/kvm/arm/pmu.c        | 63 +++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 103 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 6a50262..d43a9a4 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -603,6 +603,33 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
>  	return true;
>  }
>  
> +static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> +			   const struct sys_reg_desc *r)
> +{
> +	u64 val, mask;
> +
> +	if (!kvm_arm_pmu_v3_ready(vcpu))
> +		return trap_raz_wi(vcpu, p, r);
> +
> +	mask = kvm_pmu_valid_counter_mask(vcpu);
> +	if (p->is_write) {
> +		val = p->regval & mask;
> +		if (r->Op2 & 0x1) {
> +			/* accessing PMCNTENSET_EL0 */
> +			vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val;
> +			kvm_pmu_enable_counter(vcpu, val);
> +		} else {
> +			/* accessing PMCNTENCLR_EL0 */
> +			vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val;
> +			kvm_pmu_disable_counter(vcpu, val);
> +		}
> +	} else {
> +		p->regval = vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask;
> +	}
> +
> +	return true;
> +}
> +
>  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
>  	/* DBGBVRn_EL1 */						\
> @@ -804,10 +831,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	  access_pmcr, reset_pmcr, },
>  	/* PMCNTENSET_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
> -	  trap_raz_wi },
> +	  access_pmcnten, reset_unknown, PMCNTENSET_EL0 },
>  	/* PMCNTENCLR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
> -	  trap_raz_wi },
> +	  access_pmcnten, NULL, PMCNTENSET_EL0 },

I don't think the reg field is needed, as the reset handler isn't
defined and the access handler doesn't use it. Oh, and shouldn't it be
PMCNTENCLR_EL0 anyway?

>  	/* PMOVSCLR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b011),
>  	  trap_raz_wi },
> @@ -1149,8 +1176,8 @@ static const struct sys_reg_desc cp15_regs[] = {
>  
>  	/* PMU */
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr },
> -	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
> -	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
> +	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmcnten },
> +	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmcnten },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index ee4b15c..a7e5485 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -37,6 +37,9 @@ struct kvm_pmu {
>  
>  #define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
>  u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
> +u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
> +void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
> +void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
>  void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
>  				    u64 select_idx);
>  #else
> @@ -49,6 +52,12 @@ static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
>  {
>  	return 0;
>  }
> +static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> +{
> +	return 0;
> +}
> +static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
> +static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
>  static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
>  						  u64 data, u64 select_idx) {}
>  #endif
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 673ec55..0873977 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -68,6 +68,69 @@ static void kvm_pmu_stop_counter(struct kvm_vcpu *vcpu, struct kvm_pmc *pmc)
>  	}
>  }
>  
> +u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
> +{
> +	u64 val = vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMCR_N_SHIFT;
> +
> +	val &= ARMV8_PMCR_N_MASK;
> +	return GENMASK(val - 1, 0) | BIT(ARMV8_CYCLE_IDX);

val can be zero if PMCR.N is zero (meaning only the cycle counter is
implemented). We should confirm it's not zero before calling GENMASK.
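
A guarded sketch, keeping the series' names:

	u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
	{
		u64 n = (vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMCR_N_SHIFT)
			& ARMV8_PMCR_N_MASK;
		u64 mask = BIT(ARMV8_CYCLE_IDX);

		if (n)
			mask |= GENMASK(n - 1, 0);

		return mask;
	}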

> +}
> +
> +/**
> + * kvm_pmu_enable_counter - enable selected PMU counter
> + * @vcpu: The vcpu pointer
> + * @val: the value guest writes to PMCNTENSET register
> + *
> + * Call perf_event_enable to start counting the perf event
> + */
> +void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val)
> +{
> +	int i;
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc;
> +
> +	if (!(vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) || !val)
> +		return;
> +
> +	for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
> +		if (!(val & BIT(i)))
> +			continue;
> +
> +		pmc = &pmu->pmc[i];
> +		if (pmc->perf_event) {
> +			perf_event_enable(pmc->perf_event);
> +			if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> +				kvm_debug("fail to enable perf event\n");
> +		}
> +	}
> +}
> +
> +/**
> + * kvm_pmu_disable_counter - disable selected PMU counter
> + * @vcpu: The vcpu pointer
> + * @val: the value guest writes to PMCNTENCLR register
> + *
> + * Call perf_event_disable to stop counting the perf event
> + */
> +void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val)
> +{
> +	int i;
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc;
> +
> +	if (!val)
> +		return;
> +
> +	for (i = 0; i < ARMV8_MAX_COUNTERS; i++) {
> +		if (!(val & BIT(i)))
> +			continue;
> +
> +		pmc = &pmu->pmc[i];
> +		if (pmc->perf_event)
> +			perf_event_disable(pmc->perf_event);
> +	}
> +}
> +
>  static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
>  {
>  	return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &&
> -- 
> 2.0.4
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 10/21] KVM: ARM64: Add access handler for PMCNTENSET and PMCNTENCLR register
  2016-01-28 18:08     ` Andrew Jones
@ 2016-01-28 18:12       ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 18:12 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai

On Thu, Jan 28, 2016 at 07:08:56PM +0100, Andrew Jones wrote:
> On Wed, Jan 27, 2016 at 11:51:38AM +0800, Shannon Zhao wrote:
> > @@ -804,10 +831,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> >  	  access_pmcr, reset_pmcr, },
> >  	/* PMCNTENSET_EL0 */
> >  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
> > -	  trap_raz_wi },
> > +	  access_pmcnten, reset_unknown, PMCNTENSET_EL0 },
> >  	/* PMCNTENCLR_EL0 */
> >  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b010),
> > -	  trap_raz_wi },
> > +	  access_pmcnten, NULL, PMCNTENSET_EL0 },
> 
> I don't think the reg field is needed, as the reset handler isn't
> defined and the access handler doesn't use it. Oh, and shouldn't it be
> PMCNTENCLR_EL0 anyway?

eh.. nevermind. Of course we just have the one sys_reg for both set/clr...

* Re: [PATCH v10 11/21] KVM: ARM64: Add access handler for PMINTENSET and PMINTENCLR register
  2016-01-27  3:51   ` Shannon Zhao
@ 2016-01-28 18:18     ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 18:18 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai

On Wed, Jan 27, 2016 at 11:51:39AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Since the reset value of PMINTENSET and PMINTENCLR is UNKNOWN, use
> reset_unknown as their reset handler. Add a handler to emulate writing
> the PMINTENSET or PMINTENCLR register.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 32 ++++++++++++++++++++++++++++----
>  1 file changed, 28 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index d43a9a4..41d4bcd 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -630,6 +630,30 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	return true;
>  }
>  
> +static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> +			   const struct sys_reg_desc *r)
> +{
> +	u64 mask = kvm_pmu_valid_counter_mask(vcpu);
> +
> +	if (!kvm_arm_pmu_v3_ready(vcpu))
> +		return trap_raz_wi(vcpu, p, r);
> +
> +	if (p->is_write) {

The two '& mask' line wrappings are kinda gross. How about doing

                u64 val = p->regval & mask;

here, and then not needing to wrap?
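
i.e. something like this (untested):

	if (p->is_write) {
		u64 val = p->regval & mask;

		if (r->Op2 & 0x1)
			/* accessing PMINTENSET_EL1 */
			vcpu_sys_reg(vcpu, PMINTENSET_EL1) |= val;
		else
			/* accessing PMINTENCLR_EL1 */
			vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~val;
	}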

> +		if (r->Op2 & 0x1)
> +			/* accessing PMINTENSET_EL1 */
> +			vcpu_sys_reg(vcpu, PMINTENSET_EL1) |= (p->regval
> +							       & mask);
> +		else
> +			/* accessing PMINTENCLR_EL1 */
> +			vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= ~(p->regval
> +								& mask);
> +	} else {
> +		p->regval = vcpu_sys_reg(vcpu, PMINTENSET_EL1) & mask;
> +	}
> +
> +	return true;
> +}
> +
>  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
>  	/* DBGBVRn_EL1 */						\
> @@ -788,10 +812,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  
>  	/* PMINTENSET_EL1 */
>  	{ Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b001),
> -	  trap_raz_wi },
> +	  access_pminten, reset_unknown, PMINTENSET_EL1 },
>  	/* PMINTENCLR_EL1 */
>  	{ Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b010),
> -	  trap_raz_wi },
> +	  access_pminten, NULL, PMINTENSET_EL1 },
>  
>  	/* MAIR_EL1 */
>  	{ Op0(0b11), Op1(0b000), CRn(0b1010), CRm(0b0010), Op2(0b000),
> @@ -1186,8 +1210,8 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_evtyper },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_evcntr },
>  	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
> -	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },
> -	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), trap_raz_wi },
> +	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pminten },
> +	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pminten },
>  
>  	{ Op1( 0), CRn(10), CRm( 2), Op2( 0), access_vm_reg, NULL, c10_PRRR },
>  	{ Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR },
> -- 
> 2.0.4
> 

* Re: [PATCH v10 13/21] KVM: ARM64: Add access handler for PMSWINC register
  2016-01-27  3:51   ` Shannon Zhao
@ 2016-01-28 18:37     ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 18:37 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai

On Wed, Jan 27, 2016 at 11:51:41AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Add an access handler which emulates writing and reading the PMSWINC
> register and add support for creating the software increment event.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/include/asm/pmu.h |  2 ++
>  arch/arm64/kvm/sys_regs.c    | 20 +++++++++++++++++++-
>  include/kvm/arm_pmu.h        |  2 ++
>  virt/kvm/arm/pmu.c           | 33 +++++++++++++++++++++++++++++++++
>  4 files changed, 56 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
> index 2588f9c..6f14a01 100644
> --- a/arch/arm64/include/asm/pmu.h
> +++ b/arch/arm64/include/asm/pmu.h
> @@ -60,6 +60,8 @@
>  #define	ARMV8_EVTYPE_MASK	0xc80003ff	/* Mask for writable bits */
>  #define	ARMV8_EVTYPE_EVENT	0x3ff		/* Mask for EVENT bits */
>  
> +#define ARMV8_EVTYPE_EVENT_SW_INCR	0	/* Software increment event */
> +
>  /*
>   * Event filters for PMUv3
>   */
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 60b24ea..f45c227 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -676,6 +676,23 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	return true;
>  }
>  
> +static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> +			   const struct sys_reg_desc *r)
> +{
> +	u64 mask;
> +
> +	if (!kvm_arm_pmu_v3_ready(vcpu))
> +		return trap_raz_wi(vcpu, p, r);
> +
> +	if (p->is_write) {
> +		mask = kvm_pmu_valid_counter_mask(vcpu);
> +		kvm_pmu_software_increment(vcpu, p->regval & mask);
> +		return true;
> +	}
> +
> +	return false;
> +}
> +
>  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
>  	/* DBGBVRn_EL1 */						\
> @@ -886,7 +903,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	  access_pmovs, NULL, PMOVSSET_EL0 },
>  	/* PMSWINC_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b100),
> -	  trap_raz_wi },
> +	  access_pmswinc, reset_unknown, PMSWINC_EL0 },
>  	/* PMSELR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
>  	  access_pmselr, reset_unknown, PMSELR_EL0 },
> @@ -1225,6 +1242,7 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), access_pmcnten },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), access_pmcnten },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), access_pmovs },
> +	{ Op1( 0), CRn( 9), CRm(12), Op2( 4), access_pmswinc },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index 4f8409d..caa706e 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -41,6 +41,7 @@ u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
>  void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
>  void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
>  void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
> +void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
>  void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
>  				    u64 select_idx);
>  #else
> @@ -60,6 +61,7 @@ static inline u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu)
>  static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
>  static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
>  static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
> +static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
>  static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
>  						  u64 data, u64 select_idx) {}
>  #endif
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index ee75fac..706c935 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -161,6 +161,35 @@ void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val)
>  		kvm_vcpu_kick(vcpu);
>  }
>  
> +/**
> + * kvm_pmu_software_increment - do software increment
> + * @vcpu: The vcpu pointer
> + * @val: the value guest writes to PMSWINC register
> + */
> +void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
> +{
> +	int i;
> +	u64 type, enable, reg;
> +
> +	if (val == 0)
> +		return;
> +
> +	for (i = 0; i < ARMV8_CYCLE_IDX; i++) {
> +		if (!(val & BIT(i)))
> +			continue;
> +		type = vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
> +		       & ARMV8_EVTYPE_EVENT;
> +		enable = vcpu_sys_reg(vcpu, PMCNTENSET_EL0);

The PMCNTENSET_EL0 read can be moved outside the loop.
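
i.e. (untested):

	enable = vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
	for (i = 0; i < ARMV8_CYCLE_IDX; i++) {
		if (!(val & BIT(i)))
			continue;
		...
	}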

> +		if ((type == ARMV8_EVTYPE_EVENT_SW_INCR) && (enable & BIT(i))) {
> +			reg = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
> +			reg = lower_32_bits(reg);
> +			vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = reg;
> +			if (!reg)
> +				kvm_pmu_overflow_set(vcpu, BIT(i));
> +		}
> +	}
> +}
> +
>  static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
>  {
>  	return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &&
> @@ -189,6 +218,10 @@ void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
>  	kvm_pmu_stop_counter(vcpu, pmc);
>  	eventsel = data & ARMV8_EVTYPE_EVENT;
>  
> +	/* Software increment event doesn't need to be backed by a perf event */
> +	if (eventsel == ARMV8_EVTYPE_EVENT_SW_INCR)
> +		return;
> +
>  	memset(&attr, 0, sizeof(struct perf_event_attr));
>  	attr.type = PERF_TYPE_RAW;
>  	attr.size = sizeof(attr);
> -- 
> 2.0.4
> 

* Re: [PATCH v10 14/21] KVM: ARM64: Add helper to handle PMCR register bits
  2016-01-27  3:51   ` Shannon Zhao
@ 2016-01-28 19:15     ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 19:15 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai

On Wed, Jan 27, 2016 at 11:51:42AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> According to the ARMv8 spec, writing 1 to PMCR.E enables all counters
> that are selected by PMCNTENSET, while writing 0 to PMCR.E disables
> them. Writing 1 to PMCR.P resets all event counters, not including
> PMCCNTR, to zero. Writing 1 to PMCR.C resets PMCCNTR to zero.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kvm/sys_regs.c |  1 +
>  include/kvm/arm_pmu.h     |  2 ++
>  virt/kvm/arm/pmu.c        | 42 ++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 45 insertions(+)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f45c227..eefc60a 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -467,6 +467,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  		val &= ~ARMV8_PMCR_MASK;
>  		val |= p->regval & ARMV8_PMCR_MASK;
>  		vcpu_sys_reg(vcpu, PMCR_EL0) = val;
> +		kvm_pmu_handle_pmcr(vcpu, val);
>  	} else {
>  		/* PMCR.P & PMCR.C are RAZ */
>  		val = vcpu_sys_reg(vcpu, PMCR_EL0)
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index caa706e..5bed00c 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -42,6 +42,7 @@ void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val);
>  void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val);
>  void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val);
>  void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val);
> +void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
>  void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
>  				    u64 select_idx);
>  #else
> @@ -62,6 +63,7 @@ static inline void kvm_pmu_disable_counter(struct kvm_vcpu *vcpu, u64 val) {}
>  static inline void kvm_pmu_enable_counter(struct kvm_vcpu *vcpu, u64 val) {}
>  static inline void kvm_pmu_overflow_set(struct kvm_vcpu *vcpu, u64 val) {}
>  static inline void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val) {}
> +static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
>  static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
>  						  u64 data, u64 select_idx) {}
>  #endif
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 706c935..d411f3f 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -190,6 +190,48 @@ void kvm_pmu_software_increment(struct kvm_vcpu *vcpu, u64 val)
>  	}
>  }
>  
> +/**
> + * kvm_pmu_handle_pmcr - handle PMCR register
> + * @vcpu: The vcpu pointer
> + * @val: the value guest writes to PMCR register
> + */
> +void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
> +{
> +	struct kvm_pmu *pmu = &vcpu->arch.pmu;
> +	struct kvm_pmc *pmc;
> +	u64 mask;
> +	int i;
> +
> +	mask = kvm_pmu_valid_counter_mask(vcpu);
> +	if (val & ARMV8_PMCR_E) {
> +		kvm_pmu_enable_counter(vcpu,
> +				     vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);

nit: sort of an ugly indentation. I don't think the vcpu_sys_reg needs
to line up with the vcpu.
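
e.g. (just a formatting suggestion):

		kvm_pmu_enable_counter(vcpu,
			vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & mask);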

> +	} else {
> +		kvm_pmu_disable_counter(vcpu, mask);
> +	}
> +
> +	if (val & ARMV8_PMCR_C) {
> +		pmc = &pmu->pmc[ARMV8_CYCLE_IDX];
> +		if (pmc->perf_event)
> +			local64_set(&pmc->perf_event->count, 0);
> +		vcpu_sys_reg(vcpu, PMCCNTR_EL0) = 0;
> +	}
> +
> +	if (val & ARMV8_PMCR_P) {
> +		for (i = 0; i < ARMV8_CYCLE_IDX; i++) {
> +			pmc = &pmu->pmc[i];
> +			if (pmc->perf_event)
> +				local64_set(&pmc->perf_event->count, 0);
> +			vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) = 0;
> +		}
> +	}

The local64_set's surprise me. Patch 9/21 seems to go out of its way to
allow the perf_event count to be whatever it happens to be, but then
calculate the appropriate base to modify it with when the register is
written by the guest. Here we're just simply setting both the perf_event
counter and the register to zero. Shouldn't we be going through some perf
API for the zeroing of its counter, and then do the same thing as patch
9/21 does to set the register?
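
Something along these lines, maybe (untested sketch, zeroing the
guest-visible value the way patch 9/21 computes it, rather than poking
the perf_event count directly):

	if (val & ARMV8_PMCR_P) {
		for (i = 0; i < ARMV8_CYCLE_IDX; i++) {
			u64 cnt = kvm_pmu_get_counter_value(vcpu, i);
			vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) -= cnt;
		}
	}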

> +
> +	if (val & ARMV8_PMCR_LC) {
> +		pmc = &pmu->pmc[ARMV8_CYCLE_IDX];
> +		pmc->bitmask = 0xffffffffffffffffUL;
> +	}
> +}
> +
>  static bool kvm_pmu_counter_is_enabled(struct kvm_vcpu *vcpu, u64 select_idx)
>  {
>  	return (vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMCR_E) &&
> -- 
> 2.0.4
> 

* Re: [PATCH v10 15/21] KVM: ARM64: Add access handler for PMUSERENR register
  2016-01-27  3:51   ` Shannon Zhao
@ 2016-01-28 19:58     ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 19:58 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai

On Wed, Jan 27, 2016 at 11:51:43AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> This register resets as unknown in 64bit mode while it resets as zero
> in 32bit mode. Here we choose to reset it as zero for consistency.
> 
> PMUSERENR_EL0 holds some bits which decide whether PMU registers can be
> accessed from EL0. Add some check helpers to handle the access from EL0.
> 
> When these bits are zero, only reading PMUSERENR will trap to EL2 and
> writing PMUSERENR or reading/writing other PMU registers will trap to
> EL1 rather than EL2 when HCR.TGE==0. With the current KVM configuration
> (HCR.TGE==0) there is no way to receive these traps. Here we write 0xf to
> physical PMUSERENR register on VM entry, so that it will trap PMU access
> from EL0 to EL2. Within the register access handler we check the real
> value of guest PMUSERENR register to decide whether this access is
> allowed. If not allowed, return false to inject UND to guest.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/include/asm/pmu.h |   9 ++++
>  arch/arm64/kvm/hyp/hyp.h     |   1 +
>  arch/arm64/kvm/hyp/switch.c  |   3 ++
>  arch/arm64/kvm/sys_regs.c    | 100 ++++++++++++++++++++++++++++++++++++++++---
>  4 files changed, 107 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
> index 6f14a01..eb3dc88 100644
> --- a/arch/arm64/include/asm/pmu.h
> +++ b/arch/arm64/include/asm/pmu.h
> @@ -69,4 +69,13 @@
>  #define	ARMV8_EXCLUDE_EL0	(1 << 30)
>  #define	ARMV8_INCLUDE_EL2	(1 << 27)
>  
> +/*
> + * PMUSERENR: user enable reg
> + */
> +#define ARMV8_USERENR_MASK	0xf		/* Mask for writable bits */
> +#define ARMV8_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
> +#define ARMV8_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
> +#define ARMV8_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
> +#define ARMV8_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
> +
>  #endif /* __ASM_PMU_H */
> diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
> index fb27517..9a28b7bd8 100644
> --- a/arch/arm64/kvm/hyp/hyp.h
> +++ b/arch/arm64/kvm/hyp/hyp.h
> @@ -22,6 +22,7 @@
>  #include <linux/kvm_host.h>
>  #include <asm/kvm_mmu.h>
>  #include <asm/sysreg.h>
> +#include <asm/pmu.h>
>  
>  #define __hyp_text __section(.hyp.text) notrace
>  
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index ca8f5a5..1a7d679 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -37,6 +37,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
>  	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
>  	write_sysreg(1 << 15, hstr_el2);
>  	write_sysreg(CPTR_EL2_TTA | CPTR_EL2_TFP, cptr_el2);
> +	/* Make sure we trap PMU access from EL0 to EL2 */
> +	write_sysreg(ARMV8_USERENR_MASK, pmuserenr_el0);
>  	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
>  }
>  
> @@ -45,6 +47,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
>  	write_sysreg(HCR_RW, hcr_el2);
>  	write_sysreg(0, hstr_el2);
>  	write_sysreg(read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK, mdcr_el2);
> +	write_sysreg(0, pmuserenr_el0);
>  	write_sysreg(0, cptr_el2);
>  }
>  
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index eefc60a..084e527 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -453,6 +453,37 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  	vcpu_sys_reg(vcpu, PMCR_EL0) = val;
>  }
>  
> +static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
> +{
> +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
> +
> +	return !((reg & ARMV8_USERENR_EN) || vcpu_mode_priv(vcpu));
> +}
> +
> +static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
> +{
> +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
> +
> +	return !((reg & (ARMV8_USERENR_SW | ARMV8_USERENR_EN))
> +		 || vcpu_mode_priv(vcpu));
> +}
> +
> +static bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu)
> +{
> +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
> +
> +	return !((reg & (ARMV8_USERENR_CR | ARMV8_USERENR_EN))
> +		 || vcpu_mode_priv(vcpu));
> +}
> +
> +static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
> +{
> +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
> +
> +	return !((reg & (ARMV8_USERENR_ER | ARMV8_USERENR_EN))
> +		 || vcpu_mode_priv(vcpu));
> +}
> +
>  static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  			const struct sys_reg_desc *r)
>  {
> @@ -461,6 +492,9 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (pmu_access_el0_disabled(vcpu))
> +		return false;

Based on the function name I'm not sure I like embedding vcpu_mode_priv.
It seems a condition like

  if (!vcpu_mode_priv(vcpu) && !pmu_access_el0_enabled(vcpu))
      return false;

would be more clear here and the other callsites below. (I also prefer
checking for enabled vs. disabled)
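
where pmu_access_el0_enabled() could be as simple as (untested):

	static bool pmu_access_el0_enabled(struct kvm_vcpu *vcpu)
	{
		return vcpu_sys_reg(vcpu, PMUSERENR_EL0) & ARMV8_USERENR_EN;
	}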

> +
>  	if (p->is_write) {
>  		/* Only update writeable bits of PMCR */
>  		val = vcpu_sys_reg(vcpu, PMCR_EL0);
> @@ -484,6 +518,9 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (pmu_access_event_counter_el0_disabled(vcpu))
> +		return false;
> +
>  	if (p->is_write)
>  		vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
>  	else
> @@ -501,7 +538,7 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> -	if (p->is_write)
> +	if (p->is_write || pmu_access_el0_disabled(vcpu))
>  		return false;
>  
>  	if (!(p->Op2 & 1))
> @@ -534,6 +571,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (pmu_access_el0_disabled(vcpu))
> +		return false;
> +
>  	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
>  		/* PMXEVTYPER_EL0 */
>  		idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
> @@ -574,11 +614,17 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
>  	if (r->CRn == 9 && r->CRm == 13) {
>  		if (r->Op2 == 2) {
>  			/* PMXEVCNTR_EL0 */
> +			if (pmu_access_event_counter_el0_disabled(vcpu))
> +				return false;
> +
>  			idx = vcpu_sys_reg(vcpu, PMSELR_EL0)
>  			      & ARMV8_COUNTER_MASK;
>  			reg = PMEVCNTR0_EL0 + idx;
>  		} else if (r->Op2 == 0) {
>  			/* PMCCNTR_EL0 */
> +			if (pmu_access_cycle_counter_el0_disabled(vcpu))
> +				return false;
> +
>  			idx = ARMV8_CYCLE_IDX;
>  			reg = PMCCNTR_EL0;
>  		} else {
> @@ -586,6 +632,9 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
>  		}
>  	} else if (r->CRn == 14 && (r->CRm & 12) == 8) {
>  		/* PMEVCNTRn_EL0 */
> +		if (pmu_access_event_counter_el0_disabled(vcpu))
> +			return false;
> +
>  		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
>  		reg = PMEVCNTR0_EL0 + idx;
>  	} else {
> @@ -596,10 +645,14 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
>  		return false;
>  
>  	val = kvm_pmu_get_counter_value(vcpu, idx);
> -	if (p->is_write)
> +	if (p->is_write) {
> +		if (pmu_access_el0_disabled(vcpu))
> +			return false;
> +

This check isn't necessary because at this point we've either already
checked ARMV8_USERENR_EN with one of the other tests, or we've BUGed.

>  		vcpu_sys_reg(vcpu, reg) += (s64)p->regval - val;
> -	else
> +	} else {
>  		p->regval = val;
> +	}

It's nasty to have to add 3 checks to access_pmu_evcntr. Can we instead
just have another helper that takes a reg_idx argument, i.e.

static bool pmu_reg_access_el0_disabled(struct kvm_vcpu *vcpu, u64 idx)
{
	if (idx == PMCCNTR_EL0)
		return pmu_access_cycle_counter_el0_disabled(vcpu);
	if (idx >= PMEVCNTR0_EL0 && idx <= PMEVCNTR30_EL0)
		return pmu_access_event_counter_el0_disabled(vcpu);
	return pmu_access_el0_disabled(vcpu);
}

and call it once after the pmu_counter_idx_valid check?
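
The call site would then be something like (untested, assuming the
pmu_counter_idx_valid check stays as is):

	if (!pmu_counter_idx_valid(vcpu, idx))
		return false;

	if (pmu_reg_access_el0_disabled(vcpu, reg))
		return false;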

>  
>  	return true;
>  }
> @@ -612,6 +665,9 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (pmu_access_el0_disabled(vcpu))
> +		return false;
> +
>  	mask = kvm_pmu_valid_counter_mask(vcpu);
>  	if (p->is_write) {
>  		val = p->regval & mask;
> @@ -639,6 +695,9 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (!vcpu_mode_priv(vcpu))
> +		return false;
> +
>  	if (p->is_write) {
>  		if (r->Op2 & 0x1)
>  			/* accessing PMINTENSET_EL1 */
> @@ -663,6 +722,9 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (pmu_access_el0_disabled(vcpu))
> +		return false;
> +
>  	if (p->is_write) {
>  		if (r->CRm & 0x2)
>  			/* accessing PMOVSSET_EL0 */
> @@ -685,6 +747,9 @@ static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (pmu_write_swinc_el0_disabled(vcpu))
> +		return false;
> +
>  	if (p->is_write) {
>  		mask = kvm_pmu_valid_counter_mask(vcpu);
>  		kvm_pmu_software_increment(vcpu, p->regval & mask);
> @@ -694,6 +759,26 @@ static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	return false;
>  }
>  
> +static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> +			     const struct sys_reg_desc *r)
> +{
> +	if (!kvm_arm_pmu_v3_ready(vcpu))
> +		return trap_raz_wi(vcpu, p, r);
> +
> +	if (p->is_write) {
> +		if (!vcpu_mode_priv(vcpu))
> +			return false;
> +
> +		vcpu_sys_reg(vcpu, PMUSERENR_EL0) = p->regval
> +						    & ARMV8_USERENR_MASK;
> +	} else {
> +		p->regval = vcpu_sys_reg(vcpu, PMUSERENR_EL0)
> +			    & ARMV8_USERENR_MASK;
> +	}
> +
> +	return true;
> +}
> +
>  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
>  	/* DBGBVRn_EL1 */						\
> @@ -923,9 +1008,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	/* PMXEVCNTR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
>  	  access_pmu_evcntr },
> -	/* PMUSERENR_EL0 */
> +	/* PMUSERENR_EL0
> +	 * This register resets as unknown in 64bit mode while it resets as zero
> +	 * in 32bit mode. Here we choose to reset it as zero for consistency.
> +	 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
> -	  trap_raz_wi },
> +	  access_pmuserenr, reset_val, PMUSERENR_EL0, 0 },
>  	/* PMOVSSET_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
>  	  access_pmovs, reset_unknown, PMOVSSET_EL0 },
> @@ -1250,7 +1338,7 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), access_pmu_evcntr },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_evtyper },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_evcntr },
> -	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
> +	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), access_pmuserenr },
>  	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pminten },
>  	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pminten },
>  	{ Op1( 0), CRn( 9), CRm(14), Op2( 3), access_pmovs },
> -- 
> 2.0.4
> 

* [PATCH v10 15/21] KVM: ARM64: Add access handler for PMUSERENR register
@ 2016-01-28 19:58     ` Andrew Jones
  0 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 19:58 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 27, 2016 at 11:51:43AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> This register resets as unknown in 64bit mode while it resets as zero
> in 32bit mode. Here we choose to reset it as zero for consistency.
> 
> PMUSERENR_EL0 holds some bits which decide whether PMU registers can be
> accessed from EL0. Add some check helpers to handle the access from EL0.
> 
> When these bits are zero, only reading PMUSERENR will trap to EL2 and
> writing PMUSERENR or reading/writing other PMU registers will trap to
> EL1 rather than EL2 when HCR.TGE==0. With the current KVM configuration
> (HCR.TGE==0) there is no way to receive these traps. Here we write 0xf to
> physical PMUSERENR register on VM entry, so that it will trap PMU access
> from EL0 to EL2. Within the register access handler we check the real
> value of guest PMUSERENR register to decide whether this access is
> allowed. If not allowed, return false to inject UND to guest.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/include/asm/pmu.h |   9 ++++
>  arch/arm64/kvm/hyp/hyp.h     |   1 +
>  arch/arm64/kvm/hyp/switch.c  |   3 ++
>  arch/arm64/kvm/sys_regs.c    | 100 ++++++++++++++++++++++++++++++++++++++++---
>  4 files changed, 107 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
> index 6f14a01..eb3dc88 100644
> --- a/arch/arm64/include/asm/pmu.h
> +++ b/arch/arm64/include/asm/pmu.h
> @@ -69,4 +69,13 @@
>  #define	ARMV8_EXCLUDE_EL0	(1 << 30)
>  #define	ARMV8_INCLUDE_EL2	(1 << 27)
>  
> +/*
> + * PMUSERENR: user enable reg
> + */
> +#define ARMV8_USERENR_MASK	0xf		/* Mask for writable bits */
> +#define ARMV8_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
> +#define ARMV8_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
> +#define ARMV8_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
> +#define ARMV8_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
> +
>  #endif /* __ASM_PMU_H */
> diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
> index fb27517..9a28b7bd8 100644
> --- a/arch/arm64/kvm/hyp/hyp.h
> +++ b/arch/arm64/kvm/hyp/hyp.h
> @@ -22,6 +22,7 @@
>  #include <linux/kvm_host.h>
>  #include <asm/kvm_mmu.h>
>  #include <asm/sysreg.h>
> +#include <asm/pmu.h>
>  
>  #define __hyp_text __section(.hyp.text) notrace
>  
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index ca8f5a5..1a7d679 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -37,6 +37,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
>  	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
>  	write_sysreg(1 << 15, hstr_el2);
>  	write_sysreg(CPTR_EL2_TTA | CPTR_EL2_TFP, cptr_el2);
> +	/* Make sure we trap PMU access from EL0 to EL2 */
> +	write_sysreg(ARMV8_USERENR_MASK, pmuserenr_el0);
>  	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
>  }
>  
> @@ -45,6 +47,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
>  	write_sysreg(HCR_RW, hcr_el2);
>  	write_sysreg(0, hstr_el2);
>  	write_sysreg(read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK, mdcr_el2);
> +	write_sysreg(0, pmuserenr_el0);
>  	write_sysreg(0, cptr_el2);
>  }
>  
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index eefc60a..084e527 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -453,6 +453,37 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  	vcpu_sys_reg(vcpu, PMCR_EL0) = val;
>  }
>  
> +static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
> +{
> +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
> +
> +	return !((reg & ARMV8_USERENR_EN) || vcpu_mode_priv(vcpu));
> +}
> +
> +static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
> +{
> +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
> +
> +	return !((reg & (ARMV8_USERENR_SW | ARMV8_USERENR_EN))
> +		 || vcpu_mode_priv(vcpu));
> +}
> +
> +static bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu)
> +{
> +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
> +
> +	return !((reg & (ARMV8_USERENR_CR | ARMV8_USERENR_EN))
> +		 || vcpu_mode_priv(vcpu));
> +}
> +
> +static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
> +{
> +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
> +
> +	return !((reg & (ARMV8_USERENR_ER | ARMV8_USERENR_EN))
> +		 || vcpu_mode_priv(vcpu));
> +}
> +
>  static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  			const struct sys_reg_desc *r)
>  {
> @@ -461,6 +492,9 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (pmu_access_el0_disabled(vcpu))
> +		return false;

Based on the function name I'm not sure I like embedding vcpu_mode_priv.
It seems a condition like

  if (!vcpu_mode_priv(vcpu) && !pmu_access_el0_enabled(vcpu))
      return false;

would be more clear here and the other callsites below. (I also prefer
checking for enabled vs. disabled)

> +
>  	if (p->is_write) {
>  		/* Only update writeable bits of PMCR */
>  		val = vcpu_sys_reg(vcpu, PMCR_EL0);
> @@ -484,6 +518,9 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (pmu_access_event_counter_el0_disabled(vcpu))
> +		return false;
> +
>  	if (p->is_write)
>  		vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
>  	else
> @@ -501,7 +538,7 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> -	if (p->is_write)
> +	if (p->is_write || pmu_access_el0_disabled(vcpu))
>  		return false;
>  
>  	if (!(p->Op2 & 1))
> @@ -534,6 +571,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (pmu_access_el0_disabled(vcpu))
> +		return false;
> +
>  	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
>  		/* PMXEVTYPER_EL0 */
>  		idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
> @@ -574,11 +614,17 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
>  	if (r->CRn == 9 && r->CRm == 13) {
>  		if (r->Op2 == 2) {
>  			/* PMXEVCNTR_EL0 */
> +			if (pmu_access_event_counter_el0_disabled(vcpu))
> +				return false;
> +
>  			idx = vcpu_sys_reg(vcpu, PMSELR_EL0)
>  			      & ARMV8_COUNTER_MASK;
>  			reg = PMEVCNTR0_EL0 + idx;
>  		} else if (r->Op2 == 0) {
>  			/* PMCCNTR_EL0 */
> +			if (pmu_access_cycle_counter_el0_disabled(vcpu))
> +				return false;
> +
>  			idx = ARMV8_CYCLE_IDX;
>  			reg = PMCCNTR_EL0;
>  		} else {
> @@ -586,6 +632,9 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
>  		}
>  	} else if (r->CRn == 14 && (r->CRm & 12) == 8) {
>  		/* PMEVCNTRn_EL0 */
> +		if (pmu_access_event_counter_el0_disabled(vcpu))
> +			return false;
> +
>  		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
>  		reg = PMEVCNTR0_EL0 + idx;
>  	} else {
> @@ -596,10 +645,14 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
>  		return false;
>  
>  	val = kvm_pmu_get_counter_value(vcpu, idx);
> -	if (p->is_write)
> +	if (p->is_write) {
> +		if (pmu_access_el0_disabled(vcpu))
> +			return false;
> +

This check isn't necessary, because at this point we've either already
checked ARMV8_USERENR_EN with one of the other tests, or we've hit the
BUG().
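
i.e. the tail of access_pmu_evcntr() could stay as simple as (sketch):

	if (p->is_write)
		vcpu_sys_reg(vcpu, reg) += (s64)p->regval - val;
	else
		p->regval = val;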

>  		vcpu_sys_reg(vcpu, reg) += (s64)p->regval - val;
> -	else
> +	} else {
>  		p->regval = val;
> +	}

It's nasty to have to add 3 checks to access_pmu_evcntr. Can we instead
just have another helper that takes a reg_idx argument, i.e.

static bool pmu_reg_access_el0_disabled(struct kvm_vcpu *vcpu, u64 reg)
{
	if (reg == PMCCNTR_EL0)
		return pmu_access_cycle_counter_el0_disabled(vcpu);
	if (reg >= PMEVCNTR0_EL0 && reg <= PMEVCNTR30_EL0)
		return pmu_access_event_counter_el0_disabled(vcpu);
	...
}

and call it once after the pmu_counter_idx_valid check?
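
Call-site sketch, with 'reg' being the sys reg index computed above:

	if (!pmu_counter_idx_valid(vcpu, idx))
		return false;

	if (pmu_reg_access_el0_disabled(vcpu, reg))
		return false;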

>  
>  	return true;
>  }
> @@ -612,6 +665,9 @@ static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (pmu_access_el0_disabled(vcpu))
> +		return false;
> +
>  	mask = kvm_pmu_valid_counter_mask(vcpu);
>  	if (p->is_write) {
>  		val = p->regval & mask;
> @@ -639,6 +695,9 @@ static bool access_pminten(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (!vcpu_mode_priv(vcpu))
> +		return false;
> +
>  	if (p->is_write) {
>  		if (r->Op2 & 0x1)
>  			/* accessing PMINTENSET_EL1 */
> @@ -663,6 +722,9 @@ static bool access_pmovs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (pmu_access_el0_disabled(vcpu))
> +		return false;
> +
>  	if (p->is_write) {
>  		if (r->CRm & 0x2)
>  			/* accessing PMOVSSET_EL0 */
> @@ -685,6 +747,9 @@ static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	if (!kvm_arm_pmu_v3_ready(vcpu))
>  		return trap_raz_wi(vcpu, p, r);
>  
> +	if (pmu_write_swinc_el0_disabled(vcpu))
> +		return false;
> +
>  	if (p->is_write) {
>  		mask = kvm_pmu_valid_counter_mask(vcpu);
>  		kvm_pmu_software_increment(vcpu, p->regval & mask);
> @@ -694,6 +759,26 @@ static bool access_pmswinc(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	return false;
>  }
>  
> +static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> +			     const struct sys_reg_desc *r)
> +{
> +	if (!kvm_arm_pmu_v3_ready(vcpu))
> +		return trap_raz_wi(vcpu, p, r);
> +
> +	if (p->is_write) {
> +		if (!vcpu_mode_priv(vcpu))
> +			return false;
> +
> +		vcpu_sys_reg(vcpu, PMUSERENR_EL0) = p->regval
> +						    & ARMV8_USERENR_MASK;
> +	} else {
> +		p->regval = vcpu_sys_reg(vcpu, PMUSERENR_EL0)
> +			    & ARMV8_USERENR_MASK;
> +	}
> +
> +	return true;
> +}
> +
>  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
>  	/* DBGBVRn_EL1 */						\
> @@ -923,9 +1008,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	/* PMXEVCNTR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
>  	  access_pmu_evcntr },
> -	/* PMUSERENR_EL0 */
> +	/* PMUSERENR_EL0
> +	 * This register resets as unknown in 64bit mode while it resets as zero
> +	 * in 32bit mode. Here we choose to reset it as zero for consistency.
> +	 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b000),
> -	  trap_raz_wi },
> +	  access_pmuserenr, reset_val, PMUSERENR_EL0, 0 },
>  	/* PMOVSSET_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1110), Op2(0b011),
>  	  access_pmovs, reset_unknown, PMOVSSET_EL0 },
> @@ -1250,7 +1338,7 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), access_pmu_evcntr },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_evtyper },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), access_pmu_evcntr },
> -	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
> +	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), access_pmuserenr },
>  	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), access_pminten },
>  	{ Op1( 0), CRn( 9), CRm(14), Op2( 2), access_pminten },
>  	{ Op1( 0), CRn( 9), CRm(14), Op2( 3), access_pmovs },
> -- 
> 2.0.4
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo at vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 05/21] KVM: ARM64: Add access handler for PMSELR register
  2016-01-27  3:51   ` Shannon Zhao
@ 2016-01-28 20:10     ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 20:10 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai

On Wed, Jan 27, 2016 at 11:51:33AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Since the reset value of PMSELR_EL0 is UNKNOWN, use reset_unknown for
> its reset handler. When reading PMSELR, return the PMSELR.SEL field to
> the guest.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 19 +++++++++++++++++--
>  1 file changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 97fea84..fc60041 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -477,6 +477,21 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	return true;
>  }
>  
> +static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> +			  const struct sys_reg_desc *r)
> +{
> +	if (!kvm_arm_pmu_v3_ready(vcpu))
> +		return trap_raz_wi(vcpu, p, r);
> +
> +	if (p->is_write)
> +		vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;

Why don't we mask p->regval here, when we write vcpu_sys_reg(), so that
we don't need to mask it every time we use it, as below and in later
patches?
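
i.e. (sketch):

	if (p->is_write)
		vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval & ARMV8_COUNTER_MASK;
	else
		p->regval = vcpu_sys_reg(vcpu, PMSELR_EL0);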

> +	else
> +		/* return PMSELR.SEL field */
> +		p->regval = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
> +
> +	return true;
> +}
> +
>  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
>  	/* DBGBVRn_EL1 */						\
> @@ -676,7 +691,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	  trap_raz_wi },
>  	/* PMSELR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b101),
> -	  trap_raz_wi },
> +	  access_pmselr, reset_unknown, PMSELR_EL0 },
>  	/* PMCEID0_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
>  	  trap_raz_wi },
> @@ -927,7 +942,7 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
> -	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), trap_raz_wi },
> +	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
> -- 
> 2.0.4
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 08/21] KVM: ARM64: Add access handler for event type register
  2016-01-27  3:51   ` Shannon Zhao
@ 2016-01-28 20:11     ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 20:11 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai

On Wed, Jan 27, 2016 at 11:51:36AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> These kinds of registers include PMEVTYPERn, PMCCFILTR and PMXEVTYPER,
> the last of which is mapped to PMEVTYPERn or PMCCFILTR.
> 
> The access handler translates all AArch32 register offsets to AArch64
> ones and uses vcpu_sys_reg() to access their values, which avoids having
> to deal with endianness.
> 
> When writing to these registers, create a perf_event for the selected
> event type.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 140 +++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 138 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 06257e2..298ae94 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -513,6 +513,54 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	return true;
>  }
>  
> +static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
> +{
> +	u64 pmcr, val;
> +
> +	pmcr = vcpu_sys_reg(vcpu, PMCR_EL0);
> +	val = (pmcr >> ARMV8_PMCR_N_SHIFT) & ARMV8_PMCR_N_MASK;
> +	if (idx >= val && idx != ARMV8_CYCLE_IDX)
> +		return false;
> +
> +	return true;
> +}
> +
> +static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> +			       const struct sys_reg_desc *r)
> +{
> +	u64 idx, reg;
> +
> +	if (!kvm_arm_pmu_v3_ready(vcpu))
> +		return trap_raz_wi(vcpu, p, r);
> +
> +	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
> +		/* PMXEVTYPER_EL0 */
> +		idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
> +		reg = PMEVTYPER0_EL0 + idx;
> +	} else if (r->CRn == 14 && (r->CRm & 12) == 12) {
> +		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
> +		if (idx == ARMV8_CYCLE_IDX)
> +			reg = PMCCFILTR_EL0;
> +		else
> +			/* PMEVTYPERn_EL0 */
> +			reg = PMEVTYPER0_EL0 + idx;
> +	} else {
> +		BUG();
> +	}
> +
> +	if (!pmu_counter_idx_valid(vcpu, idx))
> +		return false;
> +
> +	if (p->is_write) {
> +		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
> +		vcpu_sys_reg(vcpu, reg) = p->regval & ARMV8_EVTYPE_MASK;
> +	} else {
> +		p->regval = vcpu_sys_reg(vcpu, reg) & ARMV8_EVTYPE_MASK;

Related to my comment in 5/21. Why should we need to mask it here when
reading it, since it was masked on writing?
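
i.e. the read side could then simply be (sketch, assuming writes keep
being masked with ARMV8_EVTYPE_MASK):

	p->regval = vcpu_sys_reg(vcpu, reg);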

> +	}
> +
> +	return true;
> +}
> +
>  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
>  	/* DBGBVRn_EL1 */						\
> @@ -528,6 +576,13 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	{ Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b111),	\
>  	  trap_wcr, reset_wcr, n, 0,  get_wcr, set_wcr }
>  
> +/* Macro to expand the PMEVTYPERn_EL0 register */
> +#define PMU_PMEVTYPER_EL0(n)						\
> +	/* PMEVTYPERn_EL0 */						\
> +	{ Op0(0b11), Op1(0b011), CRn(0b1110),				\
> +	  CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)),		\
> +	  access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), }
> +
>  /*
>   * Architected system registers.
>   * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
> @@ -724,7 +779,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	  trap_raz_wi },
>  	/* PMXEVTYPER_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b001),
> -	  trap_raz_wi },
> +	  access_pmu_evtyper },
>  	/* PMXEVCNTR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b010),
>  	  trap_raz_wi },
> @@ -742,6 +797,45 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	{ Op0(0b11), Op1(0b011), CRn(0b1101), CRm(0b0000), Op2(0b011),
>  	  NULL, reset_unknown, TPIDRRO_EL0 },
>  
> +	/* PMEVTYPERn_EL0 */
> +	PMU_PMEVTYPER_EL0(0),
> +	PMU_PMEVTYPER_EL0(1),
> +	PMU_PMEVTYPER_EL0(2),
> +	PMU_PMEVTYPER_EL0(3),
> +	PMU_PMEVTYPER_EL0(4),
> +	PMU_PMEVTYPER_EL0(5),
> +	PMU_PMEVTYPER_EL0(6),
> +	PMU_PMEVTYPER_EL0(7),
> +	PMU_PMEVTYPER_EL0(8),
> +	PMU_PMEVTYPER_EL0(9),
> +	PMU_PMEVTYPER_EL0(10),
> +	PMU_PMEVTYPER_EL0(11),
> +	PMU_PMEVTYPER_EL0(12),
> +	PMU_PMEVTYPER_EL0(13),
> +	PMU_PMEVTYPER_EL0(14),
> +	PMU_PMEVTYPER_EL0(15),
> +	PMU_PMEVTYPER_EL0(16),
> +	PMU_PMEVTYPER_EL0(17),
> +	PMU_PMEVTYPER_EL0(18),
> +	PMU_PMEVTYPER_EL0(19),
> +	PMU_PMEVTYPER_EL0(20),
> +	PMU_PMEVTYPER_EL0(21),
> +	PMU_PMEVTYPER_EL0(22),
> +	PMU_PMEVTYPER_EL0(23),
> +	PMU_PMEVTYPER_EL0(24),
> +	PMU_PMEVTYPER_EL0(25),
> +	PMU_PMEVTYPER_EL0(26),
> +	PMU_PMEVTYPER_EL0(27),
> +	PMU_PMEVTYPER_EL0(28),
> +	PMU_PMEVTYPER_EL0(29),
> +	PMU_PMEVTYPER_EL0(30),
> +	/* PMCCFILTR_EL0
> +	 * This register resets as unknown in 64bit mode while it resets as zero
> +	 * in 32bit mode. Here we choose to reset it as zero for consistency.
> +	 */
> +	{ Op0(0b11), Op1(0b011), CRn(0b1110), CRm(0b1111), Op2(0b111),
> +	  access_pmu_evtyper, reset_val, PMCCFILTR_EL0, 0 },
> +
>  	/* DACR32_EL2 */
>  	{ Op0(0b11), Op1(0b100), CRn(0b0011), CRm(0b0000), Op2(0b000),
>  	  NULL, reset_unknown, DACR32_EL2 },
> @@ -931,6 +1025,13 @@ static const struct sys_reg_desc cp14_64_regs[] = {
>  	{ Op1( 0), CRm( 2), .access = trap_raz_wi },
>  };
>  
> +/* Macro to expand the PMEVTYPERn register */
> +#define PMU_PMEVTYPER(n)						\
> +	/* PMEVTYPERn */						\
> +	{ Op1(0), CRn(0b1110),						\
> +	  CRm((0b1100 | (((n) >> 3) & 0x3))), Op2(((n) & 0x7)),		\
> +	  access_pmu_evtyper }
> +
>  /*
>   * Trapped cp15 registers. TTBR0/TTBR1 get a double encoding,
>   * depending on the way they are accessed (as a 32bit or a 64bit
> @@ -967,7 +1068,7 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
> -	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
> +	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), access_pmu_evtyper },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(14), Op2( 0), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(14), Op2( 1), trap_raz_wi },
> @@ -982,6 +1083,41 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn(12), CRm(12), Op2( 5), trap_raz_wi },
>  
>  	{ Op1( 0), CRn(13), CRm( 0), Op2( 1), access_vm_reg, NULL, c13_CID },
> +
> +	/* PMEVTYPERn */
> +	PMU_PMEVTYPER(0),
> +	PMU_PMEVTYPER(1),
> +	PMU_PMEVTYPER(2),
> +	PMU_PMEVTYPER(3),
> +	PMU_PMEVTYPER(4),
> +	PMU_PMEVTYPER(5),
> +	PMU_PMEVTYPER(6),
> +	PMU_PMEVTYPER(7),
> +	PMU_PMEVTYPER(8),
> +	PMU_PMEVTYPER(9),
> +	PMU_PMEVTYPER(10),
> +	PMU_PMEVTYPER(11),
> +	PMU_PMEVTYPER(12),
> +	PMU_PMEVTYPER(13),
> +	PMU_PMEVTYPER(14),
> +	PMU_PMEVTYPER(15),
> +	PMU_PMEVTYPER(16),
> +	PMU_PMEVTYPER(17),
> +	PMU_PMEVTYPER(18),
> +	PMU_PMEVTYPER(19),
> +	PMU_PMEVTYPER(20),
> +	PMU_PMEVTYPER(21),
> +	PMU_PMEVTYPER(22),
> +	PMU_PMEVTYPER(23),
> +	PMU_PMEVTYPER(24),
> +	PMU_PMEVTYPER(25),
> +	PMU_PMEVTYPER(26),
> +	PMU_PMEVTYPER(27),
> +	PMU_PMEVTYPER(28),
> +	PMU_PMEVTYPER(29),
> +	PMU_PMEVTYPER(30),
> +	/* PMCCFILTR */
> +	{ Op1(0), CRn(14), CRm(15), Op2(7), access_pmu_evtyper },
>  };
>  
>  static const struct sys_reg_desc cp15_64_regs[] = {
> -- 
> 2.0.4
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 06/21] KVM: ARM64: Add access handler for PMCEID0 and PMCEID1 register
  2016-01-27  3:51   ` Shannon Zhao
@ 2016-01-28 20:34     ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 20:34 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai

On Wed, Jan 27, 2016 at 11:51:34AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Add an access handler which gets the host value of PMCEID0 or PMCEID1
> when the guest accesses these registers. Writing to PMCEID0 or PMCEID1
> is UNDEFINED.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 29 +++++++++++++++++++++++++----
>  1 file changed, 25 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index fc60041..06257e2 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -492,6 +492,27 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>  	return true;
>  }
>  
> +static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> +			  const struct sys_reg_desc *r)
> +{
> +	u64 pmceid;
> +
> +	if (!kvm_arm_pmu_v3_ready(vcpu))
> +		return trap_raz_wi(vcpu, p, r);
> +
> +	if (p->is_write)
> +		return false;
> +
> +	if (!(p->Op2 & 1))
> +		asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
> +	else
> +		asm volatile("mrs %0, pmceid1_el0\n" : "=r" (pmceid));

For migratability concerns we may want to filter some of these events.
With that in mind, the answer to my question in 4/21 is 'no'. Instead we
should pick an IMP,IDCODE,N,PMCEID0_EL0,PMCEID1_EL0 set that we expect
to represent the least common denominator of all the platforms available
now, and then only expose that view to the guest. If we want to support
more events, and userspace requests it for the guest, then we can relax
the filtering (at the expense of migratability) when the host has the
support.

I don't know what a reasonable first filter is. Maybe D5.10.6 "Required
events" is enough for the base?
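
Strawman of what the filtering could look like, with common_pmceid0/1
standing in for whatever baseline masks we eventually settle on:

	/* Placeholder masks: no filtering until a base event set exists */
	u64 common_pmceid0 = ~0UL, common_pmceid1 = ~0UL;

	if (!(p->Op2 & 1)) {
		asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
		pmceid &= common_pmceid0;
	} else {
		asm volatile("mrs %0, pmceid1_el0\n" : "=r" (pmceid));
		pmceid &= common_pmceid1;
	}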

> +
> +	p->regval = pmceid;
> +
> +	return true;
> +}
> +
>  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
>  	/* DBGBVRn_EL1 */						\
> @@ -694,10 +715,10 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	  access_pmselr, reset_unknown, PMSELR_EL0 },
>  	/* PMCEID0_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b110),
> -	  trap_raz_wi },
> +	  access_pmceid },
>  	/* PMCEID1_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b111),
> -	  trap_raz_wi },
> +	  access_pmceid },
>  	/* PMCCNTR_EL0 */
>  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1101), Op2(0b000),
>  	  trap_raz_wi },
> @@ -943,8 +964,8 @@ static const struct sys_reg_desc cp15_regs[] = {
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(12), Op2( 5), access_pmselr },
> -	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), trap_raz_wi },
> -	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), trap_raz_wi },
> +	{ Op1( 0), CRn( 9), CRm(12), Op2( 6), access_pmceid },
> +	{ Op1( 0), CRn( 9), CRm(12), Op2( 7), access_pmceid },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 0), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 1), trap_raz_wi },
>  	{ Op1( 0), CRn( 9), CRm(13), Op2( 2), trap_raz_wi },
> -- 
> 2.0.4
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 04/21] KVM: ARM64: Add access handler for PMCR register
  2016-01-28 15:36     ` Andrew Jones
@ 2016-01-28 20:43       ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 20:43 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai

On Thu, Jan 28, 2016 at 04:36:35PM +0100, Andrew Jones wrote:
> On Wed, Jan 27, 2016 at 11:51:32AM +0800, Shannon Zhao wrote:
> > From: Shannon Zhao <shannon.zhao@linaro.org>
> > 
> > Add a reset handler which gets the host value of PMCR_EL0 and makes
> > the writable bits architecturally UNKNOWN, except for PMCR.E, which
> > resets to zero. Add an access handler for PMCR.
> > 
> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> > ---
> >  arch/arm64/kvm/sys_regs.c | 42 ++++++++++++++++++++++++++++++++++++++++--
> >  include/kvm/arm_pmu.h     |  4 ++++
> >  2 files changed, 44 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index eec3598..97fea84 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -34,6 +34,7 @@
> >  #include <asm/kvm_emulate.h>
> >  #include <asm/kvm_host.h>
> >  #include <asm/kvm_mmu.h>
> > +#include <asm/pmu.h>
> >  
> >  #include <trace/events/kvm.h>
> >  
> > @@ -439,6 +440,43 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> >  	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
> >  }
> >  
> > +static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 pmcr, val;
> > +
> > +	asm volatile("mrs %0, pmcr_el0\n" : "=r" (pmcr));
> > +	/* Writable bits of PMCR_EL0 (ARMV8_PMCR_MASK) are reset to UNKNOWN
> > +	 * except PMCR.E, which resets to zero.
> > +	 */
> > +	val = ((pmcr & ~ARMV8_PMCR_MASK) | (ARMV8_PMCR_MASK & 0xdecafbad))
> > +	      & (~ARMV8_PMCR_E);
> > +	vcpu_sys_reg(vcpu, PMCR_EL0) = val;
> > +}
> > +
> > +static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> > +			const struct sys_reg_desc *r)
> > +{
> > +	u64 val;
> > +
> > +	if (!kvm_arm_pmu_v3_ready(vcpu))
> > +		return trap_raz_wi(vcpu, p, r);
> > +
> > +	if (p->is_write) {
> > +		/* Only update writeable bits of PMCR */
> > +		val = vcpu_sys_reg(vcpu, PMCR_EL0);
> > +		val &= ~ARMV8_PMCR_MASK;
> > +		val |= p->regval & ARMV8_PMCR_MASK;
> > +		vcpu_sys_reg(vcpu, PMCR_EL0) = val;
> > +	} else {
> > +		/* PMCR.P & PMCR.C are RAZ */
> > +		val = vcpu_sys_reg(vcpu, PMCR_EL0)
> > +		      & ~(ARMV8_PMCR_P | ARMV8_PMCR_C);
> > +		p->regval = val;
> 
> Should we also be setting the IMP, IDCODE, and N fields here to the
> values of the host PE?

Not sure how I skimmed over reset_pmcr doing this when I first read it.
I'm now wondering, though, whether we want to always expose the host's
IMP, IDCODE, and N (migration concerns). Although we already have a ton
of invariant sys regs... So I guess this is a bridge to burn another
day.
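
If we ever do burn it, something along these lines in reset_pmcr() could
work (strawman; the field positions come from the PMCR_EL0 layout, and
KVM_VPMU_PMCR_ID is an invented placeholder for a synthetic ID):

	/* Replace the host's IMP[31:24], IDCODE[23:16] and N[15:11] */
	val &= ~(GENMASK(31, 24) | GENMASK(23, 16) |
		 ((u64)ARMV8_PMCR_N_MASK << ARMV8_PMCR_N_SHIFT));
	val |= KVM_VPMU_PMCR_ID;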

> 
> > +	}
> > +
> > +	return true;
> > +}
> > +
> >  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
> >  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
> >  	/* DBGBVRn_EL1 */						\
> > @@ -623,7 +661,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> >  
> >  	/* PMCR_EL0 */
> >  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b000),
> > -	  trap_raz_wi },
> > +	  access_pmcr, reset_pmcr, },
> >  	/* PMCNTENSET_EL0 */
> >  	{ Op0(0b11), Op1(0b011), CRn(0b1001), CRm(0b1100), Op2(0b001),
> >  	  trap_raz_wi },
> > @@ -885,7 +923,7 @@ static const struct sys_reg_desc cp15_regs[] = {
> >  	{ Op1( 0), CRn( 7), CRm(14), Op2( 2), access_dcsw },
> >  
> >  	/* PMU */
> > -	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), trap_raz_wi },
> > +	{ Op1( 0), CRn( 9), CRm(12), Op2( 0), access_pmcr },
> >  	{ Op1( 0), CRn( 9), CRm(12), Op2( 1), trap_raz_wi },
> >  	{ Op1( 0), CRn( 9), CRm(12), Op2( 2), trap_raz_wi },
> >  	{ Op1( 0), CRn( 9), CRm(12), Op2( 3), trap_raz_wi },
> > diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> > index be220ee..32fee2d 100644
> > --- a/include/kvm/arm_pmu.h
> > +++ b/include/kvm/arm_pmu.h
> > @@ -34,9 +34,13 @@ struct kvm_pmu {
> >  	struct kvm_pmc pmc[ARMV8_MAX_COUNTERS];
> >  	bool ready;
> >  };
> > +
> > +#define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
> >  #else
> >  struct kvm_pmu {
> >  };
> > +
> > +#define kvm_arm_pmu_v3_ready(v)		(false)
> >  #endif
> >  
> >  #endif
> > -- 
> > 2.0.4
> > 
> > 
> > --
> > To unsubscribe from this list: send the line "unsubscribe kvm" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 19/21] KVM: ARM64: Add a new feature bit for PMUv3
  2016-01-27  3:51   ` Shannon Zhao
@ 2016-01-28 20:54     ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 20:54 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai,
	Peter Maydell

On Wed, Jan 27, 2016 at 11:51:47AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> To support guest PMUv3, use one bit of the VCPU INIT feature array.
> Initialize the PMU when initializing the vcpu with that bit and the PMU
> overflow interrupt set.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> Acked-by: Peter Maydell <peter.maydell@linaro.org>
> ---
> CC: Peter Maydell <peter.maydell@linaro.org>
> ---
>  Documentation/virtual/kvm/api.txt | 2 ++
>  arch/arm64/include/asm/kvm_host.h | 2 +-
>  arch/arm64/include/uapi/asm/kvm.h | 1 +
>  arch/arm64/kvm/reset.c            | 3 +++
>  include/kvm/arm_pmu.h             | 2 ++
>  include/uapi/linux/kvm.h          | 1 +
>  virt/kvm/arm/pmu.c                | 9 +++++++++
>  7 files changed, 19 insertions(+), 1 deletion(-)
>

Reviewed-by: Andrew Jones <drjones@redhat.com> 

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 21/21] KVM: ARM64: Add a new vcpu device control group for PMUv3
  2016-01-27  3:51   ` Shannon Zhao
@ 2016-01-28 21:12     ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 21:12 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai,
	Peter Maydell

On Wed, Jan 27, 2016 at 11:51:49AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> To configure the virtual PMUv3 overflow interrupt number, we use the
> vcpu kvm_device ioctl, encapsulating the KVM_ARM_VCPU_PMU_V3_IRQ
> attribute within the KVM_ARM_VCPU_PMU_V3_CTRL group.
> 
> After configuring the PMUv3, call the vcpu ioctl with attribute
> KVM_ARM_VCPU_PMU_V3_INIT to initialize the PMUv3.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> Acked-by: Peter Maydell <peter.maydell@linaro.org>
> ---
> CC: Peter Maydell <peter.maydell@linaro.org>
> ---
>  Documentation/virtual/kvm/devices/vcpu.txt |  24 ++++++
>  arch/arm/include/asm/kvm_host.h            |  15 ++++
>  arch/arm/kvm/arm.c                         |   3 +
>  arch/arm64/include/asm/kvm_host.h          |   6 ++
>  arch/arm64/include/uapi/asm/kvm.h          |   5 ++
>  arch/arm64/kvm/guest.c                     |  51 ++++++++++++
>  include/kvm/arm_pmu.h                      |  23 ++++++
>  virt/kvm/arm/pmu.c                         | 128 +++++++++++++++++++++++++++++
>  8 files changed, 255 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/devices/vcpu.txt b/Documentation/virtual/kvm/devices/vcpu.txt
> index 3cc59c5..d626237 100644
> --- a/Documentation/virtual/kvm/devices/vcpu.txt
> +++ b/Documentation/virtual/kvm/devices/vcpu.txt
> @@ -6,3 +6,27 @@ KVM_GET_DEVICE_ATTR, and KVM_HAS_DEVICE_ATTR. The interface uses the same struct
>  kvm_device_attr as other devices, but targets VCPU-wide settings and controls.
>  
>  The groups and attributes per virtual cpu, if any, are architecture specific.
> +
> +1. GROUP: KVM_ARM_VCPU_PMU_V3_CTRL
> +Architectures: ARM64
> +
> +1.1. ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_IRQ
> +Parameters: in kvm_device_attr.addr the address for PMU overflow interrupt
> +Returns: -EBUSY: The PMU overflow interrupt is already set
> +         -ENXIO: The overflow interrupt not set when attempting to get it
> +         -ENODEV: PMUv3 not supported
> +         -EINVAL: Invalid PMU overflow interrupt number supplied
> +
> +A value describing the PMUv3 (Performance Monitor Unit v3) overflow interrupt
> +number for this vcpu. This interrupt can be a PPI or an SPI, but the interrupt
> +type must be the same for each vcpu. As a PPI, the interrupt number is the same
> +for all vcpus, while as an SPI it must be different for each vcpu.
> +
> +1.2 ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_INIT
> +Parameters: no additional parameter in kvm_device_attr.addr
> +Returns: -ENODEV: PMUv3 not supported
> +         -ENXIO: PMUv3 not properly configured as required prior to calling this
> +                 attribute
> +         -EBUSY: PMUv3 already initialized
> +
> +Request the initialization of the PMUv3.
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index f9f2779..6dd0992 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -242,5 +242,20 @@ static inline void kvm_arm_init_debug(void) {}
>  static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
>  static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
> +static inline int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
> +					     struct kvm_device_attr *attr)
> +{
> +	return -ENXIO;
> +}
> +static inline int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
> +					     struct kvm_device_attr *attr)
> +{
> +	return -ENXIO;
> +}
> +static inline int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
> +					     struct kvm_device_attr *attr)
> +{
> +	return -ENXIO;
> +}
>  
>  #endif /* __ARM_KVM_HOST_H__ */
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 34d7395..dc8644f 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -833,6 +833,7 @@ static int kvm_arm_vcpu_set_attr(struct kvm_vcpu *vcpu,
>  
>  	switch (attr->group) {
>  	default:
> +		ret = kvm_arm_vcpu_arch_set_attr(vcpu, attr);
>  		break;
>  	}
>  
> @@ -846,6 +847,7 @@ static int kvm_arm_vcpu_get_attr(struct kvm_vcpu *vcpu,
>  
>  	switch (attr->group) {
>  	default:
> +		ret = kvm_arm_vcpu_arch_get_attr(vcpu, attr);
>  		break;
>  	}
>  
> @@ -859,6 +861,7 @@ static int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu,
>  
>  	switch (attr->group) {
>  	default:
> +		ret = kvm_arm_vcpu_arch_has_attr(vcpu, attr);
>  		break;
>  	}
>  
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index cb220b7..a855a30 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -359,5 +359,11 @@ void kvm_arm_init_debug(void);
>  void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
>  void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
>  void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
> +int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
> +			       struct kvm_device_attr *attr);
> +int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
> +			       struct kvm_device_attr *attr);
> +int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
> +			       struct kvm_device_attr *attr);
>  
>  #endif /* __ARM64_KVM_HOST_H__ */
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index 6aedbe3..f209ea1 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -205,6 +205,11 @@ struct kvm_arch_memory_slot {
>  #define KVM_DEV_ARM_VGIC_GRP_CTRL	4
>  #define   KVM_DEV_ARM_VGIC_CTRL_INIT	0
>  
> +/* Device Control API on vcpu fd */
> +#define KVM_ARM_VCPU_PMU_V3_CTRL	0
> +#define   KVM_ARM_VCPU_PMU_V3_IRQ	0
> +#define   KVM_ARM_VCPU_PMU_V3_INIT	1
> +
>  /* KVM_IRQ_LINE irq field index values */
>  #define KVM_ARM_IRQ_TYPE_SHIFT		24
>  #define KVM_ARM_IRQ_TYPE_MASK		0xff
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index fcb7788..dbe45c3 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -380,3 +380,54 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
>  	}
>  	return 0;
>  }
> +
> +int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
> +			       struct kvm_device_attr *attr)
> +{
> +	int ret;
> +
> +	switch (attr->group) {
> +	case KVM_ARM_VCPU_PMU_V3_CTRL:
> +		ret = kvm_arm_pmu_v3_set_attr(vcpu, attr);
> +		break;
> +	default:
> +		ret = -ENXIO;
> +		break;
> +	}
> +
> +	return ret;
> +}
> +
> +int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
> +			       struct kvm_device_attr *attr)
> +{
> +	int ret;
> +
> +	switch (attr->group) {
> +	case KVM_ARM_VCPU_PMU_V3_CTRL:
> +		ret = kvm_arm_pmu_v3_get_attr(vcpu, attr);
> +		break;
> +	default:
> +		ret = -ENXIO;
> +		break;
> +	}
> +
> +	return ret;
> +}
> +
> +int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
> +			       struct kvm_device_attr *attr)
> +{
> +	int ret;
> +
> +	switch (attr->group) {
> +	case KVM_ARM_VCPU_PMU_V3_CTRL:
> +		ret = kvm_arm_pmu_v3_has_attr(vcpu, attr);
> +		break;
> +	default:
> +		ret = -ENXIO;
> +		break;
> +	}
> +
> +	return ret;
> +}
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index fee86eb..3890c94 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -36,6 +36,7 @@ struct kvm_pmu {
>  };
>  
>  #define kvm_arm_pmu_v3_ready(v)		((v)->arch.pmu.ready)
> +#define kvm_arm_pmu_irq_initialized(v)	((v)->arch.pmu.irq_num >= VGIC_NR_SGIS)
>  u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx);
>  u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu);
>  void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu);
> @@ -49,11 +50,18 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val);
>  void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data,
>  				    u64 select_idx);
>  bool kvm_arm_support_pmu_v3(void);
> +int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu,
> +			    struct kvm_device_attr *attr);
> +int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu,
> +			    struct kvm_device_attr *attr);
> +int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
> +			    struct kvm_device_attr *attr);
>  #else
>  struct kvm_pmu {
>  };
>  
>  #define kvm_arm_pmu_v3_ready(v)		(false)
> +#define kvm_arm_pmu_irq_initialized(v)	(false)
>  static inline u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu,
>  					    u64 select_idx)
>  {
> @@ -74,6 +82,21 @@ static inline void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val) {}
>  static inline void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu,
>  						  u64 data, u64 select_idx) {}
>  static inline bool kvm_arm_support_pmu_v3(void) { return false; }
> +static inline int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu,
> +					  struct kvm_device_attr *attr)
> +{
> +	return -ENXIO;
> +}
> +static inline int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu,
> +					  struct kvm_device_attr *attr)
> +{
> +	return -ENXIO;
> +}
> +static inline int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu,
> +					  struct kvm_device_attr *attr)
> +{
> +	return -ENXIO;
> +}
>  #endif
>  
>  #endif
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index 05e9d7e..37f6100 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -19,6 +19,7 @@
>  #include <linux/kvm.h>
>  #include <linux/kvm_host.h>
>  #include <linux/perf_event.h>
> +#include <linux/uaccess.h>
>  #include <asm/kvm_emulate.h>
>  #include <kvm/arm_pmu.h>
>  #include <kvm/arm_vgic.h>
> @@ -383,3 +384,130 @@ bool kvm_arm_support_pmu_v3(void)
>  	 */
>  	return (perf_num_counters() > 0);
>  }
> +
> +static int kvm_arm_pmu_v3_init(struct kvm_vcpu *vcpu)
> +{
> +	if (!kvm_arm_support_pmu_v3())
> +		return -ENODEV;
> +
> +	if (!test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features) ||
> +	    !kvm_arm_pmu_irq_initialized(vcpu))
> +		return -ENXIO;
> +
> +	if (kvm_arm_pmu_v3_ready(vcpu))
> +		return -EBUSY;
> +
> +	kvm_pmu_vcpu_reset(vcpu);
> +	vcpu->arch.pmu.ready = true;
> +
> +	return 0;
> +}
> +
> +static int kvm_arm_pmu_irq_access(struct kvm_vcpu *vcpu,
> +				  struct kvm_device_attr *attr,
> +				  int *irq, bool is_set)
> +{
> +	if (!is_set) {
> +		if (!kvm_arm_pmu_irq_initialized(vcpu))
> +			return -ENXIO;
> +
> +		*irq = vcpu->arch.pmu.irq_num;
> +	} else {
> +		if (kvm_arm_pmu_irq_initialized(vcpu))
> +			return -EBUSY;
> +
> +		kvm_debug("Set kvm ARM PMU irq: %d\n", *irq);
> +		vcpu->arch.pmu.irq_num = *irq;
> +	}
> +
> +	return 0;
> +}
> +
> +static bool irq_is_valid(struct kvm *kvm, int irq, bool is_ppi)
> +{
> +	int i;
> +	struct kvm_vcpu *vcpu;
> +
> +	kvm_for_each_vcpu(i, vcpu, kvm) {
> +		if (!kvm_arm_pmu_irq_initialized(vcpu))
> +			continue;
> +
> +		if (is_ppi) {
> +			if (vcpu->arch.pmu.irq_num != irq)
> +				return false;
> +		} else {
> +			if (vcpu->arch.pmu.irq_num == irq)
> +				return false;
> +		}
> +	}
> +
> +	return true;
> +}
> +
> +

nit: extra blank line here

> +int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
> +{
> +	switch (attr->attr) {
> +	case KVM_ARM_VCPU_PMU_V3_IRQ: {
> +		int __user *uaddr = (int __user *)(long)attr->addr;
> +		int reg;
> +
> +		if (!test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features))
> +			return -ENODEV;
> +
> +		if (get_user(reg, uaddr))
> +			return -EFAULT;
> +
> +		/*
> +		 * The PMU overflow interrupt can be a PPI or an SPI, but within
> +		 * one VM the interrupt type must be the same for every vcpu. As
> +		 * a PPI, the interrupt number is the same for all vcpus, while
> +		 * as an SPI it must be different for each vcpu.
> +		 */
> +		if (reg < VGIC_NR_SGIS || reg >= vcpu->kvm->arch.vgic.nr_irqs ||
> +		    !irq_is_valid(vcpu->kvm, reg, reg < VGIC_NR_PRIVATE_IRQS))
> +			return -EINVAL;
> +
> +		return kvm_arm_pmu_irq_access(vcpu, attr, &reg, true);
> +	}
> +	case KVM_ARM_VCPU_PMU_V3_INIT:
> +		return kvm_arm_pmu_v3_init(vcpu);
> +	}
> +
> +	return -ENXIO;
> +}
> +
> +int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
> +{
> +	int ret;
> +
> +	switch (attr->attr) {
> +	case KVM_ARM_VCPU_PMU_V3_IRQ: {
> +		int __user *uaddr = (int __user *)(long)attr->addr;
> +		int reg = -1;
> +
> +		if (!test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features))
> +			return -ENODEV;
> +
> +		ret = kvm_arm_pmu_irq_access(vcpu, attr, &reg, false);
> +		if (ret)
> +			return ret;
> +		return put_user(reg, uaddr);
> +	}
> +	}
> +
> +	return -ENXIO;
> +}

nit: I'm not sure why we're calling the irq a 'reg' in the get and set attr
functions.
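
Something like this would read more naturally (a purely cosmetic sketch of
the set path, not a replacement hunk):

	int __user *uaddr = (int __user *)(long)attr->addr;
	int irq;

	if (get_user(irq, uaddr))
		return -EFAULT;
	...
	return kvm_arm_pmu_irq_access(vcpu, attr, &irq, true);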

> +
> +int kvm_arm_pmu_v3_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr)
> +{
> +	switch (attr->attr) {
> +	case KVM_ARM_VCPU_PMU_V3_IRQ:
> +	case KVM_ARM_VCPU_PMU_V3_INIT:
> +		if (kvm_arm_support_pmu_v3() &&
> +		    test_bit(KVM_ARM_VCPU_PMU_V3, vcpu->arch.features))
> +			return 0;
> +	}
> +
> +	return -ENXIO;
> +}
> -- 
> 2.0.4

Reviewed-by: Andrew Jones <drjones@redhat.com>
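
For reference, userspace is expected to drive this interface through the
vcpu fd, after creating the vcpu with the KVM_ARM_VCPU_PMU_V3 feature bit
set, roughly as below. This is only a sketch: the PPI number 23 and the
error handling are illustrative, not mandated by the patch.

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	static int vcpu_enable_pmu(int vcpu_fd)
	{
		int irq = 23;	/* an assumed PPI; must be >= VGIC_NR_SGIS */
		struct kvm_device_attr attr = {
			.group	= KVM_ARM_VCPU_PMU_V3_CTRL,
			.attr	= KVM_ARM_VCPU_PMU_V3_IRQ,
			.addr	= (__u64)(unsigned long)&irq,
		};

		/* 1) configure the overflow interrupt number */
		if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr))
			return -1;

		/* 2) then request PMUv3 initialization */
		attr.attr = KVM_ARM_VCPU_PMU_V3_INIT;
		attr.addr = 0;
		return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
	}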

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 00/21] KVM: ARM64: Add guest PMU support
  2016-01-27  3:51 ` Shannon Zhao
@ 2016-01-28 21:30   ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-28 21:30 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvm, marc.zyngier, shannon.zhao, will.deacon, linux-arm-kernel, kvmarm

On Wed, Jan 27, 2016 at 11:51:28AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> This patchset adds guest PMU support for KVM on ARM64. It takes
> trap-and-emulate approach. When guest wants to monitor one event, it
> will be trapped by KVM and KVM will call perf_event API to create a perf
> event and call relevant perf_event APIs to get the count value of event.
> 
> Use perf to test this patchset in guest. When using "perf list", it
> shows the list of the hardware events and hardware cache events perf
> supports. Then use "perf stat -e EVENT" to monitor some event. For
> example, use "perf stat -e cycles" to count cpu cycles and
> "perf stat -e cache-misses" to count cache misses.
>

Hi Shannon,

I just completed my first full review of the series. I'm sorry I didn't
have time to do a full review on an earlier version. I have a few comments,
but most of them are minor. If you'd like to add some r-b's from me, then
here are the patches I can offer them for:

1-5,8,9,11-13,16-21

5,8,9,12 had some extra masking on input/outputs that I'm not sure is
necessary.

Thanks,
drew

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 08/21] KVM: ARM64: Add access handler for event type register
  2016-01-28 20:11     ` Andrew Jones
  (?)
@ 2016-01-29  1:42       ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-29  1:42 UTC (permalink / raw)
  To: Andrew Jones
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai



On 2016/1/29 4:11, Andrew Jones wrote:
> On Wed, Jan 27, 2016 at 11:51:36AM +0800, Shannon Zhao wrote:
>> > From: Shannon Zhao <shannon.zhao@linaro.org>
>> > 
>> > These kinds of registers include PMEVTYPERn, PMCCFILTR and PMXEVTYPER,
>> > the last of which is mapped onto PMEVTYPERn or PMCCFILTR.
>> > 
>> > The access handler translates all aarch32 register offsets to aarch64
>> > ones and uses vcpu_sys_reg() to access their values, avoiding the need
>> > to handle big-endian layouts.
>> > 
>> > When writing to these registers, create a perf_event for the selected
>> > event type.
>> > 
>> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> > ---
>> >  arch/arm64/kvm/sys_regs.c | 140 +++++++++++++++++++++++++++++++++++++++++++++-
>> >  1 file changed, 138 insertions(+), 2 deletions(-)
>> > 
>> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> > index 06257e2..298ae94 100644
>> > --- a/arch/arm64/kvm/sys_regs.c
>> > +++ b/arch/arm64/kvm/sys_regs.c
>> > @@ -513,6 +513,54 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>> >  	return true;
>> >  }
>> >  
>> > +static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
>> > +{
>> > +	u64 pmcr, val;
>> > +
>> > +	pmcr = vcpu_sys_reg(vcpu, PMCR_EL0);
>> > +	val = (pmcr >> ARMV8_PMCR_N_SHIFT) & ARMV8_PMCR_N_MASK;
>> > +	if (idx >= val && idx != ARMV8_CYCLE_IDX)
>> > +		return false;
>> > +
>> > +	return true;
>> > +}
>> > +
>> > +static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>> > +			       const struct sys_reg_desc *r)
>> > +{
>> > +	u64 idx, reg;
>> > +
>> > +	if (!kvm_arm_pmu_v3_ready(vcpu))
>> > +		return trap_raz_wi(vcpu, p, r);
>> > +
>> > +	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
>> > +		/* PMXEVTYPER_EL0 */
>> > +		idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
>> > +		reg = PMEVTYPER0_EL0 + idx;
>> > +	} else if (r->CRn == 14 && (r->CRm & 12) == 12) {
>> > +		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
>> > +		if (idx == ARMV8_CYCLE_IDX)
>> > +			reg = PMCCFILTR_EL0;
>> > +		else
>> > +			/* PMEVTYPERn_EL0 */
>> > +			reg = PMEVTYPER0_EL0 + idx;
>> > +	} else {
>> > +		BUG();
>> > +	}
>> > +
>> > +	if (!pmu_counter_idx_valid(vcpu, idx))
>> > +		return false;
>> > +
>> > +	if (p->is_write) {
>> > +		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
>> > +		vcpu_sys_reg(vcpu, reg) = p->regval & ARMV8_EVTYPE_MASK;
>> > +	} else {
>> > +		p->regval = vcpu_sys_reg(vcpu, reg) & ARMV8_EVTYPE_MASK;
> Related to my comment in 5/21. Why should we need to mask it here when
> reading it, since it was masked on writing?
> 
But what if the guest reads this register before writing to it?
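
In that case the shadow register still holds whatever it was reset to; if
the reset left bits outside ARMV8_EVTYPE_MASK set (say, the poison pattern
reset_unknown() uses -- an illustration, not necessarily the reset chosen
here), an unmasked read would leak them:

	/* illustration only: suppose reset poisoned the shadow register */
	vcpu_sys_reg(vcpu, reg) = 0x1de7ec7edbadc0deULL;

	p->regval = vcpu_sys_reg(vcpu, reg);	/* would leak junk high bits */
	p->regval = vcpu_sys_reg(vcpu, reg) & ARMV8_EVTYPE_MASK; /* confined */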

Thanks,
-- 
Shannon


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 04/21] KVM: ARM64: Add access handler for PMCR register
  2016-01-28 20:43       ` Andrew Jones
@ 2016-01-29  2:07         ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-29  2:07 UTC (permalink / raw)
  To: Andrew Jones
  Cc: kvm, marc.zyngier, shannon.zhao, will.deacon, linux-arm-kernel, kvmarm



On 2016/1/29 4:43, Andrew Jones wrote:
> On Thu, Jan 28, 2016 at 04:36:35PM +0100, Andrew Jones wrote:
>> > On Wed, Jan 27, 2016 at 11:51:32AM +0800, Shannon Zhao wrote:
>>> > > From: Shannon Zhao <shannon.zhao@linaro.org>
>>> > > 
>>> > > Add a reset handler which gets the host value of PMCR_EL0 and makes the
>>> > > writable bits architecturally UNKNOWN, except PMCR.E, which resets to
>>> > > zero. Add an access handler for PMCR.
>>> > > 
>>> > > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>>> > > ---
>>> > >  arch/arm64/kvm/sys_regs.c | 42 ++++++++++++++++++++++++++++++++++++++++--
>>> > >  include/kvm/arm_pmu.h     |  4 ++++
>>> > >  2 files changed, 44 insertions(+), 2 deletions(-)
>>> > > 
>>> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>> > > index eec3598..97fea84 100644
>>> > > --- a/arch/arm64/kvm/sys_regs.c
>>> > > +++ b/arch/arm64/kvm/sys_regs.c
>>> > > @@ -34,6 +34,7 @@
>>> > >  #include <asm/kvm_emulate.h>
>>> > >  #include <asm/kvm_host.h>
>>> > >  #include <asm/kvm_mmu.h>
>>> > > +#include <asm/pmu.h>
>>> > >  
>>> > >  #include <trace/events/kvm.h>
>>> > >  
>>> > > @@ -439,6 +440,43 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>>> > >  	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
>>> > >  }
>>> > >  
>>> > > +static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>>> > > +{
>>> > > +	u64 pmcr, val;
>>> > > +
>>> > > +	asm volatile("mrs %0, pmcr_el0\n" : "=r" (pmcr));
>>> > > +	/* Writable bits of PMCR_EL0 (ARMV8_PMCR_MASK) are reset to UNKNOWN,
>>> > > +	 * except PMCR.E, which resets to zero.
>>> > > +	 */
>>> > > +	val = ((pmcr & ~ARMV8_PMCR_MASK) | (ARMV8_PMCR_MASK & 0xdecafbad))
>>> > > +	      & (~ARMV8_PMCR_E);
>>> > > +	vcpu_sys_reg(vcpu, PMCR_EL0) = val;
>>> > > +}
>>> > > +
>>> > > +static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>>> > > +			const struct sys_reg_desc *r)
>>> > > +{
>>> > > +	u64 val;
>>> > > +
>>> > > +	if (!kvm_arm_pmu_v3_ready(vcpu))
>>> > > +		return trap_raz_wi(vcpu, p, r);
>>> > > +
>>> > > +	if (p->is_write) {
>>> > > +		/* Only update writeable bits of PMCR */
>>> > > +		val = vcpu_sys_reg(vcpu, PMCR_EL0);
>>> > > +		val &= ~ARMV8_PMCR_MASK;
>>> > > +		val |= p->regval & ARMV8_PMCR_MASK;
>>> > > +		vcpu_sys_reg(vcpu, PMCR_EL0) = val;
>>> > > +	} else {
>>> > > +		/* PMCR.P & PMCR.C are RAZ */
>>> > > +		val = vcpu_sys_reg(vcpu, PMCR_EL0)
>>> > > +		      & ~(ARMV8_PMCR_P | ARMV8_PMCR_C);
>>> > > +		p->regval = val;
>> > 
>> > Should we also be setting the IMP, IDCODE, and N fields here to the
>> > values of the host PE?
> Not sure how I skimmed over the reset_pmcr doing this when I first
> read it. I'm now wondering if we want to always expose the host's
> IMP, IDCODE, N though (migration concerns). Although we have a ton
> of invariant sys regs already... So I guess this is a bridge to burn
> another day.
> 
There has been a discussion about this. For migration across different CPU
types, userspace will set the number of PMU counters and, as discussed,
some code will be added to reset_pmcr to check whether userspace has set
that number and, if so, to use it as N. But this will be done after this
patch set, by the cross-CPU type support patch set [1] (currently that
patch set doesn't set the number of PMU counters, but I discussed this
with Tushar before).

[1]https://lists.gnu.org/archive/html/qemu-devel/2015-09/msg02375.html
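
A rough sketch of that idea (the pmu_nr_counters field is hypothetical
here; the real interface will be whatever the cross-CPU series defines):

	static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
	{
		u64 pmcr, val, n;

		asm volatile("mrs %0, pmcr_el0\n" : "=r" (pmcr));
		val = ((pmcr & ~ARMV8_PMCR_MASK) | (ARMV8_PMCR_MASK & 0xdecafbad))
		      & (~ARMV8_PMCR_E);
		n = vcpu->kvm->arch.pmu_nr_counters; /* hypothetical, from userspace */
		if (n) {
			/* override the host's PMCR.N with the configured count */
			val &= ~((u64)ARMV8_PMCR_N_MASK << ARMV8_PMCR_N_SHIFT);
			val |= (n & ARMV8_PMCR_N_MASK) << ARMV8_PMCR_N_SHIFT;
		}
		vcpu_sys_reg(vcpu, PMCR_EL0) = val;
	}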

Thanks,
-- 
Shannon

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 06/21] KVM: ARM64: Add access handler for PMCEID0 and PMCEID1 register
  2016-01-28 20:34     ` Andrew Jones
@ 2016-01-29  3:47       ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-29  3:47 UTC (permalink / raw)
  To: Andrew Jones
  Cc: kvm, marc.zyngier, will.deacon, shannon.zhao, kvmarm, linux-arm-kernel



On 2016/1/29 4:34, Andrew Jones wrote:
> On Wed, Jan 27, 2016 at 11:51:34AM +0800, Shannon Zhao wrote:
>> > From: Shannon Zhao <shannon.zhao@linaro.org>
>> > 
>> > Add an access handler which gets the host value of PMCEID0 or PMCEID1 when
>> > the guest accesses these registers. Writing to PMCEID0 or PMCEID1 is
>> > UNDEFINED.
>> > 
>> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> > ---
>> >  arch/arm64/kvm/sys_regs.c | 29 +++++++++++++++++++++++++----
>> >  1 file changed, 25 insertions(+), 4 deletions(-)
>> > 
>> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> > index fc60041..06257e2 100644
>> > --- a/arch/arm64/kvm/sys_regs.c
>> > +++ b/arch/arm64/kvm/sys_regs.c
>> > @@ -492,6 +492,27 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>> >  	return true;
>> >  }
>> >  
>> > +static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>> > +			  const struct sys_reg_desc *r)
>> > +{
>> > +	u64 pmceid;
>> > +
>> > +	if (!kvm_arm_pmu_v3_ready(vcpu))
>> > +		return trap_raz_wi(vcpu, p, r);
>> > +
>> > +	if (p->is_write)
>> > +		return false;
>> > +
>> > +	if (!(p->Op2 & 1))
>> > +		asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
>> > +	else
>> > +		asm volatile("mrs %0, pmceid1_el0\n" : "=r" (pmceid));
> For migratability concerns we may want to filter some of these events.
> With that in mind the answer to my question in 4/21 is 'no'. Instead we
> should pick an IMP,IDCODE,N,PMCEID0_EL0,PMCEID1_EL0 that we expect to
> represent the least common denominator of all the platforms available
> now, and then only expose that view to the guest. If we want to support
> more events, and userspace requests it for the guest, then we can relax
> the filtering (at the expense of migratability), when the host has the
> support.
> 
As I replied on patch 4, I think this could be done by the cross-CPU type
support patches.
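
For illustration, the filtering Drew describes might look like this (the
whitelist values are made-up placeholders, not something defined by this
series):

	/* hypothetical common-denominator event whitelists */
	#define KVM_PMCEID0_ALLOWED	0x00000000000003ffULL
	#define KVM_PMCEID1_ALLOWED	0x0000000000000000ULL

	if (!(p->Op2 & 1)) {
		asm volatile("mrs %0, pmceid0_el0\n" : "=r" (pmceid));
		pmceid &= KVM_PMCEID0_ALLOWED;	/* hide non-common events */
	} else {
		asm volatile("mrs %0, pmceid1_el0\n" : "=r" (pmceid));
		pmceid &= KVM_PMCEID1_ALLOWED;
	}
	p->regval = pmceid;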

Thanks,
-- 
Shannon

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
  2016-01-28 18:06         ` Will Deacon
  (?)
@ 2016-01-29  6:14           ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-29  6:14 UTC (permalink / raw)
  To: Will Deacon, Marc Zyngier
  Cc: Andrew Jones, kvmarm, christoffer.dall, linux-arm-kernel, kvm,
	wei, cov, shannon.zhao, peter.huangpeng, hangaohuai



On 2016/1/29 2:06, Will Deacon wrote:
> On Thu, Jan 28, 2016 at 04:45:36PM +0000, Marc Zyngier wrote:
>> > On 28/01/16 16:31, Andrew Jones wrote:
>>> > > On Wed, Jan 27, 2016 at 11:51:35AM +0800, Shannon Zhao wrote:
>>>> > >> From: Shannon Zhao <shannon.zhao@linaro.org>
>>>> > >>
>>>> > >> When we use tools like perf on the host, perf passes the event type and
>>>> > >> the id within that event type category to the kernel, which maps them to
>>>> > >> a hardware event number and writes this number to the PMU PMEVTYPER<n>_EL0
>>>> > >> register. When KVM gets the event number, it uses the raw event type
>>>> > >> directly to create a perf_event for it.
>>>> > >>
>>>> > >> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>>>> > >> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
>>>> > >> ---
>>>> > >>  arch/arm64/include/asm/pmu.h |   3 ++
>>>> > >>  arch/arm64/kvm/Makefile      |   1 +
>>>> > >>  include/kvm/arm_pmu.h        |  10 ++++
>>>> > >>  virt/kvm/arm/pmu.c           | 122 +++++++++++++++++++++++++++++++++++++++++++
>>>> > >>  4 files changed, 136 insertions(+)
>>>> > >>  create mode 100644 virt/kvm/arm/pmu.c
>>>> > >>
>>>> > >> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
>>>> > >> index 4406184..2588f9c 100644
>>>> > >> --- a/arch/arm64/include/asm/pmu.h
>>>> > >> +++ b/arch/arm64/include/asm/pmu.h
>>>> > >> @@ -21,6 +21,7 @@
>>>> > >>  
>>>> > >>  #define ARMV8_MAX_COUNTERS      32
>>>> > >>  #define ARMV8_COUNTER_MASK      (ARMV8_MAX_COUNTERS - 1)
>>>> > >> +#define ARMV8_CYCLE_IDX         (ARMV8_MAX_COUNTERS - 1)
>>> > > 
>>> > > I'm not sure we want to add this. Its name is wrong, as it's really
>>> > > PMCNTENSET_EL0.C, and just a few lines above we have the idx defined
>>> > > already (ARMV8_IDX_CYCLE_COUNTER), but as zero, because
>>> > > arch/arm64/kernel/perf_event.c maps it that way.
>>> > > 
>>> > > I think we should do the same with the pmc array, i.e. map the cycle
>>> > > counter to idx zero.
>> > 
>> > I tend to have the opposite view. Not for the sake of it, but because I
>> > find it helpful to directly map the code to the architecture
>> > documentation without having to bend another handful of neurons.
>> > 
>> > Will probably had some good reasons to structure it that way, but I
>> > don't know the rationale. Will?
> It was years ago, but I suspect that the cycle counter is index zero
> because it's mandated, whilst the number of event counters is IMPDEF.
So do we need to change the cycle counter index to zero in this patch set?

Thanks,
-- 
Shannon


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
  2016-01-28 18:06         ` Will Deacon
  (?)
@ 2016-01-29  6:26           ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-29  6:26 UTC (permalink / raw)
  To: Will Deacon, Marc Zyngier
  Cc: Andrew Jones, kvmarm, christoffer.dall, linux-arm-kernel, kvm,
	wei, cov, shannon.zhao, peter.huangpeng, hangaohuai



On 2016/1/29 2:06, Will Deacon wrote:
> On Thu, Jan 28, 2016 at 04:45:36PM +0000, Marc Zyngier wrote:
>> > On 28/01/16 16:31, Andrew Jones wrote:
>>> > > On Wed, Jan 27, 2016 at 11:51:35AM +0800, Shannon Zhao wrote:
>>>> > >> From: Shannon Zhao <shannon.zhao@linaro.org>
>>>> > >>
>>>> > >> When we use tools like perf on the host, perf passes the event type and
>>>> > >> the id within that event type category to the kernel, which maps them to
>>>> > >> a hardware event number and writes this number to the PMU PMEVTYPER<n>_EL0
>>>> > >> register. When KVM gets the event number, it uses the raw event type
>>>> > >> directly to create a perf_event for it.
>>>> > >>
>>>> > >> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>>>> > >> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
>>>> > >> ---
>>>> > >>  arch/arm64/include/asm/pmu.h |   3 ++
>>>> > >>  arch/arm64/kvm/Makefile      |   1 +
>>>> > >>  include/kvm/arm_pmu.h        |  10 ++++
>>>> > >>  virt/kvm/arm/pmu.c           | 122 +++++++++++++++++++++++++++++++++++++++++++
>>>> > >>  4 files changed, 136 insertions(+)
>>>> > >>  create mode 100644 virt/kvm/arm/pmu.c
>>>> > >>
>>>> > >> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
>>>> > >> index 4406184..2588f9c 100644
>>>> > >> --- a/arch/arm64/include/asm/pmu.h
>>>> > >> +++ b/arch/arm64/include/asm/pmu.h
>>>> > >> @@ -21,6 +21,7 @@
>>>> > >>  
>>>> > >>  #define ARMV8_MAX_COUNTERS      32
>>>> > >>  #define ARMV8_COUNTER_MASK      (ARMV8_MAX_COUNTERS - 1)
>>>> > >> +#define ARMV8_CYCLE_IDX         (ARMV8_MAX_COUNTERS - 1)
>>> > > 
>>> > > I'm not sure we want to add this. Its name is wrong, as it's really
>>> > > PMCNTENSET_EL0.C, and just a few lines above we have the idx defined
>>> > > already (ARMV8_IDX_CYCLE_COUNTER), but as zero, because
>>> > > arch/arm64/kernel/perf_event.c maps it that way.
>>> > > 
>>> > > I think we should do the same with the pmc array, i.e. map the cycle
>>> > > counter to idx zero.
>> > 
>> > I tend to have the opposite view. Not for the sake of it, but because I
>> > find it helpful to directly map the code to the architecture
>> > documentation without having to bend another handful of neurons.
>> > 
>> > Will probably had some good reasons to structure it that way, but I
>> > don't know the rationale. Will?
> It was years ago, but I suspect that the cycle counter is index zero
> because it's mandated, whilst the number of event counters is IMPDEF.

Since PMCNTENSET/CLR, PMINTENSET/CLR, PMOVSSET/CLR and PMSWINC use bit 31
for the state of the cycle counter, if we map the cycle counter to index
zero we always need to translate between the idx and bit 31 whenever we
access these registers.
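
For comparison, the host driver hides exactly that translation behind a
macro; if KVM adopted the zero-based scheme, something along these lines
would be needed wherever those registers are touched (a sketch modelled on
arch/arm64/kernel/perf_event.c, not code from this series):

	#define ARMV8_IDX_CYCLE_COUNTER	0
	#define ARMV8_IDX_COUNTER0	1
	/* (0 - 1) & 0x1f == 31, so the cycle counter lands on bit 31 */
	#define ARMV8_IDX_TO_COUNTER(x)	\
		(((x) - ARMV8_IDX_COUNTER0) & ARMV8_COUNTER_MASK)

	/* e.g. enabling counter idx in PMCNTENSET_EL0: */
	val |= BIT(ARMV8_IDX_TO_COUNTER(idx));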

Thanks,
-- 
Shannon


^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 15/21] KVM: ARM64: Add access handler for PMUSERENR register
  2016-01-28 19:58     ` Andrew Jones
@ 2016-01-29  7:37       ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-29  7:37 UTC (permalink / raw)
  To: Andrew Jones
  Cc: kvm, marc.zyngier, shannon.zhao, will.deacon, linux-arm-kernel, kvmarm



On 2016/1/29 3:58, Andrew Jones wrote:
> On Wed, Jan 27, 2016 at 11:51:43AM +0800, Shannon Zhao wrote:
>> > From: Shannon Zhao <shannon.zhao@linaro.org>
>> > 
>> > This register resets as unknown in 64bit mode while it resets as zero
>> > in 32bit mode. Here we choose to reset it as zero for consistency.
>> > 
>> > PMUSERENR_EL0 holds some bits which decide whether PMU registers can be
>> > accessed from EL0. Add some check helpers to handle the access from EL0.
>> > 
>> > When these bits are zero, only reading PMUSERENR will trap to EL2 and
>> > writing PMUSERENR or reading/writing other PMU registers will trap to
>> > EL1 rather than EL2 when HCR.TGE==0. With the current KVM configuration
>> > (HCR.TGE==0) there is no way to receive these traps. Here we write 0xf to
>> > physical PMUSERENR register on VM entry, so that it will trap PMU access
>> > from EL0 to EL2. Within the register access handler we check the real
>> > value of guest PMUSERENR register to decide whether this access is
>> > allowed. If not allowed, return false to inject UND to guest.
>> > 
>> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> > ---
>> >  arch/arm64/include/asm/pmu.h |   9 ++++
>> >  arch/arm64/kvm/hyp/hyp.h     |   1 +
>> >  arch/arm64/kvm/hyp/switch.c  |   3 ++
>> >  arch/arm64/kvm/sys_regs.c    | 100 ++++++++++++++++++++++++++++++++++++++++---
>> >  4 files changed, 107 insertions(+), 6 deletions(-)
>> > 
>> > diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
>> > index 6f14a01..eb3dc88 100644
>> > --- a/arch/arm64/include/asm/pmu.h
>> > +++ b/arch/arm64/include/asm/pmu.h
>> > @@ -69,4 +69,13 @@
>> >  #define	ARMV8_EXCLUDE_EL0	(1 << 30)
>> >  #define	ARMV8_INCLUDE_EL2	(1 << 27)
>> >  
>> > +/*
>> > + * PMUSERENR: user enable reg
>> > + */
>> > +#define ARMV8_USERENR_MASK	0xf		/* Mask for writable bits */
>> > +#define ARMV8_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
>> > +#define ARMV8_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
>> > +#define ARMV8_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
>> > +#define ARMV8_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
>> > +
>> >  #endif /* __ASM_PMU_H */
>> > diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
>> > index fb27517..9a28b7bd8 100644
>> > --- a/arch/arm64/kvm/hyp/hyp.h
>> > +++ b/arch/arm64/kvm/hyp/hyp.h
>> > @@ -22,6 +22,7 @@
>> >  #include <linux/kvm_host.h>
>> >  #include <asm/kvm_mmu.h>
>> >  #include <asm/sysreg.h>
>> > +#include <asm/pmu.h>
>> >  
>> >  #define __hyp_text __section(.hyp.text) notrace
>> >  
>> > diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
>> > index ca8f5a5..1a7d679 100644
>> > --- a/arch/arm64/kvm/hyp/switch.c
>> > +++ b/arch/arm64/kvm/hyp/switch.c
>> > @@ -37,6 +37,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
>> >  	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
>> >  	write_sysreg(1 << 15, hstr_el2);
>> >  	write_sysreg(CPTR_EL2_TTA | CPTR_EL2_TFP, cptr_el2);
>> > +	/* Make sure we trap PMU access from EL0 to EL2 */
>> > +	write_sysreg(ARMV8_USERENR_MASK, pmuserenr_el0);
>> >  	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
>> >  }
>> >  
>> > @@ -45,6 +47,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
>> >  	write_sysreg(HCR_RW, hcr_el2);
>> >  	write_sysreg(0, hstr_el2);
>> >  	write_sysreg(read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK, mdcr_el2);
>> > +	write_sysreg(0, pmuserenr_el0);
>> >  	write_sysreg(0, cptr_el2);
>> >  }
>> >  
>> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> > index eefc60a..084e527 100644
>> > --- a/arch/arm64/kvm/sys_regs.c
>> > +++ b/arch/arm64/kvm/sys_regs.c
>> > @@ -453,6 +453,37 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>> >  	vcpu_sys_reg(vcpu, PMCR_EL0) = val;
>> >  }
>> >  
>> > +static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
>> > +{
>> > +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
>> > +
>> > +	return !((reg & ARMV8_USERENR_EN) || vcpu_mode_priv(vcpu));
>> > +}
>> > +
>> > +static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
>> > +{
>> > +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
>> > +
>> > +	return !((reg & (ARMV8_USERENR_SW | ARMV8_USERENR_EN))
>> > +		 || vcpu_mode_priv(vcpu));
>> > +}
>> > +
>> > +static bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu)
>> > +{
>> > +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
>> > +
>> > +	return !((reg & (ARMV8_USERENR_CR | ARMV8_USERENR_EN))
>> > +		 || vcpu_mode_priv(vcpu));
>> > +}
>> > +
>> > +static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
>> > +{
>> > +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
>> > +
>> > +	return !((reg & (ARMV8_USERENR_ER | ARMV8_USERENR_EN))
>> > +		 || vcpu_mode_priv(vcpu));
>> > +}
>> > +
>> >  static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>> >  			const struct sys_reg_desc *r)
>> >  {
>> > @@ -461,6 +492,9 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>> >  	if (!kvm_arm_pmu_v3_ready(vcpu))
>> >  		return trap_raz_wi(vcpu, p, r);
>> >  
>> > +	if (pmu_access_el0_disabled(vcpu))
>> > +		return false;
> Based on the function name I'm not sure I like embedding vcpu_mode_priv.
> It seems a condition like
> 
>   if (!vcpu_mode_priv(vcpu) && !pmu_access_el0_enabled(vcpu))
>       return false;
> 

I don't think so. The return value of pmu_access_el0_enabled wouldn't
make sense if it didn't check the vcpu mode, and it wouldn't reflect the
meaning of the function name: if pmu_access_el0_enabled returned false,
that should mean EL0 access is disabled, but the vcpu mode might not
actually be EL0.
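
For comparison, a rough sketch of the two shapes under discussion
(pmu_access_el0_enabled is the name suggested in the review, not a
function in the patch):

	/* Patch: the privilege check is folded into the helper. */
	if (pmu_access_el0_disabled(vcpu))
		return false;

	/* Suggestion: pure bit test, mode checked at each call site. */
	static bool pmu_access_el0_enabled(struct kvm_vcpu *vcpu)
	{
		return vcpu_sys_reg(vcpu, PMUSERENR_EL0) & ARMV8_USERENR_EN;
	}

	if (!vcpu_mode_priv(vcpu) && !pmu_access_el0_enabled(vcpu))
		return false;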

> would be more clear here and the other callsites below. (I also prefer
> checking for enabled vs. disabled)
> 
>> > +
>> >  	if (p->is_write) {
>> >  		/* Only update writeable bits of PMCR */
>> >  		val = vcpu_sys_reg(vcpu, PMCR_EL0);
>> > @@ -484,6 +518,9 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>> >  	if (!kvm_arm_pmu_v3_ready(vcpu))
>> >  		return trap_raz_wi(vcpu, p, r);
>> >  
>> > +	if (pmu_access_event_counter_el0_disabled(vcpu))
>> > +		return false;
>> > +
>> >  	if (p->is_write)
>> >  		vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
>> >  	else
>> > @@ -501,7 +538,7 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>> >  	if (!kvm_arm_pmu_v3_ready(vcpu))
>> >  		return trap_raz_wi(vcpu, p, r);
>> >  
>> > -	if (p->is_write)
>> > +	if (p->is_write || pmu_access_el0_disabled(vcpu))
>> >  		return false;
>> >  
>> >  	if (!(p->Op2 & 1))
>> > @@ -534,6 +571,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>> >  	if (!kvm_arm_pmu_v3_ready(vcpu))
>> >  		return trap_raz_wi(vcpu, p, r);
>> >  
>> > +	if (pmu_access_el0_disabled(vcpu))
>> > +		return false;
>> > +
>> >  	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
>> >  		/* PMXEVTYPER_EL0 */
>> >  		idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
>> > @@ -574,11 +614,17 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
>> >  	if (r->CRn == 9 && r->CRm == 13) {
>> >  		if (r->Op2 == 2) {
>> >  			/* PMXEVCNTR_EL0 */
>> > +			if (pmu_access_event_counter_el0_disabled(vcpu))
>> > +				return false;
>> > +
>> >  			idx = vcpu_sys_reg(vcpu, PMSELR_EL0)
>> >  			      & ARMV8_COUNTER_MASK;
>> >  			reg = PMEVCNTR0_EL0 + idx;
>> >  		} else if (r->Op2 == 0) {
>> >  			/* PMCCNTR_EL0 */
>> > +			if (pmu_access_cycle_counter_el0_disabled(vcpu))
>> > +				return false;
>> > +
>> >  			idx = ARMV8_CYCLE_IDX;
>> >  			reg = PMCCNTR_EL0;
>> >  		} else {
>> > @@ -586,6 +632,9 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
>> >  		}
>> >  	} else if (r->CRn == 14 && (r->CRm & 12) == 8) {
>> >  		/* PMEVCNTRn_EL0 */
>> > +		if (pmu_access_event_counter_el0_disabled(vcpu))
>> > +			return false;
>> > +
>> >  		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
>> >  		reg = PMEVCNTR0_EL0 + idx;
>> >  	} else {
>> > @@ -596,10 +645,14 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
>> >  		return false;
>> >  
>> >  	val = kvm_pmu_get_counter_value(vcpu, idx);
>> > -	if (p->is_write)
>> > +	if (p->is_write) {
>> > +		if (pmu_access_el0_disabled(vcpu))
>> > +			return false;
>> > +
> This check isn't necessary because at this point we've either already
> checked ARMV8_USERENR_EN with one of the other tests, or we've BUGed.
> 
No. Take the cycle counter, for example: if the CR bit is 1 but EN is
zero, pmu_access_cycle_counter_el0_disabled will return false, which
means EL0 may read the cycle counter but may not write this register,
because the CR bit only affects read accesses.

"1 EL0 using AArch64: EL0 read accesses to the PMCCNTR_EL0 are not
trapped to EL1."

So within the write access branch, it needs to check whether the EN bit
is 1.
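
To spell that out for PMCCNTR_EL0 (my reading of the rules above,
illustration only, condensed from the patch's write branch):

	/*
	 *  EN CR | EL0 read | EL0 write
	 *  ------+----------+----------
	 *   0  0 |  UNDEF   |  UNDEF
	 *   0  1 |  allowed |  UNDEF    <- why the write branch re-checks EN
	 *   1  x |  allowed |  allowed
	 */
	if (p->is_write && pmu_access_el0_disabled(vcpu))
		return false;	/* CR only covers reads; writes still need EN */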

>> >  		vcpu_sys_reg(vcpu, reg) += (s64)p->regval - val;
>> > -	else
>> > +	} else {
>> >  		p->regval = val;
>> > +	}
> It's nasty to have to add 3 checks to access_pmu_evcntr. Can we instead
> just have another helper that takes a reg_idx argument, i.e.
> 
> static bool pmu_reg_access_el0_disabled(struct kvm_vcpu *vcpu, u64 idx)
> {
>   if (idx == PMCCNTR_EL0)
>      return pmu_access_cycle_counter_el0_disabled
>   if (idx >= PMEVCNTR0_EL0 && idx <= PMEVCNTR30_EL0)
>      return pmu_access_event_counter_el0_disabled
> ...
> 
> and call it once after the pmu_counter_idx_valid check?
> 
No, I don't think this is nasty, because through the if (r->CRn == 9 &&
r->CRm == 13) ... else above we already know the type of the counter,
i.e. cycle or event counter, so we can call the appropriate checker
directly instead of re-distinguishing the type.

What I considered here was shortening the code path to keep it
efficient, which is why I dropped the earlier switch...case
implementation. That way the gap between the perf event values seen by
guest and host stays small.

Thanks,
-- 
Shannon

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
  2016-01-29  6:26           ` Shannon Zhao
@ 2016-01-29 10:18             ` Will Deacon
  -1 siblings, 0 replies; 127+ messages in thread
From: Will Deacon @ 2016-01-29 10:18 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: Marc Zyngier, Andrew Jones, kvmarm, christoffer.dall,
	linux-arm-kernel, kvm, wei, cov, shannon.zhao, peter.huangpeng,
	hangaohuai

On Fri, Jan 29, 2016 at 02:26:31PM +0800, Shannon Zhao wrote:
> 
> 
> On 2016/1/29 2:06, Will Deacon wrote:
> > On Thu, Jan 28, 2016 at 04:45:36PM +0000, Marc Zyngier wrote:
> >> > On 28/01/16 16:31, Andrew Jones wrote:
> >>> > > On Wed, Jan 27, 2016 at 11:51:35AM +0800, Shannon Zhao wrote:
> >>>> > >> From: Shannon Zhao <shannon.zhao@linaro.org>
> >>>> > >>
> >>>> > >> When we use tools like perf on the host, perf passes the event type and the
> >>>> > >> id of this event type category to the kernel; the kernel then maps them to
> >>>> > >> a hardware event number and writes this number to the PMU PMEVTYPER<n>_EL0
> >>>> > >> register. When KVM gets the event number, it directly uses the raw event
> >>>> > >> type to create a perf_event for it.
> >>>> > >>
> >>>> > >> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> >>>> > >> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> >>>> > >> ---
> >>>> > >>  arch/arm64/include/asm/pmu.h |   3 ++
> >>>> > >>  arch/arm64/kvm/Makefile      |   1 +
> >>>> > >>  include/kvm/arm_pmu.h        |  10 ++++
> >>>> > >>  virt/kvm/arm/pmu.c           | 122 +++++++++++++++++++++++++++++++++++++++++++
> >>>> > >>  4 files changed, 136 insertions(+)
> >>>> > >>  create mode 100644 virt/kvm/arm/pmu.c
> >>>> > >>
> >>>> > >> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
> >>>> > >> index 4406184..2588f9c 100644
> >>>> > >> --- a/arch/arm64/include/asm/pmu.h
> >>>> > >> +++ b/arch/arm64/include/asm/pmu.h
> >>>> > >> @@ -21,6 +21,7 @@
> >>>> > >>  
> >>>> > >>  #define ARMV8_MAX_COUNTERS      32
> >>>> > >>  #define ARMV8_COUNTER_MASK      (ARMV8_MAX_COUNTERS - 1)
> >>>> > >> +#define ARMV8_CYCLE_IDX         (ARMV8_MAX_COUNTERS - 1)
> >>> > > 
> >>> > > I'm not sure we want to add this. Its name is wrong, as it's really
> >>> > > PMCNTENSET_EL0.C, and just a few lines above we have the idx defined
> >>> > > already (ARMV8_IDX_CYCLE_COUNTER), but as zero, because
> >>> > > arch/arm64/kernel/perf_event.c maps it that way.
> >>> > > 
> >>> > > I think we should do the same with the pmc array, i.e. map the cycle
> >>> > > counter to idx zero.
> >> > 
> >> > I tend to have the opposite view. Not for the sake of it, but because I
> >> > find it helpful to directly map the code to the architecture
> >> > documentation without having to bend another handful of neurons.
> >> > 
> >> > Will probably had some good reasons to structure it that way, but I
> >> > don't know the rationale. Will?
> > It was years ago, but I suspect that the cycle counter is index zero
> > because it's mandated, whilst the number of event counters is IMPDEF.
> 
> Since PMCNTENSET/CLR, PMINTENSET/CLR, PMOVSSET/CLR and PMSWINC use
> bit 31 to represent the state of the cycle counter, if we map the cycle
> counter to index zero we will always need to translate between the idx
> and bit 31 when we access these registers.

Conversely, if you stick the cycle counter right at the top, then you'll
need to rework a bunch of the perf code that iterates from
ARMV7_IDX_COUNTER0 to pmu->num_events.
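
Schematically (simplified, not verbatim perf_event.c code), those loops
look like:

	int idx;

	for (idx = ARMV8_IDX_COUNTER0; idx < cpu_pmu->num_events; ++idx)
		armv8pmu_disable_counter(idx);

which assumes the counter indices are packed contiguously below
pmu->num_events, with the cycle counter at index 0.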

Will

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 15/21] KVM: ARM64: Add access handler for PMUSERENR register
  2016-01-29  7:37       ` Shannon Zhao
@ 2016-01-29 11:08         ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-29 11:08 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai

On Fri, Jan 29, 2016 at 03:37:26PM +0800, Shannon Zhao wrote:
> 
> 
> On 2016/1/29 3:58, Andrew Jones wrote:
> > On Wed, Jan 27, 2016 at 11:51:43AM +0800, Shannon Zhao wrote:
> >> > From: Shannon Zhao <shannon.zhao@linaro.org>
> >> > 
> >> > This register resets as unknown in 64bit mode while it resets as zero
> >> > in 32bit mode. Here we choose to reset it as zero for consistency.
> >> > 
> >> > PMUSERENR_EL0 holds some bits which decide whether PMU registers can be
> >> > accessed from EL0. Add some check helpers to handle the access from EL0.
> >> > 
> >> > When these bits are zero, only reading PMUSERENR will trap to EL2 and
> >> > writing PMUSERENR or reading/writing other PMU registers will trap to
> >> > EL1 rather than EL2 when HCR.TGE==0. With the current KVM configuration
> >> > (HCR.TGE==0) there is no way to receive these traps. Here we write 0xf to
> >> > physical PMUSERENR register on VM entry, so that it will trap PMU access
> >> > from EL0 to EL2. Within the register access handler we check the real
> >> > value of guest PMUSERENR register to decide whether this access is
> >> > allowed. If not allowed, return false to inject UND to guest.
> >> > 
> >> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> >> > ---
> >> >  arch/arm64/include/asm/pmu.h |   9 ++++
> >> >  arch/arm64/kvm/hyp/hyp.h     |   1 +
> >> >  arch/arm64/kvm/hyp/switch.c  |   3 ++
> >> >  arch/arm64/kvm/sys_regs.c    | 100 ++++++++++++++++++++++++++++++++++++++++---
> >> >  4 files changed, 107 insertions(+), 6 deletions(-)
> >> > 
> >> > diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
> >> > index 6f14a01..eb3dc88 100644
> >> > --- a/arch/arm64/include/asm/pmu.h
> >> > +++ b/arch/arm64/include/asm/pmu.h
> >> > @@ -69,4 +69,13 @@
> >> >  #define	ARMV8_EXCLUDE_EL0	(1 << 30)
> >> >  #define	ARMV8_INCLUDE_EL2	(1 << 27)
> >> >  
> >> > +/*
> >> > + * PMUSERENR: user enable reg
> >> > + */
> >> > +#define ARMV8_USERENR_MASK	0xf		/* Mask for writable bits */
> >> > +#define ARMV8_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
> >> > +#define ARMV8_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
> >> > +#define ARMV8_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
> >> > +#define ARMV8_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
> >> > +
> >> >  #endif /* __ASM_PMU_H */
> >> > diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
> >> > index fb27517..9a28b7bd8 100644
> >> > --- a/arch/arm64/kvm/hyp/hyp.h
> >> > +++ b/arch/arm64/kvm/hyp/hyp.h
> >> > @@ -22,6 +22,7 @@
> >> >  #include <linux/kvm_host.h>
> >> >  #include <asm/kvm_mmu.h>
> >> >  #include <asm/sysreg.h>
> >> > +#include <asm/pmu.h>
> >> >  
> >> >  #define __hyp_text __section(.hyp.text) notrace
> >> >  
> >> > diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> >> > index ca8f5a5..1a7d679 100644
> >> > --- a/arch/arm64/kvm/hyp/switch.c
> >> > +++ b/arch/arm64/kvm/hyp/switch.c
> >> > @@ -37,6 +37,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
> >> >  	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
> >> >  	write_sysreg(1 << 15, hstr_el2);
> >> >  	write_sysreg(CPTR_EL2_TTA | CPTR_EL2_TFP, cptr_el2);
> >> > +	/* Make sure we trap PMU access from EL0 to EL2 */
> >> > +	write_sysreg(ARMV8_USERENR_MASK, pmuserenr_el0);
> >> >  	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
> >> >  }
> >> >  
> >> > @@ -45,6 +47,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
> >> >  	write_sysreg(HCR_RW, hcr_el2);
> >> >  	write_sysreg(0, hstr_el2);
> >> >  	write_sysreg(read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK, mdcr_el2);
> >> > +	write_sysreg(0, pmuserenr_el0);
> >> >  	write_sysreg(0, cptr_el2);
> >> >  }
> >> >  
> >> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> >> > index eefc60a..084e527 100644
> >> > --- a/arch/arm64/kvm/sys_regs.c
> >> > +++ b/arch/arm64/kvm/sys_regs.c
> >> > @@ -453,6 +453,37 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> >> >  	vcpu_sys_reg(vcpu, PMCR_EL0) = val;
> >> >  }
> >> >  
> >> > +static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
> >> > +{
> >> > +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
> >> > +
> >> > +	return !((reg & ARMV8_USERENR_EN) || vcpu_mode_priv(vcpu));
> >> > +}
> >> > +
> >> > +static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
> >> > +{
> >> > +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
> >> > +
> >> > +	return !((reg & (ARMV8_USERENR_SW | ARMV8_USERENR_EN))
> >> > +		 || vcpu_mode_priv(vcpu));
> >> > +}
> >> > +
> >> > +static bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu)
> >> > +{
> >> > +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
> >> > +
> >> > +	return !((reg & (ARMV8_USERENR_CR | ARMV8_USERENR_EN))
> >> > +		 || vcpu_mode_priv(vcpu));
> >> > +}
> >> > +
> >> > +static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
> >> > +{
> >> > +	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
> >> > +
> >> > +	return !((reg & (ARMV8_USERENR_ER | ARMV8_USERENR_EN))
> >> > +		 || vcpu_mode_priv(vcpu));
> >> > +}
> >> > +
> >> >  static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >> >  			const struct sys_reg_desc *r)
> >> >  {
> >> > @@ -461,6 +492,9 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >> >  	if (!kvm_arm_pmu_v3_ready(vcpu))
> >> >  		return trap_raz_wi(vcpu, p, r);
> >> >  
> >> > +	if (pmu_access_el0_disabled(vcpu))
> >> > +		return false;
> > Based on the function name I'm not sure I like embedding vcpu_mode_priv.
> > It seems a condition like
> > 
> >   if (!vcpu_mode_priv(vcpu) && !pmu_access_el0_enabled(vcpu))
> >       return false;
> > 
> 
> I don't think so. The return value of pmu_access_el0_enabled wouldn't
> make sense if it didn't check the vcpu mode, and it wouldn't reflect the
> meaning of the function name: if pmu_access_el0_enabled returned false,
> that should mean EL0 access is disabled, but the vcpu mode might not
> actually be EL0.

I think it always makes sense to simply check if some bit or bits are
set in some register, without having the answer mixed up with other
state. Actually, maybe we should just drop these helpers and check the
register for the appropriate bits directly whenever needed,

  pmuserenr_el0 = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
  restricted = !vcpu_mode_priv(vcpu) && !(pmuserenr_el0 & ARMV8_USERENR_EN);
  ...

  if (restricted && !(pmuserenr_el0 & ARMV8_USERENR_CR))
     return false;


Or whatever... I won't complain about this anymore.

> 
> > would be more clear here and the other callsites below. (I also prefer
> > checking for enabled vs. disabled)
> > 
> >> > +
> >> >  	if (p->is_write) {
> >> >  		/* Only update writeable bits of PMCR */
> >> >  		val = vcpu_sys_reg(vcpu, PMCR_EL0);
> >> > @@ -484,6 +518,9 @@ static bool access_pmselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >> >  	if (!kvm_arm_pmu_v3_ready(vcpu))
> >> >  		return trap_raz_wi(vcpu, p, r);
> >> >  
> >> > +	if (pmu_access_event_counter_el0_disabled(vcpu))
> >> > +		return false;
> >> > +
> >> >  	if (p->is_write)
> >> >  		vcpu_sys_reg(vcpu, PMSELR_EL0) = p->regval;
> >> >  	else
> >> > @@ -501,7 +538,7 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >> >  	if (!kvm_arm_pmu_v3_ready(vcpu))
> >> >  		return trap_raz_wi(vcpu, p, r);
> >> >  
> >> > -	if (p->is_write)
> >> > +	if (p->is_write || pmu_access_el0_disabled(vcpu))
> >> >  		return false;
> >> >  
> >> >  	if (!(p->Op2 & 1))
> >> > @@ -534,6 +571,9 @@ static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >> >  	if (!kvm_arm_pmu_v3_ready(vcpu))
> >> >  		return trap_raz_wi(vcpu, p, r);
> >> >  
> >> > +	if (pmu_access_el0_disabled(vcpu))
> >> > +		return false;
> >> > +
> >> >  	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
> >> >  		/* PMXEVTYPER_EL0 */
> >> >  		idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
> >> > @@ -574,11 +614,17 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
> >> >  	if (r->CRn == 9 && r->CRm == 13) {
> >> >  		if (r->Op2 == 2) {
> >> >  			/* PMXEVCNTR_EL0 */
> >> > +			if (pmu_access_event_counter_el0_disabled(vcpu))
> >> > +				return false;
> >> > +
> >> >  			idx = vcpu_sys_reg(vcpu, PMSELR_EL0)
> >> >  			      & ARMV8_COUNTER_MASK;
> >> >  			reg = PMEVCNTR0_EL0 + idx;
> >> >  		} else if (r->Op2 == 0) {
> >> >  			/* PMCCNTR_EL0 */
> >> > +			if (pmu_access_cycle_counter_el0_disabled(vcpu))
> >> > +				return false;
> >> > +
> >> >  			idx = ARMV8_CYCLE_IDX;
> >> >  			reg = PMCCNTR_EL0;
> >> >  		} else {
> >> > @@ -586,6 +632,9 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
> >> >  		}
> >> >  	} else if (r->CRn == 14 && (r->CRm & 12) == 8) {
> >> >  		/* PMEVCNTRn_EL0 */
> >> > +		if (pmu_access_event_counter_el0_disabled(vcpu))
> >> > +			return false;
> >> > +
> >> >  		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
> >> >  		reg = PMEVCNTR0_EL0 + idx;
> >> >  	} else {
> >> > @@ -596,10 +645,14 @@ static bool access_pmu_evcntr(struct kvm_vcpu *vcpu,
> >> >  		return false;
> >> >  
> >> >  	val = kvm_pmu_get_counter_value(vcpu, idx);
> >> > -	if (p->is_write)
> >> > +	if (p->is_write) {
> >> > +		if (pmu_access_el0_disabled(vcpu))
> >> > +			return false;
> >> > +
> > This check isn't necessary because at this point we've either already
> > checked ARMV8_USERENR_EN with one of the other tests, or we've BUGed.
> > 
> No. Take the cycle counter, for example: if the CR bit is 1 but EN is
> zero, pmu_access_cycle_counter_el0_disabled will return false, which
> means EL0 may read the cycle counter but may not write this register,
> because the CR bit only affects read accesses.
> 
> "1 EL0 using AArch64: EL0 read accesses to the PMCCNTR_EL0 are not
> trapped to EL1."
> 
> So within the write access branch, it needs to check whether the EN bit
> is 1.

Oh yeah. Thanks for the clarification.

> 
> >> >  		vcpu_sys_reg(vcpu, reg) += (s64)p->regval - val;
> >> > -	else
> >> > +	} else {
> >> >  		p->regval = val;
> >> > +	}
> > It's nasty to have to add 3 checks to access_pmu_evcntr. Can we instead
> > just have another helper that takes a reg_idx argument, i.e.
> > 
> > static bool pmu_reg_access_el0_disabled(struct kvm_vcpu *vcpu, u64 idx)
> > {
> >   if (idx == PMCCNTR_EL0)
> >      return pmu_access_cycle_counter_el0_disabled
> >   if (idx >= PMEVCNTR0_EL0 && idx <= PMEVCNTR30_EL0)
> >      return pmu_access_event_counter_el0_disabled
> > ...
> > 
> > and call it once after the pmu_counter_idx_valid check?
> > 
> No, I don't think this is nasty, because through the if (r->CRn == 9 &&
> r->CRm == 13) ... else above we already know the type of the counter,
> i.e. cycle or event counter, so we can call the appropriate checker
> directly instead of re-distinguishing the type.
> 
> What I considered here was shortening the code path to keep it
> efficient, which is why I dropped the earlier switch...case
> implementation. That way the gap between the perf event values seen by
> guest and host stays small.
> 
> Thanks,
> -- 
> Shannon
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 08/21] KVM: ARM64: Add access handler for event type register
  2016-01-29  1:42       ` Shannon Zhao
@ 2016-01-29 11:25         ` Andrew Jones
  -1 siblings, 0 replies; 127+ messages in thread
From: Andrew Jones @ 2016-01-29 11:25 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	will.deacon, wei, cov, shannon.zhao, peter.huangpeng, hangaohuai

On Fri, Jan 29, 2016 at 09:42:00AM +0800, Shannon Zhao wrote:
> 
> 
> On 2016/1/29 4:11, Andrew Jones wrote:
> > On Wed, Jan 27, 2016 at 11:51:36AM +0800, Shannon Zhao wrote:
> >> > From: Shannon Zhao <shannon.zhao@linaro.org>
> >> > 
> >> > These kinds of registers include PMEVTYPERn, PMCCFILTR and PMXEVTYPER,
> >> > which is mapped to PMEVTYPERn or PMCCFILTR.
> >> > 
> >> > The access handler translates all aarch32 register offsets to aarch64
> >> > ones and uses vcpu_sys_reg() to access their values, avoiding the need
> >> > to handle big-endianness by hand.
> >> > 
> >> > When writing to these registers, create a perf_event for the selected
> >> > event type.
> >> > 
> >> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> >> > ---
> >> >  arch/arm64/kvm/sys_regs.c | 140 +++++++++++++++++++++++++++++++++++++++++++++-
> >> >  1 file changed, 138 insertions(+), 2 deletions(-)
> >> > 
> >> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> >> > index 06257e2..298ae94 100644
> >> > --- a/arch/arm64/kvm/sys_regs.c
> >> > +++ b/arch/arm64/kvm/sys_regs.c
> >> > @@ -513,6 +513,54 @@ static bool access_pmceid(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >> >  	return true;
> >> >  }
> >> >  
> >> > +static bool pmu_counter_idx_valid(struct kvm_vcpu *vcpu, u64 idx)
> >> > +{
> >> > +	u64 pmcr, val;
> >> > +
> >> > +	pmcr = vcpu_sys_reg(vcpu, PMCR_EL0);
> >> > +	val = (pmcr >> ARMV8_PMCR_N_SHIFT) & ARMV8_PMCR_N_MASK;
> >> > +	if (idx >= val && idx != ARMV8_CYCLE_IDX)
> >> > +		return false;
> >> > +
> >> > +	return true;
> >> > +}
> >> > +
> >> > +static bool access_pmu_evtyper(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> >> > +			       const struct sys_reg_desc *r)
> >> > +{
> >> > +	u64 idx, reg;
> >> > +
> >> > +	if (!kvm_arm_pmu_v3_ready(vcpu))
> >> > +		return trap_raz_wi(vcpu, p, r);
> >> > +
> >> > +	if (r->CRn == 9 && r->CRm == 13 && r->Op2 == 1) {
> >> > +		/* PMXEVTYPER_EL0 */
> >> > +		idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_COUNTER_MASK;
> >> > +		reg = PMEVTYPER0_EL0 + idx;
> >> > +	} else if (r->CRn == 14 && (r->CRm & 12) == 12) {
> >> > +		idx = ((r->CRm & 3) << 3) | (r->Op2 & 7);
> >> > +		if (idx == ARMV8_CYCLE_IDX)
> >> > +			reg = PMCCFILTR_EL0;
> >> > +		else
> >> > +			/* PMEVTYPERn_EL0 */
> >> > +			reg = PMEVTYPER0_EL0 + idx;
> >> > +	} else {
> >> > +		BUG();
> >> > +	}
> >> > +
> >> > +	if (!pmu_counter_idx_valid(vcpu, idx))
> >> > +		return false;
> >> > +
> >> > +	if (p->is_write) {
> >> > +		kvm_pmu_set_counter_event_type(vcpu, p->regval, idx);
> >> > +		vcpu_sys_reg(vcpu, reg) = p->regval & ARMV8_EVTYPE_MASK;
> >> > +	} else {
> >> > +		p->regval = vcpu_sys_reg(vcpu, reg) & ARMV8_EVTYPE_MASK;
> > Related to my comment in 5/21. Why should we need to mask it here when
> > reading it, since it was masked on writing?
> > 
> But what if the guest reads this register before writing to it?

Oh, I see. The need comes from the use of the reset_unknown reset function.
It might be nice to have a reset_unknown_mask function that uses r->val
as the mask, as there are many registers that have RES0/1 and/or RO fields.
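
A minimal sketch of the idea, modelled on the existing reset_unknown
(reset_unknown_mask itself is hypothetical):

	static void reset_unknown_mask(struct kvm_vcpu *vcpu,
				       const struct sys_reg_desc *r)
	{
		BUG_ON(!r->reg || r->reg >= NR_SYS_REGS);
		/* UNKNOWN value, confined to the writable bits in r->val */
		vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL & r->val;
	}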

Thanks,
drew

^ permalink raw reply	[flat|nested] 127+ messages in thread

* Re: [PATCH v10 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function
  2016-01-29 10:18             ` Will Deacon
@ 2016-01-29 13:11               ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-29 13:11 UTC (permalink / raw)
  To: Will Deacon, Shannon Zhao; +Cc: kvm, Marc Zyngier, linux-arm-kernel, kvmarm



On 2016/1/29 18:18, Will Deacon wrote:
> On Fri, Jan 29, 2016 at 02:26:31PM +0800, Shannon Zhao wrote:
>> >
>> >
>> >On 2016/1/29 2:06, Will Deacon wrote:
>>> > >On Thu, Jan 28, 2016 at 04:45:36PM +0000, Marc Zyngier wrote:
>>>>> > >> >On 28/01/16 16:31, Andrew Jones wrote:
>>>>>>> > >>> > >On Wed, Jan 27, 2016 at 11:51:35AM +0800, Shannon Zhao wrote:
>>>>>>>>> > >>>> > >>From: Shannon Zhao<shannon.zhao@linaro.org>
>>>>>>>>> > >>>> > >>
>>>>>>>>> > >>>> > >>When we use tools like perf on host, perf passes the event type and the
>>>>>>>>> > >>>> > >>id of this event type category to kernel, then kernel will map them to
>>>>>>>>> > >>>> > >>hardware event number and write this number to PMU PMEVTYPER<n>_EL0
>>>>>>>>> > >>>> > >>register. When getting the event number in KVM, directly use raw event
>>>>>>>>> > >>>> > >>type to create a perf_event for it.
>>>>>>>>> > >>>> > >>
>>>>>>>>> > >>>> > >>Signed-off-by: Shannon Zhao<shannon.zhao@linaro.org>
>>>>>>>>> > >>>> > >>Reviewed-by: Marc Zyngier<marc.zyngier@arm.com>
>>>>>>>>> > >>>> > >>---
>>>>>>>>> > >>>> > >>  arch/arm64/include/asm/pmu.h |   3 ++
>>>>>>>>> > >>>> > >>  arch/arm64/kvm/Makefile      |   1 +
>>>>>>>>> > >>>> > >>  include/kvm/arm_pmu.h        |  10 ++++
>>>>>>>>> > >>>> > >>  virt/kvm/arm/pmu.c           | 122 +++++++++++++++++++++++++++++++++++++++++++
>>>>>>>>> > >>>> > >>  4 files changed, 136 insertions(+)
>>>>>>>>> > >>>> > >>  create mode 100644 virt/kvm/arm/pmu.c
>>>>>>>>> > >>>> > >>
>>>>>>>>> > >>>> > >>diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
>>>>>>>>> > >>>> > >>index 4406184..2588f9c 100644
>>>>>>>>> > >>>> > >>--- a/arch/arm64/include/asm/pmu.h
>>>>>>>>> > >>>> > >>+++ b/arch/arm64/include/asm/pmu.h
>>>>>>>>> > >>>> > >>@@ -21,6 +21,7 @@
>>>>>>>>> > >>>> > >>
>>>>>>>>> > >>>> > >>  #define ARMV8_MAX_COUNTERS      32
>>>>>>>>> > >>>> > >>  #define ARMV8_COUNTER_MASK      (ARMV8_MAX_COUNTERS - 1)
>>>>>>>>> > >>>> > >>+#define ARMV8_CYCLE_IDX         (ARMV8_MAX_COUNTERS - 1)
>>>>>>> > >>> > >
>>>>>>> > >>> > >I'm not sure we want to add this. Its name is wrong, as it's really
>>>>>>> > >>> > >PMCNTENSET_EL0.C, and just a few lines above we have the idx defined
>>>>>>> > >>> > >already (ARMV8_IDX_CYCLE_COUNTER), but as zero, because
>>>>>>> > >>> > >arch/arm64/kernel/perf_event.c maps it that way.
>>>>>>> > >>> > >
>>>>>>> > >>> > >I think we should do the same with the pmc array, i.e. map the cycle
>>>>>>> > >>> > >counter to idx zero.
>>>>> > >> >
>>>>> > >> >I tend to have the opposite view. Not for the sake of it, but because I
>>>>> > >> >find it helpful to directly map the code to the architecture
>>>>> > >> >documentation without having to bend another handful of neurons.
>>>>> > >> >
>>>>> > >> >Will probably had some good reasons to structure it that way, but I
>>>>> > >> >don't know the rationale. Will?
>>> > >It was years ago, but I suspect that the cycle counter is index zero
>>> > >because it's mandated, whilst the number of event counters is IMPDEF.
>> >
>> >Since PMCNTENSET/CLR, PMINTENSET/CLR, PMOVSSET/CLR and PMSWINC use
>> >bit 31 to represent the state of the cycle counter, if we map the
>> >cycle counter to index zero, we always need to translate between the
>> >idx and bit 31 when we access these registers.
> Conversely, if you stick the cycle counter right at the top, then you'll
> need to rework a bunch of the perf code that iterates from
> ARMV7_IDX_COUNTER0 to pmu->num_events.

But actually this doesn't affect the perf code now, because it's just an
internal PMC array index of the KVM ARM guest PMU code.
And having a look at the ARMv8 spec, it says
"
PMCCFILTR_EL0 can also be accessed by using PMXEVTYPER_EL0 with
PMSELR_EL0.SEL set to 0b11111.
"
Apparently it treats the index of the cycle counter as 31, not zero. Also,
regarding the PMCR.N field, my understanding is that it just gives the
number of counters, not the counters' order or index. Looking at the
description of the PMSELR.SEL field, it says "This field can take any
value from 0 (0b00000) to (PMCR.N)-1, or 31 (0b11111)", not "This field
can take any value from 1 (0b00001) to (PMCR.N), or 0".
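
To illustrate the point (a hypothetical sketch, not code from the
series, and kvm_pmu_counter_bit() is a made-up helper name): with the
cycle counter kept at index 31, the bit that represents it in
PMCNTENSET_EL0 and friends is simply BIT(idx), so no remapping step is
needed:

	#define ARMV8_CYCLE_IDX		31	/* PMSELR_EL0.SEL == 0b11111 */

	/*
	 * Enable bit for counter idx in PMCNTENSET_EL0: idx 31 lands
	 * directly on the C (cycle counter) bit. With the cycle counter
	 * mapped to idx 0, every such access would need an idx <-> bit 31
	 * translation.
	 */
	static inline u64 kvm_pmu_counter_bit(u64 idx)
	{
		return BIT(idx);
	}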

Thanks,
-- 
Shannon

* Re: [PATCH v10 15/21] KVM: ARM64: Add access handler for PMUSERENR register
  2016-01-29 11:08         ` Andrew Jones
@ 2016-01-29 13:17           ` Shannon Zhao
  -1 siblings, 0 replies; 127+ messages in thread
From: Shannon Zhao @ 2016-01-29 13:17 UTC (permalink / raw)
  To: Andrew Jones, Shannon Zhao
  Cc: kvm, marc.zyngier, will.deacon, linux-arm-kernel, kvmarm



On 2016/1/29 19:08, Andrew Jones wrote:
> On Fri, Jan 29, 2016 at 03:37:26PM +0800, Shannon Zhao wrote:
>> >
>> >
>> >On 2016/1/29 3:58, Andrew Jones wrote:
>>> > >On Wed, Jan 27, 2016 at 11:51:43AM +0800, Shannon Zhao wrote:
>>>>> > >> >From: Shannon Zhao<shannon.zhao@linaro.org>
>>>>> > >> >
>>>>> > >> >This register resets as unknown in 64bit mode while it resets as zero
>>>>> > >> >in 32bit mode. Here we choose to reset it as zero for consistency.
>>>>> > >> >
>>>>> > >> >PMUSERENR_EL0 holds some bits which decide whether PMU registers can be
>>>>> > >> >accessed from EL0. Add some check helpers to handle the access from EL0.
>>>>> > >> >
>>>>> > >> >When these bits are zero, only reading PMUSERENR will trap to EL2 and
>>>>> > >> >writing PMUSERENR or reading/writing other PMU registers will trap to
>>>>> > >> >EL1 rather than EL2 when HCR.TGE==0. With the current KVM
>>>>> > >> >configuration (HCR.TGE==0) there is no way to get these traps. Here we
>>>>> > >> >write 0xf to the physical PMUSERENR register on VM entry, so that it
>>>>> > >> >will trap PMU access from EL0 to EL2. Within the register access
>>>>> > >> >handler we check the real value of the guest PMUSERENR register to
>>>>> > >> >decide whether this access is allowed. If not, return false to inject
>>>>> > >> >an UND into the guest.
>>>>> > >> >
>>>>> > >> >Signed-off-by: Shannon Zhao<shannon.zhao@linaro.org>
>>>>> > >> >---
>>>>> > >> >  arch/arm64/include/asm/pmu.h |   9 ++++
>>>>> > >> >  arch/arm64/kvm/hyp/hyp.h     |   1 +
>>>>> > >> >  arch/arm64/kvm/hyp/switch.c  |   3 ++
>>>>> > >> >  arch/arm64/kvm/sys_regs.c    | 100 ++++++++++++++++++++++++++++++++++++++++---
>>>>> > >> >  4 files changed, 107 insertions(+), 6 deletions(-)
>>>>> > >> >
>>>>> > >> >diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
>>>>> > >> >index 6f14a01..eb3dc88 100644
>>>>> > >> >--- a/arch/arm64/include/asm/pmu.h
>>>>> > >> >+++ b/arch/arm64/include/asm/pmu.h
>>>>> > >> >@@ -69,4 +69,13 @@
>>>>> > >> >  #define	ARMV8_EXCLUDE_EL0	(1 << 30)
>>>>> > >> >  #define	ARMV8_INCLUDE_EL2	(1 << 27)
>>>>> > >> >
>>>>> > >> >+/*
>>>>> > >> >+ * PMUSERENR: user enable reg
>>>>> > >> >+ */
>>>>> > >> >+#define ARMV8_USERENR_MASK	0xf		/* Mask for writable bits */
>>>>> > >> >+#define ARMV8_USERENR_EN	(1 << 0) /* PMU regs can be accessed at EL0 */
>>>>> > >> >+#define ARMV8_USERENR_SW	(1 << 1) /* PMSWINC can be written at EL0 */
>>>>> > >> >+#define ARMV8_USERENR_CR	(1 << 2) /* Cycle counter can be read at EL0 */
>>>>> > >> >+#define ARMV8_USERENR_ER	(1 << 3) /* Event counter can be read at EL0 */
>>>>> > >> >+
>>>>> > >> >  #endif /* __ASM_PMU_H */
>>>>> > >> >diff --git a/arch/arm64/kvm/hyp/hyp.h b/arch/arm64/kvm/hyp/hyp.h
>>>>> > >> >index fb27517..9a28b7bd8 100644
>>>>> > >> >--- a/arch/arm64/kvm/hyp/hyp.h
>>>>> > >> >+++ b/arch/arm64/kvm/hyp/hyp.h
>>>>> > >> >@@ -22,6 +22,7 @@
>>>>> > >> >  #include <linux/kvm_host.h>
>>>>> > >> >  #include <asm/kvm_mmu.h>
>>>>> > >> >  #include <asm/sysreg.h>
>>>>> > >> >+#include <asm/pmu.h>
>>>>> > >> >
>>>>> > >> >  #define __hyp_text __section(.hyp.text) notrace
>>>>> > >> >
>>>>> > >> >diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
>>>>> > >> >index ca8f5a5..1a7d679 100644
>>>>> > >> >--- a/arch/arm64/kvm/hyp/switch.c
>>>>> > >> >+++ b/arch/arm64/kvm/hyp/switch.c
>>>>> > >> >@@ -37,6 +37,8 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
>>>>> > >> >  	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
>>>>> > >> >  	write_sysreg(1 << 15, hstr_el2);
>>>>> > >> >  	write_sysreg(CPTR_EL2_TTA | CPTR_EL2_TFP, cptr_el2);
>>>>> > >> >+	/* Make sure we trap PMU access from EL0 to EL2 */
>>>>> > >> >+	write_sysreg(ARMV8_USERENR_MASK, pmuserenr_el0);
>>>>> > >> >  	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
>>>>> > >> >  }
>>>>> > >> >
>>>>> > >> >@@ -45,6 +47,7 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
>>>>> > >> >  	write_sysreg(HCR_RW, hcr_el2);
>>>>> > >> >  	write_sysreg(0, hstr_el2);
>>>>> > >> >  	write_sysreg(read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK, mdcr_el2);
>>>>> > >> >+	write_sysreg(0, pmuserenr_el0);
>>>>> > >> >  	write_sysreg(0, cptr_el2);
>>>>> > >> >  }
>>>>> > >> >
>>>>> > >> >diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>>>> > >> >index eefc60a..084e527 100644
>>>>> > >> >--- a/arch/arm64/kvm/sys_regs.c
>>>>> > >> >+++ b/arch/arm64/kvm/sys_regs.c
>>>>> > >> >@@ -453,6 +453,37 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>>>>> > >> >  	vcpu_sys_reg(vcpu, PMCR_EL0) = val;
>>>>> > >> >  }
>>>>> > >> >
>>>>> > >> >+static bool pmu_access_el0_disabled(struct kvm_vcpu *vcpu)
>>>>> > >> >+{
>>>>> > >> >+	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
>>>>> > >> >+
>>>>> > >> >+	return !((reg & ARMV8_USERENR_EN) || vcpu_mode_priv(vcpu));
>>>>> > >> >+}
>>>>> > >> >+
>>>>> > >> >+static bool pmu_write_swinc_el0_disabled(struct kvm_vcpu *vcpu)
>>>>> > >> >+{
>>>>> > >> >+	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
>>>>> > >> >+
>>>>> > >> >+	return !((reg & (ARMV8_USERENR_SW | ARMV8_USERENR_EN))
>>>>> > >> >+		 || vcpu_mode_priv(vcpu));
>>>>> > >> >+}
>>>>> > >> >+
>>>>> > >> >+static bool pmu_access_cycle_counter_el0_disabled(struct kvm_vcpu *vcpu)
>>>>> > >> >+{
>>>>> > >> >+	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
>>>>> > >> >+
>>>>> > >> >+	return !((reg & (ARMV8_USERENR_CR | ARMV8_USERENR_EN))
>>>>> > >> >+		 || vcpu_mode_priv(vcpu));
>>>>> > >> >+}
>>>>> > >> >+
>>>>> > >> >+static bool pmu_access_event_counter_el0_disabled(struct kvm_vcpu *vcpu)
>>>>> > >> >+{
>>>>> > >> >+	u64 reg = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
>>>>> > >> >+
>>>>> > >> >+	return !((reg & (ARMV8_USERENR_ER | ARMV8_USERENR_EN))
>>>>> > >> >+		 || vcpu_mode_priv(vcpu));
>>>>> > >> >+}
>>>>> > >> >+
>>>>> > >> >  static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>>>>> > >> >  			const struct sys_reg_desc *r)
>>>>> > >> >  {
>>>>> > >> >@@ -461,6 +492,9 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>>>>> > >> >  	if (!kvm_arm_pmu_v3_ready(vcpu))
>>>>> > >> >  		return trap_raz_wi(vcpu, p, r);
>>>>> > >> >
>>>>> > >> >+	if (pmu_access_el0_disabled(vcpu))
>>>>> > >> >+		return false;
>>> > >Based on the function name I'm not sure I like embedding vcpu_mode_priv.
>>> > >It seems a condition like
>>> > >
>>> > >   if (!vcpu_mode_priv(vcpu) && !pmu_access_el0_enabled(vcpu))
>>> > >       return false;
>>> > >
>> >
>> >I don't think so. The return value of pmu_access_el0_enabled wouldn't
>> >make sense if it didn't check the vcpu mode, and it wouldn't reflect
>> >the meaning of the function name: if pmu_access_el0_enabled returned
>> >false, that should mean EL0 access is disabled, but the vcpu mode
>> >might not actually be EL0.
> I think it always makes sense to simply check if some bit or bits are
> set in some register, without having the answer mixed up with other
> state.
But the final result is what we want.

> Actually, maybe we should just drop these helpers and check the
> register for the appropriate bits directly whenever needed,
>
>    pmuserenr_el0 = vcpu_sys_reg(vcpu, PMUSERENR_EL0);
>    restricted = !vcpu_mode_priv(vcpu) && !(pmuserenr_el0 & ARMV8_USERENR_EN);
>    ...
>
>    if (restricted && !(pmuserenr_el0 & ARMV8_USERENR_CR))
>       return false;
>
>
I would say no, since this would add a lot of duplicated code; that's
why we added the helpers to factor it out.
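
For instance (a sketch of a hypothetical call site, following the
pattern the patch already uses in access_pmcr()), each handler for an
EL0-accessible PMU register reduces to a single check:

	/* e.g. at the top of a handler for an event counter register */
	if (pmu_access_event_counter_el0_disabled(vcpu))
		return false;	/* the caller injects an UND into the guest */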

> Or whatever... I won't complain about this anymore.
>

Thanks,
-- 
Shannon

* Re: [PATCH v10 01/21] ARM64: Move PMU register related defines to asm/pmu.h
  2016-01-27  3:51   ` Shannon Zhao
@ 2016-02-10 10:36     ` Will Deacon
  -1 siblings, 0 replies; 127+ messages in thread
From: Will Deacon @ 2016-02-10 10:36 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: kvmarm, marc.zyngier, christoffer.dall, linux-arm-kernel, kvm,
	wei, drjones, cov, shannon.zhao, peter.huangpeng, hangaohuai,
	Anup Patel

On Wed, Jan 27, 2016 at 11:51:29AM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> To use the ARMv8 PMU related register defines from the KVM code,
> we move the relevant definitions to asm/pmu.h header file.
> 
> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/include/asm/pmu.h   | 67 ++++++++++++++++++++++++++++++++++++++++++
>  arch/arm64/kernel/perf_event.c | 36 +----------------------
>  2 files changed, 68 insertions(+), 35 deletions(-)
>  create mode 100644 arch/arm64/include/asm/pmu.h
> 
> diff --git a/arch/arm64/include/asm/pmu.h b/arch/arm64/include/asm/pmu.h
> new file mode 100644
> index 0000000..4406184
> --- /dev/null
> +++ b/arch/arm64/include/asm/pmu.h

I think you can stick this in perf_event.h and avoid having a brand
new header.

> @@ -0,0 +1,67 @@
> +/*
> + * PMU support
> + *
> + * Copyright (C) 2012 ARM Limited
> + * Author: Will Deacon <will.deacon@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +#ifndef __ASM_PMU_H
> +#define __ASM_PMU_H
> +
> +#define ARMV8_MAX_COUNTERS      32
> +#define ARMV8_COUNTER_MASK      (ARMV8_MAX_COUNTERS - 1)

[...]

> +/*
> + * Event filters for PMUv3
> + */
> +#define	ARMV8_EXCLUDE_EL1	(1 << 31)
> +#define	ARMV8_EXCLUDE_EL0	(1 << 30)
> +#define	ARMV8_INCLUDE_EL2	(1 << 27)

You should prefix these more specifically if they're going to be exposed
like this. Something like ARMV8_PMU_*.
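
Concretely, the suggested renaming might look like this (a sketch of
the suggestion only, not the eventual patch; using 1U also avoids
shifting a signed int into its sign bit):

	/*
	 * Event filters for PMUv3
	 */
	#define ARMV8_PMU_EXCLUDE_EL1	(1U << 31)
	#define ARMV8_PMU_EXCLUDE_EL0	(1U << 30)
	#define ARMV8_PMU_INCLUDE_EL2	(1U << 27)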

Will

end of thread

Thread overview: 127+ messages
2016-01-27  3:51 [PATCH v10 00/21] KVM: ARM64: Add guest PMU support Shannon Zhao
2016-01-27  3:51 ` [PATCH v10 01/21] ARM64: Move PMU register related defines to asm/pmu.h Shannon Zhao
2016-02-10 10:36   ` Will Deacon
2016-01-27  3:51 ` [PATCH v10 02/21] KVM: ARM64: Define PMU data structure for each vcpu Shannon Zhao
2016-01-27  3:51 ` [PATCH v10 03/21] KVM: ARM64: Add offset defines for PMU registers Shannon Zhao
2016-01-27  3:51 ` [PATCH v10 04/21] KVM: ARM64: Add access handler for PMCR register Shannon Zhao
2016-01-28 15:36   ` Andrew Jones
2016-01-28 20:43     ` Andrew Jones
2016-01-29  2:07       ` Shannon Zhao
2016-01-27  3:51 ` [PATCH v10 05/21] KVM: ARM64: Add access handler for PMSELR register Shannon Zhao
2016-01-28 20:10   ` Andrew Jones
2016-01-27  3:51 ` [PATCH v10 06/21] KVM: ARM64: Add access handler for PMCEID0 and PMCEID1 register Shannon Zhao
2016-01-28 20:34   ` Andrew Jones
2016-01-29  3:47     ` Shannon Zhao
2016-01-27  3:51 ` [PATCH v10 07/21] KVM: ARM64: PMU: Add perf event map and introduce perf event creating function Shannon Zhao
2016-01-28 16:31   ` Andrew Jones
2016-01-28 16:45     ` Marc Zyngier
2016-01-28 18:06       ` Will Deacon
2016-01-29  6:14         ` Shannon Zhao
2016-01-29  6:26         ` Shannon Zhao
2016-01-29 10:18           ` Will Deacon
2016-01-29 13:11             ` Shannon Zhao
2016-01-27  3:51 ` [PATCH v10 08/21] KVM: ARM64: Add access handler for event type register Shannon Zhao
2016-01-28 20:11   ` Andrew Jones
2016-01-29  1:42     ` Shannon Zhao
2016-01-29 11:25       ` Andrew Jones
2016-01-27  3:51 ` [PATCH v10 09/21] KVM: ARM64: Add access handler for event counter register Shannon Zhao
2016-01-27  3:51 ` [PATCH v10 10/21] KVM: ARM64: Add access handler for PMCNTENSET and PMCNTENCLR register Shannon Zhao
2016-01-28 18:08   ` Andrew Jones
2016-01-28 18:12     ` Andrew Jones
2016-01-27  3:51 ` [PATCH v10 11/21] KVM: ARM64: Add access handler for PMINTENSET and PMINTENCLR register Shannon Zhao
2016-01-28 18:18   ` Andrew Jones
2016-01-27  3:51 ` [PATCH v10 12/21] KVM: ARM64: Add access handler for PMOVSSET and PMOVSCLR register Shannon Zhao
2016-01-27  3:51 ` [PATCH v10 13/21] KVM: ARM64: Add access handler for PMSWINC register Shannon Zhao
2016-01-28 18:37   ` Andrew Jones
2016-01-27  3:51 ` [PATCH v10 14/21] KVM: ARM64: Add helper to handle PMCR register bits Shannon Zhao
2016-01-28 19:15   ` Andrew Jones
2016-01-27  3:51 ` [PATCH v10 15/21] KVM: ARM64: Add access handler for PMUSERENR register Shannon Zhao
2016-01-28 19:58   ` Andrew Jones
2016-01-29  7:37     ` Shannon Zhao
2016-01-29 11:08       ` Andrew Jones
2016-01-29 13:17         ` Shannon Zhao
2016-01-27  3:51 ` [PATCH v10 16/21] KVM: ARM64: Add PMU overflow interrupt routing Shannon Zhao
2016-01-27  3:51 ` [PATCH v10 17/21] KVM: ARM64: Reset PMU state when resetting vcpu Shannon Zhao
2016-01-27  3:51 ` [PATCH v10 18/21] KVM: ARM64: Free perf event of PMU when destroying vcpu Shannon Zhao
2016-01-27  3:51 ` [PATCH v10 19/21] KVM: ARM64: Add a new feature bit for PMUv3 Shannon Zhao
2016-01-28 20:54   ` Andrew Jones
2016-01-27  3:51 ` [PATCH v10 20/21] KVM: ARM: Introduce per-vcpu kvm device controls Shannon Zhao
2016-01-27  3:51 ` [PATCH v10 21/21] KVM: ARM64: Add a new vcpu device control group for PMUv3 Shannon Zhao
2016-01-28 21:12   ` Andrew Jones
2016-01-28 21:30 ` [PATCH v10 00/21] KVM: ARM64: Add guest PMU support Andrew Jones
