* [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support
@ 2014-05-13 15:53 Boris Ostrovsky
  2014-05-13 15:53 ` [PATCH v6 01/19] common/symbols: Export hypervisor symbols to privileged guest Boris Ostrovsky
                   ` (19 more replies)
  0 siblings, 20 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky


Here is the sixth version of PV(H) PMU patches.

Changes in v6:

* Two new patches:
  o Merge VMX MSR add/remove routines in vmcs.c (patch 5)
  o Merge VPMU read/write MSR routines in vpmu.c (patch 14)
* Check for a pending NMI softirq after saving VPMU context to prevent a newly-scheduled
  guest from overwriting sampled_vcpu written by the de-scheduled VCPU.
* Keep track of enabled counters on Intel. This was removed in earlier patches and
  was a mistake. As a result of this change, struct vpmu will have a pointer to private
  context data (i.e. data that is not exposed to a PV(H) guest). Use this private pointer
  on SVM as well for storing MSR bitmap status (it was unnecessarily exposed to PV guests
  earlier).
* Dropped Reviewed-by: and Tested-by: tags from patch 4 since it needs to be reviewed
  again (mostly the core2_vpmu_do_wrmsr() routine)
* Replaced references to dom0 with hardware_domain (and is_control_domain with
  is_hardware_domain for consistency)
* Prevent non-privileged domains from reading PMU MSRs in VPMU_PRIV_MODE
* Reverted unnecessary changes in vpmu_initialise()'s switch statement
* Fixed comment in vpmu_do_interrupt


Changes in v5:

* Dropped patch number 2 ("Stop AMD counters when called from vpmu_save_force()")
  as no longer needed
* Added patch number 2 that marks context as loaded before PMU registers are
  loaded. This prevents a situation where a PMU interrupt may occur while the context
  is still viewed as not loaded. (This is really a bug fix for existing VPMU
  code)
* Renamed xenpmu.h files to pmu.h
* More careful use of is_pv_domain(), is_hvm_domain(), is_pvh_domain() and
  has_hvm_container_domain(). Also explicitly disabled support for PVH until
  patch 16 to make the distinction between usages of the above macros clearer.
* Added support for disabling VPMU support during runtime.
* Disable VPMUs for non-privileged domains when switching to privileged
  profiling mode
* Added ARM stub for xen_arch_pmu_t
* Separated vpmu_mode from vpmu_features
* Moved the CS register query to make sure we use the appropriate query mechanism
  for various guest types.
* LVTPC is now set from value in shared area, not copied from dom0
* Various code and comments cleanup as suggested by Jan.

Changes in v4:

* Added support for PVH guests:
  o changes in pvpmu_init() to accommodate both PV and PVH guests, still in patch 10
  o more careful use of is_hvm_domain
  o Additional patch (16)
* Moved HVM interrupt handling out of vpmu_do_interrupt() for NMI-safe handling
* Fixed dom0's VCPU selection in privileged mode
* Added a cast to cpu_user_regs_t in the register copy for 32-bit PV guests in
  vpmu_do_interrupt. (We don't want to expose compat_cpu_user_regs in a public header)
* Renamed public structures by prefixing them with "xen_"
* Added an entry for xenpf_symdata in xlat.lst
* Fixed pv_cpuid check for vpmu-specific cpuid adjustments
* Various code style fixes
* Eliminated anonymous unions
* Added more verbiage to NMI patch description


Changes in v3:

* Moved PMU MSR banks out from architectural context data structures to allow
for future expansion without protocol changes
* PMU interrupts can be either NMIs or regular vector interrupts (the latter
is the default)
* Context is now marked as PMU_CACHED by the hypervisor code to avoid certain
race conditions with the guest
* Fixed races with PV guest in MSR access handlers
* More Intel VPMU cleanup
* Moved NMI-unsafe code from NMI handler
* Dropped changes to vcpu->is_running
* Added LVTPC apic handling (cached for PV guests)
* Separated privileged profiling mode into a standalone patch
* Separated NMI handling into a standalone patch


Changes in v2:

* Xen symbols are exported as a data structure (as opposed to a set of formatted
strings in v1). Even though one symbol is returned per hypercall, performance
appears to be acceptable: reading the whole file from dom0 userland takes on average
about twice as long as reading /proc/kallsyms
* More cleanup of Intel VPMU code to simplify publicly exported structures
* There are architecture-independent and x86-specific public include files (ARM
has a stub)
* General cleanup of public include files to make them more presentable (and
to make auto doc generation better)
* Setting of vcpu->is_running is now done on ARM in schedule_tail as well (making
changes to common/schedule.c architecture-independent). Note that this is not
tested since I don't have access to ARM hardware.
* The PCPU ID of the interrupted processor is now passed to the PV guest


The following patch series adds PMU support in Xen for PV(H)
guests. There is a companion patchset for the Linux kernel. In addition,
another set of changes will be provided (later) for userland perf
code.

This version has the following limitations:
* For accurate profiling of dom0/Xen, dom0 VCPUs should be pinned.
* Hypervisor code is only profiled on processors that have running dom0 VCPUs
on them.
* No backtrace support.
* Will fail to load under XSM: we ran out of bits in the permissions vector and
this needs to be fixed separately.

A few notes that may help reviewing:

* A shared data structure (xenpmu_data_t) between each PV VCPU and hypervisor
CPU is used for passing register values as well as PMU state at the time of the
PMU interrupt.
* PMU interrupts are taken by the hypervisor either as NMIs or regular vector
interrupts for both HVM and PV(H). The interrupts are sent as NMIs to HVM guests
and as virtual interrupts to PV(H) guests.
* A PV guest's interrupt handler does not read/write PMU MSRs directly. Instead, it
accesses xenpmu_data_t and flushes it to HW before returning (see the sketch after
this list).
* PMU mode is controlled at runtime via /sys/hypervisor/pmu/pmu/{pmu_mode,pmu_flags}
in addition to the 'vpmu' boot option (which is preserved for backward compatibility).
The following modes are provided:
  * disable: VPMU is off
  * enable: VPMU is on. Guests can profile themselves, dom0 profiles itself and Xen
  * priv_enable: dom0-only profiling. dom0 collects samples for everyone; sampling
    in guests is suspended.
* /proc/xen/xensyms file exports hypervisor's symbols to dom0 (similar to
/proc/kallsyms)
* The VPMU infrastructure is now used for HVM, PV and PVH and has therefore been
moved up from the hvm subtree
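
For illustration, here is a minimal sketch of the PV guest side of the
interrupt handling described above. All names except xenpmu_data_t are
assumptions made up for this sketch; the real guest-side code lives in the
companion Linux patchset:

    /* Hypothetical PV guest PMU interrupt handler -- illustrative only. */
    static void xen_pmu_interrupt_handler(void)
    {
        /* Per-VCPU area shared with Xen; Xen fills it with register values
         * and PMU state at the time of the interrupt. */
        struct xenpmu_data *pmu_data = this_cpu_xenpmu_data();

        process_sample(pmu_data);        /* consume registers and PMU state */

        /* No direct PMU MSR accesses: update the shared image and flush it
         * back to hardware before returning. */
        xen_pmu_flush_to_hw(pmu_data);
    }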



Boris Ostrovsky (19):
  common/symbols: Export hypervisor symbols to privileged guest
  VPMU: Mark context LOADED before registers are loaded
  x86/VPMU: Minor VPMU cleanup
  intel/VPMU: Clean up Intel VPMU code
  vmx: Merge MSR management routines
  x86/VPMU: Handle APIC_LVTPC accesses
  intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL should be initialized to zero
  x86/VPMU: Add public xenpmu.h
  x86/VPMU: Make vpmu not HVM-specific
  x86/VPMU: Interface for setting PMU mode and flags
  x86/VPMU: Initialize PMU for PV guests
  x86/VPMU: Add support for PMU register handling on PV guests
  x86/VPMU: Handle PMU interrupts for PV guests
  x86/VPMU: Merge vpmu_rdmsr and vpmu_wrmsr
  x86/VPMU: Add privileged PMU mode
  x86/VPMU: Save VPMU state for PV guests during context switch
  x86/VPMU: NMI-based VPMU support
  x86/VPMU: Support for PVH guests
  x86/VPMU: Move VPMU files up from hvm/ directory

 xen/arch/x86/Makefile                    |   1 +
 xen/arch/x86/domain.c                    |  15 +-
 xen/arch/x86/hvm/Makefile                |   1 -
 xen/arch/x86/hvm/hvm.c                   |   3 +-
 xen/arch/x86/hvm/svm/Makefile            |   1 -
 xen/arch/x86/hvm/svm/svm.c               |  15 +-
 xen/arch/x86/hvm/svm/vpmu.c              | 494 ----------------
 xen/arch/x86/hvm/vlapic.c                |   5 +-
 xen/arch/x86/hvm/vmx/Makefile            |   1 -
 xen/arch/x86/hvm/vmx/vmcs.c              | 115 ++--
 xen/arch/x86/hvm/vmx/vmx.c               |  21 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 935 ------------------------------
 xen/arch/x86/hvm/vpmu.c                  | 266 ---------
 xen/arch/x86/oprofile/op_model_ppro.c    |   8 +-
 xen/arch/x86/platform_hypercall.c        |  18 +
 xen/arch/x86/traps.c                     |  42 +-
 xen/arch/x86/vpmu.c                      | 726 +++++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c                  | 510 +++++++++++++++++
 xen/arch/x86/vpmu_intel.c                | 948 +++++++++++++++++++++++++++++++
 xen/arch/x86/x86_64/compat/entry.S       |   4 +
 xen/arch/x86/x86_64/entry.S              |   4 +
 xen/arch/x86/x86_64/platform_hypercall.c |   2 +
 xen/common/event_channel.c               |   1 +
 xen/common/symbols.c                     |  50 +-
 xen/common/vsprintf.c                    |   2 +-
 xen/include/asm-x86/domain.h             |   2 +
 xen/include/asm-x86/hvm/vcpu.h           |   3 -
 xen/include/asm-x86/hvm/vmx/vmcs.h       |  10 +-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  51 --
 xen/include/asm-x86/hvm/vpmu.h           | 104 ----
 xen/include/asm-x86/vpmu.h               | 102 ++++
 xen/include/public/arch-arm.h            |   3 +
 xen/include/public/arch-x86/pmu.h        |  62 ++
 xen/include/public/platform.h            |  16 +
 xen/include/public/pmu.h                 |  99 ++++
 xen/include/public/xen.h                 |   2 +
 xen/include/xen/hypercall.h              |   4 +
 xen/include/xen/softirq.h                |   1 +
 xen/include/xen/symbols.h                |   6 +-
 xen/include/xlat.lst                     |   1 +
 40 files changed, 2727 insertions(+), 1927 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h
 create mode 100644 xen/include/public/arch-x86/pmu.h
 create mode 100644 xen/include/public/pmu.h

-- 
1.8.1.4


* [PATCH v6 01/19] common/symbols: Export hypervisor symbols to privileged guest
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-16  8:05   ` Jan Beulich
  2014-05-13 15:53 ` [PATCH v6 02/19] VPMU: Mark context LOADED before registers are loaded Boris Ostrovsky
                   ` (18 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Export Xen's symbols as {<address><type><name>} triplets via the new
XENPF_get_symbol hypercall.

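For context (not part of this patch), a privileged guest could dump all
hypervisor symbols roughly as follows; do_platform_op() is an assumed
stand-in for whatever hypercall plumbing the caller uses:

    struct xen_platform_op op = { .cmd = XENPF_get_symbol };
    char name[XEN_KSYM_NAME_LEN + 1];
    uint32_t prev;

    set_xen_guest_handle(op.u.symdata.name, name);
    for ( op.u.symdata.symnum = 0; ; )
    {
        prev = op.u.symdata.symnum;
        /* Only strlen(name) bytes are copied back, so clear the buffer. */
        memset(name, 0, sizeof(name));
        if ( do_platform_op(&op) != 0 || op.u.symdata.symnum == prev )
            break;   /* error, or end of table (symnum no longer advances) */
        printf("%016"PRIx64" %c %s\n",
               op.u.symdata.address, (char)op.u.symdata.type, name);
    }
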
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/platform_hypercall.c        | 18 ++++++++++++
 xen/arch/x86/x86_64/platform_hypercall.c |  2 ++
 xen/common/symbols.c                     | 50 +++++++++++++++++++++++++++++++-
 xen/common/vsprintf.c                    |  2 +-
 xen/include/public/platform.h            | 16 ++++++++++
 xen/include/xen/symbols.h                |  6 ++--
 xen/include/xlat.lst                     |  1 +
 7 files changed, 91 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 2162811..0a93037 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -23,6 +23,7 @@
 #include <xen/cpu.h>
 #include <xen/pmstat.h>
 #include <xen/irq.h>
+#include <xen/symbols.h>
 #include <asm/current.h>
 #include <public/platform.h>
 #include <acpi/cpufreq/processor_perf.h>
@@ -601,6 +602,23 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     }
     break;
 
+    case XENPF_get_symbol:
+    {
+        char name[XEN_KSYM_NAME_LEN + 1];
+        XEN_GUEST_HANDLE(char) nameh;
+
+        guest_from_compat_handle(nameh, op->u.symdata.name);
+
+        ret = xensyms_read(&op->u.symdata.symnum, &op->u.symdata.type,
+                           &op->u.symdata.address, name);
+
+        if ( !ret && copy_to_guest(nameh, name, strlen(name)) )
+            ret = -EFAULT;
+        if ( !ret && __copy_field_to_guest(u_xenpf_op, op, u.symdata) )
+            ret = -EFAULT;
+    }
+    break;
+ 
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/arch/x86/x86_64/platform_hypercall.c b/xen/arch/x86/x86_64/platform_hypercall.c
index b6f380e..795837f 100644
--- a/xen/arch/x86/x86_64/platform_hypercall.c
+++ b/xen/arch/x86/x86_64/platform_hypercall.c
@@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
 CHECK_pf_enter_acpi_sleep;
 #undef xen_pf_enter_acpi_sleep
 
+#define xenpf_symdata   compat_pf_symdata
+
 #define COMPAT
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
 #define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE_PARAM(t)
diff --git a/xen/common/symbols.c b/xen/common/symbols.c
index 45941e1..bc83f76 100644
--- a/xen/common/symbols.c
+++ b/xen/common/symbols.c
@@ -17,6 +17,8 @@
 #include <xen/lib.h>
 #include <xen/string.h>
 #include <xen/spinlock.h>
+#include <public/platform.h>
+#include <xen/guest_access.h>
 
 #ifdef SYMBOLS_ORIGIN
 extern const unsigned int symbols_offsets[1];
@@ -107,7 +109,7 @@ const char *symbols_lookup(unsigned long addr,
     unsigned long i, low, high, mid;
     unsigned long symbol_end = 0;
 
-    namebuf[KSYM_NAME_LEN] = 0;
+    namebuf[XEN_KSYM_NAME_LEN] = 0;
     namebuf[0] = 0;
 
     if (!is_active_kernel_text(addr))
@@ -148,3 +150,49 @@ const char *symbols_lookup(unsigned long addr,
     *offset = addr - symbols_address(low);
     return namebuf;
 }
+
+/*
+ * Get symbol type information. This is encoded as a single char at the
+ * beginning of the symbol name.
+ */
+static char symbols_get_symbol_type(unsigned int off)
+{
+    /*
+     * Get just the first code, look it up in the token table,
+     * and return the first char from this token.
+     */
+    return symbols_token_table[symbols_token_index[symbols_names[off + 1]]];
+}
+
+/*
+ * Symbols are most likely accessed sequentially so we remember position from
+ * previous read. This can help us avoid the extra call to get_symbol_offset().
+ */
+static uint64_t next_symbol, next_offset;
+static DEFINE_SPINLOCK(symbols_mutex);
+
+int xensyms_read(uint32_t *symnum, uint32_t *type, uint64_t *address, char *name)
+{
+    if ( *symnum > symbols_num_syms )
+        return -ERANGE;
+    if ( *symnum == symbols_num_syms )
+        return 0;
+
+    spin_lock(&symbols_mutex);
+
+    if ( *symnum == 0 )
+        next_offset = next_symbol = 0;
+    if ( next_symbol != *symnum )
+        /* Non-sequential access */
+        next_offset = get_symbol_offset(*symnum);
+
+    *type = symbols_get_symbol_type(next_offset);
+    next_offset = symbols_expand_symbol(next_offset, name);
+    *address = symbols_offsets[*symnum] + SYMBOLS_ORIGIN;
+
+    next_symbol = ++*symnum;
+
+    spin_unlock(&symbols_mutex);
+
+    return 0;
+}
diff --git a/xen/common/vsprintf.c b/xen/common/vsprintf.c
index 8c43282..696c3e9 100644
--- a/xen/common/vsprintf.c
+++ b/xen/common/vsprintf.c
@@ -276,7 +276,7 @@ static char *pointer(char *str, char *end, const char **fmt_ptr,
     case 'S': /* Symbol name unconditionally with offset and size */
     {
         unsigned long sym_size, sym_offset;
-        char namebuf[KSYM_NAME_LEN+1];
+        char namebuf[XEN_KSYM_NAME_LEN+1];
 
         /* Advance parents fmt string, as we have consumed 's' or 'S' */
         ++*fmt_ptr;
diff --git a/xen/include/public/platform.h b/xen/include/public/platform.h
index 053b9fa..f52448c 100644
--- a/xen/include/public/platform.h
+++ b/xen/include/public/platform.h
@@ -527,6 +527,21 @@ struct xenpf_core_parking {
 typedef struct xenpf_core_parking xenpf_core_parking_t;
 DEFINE_XEN_GUEST_HANDLE(xenpf_core_parking_t);
 
+#define XENPF_get_symbol   61
+#define XEN_KSYM_NAME_LEN 127
+struct xenpf_symdata {
+    /* IN variables */
+    uint32_t symnum;
+
+    /* OUT variables */
+    uint32_t type;
+    uint64_t address;
+
+    XEN_GUEST_HANDLE(char) name;
+};
+typedef struct xenpf_symdata xenpf_symdata_t;
+DEFINE_XEN_GUEST_HANDLE(xenpf_symdata_t);
+
 /*
  * ` enum neg_errnoval
  * ` HYPERVISOR_platform_op(const struct xen_platform_op*);
@@ -553,6 +568,7 @@ struct xen_platform_op {
         struct xenpf_cpu_hotadd        cpu_add;
         struct xenpf_mem_hotadd        mem_add;
         struct xenpf_core_parking      core_parking;
+        struct xenpf_symdata           symdata;
         uint8_t                        pad[128];
     } u;
 };
diff --git a/xen/include/xen/symbols.h b/xen/include/xen/symbols.h
index 87cd77d..3017449 100644
--- a/xen/include/xen/symbols.h
+++ b/xen/include/xen/symbols.h
@@ -2,8 +2,7 @@
 #define _XEN_SYMBOLS_H
 
 #include <xen/types.h>
-
-#define KSYM_NAME_LEN 127
+#include <public/platform.h>
 
 /* Lookup an address. */
 const char *symbols_lookup(unsigned long addr,
@@ -11,4 +10,7 @@ const char *symbols_lookup(unsigned long addr,
                            unsigned long *offset,
                            char *namebuf);
 
+int xensyms_read(uint32_t *symnum, uint32_t *type,
+                 uint64_t *address, char *name);
+
 #endif /*_XEN_SYMBOLS_H*/
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index 9a35dd7..c8fafef 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -86,6 +86,7 @@
 ?	processor_px			platform.h
 !	psd_package			platform.h
 ?	xenpf_enter_acpi_sleep		platform.h
+!	xenpf_symdata			platform.h
 ?	xenpf_pcpuinfo			platform.h
 ?	xenpf_pcpu_version		platform.h
 !	sched_poll			sched.h
-- 
1.8.1.4


* [PATCH v6 02/19] VPMU: Mark context LOADED before registers are loaded
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
  2014-05-13 15:53 ` [PATCH v6 01/19] common/symbols: Export hypervisor symbols to privileged guest Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-19 14:18   ` Jan Beulich
  2014-05-13 15:53 ` [PATCH v6 03/19] x86/VPMU: Minor VPMU cleanup Boris Ostrovsky
                   ` (17 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Because a PMU interrupt may be generated as soon as PMU registers are loaded (or,
more precisely, as soon as the HW PMU is "armed"), we don't want to delay marking
the context as LOADED until after the registers are loaded. Otherwise, during
interrupt handling, VPMU_CONTEXT_LOADED may not yet be set, which is confusing.

(Technically, only SVM needs this change right now, since VMX will "arm" the PMU
later, during VMRUN, when the global control register is loaded from the VMCS.
However, both the AMD and Intel code will require this patch when we introduce
PV VPMU.)
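
A sketch of the window being closed (not actual code):

    /*
     *   vpmu_load()                        PMU interrupt
     *   -----------                        -------------
     *   arch_vpmu_load(v)    <- HW armed;
     *                           counter can
     *                           overflow here -> vpmu_do_interrupt() sees
     *                                            VPMU_CONTEXT_LOADED unset
     *   vpmu_set(vpmu, VPMU_CONTEXT_LOADED)  <- too late
     *
     * With this patch the arch code sets VPMU_CONTEXT_LOADED before loading
     * the registers, so the flag is already visible if an interrupt fires.
     */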

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       | 2 ++
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 2 ++
 xen/arch/x86/hvm/vpmu.c           | 3 +--
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 66a3815..3ac7d53 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -203,6 +203,8 @@ static void amd_vpmu_load(struct vcpu *v)
         return;
     }
 
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
     context_load(v);
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 3129ebd..ccd14d9 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -369,6 +369,8 @@ static void core2_vpmu_load(struct vcpu *v)
     if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
         return;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
     __core2_vpmu_load(v);
 }
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 21fbaba..63765fa 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -211,10 +211,9 @@ void vpmu_load(struct vcpu *v)
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
     {
         apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+        /* Arch code needs to set VPMU_CONTEXT_LOADED */
         vpmu->arch_vpmu_ops->arch_vpmu_load(v);
     }
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
 }
 
 void vpmu_initialise(struct vcpu *v)
-- 
1.8.1.4


* [PATCH v6 03/19] x86/VPMU: Minor VPMU cleanup
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
  2014-05-13 15:53 ` [PATCH v6 01/19] common/symbols: Export hypervisor symbols to privileged guest Boris Ostrovsky
  2014-05-13 15:53 ` [PATCH v6 02/19] VPMU: Mark context LOADED before registers are loaded Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-19 11:55   ` Tian, Kevin
  2014-05-19 14:26   ` Jan Beulich
  2014-05-13 15:53 ` [PATCH v6 04/19] intel/VPMU: Clean up Intel VPMU code Boris Ostrovsky
                   ` (16 subsequent siblings)
  19 siblings, 2 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Update the macros that manipulate VPMU flags so that multiple bits can be
checked at once (new vpmu_is_set_all()).

Make sure that we only touch the MSR bitmap on HVM guests (both VMX and SVM).
This is needed by subsequent PMU patches.
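
For reference, the combined check added here (visible in the core2_vpmu_save()
hunk below) is true only if *all* of the requested bits are set:

    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
        return 0;

    /* ...equivalent to the two separate checks it replaces:
     * if ( !(vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) &&
     *        vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)) )
     */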

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       | 14 +++++++++-----
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 12 +++++-------
 xen/arch/x86/hvm/vpmu.c           |  3 +--
 xen/include/asm-x86/hvm/vpmu.h    |  9 +++++----
 4 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 3ac7d53..3666915 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -244,7 +244,8 @@ static int amd_vpmu_save(struct vcpu *v)
 
     context_save(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
+    if ( is_hvm_domain(v->domain) &&
+        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     return 1;
@@ -284,7 +285,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     /* For all counters, enable guest only mode for HVM guest */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+    if ( is_hvm_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         !(is_guest_mode(msr_content)) )
     {
         set_guest_mode(msr_content);
@@ -300,7 +301,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
 
-        if ( !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( is_hvm_domain(v->domain) &&
+             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -311,7 +313,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
         vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( is_hvm_domain(v->domain) &&
+             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
@@ -403,7 +406,8 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+    if ( is_hvm_domain(v->domain) &&
+         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     xfree(vpmu->context);
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index ccd14d9..a3fb458 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -326,16 +326,14 @@ static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-        return 0;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) ) 
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
         return 0;
 
     __core2_vpmu_save(v);
 
     /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap )
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_domain(v->domain) &&
+         cpu_has_vmx_msr_bitmap )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
 
     return 1;
@@ -448,7 +446,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     {
         __core2_vpmu_load(current);
         vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap )
+        if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(current->domain) )
             core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
     }
     return 1;
@@ -815,7 +813,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
         return;
     xfree(core2_vpmu_cxt->pmu_enable);
     xfree(vpmu->context);
-    if ( cpu_has_vmx_msr_bitmap )
+    if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
     release_pmu_ownship(PMU_OWNER_HVM);
     vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 63765fa..a48dae2 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -143,8 +143,7 @@ void vpmu_save(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     int pcpu = smp_processor_id();
 
-    if ( !(vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) &&
-           vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)) )
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
        return;
 
     vpmu->last_pcpu = pcpu;
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 40f63fb..2a713be 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -81,10 +81,11 @@ struct vpmu_struct {
 #define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
 
 
-#define vpmu_set(_vpmu, _x)    ((_vpmu)->flags |= (_x))
-#define vpmu_reset(_vpmu, _x)  ((_vpmu)->flags &= ~(_x))
-#define vpmu_is_set(_vpmu, _x) ((_vpmu)->flags & (_x))
-#define vpmu_clear(_vpmu)      ((_vpmu)->flags = 0)
+#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
+#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
+#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
+#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
+#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
 
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
 int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
-- 
1.8.1.4


* [PATCH v6 04/19] intel/VPMU: Clean up Intel VPMU code
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (2 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 03/19] x86/VPMU: Minor VPMU cleanup Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-19 11:59   ` Tian, Kevin
  2014-05-19 14:30   ` Jan Beulich
  2014-05-13 15:53 ` [PATCH v6 05/19] vmx: Merge MSR management routines Boris Ostrovsky
                   ` (15 subsequent siblings)
  19 siblings, 2 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Remove struct pmumsr and core2_pmu_enable. Replace static MSR structures with
fields in core2_vpmu_context.

Call core2_get_pmc_count() once, during initialization.

Properly clean up when core2_vpmu_alloc_resource() fails, and add routines
to remove MSRs from the VMCS.

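Condensed from the hunks below, the new enabled_cntrs field follows the
MSR_CORE_PERF_GLOBAL_CTRL layout and replaces the per-counter enable arrays:

    /* enabled_cntrs (PERF_GLOBAL_CTRL format):
     *   bits 0 .. arch_pmc_cnt-1       -- general-purpose counter enables
     *   bits 32 .. 32+fixed_pmc_cnt-1  -- fixed counter enables
     */
    core2_vpmu_cxt->enabled_cntrs |= 1ULL << tmp;        /* general counter */
    core2_vpmu_cxt->enabled_cntrs |= (1ULL << 32) << i;  /* fixed counter   */

    /* The vPMU counts as running iff an enabled counter is also globally
     * enabled, or a DS save area is set: */
    if ( (global_ctrl & core2_vpmu_cxt->enabled_cntrs) ||
         (core2_vpmu_cxt->ds_area != 0) )
        vpmu_set(vpmu, VPMU_RUNNING);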

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c              |  55 +++++
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 337 ++++++++++++++-----------------
 xen/include/asm-x86/hvm/vmx/vmcs.h       |   2 +
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  19 --
 4 files changed, 209 insertions(+), 204 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index cc84ca2..0f43a1b 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1204,6 +1204,34 @@ int vmx_add_guest_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_guest_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx, msr_count = curr->arch.hvm_vmx.msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
+    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
+}
+
 int vmx_add_host_load_msr(u32 msr)
 {
     struct vcpu *curr = current;
@@ -1234,6 +1262,33 @@ int vmx_add_host_load_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_host_load_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx,  msr_count = curr->arch.hvm_vmx.host_msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.host_msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.host_msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
+}
+
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
 {
     if ( !test_and_set_bit(vector, v->arch.hvm_vmx.eoi_exit_bitmap) )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index a3fb458..0a9c643 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -69,6 +69,27 @@
 static bool_t __read_mostly full_width_write;
 
 /*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+#define VPMU_CORE2_MAX_FIXED_PMCS     4
+struct core2_vpmu_context {
+    u64 fixed_ctrl;
+    u64 ds_area;
+    u64 pebs_enable;
+    u64 global_ovf_status;
+    u64 enabled_cntrs;  /* Follows PERF_GLOBAL_CTRL MSR format */
+    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
+    struct arch_msr_pair arch_msr_pair[1];
+};
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
  * QUIRK to workaround an issue on various family 6 cpus.
  * The issue leads to endless PMC interrupt loops on the processor.
  * If the interrupt handler is running and a pmc reaches the value 0, this
@@ -88,11 +109,8 @@ static void check_pmc_quirk(void)
         is_pmc_quirk = 0;    
 }
 
-static int core2_get_pmc_count(void);
 static void handle_pmc_quirk(u64 msr_content)
 {
-    int num_gen_pmc = core2_get_pmc_count();
-    int num_fix_pmc  = 3;
     int i;
     u64 val;
 
@@ -100,7 +118,7 @@ static void handle_pmc_quirk(u64 msr_content)
         return;
 
     val = msr_content;
-    for ( i = 0; i < num_gen_pmc; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -112,7 +130,7 @@ static void handle_pmc_quirk(u64 msr_content)
         val >>= 1;
     }
     val = msr_content >> 32;
-    for ( i = 0; i < num_fix_pmc; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -125,75 +143,42 @@ static void handle_pmc_quirk(u64 msr_content)
     }
 }
 
-static const u32 core2_fix_counters_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR0,
-    MSR_CORE_PERF_FIXED_CTR1,
-    MSR_CORE_PERF_FIXED_CTR2
-};
-
 /*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
+ * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
  */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* The index into the core2_ctrls_msr[] of this MSR used in core2_vpmu_dump() */
-#define MSR_CORE_PERF_FIXED_CTR_CTRL_IDX 0
-
-/* Core 2 Non-architectual Performance Control MSRs. */
-static const u32 core2_ctrls_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR_CTRL,
-    MSR_IA32_PEBS_ENABLE,
-    MSR_IA32_DS_AREA
-};
-
-struct pmumsr {
-    unsigned int num;
-    const u32 *msr;
-};
-
-static const struct pmumsr core2_fix_counters = {
-    VPMU_CORE2_NUM_FIXED,
-    core2_fix_counters_msr
-};
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
 
-static const struct pmumsr core2_ctrls = {
-    VPMU_CORE2_NUM_CTRLS,
-    core2_ctrls_msr
-};
-static int arch_pmc_cnt;
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
 
 /*
- * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
+ * Read the number of fixed counters via CPUID.EDX[0xa].EDX[0..4]
  */
-static int core2_get_pmc_count(void)
+static int core2_get_fixed_pmc_count(void)
 {
-    u32 eax, ebx, ecx, edx;
-
-    if ( arch_pmc_cnt == 0 )
-    {
-        cpuid(0xa, &eax, &ebx, &ecx, &edx);
-        arch_pmc_cnt = (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT;
-    }
+    u32 eax;
 
-    return arch_pmc_cnt;
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
 }
 
 static u64 core2_calc_intial_glb_ctrl_msr(void)
 {
-    int arch_pmc_bits = (1 << core2_get_pmc_count()) - 1;
-    u64 fix_pmc_bits  = (1 << 3) - 1;
-    return ((fix_pmc_bits << 32) | arch_pmc_bits);
+    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
+    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
+    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
 }
 
 /* edx bits 5-12: Bit width of fixed-function performance counters  */
 static int core2_get_bitwidth_fix_count(void)
 {
-    u32 eax, ebx, ecx, edx;
+    u32 edx;
 
-    cpuid(0xa, &eax, &ebx, &ecx, &edx);
-    return ((edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT);
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
 }
 
 static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
@@ -201,9 +186,9 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     int i;
     u32 msr_index_pmc;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_fix_counters.msr[i] == msr_index )
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
         {
             *type = MSR_TYPE_COUNTER;
             *index = i;
@@ -211,14 +196,12 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
+        (msr_index == MSR_IA32_DS_AREA) ||
+        (msr_index == MSR_IA32_PEBS_ENABLE) )
     {
-        if ( core2_ctrls.msr[i] == msr_index )
-        {
-            *type = MSR_TYPE_CTRL;
-            *index = i;
-            return 1;
-        }
+        *type = MSR_TYPE_CTRL;
+        return 1;
     }
 
     if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
@@ -231,7 +214,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
 
     msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
     if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + core2_get_pmc_count())) )
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_COUNTER;
         *index = msr_index_pmc - MSR_IA32_PERFCTR0;
@@ -239,7 +222,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     }
 
     if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + core2_get_pmc_count())) )
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_CTRL;
         *index = msr_index - MSR_P6_EVNTSEL0;
@@ -254,13 +237,13 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     int i;
 
     /* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                   msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
@@ -275,26 +258,28 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        clear_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
 
         if ( full_width_write )
@@ -305,10 +290,12 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        set_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
@@ -316,10 +303,10 @@ static inline void __core2_vpmu_save(struct vcpu *v)
     int i;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        rdmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        rdmsrl(MSR_IA32_PERFCTR0+i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -344,20 +331,22 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     unsigned int i, pmc_start;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        wrmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
     else
         pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
         wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        wrmsrl(core2_ctrls.msr[i], core2_vpmu_cxt->ctrls[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        wrmsrl(MSR_P6_EVNTSEL0+i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -376,56 +365,39 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct core2_vpmu_context *core2_vpmu_cxt;
-    struct core2_pmu_enable *pmu_enable;
 
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
 
     wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
     if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
 
     if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
                  core2_calc_intial_glb_ctrl_msr());
 
-    pmu_enable = xzalloc_bytes(sizeof(struct core2_pmu_enable) +
-                               core2_get_pmc_count() - 1);
-    if ( !pmu_enable )
-        goto out1;
-
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (core2_get_pmc_count()-1)*sizeof(struct arch_msr_pair));
+                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
     if ( !core2_vpmu_cxt )
-        goto out2;
-    core2_vpmu_cxt->pmu_enable = pmu_enable;
+        goto out_err;
+
     vpmu->context = (void *)core2_vpmu_cxt;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
     return 1;
- out2:
-    xfree(pmu_enable);
- out1:
-    gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, PMU feature is "
-             "unavailable on domain %d vcpu %d.\n",
-             v->vcpu_id, v->domain->domain_id);
-    return 0;
-}
 
-static void core2_vpmu_save_msr_context(struct vcpu *v, int type,
-                                       int index, u64 msr_data)
-{
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+out_err:
+    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    release_pmu_ownship(PMU_OWNER_HVM);
 
-    switch ( type )
-    {
-    case MSR_TYPE_CTRL:
-        core2_vpmu_cxt->ctrls[index] = msr_data;
-        break;
-    case MSR_TYPE_ARCH_CTRL:
-        core2_vpmu_cxt->arch_msr_pair[index].control = msr_data;
-        break;
-    }
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->vcpu_id, v->domain->domain_id);
+
+    return 0;
 }
 
 static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
@@ -436,10 +408,8 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
         return 0;
 
     if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-	 (vpmu->context != NULL ||
-	  !core2_vpmu_alloc_resource(current)) )
+         !core2_vpmu_alloc_resource(current) )
         return 0;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
 
     /* Do the lazy load staff. */
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
@@ -454,8 +424,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
 
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
-    u64 global_ctrl, non_global_ctrl;
-    char pmu_enable = 0;
+    u64 global_ctrl;
     int i, tmp;
     int type = -1, index = -1;
     struct vcpu *v = current;
@@ -500,6 +469,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( msr_content & 1 )
             gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
                      "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
         return 1;
     case MSR_IA32_DS_AREA:
         if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
@@ -512,57 +482,48 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 hvm_inject_hw_exception(TRAP_gp_fault, 0);
                 return 1;
             }
-            core2_vpmu_cxt->pmu_enable->ds_area_enable = msr_content ? 1 : 0;
+            core2_vpmu_cxt->ds_area = msr_content;
             break;
         }
         gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
         return 1;
     case MSR_CORE_PERF_GLOBAL_CTRL:
         global_ctrl = msr_content;
-        for ( i = 0; i < core2_get_pmc_count(); i++ )
-        {
-            rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] =
-                    global_ctrl & (non_global_ctrl >> 22) & 1;
-            global_ctrl >>= 1;
-        }
-
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
-        global_ctrl = msr_content >> 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
-        {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
-            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
-            global_ctrl >>= 1;
-        }
         break;
     case MSR_CORE_PERF_FIXED_CTR_CTRL:
-        non_global_ctrl = msr_content;
         vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        global_ctrl >>= 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
+        core2_vpmu_cxt->enabled_cntrs &=
+                ~(((1ULL << VPMU_CORE2_MAX_FIXED_PMCS) - 1) << 32);
+        if ( msr_content != 0 )
         {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
-            non_global_ctrl >>= 4;
-            global_ctrl >>= 1;
+            u64 val = msr_content;
+            for ( i = 0; i < fixed_pmc_cnt; i++ )
+            {
+                if ( val & 3 )
+                    core2_vpmu_cxt->enabled_cntrs |= (1ULL << 32) << i;
+                val >>= FIXED_CTR_CTRL_BITS;
+            }
         }
+
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
         break;
     default:
         tmp = msr - MSR_P6_EVNTSEL0;
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        if ( tmp >= 0 && tmp < core2_get_pmc_count() )
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[tmp] =
-                (global_ctrl >> tmp) & (msr_content >> 22) & 1;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+
+            if ( msr_content & (1ULL << 22) )
+                core2_vpmu_cxt->enabled_cntrs |= 1ULL << tmp;
+            else
+                core2_vpmu_cxt->enabled_cntrs &= ~(1ULL << tmp);
+
+            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+        }
     }
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i];
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i];
-    pmu_enable |= core2_vpmu_cxt->pmu_enable->ds_area_enable;
-    if ( pmu_enable )
+    if ((global_ctrl & core2_vpmu_cxt->enabled_cntrs) ||
+        (core2_vpmu_cxt->ds_area != 0)  )
         vpmu_set(vpmu, VPMU_RUNNING);
     else
         vpmu_reset(vpmu, VPMU_RUNNING);
@@ -580,7 +541,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
     }
 
-    core2_vpmu_save_msr_context(v, type, index, msr_content);
     if ( type != MSR_TYPE_GLOBAL )
     {
         u64 mask;
@@ -596,7 +556,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             if  ( msr == MSR_IA32_DS_AREA )
                 break;
             /* 4 bits per counter, currently 3 fixed counters implemented. */
-            mask = ~((1ull << (VPMU_CORE2_NUM_FIXED * FIXED_CTR_CTRL_BITS)) - 1);
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
             if (msr_content & mask)
                 inject_gp = 1;
             break;
@@ -681,7 +641,7 @@ static void core2_vpmu_do_cpuid(unsigned int input,
 static void core2_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i, num;
+    int i;
     const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
     u64 val;
 
@@ -699,27 +659,25 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     printk("    vPMU running\n");
     core2_vpmu_cxt = vpmu->context;
-    num = core2_get_pmc_count();
+
     /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < num; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
 
-        if ( core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] )
-            printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-                   i, msr_pair[i].counter, msr_pair[i].control);
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+               i, msr_pair[i].counter, msr_pair[i].control);
     }
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
      */
-    val = core2_vpmu_cxt->ctrls[MSR_CORE_PERF_FIXED_CTR_CTRL_IDX];
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] )
-            printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-                   i, core2_vpmu_cxt->fix_counters[i],
-                   val & FIXED_CTR_CTRL_MASK);
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, core2_vpmu_cxt->fix_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
 }
@@ -737,7 +695,7 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
         core2_vpmu_cxt->global_ovf_status |= msr_content;
-        msr_content = 0xC000000700000000 | ((1 << core2_get_pmc_count()) - 1);
+        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
     else
@@ -800,18 +758,27 @@ static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
         }
     }
 func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
+    {
+        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
+        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
+               fixed_pmc_cnt);
+    }
     check_pmc_quirk();
+
     return 0;
 }
 
 static void core2_vpmu_destroy(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
-    xfree(core2_vpmu_cxt->pmu_enable);
+
     xfree(vpmu->context);
     if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 445b39f..50befe1 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -480,7 +480,9 @@ void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
 int vmx_read_guest_msr(u32 msr, u64 *val);
 int vmx_write_guest_msr(u32 msr, u64 val);
 int vmx_add_guest_msr(u32 msr);
+void vmx_rm_guest_msr(u32 msr);
 int vmx_add_host_load_msr(u32 msr);
+void vmx_rm_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
index 60b05fd..410372d 100644
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
@@ -23,29 +23,10 @@
 #ifndef __ASM_X86_HVM_VPMU_CORE_H_
 #define __ASM_X86_HVM_VPMU_CORE_H_
 
-/* Currently only 3 fixed counters are supported. */
-#define VPMU_CORE2_NUM_FIXED 3
-/* Currently only 3 Non-architectual Performance Control MSRs */
-#define VPMU_CORE2_NUM_CTRLS 3
-
 struct arch_msr_pair {
     u64 counter;
     u64 control;
 };
 
-struct core2_pmu_enable {
-    char ds_area_enable;
-    char fixed_ctr_enable[VPMU_CORE2_NUM_FIXED];
-    char arch_pmc_enable[1];
-};
-
-struct core2_vpmu_context {
-    struct core2_pmu_enable *pmu_enable;
-    u64 fix_counters[VPMU_CORE2_NUM_FIXED];
-    u64 ctrls[VPMU_CORE2_NUM_CTRLS];
-    u64 global_ovf_status;
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 #endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
 
-- 
1.8.1.4


* [PATCH v6 05/19] vmx: Merge MSR management routines
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (3 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 04/19] intel/VPMU: Clean up Intel VPMU code Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-19 12:00   ` Tian, Kevin
  2014-05-22 10:24   ` Dietmar Hahn
  2014-05-13 15:53 ` [PATCH v6 06/19] x86/VPMU: Handle APIC_LVTPC accesses Boris Ostrovsky
                   ` (14 subsequent siblings)
  19 siblings, 2 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

vmx_add_host_load_msr()/vmx_rm_host_load_msr() and vmx_add_guest_msr()/vmx_rm_guest_msr()
share a fair amount of code. Merge them to simplify code maintenance.
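
As an aside, the call sites map over mechanically; an illustration (not part
of the patch), using the names introduced here:

    /* The MSR bank is now selected by the 'type' argument: */
    vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR); /* was vmx_add_guest_msr()     */
    vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR);  /* was vmx_add_host_load_msr() */
    vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR);  /* was vmx_rm_guest_msr()      */
    vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR);   /* was vmx_rm_host_load_msr()  */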

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        | 154 +++++++++++++++++--------------------
 xen/arch/x86/hvm/vmx/vmx.c         |   4 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c  |   8 +-
 xen/include/asm-x86/hvm/vmx/vmcs.h |  10 ++-
 4 files changed, 83 insertions(+), 93 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 0f43a1b..aaa3691 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1172,121 +1172,109 @@ int vmx_write_guest_msr(u32 msr, u64 val)
     return -ESRCH;
 }
 
-int vmx_add_guest_msr(u32 msr)
+int vmx_add_msr(u32 msr, u8 type)
 {
     struct vcpu *curr = current;
-    unsigned int i, msr_count = curr->arch.hvm_vmx.msr_count;
-    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+    unsigned int idx, *msr_count;
+    struct vmx_msr_entry **msr_area;
 
-    if ( msr_area == NULL )
+    ASSERT( (type == VMX_GUEST_MSR) || (type == VMX_HOST_MSR) );
+
+    if ( type == VMX_GUEST_MSR )
     {
-        if ( (msr_area = alloc_xenheap_page()) == NULL )
+        msr_count = &curr->arch.hvm_vmx.msr_count;
+        msr_area = &curr->arch.hvm_vmx.msr_area;
+    }
+    else
+    {
+        msr_count = &curr->arch.hvm_vmx.host_msr_count;
+        msr_area = &curr->arch.hvm_vmx.host_msr_area;
+    }
+
+    if ( *msr_area == NULL )
+    {
+        if ( (*msr_area = alloc_xenheap_page()) == NULL )
             return -ENOMEM;
-        curr->arch.hvm_vmx.msr_area = msr_area;
-        __vmwrite(VM_EXIT_MSR_STORE_ADDR, virt_to_maddr(msr_area));
-        __vmwrite(VM_ENTRY_MSR_LOAD_ADDR, virt_to_maddr(msr_area));
+
+        if ( type == VMX_GUEST_MSR )
+        {
+            __vmwrite(VM_EXIT_MSR_STORE_ADDR, virt_to_maddr(*msr_area));
+            __vmwrite(VM_ENTRY_MSR_LOAD_ADDR, virt_to_maddr(*msr_area));
+        }
+        else
+            __vmwrite(VM_EXIT_MSR_LOAD_ADDR, virt_to_maddr(*msr_area));
     }
 
-    for ( i = 0; i < msr_count; i++ )
-        if ( msr_area[i].index == msr )
+    for ( idx = 0; idx < *msr_count; idx++ )
+        if ( msr_area[idx]->index == msr )
             return 0;
 
-    if ( msr_count == (PAGE_SIZE / sizeof(struct vmx_msr_entry)) )
+    if ( *msr_count == (PAGE_SIZE / sizeof(struct vmx_msr_entry)) )
         return -ENOSPC;
 
-    msr_area[msr_count].index = msr;
-    msr_area[msr_count].mbz   = 0;
-    msr_area[msr_count].data  = 0;
-    curr->arch.hvm_vmx.msr_count = ++msr_count;
-    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
-    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
+    msr_area[*msr_count]->index = msr;
+    msr_area[*msr_count]->mbz   = 0;
+    (*msr_count)++;
+    if ( type == VMX_GUEST_MSR )
+    {
+        msr_area[*msr_count - 1]->data  = 0;
+        __vmwrite(VM_EXIT_MSR_STORE_COUNT, *msr_count);
+        __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, *msr_count);
+    }
+    else
+    {
+        rdmsrl(msr, msr_area[*msr_count - 1]->data);
+        __vmwrite(VM_EXIT_MSR_LOAD_COUNT, *msr_count);
+    }
 
     return 0;
 }
 
-void vmx_rm_guest_msr(u32 msr)
+void vmx_rm_msr(u32 msr, u8 type)
 {
     struct vcpu *curr = current;
-    unsigned int idx, msr_count = curr->arch.hvm_vmx.msr_count;
-    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+    unsigned int idx, *msr_count;
+    struct vmx_msr_entry **msr_area;
 
-    if ( msr_area == NULL )
-        return;
-
-    for ( idx = 0; idx < msr_count; idx++ )
-        if ( msr_area[idx].index == msr )
-            break;
+    ASSERT( (type == VMX_GUEST_MSR) || (type == VMX_HOST_MSR) );
 
-    if ( idx == msr_count )
-        return;
-
-    for ( ; idx < msr_count - 1; idx++ )
+    if ( type == VMX_GUEST_MSR )
     {
-        msr_area[idx].index = msr_area[idx + 1].index;
-        msr_area[idx].data = msr_area[idx + 1].data;
+        msr_count = &curr->arch.hvm_vmx.msr_count;
+        msr_area = &curr->arch.hvm_vmx.msr_area;
     }
-    msr_area[msr_count - 1].index = 0;
-
-    curr->arch.hvm_vmx.msr_count = --msr_count;
-    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
-    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
-}
-
-int vmx_add_host_load_msr(u32 msr)
-{
-    struct vcpu *curr = current;
-    unsigned int i, msr_count = curr->arch.hvm_vmx.host_msr_count;
-    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.host_msr_area;
-
-    if ( msr_area == NULL )
+    else
     {
-        if ( (msr_area = alloc_xenheap_page()) == NULL )
-            return -ENOMEM;
-        curr->arch.hvm_vmx.host_msr_area = msr_area;
-        __vmwrite(VM_EXIT_MSR_LOAD_ADDR, virt_to_maddr(msr_area));
+        msr_count = &curr->arch.hvm_vmx.host_msr_count;
+        msr_area = &curr->arch.hvm_vmx.host_msr_area;
     }
 
-    for ( i = 0; i < msr_count; i++ )
-        if ( msr_area[i].index == msr )
-            return 0;
-
-    if ( msr_count == (PAGE_SIZE / sizeof(struct vmx_msr_entry)) )
-        return -ENOSPC;
-
-    msr_area[msr_count].index = msr;
-    msr_area[msr_count].mbz   = 0;
-    rdmsrl(msr, msr_area[msr_count].data);
-    curr->arch.hvm_vmx.host_msr_count = ++msr_count;
-    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
-
-    return 0;
-}
-
-void vmx_rm_host_load_msr(u32 msr)
-{
-    struct vcpu *curr = current;
-    unsigned int idx,  msr_count = curr->arch.hvm_vmx.host_msr_count;
-    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.host_msr_area;
-
-    if ( msr_area == NULL )
+    if ( *msr_area == NULL )
         return;
 
-    for ( idx = 0; idx < msr_count; idx++ )
-        if ( msr_area[idx].index == msr )
+    for ( idx = 0; idx < *msr_count; idx++ )
+        if ( msr_area[idx]->index == msr )
             break;
 
-    if ( idx == msr_count )
+    if ( idx == *msr_count )
         return;
 
-    for ( ; idx < msr_count - 1; idx++ )
+    for ( ; idx < *msr_count - 1; idx++ )
     {
-        msr_area[idx].index = msr_area[idx + 1].index;
-        msr_area[idx].data = msr_area[idx + 1].data;
+        msr_area[idx]->index = msr_area[idx + 1]->index;
+        msr_area[idx]->data = msr_area[idx + 1]->data;
+    }
+    msr_area[*msr_count - 1]->index = 0;
+    (*msr_count)--;
+    if ( type == VMX_GUEST_MSR )
+    {
+        __vmwrite(VM_EXIT_MSR_STORE_COUNT, *msr_count);
+        __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, *msr_count);
+    }
+    else
+    {
+        __vmwrite(VM_EXIT_MSR_LOAD_COUNT, *msr_count);
     }
-    msr_area[msr_count - 1].index = 0;
-
-    curr->arch.hvm_vmx.host_msr_count = --msr_count;
-    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
 }
 
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index ecdbc17..23d58d9 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2234,12 +2234,12 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
 
             for ( ; (rc == 0) && lbr->count; lbr++ )
                 for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
-                    if ( (rc = vmx_add_guest_msr(lbr->base + i)) == 0 )
+                    if ( (rc = vmx_add_msr(lbr->base + i, VMX_GUEST_MSR)) == 0 )
                         vmx_disable_intercept_for_msr(v, lbr->base + i, MSR_TYPE_R | MSR_TYPE_W);
         }
 
         if ( (rc < 0) ||
-             (vmx_add_host_load_msr(msr) < 0) )
+             (vmx_add_msr(msr, VMX_HOST_MSR) < 0) )
             hvm_inject_hw_exception(TRAP_machine_check, 0);
         else
         {
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 0a9c643..5e980fa 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -370,10 +370,10 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
         return 0;
 
     wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-    if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+    if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR) )
         goto out_err;
 
-    if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+    if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR) )
         goto out_err;
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
                  core2_calc_intial_glb_ctrl_msr());
@@ -390,8 +390,8 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
     return 1;
 
 out_err:
-    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
-    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR);
+    vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR);
     release_pmu_ownship(PMU_OWNER_HVM);
 
     printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 50befe1..dd34b2c 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -475,14 +475,16 @@ enum vmcs_field {
 
 #define MSR_TYPE_R 1
 #define MSR_TYPE_W 2
+
+#define VMX_GUEST_MSR 0
+#define VMX_HOST_MSR  1
+
 void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
 void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
 int vmx_read_guest_msr(u32 msr, u64 *val);
 int vmx_write_guest_msr(u32 msr, u64 val);
-int vmx_add_guest_msr(u32 msr);
-void vmx_rm_guest_msr(u32 msr);
-int vmx_add_host_load_msr(u32 msr);
-void vmx_rm_host_load_msr(u32 msr);
+int vmx_add_msr(u32 msr, u8 type);
+void vmx_rm_msr(u32 msr, u8 type);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
-- 
1.8.1.4


* [PATCH v6 06/19] x86/VPMU: Handle APIC_LVTPC accesses
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (4 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 05/19] vmx: Merge MSR management routines Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-13 15:53 ` [PATCH v6 07/19] intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL should be initialized to zero Boris Ostrovsky
                   ` (13 subsequent siblings)
  19 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Update the APIC_LVTPC vector when an HVM guest writes to it.
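
A sketch of what the handler does (mirroring the vpmu_lvtpc_update() helper
added below): the guest's write controls only the mask bit, while the vector
itself stays at Xen's PMU_APIC_VECTOR:

    /* 'val' is the value the guest wrote to the APIC_LVTPC register. */
    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);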

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       |  4 ----
 xen/arch/x86/hvm/vlapic.c         |  5 ++++-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 17 -----------------
 xen/arch/x86/hvm/vpmu.c           |  8 ++++++++
 xen/include/asm-x86/hvm/vpmu.h    |  1 +
 5 files changed, 13 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 3666915..2fbe2c1 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -298,8 +298,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
             return 1;
         vpmu_set(vpmu, VPMU_RUNNING);
-        apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
 
         if ( is_hvm_domain(v->domain) &&
              !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
@@ -310,8 +308,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
     {
-        apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
         vpmu_reset(vpmu, VPMU_RUNNING);
         if ( is_hvm_domain(v->domain) &&
              ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index cd7e872..f1b543c 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,6 +38,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/nestedhvm.h>
+#include <asm/hvm/vpmu.h>
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
@@ -734,8 +735,10 @@ static int vlapic_reg_write(struct vcpu *v,
             vlapic_adjust_i8259_target(v->domain);
             pt_may_unmask_irq(v->domain, NULL);
         }
-        if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
+        else if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
             pt_may_unmask_irq(NULL, &vlapic->pt);
+        else if ( offset == APIC_LVTPC )
+            vpmu_lvtpc_update(val);
         break;
 
     case APIC_TMICT:
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 5e980fa..534dd66 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -528,19 +528,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     else
         vpmu_reset(vpmu, VPMU_RUNNING);
 
-    /* Setup LVTPC in local apic */
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) &&
-         is_vlapic_lvtpc_enabled(vcpu_vlapic(v)) )
-    {
-        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
-    }
-    else
-    {
-        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
-    }
-
     if ( type != MSR_TYPE_GLOBAL )
     {
         u64 mask;
@@ -706,10 +693,6 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
             return 0;
     }
 
-    /* HW sets the MASK bit when performance counter interrupt occurs*/
-    vpmu->hw_lapic_lvtpc = apic_read(APIC_LVTPC) & ~APIC_LVT_MASKED;
-    apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-
     return 1;
 }
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index a48dae2..0340e5b 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -64,6 +64,14 @@ static void __init parse_vpmu_param(char *s)
     }
 }
 
+void vpmu_lvtpc_update(uint32_t val)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
+    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+}
+
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 2a713be..7ee0f01 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -87,6 +87,7 @@ struct vpmu_struct {
 #define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
 #define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
 
+void vpmu_lvtpc_update(uint32_t val);
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
 int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
 int vpmu_do_interrupt(struct cpu_user_regs *regs);
-- 
1.8.1.4


* [PATCH v6 07/19] intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL should be initialized to zero
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (5 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 06/19] x86/VPMU: Handle APIC_LVTPC accesses Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-13 15:53 ` [PATCH v6 08/19] x86/VPMU: Add public xenpmu.h Boris Ostrovsky
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

The MSR_CORE_PERF_GLOBAL_CTRL register should be set to zero initially. It is
up to the guest to program it so that its counters are enabled.
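
For reference, a sketch mirroring the removed core2_calc_intial_glb_ctrl_msr()
helper: a guest that wants all of its counters running is now expected to
write the enable bits itself, e.g.

    /* General-purpose counters occupy bits 0..arch_pmc_cnt-1, fixed
     * counters bits 32..32+fixed_pmc_cnt-1 of MSR_CORE_PERF_GLOBAL_CTRL. */
    uint64_t val = (((1ULL << fixed_pmc_cnt) - 1) << 32) |
                   ((1ULL << arch_pmc_cnt) - 1);

    wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, val);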

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 534dd66..dffdd80 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -165,13 +165,6 @@ static int core2_get_fixed_pmc_count(void)
     return ( (eax & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
 }
 
-static u64 core2_calc_intial_glb_ctrl_msr(void)
-{
-    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
-    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
-    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
-}
-
 /* edx bits 5-12: Bit width of fixed-function performance counters  */
 static int core2_get_bitwidth_fix_count(void)
 {
@@ -375,8 +368,7 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 
     if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR) )
         goto out_err;
-    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
-                 core2_calc_intial_glb_ctrl_msr());
+    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
                     (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
-- 
1.8.1.4


* [PATCH v6 08/19] x86/VPMU: Add public xenpmu.h
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (6 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 07/19] intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL should be initialized to zero Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-19 12:02   ` Tian, Kevin
                     ` (2 more replies)
  2014-05-13 15:53 ` [PATCH v6 09/19] x86/VPMU: Make vpmu not HVM-specific Boris Ostrovsky
                   ` (11 subsequent siblings)
  19 siblings, 3 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Add pmu.h header files and move to them various macros and structures that
will be shared between the hypervisor and PV guests.

Move the MSR banks out of the architectural PMU structures to allow for larger
sizes in the future. The banks are allocated immediately after the context
structure, and the PMU structures store offsets to them (see the sketch below).
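
A sketch of the resulting layout, based on the AMD allocation below (Intel
uses the same scheme):

    /* The banks live immediately after the fixed-size context; the context
     * stores byte offsets rather than pointers, so the block remains usable
     * when shared with a (PV) guest at a different virtual address. */
    struct xen_pmu_amd_ctxt *ctxt =
        xzalloc_bytes(sizeof(*ctxt) +
                      2 * sizeof(uint64_t) * AMD_MAX_COUNTERS);

    ctxt->counters = sizeof(*ctxt);
    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;

    /* vpmu_reg_pointer() converts an offset back into a usable pointer: */
    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);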

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/hvm/svm/vpmu.c              |  81 ++++++++++++-----------
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 110 +++++++++++++++++--------------
 xen/arch/x86/hvm/vpmu.c                  |   1 +
 xen/arch/x86/oprofile/op_model_ppro.c    |   6 +-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  32 ---------
 xen/include/asm-x86/hvm/vpmu.h           |  16 ++---
 xen/include/public/arch-arm.h            |   3 +
 xen/include/public/arch-x86/pmu.h        |  62 +++++++++++++++++
 xen/include/public/pmu.h                 |  38 +++++++++++
 9 files changed, 220 insertions(+), 129 deletions(-)
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 create mode 100644 xen/include/public/arch-x86/pmu.h
 create mode 100644 xen/include/public/pmu.h

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 2fbe2c1..ebdba8e 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -30,10 +30,7 @@
 #include <asm/apic.h>
 #include <asm/hvm/vlapic.h>
 #include <asm/hvm/vpmu.h>
-
-#define F10H_NUM_COUNTERS 4
-#define F15H_NUM_COUNTERS 6
-#define MAX_NUM_COUNTERS F15H_NUM_COUNTERS
+#include <public/pmu.h>
 
 #define MSR_F10H_EVNTSEL_GO_SHIFT   40
 #define MSR_F10H_EVNTSEL_EN_SHIFT   22
@@ -49,6 +46,10 @@ static const u32 __read_mostly *counters;
 static const u32 __read_mostly *ctrls;
 static bool_t __read_mostly k7_counters_mirrored;
 
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
 /* PMU Counter MSRs. */
 static const u32 AMD_F10H_COUNTERS[] = {
     MSR_K7_PERFCTR0,
@@ -83,12 +84,10 @@ static const u32 AMD_F15H_CTRLS[] = {
     MSR_AMD_FAM15H_EVNTSEL5
 };
 
-/* storage for context switching */
-struct amd_vpmu_context {
-    u64 counters[MAX_NUM_COUNTERS];
-    u64 ctrls[MAX_NUM_COUNTERS];
-    bool_t msr_bitmap_set;
-};
+/* Use private context as a flag for MSR bitmap */
+#define msr_bitmap_on(vpmu)    {vpmu->priv_context = (void *)-1;}
+#define msr_bitmap_off(vpmu)   {vpmu->priv_context = NULL;}
+#define is_msr_bitmap_on(vpmu) (vpmu->priv_context != NULL)
 
 static inline int get_pmu_reg_type(u32 addr)
 {
@@ -142,7 +141,6 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
 
     for ( i = 0; i < num_counters; i++ )
     {
@@ -150,14 +148,13 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
         svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
     }
 
-    ctxt->msr_bitmap_set = 1;
+    msr_bitmap_on(vpmu);
 }
 
 static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
 
     for ( i = 0; i < num_counters; i++ )
     {
@@ -165,7 +162,7 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
         svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
     }
 
-    ctxt->msr_bitmap_set = 0;
+    msr_bitmap_off(vpmu);
 }
 
 static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
@@ -177,19 +174,22 @@ static inline void context_load(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     for ( i = 0; i < num_counters; i++ )
     {
-        wrmsrl(counters[i], ctxt->counters[i]);
-        wrmsrl(ctrls[i], ctxt->ctrls[i]);
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
     }
 }
 
 static void amd_vpmu_load(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     vpmu_reset(vpmu, VPMU_FROZEN);
 
@@ -198,7 +198,7 @@ static void amd_vpmu_load(struct vcpu *v)
         unsigned int i;
 
         for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctxt->ctrls[i]);
+            wrmsrl(ctrls[i], ctrl_regs[i]);
 
         return;
     }
@@ -212,17 +212,17 @@ static inline void context_save(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
 
     /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
     for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], ctxt->counters[i]);
+        rdmsrl(counters[i], counter_regs[i]);
 }
 
 static int amd_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctx = vpmu->context;
     unsigned int i;
 
     /*
@@ -245,7 +245,7 @@ static int amd_vpmu_save(struct vcpu *v)
     context_save(v);
 
     if ( is_hvm_domain(v->domain) &&
-        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
+        !vpmu_is_set(vpmu, VPMU_RUNNING) && is_msr_bitmap_on(vpmu) )
         amd_vpmu_unset_msr_bitmap(v);
 
     return 1;
@@ -256,7 +256,9 @@ static void context_update(unsigned int msr, u64 msr_content)
     unsigned int i;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     if ( k7_counters_mirrored &&
         ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
@@ -268,12 +270,12 @@ static void context_update(unsigned int msr, u64 msr_content)
     {
        if ( msr == ctrls[i] )
        {
-           ctxt->ctrls[i] = msr_content;
+           ctrl_regs[i] = msr_content;
            return;
        }
         else if (msr == counters[i] )
         {
-            ctxt->counters[i] = msr_content;
+            counter_regs[i] = msr_content;
             return;
         }
     }
@@ -299,8 +301,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             return 1;
         vpmu_set(vpmu, VPMU_RUNNING);
 
-        if ( is_hvm_domain(v->domain) &&
-             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( is_hvm_domain(v->domain) && is_msr_bitmap_on(vpmu) )
             amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -309,8 +310,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
     {
         vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( is_hvm_domain(v->domain) &&
-             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( is_hvm_domain(v->domain) && is_msr_bitmap_on(vpmu) )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
@@ -351,7 +351,7 @@ static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 
 static int amd_vpmu_initialise(struct vcpu *v)
 {
-    struct amd_vpmu_context *ctxt;
+    struct xen_pmu_amd_ctxt *ctxt;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     uint8_t family = current_cpu_data.x86;
 
@@ -381,7 +381,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc(struct amd_vpmu_context);
+    ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
+			 sizeof(uint64_t) * AMD_MAX_COUNTERS + 
+			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
     if ( !ctxt )
     {
         gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
@@ -390,7 +392,11 @@ static int amd_vpmu_initialise(struct vcpu *v)
         return -ENOMEM;
     }
 
+    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
     vpmu->context = ctxt;
+    vpmu->priv_context = NULL;
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
     return 0;
 }
@@ -402,8 +408,7 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( is_hvm_domain(v->domain) &&
-         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+    if ( is_hvm_domain(v->domain) && is_msr_bitmap_on(vpmu) )
         amd_vpmu_unset_msr_bitmap(v);
 
     xfree(vpmu->context);
@@ -420,7 +425,9 @@ static void amd_vpmu_destroy(struct vcpu *v)
 static void amd_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    const struct amd_vpmu_context *ctxt = vpmu->context;
+    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
     unsigned int i;
 
     printk("    VPMU state: 0x%x ", vpmu->flags);
@@ -450,8 +457,8 @@ static void amd_vpmu_dump(const struct vcpu *v)
         rdmsrl(ctrls[i], ctrl);
         rdmsrl(counters[i], cntr);
         printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctxt->ctrls[i], ctrl,
-               counters[i], ctxt->counters[i], cntr);
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
     }
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index dffdd80..1fe583f 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -35,8 +35,8 @@
 #include <asm/hvm/vmx/vmcs.h>
 #include <public/sched.h>
 #include <public/hvm/save.h>
+#include <public/pmu.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 /*
  * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
@@ -68,6 +68,10 @@
 #define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
 static bool_t __read_mostly full_width_write;
 
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
 /*
  * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
  * counters. 4 bits for every counter.
@@ -75,17 +79,6 @@ static bool_t __read_mostly full_width_write;
 #define FIXED_CTR_CTRL_BITS 4
 #define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
 
-#define VPMU_CORE2_MAX_FIXED_PMCS     4
-struct core2_vpmu_context {
-    u64 fixed_ctrl;
-    u64 ds_area;
-    u64 pebs_enable;
-    u64 global_ovf_status;
-    u64 enabled_cntrs;  /* Follows PERF_GLOBAL_CTRL MSR format */
-    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 /* Number of general-purpose and fixed performance counters */
 static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
 
@@ -225,6 +218,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     return 0;
 }
 
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
@@ -294,12 +288,15 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
 static inline void __core2_vpmu_save(struct vcpu *v)
 {
     int i;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -322,10 +319,13 @@ static int core2_vpmu_save(struct vcpu *v)
 static inline void __core2_vpmu_load(struct vcpu *v)
 {
     unsigned int i, pmc_start;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
@@ -333,8 +333,8 @@ static inline void __core2_vpmu_load(struct vcpu *v)
         pmc_start = MSR_IA32_PERFCTR0;
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
     }
 
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
@@ -357,7 +357,8 @@ static void core2_vpmu_load(struct vcpu *v)
 static int core2_vpmu_alloc_resource(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+    uint64_t *p = NULL;
 
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
@@ -370,12 +371,20 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
         goto out_err;
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
-    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
-    if ( !core2_vpmu_cxt )
+    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                   sizeof(uint64_t) * fixed_pmc_cnt +
+                                   sizeof(struct xen_pmu_cntr_pair) *
+                                   arch_pmc_cnt);
+    p = xzalloc_bytes(sizeof(uint64_t));
+    if ( !core2_vpmu_cxt || !p )
         goto out_err;
 
+    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+                                    sizeof(uint64_t) * fixed_pmc_cnt;
+
     vpmu->context = (void *)core2_vpmu_cxt;
+    vpmu->priv_context = (void *)p;
 
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
 
@@ -386,6 +395,9 @@ out_err:
     vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR);
     release_pmu_ownship(PMU_OWNER_HVM);
 
+    xfree(core2_vpmu_cxt);
+    xfree(p);
+
     printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
            v->vcpu_id, v->domain->domain_id);
 
@@ -421,7 +433,8 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     int type = -1, index = -1;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+    uint64_t *enabled_cntrs;
 
     if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
     {
@@ -447,10 +460,11 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     }
 
     core2_vpmu_cxt = vpmu->context;
+    enabled_cntrs = (uint64_t *)vpmu->priv_context;
     switch ( msr )
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_ovf_status &= ~msr_content;
+        core2_vpmu_cxt->global_status &= ~msr_content;
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
@@ -484,15 +498,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         break;
     case MSR_CORE_PERF_FIXED_CTR_CTRL:
         vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        core2_vpmu_cxt->enabled_cntrs &=
-                ~(((1ULL << VPMU_CORE2_MAX_FIXED_PMCS) - 1) << 32);
+        *enabled_cntrs &= ~(((1ULL << fixed_pmc_cnt) - 1) << 32);
         if ( msr_content != 0 )
         {
             u64 val = msr_content;
             for ( i = 0; i < fixed_pmc_cnt; i++ )
             {
                 if ( val & 3 )
-                    core2_vpmu_cxt->enabled_cntrs |= (1ULL << 32) << i;
+                    *enabled_cntrs |= (1ULL << 32) << i;
                 val >>= FIXED_CTR_CTRL_BITS;
             }
         }
@@ -503,19 +516,21 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         tmp = msr - MSR_P6_EVNTSEL0;
         if ( tmp >= 0 && tmp < arch_pmc_cnt )
         {
+            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
 
             if ( msr_content & (1ULL << 22) )
-                core2_vpmu_cxt->enabled_cntrs |= 1ULL << tmp;
+                *enabled_cntrs |= 1ULL << tmp;
             else
-                core2_vpmu_cxt->enabled_cntrs &= ~(1ULL << tmp);
+                *enabled_cntrs &= ~(1ULL << tmp);
 
-            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+            xen_pmu_cntr_pair[tmp].control = msr_content;
         }
     }
 
-    if ((global_ctrl & core2_vpmu_cxt->enabled_cntrs) ||
-        (core2_vpmu_cxt->ds_area != 0)  )
+    if ((global_ctrl & *enabled_cntrs) || (core2_vpmu_cxt->ds_area != 0)  )
         vpmu_set(vpmu, VPMU_RUNNING);
     else
         vpmu_reset(vpmu, VPMU_RUNNING);
@@ -561,7 +576,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
     int type = -1, index = -1;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
 
     if ( core2_vpmu_msr_common_check(msr, &type, &index) )
     {
@@ -572,7 +587,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = 0;
             break;
         case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_ovf_status;
+            *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
@@ -621,8 +636,11 @@ static void core2_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
     int i;
-    const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
     u64 val;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
          return;
@@ -641,12 +659,9 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     /* Print the contents of the counter and its configuration msr. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
-
         printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-               i, msr_pair[i].counter, msr_pair[i].control);
-    }
+            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
+
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
@@ -655,7 +670,7 @@ static void core2_vpmu_dump(const struct vcpu *v)
     for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, core2_vpmu_cxt->fix_counters[i],
+               i, fixed_counters[i],
                val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
@@ -666,14 +681,14 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
     struct vcpu *v = current;
     u64 msr_content;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
 
     rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
     if ( msr_content )
     {
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_ovf_status |= msr_content;
+        core2_vpmu_cxt->global_status |= msr_content;
         msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
@@ -736,12 +751,6 @@ func_out:
 
     arch_pmc_cnt = core2_get_arch_pmc_count();
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
-    {
-        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
-        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
-               fixed_pmc_cnt);
-    }
     check_pmc_quirk();
 
     return 0;
@@ -755,6 +764,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
         return;
 
     xfree(vpmu->context);
+    xfree(vpmu->priv_context);
     if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
     release_pmu_ownship(PMU_OWNER_HVM);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 0340e5b..0acc486 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -31,6 +31,7 @@
 #include <asm/hvm/svm/svm.h>
 #include <asm/hvm/svm/vmcb.h>
 #include <asm/apic.h>
+#include <public/pmu.h>
 
 /*
  * "vpmu" :     vpmu generally enabled
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 3225937..5aae2e7 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -20,11 +20,15 @@
 #include <asm/regs.h>
 #include <asm/current.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
 
+struct arch_msr_pair {
+    u64 counter;
+    u64 control;
+};
+
 /*
  * Intel "Architectural Performance Monitoring" CPUID
  * detection/enumeration details:
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
deleted file mode 100644
index 410372d..0000000
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ /dev/null
@@ -1,32 +0,0 @@
-
-/*
- * vpmu_core2.h: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_CORE_H_
-#define __ASM_X86_HVM_VPMU_CORE_H_
-
-struct arch_msr_pair {
-    u64 counter;
-    u64 control;
-};
-
-#endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
-
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 7ee0f01..3e5d9de 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -22,6 +22,8 @@
 #ifndef __ASM_X86_HVM_VPMU_H_
 #define __ASM_X86_HVM_VPMU_H_
 
+#include <public/pmu.h>
+
 /*
  * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
  * See arch/x86/hvm/vpmu.c.
@@ -29,12 +31,9 @@
 #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
 #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
 
-
-#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 #define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
 #define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
                                           arch.hvm_vcpu.vpmu))
-#define vpmu_domain(vpmu) (vpmu_vcpu(vpmu)->domain)
 
 #define MSR_TYPE_COUNTER            0
 #define MSR_TYPE_CTRL               1
@@ -42,6 +41,9 @@
 #define MSR_TYPE_ARCH_COUNTER       3
 #define MSR_TYPE_ARCH_CTRL          4
 
+/* Start of PMU register bank */
+#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
+                                                 (uintptr_t)ctxt->offset))
 
 /* Arch specific operations shared by all vpmus */
 struct arch_vpmu_ops {
@@ -64,7 +66,8 @@ struct vpmu_struct {
     u32 flags;
     u32 last_pcpu;
     u32 hw_lapic_lvtpc;
-    void *context;
+    void *context;      /* May be shared with PV guest */
+    void *priv_context; /* hypervisor-only */
     struct arch_vpmu_ops *arch_vpmu_ops;
 };
 
@@ -76,11 +79,6 @@ struct vpmu_struct {
 #define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
 #define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
 
-/* VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-
 #define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
 #define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
 #define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 7496556..e982b53 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -388,6 +388,9 @@ typedef uint64_t xen_callback_t;
 
 #endif
 
+/* Stub definition of PMU structure */
+typedef struct xen_arch_pmu {} xen_arch_pmu_t;
+
 #endif /*  __XEN_PUBLIC_ARCH_ARM_H__ */
 
 /*
diff --git a/xen/include/public/arch-x86/pmu.h b/xen/include/public/arch-x86/pmu.h
new file mode 100644
index 0000000..b4eda67
--- /dev/null
+++ b/xen/include/public/arch-x86/pmu.h
@@ -0,0 +1,62 @@
+#ifndef __XEN_PUBLIC_ARCH_X86_PMU_H__
+#define __XEN_PUBLIC_ARCH_X86_PMU_H__
+
+/* x86-specific PMU definitions */
+
+/* AMD PMU registers and structures */
+struct xen_pmu_amd_ctxt {
+    uint32_t counters;       /* Offset to counter MSRs */
+    uint32_t ctrls;          /* Offset to control MSRs */
+};
+
+/* Intel PMU registers and structures */
+struct xen_pmu_cntr_pair {
+    uint64_t counter;
+    uint64_t control;
+};
+
+struct xen_pmu_intel_ctxt {
+    uint64_t global_ctrl;
+    uint64_t global_ovf_ctrl;
+    uint64_t global_status;
+    uint64_t fixed_ctrl;
+    uint64_t ds_area;
+    uint64_t pebs_enable;
+    uint64_t debugctl;
+    uint32_t fixed_counters;  /* Offset to fixed counter MSRs */
+    uint32_t arch_counters;   /* Offset to architectural counter MSRs */
+};
+
+#define XENPMU_MAX_CTXT_SZ        (sizeof(struct xen_pmu_amd_ctxt) > \
+                                    sizeof(struct xen_pmu_intel_ctxt) ? \
+                                     sizeof(struct xen_pmu_amd_ctxt) : \
+                                     sizeof(struct xen_pmu_intel_ctxt))
+#define XENPMU_CTXT_PAD_SZ        (((XENPMU_MAX_CTXT_SZ + 64) & ~63) + 128)
+struct xen_arch_pmu {
+    union {
+        struct cpu_user_regs regs;
+        uint8_t pad1[256];
+    } r;
+    union {
+        uint32_t lapic_lvtpc;
+        uint64_t pad2;
+    } l;
+    union {
+        struct xen_pmu_amd_ctxt amd;
+        struct xen_pmu_intel_ctxt intel;
+        uint8_t pad3[XENPMU_CTXT_PAD_SZ];
+    } c;
+};
+typedef struct xen_arch_pmu xen_arch_pmu_t;
+
+#endif /* __XEN_PUBLIC_ARCH_X86_PMU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
+
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
new file mode 100644
index 0000000..3ffd2cf
--- /dev/null
+++ b/xen/include/public/pmu.h
@@ -0,0 +1,38 @@
+#ifndef __XEN_PUBLIC_PMU_H__
+#define __XEN_PUBLIC_PMU_H__
+
+#include "xen.h"
+#if defined(__i386__) || defined(__x86_64__)
+#include "arch-x86/pmu.h"
+#elif defined (__arm__) || defined (__aarch64__)
+#include "arch-arm.h"
+#else
+#error "Unsupported architecture"
+#endif
+
+#define XENPMU_VER_MAJ    0
+#define XENPMU_VER_MIN    0
+
+
+/* Shared between hypervisor and PV domain */
+struct xen_pmu_data {
+    uint32_t domain_id;
+    uint32_t vcpu_id;
+    uint32_t pcpu_id;
+    uint32_t pmu_flags;
+
+    xen_arch_pmu_t pmu;
+};
+typedef struct xen_pmu_data xen_pmu_data_t;
+
+#endif /* __XEN_PUBLIC_PMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.1.4


* [PATCH v6 09/19] x86/VPMU: Make vpmu not HVM-specific
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (7 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 08/19] x86/VPMU: Add public xenpmu.h Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-13 15:53 ` [PATCH v6 10/19] x86/VPMU: Interface for setting PMU mode and flags Boris Ostrovsky
                   ` (10 subsequent siblings)
  19 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

The vpmu structure will be used for both HVM and PV guests, so move it from
hvm_vcpu to arch_vcpu.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/include/asm-x86/domain.h   | 2 ++
 xen/include/asm-x86/hvm/vcpu.h | 3 ---
 xen/include/asm-x86/hvm/vpmu.h | 5 ++---
 3 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index c5c266f..25fb5d6 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -403,6 +403,8 @@ struct arch_vcpu
     void (*ctxt_switch_from) (struct vcpu *);
     void (*ctxt_switch_to) (struct vcpu *);
 
+    struct vpmu_struct vpmu;
+
     /* Virtual Machine Extensions */
     union {
         struct pv_vcpu pv_vcpu;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index f34fa91..a0c5583 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -152,9 +152,6 @@ struct hvm_vcpu {
     u32                 msr_tsc_aux;
     u64                 msr_tsc_adjust;
 
-    /* VPMU */
-    struct vpmu_struct  vpmu;
-
     union {
         struct arch_vmx_struct vmx;
         struct arch_svm_struct svm;
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 3e5d9de..9e49a78 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -31,9 +31,8 @@
 #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
 #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
 
-#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
-#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
-                                          arch.hvm_vcpu.vpmu))
+#define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
+#define vpmu_vcpu(vpmu)   container_of(vpmu, struct vcpu, arch.vpmu)
 
 #define MSR_TYPE_COUNTER            0
 #define MSR_TYPE_CTRL               1
-- 
1.8.1.4


* [PATCH v6 10/19] x86/VPMU: Interface for setting PMU mode and flags
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (8 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 09/19] x86/VPMU: Make vpmu not HVM-specific Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-20 15:40   ` Jan Beulich
  2014-05-13 15:53 ` [PATCH v6 11/19] x86/VPMU: Initialize PMU for PV guests Boris Ostrovsky
                   ` (9 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Add a runtime interface for setting the PMU mode and flags. Three main modes
are provided:
* PMU off
* PMU on: Guests can access PMU MSRs and receive PMU interrupts. dom0
  profiles itself and the hypervisor.
* dom0-only PMU: dom0 collects samples for both itself and guests.

For feature flags, only Intel's BTS is currently supported.

Mode and flags are set via the HYPERVISOR_xenpmu_op hypercall, as sketched below.
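
A usage sketch for the hardware domain (hypothetical caller; only the d.val
member of xen_pmu_params_t is relied upon here, matching its use in
do_xenpmu_op() below):

    xen_pmu_params_t p = { };
    int ret;

    p.d.val = XENPMU_MODE_ON;   /* or XENPMU_MODE_OFF to turn the VPMU off */
    /* Fails with -EPERM unless issued by the control domain. */
    ret = HYPERVISOR_xenpmu_op(XENPMU_mode_set, &p);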

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/domain.c              |   4 +-
 xen/arch/x86/hvm/svm/vpmu.c        |   4 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c  |  10 +--
 xen/arch/x86/hvm/vpmu.c            | 121 ++++++++++++++++++++++++++++++++++---
 xen/arch/x86/x86_64/compat/entry.S |   4 ++
 xen/arch/x86/x86_64/entry.S        |   4 ++
 xen/include/asm-x86/hvm/vpmu.h     |  14 ++---
 xen/include/public/pmu.h           |  48 +++++++++++++++
 xen/include/public/xen.h           |   1 +
 xen/include/xen/hypercall.h        |   4 ++
 10 files changed, 188 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2a9c6fc..483e85b 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1482,7 +1482,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 
     if ( is_hvm_vcpu(prev) )
     {
-        if (prev != next)
+        if ( (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
             vpmu_save(prev);
 
         if ( !list_empty(&prev->arch.hvm_vcpu.tm_list) )
@@ -1526,7 +1526,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
                            !is_hardware_domain(next->domain));
     }
 
-    if (is_hvm_vcpu(next) && (prev != next) )
+    if ( is_hvm_vcpu(next) && (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
         /* Must be done with interrupts enabled */
         vpmu_load(next);
 
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index ebdba8e..3828c4b 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -472,14 +472,14 @@ struct arch_vpmu_ops amd_vpmu_ops = {
     .arch_vpmu_dump = amd_vpmu_dump
 };
 
-int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+int svm_vpmu_initialise(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     uint8_t family = current_cpu_data.x86;
     int ret = 0;
 
     /* vpmu enabled? */
-    if ( !vpmu_flags )
+    if ( vpmu_mode == XENPMU_MODE_OFF )
         return 0;
 
     switch ( family )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 1fe583f..8d5b26e 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -703,13 +703,13 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
     return 1;
 }
 
-static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+static int core2_vpmu_initialise(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     u64 msr_content;
     struct cpuinfo_x86 *c = &current_cpu_data;
 
-    if ( !(vpmu_flags & VPMU_BOOT_BTS) )
+    if ( !(vpmu_features & XENPMU_FEATURE_INTEL_BTS) )
         goto func_out;
     /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
     if ( cpu_has(c, X86_FEATURE_DS) )
@@ -821,7 +821,7 @@ struct arch_vpmu_ops core2_no_vpmu_ops = {
     .do_cpuid = core2_no_vpmu_do_cpuid,
 };
 
-int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+int vmx_vpmu_initialise(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     uint8_t family = current_cpu_data.x86;
@@ -829,7 +829,7 @@ int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     int ret = 0;
 
     vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( !vpmu_flags )
+    if ( vpmu_mode == XENPMU_MODE_OFF )
         return 0;
 
     if ( family == 6 )
@@ -872,7 +872,7 @@ int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
         /* future: */
         case 0x3d:
         case 0x4e:
-            ret = core2_vpmu_initialise(v, vpmu_flags);
+            ret = core2_vpmu_initialise(v);
             if ( !ret )
                 vpmu->arch_vpmu_ops = &core2_vpmu_ops;
             return ret;
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 0acc486..d3de8a6 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,6 +21,7 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/msr.h>
@@ -38,7 +39,8 @@
  * "vpmu=off" : vpmu generally disabled
  * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
  */
-static unsigned int __read_mostly opt_vpmu_enabled;
+uint64_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
+uint64_t __read_mostly vpmu_features = 0;
 static void parse_vpmu_param(char *s);
 custom_param("vpmu", parse_vpmu_param);
 
@@ -52,7 +54,7 @@ static void __init parse_vpmu_param(char *s)
         break;
     default:
         if ( !strcmp(s, "bts") )
-            opt_vpmu_enabled |= VPMU_BOOT_BTS;
+            vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
         else if ( *s )
         {
             printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
@@ -60,7 +62,7 @@ static void __init parse_vpmu_param(char *s)
         }
         /* fall through */
     case 1:
-        opt_vpmu_enabled |= VPMU_BOOT_ENABLED;
+        vpmu_mode = XENPMU_MODE_ON;
         break;
     }
 }
@@ -77,6 +79,9 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( !(vpmu_mode & XENPMU_MODE_ON) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
         return vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
     return 0;
@@ -86,6 +91,9 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( !(vpmu_mode & XENPMU_MODE_ON) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
         return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
     return 0;
@@ -237,19 +245,19 @@ void vpmu_initialise(struct vcpu *v)
     switch ( vendor )
     {
     case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( svm_vpmu_initialise(v) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         break;
 
     case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( vmx_vpmu_initialise(v) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         break;
 
     default:
         printk("VPMU: Initialization failed. "
                "Unknown CPU vendor %d\n", vendor);
-        opt_vpmu_enabled = 0;
+        vpmu_mode = XENPMU_MODE_OFF;
         break;
     }
 }
@@ -271,3 +279,100 @@ void vpmu_dump(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
 }
 
+/* Unload VPMU contexts */
+static void vpmu_unload_all(void)
+{
+    struct domain *d;
+    struct vcpu *v;
+    struct vpmu_struct *vpmu;
+
+    for_each_domain(d)
+    {
+        for_each_vcpu ( d, v )
+        {
+            if ( v != current )
+                vcpu_pause(v);
+            vpmu = vcpu_vpmu(v);
+
+            if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            {
+                if ( v != current )
+                    vcpu_unpause(v);
+                continue;
+            }
+
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
+                             vpmu_save_force, (void *)v, 1);
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+            if ( v != current )
+                vcpu_unpause(v);
+        }
+    }
+}
+
+
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xen_pmu_params_t pmu_params;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( pmu_params.d.val & ~XENPMU_MODE_ON )
+            return -EINVAL;
+
+        vpmu_mode = pmu_params.d.val;
+        if ( vpmu_mode == XENPMU_MODE_OFF )
+            /*
+             * After this, the VPMU context will never be loaded during a
+             * context switch. We also prevent PMU MSR accesses (which can
+             * load the context) when the VPMU is disabled.
+             */
+            vpmu_unload_all();
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.d.val = vpmu_mode;
+        pmu_params.v.version.maj = XENPMU_VER_MAJ;
+        pmu_params.v.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_features = pmu_params.d.val;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.d.val = vpmu_features;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+     }
+
+    return ret;
+}
diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index 32b3bcc..e1ccf1c 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -416,6 +416,8 @@ ENTRY(compat_hypercall_table)
         .quad do_domctl
         .quad compat_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall           /* reserved for XenClient */
+        .quad do_xenpmu_op              /* 40 */
         .rept __HYPERVISOR_arch_0-((.-compat_hypercall_table)/8)
         .quad compat_ni_hypercall
         .endr
@@ -464,6 +466,8 @@ ENTRY(compat_hypercall_args_table)
         .byte 1 /* do_domctl                */
         .byte 2 /* compat_kexec_op          */
         .byte 1 /* do_tmem_op               */
+        .byte 0 /* reserved for XenClient   */
+        .byte 2 /* do_xenpmu_op             */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-compat_hypercall_args_table)
         .byte 0 /* compat_ni_hypercall      */
         .endr
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 3ea4683..c36ffce 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -757,6 +757,8 @@ ENTRY(hypercall_table)
         .quad do_domctl
         .quad do_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall       /* reserved for XenClient */
+        .quad do_xenpmu_op          /* 40 */
         .rept __HYPERVISOR_arch_0-((.-hypercall_table)/8)
         .quad do_ni_hypercall
         .endr
@@ -805,6 +807,8 @@ ENTRY(hypercall_args_table)
         .byte 1 /* do_domctl            */
         .byte 2 /* do_kexec             */
         .byte 1 /* do_tmem_op           */
+        .byte 0 /* reserved for XenClient */
+        .byte 2 /* do_xenpmu_op         */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-hypercall_args_table)
         .byte 0 /* do_ni_hypercall      */
         .endr
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 9e49a78..3dfdab7 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -24,13 +24,6 @@
 
 #include <public/pmu.h>
 
-/*
- * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
- * See arch/x86/hvm/vpmu.c.
- */
-#define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
-#define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
-
 #define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
 #define vpmu_vcpu(vpmu)   container_of(vpmu, struct vcpu, arch.vpmu)
 
@@ -58,8 +51,8 @@ struct arch_vpmu_ops {
     void (*arch_vpmu_dump)(const struct vcpu *);
 };
 
-int vmx_vpmu_initialise(struct vcpu *, unsigned int flags);
-int svm_vpmu_initialise(struct vcpu *, unsigned int flags);
+int vmx_vpmu_initialise(struct vcpu *);
+int svm_vpmu_initialise(struct vcpu *);
 
 struct vpmu_struct {
     u32 flags;
@@ -99,5 +92,8 @@ void vpmu_dump(struct vcpu *v);
 extern int acquire_pmu_ownership(int pmu_ownership);
 extern void release_pmu_ownership(int pmu_ownership);
 
+extern uint64_t vpmu_mode;
+extern uint64_t vpmu_features;
+
 #endif /* __ASM_X86_HVM_VPMU_H_*/
 
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index 3ffd2cf..f91d935 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -13,6 +13,54 @@
 #define XENPMU_VER_MAJ    0
 #define XENPMU_VER_MIN    0
 
+/*
+ * ` enum neg_errnoval
+ * ` HYPERVISOR_xenpmu_op(enum xenpmu_op cmd, struct xenpmu_params *args);
+ *
+ * @cmd  == XENPMU_* (PMU operation)
+ * @args == struct xenpmu_params
+ */
+/* ` enum xenpmu_op { */
+#define XENPMU_mode_get        0 /* Also used for getting PMU version */
+#define XENPMU_mode_set        1
+#define XENPMU_feature_get     2
+#define XENPMU_feature_set     3
+/* ` } */
+
+/* Parameters structure for HYPERVISOR_xenpmu_op call */
+struct xen_pmu_params {
+    /* IN/OUT parameters */
+    union {
+        struct version {
+            uint8_t maj;
+            uint8_t min;
+        } version;
+        uint64_t pad;
+    } v;
+    union {
+        uint64_t val;
+        XEN_GUEST_HANDLE(void) valp;
+    } d;
+
+    /* IN parameters */
+    uint64_t vcpu;
+};
+typedef struct xen_pmu_params xen_pmu_params_t;
+DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
+
+/* PMU modes:
+ * - XENPMU_MODE_OFF:   No PMU virtualization
+ * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
+ *                      itself and Xen
+ */
+#define XENPMU_MODE_OFF           0
+#define XENPMU_MODE_ON            (1<<0)
+
+/*
+ * PMU features:
+ * - XENPMU_FEATURE_INTEL_BTS: Intel BTS support (ignored on AMD)
+ */
+#define XENPMU_FEATURE_INTEL_BTS  1
 
 /* Shared between hypervisor and PV domain */
 struct xen_pmu_data {
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index a6a2092..0766790 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -101,6 +101,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_kexec_op             37
 #define __HYPERVISOR_tmem_op              38
 #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
+#define __HYPERVISOR_xenpmu_op            40
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index a9e5229..cf34547 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -14,6 +14,7 @@
 #include <public/event_channel.h>
 #include <public/tmem.h>
 #include <public/version.h>
+#include <public/pmu.h>
 #include <asm/hypercall.h>
 #include <xsm/xsm.h>
 
@@ -139,6 +140,9 @@ do_tmem_op(
 extern long
 do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
+extern long
+do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg);
+
 #ifdef CONFIG_COMPAT
 
 extern int
-- 
1.8.1.4

^ permalink raw reply related	[flat|nested] 65+ messages in thread

* [PATCH v6 11/19] x86/VPMU: Initialize PMU for PV guests
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (9 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 10/19] x86/VPMU: Interface for setting PMU mode and flags Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-20 15:51   ` Jan Beulich
  2014-05-20 15:52   ` Jan Beulich
  2014-05-13 15:53 ` [PATCH v6 12/19] x86/VPMU: Add support for PMU register handling on " Boris Ostrovsky
                   ` (8 subsequent siblings)
  19 siblings, 2 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Add code for initializing and tearing down the PMU for PV guests.
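
For illustration, a PV guest is expected to register a shared PMU page for
each VCPU roughly as sketched below. This is not part of the series;
alloc_xenpmu_page(), virt_to_gfn() and the HYPERVISOR_xenpmu_op() wrapper
are placeholders for whatever the guest kernel provides:

    static int guest_pmu_init(unsigned int vcpu)
    {
        xen_pmu_params_t params;
        void *page = alloc_xenpmu_page();   /* hypothetical allocator */

        if ( !page )
            return -ENOMEM;

        memset(&params, 0, sizeof(params));
        params.d.val = virt_to_gfn(page);   /* GFN of the shared page */
        params.vcpu = vcpu;

        /* Xen maps the page (pvpmu_init) and calls vpmu_initialise() */
        return HYPERVISOR_xenpmu_op(XENPMU_init, &params);
    }

Teardown is symmetric: XENPMU_finish with the same params.vcpu unmaps the
page and destroys the VPMU (pvpmu_finish below).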

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/hvm/svm/svm.c        |  6 ++-
 xen/arch/x86/hvm/svm/vpmu.c       | 37 +++++++++--------
 xen/arch/x86/hvm/vmx/vmx.c        |  6 ++-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 64 +++++++++++++++++++----------
 xen/arch/x86/hvm/vpmu.c           | 85 ++++++++++++++++++++++++++++++++++++++-
 xen/common/event_channel.c        |  1 +
 xen/include/asm-x86/hvm/vpmu.h    |  1 +
 xen/include/public/pmu.h          |  2 +
 xen/include/public/xen.h          |  1 +
 xen/include/xen/softirq.h         |  1 +
 10 files changed, 161 insertions(+), 43 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 38d7923..c23db32 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1150,7 +1150,8 @@ static int svm_vcpu_initialise(struct vcpu *v)
         return rc;
     }
 
-    vpmu_initialise(v);
+    if ( is_hvm_domain(v->domain) )
+        vpmu_initialise(v);
 
     svm_guest_osvw_init(v);
 
@@ -1159,7 +1160,8 @@ static int svm_vcpu_initialise(struct vcpu *v)
 
 static void svm_vcpu_destroy(struct vcpu *v)
 {
-    vpmu_destroy(v);
+    if ( is_hvm_domain(v->domain) )
+        vpmu_destroy(v);
     svm_destroy_vmcb(v);
     passive_domain_destroy(v);
 }
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 3828c4b..42c3530 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -381,16 +381,21 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
-    if ( !ctxt )
+    if ( !is_pv_domain(v->domain) )
     {
-        gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-            " PMU feature is unavailable on domain %d vcpu %d.\n",
-            v->vcpu_id, v->domain->domain_id);
-        return -ENOMEM;
+        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
+                     "PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->domain->domain_id, v->vcpu_id);
+            return -ENOMEM;
+        }
     }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
 
     ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
     ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
@@ -408,17 +413,17 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( is_hvm_domain(v->domain) && is_msr_bitmap_on(vpmu) )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    xfree(vpmu->context);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+    if ( is_hvm_domain(v->domain) )
     {
-        vpmu_reset(vpmu, VPMU_RUNNING);
+        if ( is_msr_bitmap_on(vpmu) )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
 }
 
 /* VPMU part of the 'q' keyhandler */
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 23d58d9..1c9e742 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -115,7 +115,8 @@ static int vmx_vcpu_initialise(struct vcpu *v)
         return rc;
     }
 
-    vpmu_initialise(v);
+    if ( is_hvm_domain(v->domain) )
+        vpmu_initialise(v);
 
     vmx_install_vlapic_mapping(v);
 
@@ -129,7 +130,8 @@ static int vmx_vcpu_initialise(struct vcpu *v)
 static void vmx_vcpu_destroy(struct vcpu *v)
 {
     vmx_destroy_vmcs(v);
-    vpmu_destroy(v);
+    if ( is_hvm_domain(v->domain) )
+        vpmu_destroy(v);
     passive_domain_destroy(v);
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 8d5b26e..b048180 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -360,25 +360,34 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
     struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
     uint64_t *p = NULL;
 
-    if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-        return 0;
-
-    wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-    if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR) )
-        goto out_err;
-
-    if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR) )
-        goto out_err;
-    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
-                                   sizeof(uint64_t) * fixed_pmc_cnt +
-                                   sizeof(struct xen_pmu_cntr_pair) *
-                                   arch_pmc_cnt);
     p = xzalloc_bytes(sizeof(uint64_t));
-    if ( !core2_vpmu_cxt || !p )
+    if ( !p )
         goto out_err;
 
+    if ( !is_pv_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            goto out_err;
+
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR) )
+            goto out_err_hvm;
+        if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR) )
+            goto out_err_hvm;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct xen_pmu_cntr_pair) *
+                                       arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err_hvm;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
+    }
+
     core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
     core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
                                     sizeof(uint64_t) * fixed_pmc_cnt;
@@ -390,7 +399,7 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 
     return 1;
 
-out_err:
+out_err_hvm:
     vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR);
     vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR);
     release_pmu_ownship(PMU_OWNER_HVM);
@@ -398,6 +407,7 @@ out_err:
     xfree(core2_vpmu_cxt);
     xfree(p);
 
+out_err:
     printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
            v->vcpu_id, v->domain->domain_id);
 
@@ -753,6 +763,10 @@ func_out:
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
     check_pmc_quirk();
 
+    /* PV domains can allocate resources immediately */
+    if ( is_pv_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
+        return 1;
+
     return 0;
 }
 
@@ -764,11 +778,17 @@ static void core2_vpmu_destroy(struct vcpu *v)
         return;
 
     xfree(vpmu->context);
-    xfree(vpmu->priv_context);
-    if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-    release_pmu_ownship(PMU_OWNER_HVM);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
+
+    if ( is_hvm_domain(v->domain) )
+    {
+        xfree(vpmu->priv_context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
 }
 
 struct arch_vpmu_ops core2_vpmu_ops = {
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index d3de8a6..2d152f7 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,10 +21,14 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
 #include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/msr.h>
+#include <asm/p2m.h>
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/vmx/vmcs.h>
@@ -267,7 +271,13 @@ void vpmu_destroy(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
         vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
 }
 
 /* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
@@ -312,6 +322,67 @@ static void vpmu_unload_all(void)
     }
 }
 
+static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    struct page_info *page;
+    uint64_t gfn = params->d.val;
+
+    if ( !is_pv_domain(d) )
+        return -EINVAL;
+
+    if ( params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
+    if ( !v->arch.vpmu.xenpmu_data )
+    {
+        put_page_and_type(page);
+        return -EINVAL;
+    }
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page_and_type(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
 
 long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
 {
@@ -372,7 +443,19 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
             return -EFAULT;
         ret = 0;
         break;
-     }
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        break;
+    }
 
     return ret;
 }
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 6853842..7f9b8c9 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -108,6 +108,7 @@ static int virq_is_global(uint32_t virq)
     case VIRQ_TIMER:
     case VIRQ_DEBUG:
     case VIRQ_XENOPROF:
+    case VIRQ_XENPMU:
         rc = 0;
         break;
     case VIRQ_ARCH_0 ... VIRQ_ARCH_7:
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 3dfdab7..438a913 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -61,6 +61,7 @@ struct vpmu_struct {
     void *context;      /* May be shared with PV guest */
     void *priv_context; /* hypervisor-only */
     struct arch_vpmu_ops *arch_vpmu_ops;
+    xen_pmu_data_t *xenpmu_data;
 };
 
 /* VPMU states */
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index f91d935..814e061 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -25,6 +25,8 @@
 #define XENPMU_mode_set        1
 #define XENPMU_feature_get     2
 #define XENPMU_feature_set     3
+#define XENPMU_init            4
+#define XENPMU_finish          5
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 0766790..e4d0b79 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -161,6 +161,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define VIRQ_MEM_EVENT  10 /* G. (DOM0) A memory event has occured           */
 #define VIRQ_XC_RESERVED 11 /* G. Reserved for XenClient                     */
 #define VIRQ_ENOMEM     12 /* G. (DOM0) Low on heap memory       */
+#define VIRQ_XENPMU     13 /* V.  PMC interrupt                              */
 
 /* Architecture-specific VIRQ definitions. */
 #define VIRQ_ARCH_0    16
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index 0c0d481..5829fa4 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -8,6 +8,7 @@ enum {
     NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
     RCU_SOFTIRQ,
     TASKLET_SOFTIRQ,
+    PMU_SOFTIRQ,
     NR_COMMON_SOFTIRQS
 };
 
-- 
1.8.1.4

^ permalink raw reply related	[flat|nested] 65+ messages in thread

* [PATCH v6 12/19] x86/VPMU: Add support for PMU register handling on PV guests
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (10 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 11/19] x86/VPMU: Initialize PMU for PV guests Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-22 14:50   ` Jan Beulich
  2014-05-13 15:53 ` [PATCH v6 13/19] x86/VPMU: Handle PMU interrupts for " Boris Ostrovsky
                   ` (7 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Intercept accesses to PMU MSRs and process them in the VPMU module.

Dump VPMU state for all domains (HVM and PV) when requested.
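
As a sketch of what this enables (not code from this patch), a PV guest can
now program a counter with plain MSR accesses; the WRMSR/RDMSR traps are
routed by the traps.c hunk below to vpmu_do_wrmsr()/vpmu_do_rdmsr() instead
of being treated as unknown MSRs. Linux-style wrmsrl()/rdmsrl() helpers and
the 0x4300c0 event encoding (instructions retired, USR+OS, counter enabled)
are used for illustration only:

    uint64_t count;

    wrmsrl(MSR_P6_EVNTSEL0, 0x4300c0);  /* handled by vpmu_do_wrmsr() */
    rdmsrl(MSR_P6_PERFCTR0, count);     /* handled by vpmu_do_rdmsr() */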

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/domain.c             |  3 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 64 +++++++++++++++++++++++++++++++--------
 xen/arch/x86/hvm/vpmu.c           |  7 +++++
 xen/arch/x86/traps.c              | 32 ++++++++++++++++++--
 xen/include/public/pmu.h          |  1 +
 5 files changed, 91 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 483e85b..4a122da 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2012,8 +2012,7 @@ void arch_dump_vcpu_info(struct vcpu *v)
 {
     paging_dump_vcpu_info(v);
 
-    if ( is_hvm_vcpu(v) )
-        vpmu_dump(v);
+    vpmu_dump(v);
 }
 
 void domain_cpuid(
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index b048180..8182dc3 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -27,6 +27,7 @@
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/apic.h>
+#include <asm/traps.h>
 #include <asm/msr.h>
 #include <asm/msr-index.h>
 #include <asm/hvm/support.h>
@@ -297,12 +298,18 @@ static inline void __core2_vpmu_save(struct vcpu *v)
         rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
         rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
+
+    if ( is_pv_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
+    if ( is_pv_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
     if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
         return 0;
 
@@ -340,6 +347,13 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
     wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
     wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( is_pv_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -436,9 +450,16 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     return 1;
 }
 
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( !is_pv_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
-    u64 global_ctrl;
     int i, tmp;
     int type = -1, index = -1;
     struct vcpu *v = current;
@@ -462,7 +483,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
                     return 1;
                 gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 0;
             }
         }
@@ -475,11 +496,12 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
                  "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
-        hvm_inject_hw_exception(TRAP_gp_fault, 0);
+        inject_trap(v, TRAP_gp_fault);
         return 1;
     case MSR_IA32_PEBS_ENABLE:
         if ( msr_content & 1 )
@@ -495,7 +517,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 gdprintk(XENLOG_WARNING,
                          "Illegal address for IA32_DS_AREA: %#" PRIx64 "x\n",
                          msr_content);
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 1;
             }
             core2_vpmu_cxt->ds_area = msr_content;
@@ -504,10 +526,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
         return 1;
     case MSR_CORE_PERF_GLOBAL_CTRL:
-        global_ctrl = msr_content;
+        core2_vpmu_cxt->global_ctrl = msr_content;
         break;
     case MSR_CORE_PERF_FIXED_CTR_CTRL:
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        if ( !is_pv_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
+                               &core2_vpmu_cxt->global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
         *enabled_cntrs &= ~(((1ULL << fixed_pmc_cnt) - 1) << 32);
         if ( msr_content != 0 )
         {
@@ -529,7 +555,11 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
                 vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
+                                   &core2_vpmu_cxt->global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
 
             if ( msr_content & (1ULL << 22) )
                 *enabled_cntrs |= 1ULL << tmp;
@@ -540,7 +570,8 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         }
     }
 
-    if ((global_ctrl & *enabled_cntrs) || (core2_vpmu_cxt->ds_area != 0)  )
+    if ((core2_vpmu_cxt->global_ctrl & *enabled_cntrs) ||
+        (core2_vpmu_cxt->ds_area != 0)  )
         vpmu_set(vpmu, VPMU_RUNNING);
     else
         vpmu_reset(vpmu, VPMU_RUNNING);
@@ -570,13 +601,19 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 inject_gp = 1;
             break;
         }
-        if (inject_gp)
-            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
         else
             wrmsrl(msr, msr_content);
     }
     else
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    {
+        if ( !is_pv_domain(v->domain) )
+            vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+        else
+            wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
 
     return 1;
 }
@@ -600,7 +637,10 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
             break;
         default:
             rdmsrl(msr, *msr_content);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 2d152f7..f89e7ff 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -455,6 +455,13 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
             return -EFAULT;
         pvpmu_finish(current->domain, &pmu_params);
         break;
+
+    case XENPMU_lvtpc_set:
+        if ( current->arch.vpmu.xenpmu_data == NULL )
+            return -EINVAL;
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 45070bb..a0d0ba7 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,6 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
+#include <asm/hvm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
@@ -868,8 +869,10 @@ void pv_cpuid(struct cpu_user_regs *regs)
         __clear_bit(X86_FEATURE_TOPOEXT % 32, &c);
         break;
 
+    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) */
+        break;
+
     case 0x00000005: /* MONITOR/MWAIT */
-    case 0x0000000a: /* Architectural Performance Monitor Features */
     case 0x0000000b: /* Extended Topology Enumeration */
     case 0x8000000a: /* SVM revision and features */
     case 0x8000001b: /* Instruction Based Sampling */
@@ -885,6 +888,8 @@ void pv_cpuid(struct cpu_user_regs *regs)
     }
 
  out:
+    vpmu_do_cpuid(regs->eax, &a, &b, &c, &d);
+
     regs->eax = a;
     regs->ebx = b;
     regs->ecx = c;
@@ -2515,7 +2520,14 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
             if ( v->arch.debugreg[7] & DR7_ACTIVE_MASK )
                 wrmsrl(regs->_ecx, msr_content);
             break;
-
+        case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
+            if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
+                goto invalid;
+            break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
                 break;
@@ -2617,6 +2629,22 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
                             [regs->_ecx - MSR_AMD64_DR1_ADDRESS_MASK + 1];
             regs->edx = 0;
             break;
+        case MSR_IA32_PERF_CAPABILITIES:
+            /* No extra capabilities are supported */
+            regs->eax = regs->edx = 0;
+            break;
+        case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
+            if ( vpmu_do_rdmsr(regs->ecx, &msr_content) )
+            {
+                regs->eax = (uint32_t)msr_content;
+                regs->edx = (uint32_t)(msr_content >> 32);
+                break;
+            }
+            goto rdmsr_normal;
 
         default:
             if ( rdmsr_hypervisor_regs(regs->ecx, &val) )
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index 814e061..81783de 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -27,6 +27,7 @@
 #define XENPMU_feature_set     3
 #define XENPMU_init            4
 #define XENPMU_finish          5
+#define XENPMU_lvtpc_set       6
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
-- 
1.8.1.4

^ permalink raw reply related	[flat|nested] 65+ messages in thread

* [PATCH v6 13/19] x86/VPMU: Handle PMU interrupts for PV guests
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (11 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 12/19] x86/VPMU: Add support for PMU register handling on " Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-22 15:30   ` Jan Beulich
  2014-05-13 15:53 ` [PATCH v6 14/19] x86/VPMU: Merge vpmu_rdmsr and vpmu_wrmsr Boris Ostrovsky
                   ` (6 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Add support for handling PMU interrupts for PV guests.

The VPMU of the interrupted VCPU is unloaded until the guest issues the
XENPMU_flush hypercall. This allows the guest to access PMU MSR values that
are stored in the VPMU context, which is shared between the hypervisor and
the domain, thus avoiding traps to the hypervisor.
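
For reference, the guest side of this protocol would look roughly like the
sketch below (not part of this series; this_cpu_pmu_page, process_sample()
and the HYPERVISOR_xenpmu_op() wrapper are placeholders for guest kernel
infrastructure):

    static void xenpmu_virq_handler(void)  /* bound to VIRQ_XENPMU */
    {
        /* Per-VCPU page registered earlier with XENPMU_init */
        struct xen_pmu_data *pmu = this_cpu_pmu_page;
        xen_pmu_params_t params;

        /*
         * While PMU_CACHED is set the MSR state lives in the shared
         * page, so the sample can be read without trapping to Xen.
         */
        process_sample(&pmu->pmu.r.regs, pmu->domain_id, pmu->vcpu_id);

        /*
         * Re-arm: Xen clears PMU_CACHED, updates the LVTPC from the
         * shared page and reloads the (possibly guest-modified) MSRs.
         */
        memset(&params, 0, sizeof(params));
        HYPERVISOR_xenpmu_op(XENPMU_flush, &params);
    }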

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/hvm/vpmu.c  | 111 ++++++++++++++++++++++++++++++++++++++++++++---
 xen/include/public/pmu.h |   7 +++
 2 files changed, 113 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index f89e7ff..9995728 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -76,7 +76,12 @@ void vpmu_lvtpc_update(uint32_t val)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
-    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if ( !is_pv_domain(current->domain) ||
+         !(current->arch.vpmu.xenpmu_data &&
+           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
 }
 
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
@@ -87,7 +92,23 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         return 0;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
-        return vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+    {
+        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+
+        /*
+         * We may have received a PMU interrupt during WRMSR handling
+         * and since do_wrmsr may load VPMU context we should save
+         * (and unload) it again.
+         */
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
     return 0;
 }
 
@@ -99,14 +120,87 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
         return 0;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-        return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+    {
+        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
     return 0;
 }
 
 int vpmu_do_interrupt(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct vpmu_struct *vpmu;
+
+    /* dom0 will handle interrupt for special domains (e.g. idle domain) */
+    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+        v = hardware_domain->vcpu[smp_processor_id() %
+            hardware_domain->max_vcpus];
+
+    vpmu = vcpu_vpmu(v);
+    if ( !is_hvm_domain(v->domain) )
+    {
+        /* PV guest or dom0 is doing system profiling */
+        const struct cpu_user_regs *gregs;
+        int err;
+
+        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+            return 1;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        /* Store appropriate registers in xenpmu_data */
+        if ( is_pv_32bit_domain(current->domain) )
+        {
+            /*
+             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
+             * and therefore we treat it the same way as a non-privileged
+             * PV 32-bit domain.
+             */
+            struct compat_cpu_user_regs cmp;
+
+            gregs = guest_cpu_user_regs();
+
+            /* Translate to the compat layout, then copy into place */
+            XLAT_cpu_user_regs(&cmp, gregs);
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   &cmp, sizeof(cmp));
+        }
+        else if ( !is_hardware_domain(current->domain) &&
+                 !is_idle_vcpu(current) )
+        {
+            /* PV guest */
+            gregs = guest_cpu_user_regs();
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   gregs, sizeof(struct cpu_user_regs));
+        }
+        else
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   regs, sizeof(struct cpu_user_regs));
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
 
     if ( vpmu->arch_vpmu_ops )
     {
@@ -225,7 +319,8 @@ void vpmu_load(struct vcpu *v)
     local_irq_enable();
 
     /* Only when PMU is counting, we load PMU context immediately. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+        (is_pv_domain(v->domain) && vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
         return;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
@@ -462,6 +557,12 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
         vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
         ret = 0;
         break;
+    case XENPMU_flush:
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        vpmu_load(current);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index 81783de..50f6d6d 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -28,6 +28,7 @@
 #define XENPMU_init            4
 #define XENPMU_finish          5
 #define XENPMU_lvtpc_set       6
+#define XENPMU_flush           7 /* Write cached MSR values to HW     */
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
@@ -65,6 +66,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
  */
 #define XENPMU_FEATURE_INTEL_BTS  1
 
+/*
+ * PMU MSRs are cached in the context so the PV guest doesn't need to trap to
+ * the hypervisor
+ */
+#define PMU_CACHED 1
+
 /* Shared between hypervisor and PV domain */
 struct xen_pmu_data {
     uint32_t domain_id;
-- 
1.8.1.4

^ permalink raw reply related	[flat|nested] 65+ messages in thread

* [PATCH v6 14/19] x86/VPMU: Merge vpmu_rdmsr and vpmu_wrmsr
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (12 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 13/19] x86/VPMU: Handle PMU interrupts for " Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-19 12:04   ` Tian, Kevin
  2014-05-13 15:53 ` [PATCH v6 15/19] x86/VPMU: Add privileged PMU mode Boris Ostrovsky
                   ` (5 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

The two routines share most of their logic, so merge them into a single
vpmu_do_msr() that takes the access direction as an argument.
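
For callers the change looks like this (illustrative wrappers, not code
from the patch):

    static int example_read(unsigned int msr, uint64_t *val)
    {
        /* was vpmu_do_rdmsr(msr, val) */
        return vpmu_do_msr(msr, val, VPMU_MSR_READ);
    }

    static int example_write(unsigned int msr, uint64_t val)
    {
        /* was vpmu_do_wrmsr(msr, val); the merged routine takes a
         * pointer for writes too, so the caller passes a local copy */
        return vpmu_do_msr(msr, &val, VPMU_MSR_WRITE);
    }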

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/svm.c     |  9 ++++++---
 xen/arch/x86/hvm/vmx/vmx.c     | 11 +++++++----
 xen/arch/x86/hvm/vpmu.c        | 42 +++++++++++++++---------------------------
 xen/arch/x86/traps.c           |  4 ++--
 xen/include/asm-x86/hvm/vpmu.h |  6 ++++--
 5 files changed, 34 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index c23db32..3d652c2 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1632,7 +1632,7 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_AMD_FAM15H_EVNTSEL3:
     case MSR_AMD_FAM15H_EVNTSEL4:
     case MSR_AMD_FAM15H_EVNTSEL5:
-        vpmu_do_rdmsr(msr, msr_content);
+        vpmu_do_msr(msr, msr_content, VPMU_MSR_READ);
         break;
 
     case MSR_AMD64_DR0_ADDRESS_MASK:
@@ -1783,9 +1783,12 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
     case MSR_AMD_FAM15H_EVNTSEL3:
     case MSR_AMD_FAM15H_EVNTSEL4:
     case MSR_AMD_FAM15H_EVNTSEL5:
-        vpmu_do_wrmsr(msr, msr_content);
-        break;
+    {
+        uint64_t msr_val = msr_content;
 
+        vpmu_do_msr(msr, &msr_val, VPMU_MSR_WRITE);
+        break;
+    }
     case MSR_IA32_MCx_MISC(4): /* Threshold register */
     case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
         /*
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 1c9e742..8588f48 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2047,11 +2047,11 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         *msr_content |= MSR_IA32_MISC_ENABLE_BTS_UNAVAIL |
                        MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL;
         /* Perhaps vpmu will change some bits. */
-        if ( vpmu_do_rdmsr(msr, msr_content) )
+        if ( vpmu_do_msr(msr, msr_content, VPMU_MSR_READ) )
             goto done;
         break;
     default:
-        if ( vpmu_do_rdmsr(msr, msr_content) )
+        if ( vpmu_do_msr(msr, msr_content, VPMU_MSR_READ) )
             break;
         if ( passive_domain_do_rdmsr(msr, msr_content) )
             goto done;
@@ -2202,6 +2202,7 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
 static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
 {
     struct vcpu *v = current;
+    uint64_t msr_val;
 
     HVM_DBG_LOG(DBG_LEVEL_1, "ecx=%#x, msr_value=%#"PRIx64, msr, msr_content);
 
@@ -2225,7 +2226,8 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         if ( msr_content & ~supported )
         {
             /* Perhaps some other bits are supported in vpmu. */
-            if ( !vpmu_do_wrmsr(msr, msr_content) )
+            msr_val = msr_content;
+            if ( !vpmu_do_msr(msr, &msr_val, VPMU_MSR_WRITE) )
                 break;
         }
         if ( msr_content & IA32_DEBUGCTLMSR_LBR )
@@ -2256,7 +2258,8 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
             goto gp_fault;
         break;
     default:
-        if ( vpmu_do_wrmsr(msr, msr_content) )
+        msr_val = msr_content;
+        if ( vpmu_do_msr(msr, &msr_val, VPMU_MSR_WRITE) )
             return X86EMUL_OKAY;
         if ( passive_domain_do_wrmsr(msr, msr_content) )
             return X86EMUL_OKAY;
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 9995728..896e2be 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -84,20 +84,29 @@ void vpmu_lvtpc_update(uint32_t val)
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
 }
 
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     if ( !(vpmu_mode & XENPMU_MODE_ON) )
         return 0;
 
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
+    ASSERT((rw == VPMU_MSR_READ) || (rw == VPMU_MSR_WRITE));
+
+    if ( vpmu->arch_vpmu_ops )
     {
-        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+        int ret;
+
+        if ( (rw == VPMU_MSR_READ) && vpmu->arch_vpmu_ops->do_rdmsr )
+            ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+        else if ( (rw == VPMU_MSR_WRITE) && vpmu->arch_vpmu_ops->do_wrmsr )
+            ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, *msr_content);
+        else
+            return 0;
 
         /*
-         * We may have received a PMU interrupt during WRMSR handling
-         * and since do_wrmsr may load VPMU context we should save
+         * We may have received a PMU interrupt while handling MSR access
+         * and since do_wr/rdmsr may load VPMU context we should save
          * (and unload) it again.
          */
         if ( !is_hvm_domain(current->domain) &&
@@ -107,31 +116,10 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             vpmu->arch_vpmu_ops->arch_vpmu_save(current);
             vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
         }
-        return ret;
-    }
-    return 0;
-}
-
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
-    if ( !(vpmu_mode & XENPMU_MODE_ON) )
-        return 0;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-    {
-        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
-
-        if ( !is_hvm_domain(current->domain) &&
-            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
         return ret;
     }
+
     return 0;
 }
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index a0d0ba7..adbdebe 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2525,7 +2525,7 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
         case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
         case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
-            if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
+            if ( !vpmu_do_msr(regs->ecx, &msr_content, VPMU_MSR_WRITE) )
                 goto invalid;
             break;
         default:
@@ -2638,7 +2638,7 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
         case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
         case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
-            if ( vpmu_do_rdmsr(regs->ecx, &msr_content) )
+            if ( vpmu_do_msr(regs->ecx, &msr_content, VPMU_MSR_READ) )
             {
                 regs->eax = (uint32_t)msr_content;
                 regs->edx = (uint32_t)(msr_content >> 32);
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 438a913..bab8779 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -78,9 +78,11 @@ struct vpmu_struct {
 #define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
 #define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
 
+#define VPMU_MSR_READ  0
+#define VPMU_MSR_WRITE 1
+
 void vpmu_lvtpc_update(uint32_t val);
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
+int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw);
 int vpmu_do_interrupt(struct cpu_user_regs *regs);
 void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
                                        unsigned int *ecx, unsigned int *edx);
-- 
1.8.1.4

^ permalink raw reply related	[flat|nested] 65+ messages in thread

* [PATCH v6 15/19] x86/VPMU: Add privileged PMU mode
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (13 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 14/19] x86/VPMU: Merge vpmu_rdmsr and vpmu_wrmsr Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-26 11:48   ` Jan Beulich
  2014-05-13 15:53 ` [PATCH v6 16/19] x86/VPMU: Save VPMU state for PV guests during context switch Boris Ostrovsky
                   ` (4 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Add support for a privileged PMU mode which allows the privileged domain
(dom0) to profile both itself (and the hypervisor) and the guests. While
this mode is on, profiling in the guests is disabled.
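
A dom0 tool would switch modes along these lines (a sketch only; the
HYPERVISOR_xenpmu_op() wrapper stands in for however the hypercall is
issued, e.g. via privcmd, and handle_error() is a placeholder):

    xen_pmu_params_t params;

    memset(&params, 0, sizeof(params));
    params.d.val = XENPMU_MODE_PRIV;    /* dom0 profiles everyone */

    /* -EPERM for anyone but the control domain, -EINVAL for bad bits */
    if ( HYPERVISOR_xenpmu_op(XENPMU_mode_set, &params) )
        handle_error();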

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/hvm/vpmu.c  | 100 +++++++++++++++++++++++++++++++++--------------
 xen/arch/x86/traps.c     |  10 +++++
 xen/include/public/pmu.h |   3 ++
 3 files changed, 84 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 896e2be..7cb2231 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -88,7 +88,9 @@ int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
-    if ( !(vpmu_mode & XENPMU_MODE_ON) )
+    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
+         ((vpmu_mode & XENPMU_MODE_PRIV) &&
+          !is_hardware_domain(current->domain)) )
         return 0;
 
     ASSERT((rw == VPMU_MSR_READ) || (rw == VPMU_MSR_WRITE));
@@ -128,16 +130,23 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
     struct vcpu *v = current;
     struct vpmu_struct *vpmu;
 
-    /* dom0 will handle interrupt for special domains (e.g. idle domain) */
-    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+    /*
+     * dom0 will handle interrupt for special domains (e.g. idle domain) or,
+     * in XENPMU_MODE_PRIV, for everyone.
+     */
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
         v = hardware_domain->vcpu[smp_processor_id() %
             hardware_domain->max_vcpus];
 
     vpmu = vcpu_vpmu(v);
-    if ( !is_hvm_domain(v->domain) )
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
     {
         /* PV guest or dom0 is doing system profiling */
-        const struct cpu_user_regs *gregs;
+        struct cpu_user_regs *gregs;
         int err;
 
         if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
@@ -148,34 +157,62 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
         err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
         vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
 
-        /* Store appropriate registers in xenpmu_data */
-        if ( is_pv_32bit_domain(current->domain) )
+        if ( !is_hvm_domain(current->domain) )
         {
-            /*
-             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
-             * and therefore we treat it the same way as a non-priviledged
-             * PV 32-bit domain.
-             */
-            struct compat_cpu_user_regs *cmp;
-
-            gregs = guest_cpu_user_regs();
+            /* Store appropriate registers in xenpmu_data */
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                gregs = guest_cpu_user_regs();
+
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                     !is_pv_32bit_domain(v->domain) )
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           gregs, sizeof(struct cpu_user_regs));
+                else
+                {
+                    /*
+                     * 32-bit dom0 cannot process Xen's addresses (which are
+                     * 64 bit) and therefore we treat it the same way as a
+                     * non-privileged PV 32-bit domain.
+                     */
+
+                    struct compat_cpu_user_regs *cmp;
+
+                    cmp = (struct compat_cpu_user_regs *)
+                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                    XLAT_cpu_user_regs(cmp, gregs);
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           &cmp, sizeof(struct compat_cpu_user_regs));
+                }
+            }
+            else if ( !is_hardware_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       regs, sizeof(struct cpu_user_regs));
 
-            cmp = (void *)&v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            XLAT_cpu_user_regs(cmp, gregs);
-            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                   &cmp, sizeof(struct compat_cpu_user_regs));
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
         }
-        else if ( !is_hardware_domain(current->domain) &&
-                 !is_idle_vcpu(current) )
+        else
         {
-            /* PV guest */
+            /* HVM guest */
+            struct segment_register cs;
+
             gregs = guest_cpu_user_regs();
             memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
                    gregs, sizeof(struct cpu_user_regs));
+
+            hvm_get_segment_register(current, x86_seg_cs, &cs);
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            gregs->cs = cs.attr.fields.dpl;
         }
-        else
-            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                   regs, sizeof(struct cpu_user_regs));
 
         v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
         v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
@@ -481,15 +518,20 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
         if ( copy_from_guest(&pmu_params, arg, 1) )
             return -EFAULT;
 
-        if ( pmu_params.d.val & ~XENPMU_MODE_ON )
+        if ( (pmu_params.d.val & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((pmu_params.d.val & XENPMU_MODE_ON) &&
+              (pmu_params.d.val & XENPMU_MODE_PRIV)) )
             return -EINVAL;
 
         vpmu_mode = pmu_params.d.val;
-        if ( vpmu_mode == XENPMU_MODE_OFF )
+
+        if ( (vpmu_mode == XENPMU_MODE_OFF) || (vpmu_mode & XENPMU_MODE_PRIV) )
             /*
              * After this VPMU context will never be loaded during context
-             * switch. We also prevent PMU MSR accesses (which can load
-             * context) when VPMU is disabled.
+             * switch. Because PMU MSR accesses load VPMU context we don't
+             * allow them when VPMU is off and, for non-privileged domains,
+             * when we are in privileged mode. (We do want these accesses to
+             * load VPMU context for the control domain in this mode.)
              */
             vpmu_unload_all();
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index adbdebe..90c5adb 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2526,7 +2526,11 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
         case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
             if ( !vpmu_do_msr(regs->ecx, &msr_content, VPMU_MSR_WRITE) )
+            {
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                     is_hardware_domain(v->domain) )
                 goto invalid;
+            }
             break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
@@ -2644,6 +2648,12 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
                 regs->edx = (uint32_t)(msr_content >> 32);
                 break;
             }
+            else if ( !is_hardware_domain(v->domain) )
+            {
+                /* Don't leak PMU MSRs to unprivileged domains */
+                regs->eax = regs->edx = 0;
+                break;
+            }
             goto rdmsr_normal;
 
         default:
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index 50f6d6d..e3352a2 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -56,9 +56,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
  * - XENPMU_MODE_OFF:   No PMU virtualization
  * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
  *                      itself and Xen
+ * - XENPMU_MODE_PRIV:  Only dom0 has access to VPMU and it profiles
+ *                      everyone: itself, the hypervisor and the guests.
  */
 #define XENPMU_MODE_OFF           0
 #define XENPMU_MODE_ON            (1<<0)
+#define XENPMU_MODE_PRIV          (1<<1)
 
 /*
  * PMU features:
-- 
1.8.1.4


* [PATCH v6 16/19] x86/VPMU: Save VPMU state for PV guests during context switch
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (14 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 15/19] x86/VPMU: Add privileged PMU mode Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-26 12:03   ` Jan Beulich
  2014-05-13 15:53 ` [PATCH v6 17/19] x86/VPMU: NMI-based VPMU support Boris Ostrovsky
                   ` (3 subsequent siblings)
  19 siblings, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Save VPMU state during context switch for both HVM and PV guests unless we
are in privileged PMU mode (i.e. dom0 is doing all the profiling).

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/domain.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 4a122da..a853071 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1478,16 +1478,14 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     }
 
     if ( prev != next )
-        _update_runstate_area(prev);
-
-    if ( is_hvm_vcpu(prev) )
     {
-        if ( (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
+        _update_runstate_area(prev);
+        if ( vpmu_mode & XENPMU_MODE_ON )
             vpmu_save(prev);
+    }
 
-        if ( !list_empty(&prev->arch.hvm_vcpu.tm_list) )
+    if ( is_hvm_vcpu(prev) &&  !list_empty(&prev->arch.hvm_vcpu.tm_list) )
             pt_save_timer(prev);
-    }
 
     local_irq_disable();
 
@@ -1526,7 +1524,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
                            !is_hardware_domain(next->domain));
     }
 
-    if ( is_hvm_vcpu(next) && (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
+    if ( (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
         /* Must be done with interrupts enabled */
         vpmu_load(next);
 
-- 
1.8.1.4


* [PATCH v6 17/19] x86/VPMU: NMI-based VPMU support
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (15 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 16/19] x86/VPMU: Save VPMU state for PV guests during context switch Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-26 15:55   ` Jan Beulich
  2014-05-30 21:12   ` Tian, Kevin
  2014-05-13 15:53 ` [PATCH v6 18/19] x86/VPMU: Support for PVH guests Boris Ostrovsky
                   ` (2 subsequent siblings)
  19 siblings, 2 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Add support for using NMIs as PMU interrupts.

Most of the processing is still performed by vpmu_do_interrupt(). However, since
certain operations are not NMI-safe we defer them to a softirq that
vpmu_do_interrupt() will raise:
* For PV guests that is send_guest_vcpu_virq()
* For HVM guests it is VLAPIC accesses and hvm_get_segment_register() (the
  latter can be called in privileged profiling mode when the interrupted guest
  is an HVM one).

With send_guest_vcpu_virq() and hvm_get_segment_register() for PV(H) and VLAPIC
accesses for HVM moved to the softirq handler, the only routines/macros that
vpmu_do_interrupt() calls in NMI mode are:
* memcpy()
* querying domain type (is_XX_domain())
* guest_cpu_user_regs()
* XLAT_cpu_user_regs()
* raise_softirq()
* vcpu_vpmu()
* vpmu_ops->arch_vpmu_save()
* vpmu_ops->do_interrupt() (in the future for PVH support)

The latter two only access PMU MSRs with {rd,wr}msrl() (not the _safe versions
which would not be NMI-safe).
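
The deferral follows the usual NMI pattern: the NMI handler only records which
vCPU was interrupted in a per-CPU slot and raises a softirq; the softirq
handler, running with normal interrupt semantics, performs the unsafe work.
A condensed sketch of what the diff below implements (names are taken from the
patch; priv-mode routing to dom0's vCPU and the register fix-ups are omitted):

    static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);

    int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
    {
        /* NMI context: record the sample and defer -- both are NMI-safe. */
        per_cpu(sampled_vcpu, smp_processor_id()) = current;
        raise_softirq(PMU_SOFTIRQ);
        return 1;
    }

    static void pmu_softnmi(void)
    {
        /* Softirq context: non-NMI-safe work can run here. */
        struct vcpu *sampled = per_cpu(sampled_vcpu, smp_processor_id());

        if ( sampled == NULL )
            return;
        per_cpu(sampled_vcpu, smp_processor_id()) = NULL;

        if ( is_hvm_domain(sampled->domain) )
            vpmu_send_nmi(sampled);                  /* VLAPIC access */
        else
            send_guest_vcpu_virq(sampled, VIRQ_XENPMU);
    }

NMI delivery is opt-in at boot: with the reworked parser below, vpmu= takes a
comma-separated list, so for example vpmu=nmi,bts enables NMI delivery together
with BTS, and vpmu=priv boots directly into privileged mode.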

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       |   1 +
 xen/arch/x86/hvm/vmx/vpmu_core2.c |   1 +
 xen/arch/x86/hvm/vpmu.c           | 183 +++++++++++++++++++++++++++++++-------
 3 files changed, 152 insertions(+), 33 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 42c3530..8711e86 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -185,6 +185,7 @@ static inline void context_load(struct vcpu *v)
     }
 }
 
+/* Must be NMI-safe */
 static void amd_vpmu_load(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 8182dc3..c06b305 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -303,6 +303,7 @@ static inline void __core2_vpmu_save(struct vcpu *v)
         rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
 }
 
+/* Must be NMI-safe */
 static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 7cb2231..f73ebbb 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -36,6 +36,7 @@
 #include <asm/hvm/svm/svm.h>
 #include <asm/hvm/svm/vmcb.h>
 #include <asm/apic.h>
+#include <asm/nmi.h>
 #include <public/pmu.h>
 
 /*
@@ -48,34 +49,60 @@ uint64_t __read_mostly vpmu_features = 0;
 static void parse_vpmu_param(char *s);
 custom_param("vpmu", parse_vpmu_param);
 
+static void pmu_softnmi(void);
+
 static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
+static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
+
+static uint32_t __read_mostly vpmu_interrupt_type = PMU_APIC_VECTOR;
 
 static void __init parse_vpmu_param(char *s)
 {
-    switch ( parse_bool(s) )
-    {
-    case 0:
-        break;
-    default:
-        if ( !strcmp(s, "bts") )
-            vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
-        else if ( *s )
+    char *ss;
+
+    vpmu_mode = XENPMU_MODE_ON;
+    if (*s == '\0')
+        return;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        switch  ( parse_bool(s) )
         {
-            printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+        case 0:
+            vpmu_mode = XENPMU_MODE_OFF;
+            return;
+        case -1:
+            if ( !strcmp(s, "nmi") )
+                vpmu_interrupt_type = APIC_DM_NMI;
+            else if ( !strcmp(s, "bts") )
+                vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
+            else if ( !strcmp(s, "priv") )
+            {
+                vpmu_mode &= ~XENPMU_MODE_ON;
+                vpmu_mode |= XENPMU_MODE_PRIV;
+            }
+            else
+            {
+                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+                vpmu_mode = XENPMU_MODE_OFF;
+                return;
+            }
+        default:
             break;
         }
-        /* fall through */
-    case 1:
-        vpmu_mode = XENPMU_MODE_ON;
-        break;
-    }
+
+        s = ss + 1;
+    } while ( ss );
 }
 
 void vpmu_lvtpc_update(uint32_t val)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
-    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
+    vpmu->hw_lapic_lvtpc = vpmu_interrupt_type | (val & APIC_LVT_MASKED);
 
     /* Postpone APIC updates for PV guests if PMU interrupt is pending */
     if ( !is_pv_domain(current->domain) ||
@@ -84,6 +111,27 @@ void vpmu_lvtpc_update(uint32_t val)
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
 }
 
+static void vpmu_send_nmi(struct vcpu *v)
+{
+    struct vlapic *vlapic;
+    u32 vlapic_lvtpc;
+    unsigned char int_vec;
+
+    ASSERT( is_hvm_vcpu(v) );
+
+    vlapic = vcpu_vlapic(v);
+    if ( !is_vlapic_lvtpc_enabled(vlapic) )
+        return;
+
+    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
+    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
+
+    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
+        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
+    else
+        v->nmi_pending = 1;
+}
+
 int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
@@ -125,6 +173,7 @@ int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw)
     return 0;
 }
 
+/* This routine may be called in NMI context */
 int vpmu_do_interrupt(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -209,9 +258,13 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
             memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
                    gregs, sizeof(struct cpu_user_regs));
 
-            hvm_get_segment_register(current, x86_seg_cs, &cs);
-            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            gregs->cs = cs.attr.fields.dpl;
+            /* This is unsafe in NMI context; we'll do it in the softirq handler */
+            if ( !(vpmu_interrupt_type & APIC_DM_NMI ) )
+            {
+                hvm_get_segment_register(current, x86_seg_cs, &cs);
+                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                gregs->cs = cs.attr.fields.dpl;
+            }
         }
 
         v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
@@ -222,30 +275,30 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
 
-        send_guest_vcpu_virq(v, VIRQ_XENPMU);
+        if ( vpmu_interrupt_type & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            send_guest_vcpu_virq(v, VIRQ_XENPMU);
 
         return 1;
     }
 
     if ( vpmu->arch_vpmu_ops )
     {
-        struct vlapic *vlapic = vcpu_vlapic(v);
-        u32 vlapic_lvtpc;
-        unsigned char int_vec;
-
         if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
             return 0;
 
-        if ( !is_vlapic_lvtpc_enabled(vlapic) )
-            return 1;
-
-        vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
-        int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
-
-        if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
-            vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
+        if ( vpmu_interrupt_type & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
         else
-            v->nmi_pending = 1;
+            vpmu_send_nmi(v);
+
         return 1;
     }
 
@@ -276,6 +329,8 @@ static void vpmu_save_force(void *arg)
     vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
 
     per_cpu(last_vcpu, smp_processor_id()) = NULL;
+
+    pmu_softnmi();
 }
 
 void vpmu_save(struct vcpu *v)
@@ -293,7 +348,10 @@ void vpmu_save(struct vcpu *v)
         if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
             vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
-    apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
+    apic_write(APIC_LVTPC, vpmu_interrupt_type | APIC_LVT_MASKED);
+
+    /* Make sure there are no outstanding PMU NMIs */
+    pmu_softnmi();
 }
 
 void vpmu_load(struct vcpu *v)
@@ -338,6 +396,8 @@ void vpmu_load(struct vcpu *v)
         vpmu_save_force(prev);
         vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
+        pmu_softnmi();
+
         vpmu = vcpu_vpmu(v);
     }
 
@@ -442,11 +502,53 @@ static void vpmu_unload_all(void)
     }
 }
 
+/* Process the softirq set by PMU NMI handler */
+static void pmu_softnmi(void)
+{
+    struct cpu_user_regs *regs;
+    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
+
+    if ( sampled == NULL )
+        return;
+    per_cpu(sampled_vcpu, smp_processor_id()) = NULL;
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (sampled->domain->domain_id >= DOMID_FIRST_RESERVED) )
+        v = hardware_domain->vcpu[smp_processor_id() %
+                                  hardware_domain->max_vcpus];
+    else
+    {
+        if ( is_hvm_domain(sampled->domain) )
+        {
+            vpmu_send_nmi(sampled);
+            return;
+        }
+        v = sampled;
+    }
+
+    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+    if ( !is_pv_domain(sampled->domain) )
+    {
+        struct segment_register cs;
+
+        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
+        regs->cs = cs.attr.fields.dpl;
+    }
+
+    send_guest_vcpu_virq(v, VIRQ_XENPMU);
+}
+
+int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
+{
+    return vpmu_do_interrupt(regs);
+}
+
 static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
 {
     struct vcpu *v;
     struct page_info *page;
     uint64_t gfn = params->d.val;
+    static bool_t __read_mostly pvpmu_initted = 0;
 
     if ( !is_pv_domain(d) )
         return -EINVAL;
@@ -472,6 +574,21 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
         return -EINVAL;
     }
 
+    if ( !pvpmu_initted )
+    {
+        if (reserve_lapic_nmi() == 0)
+            set_nmi_callback(pmu_nmi_interrupt);
+        else
+        {
+            printk("Failed to reserve PMU NMI\n");
+            put_page(page);
+            return -EBUSY;
+        }
+        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
+
+        pvpmu_initted = 1;
+    }
+
     vpmu_initialise(v);
 
     return 0;
-- 
1.8.1.4


* [PATCH v6 18/19] x86/VPMU: Support for PVH guests
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (16 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 17/19] x86/VPMU: NMI-based VPMU support Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-13 15:53 ` [PATCH v6 19/19] x86/VPMU: Move VPMU files up from hvm/ directory Boris Ostrovsky
  2014-05-16  7:40 ` [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Jan Beulich
  19 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Add support for PVH guests. Most operations are performed as in an HVM guest.
However, interrupt management is done in a PV-like manner.
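
The bulk of the diff replaces is_pv_domain()/is_hvm_domain() checks with
has_hvm_container_domain(), which is true for both HVM and PVH. A sketch of how
the domain-type predicates partition the guest types under this series' domain
model (illustrative only, not code from the patch):

    /*
     * Predicate                      PV    PVH   HVM
     * is_pv_domain(d)                1     0     0
     * is_pvh_domain(d)               0     1     0
     * is_hvm_domain(d)               0     0     1
     * has_hvm_container_domain(d)    0     1     1
     *
     * Context save/load and MSR handling follow has_hvm_container_domain()
     * (PVH shares the HVM paths), while interrupt delivery for PVH follows
     * the PV path: VIRQ_XENPMU and the shared xenpmu_data page.
     */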

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/hvm/hvm.c            |  3 ++-
 xen/arch/x86/hvm/svm/vpmu.c       | 13 +++++++------
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 26 +++++++++++++-------------
 xen/arch/x86/hvm/vpmu.c           | 32 +++++++++++++++++++++-----------
 4 files changed, 43 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index ac05160..7773274 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3607,7 +3607,8 @@ static hvm_hypercall_t *const pvh_hypercall64_table[NR_hypercalls] = {
     [ __HYPERVISOR_physdev_op ]      = (hvm_hypercall_t *)hvm_physdev_op,
     HYPERCALL(hvm_op),
     HYPERCALL(sysctl),
-    HYPERCALL(domctl)
+    HYPERCALL(domctl),
+    HYPERCALL(xenpmu_op)
 };
 
 int hvm_do_hypercall(struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 8711e86..dc3a1c9 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -165,6 +165,7 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
     msr_bitmap_off(vpmu);
 }
 
+/* Must be NMI-safe */
 static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
 {
     return 1;
@@ -245,7 +246,7 @@ static int amd_vpmu_save(struct vcpu *v)
 
     context_save(v);
 
-    if ( is_hvm_domain(v->domain) &&
+    if ( has_hvm_container_domain(v->domain) &&
         !vpmu_is_set(vpmu, VPMU_RUNNING) && is_msr_bitmap_on(vpmu) )
         amd_vpmu_unset_msr_bitmap(v);
 
@@ -288,7 +289,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     /* For all counters, enable guest only mode for HVM guest */
-    if ( is_hvm_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+    if ( has_hvm_container_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         !(is_guest_mode(msr_content)) )
     {
         set_guest_mode(msr_content);
@@ -302,7 +303,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             return 1;
         vpmu_set(vpmu, VPMU_RUNNING);
 
-        if ( is_hvm_domain(v->domain) && is_msr_bitmap_on(vpmu) )
+        if ( has_hvm_container_domain(v->domain) && is_msr_bitmap_on(vpmu) )
             amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -311,7 +312,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
     {
         vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( is_hvm_domain(v->domain) && is_msr_bitmap_on(vpmu) )
+        if ( has_hvm_container_domain(v->domain) && is_msr_bitmap_on(vpmu) )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
@@ -382,7 +383,7 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    if ( !is_pv_domain(v->domain) )
+    if ( has_hvm_container_domain(v->domain) )
     {
         ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
                              sizeof(uint64_t) * AMD_MAX_COUNTERS + 
@@ -414,7 +415,7 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( is_hvm_domain(v->domain) )
+    if ( has_hvm_container_domain(v->domain) )
     {
         if ( is_msr_bitmap_on(vpmu) )
             amd_vpmu_unset_msr_bitmap(v);
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index c06b305..7dee766 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -299,7 +299,7 @@ static inline void __core2_vpmu_save(struct vcpu *v)
     for ( i = 0; i < arch_pmc_cnt; i++ )
         rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
 
-    if ( is_pv_domain(v->domain) )
+    if ( !has_hvm_container_domain(v->domain) )
         rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
 }
 
@@ -308,7 +308,7 @@ static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( is_pv_domain(v->domain) )
+    if ( !has_hvm_container_domain(v->domain) )
         wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
     if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
@@ -317,8 +317,8 @@ static int core2_vpmu_save(struct vcpu *v)
     __core2_vpmu_save(v);
 
     /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_domain(v->domain) &&
-         cpu_has_vmx_msr_bitmap )
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) &&
+         has_hvm_container_domain(v->domain) && cpu_has_vmx_msr_bitmap )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
 
     return 1;
@@ -349,7 +349,7 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
     wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
 
-    if ( is_pv_domain(v->domain) )
+    if ( !has_hvm_container_domain(v->domain) )
     {
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
         core2_vpmu_cxt->global_ovf_ctrl = 0;
@@ -445,7 +445,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     {
         __core2_vpmu_load(current);
         vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(current->domain) )
+        if ( cpu_has_vmx_msr_bitmap && has_hvm_container_domain(current->domain) )
             core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
     }
     return 1;
@@ -453,7 +453,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
 
 static void inject_trap(struct vcpu *v, unsigned int trapno)
 {
-    if ( !is_pv_domain(v->domain) )
+    if ( has_hvm_container_domain(v->domain) )
         hvm_inject_hw_exception(trapno, 0);
     else
         send_guest_trap(v->domain, v->vcpu_id, trapno);
@@ -530,7 +530,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         core2_vpmu_cxt->global_ctrl = msr_content;
         break;
     case MSR_CORE_PERF_FIXED_CTR_CTRL:
-        if ( !is_pv_domain(v->domain) )
+        if ( has_hvm_container_domain(v->domain) )
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
                                &core2_vpmu_cxt->global_ctrl);
         else
@@ -556,7 +556,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
                 vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
-            if ( !is_pv_domain(v->domain) )
+            if ( has_hvm_container_domain(v->domain) )
                 vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
                                    &core2_vpmu_cxt->global_ctrl);
             else
@@ -610,7 +610,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     }
     else
     {
-       if ( !is_pv_domain(v->domain) )
+       if ( has_hvm_container_domain(v->domain) )
            vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
        else
            wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
@@ -638,7 +638,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
-            if ( !is_pv_domain(v->domain) )
+            if ( has_hvm_container_domain(v->domain) )
                 vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
             else
                 rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
@@ -805,7 +805,7 @@ func_out:
     check_pmc_quirk();
 
     /* PV domains can allocate resources immediately */
-    if ( is_pv_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
+    if ( !has_hvm_container_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
             return 1;
 
     return 0;
@@ -820,7 +820,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
 
     xfree(vpmu->context);
 
-    if ( is_hvm_domain(v->domain) )
+    if ( has_hvm_container_domain(v->domain) )
     {
         xfree(vpmu->priv_context);
         if ( cpu_has_vmx_msr_bitmap )
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index f73ebbb..fdafe89 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -105,7 +105,7 @@ void vpmu_lvtpc_update(uint32_t val)
     vpmu->hw_lapic_lvtpc = vpmu_interrupt_type | (val & APIC_LVT_MASKED);
 
     /* Postpone APIC updates for PV guests if PMU interrupt is pending */
-    if ( !is_pv_domain(current->domain) ||
+    if ( !has_hvm_container_domain(current->domain) ||
          !(current->arch.vpmu.xenpmu_data &&
            current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
@@ -159,7 +159,7 @@ int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw)
          * and since do_wr/rdmsr may load VPMU context we should save
          * (and unload) it again.
          */
-        if ( !is_hvm_domain(current->domain) &&
+        if ( !has_hvm_container_domain(current->domain) &&
             (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
         {
             vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
@@ -194,13 +194,17 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
 
     if ( !is_hvm_domain(v->domain)  || (vpmu_mode & XENPMU_MODE_PRIV) )
     {
-        /* PV guest or dom0 is doing system profiling */
+        /* PV(H) guest or dom0 is doing system profiling */
         struct cpu_user_regs *gregs;
         int err;
 
         if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
             return 1;
 
+        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) &&
+             !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+            return 0;
+
         /* PV guest will be reading PMU MSRs from xenpmu_data */
         vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
         err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
@@ -237,7 +241,7 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
             else if ( !is_hardware_domain(current->domain) &&
                       !is_idle_vcpu(current) )
             {
-                /* PV guest */
+                /* PV(H) guest */
                 gregs = guest_cpu_user_regs();
                 memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
                        gregs, sizeof(struct cpu_user_regs));
@@ -247,7 +251,15 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
                        regs, sizeof(struct cpu_user_regs));
 
             gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+            if ( !is_pvh_domain(current->domain) )
+                gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+            else if ( !(vpmu_interrupt_type & APIC_DM_NMI) )
+            {
+                struct segment_register seg_cs;
+
+                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
+                gregs->cs = seg_cs.attr.fields.dpl;
+            }
         }
         else
         {
@@ -271,7 +283,8 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
         v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
         v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
 
-        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
 
@@ -405,7 +418,7 @@ void vpmu_load(struct vcpu *v)
 
     /* Only when PMU is counting, we load PMU context immediately. */
     if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
-        (is_pv_domain(v->domain) && vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
+        (!has_hvm_container_domain(v->domain) && vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
         return;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
@@ -527,7 +540,7 @@ static void pmu_softnmi(void)
     }
 
     regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-    if ( !is_pv_domain(sampled->domain) )
+    if ( has_hvm_container_domain(sampled->domain) )
     {
         struct segment_register cs;
 
@@ -550,9 +563,6 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
     uint64_t gfn = params->d.val;
     static bool_t __read_mostly pvpmu_initted = 0;
 
-    if ( !is_pv_domain(d) )
-        return -EINVAL;
-
     if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
         return -EINVAL;
 
-- 
1.8.1.4


* [PATCH v6 19/19] x86/VPMU: Move VPMU files up from hvm/ directory
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (17 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 18/19] x86/VPMU: Support for PVH guests Boris Ostrovsky
@ 2014-05-13 15:53 ` Boris Ostrovsky
  2014-05-16  7:40 ` [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Jan Beulich
  19 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-13 15:53 UTC (permalink / raw)
  To: JBeulich, kevin.tian, dietmar.hahn, suravee.suthikulpanit
  Cc: keir, andrew.cooper3, donald.d.dugger, xen-devel, jun.nakajima,
	boris.ostrovsky

Since the PMU code is no longer HVM-specific we can move the VPMU-related files
up from the arch/x86/hvm/ directory.

Specifically:
    arch/x86/hvm/vpmu.c -> arch/x86/vpmu.c
    arch/x86/hvm/svm/vpmu.c -> arch/x86/vpmu_amd.c
    arch/x86/hvm/vmx/vpmu_core2.c -> arch/x86/vpmu_intel.c
    include/asm-x86/hvm/vpmu.h -> include/asm-x86/vpmu.h

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
---
 xen/arch/x86/Makefile                 |   1 +
 xen/arch/x86/hvm/Makefile             |   1 -
 xen/arch/x86/hvm/svm/Makefile         |   1 -
 xen/arch/x86/hvm/svm/vpmu.c           | 510 ------------------
 xen/arch/x86/hvm/vlapic.c             |   2 +-
 xen/arch/x86/hvm/vmx/Makefile         |   1 -
 xen/arch/x86/hvm/vmx/vpmu_core2.c     | 948 ----------------------------------
 xen/arch/x86/hvm/vpmu.c               | 726 --------------------------
 xen/arch/x86/oprofile/op_model_ppro.c |   2 +-
 xen/arch/x86/traps.c                  |   2 +-
 xen/arch/x86/vpmu.c                   | 726 ++++++++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c               | 510 ++++++++++++++++++
 xen/arch/x86/vpmu_intel.c             | 948 ++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vmcs.h    |   2 +-
 xen/include/asm-x86/hvm/vpmu.h        | 102 ----
 xen/include/asm-x86/vpmu.h            | 102 ++++
 16 files changed, 2291 insertions(+), 2293 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d502bdf..cf85dda 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -58,6 +58,7 @@ obj-y += crash.o
 obj-y += tboot.o
 obj-y += hpet.o
 obj-y += xstate.o
+obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
 
 obj-$(crash_debug) += gdbstub.o
 
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index eea5555..742b83b 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -22,4 +22,3 @@ obj-y += vlapic.o
 obj-y += vmsi.o
 obj-y += vpic.o
 obj-y += vpt.o
-obj-y += vpmu.o
\ No newline at end of file
diff --git a/xen/arch/x86/hvm/svm/Makefile b/xen/arch/x86/hvm/svm/Makefile
index a10a55e..760d295 100644
--- a/xen/arch/x86/hvm/svm/Makefile
+++ b/xen/arch/x86/hvm/svm/Makefile
@@ -6,4 +6,3 @@ obj-y += nestedsvm.o
 obj-y += svm.o
 obj-y += svmdebug.o
 obj-y += vmcb.o
-obj-y += vpmu.o
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
deleted file mode 100644
index dc3a1c9..0000000
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ /dev/null
@@ -1,510 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2010, Advanced Micro Devices, Inc.
- * Parts of this code are Copyright (c) 2007, Intel Corporation
- *
- * Author: Wei Wang <wei.wang2@amd.com>
- * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- */
-
-#include <xen/config.h>
-#include <xen/xenoprof.h>
-#include <xen/hvm/save.h>
-#include <xen/sched.h>
-#include <xen/irq.h>
-#include <asm/apic.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vpmu.h>
-#include <public/pmu.h>
-
-#define MSR_F10H_EVNTSEL_GO_SHIFT   40
-#define MSR_F10H_EVNTSEL_EN_SHIFT   22
-#define MSR_F10H_COUNTER_LENGTH     48
-
-#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
-#define set_guest_mode(msr) (msr |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
-
-static unsigned int __read_mostly num_counters;
-static const u32 __read_mostly *counters;
-static const u32 __read_mostly *ctrls;
-static bool_t __read_mostly k7_counters_mirrored;
-
-#define F10H_NUM_COUNTERS   4
-#define F15H_NUM_COUNTERS   6
-#define AMD_MAX_COUNTERS    6
-
-/* PMU Counter MSRs. */
-static const u32 AMD_F10H_COUNTERS[] = {
-    MSR_K7_PERFCTR0,
-    MSR_K7_PERFCTR1,
-    MSR_K7_PERFCTR2,
-    MSR_K7_PERFCTR3
-};
-
-/* PMU Control MSRs. */
-static const u32 AMD_F10H_CTRLS[] = {
-    MSR_K7_EVNTSEL0,
-    MSR_K7_EVNTSEL1,
-    MSR_K7_EVNTSEL2,
-    MSR_K7_EVNTSEL3
-};
-
-static const u32 AMD_F15H_COUNTERS[] = {
-    MSR_AMD_FAM15H_PERFCTR0,
-    MSR_AMD_FAM15H_PERFCTR1,
-    MSR_AMD_FAM15H_PERFCTR2,
-    MSR_AMD_FAM15H_PERFCTR3,
-    MSR_AMD_FAM15H_PERFCTR4,
-    MSR_AMD_FAM15H_PERFCTR5
-};
-
-static const u32 AMD_F15H_CTRLS[] = {
-    MSR_AMD_FAM15H_EVNTSEL0,
-    MSR_AMD_FAM15H_EVNTSEL1,
-    MSR_AMD_FAM15H_EVNTSEL2,
-    MSR_AMD_FAM15H_EVNTSEL3,
-    MSR_AMD_FAM15H_EVNTSEL4,
-    MSR_AMD_FAM15H_EVNTSEL5
-};
-
-/* Use private context as a flag for MSR bitmap */
-#define msr_bitmap_on(vpmu)    {vpmu->priv_context = (void *)-1;}
-#define msr_bitmap_off(vpmu)   {vpmu->priv_context = NULL;}
-#define is_msr_bitmap_on(vpmu) (vpmu->priv_context != NULL)
-
-static inline int get_pmu_reg_type(u32 addr)
-{
-    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
-        return MSR_TYPE_CTRL;
-
-    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
-        return MSR_TYPE_COUNTER;
-
-    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
-         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
-    {
-        if (addr & 1)
-            return MSR_TYPE_COUNTER;
-        else
-            return MSR_TYPE_CTRL;
-    }
-
-    /* unsupported registers */
-    return -1;
-}
-
-static inline u32 get_fam15h_addr(u32 addr)
-{
-    switch ( addr )
-    {
-    case MSR_K7_PERFCTR0:
-        return MSR_AMD_FAM15H_PERFCTR0;
-    case MSR_K7_PERFCTR1:
-        return MSR_AMD_FAM15H_PERFCTR1;
-    case MSR_K7_PERFCTR2:
-        return MSR_AMD_FAM15H_PERFCTR2;
-    case MSR_K7_PERFCTR3:
-        return MSR_AMD_FAM15H_PERFCTR3;
-    case MSR_K7_EVNTSEL0:
-        return MSR_AMD_FAM15H_EVNTSEL0;
-    case MSR_K7_EVNTSEL1:
-        return MSR_AMD_FAM15H_EVNTSEL1;
-    case MSR_K7_EVNTSEL2:
-        return MSR_AMD_FAM15H_EVNTSEL2;
-    case MSR_K7_EVNTSEL3:
-        return MSR_AMD_FAM15H_EVNTSEL3;
-    default:
-        break;
-    }
-
-    return addr;
-}
-
-static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
-    }
-
-    msr_bitmap_on(vpmu);
-}
-
-static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
-    }
-
-    msr_bitmap_off(vpmu);
-}
-
-/* Must be NMI-safe */
-static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    return 1;
-}
-
-static inline void context_load(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        wrmsrl(counters[i], counter_regs[i]);
-        wrmsrl(ctrls[i], ctrl_regs[i]);
-    }
-}
-
-/* Must be NMI-safe */
-static void amd_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    vpmu_reset(vpmu, VPMU_FROZEN);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        unsigned int i;
-
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctrl_regs[i]);
-
-        return;
-    }
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-
-    context_load(v);
-}
-
-static inline void context_save(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-
-    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
-    for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], counter_regs[i]);
-}
-
-static int amd_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    unsigned int i;
-
-    /*
-     * Stop the counters. If we came here via vpmu_save_force (i.e.
-     * when VPMU_CONTEXT_SAVE is set) counters are already stopped.
-     */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-    {
-        vpmu_set(vpmu, VPMU_FROZEN);
-
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], 0);
-
-        return 0;
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return 0;
-
-    context_save(v);
-
-    if ( has_hvm_container_domain(v->domain) &&
-        !vpmu_is_set(vpmu, VPMU_RUNNING) && is_msr_bitmap_on(vpmu) )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    return 1;
-}
-
-static void context_update(unsigned int msr, u64 msr_content)
-{
-    unsigned int i;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    if ( k7_counters_mirrored &&
-        ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
-    {
-        msr = get_fam15h_addr(msr);
-    }
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-       if ( msr == ctrls[i] )
-       {
-           ctrl_regs[i] = msr_content;
-           return;
-       }
-        else if (msr == counters[i] )
-        {
-            counter_regs[i] = msr_content;
-            return;
-        }
-    }
-}
-
-static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    /* For all counters, enable guest only mode for HVM guest */
-    if ( has_hvm_container_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        !(is_guest_mode(msr_content)) )
-    {
-        set_guest_mode(msr_content);
-    }
-
-    /* check if the first counter is enabled */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            return 1;
-        vpmu_set(vpmu, VPMU_RUNNING);
-
-        if ( has_hvm_container_domain(v->domain) && is_msr_bitmap_on(vpmu) )
-            amd_vpmu_set_msr_bitmap(v);
-    }
-
-    /* stop saving & restore if guest stops first counter */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( has_hvm_container_domain(v->domain) && is_msr_bitmap_on(vpmu) )
-            amd_vpmu_unset_msr_bitmap(v);
-        release_pmu_ownship(PMU_OWNER_HVM);
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    /* Update vpmu context immediately */
-    context_update(msr, msr_content);
-
-    /* Write to hw counters */
-    wrmsrl(msr, msr_content);
-    return 1;
-}
-
-static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    rdmsrl(msr, *msr_content);
-
-    return 1;
-}
-
-static int amd_vpmu_initialise(struct vcpu *v)
-{
-    struct xen_pmu_amd_ctxt *ctxt;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( counters == NULL )
-    {
-         switch ( family )
-	 {
-	 case 0x15:
-	     num_counters = F15H_NUM_COUNTERS;
-	     counters = AMD_F15H_COUNTERS;
-	     ctrls = AMD_F15H_CTRLS;
-	     k7_counters_mirrored = 1;
-	     break;
-	 case 0x10:
-	 case 0x12:
-	 case 0x14:
-	 case 0x16:
-	 default:
-	     num_counters = F10H_NUM_COUNTERS;
-	     counters = AMD_F10H_COUNTERS;
-	     ctrls = AMD_F10H_CTRLS;
-	     k7_counters_mirrored = 0;
-	     break;
-	 }
-    }
-
-    if ( has_hvm_container_domain(v->domain) )
-    {
-        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
-        if ( !ctxt )
-        {
-            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-                     " PMU feature is unavailable on domain %d vcpu %d.\n",
-                     v->vcpu_id, v->domain->domain_id);
-            return -ENOMEM;
-        }
-    }
-    else
-        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
-
-    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
-    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
-
-    vpmu->context = ctxt;
-    vpmu->priv_context = NULL;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-    return 0;
-}
-
-static void amd_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( has_hvm_container_domain(v->domain) )
-    {
-        if ( is_msr_bitmap_on(vpmu) )
-            amd_vpmu_unset_msr_bitmap(v);
-
-        xfree(vpmu->context);
-        release_pmu_ownship(PMU_OWNER_HVM);
-    }
-
-    vpmu->context = NULL;
-    vpmu_clear(vpmu);
-}
-
-/* VPMU part of the 'q' keyhandler */
-static void amd_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-    unsigned int i;
-
-    printk("    VPMU state: 0x%x ", vpmu->flags);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-    {
-         printk("\n");
-         return;
-    }
-
-    printk("(");
-    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
-        printk("PASSIVE_DOMAIN_ALLOCATED, ");
-    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
-        printk("FROZEN, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-        printk("SAVE, ");
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
-        printk("RUNNING, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        printk("LOADED, ");
-    printk("ALLOCATED)\n");
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        uint64_t ctrl, cntr;
-
-        rdmsrl(ctrls[i], ctrl);
-        rdmsrl(counters[i], cntr);
-        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctrl_regs[i], ctrl,
-               counters[i], counter_regs[i], cntr);
-    }
-}
-
-struct arch_vpmu_ops amd_vpmu_ops = {
-    .do_wrmsr = amd_vpmu_do_wrmsr,
-    .do_rdmsr = amd_vpmu_do_rdmsr,
-    .do_interrupt = amd_vpmu_do_interrupt,
-    .arch_vpmu_destroy = amd_vpmu_destroy,
-    .arch_vpmu_save = amd_vpmu_save,
-    .arch_vpmu_load = amd_vpmu_load,
-    .arch_vpmu_dump = amd_vpmu_dump
-};
-
-int svm_vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    int ret = 0;
-
-    /* vpmu enabled? */
-    if ( vpmu_mode == XENPMU_MODE_OFF )
-        return 0;
-
-    switch ( family )
-    {
-    case 0x10:
-    case 0x12:
-    case 0x14:
-    case 0x15:
-    case 0x16:
-        ret = amd_vpmu_initialise(v);
-        if ( !ret )
-            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
-        return ret;
-    }
-
-    printk("VPMU: Initialization failed. "
-           "AMD processor family %d has not "
-           "been supported\n", family);
-    return -EINVAL;
-}
-
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index f1b543c..b3fe4d3 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,7 +38,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/nestedhvm.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
diff --git a/xen/arch/x86/hvm/vmx/Makefile b/xen/arch/x86/hvm/vmx/Makefile
index 373b3d9..04a29ce 100644
--- a/xen/arch/x86/hvm/vmx/Makefile
+++ b/xen/arch/x86/hvm/vmx/Makefile
@@ -3,5 +3,4 @@ obj-y += intr.o
 obj-y += realmode.o
 obj-y += vmcs.o
 obj-y += vmx.o
-obj-y += vpmu_core2.o
 obj-y += vvmx.o
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
deleted file mode 100644
index 7dee766..0000000
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ /dev/null
@@ -1,948 +0,0 @@
-/*
- * vpmu_core2.c: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/irq.h>
-#include <asm/system.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/apic.h>
-#include <asm/traps.h>
-#include <asm/msr.h>
-#include <asm/msr-index.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <public/sched.h>
-#include <public/hvm/save.h>
-#include <public/pmu.h>
-#include <asm/hvm/vpmu.h>
-
-/*
- * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
- * instruction.
- * cpuid 0xa - Architectural Performance Monitoring Leaf
- * Register eax
- */
-#define PMU_VERSION_SHIFT        0  /* Version ID */
-#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
-#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
-
-#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
-#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
-#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
-
-#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
-#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
-#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
-/* Register edx */
-#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
-#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
-#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
-
-#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
-#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
-#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
-
-/* Alias registers (0x4c1) for full-width writes to PMCs */
-#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
-static bool_t __read_mostly full_width_write;
-
-/* Intel-specific VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-/*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
- */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* Number of general-purpose and fixed performance counters */
-static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
-
-/*
- * QUIRK to workaround an issue on various family 6 cpus.
- * The issue leads to endless PMC interrupt loops on the processor.
- * If the interrupt handler is running and a pmc reaches the value 0, this
- * value remains forever and it triggers immediately a new interrupt after
- * finishing the handler.
- * A workaround is to read all flagged counters and if the value is 0 write
- * 1 (or another value != 0) into it.
- * There exist no errata and the real cause of this behaviour is unknown.
- */
-bool_t __read_mostly is_pmc_quirk;
-
-static void check_pmc_quirk(void)
-{
-    if ( current_cpu_data.x86 == 6 )
-        is_pmc_quirk = 1;
-    else
-        is_pmc_quirk = 0;    
-}
-
-static void handle_pmc_quirk(u64 msr_content)
-{
-    int i;
-    u64 val;
-
-    if ( !is_pmc_quirk )
-        return;
-
-    val = msr_content;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-    val = msr_content >> 32;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-}
-
-/*
- * Read the number of general-purpose counters via CPUID.0xa:EAX[15:8].
- */
-static int core2_get_arch_pmc_count(void)
-{
-    u32 eax;
-
-    eax = cpuid_eax(0xa);
-    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
-}
-
-/*
- * Read the number of fixed counters via CPUID.0xa:EDX[4:0].
- */
-static int core2_get_fixed_pmc_count(void)
-{
-    u32 edx;
-
-    edx = cpuid_edx(0xa);
-    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
-}
-
-/* EDX bits 5-12: bit width of fixed-function performance counters */
-static int core2_get_bitwidth_fix_count(void)
-{
-    u32 edx;
-
-    edx = cpuid_edx(0xa);
-    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
-}
-
-static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
-{
-    int i;
-    u32 msr_index_pmc;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
-        {
-            *type = MSR_TYPE_COUNTER;
-            *index = i;
-            return 1;
-        }
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL) ||
-         (msr_index == MSR_IA32_DS_AREA) ||
-         (msr_index == MSR_IA32_PEBS_ENABLE) )
-    {
-        *type = MSR_TYPE_CTRL;
-        return 1;
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
-    {
-        *type = MSR_TYPE_GLOBAL;
-        return 1;
-    }
-
-    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
-    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_COUNTER;
-        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
-        return 1;
-    }
-
-    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_CTRL;
-        *index = msr_index - MSR_P6_EVNTSEL0;
-        return 1;
-    }
-
-    return 0;
-}
-
-#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
-static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    /* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                      msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    /* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
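For readers not steeped in VMX details: the 4KB MSR bitmap consists of four
1KB regions (read-low, read-high, write-low, write-high).
msraddr_to_bitpos() maps an MSR address to a bit offset within the read
half -- high MSRs such as 0xc0000080 land 0x2000 bits in, i.e. in the
read-high region -- while the msr_bitmap + 0x800/BYTES_PER_LONG arithmetic
above advances 0x800 bytes to the corresponding write region. A standalone
check of the macro:

    #include <stdint.h>
    #include <stdio.h>

    #define msraddr_to_bitpos(x) (((x) & 0xffff) + ((x) >> 31) * 0x2000)

    int main(void)
    {
        /* Low MSR: the bit falls in the first (read-low) 1KB region. */
        printf("0x38f      -> bit %#x\n", (unsigned)msraddr_to_bitpos(0x38f));
        /* High MSR: 0x2000 bits in, i.e. in the read-high region. */
        printf("0xc0000080 -> bit %#x\n",
               (unsigned)msraddr_to_bitpos(0xc0000080u));
        return 0;
    }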
-
-static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                    msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
-
-static inline void __core2_vpmu_save(struct vcpu *v)
-{
-    int i;
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
-
-    if ( !has_hvm_container_domain(v->domain) )
-        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
-}
-
-/* Must be NMI-safe */
-static int core2_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !has_hvm_container_domain(v->domain) )
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
-        return 0;
-
-    __core2_vpmu_save(v);
-
-    /* Re-arm MSR interception so a later access triggers a lazy reload. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) &&
-         has_hvm_container_domain(v->domain) && cpu_has_vmx_msr_bitmap )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-
-    return 1;
-}
-
-static inline void __core2_vpmu_load(struct vcpu *v)
-{
-    unsigned int i, pmc_start;
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-
-    if ( full_width_write )
-        pmc_start = MSR_IA32_A_PERFCTR0;
-    else
-        pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
-    }
-
-    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
-    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
-    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
-
-    if ( !has_hvm_container_domain(v->domain) )
-    {
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
-        core2_vpmu_cxt->global_ovf_ctrl = 0;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
-    }
-}
-
-static void core2_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-
-    __core2_vpmu_load(v);
-}
-
-static int core2_vpmu_alloc_resource(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-    uint64_t *p = NULL;
-
-    p = xzalloc_bytes(sizeof(uint64_t));
-    if ( !p )
-        goto out_err;
-
-    if ( !is_pv_domain(v->domain) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            goto out_err;
-
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-        if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR) )
-            goto out_err_hvm;
-        if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR) )
-            goto out_err_hvm;
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
-                                       sizeof(uint64_t) * fixed_pmc_cnt +
-                                       sizeof(struct xen_pmu_cntr_pair) *
-                                       arch_pmc_cnt);
-        if ( !core2_vpmu_cxt )
-            goto out_err_hvm;
-    }
-    else
-    {
-        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
-    }
-
-    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
-    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
-                                    sizeof(uint64_t) * fixed_pmc_cnt;
-
-    vpmu->context = (void *)core2_vpmu_cxt;
-    vpmu->priv_context = (void *)p;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    return 1;
-
-out_err_hvm:
-    vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR);
-    vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR);
-    release_pmu_ownship(PMU_OWNER_HVM);
-
-    xfree(core2_vpmu_cxt);
-    xfree(p);
-
-out_err:
-    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
-           v->domain->domain_id, v->vcpu_id);
-
-    return 0;
-}
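Note that fixed_counters and arch_counters above store byte offsets from
the start of the context rather than raw pointers, which keeps the
structure position-independent when it is shared with a PV(H) guest. A
self-contained sketch of how an accessor in the spirit of
vpmu_reg_pointer() can resolve such an offset (the struct and macro here
are illustrative stand-ins, not the real definitions):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative stand-in for struct xen_pmu_intel_ctxt. */
    struct ctxt {
        uint64_t fixed_counters;   /* byte offset of fixed counter array */
        uint64_t arch_counters;    /* byte offset of arch counter pairs */
    };

    /* Turn a stored byte offset into a pointer inside the same buffer. */
    #define reg_pointer(c, field) ((void *)((char *)(c) + (c)->field))

    int main(void)
    {
        unsigned int fixed_cnt = 3;
        struct ctxt *c = calloc(1, sizeof(*c) +
                                fixed_cnt * sizeof(uint64_t));
        uint64_t *fixed;

        c->fixed_counters = sizeof(*c);   /* array follows the header */
        c->arch_counters = c->fixed_counters +
                           fixed_cnt * sizeof(uint64_t);

        fixed = reg_pointer(c, fixed_counters);
        fixed[0] = 42;
        printf("fixed[0] = %llu\n", (unsigned long long)fixed[0]);
        free(c);
        return 0;
    }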
-
-static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( !is_core2_vpmu_msr(msr_index, type, index) )
-        return 0;
-
-    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-         !core2_vpmu_alloc_resource(current) )
-        return 0;
-
-    /* Do the lazy load stuff. */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        __core2_vpmu_load(current);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap && has_hvm_container_domain(current->domain) )
-            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
-    }
-    return 1;
-}
-
-static void inject_trap(struct vcpu *v, unsigned int trapno)
-{
-    if ( has_hvm_container_domain(v->domain) )
-        hvm_inject_hw_exception(trapno, 0);
-    else
-        send_guest_trap(v->domain, v->vcpu_id, trapno);
-}
-
-static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    int i, tmp;
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-    uint64_t *enabled_cntrs;
-
-    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        /* Special handling for BTS */
-        if ( msr == MSR_IA32_DEBUGCTLMSR )
-        {
-            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
-                                 IA32_DEBUGCTLMSR_BTINT;
-
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
-                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
-            if ( msr_content & supported )
-            {
-                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                    return 1;
-                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                inject_trap(v, TRAP_gp_fault);
-                return 0;
-            }
-        }
-        return 0;
-    }
-
-    core2_vpmu_cxt = vpmu->context;
-    enabled_cntrs = (uint64_t *)vpmu->priv_context;
-    switch ( msr )
-    {
-    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_status &= ~msr_content;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_STATUS:
-        gdprintk(XENLOG_INFO, "Cannot write read-only MSR "
-                 "MSR_CORE_PERF_GLOBAL_STATUS (0x38E)\n");
-        inject_trap(v, TRAP_gp_fault);
-        return 1;
-    case MSR_IA32_PEBS_ENABLE:
-        if ( msr_content & 1 )
-            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
-                     "which is not supported.\n");
-        core2_vpmu_cxt->pebs_enable = msr_content;
-        return 1;
-    case MSR_IA32_DS_AREA:
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            if ( !is_canonical_address(msr_content) )
-            {
-                gdprintk(XENLOG_WARNING,
-                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "\n",
-                         msr_content);
-                inject_trap(v, TRAP_gp_fault);
-                return 1;
-            }
-            core2_vpmu_cxt->ds_area = msr_content;
-            break;
-        }
-        gdprintk(XENLOG_WARNING, "Guest setting of DS area is ignored.\n");
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_CTRL:
-        core2_vpmu_cxt->global_ctrl = msr_content;
-        break;
-    case MSR_CORE_PERF_FIXED_CTR_CTRL:
-        if ( has_hvm_container_domain(v->domain) )
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
-                               &core2_vpmu_cxt->global_ctrl);
-        else
-            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
-        *enabled_cntrs &= ~(((1ULL << fixed_pmc_cnt) - 1) << 32);
-        if ( msr_content != 0 )
-        {
-            u64 val = msr_content;
-            for ( i = 0; i < fixed_pmc_cnt; i++ )
-            {
-                if ( val & 3 )
-                    *enabled_cntrs |= (1ULL << 32) << i;
-                val >>= FIXED_CTR_CTRL_BITS;
-            }
-        }
-
-        core2_vpmu_cxt->fixed_ctrl = msr_content;
-        break;
-    default:
-        tmp = msr - MSR_P6_EVNTSEL0;
-        if ( tmp >= 0 && tmp < arch_pmc_cnt )
-        {
-            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-            if ( has_hvm_container_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
-                                   &core2_vpmu_cxt->global_ctrl);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
-
-            if ( msr_content & (1ULL << 22) )
-                *enabled_cntrs |= 1ULL << tmp;
-            else
-                *enabled_cntrs &= ~(1ULL << tmp);
-
-            xen_pmu_cntr_pair[tmp].control = msr_content;
-        }
-    }
-
-    if ( (core2_vpmu_cxt->global_ctrl & *enabled_cntrs) ||
-         (core2_vpmu_cxt->ds_area != 0) )
-        vpmu_set(vpmu, VPMU_RUNNING);
-    else
-        vpmu_reset(vpmu, VPMU_RUNNING);
-
-    if ( type != MSR_TYPE_GLOBAL )
-    {
-        u64 mask;
-        int inject_gp = 0;
-        switch ( type )
-        {
-        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
-            mask = ~((1ull << 32) - 1);
-            if ( msr_content & mask )
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
-            if ( msr == MSR_IA32_DS_AREA )
-                break;
-            /* 4 bits per counter, fixed_pmc_cnt fixed counters implemented. */
-            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
-            if ( msr_content & mask )
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
-            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
-            if ( msr_content & mask )
-                inject_gp = 1;
-            break;
-        }
-
-        if ( inject_gp )
-            inject_trap(v, TRAP_gp_fault);
-        else
-            wrmsrl(msr, msr_content);
-    }
-    else
-    {
-       if ( has_hvm_container_domain(v->domain) )
-           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-       else
-           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-    }
-
-    return 1;
-}
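The private enabled_cntrs word deliberately mirrors the bit layout of
MSR_CORE_PERF_GLOBAL_CTRL -- general counters in bits 0..arch_pmc_cnt-1,
fixed counters from bit 32 -- which is what makes the single AND against
global_ctrl above a sufficient "is anything counting" test. A compact
restatement:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Bit layout shared with MSR_CORE_PERF_GLOBAL_CTRL. */
        uint64_t enabled_cntrs = 0;
        uint64_t global_ctrl = (1ULL << 32) | 1;  /* fixed 0 + general 0 */

        enabled_cntrs |= 1ULL << 0;    /* EVNTSEL0 enable bit was set */
        enabled_cntrs |= 1ULL << 32;   /* fixed counter 0 configured */

        printf("running: %s\n",
               (global_ctrl & enabled_cntrs) ? "yes" : "no");
        return 0;
    }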
-
-static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-
-    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        core2_vpmu_cxt = vpmu->context;
-        switch ( msr )
-        {
-        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-            *msr_content = 0;
-            break;
-        case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_status;
-            break;
-        case MSR_CORE_PERF_GLOBAL_CTRL:
-            if ( has_hvm_container_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
-            break;
-        default:
-            rdmsrl(msr, *msr_content);
-        }
-    }
-    else
-    {
-        /* Extension for BTS */
-        if ( msr == MSR_IA32_MISC_ENABLE )
-        {
-            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
-        }
-        else
-            return 0;
-    }
-
-    return 1;
-}
-
-static void core2_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    if ( input == 0x1 )
-    {
-        struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            /* Switch on the 'Debug Store' feature in CPUID.EAX[1]:EDX[21] */
-            *edx |= cpufeat_mask(X86_FEATURE_DS);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
-        }
-    }
-}
-
-/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
-static void core2_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i;
-    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
-    u64 val;
-    uint64_t *fixed_counters;
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            printk("    vPMU loaded\n");
-        else
-            printk("    vPMU allocated\n");
-        return;
-    }
-
-    printk("    vPMU running\n");
-    core2_vpmu_cxt = vpmu->context;
-    /* Only resolve the counter array pointers once the context is known. */
-    fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    xen_pmu_cntr_pair = vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
-
-    /*
-     * The configuration of the fixed counter is 4 bits each in the
-     * MSR_CORE_PERF_FIXED_CTR_CTRL.
-     */
-    val = core2_vpmu_cxt->fixed_ctrl;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, fixed_counters[i],
-               val & FIXED_CTR_CTRL_MASK);
-        val >>= FIXED_CTR_CTRL_BITS;
-    }
-}
-
-static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    u64 msr_content;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
-
-    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
-    if ( msr_content )
-    {
-        if ( is_pmc_quirk )
-            handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_status |= msr_content;
-        msr_content = 0xC000000700000000ULL | ((1ULL << arch_pmc_cnt) - 1);
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-    }
-    else
-    {
-        /* No PMC overflow but perhaps a Trace Message interrupt. */
-        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
-        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
-            return 0;
-    }
-
-    return 1;
-}
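The constant written to MSR_CORE_PERF_GLOBAL_OVF_CTRL clears every status
bit the handler may have observed: bits 63 and 62 (the CondChgd and OvfBuf
status bits), bits 32-34 (the three fixed counters), plus one low bit per
general-purpose counter ORed in at runtime. A quick arithmetic check of
how the mask is composed:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int arch_pmc_cnt = 4, fixed_pmc_cnt = 3;
        uint64_t ovf = (3ULL << 62)                          /* bits 63, 62 */
                     | (((1ULL << fixed_pmc_cnt) - 1) << 32) /* fixed ctrs */
                     | ((1ULL << arch_pmc_cnt) - 1);         /* general */

        /* With 3 fixed counters this reproduces 0xC000000700000000 | 0xf. */
        printf("ovf ctrl mask: %#llx\n", (unsigned long long)ovf);
        return 0;
    }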
-
-static int core2_vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    u64 msr_content;
-    struct cpuinfo_x86 *c = &current_cpu_data;
-
-    if ( !(vpmu_features & XENPMU_FEATURE_INTEL_BTS) )
-        goto func_out;
-    /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
-    if ( cpu_has(c, X86_FEATURE_DS) )
-    {
-        if ( !cpu_has(c, X86_FEATURE_DTES64) )
-        {
-            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
-                   " - Debug Store disabled for %pv\n",
-                   v);
-            goto func_out;
-        }
-        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
-        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
-        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
-        {
-            /* If BTS_UNAVAIL is set reset the DS feature. */
-            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
-            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
-                   " - Debug Store disabled for %pv\n",
-                   v);
-        }
-        else
-        {
-            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
-            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
-                printk(XENLOG_G_INFO
-                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
-            printk("******************************************************\n");
-            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
-            printk("** Using this processor feature in a virtualized    **\n");
-            printk("** environment is not 100%% safe.                    **\n");
-            printk("** Setting the DS buffer address with wrong values  **\n");
-            printk("** may lead to hypervisor hangs or crashes.         **\n");
-            printk("** It is NOT recommended for production use!        **\n");
-            printk("******************************************************\n");
-        }
-    }
-func_out:
-
-    arch_pmc_cnt = core2_get_arch_pmc_count();
-    fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    check_pmc_quirk();
-
-    /* PV domains can allocate resources immediately */
-    if ( !has_hvm_container_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
-        return 1;
-
-    return 0;
-}
-
-static void core2_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    xfree(vpmu->context);
-
-    if ( has_hvm_container_domain(v->domain) )
-    {
-        xfree(vpmu->priv_context);
-        if ( cpu_has_vmx_msr_bitmap )
-            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-        release_pmu_ownship(PMU_OWNER_HVM);
-    }
-
-    vpmu->context = NULL;
-    vpmu_clear(vpmu);
-}
-
-struct arch_vpmu_ops core2_vpmu_ops = {
-    .do_wrmsr = core2_vpmu_do_wrmsr,
-    .do_rdmsr = core2_vpmu_do_rdmsr,
-    .do_interrupt = core2_vpmu_do_interrupt,
-    .do_cpuid = core2_vpmu_do_cpuid,
-    .arch_vpmu_destroy = core2_vpmu_destroy,
-    .arch_vpmu_save = core2_vpmu_save,
-    .arch_vpmu_load = core2_vpmu_load,
-    .arch_vpmu_dump = core2_vpmu_dump
-};
-
-static void core2_no_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    /*
-     * The vpmu is not enabled in this case, so clear the bits in the
-     * architectural performance monitoring leaf.
-     */
-    if ( input == 0xa )
-    {
-        *eax &= ~PMU_VERSION_MASK;
-        *eax &= ~PMU_GENERAL_NR_MASK;
-        *eax &= ~PMU_GENERAL_WIDTH_MASK;
-
-        *edx &= ~PMU_FIXED_NR_MASK;
-        *edx &= ~PMU_FIXED_WIDTH_MASK;
-    }
-}
-
-/*
- * If it is a vpmu MSR, set it to 0.
- */
-static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    if ( !is_core2_vpmu_msr(msr, &type, &index) )
-        return 0;
-    *msr_content = 0;
-    return 1;
-}
-
-/*
- * These functions are used in case vpmu is not enabled.
- */
-struct arch_vpmu_ops core2_no_vpmu_ops = {
-    .do_rdmsr = core2_no_vpmu_do_rdmsr,
-    .do_cpuid = core2_no_vpmu_do_cpuid,
-};
-
-int vmx_vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    uint8_t cpu_model = current_cpu_data.x86_model;
-    int ret = 0;
-
-    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( vpmu_mode == XENPMU_MODE_OFF )
-        return 0;
-
-    if ( family == 6 )
-    {
-        u64 caps;
-
-        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
-        full_width_write = (caps >> 13) & 1;
-
-        switch ( cpu_model )
-        {
-        /* Core2: */
-        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
-        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
-        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
-        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
-
-        case 0x2a: /* SandyBridge */
-        case 0x2d: /* SandyBridge, "Romley-EP" */
-
-        /* Nehalem: */
-        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
-        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
-        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
-
-        /* Westmere: */
-        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
-        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
-        case 0x27: /* 32 nm Westmere-EX */
-
-        case 0x3a: /* IvyBridge */
-        case 0x3e: /* IvyBridge EP */
-
-        /* Haswell: */
-        case 0x3c:
-        case 0x3f:
-        case 0x45:
-        case 0x46:
-
-        /* future: */
-        case 0x3d:
-        case 0x4e:
-            ret = core2_vpmu_initialise(v);
-            if ( !ret )
-                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
-            return ret;
-        }
-    }
-
-    printk("VPMU: Initialization failed. "
-           "Intel processor family %d model %d is not supported\n",
-           family, cpu_model);
-    return -EINVAL;
-}
-
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
deleted file mode 100644
index fdafe89..0000000
--- a/xen/arch/x86/hvm/vpmu.c
+++ /dev/null
@@ -1,726 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/event.h>
-#include <xen/softirq.h>
-#include <xen/hypercall.h>
-#include <xen/guest_access.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/msr.h>
-#include <asm/p2m.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <asm/hvm/vpmu.h>
-#include <asm/hvm/svm/svm.h>
-#include <asm/hvm/svm/vmcb.h>
-#include <asm/apic.h>
-#include <asm/nmi.h>
-#include <public/pmu.h>
-
-/*
- * "vpmu" :     vpmu generally enabled
- * "vpmu=off" : vpmu generally disabled
- * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
- */
-uint64_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
-uint64_t __read_mostly vpmu_features = 0;
-static void parse_vpmu_param(char *s);
-custom_param("vpmu", parse_vpmu_param);
-
-static void pmu_softnmi(void);
-
-static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
-static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
-
-static uint32_t __read_mostly vpmu_interrupt_type = PMU_APIC_VECTOR;
-
-static void __init parse_vpmu_param(char *s)
-{
-    char *ss;
-
-    vpmu_mode = XENPMU_MODE_ON;
-    if (*s == '\0')
-        return;
-
-    do {
-        ss = strchr(s, ',');
-        if ( ss )
-            *ss = '\0';
-
-        switch  ( parse_bool(s) )
-        {
-        case 0:
-            vpmu_mode = XENPMU_MODE_OFF;
-            return;
-        case -1:
-            if ( !strcmp(s, "nmi") )
-                vpmu_interrupt_type = APIC_DM_NMI;
-            else if ( !strcmp(s, "bts") )
-                vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
-            else if ( !strcmp(s, "priv") )
-            {
-                vpmu_mode &= ~XENPMU_MODE_ON;
-                vpmu_mode |= XENPMU_MODE_PRIV;
-            }
-            else
-            {
-                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
-                vpmu_mode = XENPMU_MODE_OFF;
-                return;
-            }
-        default:
-            break;
-        }
-
-        s = ss + 1;
-    } while ( ss );
-}
-
-void vpmu_lvtpc_update(uint32_t val)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    vpmu->hw_lapic_lvtpc = vpmu_interrupt_type | (val & APIC_LVT_MASKED);
-
-    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
-    if ( !has_hvm_container_domain(current->domain) ||
-         !(current->arch.vpmu.xenpmu_data &&
-           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-}
-
-static void vpmu_send_nmi(struct vcpu *v)
-{
-    struct vlapic *vlapic;
-    u32 vlapic_lvtpc;
-    unsigned char int_vec;
-
-    ASSERT( is_hvm_vcpu(v) );
-
-    vlapic = vcpu_vlapic(v);
-    if ( !is_vlapic_lvtpc_enabled(vlapic) )
-        return;
-
-    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
-    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
-
-    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
-        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
-    else
-        v->nmi_pending = 1;
-}
-
-int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
-         ((vpmu_mode & XENPMU_MODE_PRIV) &&
-          !is_hardware_domain(current->domain)) )
-        return 0;
-
-    ASSERT((rw == VPMU_MSR_READ) || (rw == VPMU_MSR_WRITE));
-
-    if ( vpmu->arch_vpmu_ops )
-    {
-        int ret;
-
-        if ( (rw == VPMU_MSR_READ) && vpmu->arch_vpmu_ops->do_rdmsr )
-            ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
-        else if ( vpmu->arch_vpmu_ops->do_wrmsr )
-            ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, *msr_content);
-        else
-            return 0;
-
-        /*
-         * We may have received a PMU interrupt while handling MSR access
-         * and since do_wr/rdmsr may load VPMU context we should save
-         * (and unload) it again.
-         */
-        if ( !has_hvm_container_domain(current->domain) &&
-            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
-
-        return ret;
-    }
-
-    return 0;
-}
-
-/* This routine may be called in NMI context */
-int vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu;
-
-    /*
-     * dom0 will handle interrupt for special domains (e.g. idle domain) or,
-     * in XENPMU_MODE_PRIV, for everyone.
-     */
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
-         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
-        v = hardware_domain->vcpu[smp_processor_id() %
-            hardware_domain->max_vcpus];
-
-    vpmu = vcpu_vpmu(v);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( !is_hvm_domain(v->domain)  || (vpmu_mode & XENPMU_MODE_PRIV) )
-    {
-        /* PV(H) guest or dom0 is doing system profiling */
-        struct cpu_user_regs *gregs;
-        int err;
-
-        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
-            return 1;
-
-        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) &&
-             !vpmu->arch_vpmu_ops->do_interrupt(regs) )
-            return 0;
-
-        /* PV guest will be reading PMU MSRs from xenpmu_data */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-
-        if ( !is_hvm_domain(current->domain) )
-        {
-            /* Store appropriate registers in xenpmu_data */
-            if ( is_pv_32bit_domain(current->domain) )
-            {
-                gregs = guest_cpu_user_regs();
-
-                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
-                     !is_pv_32bit_domain(v->domain) )
-                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                           gregs, sizeof(struct cpu_user_regs));
-                else
-                {
-                    /*
-                     * A 32-bit dom0 cannot process Xen's addresses (which
-                     * are 64 bit) and is therefore treated the same way as
-                     * a non-privileged PV 32-bit domain.
-                     */
-
-                    struct compat_cpu_user_regs *cmp;
-
-                    cmp = (struct compat_cpu_user_regs *)
-                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-                    /* XLAT_cpu_user_regs() translates in place via cmp. */
-                    XLAT_cpu_user_regs(cmp, gregs);
-                }
-            }
-            else if ( !is_hardware_domain(current->domain) &&
-                      !is_idle_vcpu(current) )
-            {
-                /* PV(H) guest */
-                gregs = guest_cpu_user_regs();
-                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                       gregs, sizeof(struct cpu_user_regs));
-            }
-            else
-                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                       regs, sizeof(struct cpu_user_regs));
-
-            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            if ( !is_pvh_domain(current->domain) )
-                gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
-            else if ( !(vpmu_interrupt_type & APIC_DM_NMI) )
-            {
-                struct segment_register seg_cs;
-
-                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
-                gregs->cs = seg_cs.attr.fields.dpl;
-            }
-        }
-        else
-        {
-            /* HVM guest */
-            struct segment_register cs;
-
-            gregs = guest_cpu_user_regs();
-            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                   gregs, sizeof(struct cpu_user_regs));
-
-            /* This is unsafe in NMI context, we'll do it in softint handler */
-            if ( !(vpmu_interrupt_type & APIC_DM_NMI ) )
-            {
-                hvm_get_segment_register(current, x86_seg_cs, &cs);
-                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-                gregs->cs = cs.attr.fields.dpl;
-            }
-        }
-
-        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
-        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
-        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
-
-        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
-            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
-
-        if ( vpmu_interrupt_type & APIC_DM_NMI )
-        {
-            per_cpu(sampled_vcpu, smp_processor_id()) = current;
-            raise_softirq(PMU_SOFTIRQ);
-        }
-        else
-            send_guest_vcpu_virq(v, VIRQ_XENPMU);
-
-        return 1;
-    }
-
-    if ( vpmu->arch_vpmu_ops )
-    {
-        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
-            return 0;
-
-        if ( vpmu_interrupt_type & APIC_DM_NMI )
-        {
-            per_cpu(sampled_vcpu, smp_processor_id()) = current;
-            raise_softirq(PMU_SOFTIRQ);
-        }
-        else
-            vpmu_send_nmi(v);
-
-        return 1;
-    }
-
-    return 0;
-}
-
-void vpmu_do_cpuid(unsigned int input,
-                   unsigned int *eax, unsigned int *ebx,
-                   unsigned int *ecx, unsigned int *edx)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
-        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
-}
-
-static void vpmu_save_force(void *arg)
-{
-    struct vcpu *v = (struct vcpu *)arg;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return;
-
-    if ( vpmu->arch_vpmu_ops )
-        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-
-    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
-
-    per_cpu(last_vcpu, smp_processor_id()) = NULL;
-
-    pmu_softnmi();
-}
-
-void vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
-       return;
-
-    vpmu->last_pcpu = pcpu;
-    per_cpu(last_vcpu, pcpu) = v;
-
-    if ( vpmu->arch_vpmu_ops )
-        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-    apic_write(APIC_LVTPC, vpmu_interrupt_type | APIC_LVT_MASKED);
-
-    /* Make sure there are no outstanding PMU NMIs */
-    pmu_softnmi();
-}
-
-void vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-    struct vcpu *prev = NULL;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    /* First time this VCPU is running here */
-    if ( vpmu->last_pcpu != pcpu )
-    {
-        /*
-         * Get the context from the last pcpu that we ran on. Note that if
-         * another VCPU is running there it must have saved this VCPU's
-         * context before starting to run (see below).
-         * There should be no race since the remote pcpu will disable
-         * interrupts before saving the context.
-         */
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-        }
-    }
-
-    /* Prevent forced context save from remote CPU */
-    local_irq_disable();
-
-    prev = per_cpu(last_vcpu, pcpu);
-
-    if ( prev != v && prev )
-    {
-        vpmu = vcpu_vpmu(prev);
-
-        /* Someone ran here before us */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-        vpmu_save_force(prev);
-        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-        pmu_softnmi();
-
-        vpmu = vcpu_vpmu(v);
-    }
-
-    local_irq_enable();
-
-    /* Only when PMU is counting, we load PMU context immediately. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
-        (!has_hvm_container_domain(v->domain) && vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
-        return;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
-    {
-        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-        /* Arch code needs to set VPMU_CONTEXT_LOADED */
-        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
-    }
-}
-
-void vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t vendor = current_cpu_data.x86_vendor;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        vpmu_destroy(v);
-    vpmu_clear(vpmu);
-    vpmu->context = NULL;
-
-    switch ( vendor )
-    {
-    case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        break;
-
-    case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        break;
-
-    default:
-        printk("VPMU: Initialization failed. "
-               "Unknown CPU vendor %d\n", vendor);
-        vpmu_mode = XENPMU_MODE_OFF;
-        break;
-    }
-}
-
-void vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
-    {
-        /* Unload VPMU first. This will stop counters */
-        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
-                         vpmu_save_force, (void *)v, 1);
-
-        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
-    }
-}
-
-/* Dump some vpmu information on the console. Used in keyhandler dump_domains(). */
-void vpmu_dump(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
-        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
-}
-
-/* Unload VPMU contexts */
-static void vpmu_unload_all(void)
-{
-    struct domain *d;
-    struct vcpu *v;
-    struct vpmu_struct *vpmu;
-
-    for_each_domain(d)
-    {
-        for_each_vcpu ( d, v )
-        {
-            if ( v != current )
-                vcpu_pause(v);
-            vpmu = vcpu_vpmu(v);
-
-            if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            {
-                if ( v != current )
-                    vcpu_unpause(v);
-                continue;
-            }
-
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-            if ( v != current )
-                vcpu_unpause(v);
-        }
-    }
-}
-
-/* Process the softirq set by PMU NMI handler */
-static void pmu_softnmi(void)
-{
-    struct cpu_user_regs *regs;
-    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
-
-    if ( sampled == NULL )
-        return;
-    per_cpu(sampled_vcpu, smp_processor_id()) = NULL;
-
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
-         (sampled->domain->domain_id >= DOMID_FIRST_RESERVED) )
-        v = hardware_domain->vcpu[smp_processor_id() %
-                                  hardware_domain->max_vcpus];
-    else
-    {
-        if ( is_hvm_domain(sampled->domain) )
-        {
-            vpmu_send_nmi(sampled);
-            return;
-        }
-        v = sampled;
-    }
-
-    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-    if ( has_hvm_container_domain(sampled->domain) )
-    {
-        struct segment_register cs;
-
-        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
-        regs->cs = cs.attr.fields.dpl;
-    }
-
-    send_guest_vcpu_virq(v, VIRQ_XENPMU);
-}
-
-int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
-{
-    return vpmu_do_interrupt(regs);
-}
-
-static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
-{
-    struct vcpu *v;
-    struct page_info *page;
-    uint64_t gfn = params->d.val;
-    static bool_t __read_mostly pvpmu_initted = 0;
-
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return -EINVAL;
-
-    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
-    if ( !page )
-        return -EINVAL;
-
-    if ( !get_page_type(page, PGT_writable_page) )
-    {
-        put_page(page);
-        return -EINVAL;
-    }
-
-    v = d->vcpu[params->vcpu];
-    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
-    if ( !v->arch.vpmu.xenpmu_data )
-    {
-        put_page_and_type(page);
-        return -EINVAL;
-    }
-
-    if ( !pvpmu_initted )
-    {
-        if (reserve_lapic_nmi() == 0)
-            set_nmi_callback(pmu_nmi_interrupt);
-        else
-        {
-            printk("Failed to reserve PMU NMI\n");
-            put_page(page);
-            return -EBUSY;
-        }
-        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
-
-        pvpmu_initted = 1;
-    }
-
-    vpmu_initialise(v);
-
-    return 0;
-}
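From the guest's side, XENPMU_init is just the registration of a shared
page: the guest allocates a page for its PMU data, puts the page's gfn
into the params structure and issues the xenpmu_op hypercall. A
self-contained sketch of that flow -- the structure is a simplified view
of xen_pmu_params_t and the hypercall is stubbed out, so the names and
values here are placeholders, not a real guest API:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified view of xen_pmu_params_t (real layout in public/pmu.h). */
    struct pmu_params {
        uint64_t val;    /* for XENPMU_init: gfn of the shared PMU page */
        uint32_t vcpu;
    };

    /* Stub standing in for the real xenpmu_op hypercall. */
    static long fake_xenpmu_op(int op, struct pmu_params *p)
    {
        printf("xenpmu_op(%d): vcpu %u registers gfn %#llx\n",
               op, p->vcpu, (unsigned long long)p->val);
        return 0;
    }

    int main(void)
    {
        /* gfn and op number are made up for the sake of the example. */
        struct pmu_params p = { .val = 0x1234, .vcpu = 0 };

        return (int)fake_xenpmu_op(/* XENPMU_init */ 0, &p);
    }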
-
-static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
-{
-    struct vcpu *v;
-    uint64_t mfn;
-
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return;
-
-    v = d->vcpu[params->vcpu];
-    if (v != current)
-        vcpu_pause(v);
-
-    if ( v->arch.vpmu.xenpmu_data )
-    {
-        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
-        if ( mfn_valid(mfn) )
-        {
-            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
-            put_page_and_type(mfn_to_page(mfn));
-        }
-    }
-    vpmu_destroy(v);
-
-    if (v != current)
-        vcpu_unpause(v);
-}
-
-long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
-{
-    int ret = -EINVAL;
-    xen_pmu_params_t pmu_params;
-
-    switch ( op )
-    {
-    case XENPMU_mode_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        if ( (pmu_params.d.val & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
-             ((pmu_params.d.val & XENPMU_MODE_ON) &&
-              (pmu_params.d.val & XENPMU_MODE_PRIV)) )
-            return -EINVAL;
-
-        vpmu_mode = pmu_params.d.val;
-
-        if ( (vpmu_mode == XENPMU_MODE_OFF) || (vpmu_mode & XENPMU_MODE_PRIV) )
-            /*
-             * After this, VPMU context will never be loaded during a context
-             * switch. Because PMU MSR accesses load VPMU context we don't
-             * allow them when VPMU is off and, for non-privileged domains,
-             * when we are in privileged mode. (We do want these accesses to
-             * load VPMU context for the control domain in this mode.)
-             */
-            vpmu_unload_all();
-
-        ret = 0;
-        break;
-
-    case XENPMU_mode_get:
-        pmu_params.d.val = vpmu_mode;
-        pmu_params.v.version.maj = XENPMU_VER_MAJ;
-        pmu_params.v.version.min = XENPMU_VER_MIN;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_feature_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        if ( pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
-            return -EINVAL;
-
-        vpmu_features = pmu_params.d.val;
-
-        ret = 0;
-        break;
-
-    case XENPMU_feature_get:
-        pmu_params.d.val = vpmu_features;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_init:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        ret = pvpmu_init(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_finish:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        pvpmu_finish(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_lvtpc_set:
-        if ( current->arch.vpmu.xenpmu_data == NULL )
-            return -EINVAL;
-        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
-        ret = 0;
-        break;
-    case XENPMU_flush:
-        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
-        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
-        vpmu_load(current);
-        ret = 0;
-        break;
-    }
-
-    return ret;
-}
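One subtlety in XENPMU_mode_set above: the requested value may be OFF, ON
or PRIV, but ON and PRIV are mutually exclusive. A standalone restatement
of that predicate (the flag values below are assumed for illustration
only):

    #include <stdio.h>

    #define XENPMU_MODE_OFF  0        /* values assumed for illustration */
    #define XENPMU_MODE_ON   (1 << 0)
    #define XENPMU_MODE_PRIV (1 << 1)

    /* Mirrors the sanity check in XENPMU_mode_set. */
    static int mode_valid(unsigned long val)
    {
        if ( val & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV) )
            return 0;
        return !((val & XENPMU_MODE_ON) && (val & XENPMU_MODE_PRIV));
    }

    int main(void)
    {
        printf("%d %d %d %d\n",
               mode_valid(XENPMU_MODE_OFF), mode_valid(XENPMU_MODE_ON),
               mode_valid(XENPMU_MODE_PRIV),
               mode_valid(XENPMU_MODE_ON | XENPMU_MODE_PRIV));
        return 0;   /* prints: 1 1 1 0 */
    }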
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 5aae2e7..bf5d9a5 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -19,7 +19,7 @@
 #include <asm/processor.h>
 #include <asm/regs.h>
 #include <asm/current.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 90c5adb..529e409 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,7 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
diff --git a/xen/arch/x86/vpmu.c b/xen/arch/x86/vpmu.c
new file mode 100644
index 0000000..809c11e
--- /dev/null
+++ b/xen/arch/x86/vpmu.c
@@ -0,0 +1,726 @@
+/*
+ * vpmu.c: PMU virtualization for HVM and PV domains.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
+#include <xen/guest_access.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/msr.h>
+#include <asm/p2m.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <asm/vpmu.h>
+#include <asm/hvm/svm/svm.h>
+#include <asm/hvm/svm/vmcb.h>
+#include <asm/apic.h>
+#include <asm/nmi.h>
+#include <public/pmu.h>
+
+/*
+ * "vpmu" :      vpmu generally enabled
+ * "vpmu=off" :  vpmu generally disabled
+ * "vpmu=bts" :  vpmu enabled and Intel BTS feature switched on
+ * "vpmu=nmi" :  vpmu enabled, PMU interrupt delivered as an NMI
+ * "vpmu=priv" : vpmu enabled in privileged (system-wide profiling) mode
+ */
+uint64_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
+uint64_t __read_mostly vpmu_features = 0;
+static void parse_vpmu_param(char *s);
+custom_param("vpmu", parse_vpmu_param);
+
+static void pmu_softnmi(void);
+
+static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
+static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
+
+static uint32_t __read_mostly vpmu_interrupt_type = PMU_APIC_VECTOR;
+
+static void __init parse_vpmu_param(char *s)
+{
+    char *ss;
+
+    vpmu_mode = XENPMU_MODE_ON;
+    if (*s == '\0')
+        return;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        switch  ( parse_bool(s) )
+        {
+        case 0:
+            vpmu_mode = XENPMU_MODE_OFF;
+            return;
+        case -1:
+            if ( !strcmp(s, "nmi") )
+                vpmu_interrupt_type = APIC_DM_NMI;
+            else if ( !strcmp(s, "bts") )
+                vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
+            else if ( !strcmp(s, "priv") )
+            {
+                vpmu_mode &= ~XENPMU_MODE_ON;
+                vpmu_mode |= XENPMU_MODE_PRIV;
+            }
+            else
+            {
+                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+                vpmu_mode = XENPMU_MODE_OFF;
+                return;
+            }
+        default:
+            break;
+        }
+
+        s = ss + 1;
+    } while ( ss );
+}
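In practice the parser above lets the hypervisor command line combine
flags, for example (illustrative invocations):

    vpmu=bts,nmi    # enable vPMU with BTS; deliver PMU interrupts as NMIs
    vpmu=priv       # privileged (system-wide) profiling mode
    vpmu=off        # disable vPMU entirely

Note that an unknown flag disables vPMU altogether rather than being
silently ignored.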
+
+void vpmu_lvtpc_update(uint32_t val)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    vpmu->hw_lapic_lvtpc = vpmu_interrupt_type | (val & APIC_LVT_MASKED);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if ( !has_hvm_container_domain(current->domain) ||
+         !(current->arch.vpmu.xenpmu_data &&
+           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+}
+
+static void vpmu_send_nmi(struct vcpu *v)
+{
+    struct vlapic *vlapic;
+    u32 vlapic_lvtpc;
+    unsigned char int_vec;
+
+    ASSERT( is_hvm_vcpu(v) );
+
+    vlapic = vcpu_vlapic(v);
+    if ( !is_vlapic_lvtpc_enabled(vlapic) )
+        return;
+
+    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
+    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
+
+    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
+        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
+    else
+        v->nmi_pending = 1;
+}
+
+int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
+         ((vpmu_mode & XENPMU_MODE_PRIV) &&
+          !is_hardware_domain(current->domain)) )
+        return 0;
+
+    ASSERT((rw == VPMU_MSR_READ) || (rw == VPMU_MSR_WRITE));
+
+    if ( vpmu->arch_vpmu_ops )
+    {
+        int ret;
+
+        if ( (rw == VPMU_MSR_READ) && vpmu->arch_vpmu_ops->do_rdmsr )
+            ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+        else if ( vpmu->arch_vpmu_ops->do_wrmsr )
+            ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, *msr_content);
+        else
+            return 0;
+
+        /*
+         * We may have received a PMU interrupt while handling MSR access
+         * and since do_wr/rdmsr may load VPMU context we should save
+         * (and unload) it again.
+         */
+        if ( !has_hvm_container_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+
+        return ret;
+    }
+
+    return 0;
+}
+
+/* This routine may be called in NMI context */
+int vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu;
+
+    /*
+     * dom0 will handle interrupt for special domains (e.g. idle domain) or,
+     * in XENPMU_MODE_PRIV, for everyone.
+     */
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
+        v = hardware_domain->vcpu[smp_processor_id() %
+            hardware_domain->max_vcpus];
+
+    vpmu = vcpu_vpmu(v);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain)  || (vpmu_mode & XENPMU_MODE_PRIV) )
+    {
+        /* PV(H) guest or dom0 is doing system profiling */
+        struct cpu_user_regs *gregs;
+        int err;
+
+        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+            return 1;
+
+        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) &&
+             !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+            return 0;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        if ( !is_hvm_domain(current->domain) )
+        {
+            /* Store appropriate registers in xenpmu_data */
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                gregs = guest_cpu_user_regs();
+
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                     !is_pv_32bit_domain(v->domain) )
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           gregs, sizeof(struct cpu_user_regs));
+                else
+                {
+                    /*
+                     * 32-bit dom0 cannot process Xen's addresses (which are
+                     * 64-bit) and therefore we treat it the same way as a
+                     * non-privileged PV 32-bit domain.
+                     */
+
+                    struct compat_cpu_user_regs *cmp;
+
+                    cmp = (struct compat_cpu_user_regs *)
+                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                    XLAT_cpu_user_regs(cmp, gregs);
+                }
+            }
+            else if ( !is_hardware_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV(H) guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       regs, sizeof(struct cpu_user_regs));
+
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            if ( !is_pvh_domain(current->domain) )
+                gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+            else if ( !(vpmu_interrupt_type & APIC_DM_NMI) )
+            {
+                struct segment_register seg_cs;
+
+                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
+                gregs->cs = seg_cs.attr.fields.dpl;
+            }
+        }
+        else
+        {
+            /* HVM guest */
+            struct segment_register cs;
+
+            gregs = guest_cpu_user_regs();
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   gregs, sizeof(struct cpu_user_regs));
+
+            /* This is unsafe in NMI context, we'll do it in softint handler */
+            if ( !(vpmu_interrupt_type & APIC_DM_NMI) )
+            {
+                hvm_get_segment_register(current, x86_seg_cs, &cs);
+                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                gregs->cs = cs.attr.fields.dpl;
+            }
+        }
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        if ( vpmu_interrupt_type & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
+
+    if ( vpmu->arch_vpmu_ops )
+    {
+        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+            return 0;
+
+        if ( vpmu_interrupt_type & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            vpmu_send_nmi(v);
+
+        return 1;
+    }
+
+    return 0;
+}
+
+void vpmu_do_cpuid(unsigned int input,
+                   unsigned int *eax, unsigned int *ebx,
+                   unsigned int *ecx, unsigned int *edx)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
+        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
+}
+
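+/*
+ * Save (and stop) a VCPU's PMU context on the pCPU where it is loaded.
+ * Runs either locally or as an IPI callback via on_selected_cpus(), in
+ * which case it executes with interrupts disabled on the remote CPU.
+ */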
+static void vpmu_save_force(void *arg)
+{
+    struct vcpu *v = (struct vcpu *)arg;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        return;
+
+    if ( vpmu->arch_vpmu_ops )
+        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
+
+    per_cpu(last_vcpu, smp_processor_id()) = NULL;
+
+    pmu_softnmi();
+}
+
+void vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
+       return;
+
+    vpmu->last_pcpu = pcpu;
+    per_cpu(last_vcpu, pcpu) = v;
+
+    if ( vpmu->arch_vpmu_ops )
+        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+    apic_write(APIC_LVTPC, vpmu_interrupt_type | APIC_LVT_MASKED);
+
+    /* Make sure there are no outstanding PMU NMIs */
+    pmu_softnmi();
+}
+
+void vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+    struct vcpu *prev = NULL;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    /* First time this VCPU is running here */
+    if ( vpmu->last_pcpu != pcpu )
+    {
+        /*
+         * Get the context from last pcpu that we ran on. Note that if another
+         * VCPU is running there it must have saved this VCPU's context before
+         * starting to run (see below).
+         * There should be no race since remote pcpu will disable interrupts
+         * before saving the context.
+         */
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
+                             vpmu_save_force, (void *)v, 1);
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+        }
+    }
+
+    /* Prevent forced context save from remote CPU */
+    local_irq_disable();
+
+    prev = per_cpu(last_vcpu, pcpu);
+
+    if ( prev != v && prev )
+    {
+        vpmu = vcpu_vpmu(prev);
+
+        /* Someone ran here before us */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+        vpmu_save_force(prev);
+        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+        pmu_softnmi();
+
+        vpmu = vcpu_vpmu(v);
+    }
+
+    local_irq_enable();
+
+    /* Only when PMU is counting, we load PMU context immediately. */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+         (!has_hvm_container_domain(v->domain) &&
+          (vpmu->xenpmu_data->pmu_flags & PMU_CACHED)) )
+        return;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
+    {
+        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+        /* Arch code needs to set VPMU_CONTEXT_LOADED */
+        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
+    }
+}
+
+void vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t vendor = current_cpu_data.x86_vendor;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        vpmu_destroy(v);
+    vpmu_clear(vpmu);
+    vpmu->context = NULL;
+
+    switch ( vendor )
+    {
+    case X86_VENDOR_AMD:
+        if ( svm_vpmu_initialise(v) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        break;
+
+    case X86_VENDOR_INTEL:
+        if ( vmx_vpmu_initialise(v) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        break;
+
+    default:
+        printk("VPMU: Initialization failed. "
+               "Unknown CPU vendor %d\n", vendor);
+        vpmu_mode = XENPMU_MODE_OFF;
+        break;
+    }
+}
+
+void vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
+        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
+}
+
+/* Dump some vpmu information to the console. Used in keyhandler dump_domains(). */
+void vpmu_dump(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
+        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
+}
+
+/* Unload VPMU contexts */
+static void vpmu_unload_all(void)
+{
+    struct domain *d;
+    struct vcpu *v;
+    struct vpmu_struct *vpmu;
+
+    for_each_domain(d)
+    {
+        for_each_vcpu ( d, v )
+        {
+            if ( v != current )
+                vcpu_pause(v);
+            vpmu = vcpu_vpmu(v);
+
+            if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            {
+                if ( v != current )
+                    vcpu_unpause(v);
+                continue;
+            }
+
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
+                             vpmu_save_force, (void *)v, 1);
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+            if ( v != current )
+                vcpu_unpause(v);
+        }
+    }
+}
+
+/* Process the softirq set by PMU NMI handler */
+static void pmu_softnmi(void)
+{
+    struct cpu_user_regs *regs;
+    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
+
+    if ( sampled == NULL )
+        return;
+    per_cpu(sampled_vcpu, smp_processor_id()) = NULL;
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (sampled->domain->domain_id >= DOMID_FIRST_RESERVED) )
+        v = hardware_domain->vcpu[smp_processor_id() %
+                                  hardware_domain->max_vcpus];
+    else
+    {
+        if ( is_hvm_domain(sampled->domain) )
+        {
+            vpmu_send_nmi(sampled);
+            return;
+        }
+        v = sampled;
+    }
+
+    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+    if ( has_hvm_container_domain(sampled->domain) )
+    {
+        struct segment_register cs;
+
+        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
+        regs->cs = cs.attr.fields.dpl;
+    }
+
+    send_guest_vcpu_virq(v, VIRQ_XENPMU);
+}
+
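+/* NMI callback: hand the sample off to the common VPMU interrupt handler. */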
+int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
+{
+    return vpmu_do_interrupt(regs);
+}
+
+static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    struct page_info *page;
+    uint64_t gfn = params->d.val;
+    static bool_t __read_mostly pvpmu_initted = 0;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
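+    /*
+     * Map the guest-provided page into Xen's global address space so the
+     * shared PMU data can be updated from any context, including NMI.
+     */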
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
+    if ( !v->arch.vpmu.xenpmu_data )
+    {
+        put_page_and_type(page);
+        return -EINVAL;
+    }
+
+    if ( !pvpmu_initted )
+    {
+        if ( reserve_lapic_nmi() == 0 )
+            set_nmi_callback(pmu_nmi_interrupt);
+        else
+        {
+            printk("Failed to reserve PMU NMI\n");
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            v->arch.vpmu.xenpmu_data = NULL;
+            put_page_and_type(page);
+            return -EBUSY;
+        }
+        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
+
+        pvpmu_initted = 1;
+    }
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page_and_type(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xen_pmu_params_t pmu_params;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( (pmu_params.d.val & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((pmu_params.d.val & XENPMU_MODE_ON) &&
+              (pmu_params.d.val & XENPMU_MODE_PRIV)) )
+            return -EINVAL;
+
+        vpmu_mode = pmu_params.d.val;
+
+        if ( (vpmu_mode == XENPMU_MODE_OFF) || (vpmu_mode & XENPMU_MODE_PRIV) )
+            /*
+             * After this, VPMU context will never be loaded during a context
+             * switch. Because PMU MSR accesses load VPMU context we don't
+             * allow them when VPMU is off and, for non-privileged domains,
+             * when we are in privileged mode. (We do want these accesses to
+             * load VPMU context for the control domain in this mode.)
+             */
+            vpmu_unload_all();
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.d.val = vpmu_mode;
+        pmu_params.v.version.maj = XENPMU_VER_MAJ;
+        pmu_params.v.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_features = pmu_params.d.val;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.d.val = vpmu_features;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_lvtpc_set:
+        if ( current->arch.vpmu.xenpmu_data == NULL )
+            return -EINVAL;
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        ret = 0;
+        break;
+    case XENPMU_flush:
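+        /*
+         * Guest has consumed the cached sample: clear PMU_CACHED, restore
+         * the guest's LVTPC setting and reload its PMU context.
+         */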
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        vpmu_load(current);
+        ret = 0;
+        break;
+    }
+
+    return ret;
+}
diff --git a/xen/arch/x86/vpmu_amd.c b/xen/arch/x86/vpmu_amd.c
new file mode 100644
index 0000000..1fdbee4
--- /dev/null
+++ b/xen/arch/x86/vpmu_amd.c
@@ -0,0 +1,510 @@
+/*
+ * vpmu.c: PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2010, Advanced Micro Devices, Inc.
+ * Parts of this code are Copyright (c) 2007, Intel Corporation
+ *
+ * Author: Wei Wang <wei.wang2@amd.com>
+ * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ */
+
+#include <xen/config.h>
+#include <xen/xenoprof.h>
+#include <xen/hvm/save.h>
+#include <xen/sched.h>
+#include <xen/irq.h>
+#include <asm/apic.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/vpmu.h>
+#include <public/pmu.h>
+
+#define MSR_F10H_EVNTSEL_GO_SHIFT   40
+#define MSR_F10H_EVNTSEL_EN_SHIFT   22
+#define MSR_F10H_COUNTER_LENGTH     48
+
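+/*
+ * A counter programmed with a negative (bit-47-set) value is considered
+ * overflowed once bit 47, the top bit of the 48-bit counter, reads back
+ * as clear.
+ */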
+#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
+#define set_guest_mode(msr) (msr |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
+
+static unsigned int __read_mostly num_counters;
+static const u32 __read_mostly *counters;
+static const u32 __read_mostly *ctrls;
+static bool_t __read_mostly k7_counters_mirrored;
+
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
+/* PMU Counter MSRs. */
+static const u32 AMD_F10H_COUNTERS[] = {
+    MSR_K7_PERFCTR0,
+    MSR_K7_PERFCTR1,
+    MSR_K7_PERFCTR2,
+    MSR_K7_PERFCTR3
+};
+
+/* PMU Control MSRs. */
+static const u32 AMD_F10H_CTRLS[] = {
+    MSR_K7_EVNTSEL0,
+    MSR_K7_EVNTSEL1,
+    MSR_K7_EVNTSEL2,
+    MSR_K7_EVNTSEL3
+};
+
+static const u32 AMD_F15H_COUNTERS[] = {
+    MSR_AMD_FAM15H_PERFCTR0,
+    MSR_AMD_FAM15H_PERFCTR1,
+    MSR_AMD_FAM15H_PERFCTR2,
+    MSR_AMD_FAM15H_PERFCTR3,
+    MSR_AMD_FAM15H_PERFCTR4,
+    MSR_AMD_FAM15H_PERFCTR5
+};
+
+static const u32 AMD_F15H_CTRLS[] = {
+    MSR_AMD_FAM15H_EVNTSEL0,
+    MSR_AMD_FAM15H_EVNTSEL1,
+    MSR_AMD_FAM15H_EVNTSEL2,
+    MSR_AMD_FAM15H_EVNTSEL3,
+    MSR_AMD_FAM15H_EVNTSEL4,
+    MSR_AMD_FAM15H_EVNTSEL5
+};
+
+/* Use private context as a flag for MSR bitmap */
+#define msr_bitmap_on(vpmu)    do { (vpmu)->priv_context = (void *)-1; } while (0)
+#define msr_bitmap_off(vpmu)   do { (vpmu)->priv_context = NULL; } while (0)
+#define is_msr_bitmap_on(vpmu) ((vpmu)->priv_context != NULL)
+
+static inline int get_pmu_reg_type(u32 addr)
+{
+    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
+        return MSR_TYPE_CTRL;
+
+    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
+        return MSR_TYPE_COUNTER;
+
+    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
+         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
+    {
+        if ( addr & 1 )
+            return MSR_TYPE_COUNTER;
+        else
+            return MSR_TYPE_CTRL;
+    }
+
+    /* unsupported registers */
+    return -1;
+}
+
+static inline u32 get_fam15h_addr(u32 addr)
+{
+    switch ( addr )
+    {
+    case MSR_K7_PERFCTR0:
+        return MSR_AMD_FAM15H_PERFCTR0;
+    case MSR_K7_PERFCTR1:
+        return MSR_AMD_FAM15H_PERFCTR1;
+    case MSR_K7_PERFCTR2:
+        return MSR_AMD_FAM15H_PERFCTR2;
+    case MSR_K7_PERFCTR3:
+        return MSR_AMD_FAM15H_PERFCTR3;
+    case MSR_K7_EVNTSEL0:
+        return MSR_AMD_FAM15H_EVNTSEL0;
+    case MSR_K7_EVNTSEL1:
+        return MSR_AMD_FAM15H_EVNTSEL1;
+    case MSR_K7_EVNTSEL2:
+        return MSR_AMD_FAM15H_EVNTSEL2;
+    case MSR_K7_EVNTSEL3:
+        return MSR_AMD_FAM15H_EVNTSEL3;
+    default:
+        break;
+    }
+
+    return addr;
+}
+
+static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
+    }
+
+    msr_bitmap_on(vpmu);
+}
+
+static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
+    }
+
+    msr_bitmap_off(vpmu);
+}
+
+/* Must be NMI-safe */
+static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
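+    /*
+     * AMD has no global overflow status register, so there is nothing to
+     * check here; treat every PMC interrupt as a valid sample.
+     */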
+    return 1;
+}
+
+static inline void context_load(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
+    }
+}
+
+/* Must be NMI-safe */
+static void amd_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    vpmu_reset(vpmu, VPMU_FROZEN);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        unsigned int i;
+
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], ctrl_regs[i]);
+
+        return;
+    }
+
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
+    context_load(v);
+}
+
+static inline void context_save(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+
+    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
+    for ( i = 0; i < num_counters; i++ )
+        rdmsrl(counters[i], counter_regs[i]);
+}
+
+static int amd_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    unsigned int i;
+
+    /*
+     * Stop the counters. If we came here via vpmu_save_force (i.e.
+     * when VPMU_CONTEXT_SAVE is set) counters are already stopped.
+     */
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+    {
+        vpmu_set(vpmu, VPMU_FROZEN);
+
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], 0);
+
+        return 0;
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        return 0;
+
+    context_save(v);
+
+    if ( has_hvm_container_domain(v->domain) &&
+        !vpmu_is_set(vpmu, VPMU_RUNNING) && is_msr_bitmap_on(vpmu) )
+        amd_vpmu_unset_msr_bitmap(v);
+
+    return 1;
+}
+
+static void context_update(unsigned int msr, u64 msr_content)
+{
+    unsigned int i;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    if ( k7_counters_mirrored &&
+        ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
+    {
+        msr = get_fam15h_addr(msr);
+    }
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        if ( msr == ctrls[i] )
+        {
+            ctrl_regs[i] = msr_content;
+            return;
+        }
+        else if ( msr == counters[i] )
+        {
+            counter_regs[i] = msr_content;
+            return;
+        }
+    }
+}
+
+static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    /* For all counters, enable guest only mode for HVM guest */
+    if ( has_hvm_container_domain(v->domain) &&
+         (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+         !is_guest_mode(msr_content) )
+    {
+        set_guest_mode(msr_content);
+    }
+
+    /* check if the first counter is enabled */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 1;
+        vpmu_set(vpmu, VPMU_RUNNING);
+
+        if ( has_hvm_container_domain(v->domain) && is_msr_bitmap_on(vpmu) )
+            amd_vpmu_set_msr_bitmap(v);
+    }
+
+    /* stop saving & restore if guest stops first counter */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        vpmu_reset(vpmu, VPMU_RUNNING);
+        if ( has_hvm_container_domain(v->domain) && is_msr_bitmap_on(vpmu) )
+            amd_vpmu_unset_msr_bitmap(v);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    /* Update vpmu context immediately */
+    context_update(msr, msr_content);
+
+    /* Write to hw counters */
+    wrmsrl(msr, msr_content);
+    return 1;
+}
+
+static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    rdmsrl(msr, *msr_content);
+
+    return 1;
+}
+
+static int amd_vpmu_initialise(struct vcpu *v)
+{
+    struct xen_pmu_amd_ctxt *ctxt;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( counters == NULL )
+    {
+        switch ( family )
+        {
+        case 0x15:
+            num_counters = F15H_NUM_COUNTERS;
+            counters = AMD_F15H_COUNTERS;
+            ctrls = AMD_F15H_CTRLS;
+            k7_counters_mirrored = 1;
+            break;
+        case 0x10:
+        case 0x12:
+        case 0x14:
+        case 0x16:
+        default:
+            num_counters = F10H_NUM_COUNTERS;
+            counters = AMD_F10H_COUNTERS;
+            ctrls = AMD_F10H_CTRLS;
+            k7_counters_mirrored = 0;
+            break;
+        }
+    }
+
+    if ( has_hvm_container_domain(v->domain) )
+    {
+        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
+                     "PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->domain->domain_id, v->vcpu_id);
+            return -ENOMEM;
+        }
+    }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
+
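+    /*
+     * The counter and control register arrays live immediately after the
+     * fixed part of the context; the counters/ctrls fields hold their byte
+     * offsets, which vpmu_reg_pointer() turns back into pointers.
+     */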
+    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
+    vpmu->context = ctxt;
+    vpmu->priv_context = NULL;
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    return 0;
+}
+
+static void amd_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( has_hvm_container_domain(v->domain) )
+    {
+        if ( is_msr_bitmap_on(vpmu) )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
+}
+
+/* VPMU part of the 'q' keyhandler */
+static void amd_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+    unsigned int i;
+
+    printk("    VPMU state: 0x%x ", vpmu->flags);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+    {
+         printk("\n");
+         return;
+    }
+
+    printk("(");
+    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
+        printk("PASSIVE_DOMAIN_ALLOCATED, ");
+    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
+        printk("FROZEN, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+        printk("SAVE, ");
+    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+        printk("RUNNING, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        printk("LOADED, ");
+    printk("ALLOCATED)\n");
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        uint64_t ctrl, cntr;
+
+        rdmsrl(ctrls[i], ctrl);
+        rdmsrl(counters[i], cntr);
+        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
+    }
+}
+
+struct arch_vpmu_ops amd_vpmu_ops = {
+    .do_wrmsr = amd_vpmu_do_wrmsr,
+    .do_rdmsr = amd_vpmu_do_rdmsr,
+    .do_interrupt = amd_vpmu_do_interrupt,
+    .arch_vpmu_destroy = amd_vpmu_destroy,
+    .arch_vpmu_save = amd_vpmu_save,
+    .arch_vpmu_load = amd_vpmu_load,
+    .arch_vpmu_dump = amd_vpmu_dump
+};
+
+int svm_vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    int ret = 0;
+
+    /* vpmu enabled? */
+    if ( vpmu_mode == XENPMU_MODE_OFF )
+        return 0;
+
+    switch ( family )
+    {
+    case 0x10:
+    case 0x12:
+    case 0x14:
+    case 0x15:
+    case 0x16:
+        ret = amd_vpmu_initialise(v);
+        if ( !ret )
+            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
+        return ret;
+    }
+
+    printk("VPMU: Initialization failed. "
+           "AMD processor family %d has not "
+           "been supported\n", family);
+    return -EINVAL;
+}
+
diff --git a/xen/arch/x86/vpmu_intel.c b/xen/arch/x86/vpmu_intel.c
new file mode 100644
index 0000000..2b9161b
--- /dev/null
+++ b/xen/arch/x86/vpmu_intel.c
@@ -0,0 +1,948 @@
+/*
+ * vpmu_core2.c: CORE 2 specific PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/irq.h>
+#include <asm/system.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/apic.h>
+#include <asm/traps.h>
+#include <asm/msr.h>
+#include <asm/msr-index.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <public/sched.h>
+#include <public/hvm/save.h>
+#include <public/pmu.h>
+#include <asm/vpmu.h>
+
+/*
+ * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
+ * instruction.
+ * cpuid 0xa - Architectural Performance Monitoring Leaf
+ * Register eax
+ */
+#define PMU_VERSION_SHIFT        0  /* Version ID */
+#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
+#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
+
+#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
+#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
+#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
+
+#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
+#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
+#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
+/* Register edx */
+#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
+#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
+#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
+
+#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
+#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
+#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
+
+/* Alias registers (0x4c1) for full-width writes to PMCs */
+#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
+static bool_t __read_mostly full_width_write;
+
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
+/*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
+ * QUIRK to work around an issue on various family 6 CPUs.
+ * The issue leads to endless PMC interrupt loops on the processor: if a
+ * PMC reaches the value 0 while the interrupt handler is running, it
+ * stays at 0 and immediately raises a new interrupt once the handler
+ * finishes.
+ * The workaround is to read all flagged counters and, if a counter's
+ * value is 0, to write 1 (or any other non-zero value) into it.
+ * No erratum exists for this and the real cause of the behaviour is
+ * unknown.
+ */
+bool_t __read_mostly is_pmc_quirk;
+
+static void check_pmc_quirk(void)
+{
+    if ( current_cpu_data.x86 == 6 )
+        is_pmc_quirk = 1;
+    else
+        is_pmc_quirk = 0;
+}
+
+static void handle_pmc_quirk(u64 msr_content)
+{
+    int i;
+    u64 val;
+
+    if ( !is_pmc_quirk )
+        return;
+
+    val = msr_content;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+    val = msr_content >> 32;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+}
+
+/*
+ * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
+ */
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
+
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
+
+/*
+ * Read the number of fixed counters via CPUID.EDX[0xa].EDX[0..4]
+ */
+static int core2_get_fixed_pmc_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
+}
+
+/* edx bits 5-12: Bit width of fixed-function performance counters  */
+static int core2_get_bitwidth_fix_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
+}
+
+static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
+{
+    int i;
+    u32 msr_index_pmc;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
+        {
+            *type = MSR_TYPE_COUNTER;
+            *index = i;
+            return 1;
+        }
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
+        (msr_index == MSR_IA32_DS_AREA) ||
+        (msr_index == MSR_IA32_PEBS_ENABLE) )
+    {
+        *type = MSR_TYPE_CTRL;
+        return 1;
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
+    {
+        *type = MSR_TYPE_GLOBAL;
+        return 1;
+    }
+
+    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
+    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_COUNTER;
+        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
+        return 1;
+    }
+
+    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_CTRL;
+        *index = msr_index - MSR_P6_EVNTSEL0;
+        return 1;
+    }
+
+    return 0;
+}
+
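+/*
+ * The VMX MSR bitmap keeps read intercepts in its first 2KB and write
+ * intercepts in the second 2KB; within each half, MSRs at 0xc0000000 and
+ * up occupy the upper 1KB. msraddr_to_bitpos() maps an MSR address to its
+ * bit index within either half.
+ */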
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
+static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    /* Allow Read/Write PMU Counters MSR Directly. */
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                      msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    /* Allow Read PMU Non-global Controls Directly. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
+
+static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                      msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
+
+static inline void __core2_vpmu_save(struct vcpu *v)
+{
+    int i;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
+
+    if ( !has_hvm_container_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
+}
+
+/* Must be NMI-safe */
+static int core2_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !has_hvm_container_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
+        return 0;
+
+    __core2_vpmu_save(v);
+
+    /* Unset PMU MSR bitmap to trap lazy load. */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) &&
+         has_hvm_container_domain(v->domain) && cpu_has_vmx_msr_bitmap )
+        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+
+    return 1;
+}
+
+static inline void __core2_vpmu_load(struct vcpu *v)
+{
+    unsigned int i, pmc_start;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+
+    if ( full_width_write )
+        pmc_start = MSR_IA32_A_PERFCTR0;
+    else
+        pmc_start = MSR_IA32_PERFCTR0;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
+    }
+
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( !has_hvm_container_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
+}
+
+static void core2_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        return;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
+    __core2_vpmu_load(v);
+}
+
+static int core2_vpmu_alloc_resource(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+    uint64_t *p = NULL;
+
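+    /*
+     * Private (not guest-visible) context: a single 64-bit mask tracking
+     * which counters the guest has enabled.
+     */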
+    p = xzalloc_bytes(sizeof(uint64_t));
+    if ( !p )
+        goto out_err;
+
+    if ( !is_pv_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            goto out_err;
+
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR) )
+            goto out_err_hvm;
+        if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR) )
+            goto out_err_hvm;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct xen_pmu_cntr_pair) *
+                                       arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err_hvm;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
+    }
+
+    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+                                    sizeof(uint64_t) * fixed_pmc_cnt;
+
+    vpmu->context = (void *)core2_vpmu_cxt;
+    vpmu->priv_context = (void *)p;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
+    return 1;
+
+out_err_hvm:
+    vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR);
+    vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR);
+    release_pmu_ownship(PMU_OWNER_HVM);
+
+    xfree(core2_vpmu_cxt);
+    xfree(p);
+
+out_err:
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->vcpu_id, v->domain->domain_id);
+
+    return 0;
+}
+
+static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( !is_core2_vpmu_msr(msr_index, type, index) )
+        return 0;
+
+    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
+         !core2_vpmu_alloc_resource(current) )
+        return 0;
+
+    /* Lazily load the VPMU context on first MSR access. */
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        __core2_vpmu_load(current);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        if ( cpu_has_vmx_msr_bitmap && has_hvm_container_domain(current->domain) )
+            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
+    }
+    return 1;
+}
+
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( has_hvm_container_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
+static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    int i, tmp;
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+    uint64_t *enabled_cntrs;
+
+    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        /* Special handling for BTS */
+        if ( msr == MSR_IA32_DEBUGCTLMSR )
+        {
+            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
+                                 IA32_DEBUGCTLMSR_BTINT;
+
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
+                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
+            if ( msr_content & supported )
+            {
+                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                    return 1;
+                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
+                inject_trap(v, TRAP_gp_fault);
+                return 0;
+            }
+        }
+        return 0;
+    }
+
+    core2_vpmu_cxt = vpmu->context;
+    enabled_cntrs = (uint64_t *)vpmu->priv_context;
+    switch ( msr )
+    {
+    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_STATUS:
+        gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
+                 "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
+        inject_trap(v, TRAP_gp_fault);
+        return 1;
+    case MSR_IA32_PEBS_ENABLE:
+        if ( msr_content & 1 )
+            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
+                     "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
+        return 1;
+    case MSR_IA32_DS_AREA:
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            if ( !is_canonical_address(msr_content) )
+            {
+                gdprintk(XENLOG_WARNING,
+                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "x\n",
+                         msr_content);
+                inject_trap(v, TRAP_gp_fault);
+                return 1;
+            }
+            core2_vpmu_cxt->ds_area = msr_content;
+            break;
+        }
+        gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_CTRL:
+        core2_vpmu_cxt->global_ctrl = msr_content;
+        break;
+    case MSR_CORE_PERF_FIXED_CTR_CTRL:
+        if ( has_hvm_container_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
+                               &core2_vpmu_cxt->global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
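+        /* Fixed counters are tracked in bits 32+ of *enabled_cntrs. */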
+        *enabled_cntrs &= ~(((1ULL << fixed_pmc_cnt) - 1) << 32);
+        if ( msr_content != 0 )
+        {
+            u64 val = msr_content;
+            for ( i = 0; i < fixed_pmc_cnt; i++ )
+            {
+                if ( val & 3 )
+                    *enabled_cntrs |= (1ULL << 32) << i;
+                val >>= FIXED_CTR_CTRL_BITS;
+            }
+        }
+
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
+        break;
+    default:
+        tmp = msr - MSR_P6_EVNTSEL0;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+            if ( has_hvm_container_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
+                                   &core2_vpmu_cxt->global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+
+            if ( msr_content & (1ULL << 22) )
+                *enabled_cntrs |= 1ULL << tmp;
+            else
+                *enabled_cntrs &= ~(1ULL << tmp);
+
+            xen_pmu_cntr_pair[tmp].control = msr_content;
+        }
+    }
+
+    if ( (core2_vpmu_cxt->global_ctrl & *enabled_cntrs) ||
+         (core2_vpmu_cxt->ds_area != 0) )
+        vpmu_set(vpmu, VPMU_RUNNING);
+    else
+        vpmu_reset(vpmu, VPMU_RUNNING);
+
+    if ( type != MSR_TYPE_GLOBAL )
+    {
+        u64 mask;
+        int inject_gp = 0;
+        switch ( type )
+        {
+        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
+            mask = ~((1ull << 32) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
+            if ( msr == MSR_IA32_DS_AREA )
+                break;
+            /* 4 bits per counter; mask covers all fixed_pmc_cnt counters. */
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
+            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        }
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
+        else
+            wrmsrl(msr, msr_content);
+    }
+    else
+    {
+        if ( has_hvm_container_domain(v->domain) )
+            vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+        else
+            wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
+
+    return 1;
+}
+
+static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+
+    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        core2_vpmu_cxt = vpmu->context;
+        switch ( msr )
+        {
+        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+            *msr_content = 0;
+            break;
+        case MSR_CORE_PERF_GLOBAL_STATUS:
+            *msr_content = core2_vpmu_cxt->global_status;
+            break;
+        case MSR_CORE_PERF_GLOBAL_CTRL:
+            if ( has_hvm_container_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
+            break;
+        default:
+            rdmsrl(msr, *msr_content);
+        }
+    }
+    else
+    {
+        /* Extension for BTS */
+        if ( msr == MSR_IA32_MISC_ENABLE )
+        {
+            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
+        }
+        else
+            return 0;
+    }
+
+    return 1;
+}
+
+static void core2_vpmu_do_cpuid(unsigned int input,
+                                unsigned int *eax, unsigned int *ebx,
+                                unsigned int *ecx, unsigned int *edx)
+{
+    if ( input == 0x1 )
+    {
+        struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            /* Switch on the 'Debug Store' feature in CPUID.EAX[1]:EDX[21] */
+            *edx |= cpufeat_mask(X86_FEATURE_DS);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
+        }
+    }
+}
+
+/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
+static void core2_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int i;
+    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+    u64 val;
+    uint64_t *fixed_counters;
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+         return;
+
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            printk("    vPMU loaded\n");
+        else
+            printk("    vPMU allocated\n");
+        return;
+    }
+
+    printk("    vPMU running\n");
+    core2_vpmu_cxt = vpmu->context;
+    fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    xen_pmu_cntr_pair = vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    /* Print the contents of the counter and its configuration msr. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
+
+    /*
+     * The configuration of the fixed counter is 4 bits each in the
+     * MSR_CORE_PERF_FIXED_CTR_CTRL.
+     */
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, fixed_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
+        val >>= FIXED_CTR_CTRL_BITS;
+    }
+}
+
+static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    u64 msr_content;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
+
+    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
+    if ( msr_content )
+    {
+        if ( is_pmc_quirk )
+            handle_pmc_quirk(msr_content);
+        core2_vpmu_cxt->global_status |= msr_content;
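+        /*
+         * Acknowledge everything in GLOBAL_OVF_CTRL: CondChgd and OvfBuffer
+         * (bits 63:62), fixed counters 0-2 (bits 34:32) and all
+         * general-purpose counters.
+         */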
+        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+    }
+    else
+    {
+        /* No PMC overflow but perhaps a Trace Message interrupt. */
+        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
+        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
+            return 0;
+    }
+
+    return 1;
+}
+
+static int core2_vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    u64 msr_content;
+    struct cpuinfo_x86 *c = &current_cpu_data;
+
+    if ( !(vpmu_features & XENPMU_FEATURE_INTEL_BTS) )
+        goto func_out;
+    /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
+    if ( cpu_has(c, X86_FEATURE_DS) )
+    {
+        if ( !cpu_has(c, X86_FEATURE_DTES64) )
+        {
+            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
+                   " - Debug Store disabled for %pv\n",
+                   v);
+            goto func_out;
+        }
+        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
+        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
+        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
+        {
+            /* If BTS_UNAVAIL is set reset the DS feature. */
+            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
+            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
+                   " - Debug Store disabled for %pv\n",
+                   v);
+        }
+        else
+        {
+            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
+            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
+                printk(XENLOG_G_INFO
+                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
+            printk("******************************************************\n");
+            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
+            printk("** Using this processor feature in a virtualized    **\n");
+            printk("** environment is not 100%% safe.                    **\n");
+            printk("** Setting the DS buffer address with wrong values  **\n");
+            printk("** may lead to hypervisor hangs or crashes.         **\n");
+            printk("** It is NOT recommended for production use!        **\n");
+            printk("******************************************************\n");
+        }
+    }
+func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    check_pmc_quirk();
+
+    /* PV domains can allocate resources immediately */
+    if ( !has_hvm_container_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
+        return 1;
+
+    return 0;
+}
+
+static void core2_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    xfree(vpmu->context);
+
+    if ( has_hvm_container_domain(v->domain) )
+    {
+        xfree(vpmu->priv_context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
+}
+
+struct arch_vpmu_ops core2_vpmu_ops = {
+    .do_wrmsr = core2_vpmu_do_wrmsr,
+    .do_rdmsr = core2_vpmu_do_rdmsr,
+    .do_interrupt = core2_vpmu_do_interrupt,
+    .do_cpuid = core2_vpmu_do_cpuid,
+    .arch_vpmu_destroy = core2_vpmu_destroy,
+    .arch_vpmu_save = core2_vpmu_save,
+    .arch_vpmu_load = core2_vpmu_load,
+    .arch_vpmu_dump = core2_vpmu_dump
+};
+
+static void core2_no_vpmu_do_cpuid(unsigned int input,
+                                   unsigned int *eax, unsigned int *ebx,
+                                   unsigned int *ecx, unsigned int *edx)
+{
+    /*
+     * The VPMU is not enabled in this case, so clear the relevant bits of
+     * the architectural performance monitoring leaf.
+     */
+    if ( input == 0xa )
+    {
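+        /* Clearing the version ID (EAX[7:0]) hides the architectural PMU. */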
+        *eax &= ~PMU_VERSION_MASK;
+        *eax &= ~PMU_GENERAL_NR_MASK;
+        *eax &= ~PMU_GENERAL_WIDTH_MASK;
+
+        *edx &= ~PMU_FIXED_NR_MASK;
+        *edx &= ~PMU_FIXED_WIDTH_MASK;
+    }
+}
+
+/*
+ * If it's a VPMU MSR, report its value as 0.
+ */
+static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    if ( !is_core2_vpmu_msr(msr, &type, &index) )
+        return 0;
+    *msr_content = 0;
+    return 1;
+}
+
+/*
+ * These handlers are used when the VPMU is not enabled.
+ */
+struct arch_vpmu_ops core2_no_vpmu_ops = {
+    .do_rdmsr = core2_no_vpmu_do_rdmsr,
+    .do_cpuid = core2_no_vpmu_do_cpuid,
+};
+
+int vmx_vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    uint8_t cpu_model = current_cpu_data.x86_model;
+    int ret = 0;
+
+    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
+    if ( vpmu_mode == XENPMU_MODE_OFF )
+        return 0;
+
+    if ( family == 6 )
+    {
+        u64 caps;
+
+        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
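+        /* IA32_PERF_CAPABILITIES bit 13 (FW_WRITE): full-width counter writes */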
+        full_width_write = (caps >> 13) & 1;
+
+        switch ( cpu_model )
+        {
+        /* Core2: */
+        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
+        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
+        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
+        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
+
+        case 0x2a: /* SandyBridge */
+        case 0x2d: /* SandyBridge, "Romley-EP" */
+
+        /* Nehalem: */
+        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
+        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
+        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
+
+        /* Westmere: */
+        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
+        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
+        case 0x27: /* 32 nm Westmere-EX */
+
+        case 0x3a: /* IvyBridge */
+        case 0x3e: /* IvyBridge EP */
+
+        /* Haswell: */
+        case 0x3c:
+        case 0x3f:
+        case 0x45:
+        case 0x46:
+
+        /* future: */
+        case 0x3d:
+        case 0x4e:
+            ret = core2_vpmu_initialise(v);
+            if ( !ret )
+                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
+            return ret;
+        }
+    }
+
+    printk("VPMU: Initialization failed. "
+           "Intel processor family %d model %d has not "
+           "been supported\n", family, cpu_model);
+    return -EINVAL;
+}
+
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index dd34b2c..a9823fa 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -20,7 +20,7 @@
 #define __ASM_X86_HVM_VMX_VMCS_H__
 
 #include <asm/hvm/io.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <irq_vectors.h>
 
 extern void vmcs_dump_vcpu(struct vcpu *v);
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
deleted file mode 100644
index bab8779..0000000
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ /dev/null
@@ -1,102 +0,0 @@
-/*
- * vpmu.h: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_H_
-#define __ASM_X86_HVM_VPMU_H_
-
-#include <public/pmu.h>
-
-#define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
-#define vpmu_vcpu(vpmu)   container_of(vpmu, struct vcpu, arch.vpmu)
-
-#define MSR_TYPE_COUNTER            0
-#define MSR_TYPE_CTRL               1
-#define MSR_TYPE_GLOBAL             2
-#define MSR_TYPE_ARCH_COUNTER       3
-#define MSR_TYPE_ARCH_CTRL          4
-
-/* Start of PMU register bank */
-#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
-                                                 (uintptr_t)ctxt->offset))
-
-/* Arch specific operations shared by all vpmus */
-struct arch_vpmu_ops {
-    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
-    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
-    int (*do_interrupt)(struct cpu_user_regs *regs);
-    void (*do_cpuid)(unsigned int input,
-                     unsigned int *eax, unsigned int *ebx,
-                     unsigned int *ecx, unsigned int *edx);
-    void (*arch_vpmu_destroy)(struct vcpu *v);
-    int (*arch_vpmu_save)(struct vcpu *v);
-    void (*arch_vpmu_load)(struct vcpu *v);
-    void (*arch_vpmu_dump)(const struct vcpu *);
-};
-
-int vmx_vpmu_initialise(struct vcpu *);
-int svm_vpmu_initialise(struct vcpu *);
-
-struct vpmu_struct {
-    u32 flags;
-    u32 last_pcpu;
-    u32 hw_lapic_lvtpc;
-    void *context;      /* May be shared with PV guest */
-    void *priv_context; /* hypervisor-only */
-    struct arch_vpmu_ops *arch_vpmu_ops;
-    xen_pmu_data_t *xenpmu_data;
-};
-
-/* VPMU states */
-#define VPMU_CONTEXT_ALLOCATED              0x1
-#define VPMU_CONTEXT_LOADED                 0x2
-#define VPMU_RUNNING                        0x4
-#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
-#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
-#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
-
-#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
-#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
-#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
-#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
-#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
-
-#define VPMU_MSR_READ  0
-#define VPMU_MSR_WRITE 1
-
-void vpmu_lvtpc_update(uint32_t val);
-int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw);
-int vpmu_do_interrupt(struct cpu_user_regs *regs);
-void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
-                                       unsigned int *ecx, unsigned int *edx);
-void vpmu_initialise(struct vcpu *v);
-void vpmu_destroy(struct vcpu *v);
-void vpmu_save(struct vcpu *v);
-void vpmu_load(struct vcpu *v);
-void vpmu_dump(struct vcpu *v);
-
-extern int acquire_pmu_ownership(int pmu_ownership);
-extern void release_pmu_ownership(int pmu_ownership);
-
-extern uint64_t vpmu_mode;
-extern uint64_t vpmu_features;
-
-#endif /* __ASM_X86_HVM_VPMU_H_*/
-
diff --git a/xen/include/asm-x86/vpmu.h b/xen/include/asm-x86/vpmu.h
new file mode 100644
index 0000000..bab8779
--- /dev/null
+++ b/xen/include/asm-x86/vpmu.h
@@ -0,0 +1,102 @@
+/*
+ * vpmu.h: PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#ifndef __ASM_X86_HVM_VPMU_H_
+#define __ASM_X86_HVM_VPMU_H_
+
+#include <public/pmu.h>
+
+#define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
+#define vpmu_vcpu(vpmu)   container_of(vpmu, struct vcpu, arch.vpmu)
+
+#define MSR_TYPE_COUNTER            0
+#define MSR_TYPE_CTRL               1
+#define MSR_TYPE_GLOBAL             2
+#define MSR_TYPE_ARCH_COUNTER       3
+#define MSR_TYPE_ARCH_CTRL          4
+
+/* Start of PMU register bank */
+#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
+                                                 (uintptr_t)ctxt->offset))
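+/*
+ * E.g., for a context that records the bank's offset in a 'fixed_counters'
+ * field (name purely illustrative):
+ *     uint64_t *fixed = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+ */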
+
+/* Arch specific operations shared by all vpmus */
+struct arch_vpmu_ops {
+    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
+    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
+    int (*do_interrupt)(struct cpu_user_regs *regs);
+    void (*do_cpuid)(unsigned int input,
+                     unsigned int *eax, unsigned int *ebx,
+                     unsigned int *ecx, unsigned int *edx);
+    void (*arch_vpmu_destroy)(struct vcpu *v);
+    int (*arch_vpmu_save)(struct vcpu *v);
+    void (*arch_vpmu_load)(struct vcpu *v);
+    void (*arch_vpmu_dump)(const struct vcpu *);
+};
+
+int vmx_vpmu_initialise(struct vcpu *);
+int svm_vpmu_initialise(struct vcpu *);
+
+struct vpmu_struct {
+    u32 flags;
+    u32 last_pcpu;
+    u32 hw_lapic_lvtpc;
+    void *context;      /* May be shared with PV guest */
+    void *priv_context; /* hypervisor-only */
+    struct arch_vpmu_ops *arch_vpmu_ops;
+    xen_pmu_data_t *xenpmu_data;
+};
+
+/* VPMU states */
+#define VPMU_CONTEXT_ALLOCATED              0x1
+#define VPMU_CONTEXT_LOADED                 0x2
+#define VPMU_RUNNING                        0x4
+#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
+#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
+#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
+
+#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
+#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
+#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
+#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
+#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
+
+#define VPMU_MSR_READ  0
+#define VPMU_MSR_WRITE 1
+
+void vpmu_lvtpc_update(uint32_t val);
+int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw);
+int vpmu_do_interrupt(struct cpu_user_regs *regs);
+void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
+                                       unsigned int *ecx, unsigned int *edx);
+void vpmu_initialise(struct vcpu *v);
+void vpmu_destroy(struct vcpu *v);
+void vpmu_save(struct vcpu *v);
+void vpmu_load(struct vcpu *v);
+void vpmu_dump(struct vcpu *v);
+
+extern int acquire_pmu_ownership(int pmu_ownership);
+extern void release_pmu_ownership(int pmu_ownership);
+
+extern uint64_t vpmu_mode;
+extern uint64_t vpmu_features;
+
+#endif /* __ASM_X86_HVM_VPMU_H_*/
+
-- 
1.8.1.4


* Re: [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support
  2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
                   ` (18 preceding siblings ...)
  2014-05-13 15:53 ` [PATCH v6 19/19] x86/VPMU: Move VPMU files up from hvm/ directory Boris Ostrovsky
@ 2014-05-16  7:40 ` Jan Beulich
  2014-05-16 14:57   ` Boris Ostrovsky
  19 siblings, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2014-05-16  7:40 UTC (permalink / raw)
  To: suravee.suthikulpanit, kevin.tian, Boris Ostrovsky, dietmar.hahn
  Cc: andrew.cooper3, keir, jun.nakajima, donald.d.dugger, xen-devel

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> * Replaced references to dom0 with hardware_domain (and is_control_domain with
>   is_hardware_domain for consistency)

I hope this wasn't a blind replacement - there are certainly cases where
is_control_domain() is warranted to remain for documentation purposes.

Jan


* Re: [PATCH v6 01/19] common/symbols: Export hypervisor symbols to privileged guest
  2014-05-13 15:53 ` [PATCH v6 01/19] common/symbols: Export hypervisor symbols to privileged guest Boris Ostrovsky
@ 2014-05-16  8:05   ` Jan Beulich
  2014-05-16 14:58     ` Boris Ostrovsky
  2014-06-05 10:29     ` Tim Deegan
  0 siblings, 2 replies; 65+ messages in thread
From: Jan Beulich @ 2014-05-16  8:05 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Tim Deegan, kevin.tian, keir, jun.nakajima, andrew.cooper3,
	Ian Jackson, donald.d.dugger, xen-devel, Ian Campbell,
	dietmar.hahn, suravee.suthikulpanit

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> Export Xen's symbols as {<address><type><name>} triplet via new XENPF_get_symbol
> hypercall

I already voiced my reservations on a very early version of this series.
While I can see the need for exposing these internals, I also see the
potential for abuse. I'd clearly want at least one other common code
maintainer's opinion here; sadly you didn't properly Cc them all (done
now).

> +    case XENPF_get_symbol:
> +    {
> +        char name[XEN_KSYM_NAME_LEN + 1];

This is a fairly large object you place on the stack. Considering that
there's a lock being held already, did you consider making this static?

> +        XEN_GUEST_HANDLE(char) nameh;
> +
> +        guest_from_compat_handle(nameh, op->u.symdata.name);
> +
> +        ret = xensyms_read(&op->u.symdata.symnum, &op->u.symdata.type,
> +                           &op->u.symdata.address, name);
> +
> +        if ( !ret && copy_to_guest(nameh, name, strlen(name)) )

I think I commented on this earlier already - you end up calling
strlen() on uninitialized data if xensyms_read() takes its second
early exit path. Additionally there's no way for the caller to
distinguish that success case from other success cases.

Considering that xensyms_read() is there only to implement this sub-
hypercall, I wonder whether you wouldn't be better off passing it the
handle, and have it do the copying. It's holding a lock itself, so the
static buffer could be placed there, along with the other statics it
already uses (which btw should go into the only function that makes
use of them).

And finally (I think I commented on this earlier as well) you still don't
have the caller pass in the size of the buffer it wants the symbol
copied to, and instead bake into the hypercall interface an arbitrary
(derived from current Xen internals) fixed name length. That's non-
extensible.
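
Just to illustrate (not meant as the final layout; the length field and
its placement are purely exemplary):

    struct xenpf_symdata {
        /* IN variables */
        uint32_t namelen;          /* size of the buffer 'name' refers to */
        /* IN/OUT variables */
        uint32_t symnum;           /* symbol index, updated by Xen */
        /* OUT variables */
        uint32_t type;
        uint64_t address;
        XEN_GUEST_HANDLE(char) name;
    };

That way the hypervisor can truncate (or refuse) based on what the caller
can actually hold, rather than both sides hard-coding today's
XEN_KSYM_NAME_LEN.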

> --- a/xen/arch/x86/x86_64/platform_hypercall.c
> +++ b/xen/arch/x86/x86_64/platform_hypercall.c
> @@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
>  CHECK_pf_enter_acpi_sleep;
>  #undef xen_pf_enter_acpi_sleep
>  
> +#define xenpf_symdata   compat_pf_symdata

Did you check that you really need this? There's no explicit instance of
the structure, so I would think it's not needed.

> --- a/xen/include/public/platform.h
> +++ b/xen/include/public/platform.h
> @@ -527,6 +527,21 @@ struct xenpf_core_parking {
>  typedef struct xenpf_core_parking xenpf_core_parking_t;
>  DEFINE_XEN_GUEST_HANDLE(xenpf_core_parking_t);
>  
> +#define XENPF_get_symbol   61
> +#define XEN_KSYM_NAME_LEN 127
> +struct xenpf_symdata {
> +    /* IN variables */
> +    uint32_t symnum;
> +
> +    /* OUT variables */
> +    uint32_t type;

Do you really need 32 bits here? Looks like this really is a char, i.e.
you could leave 3 bytes for future extensibility.

Also I think in a comment you should specify here how the loop
termination is to be recognized - after all that's part of the ABI
you introduce.

Jan


* Re: [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support
  2014-05-16  7:40 ` [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Jan Beulich
@ 2014-05-16 14:57   ` Boris Ostrovsky
  0 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-16 14:57 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	dietmar.hahn, xen-devel, donald.d.dugger, jun.nakajima

On 05/16/2014 03:40 AM, Jan Beulich wrote:
>>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
>> * Replaced references to dom0 with hardware_domain (and is_control_domain with
>>    is_hardware_domain for consistency)
> I hope this wasn't a blind replacement - there are certainly cases where
> is_control_domain() is warranted to remain for documentation purposes.

There were 2 or 3 instances that were replaced: it is used for 
determining who controls the system's PMU in privileged profiling mode.

-boris


* Re: [PATCH v6 01/19] common/symbols: Export hypervisor symbols to privileged guest
  2014-05-16  8:05   ` Jan Beulich
@ 2014-05-16 14:58     ` Boris Ostrovsky
  2014-05-16 15:16       ` Jan Beulich
  2014-06-05 10:29     ` Tim Deegan
  1 sibling, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-16 14:58 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, kevin.tian, keir, jun.nakajima, andrew.cooper3,
	Ian Jackson, donald.d.dugger, xen-devel, Ian Campbell,
	dietmar.hahn, suravee.suthikulpanit

On 05/16/2014 04:05 AM, Jan Beulich wrote:
>
>> +    case XENPF_get_symbol:
>> +    {
>> +        char name[XEN_KSYM_NAME_LEN + 1];
> This is a fairly large object you place on the stack. Considering that
> there's a lock being held already, did you consider making this static?
>
>> +        XEN_GUEST_HANDLE(char) nameh;
>> +
>> +        guest_from_compat_handle(nameh, op->u.symdata.name);
>> +
>> +        ret = xensyms_read(&op->u.symdata.symnum, &op->u.symdata.type,
>> +                           &op->u.symdata.address, name);
>> +
>> +        if ( !ret && copy_to_guest(nameh, name, strlen(name)) )
> I think I commented on this earlier already - you end up calling
> strlen() on uninitialized data if xensyms_read() takes its second
> early exit path. Additionally there's no way for the caller to
> distinguish that success case from other success cases.


" name[0] = '\0' " in xensyms_read() should address both of these concerns.

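I.e. roughly (untested, and modulo the exact shape of that exit path):

    if ( *symnum == symbols_num_syms )
    {
        /* Past the last symbol: return an empty name. */
        name[0] = '\0';
        return 0;
    }

That keeps the caller's strlen(name) well-defined, and a zero-length name
lets the caller recognize the end of the table (addressing the
distinguishability concern as well).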

> Considering that xensyms_read() is there only to implement this sub-
> hypercall, I wonder whether you wouldn't be better off passing it the
> handle, and have it do the copying. It's holding a lock itself, so the
> static buffer could be placed there, along with the other statics it
> already uses (which btw should go into the only function that makes
> use of them).

I didn't want to expose xensyms_read() to any handles, to keep it a 
purely "internal" (for lack of a better term) routine.

But if 'name' is to become static I don't want to introduce another lock 
(or move the existing one up from xensyms_read()), so I guess I would 
have to pass the handle.


>
> And finally (I think I commented on this earlier as well) you still don't
> have the caller pass in the size of the buffer it wants the symbol
> copied to, and instead bake into the hypercall interface an arbitrary
> (derived from current Xen internals) fixed name length. That's non-
> extensible.

So do you suggest passing in (and out) buffer size?

>
>> --- a/xen/arch/x86/x86_64/platform_hypercall.c
>> +++ b/xen/arch/x86/x86_64/platform_hypercall.c
>> @@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
>>   CHECK_pf_enter_acpi_sleep;
>>   #undef xen_pf_enter_acpi_sleep
>>   
>> +#define xenpf_symdata   compat_pf_symdata
> Did you check that you really need this? There's no explicit instance of
> the structure, so I would think it's not needed.

Isn't this required for auto-building compat interfaces?

>
>> --- a/xen/include/public/platform.h
>> +++ b/xen/include/public/platform.h
>> @@ -527,6 +527,21 @@ struct xenpf_core_parking {
>>   typedef struct xenpf_core_parking xenpf_core_parking_t;
>>   DEFINE_XEN_GUEST_HANDLE(xenpf_core_parking_t);
>>   
>> +#define XENPF_get_symbol   61
>> +#define XEN_KSYM_NAME_LEN 127
>> +struct xenpf_symdata {
>> +    /* IN variables */
>> +    uint32_t symnum;
>> +
>> +    /* OUT variables */
>> +    uint32_t type;
> Do you really need 32 bits here? Looks like this really is a char, i.e.
> you could leave 3 bytes for future extensibility.

Yes, this is a char. I can replace it with a padded union.
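
Something like this, I suppose (sketch only):

    /* OUT variables */
    union {
        char type;      /* symbol type character */
        uint32_t pad;   /* keeps the field 32 bits wide */
    } u;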

-boris


* Re: [PATCH v6 01/19] common/symbols: Export hypervisor symbols to privileged guest
  2014-05-16 14:58     ` Boris Ostrovsky
@ 2014-05-16 15:16       ` Jan Beulich
  2014-05-16 16:12         ` Boris Ostrovsky
  0 siblings, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2014-05-16 15:16 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Tim Deegan, kevin.tian, keir, jun.nakajima, andrew.cooper3,
	Ian Jackson, donald.d.dugger, xen-devel, Ian Campbell,
	dietmar.hahn, suravee.suthikulpanit

>>> On 16.05.14 at 16:58, <boris.ostrovsky@oracle.com> wrote:
> On 05/16/2014 04:05 AM, Jan Beulich wrote:
>> Considering that xensyms_read() is there only to implement this sub-
>> hypercall, I wonder whether you wouldn't be better off passing it the
>> handle, and have it do the copying. It's holding a lock itself, so the
>> static buffer could be placed there, along with the other statics it
>> already uses (which btw should go into the only function that makes
>> use of them).
> 
> I didn't want to expose xensyms_read() to any handles to keep it a 
> purely "internal" (for the lack of a better term) routine.
> 
> But if 'name' is to become static I don't want to introduce another lock 
> (or move existing one up from xensyms_read()) so I guess I would have to 
> pass the handle.

No, even in the caller you're already protected by a lock.

>> And finally (I think I commented on this earlier as well) you still don't
>> have the caller pass in the size of the buffer it wants the symbol
>> copied to, and instead bake into the hypercall interface an arbitrary
>> (derived from current Xen internals) fixed name length. That's non-
>> extensible.
> 
> So do you suggest passing in (and out) buffer size?

Since the output is nul-terminated, I don't see a strict need for passing
it out, but I clearly want it to be passed in.

>>> --- a/xen/arch/x86/x86_64/platform_hypercall.c
>>> +++ b/xen/arch/x86/x86_64/platform_hypercall.c
>>> @@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
>>>   CHECK_pf_enter_acpi_sleep;
>>>   #undef xen_pf_enter_acpi_sleep
>>>   
>>> +#define xenpf_symdata   compat_pf_symdata
>> Did you check that you really need this? There's no explicit instance of
>> the structure, so I would think it's not needed.
> 
> Isn't this required for auto-building compat interfaces?

That depends, and it seems to me that it's not needed here.

Jan


* Re: [PATCH v6 01/19] common/symbols: Export hypervisor symbols to privileged guest
  2014-05-16 15:16       ` Jan Beulich
@ 2014-05-16 16:12         ` Boris Ostrovsky
  0 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-16 16:12 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tim Deegan, kevin.tian, keir, jun.nakajima, andrew.cooper3,
	Ian Jackson, donald.d.dugger, xen-devel, Ian Campbell,
	dietmar.hahn, suravee.suthikulpanit

On 05/16/2014 11:16 AM, Jan Beulich wrote:
>>>> On 16.05.14 at 16:58, <boris.ostrovsky@oracle.com> wrote:
>> On 05/16/2014 04:05 AM, Jan Beulich wrote:
>>> Considering that xensyms_read() is there only to implement this sub-
>>> hypercall, I wonder whether you wouldn't be better off passing it the
>>> handle, and have it do the copying. It's holding a lock itself, so the
>>> static buffer could be placed there, along with the other statics it
>>> already uses (which btw should go into the only function that makes
>>> use of them).
>> I didn't want to expose xensyms_read() to any handles to keep it a
>> purely "internal" (for the lack of a better term) routine.
>>
>> But if 'name' is to become static I don't want to introduce another lock
>> (or move existing one up from xensyms_read()) so I guess I would have to
>> pass the handle.
> No, even in the caller you're already protected by a lock.


Ah, yes. Then I should be able to simply make 'name' static in 
do_platform_op().
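
I.e. (sketch; relying on the platform-op lock already being held around
the sub-op):

    case XENPF_get_symbol:
    {
        static char name[XEN_KSYM_NAME_LEN + 1];
        XEN_GUEST_HANDLE(char) nameh;
        ...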


>>>> --- a/xen/arch/x86/x86_64/platform_hypercall.c
>>>> +++ b/xen/arch/x86/x86_64/platform_hypercall.c
>>>> @@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
>>>>    CHECK_pf_enter_acpi_sleep;
>>>>    #undef xen_pf_enter_acpi_sleep
>>>>    
>>>> +#define xenpf_symdata   compat_pf_symdata
>>> Did you check that you really need this? There's no explicit instance of
>>> the structure, so I would think it's not needed.
>> Isn't this required for auto-building compat interfaces?
> That depends, and it seems to me that it's not needed here.

OK, I'll see if I really need it.

-boris


* Re: [PATCH v6 03/19] x86/VPMU: Minor VPMU cleanup
  2014-05-13 15:53 ` [PATCH v6 03/19] x86/VPMU: Minor VPMU cleanup Boris Ostrovsky
@ 2014-05-19 11:55   ` Tian, Kevin
  2014-05-19 14:26   ` Jan Beulich
  1 sibling, 0 replies; 65+ messages in thread
From: Tian, Kevin @ 2014-05-19 11:55 UTC (permalink / raw)
  To: Boris Ostrovsky, JBeulich, dietmar.hahn, suravee.suthikulpanit
  Cc: andrew.cooper3, keir, Dugger, Donald D, Nakajima, Jun, xen-devel

> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> Sent: Tuesday, May 13, 2014 11:53 PM
> 
> Update macros that modify VPMU flags to allow changing multiple bits at
> once.
> 
> Make sure that we only touch MSR bitmap on HVM guests (both VMX and
> SVM). This
> is needed by subsequent PMU patches.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
> Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  xen/arch/x86/hvm/svm/vpmu.c       | 14 +++++++++-----
>  xen/arch/x86/hvm/vmx/vpmu_core2.c | 12 +++++-------
>  xen/arch/x86/hvm/vpmu.c           |  3 +--
>  xen/include/asm-x86/hvm/vpmu.h    |  9 +++++----
>  4 files changed, 20 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
> index 3ac7d53..3666915 100644
> --- a/xen/arch/x86/hvm/svm/vpmu.c
> +++ b/xen/arch/x86/hvm/svm/vpmu.c
> @@ -244,7 +244,8 @@ static int amd_vpmu_save(struct vcpu *v)
> 
>      context_save(v);
> 
> -    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
> +    if ( is_hvm_domain(v->domain) &&
> +        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
>          amd_vpmu_unset_msr_bitmap(v);
> 
>      return 1;
> @@ -284,7 +285,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr,
> uint64_t msr_content)
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> 
>      /* For all counters, enable guest only mode for HVM guest */
> -    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
> +    if ( is_hvm_domain(v->domain) && (get_pmu_reg_type(msr) ==
> MSR_TYPE_CTRL) &&
>          !(is_guest_mode(msr_content)) )
>      {
>          set_guest_mode(msr_content);
> @@ -300,7 +301,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr,
> uint64_t msr_content)
>          apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
>          vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
> 
> -        if ( !((struct amd_vpmu_context
> *)vpmu->context)->msr_bitmap_set )
> +        if ( is_hvm_domain(v->domain) &&
> +             !((struct amd_vpmu_context
> *)vpmu->context)->msr_bitmap_set )
>              amd_vpmu_set_msr_bitmap(v);
>      }
> 
> @@ -311,7 +313,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr,
> uint64_t msr_content)
>          apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
>          vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
>          vpmu_reset(vpmu, VPMU_RUNNING);
> -        if ( ((struct amd_vpmu_context
> *)vpmu->context)->msr_bitmap_set )
> +        if ( is_hvm_domain(v->domain) &&
> +             ((struct amd_vpmu_context
> *)vpmu->context)->msr_bitmap_set )
>              amd_vpmu_unset_msr_bitmap(v);
>          release_pmu_ownship(PMU_OWNER_HVM);
>      }
> @@ -403,7 +406,8 @@ static void amd_vpmu_destroy(struct vcpu *v)
>      if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
>          return;
> 
> -    if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
> +    if ( is_hvm_domain(v->domain) &&
> +         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
>          amd_vpmu_unset_msr_bitmap(v);
> 
>      xfree(vpmu->context);
> diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> index ccd14d9..a3fb458 100644
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -326,16 +326,14 @@ static int core2_vpmu_save(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> 
> -    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
> -        return 0;
> -
> -    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
> +    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE |
> VPMU_CONTEXT_LOADED) )
>          return 0;
> 
>      __core2_vpmu_save(v);
> 
>      /* Unset PMU MSR bitmap to trap lazy load. */
> -    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) &&
> cpu_has_vmx_msr_bitmap )
> +    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) &&
> is_hvm_domain(v->domain) &&
> +         cpu_has_vmx_msr_bitmap )
>          core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
> 
>      return 1;
> @@ -448,7 +446,7 @@ static int core2_vpmu_msr_common_check(u32
> msr_index, int *type, int *index)
>      {
>          __core2_vpmu_load(current);
>          vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
> -        if ( cpu_has_vmx_msr_bitmap )
> +        if ( cpu_has_vmx_msr_bitmap &&
> is_hvm_domain(current->domain) )
> 
> core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
>      }
>      return 1;
> @@ -815,7 +813,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
>          return;
>      xfree(core2_vpmu_cxt->pmu_enable);
>      xfree(vpmu->context);
> -    if ( cpu_has_vmx_msr_bitmap )
> +    if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
>          core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
>      release_pmu_ownship(PMU_OWNER_HVM);
>      vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
> diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
> index 63765fa..a48dae2 100644
> --- a/xen/arch/x86/hvm/vpmu.c
> +++ b/xen/arch/x86/hvm/vpmu.c
> @@ -143,8 +143,7 @@ void vpmu_save(struct vcpu *v)
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
>      int pcpu = smp_processor_id();
> 
> -    if ( !(vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) &&
> -           vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)) )
> +    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED |
> VPMU_CONTEXT_LOADED) )
>         return;
> 
>      vpmu->last_pcpu = pcpu;
> diff --git a/xen/include/asm-x86/hvm/vpmu.h
> b/xen/include/asm-x86/hvm/vpmu.h
> index 40f63fb..2a713be 100644
> --- a/xen/include/asm-x86/hvm/vpmu.h
> +++ b/xen/include/asm-x86/hvm/vpmu.h
> @@ -81,10 +81,11 @@ struct vpmu_struct {
>  #define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch
> Trace Store */
> 
> 
> -#define vpmu_set(_vpmu, _x)    ((_vpmu)->flags |= (_x))
> -#define vpmu_reset(_vpmu, _x)  ((_vpmu)->flags &= ~(_x))
> -#define vpmu_is_set(_vpmu, _x) ((_vpmu)->flags & (_x))
> -#define vpmu_clear(_vpmu)      ((_vpmu)->flags = 0)
> +#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
> +#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
> +#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
> +#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
> +#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
> 
>  int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
>  int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
> --
> 1.8.1.4


* Re: [PATCH v6 04/19] intel/VPMU: Clean up Intel VPMU code
  2014-05-13 15:53 ` [PATCH v6 04/19] intel/VPMU: Clean up Intel VPMU code Boris Ostrovsky
@ 2014-05-19 11:59   ` Tian, Kevin
  2014-05-19 14:30   ` Jan Beulich
  1 sibling, 0 replies; 65+ messages in thread
From: Tian, Kevin @ 2014-05-19 11:59 UTC (permalink / raw)
  To: Boris Ostrovsky, JBeulich, dietmar.hahn, suravee.suthikulpanit
  Cc: andrew.cooper3, keir, Dugger, Donald D, Nakajima, Jun, xen-devel

> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> Sent: Tuesday, May 13, 2014 11:53 PM
> 
> Remove struct pmumsr and core2_pmu_enable. Replace static MSR structures
> with
> fields in core2_vpmu_context.
> 
> Call core2_get_pmc_count() once, during initialization.
> 
> Properly clean up when core2_vpmu_alloc_resource() fails and add routines
> to remove MSRs from VMCS.
> 
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  xen/arch/x86/hvm/vmx/vmcs.c              |  55 +++++
>  xen/arch/x86/hvm/vmx/vpmu_core2.c        | 337
> ++++++++++++++-----------------
>  xen/include/asm-x86/hvm/vmx/vmcs.h       |   2 +
>  xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  19 --
>  4 files changed, 209 insertions(+), 204 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index cc84ca2..0f43a1b 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -1204,6 +1204,34 @@ int vmx_add_guest_msr(u32 msr)
>      return 0;
>  }
> 
> +void vmx_rm_guest_msr(u32 msr)
> +{
> +    struct vcpu *curr = current;
> +    unsigned int idx, msr_count = curr->arch.hvm_vmx.msr_count;
> +    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
> +
> +    if ( msr_area == NULL )
> +        return;
> +
> +    for ( idx = 0; idx < msr_count; idx++ )
> +        if ( msr_area[idx].index == msr )
> +            break;
> +
> +    if ( idx == msr_count )
> +        return;
> +
> +    for ( ; idx < msr_count - 1; idx++ )
> +    {
> +        msr_area[idx].index = msr_area[idx + 1].index;
> +        msr_area[idx].data = msr_area[idx + 1].data;
> +    }
> +    msr_area[msr_count - 1].index = 0;
> +
> +    curr->arch.hvm_vmx.msr_count = --msr_count;
> +    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
> +    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
> +}
> +
>  int vmx_add_host_load_msr(u32 msr)
>  {
>      struct vcpu *curr = current;
> @@ -1234,6 +1262,33 @@ int vmx_add_host_load_msr(u32 msr)
>      return 0;
>  }
> 
> +void vmx_rm_host_load_msr(u32 msr)
> +{
> +    struct vcpu *curr = current;
> +    unsigned int idx,  msr_count = curr->arch.hvm_vmx.host_msr_count;
> +    struct vmx_msr_entry *msr_area =
> curr->arch.hvm_vmx.host_msr_area;
> +
> +    if ( msr_area == NULL )
> +        return;
> +
> +    for ( idx = 0; idx < msr_count; idx++ )
> +        if ( msr_area[idx].index == msr )
> +            break;
> +
> +    if ( idx == msr_count )
> +        return;
> +
> +    for ( ; idx < msr_count - 1; idx++ )
> +    {
> +        msr_area[idx].index = msr_area[idx + 1].index;
> +        msr_area[idx].data = msr_area[idx + 1].data;
> +    }
> +    msr_area[msr_count - 1].index = 0;
> +
> +    curr->arch.hvm_vmx.host_msr_count = --msr_count;
> +    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
> +}
> +
>  void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
>  {
>      if ( !test_and_set_bit(vector, v->arch.hvm_vmx.eoi_exit_bitmap) )
> diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> index a3fb458..0a9c643 100644
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -69,6 +69,27 @@
>  static bool_t __read_mostly full_width_write;
> 
>  /*
> + * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
> + * counters. 4 bits for every counter.
> + */
> +#define FIXED_CTR_CTRL_BITS 4
> +#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
> +
> +#define VPMU_CORE2_MAX_FIXED_PMCS     4
> +struct core2_vpmu_context {
> +    u64 fixed_ctrl;
> +    u64 ds_area;
> +    u64 pebs_enable;
> +    u64 global_ovf_status;
> +    u64 enabled_cntrs;  /* Follows PERF_GLOBAL_CTRL MSR format */
> +    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
> +    struct arch_msr_pair arch_msr_pair[1];
> +};
> +
> +/* Number of general-purpose and fixed performance counters */
> +static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
> +
> +/*
>   * QUIRK to workaround an issue on various family 6 cpus.
>   * The issue leads to endless PMC interrupt loops on the processor.
>   * If the interrupt handler is running and a pmc reaches the value 0, this
> @@ -88,11 +109,8 @@ static void check_pmc_quirk(void)
>          is_pmc_quirk = 0;
>  }
> 
> -static int core2_get_pmc_count(void);
>  static void handle_pmc_quirk(u64 msr_content)
>  {
> -    int num_gen_pmc = core2_get_pmc_count();
> -    int num_fix_pmc  = 3;
>      int i;
>      u64 val;
> 
> @@ -100,7 +118,7 @@ static void handle_pmc_quirk(u64 msr_content)
>          return;
> 
>      val = msr_content;
> -    for ( i = 0; i < num_gen_pmc; i++ )
> +    for ( i = 0; i < arch_pmc_cnt; i++ )
>      {
>          if ( val & 0x1 )
>          {
> @@ -112,7 +130,7 @@ static void handle_pmc_quirk(u64 msr_content)
>          val >>= 1;
>      }
>      val = msr_content >> 32;
> -    for ( i = 0; i < num_fix_pmc; i++ )
> +    for ( i = 0; i < fixed_pmc_cnt; i++ )
>      {
>          if ( val & 0x1 )
>          {
> @@ -125,75 +143,42 @@ static void handle_pmc_quirk(u64 msr_content)
>      }
>  }
> 
> -static const u32 core2_fix_counters_msr[] = {
> -    MSR_CORE_PERF_FIXED_CTR0,
> -    MSR_CORE_PERF_FIXED_CTR1,
> -    MSR_CORE_PERF_FIXED_CTR2
> -};
> -
>  /*
> - * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
> - * counters. 4 bits for every counter.
> + * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
>   */
> -#define FIXED_CTR_CTRL_BITS 4
> -#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
> -
> -/* The index into the core2_ctrls_msr[] of this MSR used in
> core2_vpmu_dump() */
> -#define MSR_CORE_PERF_FIXED_CTR_CTRL_IDX 0
> -
> -/* Core 2 Non-architectual Performance Control MSRs. */
> -static const u32 core2_ctrls_msr[] = {
> -    MSR_CORE_PERF_FIXED_CTR_CTRL,
> -    MSR_IA32_PEBS_ENABLE,
> -    MSR_IA32_DS_AREA
> -};
> -
> -struct pmumsr {
> -    unsigned int num;
> -    const u32 *msr;
> -};
> -
> -static const struct pmumsr core2_fix_counters = {
> -    VPMU_CORE2_NUM_FIXED,
> -    core2_fix_counters_msr
> -};
> +static int core2_get_arch_pmc_count(void)
> +{
> +    u32 eax;
> 
> -static const struct pmumsr core2_ctrls = {
> -    VPMU_CORE2_NUM_CTRLS,
> -    core2_ctrls_msr
> -};
> -static int arch_pmc_cnt;
> +    eax = cpuid_eax(0xa);
> +    return ( (eax & PMU_GENERAL_NR_MASK) >>
> PMU_GENERAL_NR_SHIFT );
> +}
> 
>  /*
> - * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
> + * Read the number of fixed counters via CPUID.EDX[0xa].EDX[0..4]
>   */
> -static int core2_get_pmc_count(void)
> +static int core2_get_fixed_pmc_count(void)
>  {
> -    u32 eax, ebx, ecx, edx;
> -
> -    if ( arch_pmc_cnt == 0 )
> -    {
> -        cpuid(0xa, &eax, &ebx, &ecx, &edx);
> -        arch_pmc_cnt = (eax & PMU_GENERAL_NR_MASK) >>
> PMU_GENERAL_NR_SHIFT;
> -    }
> +    u32 eax;
> 
> -    return arch_pmc_cnt;
> +    eax = cpuid_eax(0xa);
> +    return ( (eax & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
>  }
> 
>  static u64 core2_calc_intial_glb_ctrl_msr(void)
>  {
> -    int arch_pmc_bits = (1 << core2_get_pmc_count()) - 1;
> -    u64 fix_pmc_bits  = (1 << 3) - 1;
> -    return ((fix_pmc_bits << 32) | arch_pmc_bits);
> +    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
> +    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
> +    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
>  }
> 
>  /* edx bits 5-12: Bit width of fixed-function performance counters  */
>  static int core2_get_bitwidth_fix_count(void)
>  {
> -    u32 eax, ebx, ecx, edx;
> +    u32 edx;
> 
> -    cpuid(0xa, &eax, &ebx, &ecx, &edx);
> -    return ((edx & PMU_FIXED_WIDTH_MASK) >>
> PMU_FIXED_WIDTH_SHIFT);
> +    edx = cpuid_edx(0xa);
> +    return ( (edx & PMU_FIXED_WIDTH_MASK) >>
> PMU_FIXED_WIDTH_SHIFT );
>  }
> 
>  static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
> @@ -201,9 +186,9 @@ static int is_core2_vpmu_msr(u32 msr_index, int
> *type, int *index)
>      int i;
>      u32 msr_index_pmc;
> 
> -    for ( i = 0; i < core2_fix_counters.num; i++ )
> +    for ( i = 0; i < fixed_pmc_cnt; i++ )
>      {
> -        if ( core2_fix_counters.msr[i] == msr_index )
> +        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
>          {
>              *type = MSR_TYPE_COUNTER;
>              *index = i;
> @@ -211,14 +196,12 @@ static int is_core2_vpmu_msr(u32 msr_index, int
> *type, int *index)
>          }
>      }
> 
> -    for ( i = 0; i < core2_ctrls.num; i++ )
> +    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
> +        (msr_index == MSR_IA32_DS_AREA) ||
> +        (msr_index == MSR_IA32_PEBS_ENABLE) )
>      {
> -        if ( core2_ctrls.msr[i] == msr_index )
> -        {
> -            *type = MSR_TYPE_CTRL;
> -            *index = i;
> -            return 1;
> -        }
> +        *type = MSR_TYPE_CTRL;
> +        return 1;
>      }
> 
>      if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
> @@ -231,7 +214,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int
> *type, int *index)
> 
>      msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
>      if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
> -         (msr_index_pmc < (MSR_IA32_PERFCTR0 +
> core2_get_pmc_count())) )
> +         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
>      {
>          *type = MSR_TYPE_ARCH_COUNTER;
>          *index = msr_index_pmc - MSR_IA32_PERFCTR0;
> @@ -239,7 +222,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int
> *type, int *index)
>      }
> 
>      if ( (msr_index >= MSR_P6_EVNTSEL0) &&
> -         (msr_index < (MSR_P6_EVNTSEL0 + core2_get_pmc_count())) )
> +         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
>      {
>          *type = MSR_TYPE_ARCH_CTRL;
>          *index = msr_index - MSR_P6_EVNTSEL0;
> @@ -254,13 +237,13 @@ static void core2_vpmu_set_msr_bitmap(unsigned
> long *msr_bitmap)
>      int i;
> 
>      /* Allow Read/Write PMU Counters MSR Directly. */
> -    for ( i = 0; i < core2_fix_counters.num; i++ )
> +    for ( i = 0; i < fixed_pmc_cnt; i++ )
>      {
> -        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
> msr_bitmap);
> -        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
> +        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
> msr_bitmap);
> +        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
>                    msr_bitmap + 0x800/BYTES_PER_LONG);
>      }
> -    for ( i = 0; i < core2_get_pmc_count(); i++ )
> +    for ( i = 0; i < arch_pmc_cnt; i++ )
>      {
>          clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
> msr_bitmap);
>          clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
> @@ -275,26 +258,28 @@ static void core2_vpmu_set_msr_bitmap(unsigned
> long *msr_bitmap)
>      }
> 
>      /* Allow Read PMU Non-global Controls Directly. */
> -    for ( i = 0; i < core2_ctrls.num; i++ )
> -        clear_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
> -    for ( i = 0; i < core2_get_pmc_count(); i++ )
> -        clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
> +    for ( i = 0; i < arch_pmc_cnt; i++ )
> +         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i),
> msr_bitmap);
> +
> +    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL),
> msr_bitmap);
> +    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
> +    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
>  }
> 
>  static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
>  {
>      int i;
> 
> -    for ( i = 0; i < core2_fix_counters.num; i++ )
> +    for ( i = 0; i < fixed_pmc_cnt; i++ )
>      {
> -        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
> msr_bitmap);
> -        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
> +        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
> msr_bitmap);
> +        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
>                  msr_bitmap + 0x800/BYTES_PER_LONG);
>      }
> -    for ( i = 0; i < core2_get_pmc_count(); i++ )
> +    for ( i = 0; i < arch_pmc_cnt; i++ )
>      {
> -        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
> -        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
> +        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
> +        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
>                  msr_bitmap + 0x800/BYTES_PER_LONG);
> 
>          if ( full_width_write )
> @@ -305,10 +290,12 @@ static void
> core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
>          }
>      }
> 
> -    for ( i = 0; i < core2_ctrls.num; i++ )
> -        set_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
> -    for ( i = 0; i < core2_get_pmc_count(); i++ )
> -        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
> +    for ( i = 0; i < arch_pmc_cnt; i++ )
> +        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
> +
> +    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL),
> msr_bitmap);
> +    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
> +    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
>  }
> 
>  static inline void __core2_vpmu_save(struct vcpu *v)
> @@ -316,10 +303,10 @@ static inline void __core2_vpmu_save(struct vcpu
> *v)
>      int i;
>      struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
> 
> -    for ( i = 0; i < core2_fix_counters.num; i++ )
> -        rdmsrl(core2_fix_counters.msr[i],
> core2_vpmu_cxt->fix_counters[i]);
> -    for ( i = 0; i < core2_get_pmc_count(); i++ )
> -        rdmsrl(MSR_IA32_PERFCTR0+i,
> core2_vpmu_cxt->arch_msr_pair[i].counter);
> +    for ( i = 0; i < fixed_pmc_cnt; i++ )
> +        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i,
> core2_vpmu_cxt->fix_counters[i]);
> +    for ( i = 0; i < arch_pmc_cnt; i++ )
> +        rdmsrl(MSR_IA32_PERFCTR0 + i,
> core2_vpmu_cxt->arch_msr_pair[i].counter);
>  }
> 
>  static int core2_vpmu_save(struct vcpu *v)
> @@ -344,20 +331,22 @@ static inline void __core2_vpmu_load(struct vcpu
> *v)
>      unsigned int i, pmc_start;
>      struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
> 
> -    for ( i = 0; i < core2_fix_counters.num; i++ )
> -        wrmsrl(core2_fix_counters.msr[i],
> core2_vpmu_cxt->fix_counters[i]);
> +    for ( i = 0; i < fixed_pmc_cnt; i++ )
> +        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i,
> core2_vpmu_cxt->fix_counters[i]);
> 
>      if ( full_width_write )
>          pmc_start = MSR_IA32_A_PERFCTR0;
>      else
>          pmc_start = MSR_IA32_PERFCTR0;
> -    for ( i = 0; i < core2_get_pmc_count(); i++ )
> +    for ( i = 0; i < arch_pmc_cnt; i++ )
> +    {
>          wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
> +        wrmsrl(MSR_P6_EVNTSEL0 + i,
> core2_vpmu_cxt->arch_msr_pair[i].control);
> +    }
> 
> -    for ( i = 0; i < core2_ctrls.num; i++ )
> -        wrmsrl(core2_ctrls.msr[i], core2_vpmu_cxt->ctrls[i]);
> -    for ( i = 0; i < core2_get_pmc_count(); i++ )
> -        wrmsrl(MSR_P6_EVNTSEL0+i,
> core2_vpmu_cxt->arch_msr_pair[i].control);
> +    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL,
> core2_vpmu_cxt->fixed_ctrl);
> +    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
> +    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
>  }
> 
>  static void core2_vpmu_load(struct vcpu *v)
> @@ -376,56 +365,39 @@ static int core2_vpmu_alloc_resource(struct vcpu
> *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
>      struct core2_vpmu_context *core2_vpmu_cxt;
> -    struct core2_pmu_enable *pmu_enable;
> 
>      if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
>          return 0;
> 
>      wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
>      if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
> -        return 0;
> +        goto out_err;
> 
>      if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
> -        return 0;
> +        goto out_err;
>      vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
>                   core2_calc_intial_glb_ctrl_msr());
> 
> -    pmu_enable = xzalloc_bytes(sizeof(struct core2_pmu_enable) +
> -                               core2_get_pmc_count() - 1);
> -    if ( !pmu_enable )
> -        goto out1;
> -
>      core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
> -                    (core2_get_pmc_count()-1)*sizeof(struct
> arch_msr_pair));
> +                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
>      if ( !core2_vpmu_cxt )
> -        goto out2;
> -    core2_vpmu_cxt->pmu_enable = pmu_enable;
> +        goto out_err;
> +
>      vpmu->context = (void *)core2_vpmu_cxt;
> 
> +    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
> +
>      return 1;
> - out2:
> -    xfree(pmu_enable);
> - out1:
> -    gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, PMU
> feature is "
> -             "unavailable on domain %d vcpu %d.\n",
> -             v->vcpu_id, v->domain->domain_id);
> -    return 0;
> -}
> 
> -static void core2_vpmu_save_msr_context(struct vcpu *v, int type,
> -                                       int index, u64 msr_data)
> -{
> -    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
> +out_err:
> +    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
> +    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
> +    release_pmu_ownship(PMU_OWNER_HVM);
> 
> -    switch ( type )
> -    {
> -    case MSR_TYPE_CTRL:
> -        core2_vpmu_cxt->ctrls[index] = msr_data;
> -        break;
> -    case MSR_TYPE_ARCH_CTRL:
> -        core2_vpmu_cxt->arch_msr_pair[index].control = msr_data;
> -        break;
> -    }
> +    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
> +           v->vcpu_id, v->domain->domain_id);
> +
> +    return 0;
>  }
> 
>  static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int
> *index)
> @@ -436,10 +408,8 @@ static int core2_vpmu_msr_common_check(u32
> msr_index, int *type, int *index)
>          return 0;
> 
>      if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
> -	 (vpmu->context != NULL ||
> -	  !core2_vpmu_alloc_resource(current)) )
> +         !core2_vpmu_alloc_resource(current) )
>          return 0;
> -    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
> 
>      /* Do the lazy load staff. */
>      if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
> @@ -454,8 +424,7 @@ static int core2_vpmu_msr_common_check(u32
> msr_index, int *type, int *index)
> 
>  static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>  {
> -    u64 global_ctrl, non_global_ctrl;
> -    char pmu_enable = 0;
> +    u64 global_ctrl;
>      int i, tmp;
>      int type = -1, index = -1;
>      struct vcpu *v = current;
> @@ -500,6 +469,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr,
> uint64_t msr_content)
>          if ( msr_content & 1 )
>              gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
>                       "which is not supported.\n");
> +        core2_vpmu_cxt->pebs_enable = msr_content;
>          return 1;
>      case MSR_IA32_DS_AREA:
>          if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
> @@ -512,57 +482,48 @@ static int core2_vpmu_do_wrmsr(unsigned int msr,
> uint64_t msr_content)
>                  hvm_inject_hw_exception(TRAP_gp_fault, 0);
>                  return 1;
>              }
> -            core2_vpmu_cxt->pmu_enable->ds_area_enable =
> msr_content ? 1 : 0;
> +            core2_vpmu_cxt->ds_area = msr_content;
>              break;
>          }
>          gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
>          return 1;
>      case MSR_CORE_PERF_GLOBAL_CTRL:
>          global_ctrl = msr_content;
> -        for ( i = 0; i < core2_get_pmc_count(); i++ )
> -        {
> -            rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
> -            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] =
> -                    global_ctrl & (non_global_ctrl >> 22) & 1;
> -            global_ctrl >>= 1;
> -        }
> -
> -        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
> -        global_ctrl = msr_content >> 32;
> -        for ( i = 0; i < core2_fix_counters.num; i++ )
> -        {
> -            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
> -                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
> -            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
> -            global_ctrl >>= 1;
> -        }
>          break;
>      case MSR_CORE_PERF_FIXED_CTR_CTRL:
> -        non_global_ctrl = msr_content;
>          vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
> &global_ctrl);
> -        global_ctrl >>= 32;
> -        for ( i = 0; i < core2_fix_counters.num; i++ )
> +        core2_vpmu_cxt->enabled_cntrs &=
> +                ~(((1ULL << VPMU_CORE2_MAX_FIXED_PMCS) - 1) <<
> 32);
> +        if ( msr_content != 0 )
>          {
> -            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
> -                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
> -            non_global_ctrl >>= 4;
> -            global_ctrl >>= 1;
> +            u64 val = msr_content;
> +            for ( i = 0; i < fixed_pmc_cnt; i++ )
> +            {
> +                if ( val & 3 )
> +                    core2_vpmu_cxt->enabled_cntrs |= (1ULL << 32) <<
> i;
> +                val >>= FIXED_CTR_CTRL_BITS;
> +            }
>          }
> +
> +        core2_vpmu_cxt->fixed_ctrl = msr_content;
>          break;
>      default:
>          tmp = msr - MSR_P6_EVNTSEL0;
> -        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
> &global_ctrl);
> -        if ( tmp >= 0 && tmp < core2_get_pmc_count() )
> -            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[tmp] =
> -                (global_ctrl >> tmp) & (msr_content >> 22) & 1;
> +        if ( tmp >= 0 && tmp < arch_pmc_cnt )
> +        {
> +            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
> &global_ctrl);
> +
> +            if ( msr_content & (1ULL << 22) )
> +                core2_vpmu_cxt->enabled_cntrs |= 1ULL << tmp;
> +            else
> +                core2_vpmu_cxt->enabled_cntrs &= ~(1ULL << tmp);
> +
> +            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
> +        }
>      }
> 
> -    for ( i = 0; i < core2_fix_counters.num; i++ )
> -        pmu_enable |=
> core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i];
> -    for ( i = 0; i < core2_get_pmc_count(); i++ )
> -        pmu_enable |=
> core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i];
> -    pmu_enable |= core2_vpmu_cxt->pmu_enable->ds_area_enable;
> -    if ( pmu_enable )
> +    if ((global_ctrl & core2_vpmu_cxt->enabled_cntrs) ||
> +        (core2_vpmu_cxt->ds_area != 0)  )
>          vpmu_set(vpmu, VPMU_RUNNING);
>      else
>          vpmu_reset(vpmu, VPMU_RUNNING);
> @@ -580,7 +541,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr,
> uint64_t msr_content)
>          vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
>      }
> 
> -    core2_vpmu_save_msr_context(v, type, index, msr_content);
>      if ( type != MSR_TYPE_GLOBAL )
>      {
>          u64 mask;
> @@ -596,7 +556,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr,
> uint64_t msr_content)
>              if  ( msr == MSR_IA32_DS_AREA )
>                  break;
>              /* 4 bits per counter, currently 3 fixed counters implemented.
> */
> -            mask = ~((1ull << (VPMU_CORE2_NUM_FIXED *
> FIXED_CTR_CTRL_BITS)) - 1);
> +            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) -
> 1);
>              if (msr_content & mask)
>                  inject_gp = 1;
>              break;
> @@ -681,7 +641,7 @@ static void core2_vpmu_do_cpuid(unsigned int input,
>  static void core2_vpmu_dump(const struct vcpu *v)
>  {
>      const struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    int i, num;
> +    int i;
>      const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
>      u64 val;
> 
> @@ -699,27 +659,25 @@ static void core2_vpmu_dump(const struct vcpu *v)
> 
>      printk("    vPMU running\n");
>      core2_vpmu_cxt = vpmu->context;
> -    num = core2_get_pmc_count();
> +
>      /* Print the contents of the counter and its configuration msr. */
> -    for ( i = 0; i < num; i++ )
> +    for ( i = 0; i < arch_pmc_cnt; i++ )
>      {
>          const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
> 
> -        if ( core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] )
> -            printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
> -                   i, msr_pair[i].counter, msr_pair[i].control);
> +        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
> +               i, msr_pair[i].counter, msr_pair[i].control);
>      }
>      /*
>       * The configuration of the fixed counter is 4 bits each in the
>       * MSR_CORE_PERF_FIXED_CTR_CTRL.
>       */
> -    val = core2_vpmu_cxt->ctrls[MSR_CORE_PERF_FIXED_CTR_CTRL_IDX];
> -    for ( i = 0; i < core2_fix_counters.num; i++ )
> +    val = core2_vpmu_cxt->fixed_ctrl;
> +    for ( i = 0; i < fixed_pmc_cnt; i++ )
>      {
> -        if ( core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] )
> -            printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
> -                   i, core2_vpmu_cxt->fix_counters[i],
> -                   val & FIXED_CTR_CTRL_MASK);
> +        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
> +               i, core2_vpmu_cxt->fix_counters[i],
> +               val & FIXED_CTR_CTRL_MASK);
>          val >>= FIXED_CTR_CTRL_BITS;
>      }
>  }
> @@ -737,7 +695,7 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
>          if ( is_pmc_quirk )
>              handle_pmc_quirk(msr_content);
>          core2_vpmu_cxt->global_ovf_status |= msr_content;
> -        msr_content = 0xC000000700000000 | ((1 << core2_get_pmc_count()) - 1);
> +        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
>          wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
>      }
>      else
> @@ -800,18 +758,27 @@ static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
>          }
>      }
>  func_out:
> +
> +    arch_pmc_cnt = core2_get_arch_pmc_count();
> +    fixed_pmc_cnt = core2_get_fixed_pmc_count();
> +    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
> +    {
> +        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
> +        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
> +               fixed_pmc_cnt);
> +    }
>      check_pmc_quirk();
> +
>      return 0;
>  }
> 
>  static void core2_vpmu_destroy(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
> 
>      if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
>          return;
> -    xfree(core2_vpmu_cxt->pmu_enable);
> +
>      xfree(vpmu->context);
>      if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
>          core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
> index 445b39f..50befe1 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -480,7 +480,9 @@ void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
>  int vmx_read_guest_msr(u32 msr, u64 *val);
>  int vmx_write_guest_msr(u32 msr, u64 val);
>  int vmx_add_guest_msr(u32 msr);
> +void vmx_rm_guest_msr(u32 msr);
>  int vmx_add_host_load_msr(u32 msr);
> +void vmx_rm_host_load_msr(u32 msr);
>  void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
>  void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
>  void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
> diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
> index 60b05fd..410372d 100644
> --- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
> +++ b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
> @@ -23,29 +23,10 @@
>  #ifndef __ASM_X86_HVM_VPMU_CORE_H_
>  #define __ASM_X86_HVM_VPMU_CORE_H_
> 
> -/* Currently only 3 fixed counters are supported. */
> -#define VPMU_CORE2_NUM_FIXED 3
> -/* Currently only 3 Non-architectual Performance Control MSRs */
> -#define VPMU_CORE2_NUM_CTRLS 3
> -
>  struct arch_msr_pair {
>      u64 counter;
>      u64 control;
>  };
> 
> -struct core2_pmu_enable {
> -    char ds_area_enable;
> -    char fixed_ctr_enable[VPMU_CORE2_NUM_FIXED];
> -    char arch_pmc_enable[1];
> -};
> -
> -struct core2_vpmu_context {
> -    struct core2_pmu_enable *pmu_enable;
> -    u64 fix_counters[VPMU_CORE2_NUM_FIXED];
> -    u64 ctrls[VPMU_CORE2_NUM_CTRLS];
> -    u64 global_ovf_status;
> -    struct arch_msr_pair arch_msr_pair[1];
> -};
> -
>  #endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
> 
> --
> 1.8.1.4

^ permalink raw reply	[flat|nested] 65+ messages in thread
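
The enabled_cntrs bookkeeping adopted in the hunks above replaces the old
per-counter pmu_enable arrays with a single 64-bit mask that mirrors the
MSR_CORE_PERF_GLOBAL_CTRL bit layout: general-purpose counter i at bit i,
fixed counter i at bit 32+i. A minimal standalone C sketch of that encoding
follows; the function names and the file-scope variable are illustrative,
not part of the patch.

    #include <stdint.h>

    #define FIXED_CTR_CTRL_BITS 4

    /* Bit i: general-purpose counter i; bit 32+i: fixed counter i. */
    static uint64_t enabled_cntrs;

    /* On a write to an EVNTSEL MSR: track bit 22 (the enable bit). */
    static void track_evntsel_write(unsigned int i, uint64_t msr_content)
    {
        if ( msr_content & (1ULL << 22) )
            enabled_cntrs |= 1ULL << i;
        else
            enabled_cntrs &= ~(1ULL << i);
    }

    /* On a write to FIXED_CTR_CTRL: each counter has a 4-bit field whose
     * low 2 bits enable it in some ring. */
    static void track_fixed_ctrl_write(unsigned int fixed_pmc_cnt,
                                       uint64_t msr_content)
    {
        unsigned int i;

        enabled_cntrs &= ~(((1ULL << fixed_pmc_cnt) - 1) << 32);
        for ( i = 0; i < fixed_pmc_cnt; i++ )
        {
            if ( msr_content & 3 )
                enabled_cntrs |= (1ULL << 32) << i;
            msr_content >>= FIXED_CTR_CTRL_BITS;
        }
    }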

* Re: [PATCH v6 05/19] vmx: Merge MSR management routines
  2014-05-13 15:53 ` [PATCH v6 05/19] vmx: Merge MSR management routines Boris Ostrovsky
@ 2014-05-19 12:00   ` Tian, Kevin
  2014-05-22 10:24   ` Dietmar Hahn
  1 sibling, 0 replies; 65+ messages in thread
From: Tian, Kevin @ 2014-05-19 12:00 UTC (permalink / raw)
  To: Boris Ostrovsky, JBeulich, dietmar.hahn, suravee.suthikulpanit
  Cc: andrew.cooper3, keir, Dugger, Donald D, Nakajima, Jun, xen-devel

> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> Sent: Tuesday, May 13, 2014 11:53 PM
> 
> vmx_add_host_load_msr()/vmx_rm_host_load_msr() and
> vmx_add_guest_msr()/vmx_rm_guest_msr()
> share a fair amount of code. Merge them to simplify code maintenance.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  xen/arch/x86/hvm/vmx/vmcs.c        | 154 +++++++++++++++++--------------------
>  xen/arch/x86/hvm/vmx/vmx.c         |   4 +-
>  xen/arch/x86/hvm/vmx/vpmu_core2.c  |   8 +-
>  xen/include/asm-x86/hvm/vmx/vmcs.h |  10 ++-
>  4 files changed, 83 insertions(+), 93 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index 0f43a1b..aaa3691 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -1172,121 +1172,109 @@ int vmx_write_guest_msr(u32 msr, u64 val)
>      return -ESRCH;
>  }
> 
> -int vmx_add_guest_msr(u32 msr)
> +int vmx_add_msr(u32 msr, u8 type)
>  {
>      struct vcpu *curr = current;
> -    unsigned int i, msr_count = curr->arch.hvm_vmx.msr_count;
> -    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
> +    unsigned int idx, *msr_count;
> +    struct vmx_msr_entry **msr_area;
> 
> -    if ( msr_area == NULL )
> +    ASSERT( (type == VMX_GUEST_MSR) || (type == VMX_HOST_MSR) );
> +
> +    if ( type == VMX_GUEST_MSR )
>      {
> -        if ( (msr_area = alloc_xenheap_page()) == NULL )
> +        msr_count = &curr->arch.hvm_vmx.msr_count;
> +        msr_area = &curr->arch.hvm_vmx.msr_area;
> +    }
> +    else
> +    {
> +        msr_count = &curr->arch.hvm_vmx.host_msr_count;
> +        msr_area = &curr->arch.hvm_vmx.host_msr_area;
> +    }
> +
> +    if ( *msr_area == NULL )
> +    {
> +        if ( (*msr_area = alloc_xenheap_page()) == NULL )
>              return -ENOMEM;
> -        curr->arch.hvm_vmx.msr_area = msr_area;
> -        __vmwrite(VM_EXIT_MSR_STORE_ADDR, virt_to_maddr(msr_area));
> -        __vmwrite(VM_ENTRY_MSR_LOAD_ADDR, virt_to_maddr(msr_area));
> +
> +        if ( type == VMX_GUEST_MSR )
> +        {
> +            __vmwrite(VM_EXIT_MSR_STORE_ADDR, virt_to_maddr(*msr_area));
> +            __vmwrite(VM_ENTRY_MSR_LOAD_ADDR, virt_to_maddr(*msr_area));
> +        }
> +        else
> +            __vmwrite(VM_EXIT_MSR_LOAD_ADDR, virt_to_maddr(*msr_area));
>      }
> 
> -    for ( i = 0; i < msr_count; i++ )
> -        if ( msr_area[i].index == msr )
> +    for ( idx = 0; idx < *msr_count; idx++ )
> +        if ( msr_area[idx]->index == msr )
>              return 0;
> 
> -    if ( msr_count == (PAGE_SIZE / sizeof(struct vmx_msr_entry)) )
> +    if ( *msr_count == (PAGE_SIZE / sizeof(struct vmx_msr_entry)) )
>          return -ENOSPC;
> 
> -    msr_area[msr_count].index = msr;
> -    msr_area[msr_count].mbz   = 0;
> -    msr_area[msr_count].data  = 0;
> -    curr->arch.hvm_vmx.msr_count = ++msr_count;
> -    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
> -    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
> +    msr_area[*msr_count]->index = msr;
> +    msr_area[*msr_count]->mbz   = 0;
> +    (*msr_count)++;
> +    if ( type == VMX_GUEST_MSR )
> +    {
> +        msr_area[*msr_count - 1]->data  = 0;
> +        __vmwrite(VM_EXIT_MSR_STORE_COUNT, *msr_count);
> +        __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, *msr_count);
> +    }
> +    else
> +    {
> +        rdmsrl(msr, msr_area[*msr_count - 1]->data);
> +        __vmwrite(VM_EXIT_MSR_LOAD_COUNT, *msr_count);
> +    }
> 
>      return 0;
>  }
> 
> -void vmx_rm_guest_msr(u32 msr)
> +void vmx_rm_msr(u32 msr, u8 type)
>  {
>      struct vcpu *curr = current;
> -    unsigned int idx, msr_count = curr->arch.hvm_vmx.msr_count;
> -    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
> +    unsigned int idx, *msr_count;
> +    struct vmx_msr_entry **msr_area;
> 
> -    if ( msr_area == NULL )
> -        return;
> -
> -    for ( idx = 0; idx < msr_count; idx++ )
> -        if ( msr_area[idx].index == msr )
> -            break;
> +    ASSERT( (type == VMX_GUEST_MSR) || (type == VMX_HOST_MSR) );
> 
> -    if ( idx == msr_count )
> -        return;
> -
> -    for ( ; idx < msr_count - 1; idx++ )
> +    if ( type == VMX_GUEST_MSR )
>      {
> -        msr_area[idx].index = msr_area[idx + 1].index;
> -        msr_area[idx].data = msr_area[idx + 1].data;
> +        msr_count = &curr->arch.hvm_vmx.msr_count;
> +        msr_area = &curr->arch.hvm_vmx.msr_area;
>      }
> -    msr_area[msr_count - 1].index = 0;
> -
> -    curr->arch.hvm_vmx.msr_count = --msr_count;
> -    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
> -    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
> -}
> -
> -int vmx_add_host_load_msr(u32 msr)
> -{
> -    struct vcpu *curr = current;
> -    unsigned int i, msr_count = curr->arch.hvm_vmx.host_msr_count;
> -    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.host_msr_area;
> -
> -    if ( msr_area == NULL )
> +    else
>      {
> -        if ( (msr_area = alloc_xenheap_page()) == NULL )
> -            return -ENOMEM;
> -        curr->arch.hvm_vmx.host_msr_area = msr_area;
> -        __vmwrite(VM_EXIT_MSR_LOAD_ADDR, virt_to_maddr(msr_area));
> +        msr_count = &curr->arch.hvm_vmx.host_msr_count;
> +        msr_area = &curr->arch.hvm_vmx.host_msr_area;
>      }
> 
> -    for ( i = 0; i < msr_count; i++ )
> -        if ( msr_area[i].index == msr )
> -            return 0;
> -
> -    if ( msr_count == (PAGE_SIZE / sizeof(struct vmx_msr_entry)) )
> -        return -ENOSPC;
> -
> -    msr_area[msr_count].index = msr;
> -    msr_area[msr_count].mbz   = 0;
> -    rdmsrl(msr, msr_area[msr_count].data);
> -    curr->arch.hvm_vmx.host_msr_count = ++msr_count;
> -    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
> -
> -    return 0;
> -}
> -
> -void vmx_rm_host_load_msr(u32 msr)
> -{
> -    struct vcpu *curr = current;
> -    unsigned int idx,  msr_count = curr->arch.hvm_vmx.host_msr_count;
> -    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.host_msr_area;
> -
> -    if ( msr_area == NULL )
> +    if ( *msr_area == NULL )
>          return;
> 
> -    for ( idx = 0; idx < msr_count; idx++ )
> -        if ( msr_area[idx].index == msr )
> +    for ( idx = 0; idx < *msr_count; idx++ )
> +        if ( msr_area[idx]->index == msr )
>              break;
> 
> -    if ( idx == msr_count )
> +    if ( idx == *msr_count )
>          return;
> 
> -    for ( ; idx < msr_count - 1; idx++ )
> +    for ( ; idx < *msr_count - 1; idx++ )
>      {
> -        msr_area[idx].index = msr_area[idx + 1].index;
> -        msr_area[idx].data = msr_area[idx + 1].data;
> +        msr_area[idx]->index = msr_area[idx + 1]->index;
> +        msr_area[idx]->data = msr_area[idx + 1]->data;
> +    }
> +    msr_area[*msr_count - 1]->index = 0;
> +    (*msr_count)--;
> +    if ( type == VMX_GUEST_MSR )
> +    {
> +        __vmwrite(VM_EXIT_MSR_STORE_COUNT, *msr_count);
> +        __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, *msr_count);
> +    }
> +    else
> +    {
> +        __vmwrite(VM_EXIT_MSR_LOAD_COUNT, *msr_count);
>      }
> -    msr_area[msr_count - 1].index = 0;
> -
> -    curr->arch.hvm_vmx.host_msr_count = --msr_count;
> -    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
>  }
> 
>  void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index ecdbc17..23d58d9 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2234,12 +2234,12 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
> 
>              for ( ; (rc == 0) && lbr->count; lbr++ )
>                  for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
> -                    if ( (rc = vmx_add_guest_msr(lbr->base + i)) == 0 )
> +                    if ( (rc = vmx_add_msr(lbr->base + i, VMX_GUEST_MSR)) == 0 )
>                          vmx_disable_intercept_for_msr(v, lbr->base + i, MSR_TYPE_R | MSR_TYPE_W);
>          }
> 
>          if ( (rc < 0) ||
> -             (vmx_add_host_load_msr(msr) < 0) )
> +             (vmx_add_msr(msr, VMX_HOST_MSR) < 0) )
>              hvm_inject_hw_exception(TRAP_machine_check, 0);
>          else
>          {
> diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> index 0a9c643..5e980fa 100644
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -370,10 +370,10 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
>          return 0;
> 
>      wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
> -    if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
> +    if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR) )
>          goto out_err;
> 
> -    if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
> +    if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR) )
>          goto out_err;
>      vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
>                   core2_calc_intial_glb_ctrl_msr());
> @@ -390,8 +390,8 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
>      return 1;
> 
>  out_err:
> -    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
> -    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
> +    vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR);
> +    vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR);
>      release_pmu_ownship(PMU_OWNER_HVM);
> 
>      printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
> index 50befe1..dd34b2c 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -475,14 +475,16 @@ enum vmcs_field {
> 
>  #define MSR_TYPE_R 1
>  #define MSR_TYPE_W 2
> +
> +#define VMX_GUEST_MSR 0
> +#define VMX_HOST_MSR  1
> +
>  void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
>  void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
>  int vmx_read_guest_msr(u32 msr, u64 *val);
>  int vmx_write_guest_msr(u32 msr, u64 val);
> -int vmx_add_guest_msr(u32 msr);
> -void vmx_rm_guest_msr(u32 msr);
> -int vmx_add_host_load_msr(u32 msr);
> -void vmx_rm_host_load_msr(u32 msr);
> +int vmx_add_msr(u32 msr, u8 type);
> +void vmx_rm_msr(u32 msr, u8 type);
>  void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
>  void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
>  void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
> --
> 1.8.1.4

^ permalink raw reply	[flat|nested] 65+ messages in thread
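
The merged vmx_rm_msr() above removes an entry by compacting the MSR
save/load array in place, where previously the same loop existed once per
area. A self-contained sketch of that compaction, with a plain array
standing in for the VMCS-backed area (the struct and function names here
are illustrative stand-ins, not the Xen API):

    #include <stdint.h>

    struct msr_entry {
        uint32_t index;
        uint32_t mbz;
        uint64_t data;
    };

    /* Remove 'msr' from area[0 .. *count), shifting later entries down
     * one slot and shrinking the count, as vmx_rm_msr() does. */
    static void msr_area_remove(struct msr_entry *area, unsigned int *count,
                                uint32_t msr)
    {
        unsigned int idx;

        for ( idx = 0; idx < *count; idx++ )
            if ( area[idx].index == msr )
                break;

        if ( idx == *count )    /* not present: nothing to do */
            return;

        for ( ; idx < *count - 1; idx++ )
            area[idx] = area[idx + 1];

        area[*count - 1].index = 0;
        (*count)--;
    }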

* Re: [PATCH v6 08/19] x86/VPMU: Add public xenpmu.h
  2014-05-13 15:53 ` [PATCH v6 08/19] x86/VPMU: Add public xenpmu.h Boris Ostrovsky
@ 2014-05-19 12:02   ` Tian, Kevin
  2014-05-20 15:24   ` Jan Beulich
  2014-05-21  7:19   ` Dietmar Hahn
  2 siblings, 0 replies; 65+ messages in thread
From: Tian, Kevin @ 2014-05-19 12:02 UTC (permalink / raw)
  To: Boris Ostrovsky, JBeulich, dietmar.hahn, suravee.suthikulpanit
  Cc: andrew.cooper3, keir, Dugger, Donald D, Nakajima, Jun, xen-devel

> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> Sent: Tuesday, May 13, 2014 11:53 PM
> 
> Add pmu.h header files and move various macros and structures that will be
> shared between the hypervisor and PV guests into them.
> 
> Move MSR banks out of architectural PMU structures to allow for larger sizes
> in the future. The banks are allocated immediately after the context
> structure, and the PMU structures store offsets to them.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
> Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  xen/arch/x86/hvm/svm/vpmu.c              |  81 ++++++++++++-----------
>  xen/arch/x86/hvm/vmx/vpmu_core2.c        | 110 +++++++++++++++++--------------
>  xen/arch/x86/hvm/vpmu.c                  |   1 +
>  xen/arch/x86/oprofile/op_model_ppro.c    |   6 +-
>  xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  32 ---------
>  xen/include/asm-x86/hvm/vpmu.h           |  16 ++---
>  xen/include/public/arch-arm.h            |   3 +
>  xen/include/public/arch-x86/pmu.h        |  62 +++++++++++++++++
>  xen/include/public/pmu.h                 |  38 +++++++++++
>  9 files changed, 220 insertions(+), 129 deletions(-)
>  delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
>  create mode 100644 xen/include/public/arch-x86/pmu.h
>  create mode 100644 xen/include/public/pmu.h
> 
> diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
> index 2fbe2c1..ebdba8e 100644
> --- a/xen/arch/x86/hvm/svm/vpmu.c
> +++ b/xen/arch/x86/hvm/svm/vpmu.c
> @@ -30,10 +30,7 @@
>  #include <asm/apic.h>
>  #include <asm/hvm/vlapic.h>
>  #include <asm/hvm/vpmu.h>
> -
> -#define F10H_NUM_COUNTERS 4
> -#define F15H_NUM_COUNTERS 6
> -#define MAX_NUM_COUNTERS F15H_NUM_COUNTERS
> +#include <public/pmu.h>
> 
>  #define MSR_F10H_EVNTSEL_GO_SHIFT   40
>  #define MSR_F10H_EVNTSEL_EN_SHIFT   22
> @@ -49,6 +46,10 @@ static const u32 __read_mostly *counters;
>  static const u32 __read_mostly *ctrls;
>  static bool_t __read_mostly k7_counters_mirrored;
> 
> +#define F10H_NUM_COUNTERS   4
> +#define F15H_NUM_COUNTERS   6
> +#define AMD_MAX_COUNTERS    6
> +
>  /* PMU Counter MSRs. */
>  static const u32 AMD_F10H_COUNTERS[] = {
>      MSR_K7_PERFCTR0,
> @@ -83,12 +84,10 @@ static const u32 AMD_F15H_CTRLS[] = {
>      MSR_AMD_FAM15H_EVNTSEL5
>  };
> 
> -/* storage for context switching */
> -struct amd_vpmu_context {
> -    u64 counters[MAX_NUM_COUNTERS];
> -    u64 ctrls[MAX_NUM_COUNTERS];
> -    bool_t msr_bitmap_set;
> -};
> +/* Use private context as a flag for MSR bitmap */
> +#define msr_bitmap_on(vpmu)    {vpmu->priv_context = (void *)-1;}
> +#define msr_bitmap_off(vpmu)   {vpmu->priv_context = NULL;}
> +#define is_msr_bitmap_on(vpmu) (vpmu->priv_context != NULL)
> 
>  static inline int get_pmu_reg_type(u32 addr)
>  {
> @@ -142,7 +141,6 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
>  {
>      unsigned int i;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctxt = vpmu->context;
> 
>      for ( i = 0; i < num_counters; i++ )
>      {
> @@ -150,14 +148,13 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
>          svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
>      }
> 
> -    ctxt->msr_bitmap_set = 1;
> +    msr_bitmap_on(vpmu);
>  }
> 
>  static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
>  {
>      unsigned int i;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctxt = vpmu->context;
> 
>      for ( i = 0; i < num_counters; i++ )
>      {
> @@ -165,7 +162,7 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
>          svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
>      }
> 
> -    ctxt->msr_bitmap_set = 0;
> +    msr_bitmap_off(vpmu);
>  }
> 
>  static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
> @@ -177,19 +174,22 @@ static inline void context_load(struct vcpu *v)
>  {
>      unsigned int i;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctxt = vpmu->context;
> +    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
> +    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
> +    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
> 
>      for ( i = 0; i < num_counters; i++ )
>      {
> -        wrmsrl(counters[i], ctxt->counters[i]);
> -        wrmsrl(ctrls[i], ctxt->ctrls[i]);
> +        wrmsrl(counters[i], counter_regs[i]);
> +        wrmsrl(ctrls[i], ctrl_regs[i]);
>      }
>  }
> 
>  static void amd_vpmu_load(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctxt = vpmu->context;
> +    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
> +    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
> 
>      vpmu_reset(vpmu, VPMU_FROZEN);
> 
> @@ -198,7 +198,7 @@ static void amd_vpmu_load(struct vcpu *v)
>          unsigned int i;
> 
>          for ( i = 0; i < num_counters; i++ )
> -            wrmsrl(ctrls[i], ctxt->ctrls[i]);
> +            wrmsrl(ctrls[i], ctrl_regs[i]);
> 
>          return;
>      }
> @@ -212,17 +212,17 @@ static inline void context_save(struct vcpu *v)
>  {
>      unsigned int i;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctxt = vpmu->context;
> +    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
> +    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
> 
>      /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
>      for ( i = 0; i < num_counters; i++ )
> -        rdmsrl(counters[i], ctxt->counters[i]);
> +        rdmsrl(counters[i], counter_regs[i]);
>  }
> 
>  static int amd_vpmu_save(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctx = vpmu->context;
>      unsigned int i;
> 
>      /*
> @@ -245,7 +245,7 @@ static int amd_vpmu_save(struct vcpu *v)
>      context_save(v);
> 
>      if ( is_hvm_domain(v->domain) &&
> -        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
> +        !vpmu_is_set(vpmu, VPMU_RUNNING) && is_msr_bitmap_on(vpmu) )
>          amd_vpmu_unset_msr_bitmap(v);
> 
>      return 1;
> @@ -256,7 +256,9 @@ static void context_update(unsigned int msr, u64 msr_content)
>      unsigned int i;
>      struct vcpu *v = current;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctxt = vpmu->context;
> +    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
> +    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
> +    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
> 
>      if ( k7_counters_mirrored &&
>          ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
> @@ -268,12 +270,12 @@ static void context_update(unsigned int msr, u64 msr_content)
>      {
>         if ( msr == ctrls[i] )
>         {
> -           ctxt->ctrls[i] = msr_content;
> +           ctrl_regs[i] = msr_content;
>             return;
>         }
>          else if (msr == counters[i] )
>          {
> -            ctxt->counters[i] = msr_content;
> +            counter_regs[i] = msr_content;
>              return;
>          }
>      }
> @@ -299,8 +301,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>              return 1;
>          vpmu_set(vpmu, VPMU_RUNNING);
> 
> -        if ( is_hvm_domain(v->domain) &&
> -             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
> +        if ( is_hvm_domain(v->domain) && is_msr_bitmap_on(vpmu) )
>              amd_vpmu_set_msr_bitmap(v);
>      }
> 
> @@ -309,8 +310,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>          (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
>      {
>          vpmu_reset(vpmu, VPMU_RUNNING);
> -        if ( is_hvm_domain(v->domain) &&
> -             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
> +        if ( is_hvm_domain(v->domain) && is_msr_bitmap_on(vpmu) )
>              amd_vpmu_unset_msr_bitmap(v);
>          release_pmu_ownship(PMU_OWNER_HVM);
>      }
> @@ -351,7 +351,7 @@ static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
> 
>  static int amd_vpmu_initialise(struct vcpu *v)
>  {
> -    struct amd_vpmu_context *ctxt;
> +    struct xen_pmu_amd_ctxt *ctxt;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
>      uint8_t family = current_cpu_data.x86;
> 
> @@ -381,7 +381,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
>  	 }
>      }
> 
> -    ctxt = xzalloc(struct amd_vpmu_context);
> +    ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) +
> +			 sizeof(uint64_t) * AMD_MAX_COUNTERS +
> +			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
>      if ( !ctxt )
>      {
>          gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
> @@ -390,7 +392,11 @@ static int amd_vpmu_initialise(struct vcpu *v)
>          return -ENOMEM;
>      }
> 
> +    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
> +    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
> +
>      vpmu->context = ctxt;
> +    vpmu->priv_context = NULL;
>      vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
>      return 0;
>  }
> @@ -402,8 +408,7 @@ static void amd_vpmu_destroy(struct vcpu *v)
>      if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
>          return;
> 
> -    if ( is_hvm_domain(v->domain) &&
> -         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
> +    if ( is_hvm_domain(v->domain) && is_msr_bitmap_on(vpmu) )
>          amd_vpmu_unset_msr_bitmap(v);
> 
>      xfree(vpmu->context);
> @@ -420,7 +425,9 @@ static void amd_vpmu_destroy(struct vcpu *v)
>  static void amd_vpmu_dump(const struct vcpu *v)
>  {
>      const struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    const struct amd_vpmu_context *ctxt = vpmu->context;
> +    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
> +    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
> +    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
>      unsigned int i;
> 
>      printk("    VPMU state: 0x%x ", vpmu->flags);
> @@ -450,8 +457,8 @@ static void amd_vpmu_dump(const struct vcpu *v)
>          rdmsrl(ctrls[i], ctrl);
>          rdmsrl(counters[i], cntr);
>          printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
> -               ctrls[i], ctxt->ctrls[i], ctrl,
> -               counters[i], ctxt->counters[i], cntr);
> +               ctrls[i], ctrl_regs[i], ctrl,
> +               counters[i], counter_regs[i], cntr);
>      }
>  }
> 
> diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> index dffdd80..1fe583f 100644
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -35,8 +35,8 @@
>  #include <asm/hvm/vmx/vmcs.h>
>  #include <public/sched.h>
>  #include <public/hvm/save.h>
> +#include <public/pmu.h>
>  #include <asm/hvm/vpmu.h>
> -#include <asm/hvm/vmx/vpmu_core2.h>
> 
>  /*
>   * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
> @@ -68,6 +68,10 @@
>  #define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
>  static bool_t __read_mostly full_width_write;
> 
> +/* Intel-specific VPMU features */
> +#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
> +#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
> +
>  /*
>   * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
>   * counters. 4 bits for every counter.
> @@ -75,17 +79,6 @@ static bool_t __read_mostly full_width_write;
>  #define FIXED_CTR_CTRL_BITS 4
>  #define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
> 
> -#define VPMU_CORE2_MAX_FIXED_PMCS     4
> -struct core2_vpmu_context {
> -    u64 fixed_ctrl;
> -    u64 ds_area;
> -    u64 pebs_enable;
> -    u64 global_ovf_status;
> -    u64 enabled_cntrs;  /* Follows PERF_GLOBAL_CTRL MSR format */
> -    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
> -    struct arch_msr_pair arch_msr_pair[1];
> -};
> -
>  /* Number of general-purpose and fixed performance counters */
>  static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
> 
> @@ -225,6 +218,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
>      return 0;
>  }
> 
> +#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
>  static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
>  {
>      int i;
> @@ -294,12 +288,15 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
>  static inline void __core2_vpmu_save(struct vcpu *v)
>  {
>      int i;
> -    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
> +    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
> +    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
> +    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
> +        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
> 
>      for ( i = 0; i < fixed_pmc_cnt; i++ )
> -        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
> +        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
>      for ( i = 0; i < arch_pmc_cnt; i++ )
> -        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
> +        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
>  }
> 
>  static int core2_vpmu_save(struct vcpu *v)
> @@ -322,10 +319,13 @@ static int core2_vpmu_save(struct vcpu *v)
>  static inline void __core2_vpmu_load(struct vcpu *v)
>  {
>      unsigned int i, pmc_start;
> -    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
> +    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
> +    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
> +    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
> +        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
> 
>      for ( i = 0; i < fixed_pmc_cnt; i++ )
> -        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
> +        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
> 
>      if ( full_width_write )
>          pmc_start = MSR_IA32_A_PERFCTR0;
> @@ -333,8 +333,8 @@ static inline void __core2_vpmu_load(struct vcpu *v)
>          pmc_start = MSR_IA32_PERFCTR0;
>      for ( i = 0; i < arch_pmc_cnt; i++ )
>      {
> -        wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
> -        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
> +        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
> +        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
>      }
> 
>      wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
> @@ -357,7 +357,8 @@ static void core2_vpmu_load(struct vcpu *v)
>  static int core2_vpmu_alloc_resource(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct core2_vpmu_context *core2_vpmu_cxt;
> +    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
> +    uint64_t *p = NULL;
> 
>      if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
>          return 0;
> @@ -370,12 +371,20 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
>          goto out_err;
>      vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
> 
> -    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
> -                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
> -    if ( !core2_vpmu_cxt )
> +    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
> +                                   sizeof(uint64_t) * fixed_pmc_cnt +
> +                                   sizeof(struct xen_pmu_cntr_pair) *
> +                                   arch_pmc_cnt);
> +    p = xzalloc_bytes(sizeof(uint64_t));
> +    if ( !core2_vpmu_cxt || !p )
>          goto out_err;
> 
> +    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
> +    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
> +                                    sizeof(uint64_t) * fixed_pmc_cnt;
> +
>      vpmu->context = (void *)core2_vpmu_cxt;
> +    vpmu->priv_context = (void *)p;
> 
>      vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
> 
> @@ -386,6 +395,9 @@ out_err:
>      vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR);
>      release_pmu_ownship(PMU_OWNER_HVM);
> 
> +    xfree(core2_vpmu_cxt);
> +    xfree(p);
> +
>      printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
>             v->vcpu_id, v->domain->domain_id);
> 
> @@ -421,7 +433,8 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>      int type = -1, index = -1;
>      struct vcpu *v = current;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
> +    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
> +    uint64_t *enabled_cntrs;
> 
>      if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
>      {
> @@ -447,10 +460,11 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>      }
> 
>      core2_vpmu_cxt = vpmu->context;
> +    enabled_cntrs = (uint64_t *)vpmu->priv_context;
>      switch ( msr )
>      {
>      case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
> -        core2_vpmu_cxt->global_ovf_status &= ~msr_content;
> +        core2_vpmu_cxt->global_status &= ~msr_content;
>          return 1;
>      case MSR_CORE_PERF_GLOBAL_STATUS:
>          gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
> @@ -484,15 +498,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>          break;
>      case MSR_CORE_PERF_FIXED_CTR_CTRL:
>          vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
> -        core2_vpmu_cxt->enabled_cntrs &=
> -                ~(((1ULL << VPMU_CORE2_MAX_FIXED_PMCS) - 1) << 32);
> +        *enabled_cntrs &= ~(((1ULL << fixed_pmc_cnt) - 1) << 32);
>          if ( msr_content != 0 )
>          {
>              u64 val = msr_content;
>              for ( i = 0; i < fixed_pmc_cnt; i++ )
>              {
>                  if ( val & 3 )
> -                    core2_vpmu_cxt->enabled_cntrs |= (1ULL << 32) << i;
> +                    *enabled_cntrs |= (1ULL << 32) << i;
>                  val >>= FIXED_CTR_CTRL_BITS;
>              }
>          }
> @@ -503,19 +516,21 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>          tmp = msr - MSR_P6_EVNTSEL0;
>          if ( tmp >= 0 && tmp < arch_pmc_cnt )
>          {
> +            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
> +                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
> +
>              vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
> 
>              if ( msr_content & (1ULL << 22) )
> -                core2_vpmu_cxt->enabled_cntrs |= 1ULL << tmp;
> +                *enabled_cntrs |= 1ULL << tmp;
>              else
> -                core2_vpmu_cxt->enabled_cntrs &= ~(1ULL << tmp);
> +                *enabled_cntrs &= ~(1ULL << tmp);
> 
> -            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
> +            xen_pmu_cntr_pair[tmp].control = msr_content;
>          }
>      }
> 
> -    if ((global_ctrl & core2_vpmu_cxt->enabled_cntrs) ||
> -        (core2_vpmu_cxt->ds_area != 0)  )
> +    if ((global_ctrl & *enabled_cntrs) || (core2_vpmu_cxt->ds_area != 0)  )
>          vpmu_set(vpmu, VPMU_RUNNING);
>      else
>          vpmu_reset(vpmu, VPMU_RUNNING);
> @@ -561,7 +576,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
>      int type = -1, index = -1;
>      struct vcpu *v = current;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
> +    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
> 
>      if ( core2_vpmu_msr_common_check(msr, &type, &index) )
>      {
> @@ -572,7 +587,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
>              *msr_content = 0;
>              break;
>          case MSR_CORE_PERF_GLOBAL_STATUS:
> -            *msr_content = core2_vpmu_cxt->global_ovf_status;
> +            *msr_content = core2_vpmu_cxt->global_status;
>              break;
>          case MSR_CORE_PERF_GLOBAL_CTRL:
>              vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
> @@ -621,8 +636,11 @@ static void core2_vpmu_dump(const struct vcpu *v)
>  {
>      const struct vpmu_struct *vpmu = vcpu_vpmu(v);
>      int i;
> -    const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
> +    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
>      u64 val;
> +    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
> +    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
> +        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
> 
>      if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
>           return;
> @@ -641,12 +659,9 @@ static void core2_vpmu_dump(const struct vcpu *v)
> 
>      /* Print the contents of the counter and its configuration msr. */
>      for ( i = 0; i < arch_pmc_cnt; i++ )
> -    {
> -        const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
> -
>          printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
> -               i, msr_pair[i].counter, msr_pair[i].control);
> -    }
> +            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
> +
>      /*
>       * The configuration of the fixed counter is 4 bits each in the
>       * MSR_CORE_PERF_FIXED_CTR_CTRL.
> @@ -655,7 +670,7 @@ static void core2_vpmu_dump(const struct vcpu *v)
>      for ( i = 0; i < fixed_pmc_cnt; i++ )
>      {
>          printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
> -               i, core2_vpmu_cxt->fix_counters[i],
> +               i, fixed_counters[i],
>                 val & FIXED_CTR_CTRL_MASK);
>          val >>= FIXED_CTR_CTRL_BITS;
>      }
> @@ -666,14 +681,14 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
>      struct vcpu *v = current;
>      u64 msr_content;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
> +    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
> 
>      rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
>      if ( msr_content )
>      {
>          if ( is_pmc_quirk )
>              handle_pmc_quirk(msr_content);
> -        core2_vpmu_cxt->global_ovf_status |= msr_content;
> +        core2_vpmu_cxt->global_status |= msr_content;
>          msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
>          wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
>      }
> @@ -736,12 +751,6 @@ func_out:
> 
>      arch_pmc_cnt = core2_get_arch_pmc_count();
>      fixed_pmc_cnt = core2_get_fixed_pmc_count();
> -    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
> -    {
> -        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
> -        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
> -               fixed_pmc_cnt);
> -    }
>      check_pmc_quirk();
> 
>      return 0;
> @@ -755,6 +764,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
>          return;
> 
>      xfree(vpmu->context);
> +    xfree(vpmu->priv_context);
>      if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
>          core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
>      release_pmu_ownship(PMU_OWNER_HVM);
> diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
> index 0340e5b..0acc486 100644
> --- a/xen/arch/x86/hvm/vpmu.c
> +++ b/xen/arch/x86/hvm/vpmu.c
> @@ -31,6 +31,7 @@
>  #include <asm/hvm/svm/svm.h>
>  #include <asm/hvm/svm/vmcb.h>
>  #include <asm/apic.h>
> +#include <public/pmu.h>
> 
>  /*
>   * "vpmu" :     vpmu generally enabled
> diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
> index 3225937..5aae2e7 100644
> --- a/xen/arch/x86/oprofile/op_model_ppro.c
> +++ b/xen/arch/x86/oprofile/op_model_ppro.c
> @@ -20,11 +20,15 @@
>  #include <asm/regs.h>
>  #include <asm/current.h>
>  #include <asm/hvm/vpmu.h>
> -#include <asm/hvm/vmx/vpmu_core2.h>
> 
>  #include "op_x86_model.h"
>  #include "op_counter.h"
> 
> +struct arch_msr_pair {
> +    u64 counter;
> +    u64 control;
> +};
> +
>  /*
>   * Intel "Architectural Performance Monitoring" CPUID
>   * detection/enumeration details:
> diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
> deleted file mode 100644
> index 410372d..0000000
> --- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
> +++ /dev/null
> @@ -1,32 +0,0 @@
> -
> -/*
> - * vpmu_core2.h: CORE 2 specific PMU virtualization for HVM domain.
> - *
> - * Copyright (c) 2007, Intel Corporation.
> - *
> - * This program is free software; you can redistribute it and/or modify it
> - * under the terms and conditions of the GNU General Public License,
> - * version 2, as published by the Free Software Foundation.
> - *
> - * This program is distributed in the hope it will be useful, but WITHOUT
> - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> - * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> - * more details.
> - *
> - * You should have received a copy of the GNU General Public License along with
> - * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
> - * Place - Suite 330, Boston, MA 02111-1307 USA.
> - *
> - * Author: Haitao Shan <haitao.shan@intel.com>
> - */
> -
> -#ifndef __ASM_X86_HVM_VPMU_CORE_H_
> -#define __ASM_X86_HVM_VPMU_CORE_H_
> -
> -struct arch_msr_pair {
> -    u64 counter;
> -    u64 control;
> -};
> -
> -#endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
> -
> diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
> index 7ee0f01..3e5d9de 100644
> --- a/xen/include/asm-x86/hvm/vpmu.h
> +++ b/xen/include/asm-x86/hvm/vpmu.h
> @@ -22,6 +22,8 @@
>  #ifndef __ASM_X86_HVM_VPMU_H_
>  #define __ASM_X86_HVM_VPMU_H_
> 
> +#include <public/pmu.h>
> +
>  /*
>   * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
>   * See arch/x86/hvm/vpmu.c.
> @@ -29,12 +31,9 @@
>  #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
>  #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
> 
> -
> -#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
>  #define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
>  #define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
>                                            arch.hvm_vcpu.vpmu))
> -#define vpmu_domain(vpmu) (vpmu_vcpu(vpmu)->domain)
> 
>  #define MSR_TYPE_COUNTER            0
>  #define MSR_TYPE_CTRL               1
> @@ -42,6 +41,9 @@
>  #define MSR_TYPE_ARCH_COUNTER       3
>  #define MSR_TYPE_ARCH_CTRL          4
> 
> +/* Start of PMU register bank */
> +#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
> +                                                 (uintptr_t)ctxt->offset))
> 
>  /* Arch specific operations shared by all vpmus */
>  struct arch_vpmu_ops {
> @@ -64,7 +66,8 @@ struct vpmu_struct {
>      u32 flags;
>      u32 last_pcpu;
>      u32 hw_lapic_lvtpc;
> -    void *context;
> +    void *context;      /* May be shared with PV guest */
> +    void *priv_context; /* hypervisor-only */
>      struct arch_vpmu_ops *arch_vpmu_ops;
>  };
> 
> @@ -76,11 +79,6 @@ struct vpmu_struct {
>  #define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
>  #define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
> 
> -/* VPMU features */
> -#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
> -#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
> -
> -
>  #define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
>  #define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
>  #define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 7496556..e982b53 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -388,6 +388,9 @@ typedef uint64_t xen_callback_t;
> 
>  #endif
> 
> +/* Stub definition of PMU structure */
> +typedef struct xen_arch_pmu {} xen_arch_pmu_t;
> +
>  #endif /*  __XEN_PUBLIC_ARCH_ARM_H__ */
> 
>  /*
> diff --git a/xen/include/public/arch-x86/pmu.h b/xen/include/public/arch-x86/pmu.h
> new file mode 100644
> index 0000000..b4eda67
> --- /dev/null
> +++ b/xen/include/public/arch-x86/pmu.h
> @@ -0,0 +1,62 @@
> +#ifndef __XEN_PUBLIC_ARCH_X86_PMU_H__
> +#define __XEN_PUBLIC_ARCH_X86_PMU_H__
> +
> +/* x86-specific PMU definitions */
> +
> +/* AMD PMU registers and structures */
> +struct xen_pmu_amd_ctxt {
> +    uint32_t counters;       /* Offset to counter MSRs */
> +    uint32_t ctrls;          /* Offset to control MSRs */
> +};
> +
> +/* Intel PMU registers and structures */
> +struct xen_pmu_cntr_pair {
> +    uint64_t counter;
> +    uint64_t control;
> +};
> +
> +struct xen_pmu_intel_ctxt {
> +    uint64_t global_ctrl;
> +    uint64_t global_ovf_ctrl;
> +    uint64_t global_status;
> +    uint64_t fixed_ctrl;
> +    uint64_t ds_area;
> +    uint64_t pebs_enable;
> +    uint64_t debugctl;
> +    uint32_t fixed_counters;  /* Offset to fixed counter MSRs */
> +    uint32_t arch_counters;   /* Offset to architectural counter MSRs */
> +};
> +
> +#define XENPMU_MAX_CTXT_SZ        (sizeof(struct xen_pmu_amd_ctxt) > \
> +                                    sizeof(struct xen_pmu_intel_ctxt) ? \
> +                                     sizeof(struct xen_pmu_amd_ctxt) : \
> +                                     sizeof(struct xen_pmu_intel_ctxt))
> +#define XENPMU_CTXT_PAD_SZ        (((XENPMU_MAX_CTXT_SZ + 64) & ~63) + 128)
> +struct xen_arch_pmu {
> +    union {
> +        struct cpu_user_regs regs;
> +        uint8_t pad1[256];
> +    } r;
> +    union {
> +        uint32_t lapic_lvtpc;
> +        uint64_t pad2;
> +    } l;
> +    union {
> +        struct xen_pmu_amd_ctxt amd;
> +        struct xen_pmu_intel_ctxt intel;
> +        uint8_t pad3[XENPMU_CTXT_PAD_SZ];
> +    } c;
> +};
> +typedef struct xen_arch_pmu xen_arch_pmu_t;
> +
> +#endif /* __XEN_PUBLIC_ARCH_X86_PMU_H__ */
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> +
> diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
> new file mode 100644
> index 0000000..3ffd2cf
> --- /dev/null
> +++ b/xen/include/public/pmu.h
> @@ -0,0 +1,38 @@
> +#ifndef __XEN_PUBLIC_PMU_H__
> +#define __XEN_PUBLIC_PMU_H__
> +
> +#include "xen.h"
> +#if defined(__i386__) || defined(__x86_64__)
> +#include "arch-x86/pmu.h"
> +#elif defined (__arm__) || defined (__aarch64__)
> +#include "arch-arm.h"
> +#else
> +#error "Unsupported architecture"
> +#endif
> +
> +#define XENPMU_VER_MAJ    0
> +#define XENPMU_VER_MIN    0
> +
> +
> +/* Shared between hypervisor and PV domain */
> +struct xen_pmu_data {
> +    uint32_t domain_id;
> +    uint32_t vcpu_id;
> +    uint32_t pcpu_id;
> +    uint32_t pmu_flags;
> +
> +    xen_arch_pmu_t pmu;
> +};
> +typedef struct xen_pmu_data xen_pmu_data_t;
> +
> +#endif /* __XEN_PUBLIC_PMU_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> --
> 1.8.1.4

^ permalink raw reply	[flat|nested] 65+ messages in thread
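
The central layout idea of this patch is that the MSR banks live immediately
after the context structure in a single allocation and are addressed through
stored byte offsets rather than pointers, so the same image stays valid at
whatever virtual address a PV(H) guest maps it. A sketch under that
assumption, using calloc() and a hypothetical alloc_amd_ctxt() in place of
the Xen allocators:

    #include <stdint.h>
    #include <stdlib.h>

    struct xen_pmu_amd_ctxt {
        uint32_t counters;   /* byte offset of the counter MSR bank */
        uint32_t ctrls;      /* byte offset of the control MSR bank */
    };

    /* Recover a bank pointer from its offset, like vpmu_reg_pointer(). */
    #define reg_pointer(ctxt, field) \
        ((uint64_t *)((uintptr_t)(ctxt) + (ctxt)->field))

    static struct xen_pmu_amd_ctxt *alloc_amd_ctxt(unsigned int num_counters)
    {
        /* One allocation: header followed by two uint64_t banks. */
        struct xen_pmu_amd_ctxt *ctxt =
            calloc(1, sizeof(*ctxt) + 2 * num_counters * sizeof(uint64_t));

        if ( ctxt )
        {
            ctxt->counters = sizeof(*ctxt);
            ctxt->ctrls = ctxt->counters + num_counters * sizeof(uint64_t);
        }
        return ctxt;
    }

Because only offsets are stored, the hypervisor and a guest can each apply
the macro to their own mapping of the shared structure.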

* Re: [PATCH v6 14/19] x86/VPMU: Merge vpmu_rdmsr and vpmu_wrmsr
  2014-05-13 15:53 ` [PATCH v6 14/19] x86/VPMU: Merge vpmu_rdmsr and vpmu_wrmsr Boris Ostrovsky
@ 2014-05-19 12:04   ` Tian, Kevin
  0 siblings, 0 replies; 65+ messages in thread
From: Tian, Kevin @ 2014-05-19 12:04 UTC (permalink / raw)
  To: Boris Ostrovsky, JBeulich, dietmar.hahn, suravee.suthikulpanit
  Cc: andrew.cooper3, keir, Dugger, Donald D, Nakajima, Jun, xen-devel

> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> Sent: Tuesday, May 13, 2014 11:53 PM
> 
> The two routines share most of their logic.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  xen/arch/x86/hvm/svm/svm.c     |  9 ++++++---
>  xen/arch/x86/hvm/vmx/vmx.c     | 11 +++++++----
>  xen/arch/x86/hvm/vpmu.c        | 42 +++++++++++++++---------------------------
>  xen/arch/x86/traps.c           |  4 ++--
>  xen/include/asm-x86/hvm/vpmu.h |  6 ++++--
>  5 files changed, 34 insertions(+), 38 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index c23db32..3d652c2 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1632,7 +1632,7 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>      case MSR_AMD_FAM15H_EVNTSEL3:
>      case MSR_AMD_FAM15H_EVNTSEL4:
>      case MSR_AMD_FAM15H_EVNTSEL5:
> -        vpmu_do_rdmsr(msr, msr_content);
> +        vpmu_do_msr(msr, msr_content, VPMU_MSR_READ);
>          break;
> 
>      case MSR_AMD64_DR0_ADDRESS_MASK:
> @@ -1783,9 +1783,12 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>      case MSR_AMD_FAM15H_EVNTSEL3:
>      case MSR_AMD_FAM15H_EVNTSEL4:
>      case MSR_AMD_FAM15H_EVNTSEL5:
> -        vpmu_do_wrmsr(msr, msr_content);
> -        break;
> +    {
> +        uint64_t msr_val = msr_content;
> 
> +        vpmu_do_msr(msr, &msr_val, VPMU_MSR_WRITE);
> +        break;
> +    }
>      case MSR_IA32_MCx_MISC(4): /* Threshold register */
>      case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
>          /*
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 1c9e742..8588f48 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2047,11 +2047,11 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>          *msr_content |= MSR_IA32_MISC_ENABLE_BTS_UNAVAIL |
>                         MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL;
>          /* Perhaps vpmu will change some bits. */
> -        if ( vpmu_do_rdmsr(msr, msr_content) )
> +        if ( vpmu_do_msr(msr, msr_content, VPMU_MSR_READ) )
>              goto done;
>          break;
>      default:
> -        if ( vpmu_do_rdmsr(msr, msr_content) )
> +        if ( vpmu_do_msr(msr, msr_content, VPMU_MSR_READ) )
>              break;
>          if ( passive_domain_do_rdmsr(msr, msr_content) )
>              goto done;
> @@ -2202,6 +2202,7 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
>  static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>  {
>      struct vcpu *v = current;
> +    uint64_t msr_val;
> 
>      HVM_DBG_LOG(DBG_LEVEL_1, "ecx=%#x, msr_value=%#"PRIx64, msr, msr_content);
> 
> @@ -2225,7 +2226,8 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>          if ( msr_content & ~supported )
>          {
>              /* Perhaps some other bits are supported in vpmu. */
> -            if ( !vpmu_do_wrmsr(msr, msr_content) )
> +            msr_val = msr_content;
> +            if ( !vpmu_do_msr(msr, &msr_val, VPMU_MSR_WRITE) )
>                  break;
>          }
>          if ( msr_content & IA32_DEBUGCTLMSR_LBR )
> @@ -2256,7 +2258,8 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>              goto gp_fault;
>          break;
>      default:
> -        if ( vpmu_do_wrmsr(msr, msr_content) )
> +        msr_val = msr_content;
> +        if ( vpmu_do_msr(msr, &msr_val, VPMU_MSR_WRITE) )
>              return X86EMUL_OKAY;
>          if ( passive_domain_do_wrmsr(msr, msr_content) )
>              return X86EMUL_OKAY;
> diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
> index 9995728..896e2be 100644
> --- a/xen/arch/x86/hvm/vpmu.c
> +++ b/xen/arch/x86/hvm/vpmu.c
> @@ -84,20 +84,29 @@ void vpmu_lvtpc_update(uint32_t val)
>          apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
>  }
> 
> -int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
> +int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(current);
> 
>      if ( !(vpmu_mode & XENPMU_MODE_ON) )
>          return 0;
> 
> -    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
> +    ASSERT((rw == VPMU_MSR_READ) || (rw == VPMU_MSR_WRITE));
> +
> +    if ( vpmu->arch_vpmu_ops )
>      {
> -        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
> +        int ret;
> +
> +        if ( (rw == VPMU_MSR_READ) && vpmu->arch_vpmu_ops->do_rdmsr )
> +            ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
> +        else if ( vpmu->arch_vpmu_ops->do_wrmsr )
> +            ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, *msr_content);
> +        else
> +            return 0;
> 
>          /*
> -         * We may have received a PMU interrupt during WRMSR handling
> -         * and since do_wrmsr may load VPMU context we should save
> +         * We may have received a PMU interrupt while handling MSR access
> +         * and since do_wr/rdmsr may load VPMU context we should save
>           * (and unload) it again.
>           */
>          if ( !is_hvm_domain(current->domain) &&
> @@ -107,31 +116,10 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>              vpmu->arch_vpmu_ops->arch_vpmu_save(current);
>              vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
>          }
> -        return ret;
> -    }
> -    return 0;
> -}
> -
> -int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
> -{
> -    struct vpmu_struct *vpmu = vcpu_vpmu(current);
> 
> -    if ( !(vpmu_mode & XENPMU_MODE_ON) )
> -        return 0;
> -
> -    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
> -    {
> -        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
> -
> -        if ( !is_hvm_domain(current->domain) &&
> -            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
> -        {
> -            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
> -            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
> -            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
> -        }
>          return ret;
>      }
> +
>      return 0;
>  }
> 
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index a0d0ba7..adbdebe 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -2525,7 +2525,7 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
>          case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
>          case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
>          case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
> -            if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
> +            if ( !vpmu_do_msr(regs->ecx, &msr_content, VPMU_MSR_WRITE) )
>                  goto invalid;
>              break;
>          default:
> @@ -2638,7 +2638,7 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
>          case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
>          case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
>          case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
> +            if ( vpmu_do_msr(regs->ecx, &msr_content, VPMU_MSR_READ) )
> VPMU_MSR_READ) )
>              {
>                  regs->eax = (uint32_t)msr_content;
>                  regs->edx = (uint32_t)(msr_content >> 32);
> diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
> index 438a913..bab8779 100644
> --- a/xen/include/asm-x86/hvm/vpmu.h
> +++ b/xen/include/asm-x86/hvm/vpmu.h
> @@ -78,9 +78,11 @@ struct vpmu_struct {
>  #define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
>  #define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
> 
> +#define VPMU_MSR_READ  0
> +#define VPMU_MSR_WRITE 1
> +
>  void vpmu_lvtpc_update(uint32_t val);
> -int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
> -int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
> +int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw);
>  int vpmu_do_interrupt(struct cpu_user_regs *regs);
>  void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
>                                         unsigned int *ecx, unsigned int *edx);
> --
> 1.8.1.4


* Re: [PATCH v6 02/19] VPMU: Mark context LOADED before registers are loaded
  2014-05-13 15:53 ` [PATCH v6 02/19] VPMU: Mark context LOADED before registers are loaded Boris Ostrovsky
@ 2014-05-19 14:18   ` Jan Beulich
  2014-05-19 15:28     ` Boris Ostrovsky
  0 siblings, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2014-05-19 14:18 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> Because a PMU interrupt may be generated as soon as PMU registers are loaded
> (or, more precisely, as soon as HW PMU is "armed") we don't want to delay
> marking context as LOADED until after registers are loaded. Otherwise during
> interrupt handling VPMU_CONTEXT_LOADED may not be set and this could be
> confusing.
> 
> (Technically, only SVM needs this change right now since VMX will "arm" PMU
> later, during VMRUN when global control register is loaded from VMCS.
> However, both AMD and Intel code will require this patch when we introduce
> PV VPMU).
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>
> Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
> Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>

With patch 1 needing further work, but this patch (for example) being
ready to go in, could you please indicate up to which point the series,
leaving out patch 1, could be applied if no other comments arise?

Thanks, Jan


* Re: [PATCH v6 03/19] x86/VPMU: Minor VPMU cleanup
  2014-05-13 15:53 ` [PATCH v6 03/19] x86/VPMU: Minor VPMU cleanup Boris Ostrovsky
  2014-05-19 11:55   ` Tian, Kevin
@ 2014-05-19 14:26   ` Jan Beulich
  2014-05-19 15:35     ` Boris Ostrovsky
  1 sibling, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2014-05-19 14:26 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> Update macros that modify VPMU flags to allow changing multiple bits at once.
> 
> Make sure that we only touch MSR bitmap on HVM guests (both VMX and SVM). 
> This is needed by subsequent PMU patches.

This part is at least questionable - why would these bitmaps not
similarly be used by PVH? And if so, this second change is kind of
a policy one, while the first change is a purely mechanical one. I.e.
I don't think they fit well together into a single patch.

> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -326,16 +326,14 @@ static int core2_vpmu_save(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
>  
> -    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
> -        return 0;
> -
> -    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) ) 
> +    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )

Is this really a good name? How about vpmu_are_all_set() or
vpmu_all_set()?
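
I.e. the same expansion as the quoted macro, just under a clearer name:

    #define vpmu_are_all_set(_vpmu, _x) (((_vpmu)->flags & (_x)) == (_x))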

Jan


* Re: [PATCH v6 04/19] intel/VPMU: Clean up Intel VPMU code
  2014-05-13 15:53 ` [PATCH v6 04/19] intel/VPMU: Clean up Intel VPMU code Boris Ostrovsky
  2014-05-19 11:59   ` Tian, Kevin
@ 2014-05-19 14:30   ` Jan Beulich
  1 sibling, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2014-05-19 14:30 UTC (permalink / raw)
  To: suravee.suthikulpanit, kevin.tian, Boris Ostrovsky, dietmar.hahn
  Cc: andrew.cooper3, keir, jun.nakajima, donald.d.dugger, xen-devel

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> +void vmx_rm_guest_msr(u32 msr)
> +{
> +    struct vcpu *curr = current;
> +    unsigned int idx, msr_count = curr->arch.hvm_vmx.msr_count;
> +    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
> +
> +    if ( msr_area == NULL )
> +        return;
> +
> +    for ( idx = 0; idx < msr_count; idx++ )
> +        if ( msr_area[idx].index == msr )
> +            break;
> +
> +    if ( idx == msr_count )
> +        return;
> +
> +    for ( ; idx < msr_count - 1; idx++ )
> +    {
> +        msr_area[idx].index = msr_area[idx + 1].index;
> +        msr_area[idx].data = msr_area[idx + 1].data;
> +    }

Perhaps more efficiently done via memmove()?
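
Something like the following, perhaps (an untested sketch, reusing idx and
msr_count from the quoted code; a zero-length memmove() when the last
entry is removed is harmless):

    /* Shift the remaining entries down over the removed slot. */
    memmove(&msr_area[idx], &msr_area[idx + 1],
            (msr_count - idx - 1) * sizeof(*msr_area));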

> +    msr_area[msr_count - 1].index = 0;
> +
> +    curr->arch.hvm_vmx.msr_count = --msr_count;

This decrement could obviously be pulled up, avoiding earlier
"msr_count - 1".

Jan


* Re: [PATCH v6 02/19] VPMU: Mark context LOADED before registers are loaded
  2014-05-19 14:18   ` Jan Beulich
@ 2014-05-19 15:28     ` Boris Ostrovsky
  0 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-19 15:28 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

On 05/19/2014 10:18 AM, Jan Beulich wrote:
>>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
>> Because a PMU interrupt may be generated as soon as PMU registers are loaded
>> (or, more precisely, as soon as HW PMU is "armed") we don't want to delay
>> marking context as LOADED until after registers are loaded. Otherwise during
>> interrupt handling VPMU_CONTEXT_LOADED may not be set and this could be
>> confusing.
>>
>> (Technically, only SVM needs this change right now since VMX will "arm" PMU
>> later, during VMRUN when global control register is loaded from VMCS.
>> However, both AMD and Intel code will require this patch when we introduce
>> PV VPMU).
>>
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Acked-by: Kevin Tian <kevin.tian@intel.com>
>> Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
>> Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
> With patch 1 needing further work, but this patch (for example) being
> ready to go in, could you please indicate up to which point the series,
> leaving out patch 1, could be applied if no other comments arise?

My thinking was that once there is a feeling that the Xen component is 
getting close to an acceptable shape, I would post the Linux kernel 
part and get at least one round of comments (preferably a couple) on 
it, to avoid making changes to the hypervisor once that code is in. Or 
at least to minimize the number of such changes.

And I was in fact planning to do that this week.

-boris


* Re: [PATCH v6 03/19] x86/VPMU: Minor VPMU cleanup
  2014-05-19 14:26   ` Jan Beulich
@ 2014-05-19 15:35     ` Boris Ostrovsky
  2014-05-19 15:42       ` Jan Beulich
  0 siblings, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-19 15:35 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

On 05/19/2014 10:26 AM, Jan Beulich wrote:
>>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
>> Update macros that modify VPMU flags to allow changing multiple bits at once.
>>
>> Make sure that we only touch MSR bitmap on HVM guests (both VMX and SVM).
>> This is needed by subsequent PMU patches.
> This part is at least questionable - why would these bitmaps not
> similarly be used by PVH? And if so, this second change is kind of
> a policy one, while the first change is a purely mechanical one. I.e.


At this point in the series PVH VPMU won't work at all, bitmaps or not. 
It is enabled in patch 18, where I replace is_hvm_domain() with 
has_hvm_container_domain(). I didn't want to do that here, so that the 
PVH-related changes are more explicit in patch 18.

> I don't think they fit well together into a single patch.

I can break this into two parts.

>
>> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
>> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
>> @@ -326,16 +326,14 @@ static int core2_vpmu_save(struct vcpu *v)
>>   {
>>       struct vpmu_struct *vpmu = vcpu_vpmu(v);
>>   
>> -    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
>> -        return 0;
>> -
>> -    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
>> +    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
> Is this really a good name? How about vpmu_are_all_set() or
> vpmu_all_set()?


vpmu_are_all_set() is better.

-boris



* Re: [PATCH v6 03/19] x86/VPMU: Minor VPMU cleanup
  2014-05-19 15:35     ` Boris Ostrovsky
@ 2014-05-19 15:42       ` Jan Beulich
  0 siblings, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2014-05-19 15:42 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 19.05.14 at 17:35, <boris.ostrovsky@oracle.com> wrote:
> On 05/19/2014 10:26 AM, Jan Beulich wrote:
>>>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
>>> Update macros that modify VPMU flags to allow changing multiple bits at once.
>>>
>>> Make sure that we only touch MSR bitmap on HVM guests (both VMX and SVM).
>>> This is needed by subsequent PMU patches.
>> This part is at least questionable - why would these bitmaps not
>> similarly be used by PVH? And if so, this second change is kind of
>> a policy one, while the first change is a purely mechanical one. I.e.
> 
> 
> At this patch PVH VPMU won't work at all, bitmaps or not. It is enabled 
> in patch 18 and there I replace is_hvm_domain() with 
> has_hvm_container_domain(). I didn't want to do this here so that 
> PVH-related changes are more explicit in 18.

Okay, that's extra churn, but fine with me as long as you add a brief
comment to the patch description saying why this is being done that
way (to avoid the same question popping up again).

Jan


* Re: [PATCH v6 08/19] x86/VPMU: Add public xenpmu.h
  2014-05-13 15:53 ` [PATCH v6 08/19] x86/VPMU: Add public xenpmu.h Boris Ostrovsky
  2014-05-19 12:02   ` Tian, Kevin
@ 2014-05-20 15:24   ` Jan Beulich
  2014-05-20 17:28     ` Boris Ostrovsky
  2014-05-21  7:19   ` Dietmar Hahn
  2 siblings, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2014-05-20 15:24 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> @@ -83,12 +84,10 @@ static const u32 AMD_F15H_CTRLS[] = {
>      MSR_AMD_FAM15H_EVNTSEL5
>  };
>  
> -/* storage for context switching */
> -struct amd_vpmu_context {
> -    u64 counters[MAX_NUM_COUNTERS];
> -    u64 ctrls[MAX_NUM_COUNTERS];
> -    bool_t msr_bitmap_set;
> -};
> +/* Use private context as a flag for MSR bitmap */

Full stop missing at the end of the comment.

> +#define msr_bitmap_on(vpmu)    {vpmu->priv_context = (void *)-1;}
> +#define msr_bitmap_off(vpmu)   {vpmu->priv_context = NULL;}

Blanks inside the braces please. Also the constant above would better
be -1L.

> +#define is_msr_bitmap_on(vpmu) (vpmu->priv_context != NULL)

All three macros fail to properly parenthesize their parameter.
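
Taken together with the two comments above, something along these lines
would do (illustrative only):

    #define msr_bitmap_on(vpmu)    { (vpmu)->priv_context = (void *)-1L; }
    #define msr_bitmap_off(vpmu)   { (vpmu)->priv_context = NULL; }
    #define is_msr_bitmap_on(vpmu) ((vpmu)->priv_context != NULL)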

> @@ -370,12 +371,20 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
>          goto out_err;
>      vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
>  
> -    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
> -                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
> -    if ( !core2_vpmu_cxt )
> +    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
> +                                   sizeof(uint64_t) * fixed_pmc_cnt +
> +                                   sizeof(struct xen_pmu_cntr_pair) *
> +                                   arch_pmc_cnt);
> +    p = xzalloc_bytes(sizeof(uint64_t));
> +    if ( !core2_vpmu_cxt || !p )
>          goto out_err;
>  
> +    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
> +    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
> +                                    sizeof(uint64_t) * fixed_pmc_cnt;
> +
>      vpmu->context = (void *)core2_vpmu_cxt;
> +    vpmu->priv_context = (void *)p;

Pointless cast.

> @@ -447,10 +460,11 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>      }
>  
>      core2_vpmu_cxt = vpmu->context;
> +    enabled_cntrs = (uint64_t *)vpmu->priv_context;

Again.

> --- /dev/null
> +++ b/xen/include/public/arch-x86/pmu.h
> @@ -0,0 +1,62 @@
> +#ifndef __XEN_PUBLIC_ARCH_X86_PMU_H__
> +#define __XEN_PUBLIC_ARCH_X86_PMU_H__
> +
> +/* x86-specific PMU definitions */
> +
> +/* AMD PMU registers and structures */
> +struct xen_pmu_amd_ctxt {
> +    uint32_t counters;       /* Offset to counter MSRs */
> +    uint32_t ctrls;          /* Offset to control MSRs */
> +};
> +
> +/* Intel PMU registers and structures */
> +struct xen_pmu_cntr_pair {
> +    uint64_t counter;
> +    uint64_t control;
> +};
> +
> +struct xen_pmu_intel_ctxt {
> +    uint64_t global_ctrl;
> +    uint64_t global_ovf_ctrl;
> +    uint64_t global_status;
> +    uint64_t fixed_ctrl;
> +    uint64_t ds_area;
> +    uint64_t pebs_enable;
> +    uint64_t debugctl;
> +    uint32_t fixed_counters;  /* Offset to fixed counter MSRs */
> +    uint32_t arch_counters;   /* Offset to architectural counter MSRs */
> +};
> +
> +#define XENPMU_MAX_CTXT_SZ        (sizeof(struct xen_pmu_amd_ctxt) > \
> +                                    sizeof(struct xen_pmu_intel_ctxt) ? \
> +                                     sizeof(struct xen_pmu_amd_ctxt) : \
> +                                     sizeof(struct xen_pmu_intel_ctxt))
> +#define XENPMU_CTXT_PAD_SZ        (((XENPMU_MAX_CTXT_SZ + 64) & ~63) + 128)

Is this really usefully derived from XENPMU_MAX_CTXT_SZ? I.e. is it
okay for this value to change when one of the structures grows big
enough? I would have thought that the padding below is to fix the
size of the structure once and for all (and if that's right, I suppose
a respective BUILD_BUG_ON() would be quite desirable).
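
I.e. something like this sketch, next to wherever the structure size
actually matters:

    BUILD_BUG_ON(sizeof(struct xen_pmu_amd_ctxt) > XENPMU_CTXT_PAD_SZ);
    BUILD_BUG_ON(sizeof(struct xen_pmu_intel_ctxt) > XENPMU_CTXT_PAD_SZ);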

> +struct xen_arch_pmu {
> +    union {
> +        struct cpu_user_regs regs;
> +        uint8_t pad1[256];
> +    } r;
> +    union {
> +        uint32_t lapic_lvtpc;
> +        uint64_t pad2;
> +    } l;
> +    union {
> +        struct xen_pmu_amd_ctxt amd;
> +        struct xen_pmu_intel_ctxt intel;
> +        uint8_t pad3[XENPMU_CTXT_PAD_SZ];

No need for the number suffixes on the pad fields now that the
unions each have a name.

> --- /dev/null
> +++ b/xen/include/public/pmu.h
> @@ -0,0 +1,38 @@
> +#ifndef __XEN_PUBLIC_PMU_H__
> +#define __XEN_PUBLIC_PMU_H__
> +
> +#include "xen.h"
> +#if defined(__i386__) || defined(__x86_64__)
> +#include "arch-x86/pmu.h"
> +#elif defined (__arm__) || defined (__aarch64__)
> +#include "arch-arm.h"
> +#else
> +#error "Unsupported architecture"
> +#endif
> +
> +#define XENPMU_VER_MAJ    0
> +#define XENPMU_VER_MIN    0

Do you really want to start at 0.0?

Jan


* Re: [PATCH v6 10/19] x86/VPMU: Interface for setting PMU mode and flags
  2014-05-13 15:53 ` [PATCH v6 10/19] x86/VPMU: Interface for setting PMU mode and flags Boris Ostrovsky
@ 2014-05-20 15:40   ` Jan Beulich
  0 siblings, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2014-05-20 15:40 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> +static void vpmu_unload_all(void)
> +{
> +    struct domain *d;
> +    struct vcpu *v;
> +    struct vpmu_struct *vpmu;
> +
> +    for_each_domain(d)
> +    {
> +        for_each_vcpu ( d, v )

Consistent style please.
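
Presumably meaning both loops should use the same Xen spacing, e.g.:

    for_each_domain ( d )
    {
        for_each_vcpu ( d, v )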

Jan


* Re: [PATCH v6 11/19] x86/VPMU: Initialize PMU for PV guests
  2014-05-13 15:53 ` [PATCH v6 11/19] x86/VPMU: Initialize PMU for PV guests Boris Ostrovsky
@ 2014-05-20 15:51   ` Jan Beulich
  2014-05-20 17:47     ` Boris Ostrovsky
  2014-05-20 15:52   ` Jan Beulich
  1 sibling, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2014-05-20 15:51 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1150,7 +1150,8 @@ static int svm_vcpu_initialise(struct vcpu *v)
>          return rc;
>      }
>  
> -    vpmu_initialise(v);
> +    if ( is_hvm_domain(v->domain) )
> +        vpmu_initialise(v);

Why?

> @@ -1159,7 +1160,8 @@ static int svm_vcpu_initialise(struct vcpu *v)
>  
>  static void svm_vcpu_destroy(struct vcpu *v)
>  {
> -    vpmu_destroy(v);
> +    if ( is_hvm_domain(v->domain) )
> +        vpmu_destroy(v);

Again.

> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -115,7 +115,8 @@ static int vmx_vcpu_initialise(struct vcpu *v)
>          return rc;
>      }
>  
> -    vpmu_initialise(v);
> +    if ( is_hvm_domain(v->domain) )
> +        vpmu_initialise(v);

Same here.

> @@ -129,7 +130,8 @@ static int vmx_vcpu_initialise(struct vcpu *v)
>  static void vmx_vcpu_destroy(struct vcpu *v)
>  {
>      vmx_destroy_vmcs(v);
> -    vpmu_destroy(v);
> +    if ( is_hvm_domain(v->domain) )
> +        vpmu_destroy(v);

And here.

> @@ -753,6 +763,10 @@ func_out:
>      fixed_pmc_cnt = core2_get_fixed_pmc_count();
>      check_pmc_quirk();
>  
> +    /* PV domains can allocate resources immediately */
> +    if ( is_pv_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
> +            return 1;
> +

Broken indentation.

> @@ -764,11 +778,17 @@ static void core2_vpmu_destroy(struct vcpu *v)
>          return;
>  
>      xfree(vpmu->context);
> -    xfree(vpmu->priv_context);
> -    if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
> -        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
> -    release_pmu_ownship(PMU_OWNER_HVM);
> -    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
> +
> +    if ( is_hvm_domain(v->domain) )
> +    {
> +        xfree(vpmu->priv_context);
> +        if ( cpu_has_vmx_msr_bitmap )
> +            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
> +        release_pmu_ownship(PMU_OWNER_HVM);
> +    }

Looks like this is a case where has_hvm_container_domain() would
be more appropriate. Since PVH - iiuc - doesn't work with vPMU anyway
until patch 18, let's at least limit the churn this series does.
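
I.e. presumably simply:

    if ( has_hvm_container_domain(v->domain) )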

> @@ -267,7 +271,13 @@ void vpmu_destroy(struct vcpu *v)
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
>  
>      if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
> +    {
> +        /* Unload VPMU first. This will stop counters */
> +        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
> +                         vpmu_save_force, (void *)v, 1);

Pointless cast.

> @@ -312,6 +322,67 @@ static void vpmu_unload_all(void)
>      }
>  }
>  
> +static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
> +{
> +    struct vcpu *v;
> +    struct page_info *page;
> +    uint64_t gfn = params->d.val;
> +
> +    if ( !is_pv_domain(d) )
> +        return -EINVAL;
> +
> +    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )

Is ->vcpu a signed field? If so, why?

> +        return -EINVAL;
> +
> +    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
> +    if ( !page )
> +        return -EINVAL;
> +
> +    if ( !get_page_type(page, PGT_writable_page) )
> +    {
> +        put_page(page);
> +        return -EINVAL;
> +    }
> +
> +    v = d->vcpu[params->vcpu];

There's no check above that this wouldn't yield NULL.
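
I.e. something along these lines is presumably wanted (a sketch; the page
references taken above would also need dropping on this error path):

    v = d->vcpu[params->vcpu];
    if ( v == NULL )
        return -EINVAL;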

> +static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
> +{
> +    struct vcpu *v;
> +    uint64_t mfn;
> +
> +    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
> +        return;
> +
> +    v = d->vcpu[params->vcpu];
> +    if (v != current)

Same comments as above apply, plus a coding style issue ("if ( v != current )").

Jan


* Re: [PATCH v6 11/19] x86/VPMU: Initialize PMU for PV guests
  2014-05-13 15:53 ` [PATCH v6 11/19] x86/VPMU: Initialize PMU for PV guests Boris Ostrovsky
  2014-05-20 15:51   ` Jan Beulich
@ 2014-05-20 15:52   ` Jan Beulich
  1 sibling, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2014-05-20 15:52 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> --- a/xen/include/xen/softirq.h
> +++ b/xen/include/xen/softirq.h
> @@ -8,6 +8,7 @@ enum {
>      NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
>      RCU_SOFTIRQ,
>      TASKLET_SOFTIRQ,
> +    PMU_SOFTIRQ,
>      NR_COMMON_SOFTIRQS
>  };

This isn't being used elsewhere in the patch - please move it to where
it is being used.

Jan


* Re: [PATCH v6 08/19] x86/VPMU: Add public xenpmu.h
  2014-05-20 15:24   ` Jan Beulich
@ 2014-05-20 17:28     ` Boris Ostrovsky
  0 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-20 17:28 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, keir, jun.nakajima, andrew.cooper3, donald.d.dugger,
	xen-devel, dietmar.hahn, suravee.suthikulpanit

On 05/20/2014 11:24 AM, Jan Beulich wrote:
>
>> --- /dev/null
>> +++ b/xen/include/public/arch-x86/pmu.h
>> @@ -0,0 +1,62 @@
>> +#ifndef __XEN_PUBLIC_ARCH_X86_PMU_H__
>> +#define __XEN_PUBLIC_ARCH_X86_PMU_H__
>> +
>> +/* x86-specific PMU definitions */
>> +
>> +/* AMD PMU registers and structures */
>> +struct xen_pmu_amd_ctxt {
>> +    uint32_t counters;       /* Offset to counter MSRs */
>> +    uint32_t ctrls;          /* Offset to control MSRs */
>> +};
>> +
>> +/* Intel PMU registers and structures */
>> +struct xen_pmu_cntr_pair {
>> +    uint64_t counter;
>> +    uint64_t control;
>> +};
>> +
>> +struct xen_pmu_intel_ctxt {
>> +    uint64_t global_ctrl;
>> +    uint64_t global_ovf_ctrl;
>> +    uint64_t global_status;
>> +    uint64_t fixed_ctrl;
>> +    uint64_t ds_area;
>> +    uint64_t pebs_enable;
>> +    uint64_t debugctl;
>> +    uint32_t fixed_counters;  /* Offset to fixed counter MSRs */
>> +    uint32_t arch_counters;   /* Offset to architectural counter MSRs */
>> +};
>> +
>> +#define XENPMU_MAX_CTXT_SZ        (sizeof(struct xen_pmu_amd_ctxt) > \
>> +                                    sizeof(struct xen_pmu_intel_ctxt) ? \
>> +                                     sizeof(struct xen_pmu_amd_ctxt) : \
>> +                                     sizeof(struct xen_pmu_intel_ctxt))
>> +#define XENPMU_CTXT_PAD_SZ        (((XENPMU_MAX_CTXT_SZ + 64) & ~63) + 128)
> Is this really usefully derived from XENPMU_MAX_CTXT_SZ? I.e. is it
> okay for this value to change when one of the structures grows big
> enough? I would have thought that the padding below is to fix the
> size of the structure once and for all (and if that's right, I suppose
> a respective BUILD_BUG_ON() would be quite desirable).


Yes, the way XENPMU_CTXT_PAD_SZ is calculated is wrong; it should be a 
fixed value, not derived from the arch-specific structure sizes.
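
For instance (a sketch, with an illustrative size only):

    #define XENPMU_CTXT_PAD_SZ  128
    ...
    union {
        struct xen_pmu_amd_ctxt amd;
        struct xen_pmu_intel_ctxt intel;
        uint8_t pad[XENPMU_CTXT_PAD_SZ];
    } c;

together with BUILD_BUG_ON()s making sure neither context structure ever
outgrows the fixed padding.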


>> --- /dev/null
>> +++ b/xen/include/public/pmu.h
>> @@ -0,0 +1,38 @@
>> +#ifndef __XEN_PUBLIC_PMU_H__
>> +#define __XEN_PUBLIC_PMU_H__
>> +
>> +#include "xen.h"
>> +#if defined(__i386__) || defined(__x86_64__)
>> +#include "arch-x86/pmu.h"
>> +#elif defined (__arm__) || defined (__aarch64__)
>> +#include "arch-arm.h"
>> +#else
>> +#error "Unsupported architecture"
>> +#endif
>> +
>> +#define XENPMU_VER_MAJ    0
>> +#define XENPMU_VER_MIN    0
> Do you really want to start at 0.0?

Right, I'll start with 0.1

-boris


* Re: [PATCH v6 11/19] x86/VPMU: Initialize PMU for PV guests
  2014-05-20 15:51   ` Jan Beulich
@ 2014-05-20 17:47     ` Boris Ostrovsky
  2014-05-21  8:01       ` Jan Beulich
  0 siblings, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-20 17:47 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

On 05/20/2014 11:51 AM, Jan Beulich wrote:
>>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -1150,7 +1150,8 @@ static int svm_vcpu_initialise(struct vcpu *v)
>>           return rc;
>>       }
>>   
>> -    vpmu_initialise(v);
>> +    if ( is_hvm_domain(v->domain) )
>> +        vpmu_initialise(v);
> Why?

This patch adds initialization for PV domains, which is gated by 
is_pv_domain() checks in amd_vpmu_initialise()/core2_vpmu_alloc_resource(). 
I don't want PVH domains (which call these routines) to try to set up 
their VPMUs from here. This is supposed to happen via pvpmu_init() 
(which in this patch will return -EINVAL for PVH).

The same reason applies to conditioning vpmu_destroy().

I think I should drop this whole business of delaying PVH support until 
a later patch and do it incrementally, at the same time as I add PV 
support. (The reason, BTW, for doing it this way was that when I 
started this project PVH support wasn't there yet and I couldn't 
test it.)


-boris


* Re: [PATCH v6 08/19] x86/VPMU: Add public xenpmu.h
  2014-05-13 15:53 ` [PATCH v6 08/19] x86/VPMU: Add public xenpmu.h Boris Ostrovsky
  2014-05-19 12:02   ` Tian, Kevin
  2014-05-20 15:24   ` Jan Beulich
@ 2014-05-21  7:19   ` Dietmar Hahn
  2014-05-21 13:56     ` Boris Ostrovsky
  2 siblings, 1 reply; 65+ messages in thread
From: Dietmar Hahn @ 2014-05-21  7:19 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, keir, JBeulich, jun.nakajima, andrew.cooper3,
	donald.d.dugger, suravee.suthikulpanit, Boris Ostrovsky

On Tuesday, 13 May 2014, 11:53:22, Boris Ostrovsky wrote:
> Add pmu.h header files, move various macros and structures that will be
> shared between hypervisor and PV guests to it.
> 
> Move MSR banks out of architectural PMU structures to allow for larger sizes
> in the future. The banks are allocated immediately after the context and
> PMU structures store offsets to them.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
> Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>

It seems I didn't test this properly, because calling
# xl debug-keys q
while running an HVM Linux guest crashes the hypervisor.
Please see below.


> ---
>  xen/arch/x86/hvm/svm/vpmu.c              |  81 ++++++++++++-----------
>  xen/arch/x86/hvm/vmx/vpmu_core2.c        | 110 +++++++++++++++++--------------
>  xen/arch/x86/hvm/vpmu.c                  |   1 +
>  xen/arch/x86/oprofile/op_model_ppro.c    |   6 +-
>  xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  32 ---------
>  xen/include/asm-x86/hvm/vpmu.h           |  16 ++---
>  xen/include/public/arch-arm.h            |   3 +
>  xen/include/public/arch-x86/pmu.h        |  62 +++++++++++++++++
>  xen/include/public/pmu.h                 |  38 +++++++++++
>  9 files changed, 220 insertions(+), 129 deletions(-)
>  delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
>  create mode 100644 xen/include/public/arch-x86/pmu.h
>  create mode 100644 xen/include/public/pmu.h
> 
> diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
> index 2fbe2c1..ebdba8e 100644
> --- a/xen/arch/x86/hvm/svm/vpmu.c
> +++ b/xen/arch/x86/hvm/svm/vpmu.c
> @@ -30,10 +30,7 @@
>  #include <asm/apic.h>
>  #include <asm/hvm/vlapic.h>
>  #include <asm/hvm/vpmu.h>
> -
> -#define F10H_NUM_COUNTERS 4
> -#define F15H_NUM_COUNTERS 6
> -#define MAX_NUM_COUNTERS F15H_NUM_COUNTERS
> +#include <public/pmu.h>
>  
>  #define MSR_F10H_EVNTSEL_GO_SHIFT   40
>  #define MSR_F10H_EVNTSEL_EN_SHIFT   22
> @@ -49,6 +46,10 @@ static const u32 __read_mostly *counters;
>  static const u32 __read_mostly *ctrls;
>  static bool_t __read_mostly k7_counters_mirrored;
>  
> +#define F10H_NUM_COUNTERS   4
> +#define F15H_NUM_COUNTERS   6
> +#define AMD_MAX_COUNTERS    6
> +
>  /* PMU Counter MSRs. */
>  static const u32 AMD_F10H_COUNTERS[] = {
>      MSR_K7_PERFCTR0,
> @@ -83,12 +84,10 @@ static const u32 AMD_F15H_CTRLS[] = {
>      MSR_AMD_FAM15H_EVNTSEL5
>  };
>  
> -/* storage for context switching */
> -struct amd_vpmu_context {
> -    u64 counters[MAX_NUM_COUNTERS];
> -    u64 ctrls[MAX_NUM_COUNTERS];
> -    bool_t msr_bitmap_set;
> -};
> +/* Use private context as a flag for MSR bitmap */
> +#define msr_bitmap_on(vpmu)    {vpmu->priv_context = (void *)-1;}
> +#define msr_bitmap_off(vpmu)   {vpmu->priv_context = NULL;}
> +#define is_msr_bitmap_on(vpmu) (vpmu->priv_context != NULL)
>  
>  static inline int get_pmu_reg_type(u32 addr)
>  {
> @@ -142,7 +141,6 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
>  {
>      unsigned int i;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctxt = vpmu->context;
>  
>      for ( i = 0; i < num_counters; i++ )
>      {
> @@ -150,14 +148,13 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
>          svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
>      }
>  
> -    ctxt->msr_bitmap_set = 1;
> +    msr_bitmap_on(vpmu);
>  }
>  
>  static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
>  {
>      unsigned int i;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctxt = vpmu->context;
>  
>      for ( i = 0; i < num_counters; i++ )
>      {
> @@ -165,7 +162,7 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
>          svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
>      }
>  
> -    ctxt->msr_bitmap_set = 0;
> +    msr_bitmap_off(vpmu);
>  }
>  
>  static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
> @@ -177,19 +174,22 @@ static inline void context_load(struct vcpu *v)
>  {
>      unsigned int i;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctxt = vpmu->context;
> +    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
> +    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
> +    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
>  
>      for ( i = 0; i < num_counters; i++ )
>      {
> -        wrmsrl(counters[i], ctxt->counters[i]);
> -        wrmsrl(ctrls[i], ctxt->ctrls[i]);
> +        wrmsrl(counters[i], counter_regs[i]);
> +        wrmsrl(ctrls[i], ctrl_regs[i]);
>      }
>  }
>  
>  static void amd_vpmu_load(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctxt = vpmu->context;
> +    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
> +    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
>  
>      vpmu_reset(vpmu, VPMU_FROZEN);
>  
> @@ -198,7 +198,7 @@ static void amd_vpmu_load(struct vcpu *v)
>          unsigned int i;
>  
>          for ( i = 0; i < num_counters; i++ )
> -            wrmsrl(ctrls[i], ctxt->ctrls[i]);
> +            wrmsrl(ctrls[i], ctrl_regs[i]);
>  
>          return;
>      }
> @@ -212,17 +212,17 @@ static inline void context_save(struct vcpu *v)
>  {
>      unsigned int i;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctxt = vpmu->context;
> +    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
> +    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
>  
>      /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
>      for ( i = 0; i < num_counters; i++ )
> -        rdmsrl(counters[i], ctxt->counters[i]);
> +        rdmsrl(counters[i], counter_regs[i]);
>  }
>  
>  static int amd_vpmu_save(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctx = vpmu->context;
>      unsigned int i;
>  
>      /*
> @@ -245,7 +245,7 @@ static int amd_vpmu_save(struct vcpu *v)
>      context_save(v);
>  
>      if ( is_hvm_domain(v->domain) &&
> -        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
> +        !vpmu_is_set(vpmu, VPMU_RUNNING) && is_msr_bitmap_on(vpmu) )
>          amd_vpmu_unset_msr_bitmap(v);
>  
>      return 1;
> @@ -256,7 +256,9 @@ static void context_update(unsigned int msr, u64 msr_content)
>      unsigned int i;
>      struct vcpu *v = current;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct amd_vpmu_context *ctxt = vpmu->context;
> +    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
> +    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
> +    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
>  
>      if ( k7_counters_mirrored &&
>          ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
> @@ -268,12 +270,12 @@ static void context_update(unsigned int msr, u64 msr_content)
>      {
>         if ( msr == ctrls[i] )
>         {
> -           ctxt->ctrls[i] = msr_content;
> +           ctrl_regs[i] = msr_content;
>             return;
>         }
>          else if (msr == counters[i] )
>          {
> -            ctxt->counters[i] = msr_content;
> +            counter_regs[i] = msr_content;
>              return;
>          }
>      }
> @@ -299,8 +301,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>              return 1;
>          vpmu_set(vpmu, VPMU_RUNNING);
>  
> -        if ( is_hvm_domain(v->domain) &&
> -             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
> +        if ( is_hvm_domain(v->domain) && is_msr_bitmap_on(vpmu) )
>              amd_vpmu_set_msr_bitmap(v);
>      }
>  
> @@ -309,8 +310,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>          (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
>      {
>          vpmu_reset(vpmu, VPMU_RUNNING);
> -        if ( is_hvm_domain(v->domain) &&
> -             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
> +        if ( is_hvm_domain(v->domain) && is_msr_bitmap_on(vpmu) )
>              amd_vpmu_unset_msr_bitmap(v);
>          release_pmu_ownship(PMU_OWNER_HVM);
>      }
> @@ -351,7 +351,7 @@ static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
>  
>  static int amd_vpmu_initialise(struct vcpu *v)
>  {
> -    struct amd_vpmu_context *ctxt;
> +    struct xen_pmu_amd_ctxt *ctxt;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
>      uint8_t family = current_cpu_data.x86;
>  
> @@ -381,7 +381,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
>  	 }
>      }
>  
> -    ctxt = xzalloc(struct amd_vpmu_context);
> +    ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
> +			 sizeof(uint64_t) * AMD_MAX_COUNTERS + 
> +			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
>      if ( !ctxt )
>      {
>          gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
> @@ -390,7 +392,11 @@ static int amd_vpmu_initialise(struct vcpu *v)
>          return -ENOMEM;
>      }
>  
> +    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
> +    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
> +
>      vpmu->context = ctxt;
> +    vpmu->priv_context = NULL;

msr_bitmap_off(vpmu) ?

>      vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
>      return 0;
>  }
> @@ -402,8 +408,7 @@ static void amd_vpmu_destroy(struct vcpu *v)
>      if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
>          return;
>  
> -    if ( is_hvm_domain(v->domain) &&
> -         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
> +    if ( is_hvm_domain(v->domain) && is_msr_bitmap_on(vpmu) )
>          amd_vpmu_unset_msr_bitmap(v);
>  
>      xfree(vpmu->context);
> @@ -420,7 +425,9 @@ static void amd_vpmu_destroy(struct vcpu *v)
>  static void amd_vpmu_dump(const struct vcpu *v)
>  {
>      const struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    const struct amd_vpmu_context *ctxt = vpmu->context;
> +    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
> +    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
> +    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
>      unsigned int i;
>  
>      printk("    VPMU state: 0x%x ", vpmu->flags);
> @@ -450,8 +457,8 @@ static void amd_vpmu_dump(const struct vcpu *v)
>          rdmsrl(ctrls[i], ctrl);
>          rdmsrl(counters[i], cntr);
>          printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
> -               ctrls[i], ctxt->ctrls[i], ctrl,
> -               counters[i], ctxt->counters[i], cntr);
> +               ctrls[i], ctrl_regs[i], ctrl,
> +               counters[i], counter_regs[i], cntr);
>      }
>  }
>  
> diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> index dffdd80..1fe583f 100644
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -35,8 +35,8 @@
>  #include <asm/hvm/vmx/vmcs.h>
>  #include <public/sched.h>
>  #include <public/hvm/save.h>
> +#include <public/pmu.h>
>  #include <asm/hvm/vpmu.h>
> -#include <asm/hvm/vmx/vpmu_core2.h>
>  
>  /*
>   * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
> @@ -68,6 +68,10 @@
>  #define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
>  static bool_t __read_mostly full_width_write;
>  
> +/* Intel-specific VPMU features */
> +#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
> +#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
> +
>  /*
>   * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
>   * counters. 4 bits for every counter.
> @@ -75,17 +79,6 @@ static bool_t __read_mostly full_width_write;
>  #define FIXED_CTR_CTRL_BITS 4
>  #define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
>  
> -#define VPMU_CORE2_MAX_FIXED_PMCS     4
> -struct core2_vpmu_context {
> -    u64 fixed_ctrl;
> -    u64 ds_area;
> -    u64 pebs_enable;
> -    u64 global_ovf_status;
> -    u64 enabled_cntrs;  /* Follows PERF_GLOBAL_CTRL MSR format */
> -    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
> -    struct arch_msr_pair arch_msr_pair[1];
> -};
> -
>  /* Number of general-purpose and fixed performance counters */
>  static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
>  
> @@ -225,6 +218,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
>      return 0;
>  }
>  
> +#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
>  static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
>  {
>      int i;
> @@ -294,12 +288,15 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
>  static inline void __core2_vpmu_save(struct vcpu *v)
>  {
>      int i;
> -    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
> +    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
> +    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
> +    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
> +        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
>  
>      for ( i = 0; i < fixed_pmc_cnt; i++ )
> -        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
> +        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
>      for ( i = 0; i < arch_pmc_cnt; i++ )
> -        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
> +        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
>  }
>  
>  static int core2_vpmu_save(struct vcpu *v)
> @@ -322,10 +319,13 @@ static int core2_vpmu_save(struct vcpu *v)
>  static inline void __core2_vpmu_load(struct vcpu *v)
>  {
>      unsigned int i, pmc_start;
> -    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
> +    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
> +    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
> +    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
> +        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
>  
>      for ( i = 0; i < fixed_pmc_cnt; i++ )
> -        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
> +        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
>  
>      if ( full_width_write )
>          pmc_start = MSR_IA32_A_PERFCTR0;
> @@ -333,8 +333,8 @@ static inline void __core2_vpmu_load(struct vcpu *v)
>          pmc_start = MSR_IA32_PERFCTR0;
>      for ( i = 0; i < arch_pmc_cnt; i++ )
>      {
> -        wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
> -        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
> +        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
> +        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
>      }
>  
>      wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
> @@ -357,7 +357,8 @@ static void core2_vpmu_load(struct vcpu *v)
>  static int core2_vpmu_alloc_resource(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct core2_vpmu_context *core2_vpmu_cxt;
> +    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
> +    uint64_t *p = NULL;
>  
>      if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
>          return 0;
> @@ -370,12 +371,20 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
>          goto out_err;
>      vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
>  
> -    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
> -                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
> -    if ( !core2_vpmu_cxt )
> +    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
> +                                   sizeof(uint64_t) * fixed_pmc_cnt +
> +                                   sizeof(struct xen_pmu_cntr_pair) *
> +                                   arch_pmc_cnt);
> +    p = xzalloc_bytes(sizeof(uint64_t));
> +    if ( !core2_vpmu_cxt || !p )
>          goto out_err;
>  
> +    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
> +    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
> +                                    sizeof(uint64_t) * fixed_pmc_cnt;
> +
>      vpmu->context = (void *)core2_vpmu_cxt;
> +    vpmu->priv_context = (void *)p;
>  
>      vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
>  
> @@ -386,6 +395,9 @@ out_err:
>      vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR);
>      release_pmu_ownship(PMU_OWNER_HVM);
>  
> +    xfree(core2_vpmu_cxt);
> +    xfree(p);
> +
>      printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
>             v->vcpu_id, v->domain->domain_id);
>  
> @@ -421,7 +433,8 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>      int type = -1, index = -1;
>      struct vcpu *v = current;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
> +    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
> +    uint64_t *enabled_cntrs;
>  
>      if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
>      {
> @@ -447,10 +460,11 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>      }
>  
>      core2_vpmu_cxt = vpmu->context;
> +    enabled_cntrs = (uint64_t *)vpmu->priv_context;
>      switch ( msr )
>      {
>      case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
> -        core2_vpmu_cxt->global_ovf_status &= ~msr_content;
> +        core2_vpmu_cxt->global_status &= ~msr_content;
>          return 1;
>      case MSR_CORE_PERF_GLOBAL_STATUS:
>          gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
> @@ -484,15 +498,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>          break;
>      case MSR_CORE_PERF_FIXED_CTR_CTRL:
>          vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
> -        core2_vpmu_cxt->enabled_cntrs &=
> -                ~(((1ULL << VPMU_CORE2_MAX_FIXED_PMCS) - 1) << 32);
> +        *enabled_cntrs &= ~(((1ULL << fixed_pmc_cnt) - 1) << 32);
>          if ( msr_content != 0 )
>          {
>              u64 val = msr_content;
>              for ( i = 0; i < fixed_pmc_cnt; i++ )
>              {
>                  if ( val & 3 )
> -                    core2_vpmu_cxt->enabled_cntrs |= (1ULL << 32) << i;
> +                    *enabled_cntrs |= (1ULL << 32) << i;
>                  val >>= FIXED_CTR_CTRL_BITS;
>              }
>          }
> @@ -503,19 +516,21 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>          tmp = msr - MSR_P6_EVNTSEL0;
>          if ( tmp >= 0 && tmp < arch_pmc_cnt )
>          {
> +            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
> +                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
> +
>              vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
>  
>              if ( msr_content & (1ULL << 22) )
> -                core2_vpmu_cxt->enabled_cntrs |= 1ULL << tmp;
> +                *enabled_cntrs |= 1ULL << tmp;
>              else
> -                core2_vpmu_cxt->enabled_cntrs &= ~(1ULL << tmp);
> +                *enabled_cntrs &= ~(1ULL << tmp);
>  
> -            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
> +            xen_pmu_cntr_pair[tmp].control = msr_content;
>          }
>      }
>  
> -    if ((global_ctrl & core2_vpmu_cxt->enabled_cntrs) ||
> -        (core2_vpmu_cxt->ds_area != 0)  )
> +    if ((global_ctrl & *enabled_cntrs) || (core2_vpmu_cxt->ds_area != 0)  )
>          vpmu_set(vpmu, VPMU_RUNNING);
>      else
>          vpmu_reset(vpmu, VPMU_RUNNING);
> @@ -561,7 +576,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
>      int type = -1, index = -1;
>      struct vcpu *v = current;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
> +    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
>  
>      if ( core2_vpmu_msr_common_check(msr, &type, &index) )
>      {
> @@ -572,7 +587,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
>              *msr_content = 0;
>              break;
>          case MSR_CORE_PERF_GLOBAL_STATUS:
> -            *msr_content = core2_vpmu_cxt->global_ovf_status;
> +            *msr_content = core2_vpmu_cxt->global_status;
>              break;
>          case MSR_CORE_PERF_GLOBAL_CTRL:
>              vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
> @@ -621,8 +636,11 @@ static void core2_vpmu_dump(const struct vcpu *v)
>  {
>      const struct vpmu_struct *vpmu = vcpu_vpmu(v);
>      int i;
> -    const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
> +    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
>      u64 val;
> +    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
> +    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
> +        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);

This crashes the hypervisor because of dereferencing core2_vpmu_cxt (NULL) in
vpmu_reg_pointer().

  --> +    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;

Dietmar.

>  
>      if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
>           return;
> @@ -641,12 +659,9 @@ static void core2_vpmu_dump(const struct vcpu *v)
>  
>      /* Print the contents of the counter and its configuration msr. */
>      for ( i = 0; i < arch_pmc_cnt; i++ )
> -    {
> -        const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
> -
>          printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
> -               i, msr_pair[i].counter, msr_pair[i].control);
> -    }
> +            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
> +
>      /*
>       * The configuration of the fixed counter is 4 bits each in the
>       * MSR_CORE_PERF_FIXED_CTR_CTRL.
> @@ -655,7 +670,7 @@ static void core2_vpmu_dump(const struct vcpu *v)
>      for ( i = 0; i < fixed_pmc_cnt; i++ )
>      {
>          printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
> -               i, core2_vpmu_cxt->fix_counters[i],
> +               i, fixed_counters[i],
>                 val & FIXED_CTR_CTRL_MASK);
>          val >>= FIXED_CTR_CTRL_BITS;
>      }
> @@ -666,14 +681,14 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
>      struct vcpu *v = current;
>      u64 msr_content;
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
> +    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
>  
>      rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
>      if ( msr_content )
>      {
>          if ( is_pmc_quirk )
>              handle_pmc_quirk(msr_content);
> -        core2_vpmu_cxt->global_ovf_status |= msr_content;
> +        core2_vpmu_cxt->global_status |= msr_content;
>          msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
>          wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
>      }
> @@ -736,12 +751,6 @@ func_out:
>  
>      arch_pmc_cnt = core2_get_arch_pmc_count();
>      fixed_pmc_cnt = core2_get_fixed_pmc_count();
> -    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
> -    {
> -        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
> -        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
> -               fixed_pmc_cnt);
> -    }
>      check_pmc_quirk();
>  
>      return 0;
> @@ -755,6 +764,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
>          return;
>  
>      xfree(vpmu->context);
> +    xfree(vpmu->priv_context);
>      if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
>          core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
>      release_pmu_ownship(PMU_OWNER_HVM);
> diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
> index 0340e5b..0acc486 100644
> --- a/xen/arch/x86/hvm/vpmu.c
> +++ b/xen/arch/x86/hvm/vpmu.c
> @@ -31,6 +31,7 @@
>  #include <asm/hvm/svm/svm.h>
>  #include <asm/hvm/svm/vmcb.h>
>  #include <asm/apic.h>
> +#include <public/pmu.h>
>  
>  /*
>   * "vpmu" :     vpmu generally enabled
> diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
> index 3225937..5aae2e7 100644
> --- a/xen/arch/x86/oprofile/op_model_ppro.c
> +++ b/xen/arch/x86/oprofile/op_model_ppro.c
> @@ -20,11 +20,15 @@
>  #include <asm/regs.h>
>  #include <asm/current.h>
>  #include <asm/hvm/vpmu.h>
> -#include <asm/hvm/vmx/vpmu_core2.h>
>  
>  #include "op_x86_model.h"
>  #include "op_counter.h"
>  
> +struct arch_msr_pair {
> +    u64 counter;
> +    u64 control;
> +};
> +
>  /*
>   * Intel "Architectural Performance Monitoring" CPUID
>   * detection/enumeration details:
> diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
> deleted file mode 100644
> index 410372d..0000000
> --- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
> +++ /dev/null
> @@ -1,32 +0,0 @@
> -
> -/*
> - * vpmu_core2.h: CORE 2 specific PMU virtualization for HVM domain.
> - *
> - * Copyright (c) 2007, Intel Corporation.
> - *
> - * This program is free software; you can redistribute it and/or modify it
> - * under the terms and conditions of the GNU General Public License,
> - * version 2, as published by the Free Software Foundation.
> - *
> - * This program is distributed in the hope it will be useful, but WITHOUT
> - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> - * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> - * more details.
> - *
> - * You should have received a copy of the GNU General Public License along with
> - * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
> - * Place - Suite 330, Boston, MA 02111-1307 USA.
> - *
> - * Author: Haitao Shan <haitao.shan@intel.com>
> - */
> -
> -#ifndef __ASM_X86_HVM_VPMU_CORE_H_
> -#define __ASM_X86_HVM_VPMU_CORE_H_
> -
> -struct arch_msr_pair {
> -    u64 counter;
> -    u64 control;
> -};
> -
> -#endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
> -
> diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
> index 7ee0f01..3e5d9de 100644
> --- a/xen/include/asm-x86/hvm/vpmu.h
> +++ b/xen/include/asm-x86/hvm/vpmu.h
> @@ -22,6 +22,8 @@
>  #ifndef __ASM_X86_HVM_VPMU_H_
>  #define __ASM_X86_HVM_VPMU_H_
>  
> +#include <public/pmu.h>
> +
>  /*
>   * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
>   * See arch/x86/hvm/vpmu.c.
> @@ -29,12 +31,9 @@
>  #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
>  #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
>  
> -
> -#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
>  #define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
>  #define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
>                                            arch.hvm_vcpu.vpmu))
> -#define vpmu_domain(vpmu) (vpmu_vcpu(vpmu)->domain)
>  
>  #define MSR_TYPE_COUNTER            0
>  #define MSR_TYPE_CTRL               1
> @@ -42,6 +41,9 @@
>  #define MSR_TYPE_ARCH_COUNTER       3
>  #define MSR_TYPE_ARCH_CTRL          4
>  
> +/* Start of PMU register bank */
> +#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
> +                                                 (uintptr_t)ctxt->offset))
>  
>  /* Arch specific operations shared by all vpmus */
>  struct arch_vpmu_ops {
> @@ -64,7 +66,8 @@ struct vpmu_struct {
>      u32 flags;
>      u32 last_pcpu;
>      u32 hw_lapic_lvtpc;
> -    void *context;
> +    void *context;      /* May be shared with PV guest */
> +    void *priv_context; /* hypervisor-only */
>      struct arch_vpmu_ops *arch_vpmu_ops;
>  };
>  
> @@ -76,11 +79,6 @@ struct vpmu_struct {
>  #define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
>  #define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
>  
> -/* VPMU features */
> -#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
> -#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
> -
> -
>  #define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
>  #define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
>  #define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 7496556..e982b53 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -388,6 +388,9 @@ typedef uint64_t xen_callback_t;
>  
>  #endif
>  
> +/* Stub definition of PMU structure */
> +typedef struct xen_arch_pmu {} xen_arch_pmu_t;
> +
>  #endif /*  __XEN_PUBLIC_ARCH_ARM_H__ */
>  
>  /*
> diff --git a/xen/include/public/arch-x86/pmu.h b/xen/include/public/arch-x86/pmu.h
> new file mode 100644
> index 0000000..b4eda67
> --- /dev/null
> +++ b/xen/include/public/arch-x86/pmu.h
> @@ -0,0 +1,62 @@
> +#ifndef __XEN_PUBLIC_ARCH_X86_PMU_H__
> +#define __XEN_PUBLIC_ARCH_X86_PMU_H__
> +
> +/* x86-specific PMU definitions */
> +
> +/* AMD PMU registers and structures */
> +struct xen_pmu_amd_ctxt {
> +    uint32_t counters;       /* Offset to counter MSRs */
> +    uint32_t ctrls;          /* Offset to control MSRs */
> +};
> +
> +/* Intel PMU registers and structures */
> +struct xen_pmu_cntr_pair {
> +    uint64_t counter;
> +    uint64_t control;
> +};
> +
> +struct xen_pmu_intel_ctxt {
> +    uint64_t global_ctrl;
> +    uint64_t global_ovf_ctrl;
> +    uint64_t global_status;
> +    uint64_t fixed_ctrl;
> +    uint64_t ds_area;
> +    uint64_t pebs_enable;
> +    uint64_t debugctl;
> +    uint32_t fixed_counters;  /* Offset to fixed counter MSRs */
> +    uint32_t arch_counters;   /* Offset to architectural counter MSRs */
> +};
> +
> +#define XENPMU_MAX_CTXT_SZ        (sizeof(struct xen_pmu_amd_ctxt) > \
> +                                    sizeof(struct xen_pmu_intel_ctxt) ? \
> +                                     sizeof(struct xen_pmu_amd_ctxt) : \
> +                                     sizeof(struct xen_pmu_intel_ctxt))
> +#define XENPMU_CTXT_PAD_SZ        (((XENPMU_MAX_CTXT_SZ + 64) & ~63) + 128)
> +struct xen_arch_pmu {
> +    union {
> +        struct cpu_user_regs regs;
> +        uint8_t pad1[256];
> +    } r;
> +    union {
> +        uint32_t lapic_lvtpc;
> +        uint64_t pad2;
> +    } l;
> +    union {
> +        struct xen_pmu_amd_ctxt amd;
> +        struct xen_pmu_intel_ctxt intel;
> +        uint8_t pad3[XENPMU_CTXT_PAD_SZ];
> +    } c;
> +};
> +typedef struct xen_arch_pmu xen_arch_pmu_t;
> +
> +#endif /* __XEN_PUBLIC_ARCH_X86_PMU_H__ */
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> +
> diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
> new file mode 100644
> index 0000000..3ffd2cf
> --- /dev/null
> +++ b/xen/include/public/pmu.h
> @@ -0,0 +1,38 @@
> +#ifndef __XEN_PUBLIC_PMU_H__
> +#define __XEN_PUBLIC_PMU_H__
> +
> +#include "xen.h"
> +#if defined(__i386__) || defined(__x86_64__)
> +#include "arch-x86/pmu.h"
> +#elif defined (__arm__) || defined (__aarch64__)
> +#include "arch-arm.h"
> +#else
> +#error "Unsupported architecture"
> +#endif
> +
> +#define XENPMU_VER_MAJ    0
> +#define XENPMU_VER_MIN    0
> +
> +
> +/* Shared between hypervisor and PV domain */
> +struct xen_pmu_data {
> +    uint32_t domain_id;
> +    uint32_t vcpu_id;
> +    uint32_t pcpu_id;
> +    uint32_t pmu_flags;
> +
> +    xen_arch_pmu_t pmu;
> +};
> +typedef struct xen_pmu_data xen_pmu_data_t;
> +
> +#endif /* __XEN_PUBLIC_PMU_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> 

-- 
Company details: http://ts.fujitsu.com/imprint.html


* Re: [PATCH v6 11/19] x86/VPMU: Initialize PMU for PV guests
  2014-05-20 17:47     ` Boris Ostrovsky
@ 2014-05-21  8:01       ` Jan Beulich
  2014-05-21 14:03         ` Boris Ostrovsky
  0 siblings, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2014-05-21  8:01 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 20.05.14 at 19:47, <boris.ostrovsky@oracle.com> wrote:
> On 05/20/2014 11:51 AM, Jan Beulich wrote:
>>>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>> @@ -1150,7 +1150,8 @@ static int svm_vcpu_initialise(struct vcpu *v)
>>>           return rc;
>>>       }
>>>   
>>> -    vpmu_initialise(v);
>>> +    if ( is_hvm_domain(v->domain) )
>>> +        vpmu_initialise(v);
>> Why?
> 
> This patch adds initialization for PV domains, which is conditioned by 
> is_pv_domain() in amd_vpmu_initialise()/core2_vpmu_alloc_resource(). I 
> don't want PVH domains (which call these routines) to try to set up 
> their VPMUs from here. This is supposed to happen via pvpmu_init() 
> (which in this patch will return -EINVAL for PVH).

So what's the reason for making PVH PV-like here rather than
HVM-like? With the intended goal of making PVH a HVM sub-mode,
that'll require more exceptions in the long run.

Jan


* Re: [PATCH v6 08/19] x86/VPMU: Add public xenpmu.h
  2014-05-21  7:19   ` Dietmar Hahn
@ 2014-05-21 13:56     ` Boris Ostrovsky
  0 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-21 13:56 UTC (permalink / raw)
  To: Dietmar Hahn
  Cc: kevin.tian, keir, JBeulich, jun.nakajima, andrew.cooper3,
	donald.d.dugger, xen-devel, suravee.suthikulpanit

On 05/21/2014 03:19 AM, Dietmar Hahn wrote:
On Tuesday 13 May 2014, 11:53:22, Boris Ostrovsky wrote:
>> Add pmu.h header files, move various macros and structures that will be
>> shared between hypervisor and PV guests to it.
>>
>> Move MSR banks out of architectural PMU structures to allow for larger sizes
>> in the future. The banks are allocated immediately after the context and
>> PMU structures store offsets to them.
>>
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
>> Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
> It seems I didn't test correctly, because calling
> # xl debug-keys q
> when running a HVM Linux guest crashes the hypervisor.
> Please see below.


Yes, trying to dereference a pointer that was explicitly set to NULL two
lines above is not a particularly good idea.

Thanks for debugging this.

-boris


* Re: [PATCH v6 11/19] x86/VPMU: Initialize PMU for PV guests
  2014-05-21  8:01       ` Jan Beulich
@ 2014-05-21 14:03         ` Boris Ostrovsky
  0 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-21 14:03 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

On 05/21/2014 04:01 AM, Jan Beulich wrote:
>>>> On 20.05.14 at 19:47, <boris.ostrovsky@oracle.com> wrote:
>> On 05/20/2014 11:51 AM, Jan Beulich wrote:
>>>>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
>>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>>> @@ -1150,7 +1150,8 @@ static int svm_vcpu_initialise(struct vcpu *v)
>>>>            return rc;
>>>>        }
>>>>    
>>>> -    vpmu_initialise(v);
>>>> +    if ( is_hvm_domain(v->domain) )
>>>> +        vpmu_initialise(v);
>>> Why?
>> This patch adds initialization for PV domains, which is conditioned by
>> is_pv_domain() in amd_vpmu_initialise()/core2_vpmu_alloc_resource(). I
>> don't want PVH domains (which call these routines) to try to set up
>> their VPMUs from here. This is supposed to happen via pvpmu_init()
>> (which in this patch will return -EINVAL for PVH).
> So what's the reason for making PVH PV-like here rather than
> HVM-like? With the intended goal of making PVH a HVM sub-mode,
> that'll require more exceptions in the long run.


Primarily because of interrupt handling: we don't end up in the HVM guest's
interrupt handler for PMU interrupts but rather in the PV PMU_VIRQ handler,
which expects the shared context to be set up. This means that
initialization for PVH has to be slightly different.
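
Roughly, the two delivery paths look like this (a sketch; VIRQ_XENPMU
stands in for whatever name the series gives the PMU VIRQ):

    if ( is_hvm_domain(v->domain) )
        /* HVM: inject through the virtual LAPIC, as on bare metal */
        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
    else
        /* PV(H): raise the PMU VIRQ; the guest's handler reads the
         * shared xenpmu_data page, which must be set up beforehand */
        send_guest_vcpu_virq(v, VIRQ_XENPMU);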

-boris


* Re: [PATCH v6 05/19] vmx: Merge MSR management routines
  2014-05-13 15:53 ` [PATCH v6 05/19] vmx: Merge MSR management routines Boris Ostrovsky
  2014-05-19 12:00   ` Tian, Kevin
@ 2014-05-22 10:24   ` Dietmar Hahn
  2014-05-22 13:48     ` Boris Ostrovsky
  1 sibling, 1 reply; 65+ messages in thread
From: Dietmar Hahn @ 2014-05-22 10:24 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, keir, JBeulich, jun.nakajima, andrew.cooper3,
	donald.d.dugger, suravee.suthikulpanit, Boris Ostrovsky

On Tuesday 13 May 2014, 11:53:19, Boris Ostrovsky wrote:
> vmx_add_host_load_msr()/vmx_rm_host_load_msr() and vmx_add_guest_msr()/vmx_rm_guest_msr()
> share a fair amount of code. Merge them to simplify code maintenance.

Another hypervisor crash.

> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> ---
>  xen/arch/x86/hvm/vmx/vmcs.c        | 154 +++++++++++++++++--------------------
>  xen/arch/x86/hvm/vmx/vmx.c         |   4 +-
>  xen/arch/x86/hvm/vmx/vpmu_core2.c  |   8 +-
>  xen/include/asm-x86/hvm/vmx/vmcs.h |  10 ++-
>  4 files changed, 83 insertions(+), 93 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index 0f43a1b..aaa3691 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -1172,121 +1172,109 @@ int vmx_write_guest_msr(u32 msr, u64 val)
>      return -ESRCH;
>  }
>  
> -int vmx_add_guest_msr(u32 msr)
> +int vmx_add_msr(u32 msr, u8 type)
>  {
>      struct vcpu *curr = current;
> -    unsigned int i, msr_count = curr->arch.hvm_vmx.msr_count;
> -    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
> +    unsigned int idx, *msr_count;
> +    struct vmx_msr_entry **msr_area;
>  
> -    if ( msr_area == NULL )
> +    ASSERT( (type == VMX_GUEST_MSR) || (type == VMX_HOST_MSR) );
> +
> +    if ( type == VMX_GUEST_MSR )
>      {
> -        if ( (msr_area = alloc_xenheap_page()) == NULL )
> +        msr_count = &curr->arch.hvm_vmx.msr_count;
> +        msr_area = &curr->arch.hvm_vmx.msr_area;
> +    }
> +    else
> +    {
> +        msr_count = &curr->arch.hvm_vmx.host_msr_count;
> +        msr_area = &curr->arch.hvm_vmx.host_msr_area;
> +    }
> +
> +    if ( *msr_area == NULL )
> +    {
> +        if ( (*msr_area = alloc_xenheap_page()) == NULL )
>              return -ENOMEM;
> -        curr->arch.hvm_vmx.msr_area = msr_area;
> -        __vmwrite(VM_EXIT_MSR_STORE_ADDR, virt_to_maddr(msr_area));
> -        __vmwrite(VM_ENTRY_MSR_LOAD_ADDR, virt_to_maddr(msr_area));
> +
> +        if ( type == VMX_GUEST_MSR )
> +        {
> +            __vmwrite(VM_EXIT_MSR_STORE_ADDR, virt_to_maddr(*msr_area));
> +            __vmwrite(VM_ENTRY_MSR_LOAD_ADDR, virt_to_maddr(*msr_area));
> +        }
> +        else
> +            __vmwrite(VM_EXIT_MSR_LOAD_ADDR, virt_to_maddr(*msr_area));
>      }
>  
> -    for ( i = 0; i < msr_count; i++ )
> -        if ( msr_area[i].index == msr )
> +    for ( idx = 0; idx < *msr_count; idx++ )
> +        if ( msr_area[idx]->index == msr )
>              return 0;
>  
> -    if ( msr_count == (PAGE_SIZE / sizeof(struct vmx_msr_entry)) )
> +    if ( *msr_count == (PAGE_SIZE / sizeof(struct vmx_msr_entry)) )
>          return -ENOSPC;
>  
> -    msr_area[msr_count].index = msr;
> -    msr_area[msr_count].mbz   = 0;
> -    msr_area[msr_count].data  = 0;
> -    curr->arch.hvm_vmx.msr_count = ++msr_count;
> -    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
> -    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
> +    msr_area[*msr_count]->index = msr;

The addressing of the array msr_area[] is wrong. You need something like
       (*msr_area)[*msr_count].index = msr;
or similar.
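
To spell it out: msr_area is a struct vmx_msr_entry **, i.e. it points
at the single field holding the page pointer, not at an array of
pointers, so (illustrative):

    msr_area[idx]->index     /* reads memory idx slots past the stored
                                page pointer and treats it as a pointer */
    (*msr_area)[idx].index   /* dereferences the page pointer once, then
                                indexes into the vmx_msr_entry array */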

Dietmar.

> +    msr_area[*msr_count]->mbz   = 0;
> +    (*msr_count)++;
> +    if ( type == VMX_GUEST_MSR )
> +    {
> +        msr_area[*msr_count - 1]->data  = 0;
> +        __vmwrite(VM_EXIT_MSR_STORE_COUNT, *msr_count);
> +        __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, *msr_count);
> +    }
> +    else
> +    {
> +        rdmsrl(msr, msr_area[*msr_count - 1]->data);
> +        __vmwrite(VM_EXIT_MSR_LOAD_COUNT, *msr_count);
> +    }
>  
>      return 0;
>  }
>  
> -void vmx_rm_guest_msr(u32 msr)
> +void vmx_rm_msr(u32 msr, u8 type)
>  {
>      struct vcpu *curr = current;
> -    unsigned int idx, msr_count = curr->arch.hvm_vmx.msr_count;
> -    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
> +    unsigned int idx, *msr_count;
> +    struct vmx_msr_entry **msr_area;
>  
> -    if ( msr_area == NULL )
> -        return;
> -
> -    for ( idx = 0; idx < msr_count; idx++ )
> -        if ( msr_area[idx].index == msr )
> -            break;
> +    ASSERT( (type == VMX_GUEST_MSR) || (type == VMX_HOST_MSR) );
>  
> -    if ( idx == msr_count )
> -        return;
> -
> -    for ( ; idx < msr_count - 1; idx++ )
> +    if ( type == VMX_GUEST_MSR )
>      {
> -        msr_area[idx].index = msr_area[idx + 1].index;
> -        msr_area[idx].data = msr_area[idx + 1].data;
> +        msr_count = &curr->arch.hvm_vmx.msr_count;
> +        msr_area = &curr->arch.hvm_vmx.msr_area;
>      }
> -    msr_area[msr_count - 1].index = 0;
> -
> -    curr->arch.hvm_vmx.msr_count = --msr_count;
> -    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
> -    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
> -}
> -
> -int vmx_add_host_load_msr(u32 msr)
> -{
> -    struct vcpu *curr = current;
> -    unsigned int i, msr_count = curr->arch.hvm_vmx.host_msr_count;
> -    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.host_msr_area;
> -
> -    if ( msr_area == NULL )
> +    else
>      {
> -        if ( (msr_area = alloc_xenheap_page()) == NULL )
> -            return -ENOMEM;
> -        curr->arch.hvm_vmx.host_msr_area = msr_area;
> -        __vmwrite(VM_EXIT_MSR_LOAD_ADDR, virt_to_maddr(msr_area));
> +        msr_count = &curr->arch.hvm_vmx.host_msr_count;
> +        msr_area = &curr->arch.hvm_vmx.host_msr_area;
>      }
>  
> -    for ( i = 0; i < msr_count; i++ )
> -        if ( msr_area[i].index == msr )
> -            return 0;
> -
> -    if ( msr_count == (PAGE_SIZE / sizeof(struct vmx_msr_entry)) )
> -        return -ENOSPC;
> -
> -    msr_area[msr_count].index = msr;
> -    msr_area[msr_count].mbz   = 0;
> -    rdmsrl(msr, msr_area[msr_count].data);
> -    curr->arch.hvm_vmx.host_msr_count = ++msr_count;
> -    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
> -
> -    return 0;
> -}
> -
> -void vmx_rm_host_load_msr(u32 msr)
> -{
> -    struct vcpu *curr = current;
> -    unsigned int idx,  msr_count = curr->arch.hvm_vmx.host_msr_count;
> -    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.host_msr_area;
> -
> -    if ( msr_area == NULL )
> +    if ( *msr_area == NULL )
>          return;
>  
> -    for ( idx = 0; idx < msr_count; idx++ )
> -        if ( msr_area[idx].index == msr )
> +    for ( idx = 0; idx < *msr_count; idx++ )
> +        if ( msr_area[idx]->index == msr )
>              break;
>  
> -    if ( idx == msr_count )
> +    if ( idx == *msr_count )
>          return;
>  
> -    for ( ; idx < msr_count - 1; idx++ )
> +    for ( ; idx < *msr_count - 1; idx++ )
>      {
> -        msr_area[idx].index = msr_area[idx + 1].index;
> -        msr_area[idx].data = msr_area[idx + 1].data;
> +        msr_area[idx]->index = msr_area[idx + 1]->index;
> +        msr_area[idx]->data = msr_area[idx + 1]->data;
> +    }
> +    msr_area[*msr_count - 1]->index = 0;
> +    (*msr_count)--;
> +    if ( type == VMX_GUEST_MSR )
> +    {
> +        __vmwrite(VM_EXIT_MSR_STORE_COUNT, *msr_count);
> +        __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, *msr_count);
> +    }
> +    else
> +    {
> +        __vmwrite(VM_EXIT_MSR_LOAD_COUNT, *msr_count);
>      }
> -    msr_area[msr_count - 1].index = 0;
> -
> -    curr->arch.hvm_vmx.host_msr_count = --msr_count;
> -    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
>  }
>  
>  void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index ecdbc17..23d58d9 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2234,12 +2234,12 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>  
>              for ( ; (rc == 0) && lbr->count; lbr++ )
>                  for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
> -                    if ( (rc = vmx_add_guest_msr(lbr->base + i)) == 0 )
> +                    if ( (rc = vmx_add_msr(lbr->base + i, VMX_GUEST_MSR)) == 0 )
>                          vmx_disable_intercept_for_msr(v, lbr->base + i, MSR_TYPE_R | MSR_TYPE_W);
>          }
>  
>          if ( (rc < 0) ||
> -             (vmx_add_host_load_msr(msr) < 0) )
> +             (vmx_add_msr(msr, VMX_HOST_MSR) < 0) )
>              hvm_inject_hw_exception(TRAP_machine_check, 0);
>          else
>          {
> diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> index 0a9c643..5e980fa 100644
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -370,10 +370,10 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
>          return 0;
>  
>      wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
> -    if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
> +    if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR) )
>          goto out_err;
>  
> -    if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
> +    if ( vmx_add_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR) )
>          goto out_err;
>      vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
>                   core2_calc_intial_glb_ctrl_msr());
> @@ -390,8 +390,8 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
>      return 1;
>  
>  out_err:
> -    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
> -    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
> +    vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_HOST_MSR);
> +    vmx_rm_msr(MSR_CORE_PERF_GLOBAL_CTRL, VMX_GUEST_MSR);
>      release_pmu_ownship(PMU_OWNER_HVM);
>  
>      printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
> index 50befe1..dd34b2c 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -475,14 +475,16 @@ enum vmcs_field {
>  
>  #define MSR_TYPE_R 1
>  #define MSR_TYPE_W 2
> +
> +#define VMX_GUEST_MSR 0
> +#define VMX_HOST_MSR  1
> +
>  void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
>  void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
>  int vmx_read_guest_msr(u32 msr, u64 *val);
>  int vmx_write_guest_msr(u32 msr, u64 val);
> -int vmx_add_guest_msr(u32 msr);
> -void vmx_rm_guest_msr(u32 msr);
> -int vmx_add_host_load_msr(u32 msr);
> -void vmx_rm_host_load_msr(u32 msr);
> +int vmx_add_msr(u32 msr, u8 type);
> +void vmx_rm_msr(u32 msr, u8 type);
>  void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
>  void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
>  void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
> 

-- 
Company details: http://ts.fujitsu.com/imprint.html


* Re: [PATCH v6 05/19] vmx: Merge MSR management routines
  2014-05-22 10:24   ` Dietmar Hahn
@ 2014-05-22 13:48     ` Boris Ostrovsky
  0 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-22 13:48 UTC (permalink / raw)
  To: Dietmar Hahn
  Cc: kevin.tian, keir, JBeulich, jun.nakajima, andrew.cooper3,
	donald.d.dugger, xen-devel, suravee.suthikulpanit

On 05/22/2014 06:24 AM, Dietmar Hahn wrote:
> On Tuesday 13 May 2014, 11:53:19, Boris Ostrovsky wrote:
>> vmx_add_host_load_msr()/vmx_rm_host_load_msr() and vmx_add_guest_msr()/vmx_rm_guest_msr()
>> share a fair amount of code. Merge them to simplify code maintenance.
> Another hypervisor crash.

Thanks, I already fixed this for v7 (this was a new change in v6).


-boris


* Re: [PATCH v6 12/19] x86/VPMU: Add support for PMU register handling on PV guests
  2014-05-13 15:53 ` [PATCH v6 12/19] x86/VPMU: Add support for PMU register handling on " Boris Ostrovsky
@ 2014-05-22 14:50   ` Jan Beulich
  2014-05-22 17:16     ` Boris Ostrovsky
  0 siblings, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2014-05-22 14:50 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> @@ -540,7 +570,8 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>          }
>      }
>  
> -    if ((global_ctrl & *enabled_cntrs) || (core2_vpmu_cxt->ds_area != 0)  )
> +    if ((core2_vpmu_cxt->global_ctrl & *enabled_cntrs) ||
> +        (core2_vpmu_cxt->ds_area != 0)  )

Please fix coding style issues in code that you touch anyway ...

> @@ -570,13 +601,19 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>                  inject_gp = 1;
>              break;
>          }
> -        if (inject_gp)
> -            hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +
> +        if (inject_gp) 
> +            inject_trap(v, TRAP_gp_fault);

... and don't make coding style issues even worse (trailing blank
being added here).

> @@ -868,8 +869,10 @@ void pv_cpuid(struct cpu_user_regs *regs)
>          __clear_bit(X86_FEATURE_TOPOEXT % 32, &c);
>          break;
>  
> +    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) */
> +        break; 
> +
>      case 0x00000005: /* MONITOR/MWAIT */
> -    case 0x0000000a: /* Architectural Performance Monitor Features */

Is there actually anything wrong with leaving this as it was, i.e.
clearing a, b, c, and d?

> @@ -885,6 +888,8 @@ void pv_cpuid(struct cpu_user_regs *regs)
>      }
>  
>   out:
> +    vpmu_do_cpuid(regs->eax, &a, &b, &c, &d);

This seems incomplete without passing in regs->ecx. And without at
least a brief comment it also looks misplaced at the first glance.

> @@ -2515,7 +2520,14 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
>              if ( v->arch.debugreg[7] & DR7_ACTIVE_MASK )
>                  wrmsrl(regs->_ecx, msr_content);
>              break;
> -
> +        case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
> +        case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
> +        case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
> +        case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
> +        case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
> +            if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
> +                goto invalid;
> +            break;

Can you really handle both Intel and AMD ones as a group here,
without consideration whose CPU you're actually running on? I
think for forward compatibility you should be making the call only
for Intel MSRs on Intel CPUs, and respectively for AMD.

Jan


* Re: [PATCH v6 13/19] x86/VPMU: Handle PMU interrupts for PV guests
  2014-05-13 15:53 ` [PATCH v6 13/19] x86/VPMU: Handle PMU interrupts for " Boris Ostrovsky
@ 2014-05-22 15:30   ` Jan Beulich
  2014-05-22 17:25     ` Boris Ostrovsky
  0 siblings, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2014-05-22 15:30 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> --- a/xen/arch/x86/hvm/vpmu.c
> +++ b/xen/arch/x86/hvm/vpmu.c
> @@ -76,7 +76,12 @@ void vpmu_lvtpc_update(uint32_t val)
>      struct vpmu_struct *vpmu = vcpu_vpmu(current);
>  
>      vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
> -    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
> +
> +    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
> +    if ( !is_pv_domain(current->domain) ||
> +         !(current->arch.vpmu.xenpmu_data &&
> +           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )

Please be consistent with parenthesization - compare this and ...

> @@ -87,7 +92,23 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
>          return 0;
>  
>      if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
> -        return vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
> +    {
> +        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
> +
> +        /*
> +         * We may have received a PMU interrupt during WRMSR handling
> +         * and since do_wrmsr may load VPMU context we should save
> +         * (and unload) it again.
> +         */
> +        if ( !is_hvm_domain(current->domain) &&
> +            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )

... this.

> @@ -99,14 +120,87 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
>          return 0;
>  
>      if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
> -        return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
> +    {
> +        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
> +
> +        if ( !is_hvm_domain(current->domain) &&
> +            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )

Wouldn't the same comment as in WRMSR handling apply here too?
If so, either replicate it or add a brief comment referring to the
other one.

>  int vpmu_do_interrupt(struct cpu_user_regs *regs)
>  {
>      struct vcpu *v = current;
> -    struct vpmu_struct *vpmu = vcpu_vpmu(v);
> +    struct vpmu_struct *vpmu;
> +
> +    /* dom0 will handle interrupt for special domains (e.g. idle domain) */
> +    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
> +        v = hardware_domain->vcpu[smp_processor_id() %
> +            hardware_domain->max_vcpus];

Please don't assume that all fields in the array are populated - there
are plenty of examples where pointers read from this array get
checked against NULL.
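
E.g. (a sketch):

    v = hardware_domain->vcpu[smp_processor_id() %
                              hardware_domain->max_vcpus];
    if ( v == NULL )
        return 0; /* vCPU not initialized - nothing to deliver to */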

> +        /* Store appropriate registers in xenpmu_data */
> +        if ( is_pv_32bit_domain(current->domain) )
> +        {
> +            /*
> +             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
> +             * and therefore we treat it the same way as a non-privileged
> +             * PV 32-bit domain.
> +             */
> +            struct compat_cpu_user_regs *cmp;
> +
> +            gregs = guest_cpu_user_regs();
> +
> +            cmp = (void *)&v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +            XLAT_cpu_user_regs(cmp, gregs);
> +            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> +                   &cmp, sizeof(struct compat_cpu_user_regs));

Afaict this memcpy() does nothing (src == dst).

Jan


* Re: [PATCH v6 12/19] x86/VPMU: Add support for PMU register handling on PV guests
  2014-05-22 14:50   ` Jan Beulich
@ 2014-05-22 17:16     ` Boris Ostrovsky
  2014-05-23  6:27       ` Jan Beulich
  0 siblings, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-22 17:16 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

On 05/22/2014 10:50 AM, Jan Beulich wrote:
>
>> @@ -868,8 +869,10 @@ void pv_cpuid(struct cpu_user_regs *regs)
>>           __clear_bit(X86_FEATURE_TOPOEXT % 32, &c);
>>           break;
>>   
>> +    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) */
>> +        break;
>> +
>>       case 0x00000005: /* MONITOR/MWAIT */
>> -    case 0x0000000a: /* Architectural Performance Monitor Features */
> Is there actually anything wrong with leaving this as it was, i.e.
> clearing a, b, c, and d?


Since AMD's PMU-related CPUID is 0x80000001 (and is not used currently 
anyway as there is no do_cpuid op in AMD's vpmu) I think I'll just move 
vpmu_do_cpuid() into 0x0000000a case.

>
>> @@ -885,6 +888,8 @@ void pv_cpuid(struct cpu_user_regs *regs)
>>       }
>>   
>>    out:
>> +    vpmu_do_cpuid(regs->eax, &a, &b, &c, &d);
> This seems incomplete without passing in regs->ecx. And without at
> least a brief comment it also looks misplaced at the first glance.


vpmu_do_cpuid() doesn't use indexed CPUIDs (but I can see how ecx could
have been added for consistency if I kept the call where it is now).


>
>> @@ -2515,7 +2520,14 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
>>               if ( v->arch.debugreg[7] & DR7_ACTIVE_MASK )
>>                   wrmsrl(regs->_ecx, msr_content);
>>               break;
>> -
>> +        case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
>> +        case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
>> +        case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
>> +        case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
>> +        case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
>> +            if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
>> +                goto invalid;
>> +            break;
> Can you really handle both Intel and AMD ones as a group here,
> without consideration whose CPU you're actually running on? I
> think for forward compatibility you should be making the call only
> for Intel MSRs on Intel CPUs, and respectively for AMD.


The vendor-specific paths are taken in vpmu_do_wrmsr() (and rdmsr). Not 
sure if splitting this into two cases would be better but if you feel it 
adds to clarity I can do this.
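
If it helps, the split would look something like this (a sketch,
untested):

    case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
    case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
    case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
    case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
            goto invalid;
        if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
            goto invalid;
        break;
    case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
        if ( boot_cpu_data.x86_vendor != X86_VENDOR_AMD )
            goto invalid;
        if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
            goto invalid;
        break;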


-boris


* Re: [PATCH v6 13/19] x86/VPMU: Handle PMU interrupts for PV guests
  2014-05-22 15:30   ` Jan Beulich
@ 2014-05-22 17:25     ` Boris Ostrovsky
  2014-05-23  6:29       ` Jan Beulich
  0 siblings, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-22 17:25 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

On 05/22/2014 11:30 AM, Jan Beulich wrote:
>
>> @@ -99,14 +120,87 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
>>           return 0;
>>   
>>       if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
>> -        return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
>> +    {
>> +        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
>> +
>> +        if ( !is_hvm_domain(current->domain) &&
>> +            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
> Wouldn't the same comment as in WRMSR handling apply here too?
> If so, either replicate it or add a brief comment referring to the
> other one.


Yes, the comment is applicable here too and the next patch merges the 
two routines.


>> +        /* Store appropriate registers in xenpmu_data */
>> +        if ( is_pv_32bit_domain(current->domain) )
>> +        {
>> +            /*
>> +             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
>> +             * and therefore we treat it the same way as a non-privileged
>> +             * PV 32-bit domain.
>> +             */
>> +            struct compat_cpu_user_regs *cmp;
>> +
>> +            gregs = guest_cpu_user_regs();
>> +
>> +            cmp = (void *)&v->arch.vpmu.xenpmu_data->pmu.r.regs;
>> +            XLAT_cpu_user_regs(cmp, gregs);
>> +            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>> +                   &cmp, sizeof(struct compat_cpu_user_regs));
> Afaict this memcpy() does nothing (src == dst).

Yes, the memcpy is unnecessary here; the copying is already done by the
preceding XLAT_cpu_user_regs().
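
So that block reduces to (a sketch):

    struct compat_cpu_user_regs *cmp =
        (void *)&v->arch.vpmu.xenpmu_data->pmu.r.regs;

    /* XLAT_cpu_user_regs() already writes into the shared page */
    XLAT_cpu_user_regs(cmp, guest_cpu_user_regs());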


-boris


* Re: [PATCH v6 12/19] x86/VPMU: Add support for PMU register handling on PV guests
  2014-05-22 17:16     ` Boris Ostrovsky
@ 2014-05-23  6:27       ` Jan Beulich
  0 siblings, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2014-05-23  6:27 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 22.05.14 at 19:16, <boris.ostrovsky@oracle.com> wrote:
> On 05/22/2014 10:50 AM, Jan Beulich wrote:
>>
>>> @@ -868,8 +869,10 @@ void pv_cpuid(struct cpu_user_regs *regs)
>>>           __clear_bit(X86_FEATURE_TOPOEXT % 32, &c);
>>>           break;
>>>   
>>> +    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) 
> */
>>> +        break;
>>> +
>>>       case 0x00000005: /* MONITOR/MWAIT */
>>> -    case 0x0000000a: /* Architectural Performance Monitor Features */
>> Is there actually anything wrong with leaving this as it was, i.e.
>> clearing a, b, c, and d?
> 
> 
> Since AMD's PMU-related CPUID is 0x80000001 (and is not used currently 
> anyway as there is no do_cpuid op in AMD's vpmu) I think I'll just move 
> vpmu_do_cpuid() into 0x0000000a case.

Iirc this wouldn't work, as the function as it is right now (without any
of your patches, and I don't think they remove that code) wants to
alter leaf 1.

>>> @@ -2515,7 +2520,14 @@ static int emulate_privileged_op(struct cpu_user_regs 
> *regs)
>>>               if ( v->arch.debugreg[7] & DR7_ACTIVE_MASK )
>>>                   wrmsrl(regs->_ecx, msr_content);
>>>               break;
>>> -
>>> +        case MSR_P6_PERFCTR0...MSR_P6_PERFCTR1:
>>> +        case MSR_P6_EVNTSEL0...MSR_P6_EVNTSEL1:
>>> +        case MSR_CORE_PERF_FIXED_CTR0...MSR_CORE_PERF_FIXED_CTR2:
>>> +        case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
>>> +        case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
>>> +            if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
>>> +                goto invalid;
>>> +            break;
>> Can you really handle both Intel and AMD ones as a group here,
>> without consideration whose CPU you're actually running on? I
>> think for forward compatibility you should be making the call only
>> for Intel MSRs on Intel CPUs, and respectively for AMD.
> 
> 
> The vendor-specific paths are taken in vpmu_do_wrmsr() (and rdmsr). Not 
> sure if splitting this into two cases would be better but if you feel it 
> adds to clarity I can do this.

The fear I'm having is that eventually there might be MSR index
clashes. Doing it properly now saves us from future headache.

Jan


* Re: [PATCH v6 13/19] x86/VPMU: Handle PMU interrupts for PV guests
  2014-05-22 17:25     ` Boris Ostrovsky
@ 2014-05-23  6:29       ` Jan Beulich
  0 siblings, 0 replies; 65+ messages in thread
From: Jan Beulich @ 2014-05-23  6:29 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 22.05.14 at 19:25, <boris.ostrovsky@oracle.com> wrote:
> On 05/22/2014 11:30 AM, Jan Beulich wrote:
>>
>>> @@ -99,14 +120,87 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t 
> *msr_content)
>>>           return 0;
>>>   
>>>       if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
>>> -        return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
>>> +    {
>>> +        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
>>> +
>>> +        if ( !is_hvm_domain(current->domain) &&
>>> +            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
>> Wouldn't the same comment as in WRMSR handling apply here too?
>> If so, either replicate it or add a brief comment referring to the
>> other one.
> 
> 
> Yes, the comment is applicable here too and the next patch merges the 
> two routines.

When I saw that the next patch merges the two, I realized that my
request here was sort of pointless. The question in such cases, of
course, is whether the merging wouldn't better be done up front.

Jan


* Re: [PATCH v6 15/19] x86/VPMU: Add privileged PMU mode
  2014-05-13 15:53 ` [PATCH v6 15/19] x86/VPMU: Add privileged PMU mode Boris Ostrovsky
@ 2014-05-26 11:48   ` Jan Beulich
  2014-05-27  2:08     ` Boris Ostrovsky
  0 siblings, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2014-05-26 11:48 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> @@ -128,16 +130,23 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
>      struct vcpu *v = current;
>      struct vpmu_struct *vpmu;
>  
> -    /* dom0 will handle interrupt for special domains (e.g. idle domain) */
> -    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
> +    /*
> +     * dom0 will handle interrupt for special domains (e.g. idle domain) or,
> +     * in XENPMU_MODE_PRIV, for everyone.
> +     */
> +    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
> +         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
>          v = hardware_domain->vcpu[smp_processor_id() %
>              hardware_domain->max_vcpus];
>  
>      vpmu = vcpu_vpmu(v);
> -    if ( !is_hvm_domain(v->domain) )
> +    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
> +        return 0;
> +
> +    if ( !is_hvm_domain(v->domain)  || (vpmu_mode & XENPMU_MODE_PRIV) )
>      {
>          /* PV guest or dom0 is doing system profiling */
> -        const struct cpu_user_regs *gregs;
> +        struct cpu_user_regs *gregs;

Stray/unintended change?

> @@ -148,34 +157,62 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
>          err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
>          vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
>  
> -        /* Store appropriate registers in xenpmu_data */
> -        if ( is_pv_32bit_domain(current->domain) )
> +        if ( !is_hvm_domain(current->domain) )
>          {
> -            /*
> -             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
> -             * and therefore we treat it the same way as a non-priviledged
> -             * PV 32-bit domain.
> -             */
> -            struct compat_cpu_user_regs *cmp;
> -
> -            gregs = guest_cpu_user_regs();
> +            /* Store appropriate registers in xenpmu_data */
> +            if ( is_pv_32bit_domain(current->domain) )
> +            {
> +                gregs = guest_cpu_user_regs();
> +
> +                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
> +                     !is_pv_32bit_domain(v->domain) )
> +                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> +                           gregs, sizeof(struct cpu_user_regs));
> +                else
> +                {
> +                    /*
> +                     * 32-bit dom0 cannot process Xen's addresses (which are
> +                     * 64 bit) and therefore we treat it the same way as a
> +                     * non-privileged PV 32-bit domain.
> +                     */
> +
> +                    struct compat_cpu_user_regs *cmp;
> +
> +                    cmp = (struct compat_cpu_user_regs *)
> +                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +                    XLAT_cpu_user_regs(cmp, gregs);
> +                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> +                           &cmp, sizeof(struct compat_cpu_user_regs));
> +                }
> +            }
> +            else if ( !is_hardware_domain(current->domain) &&
> +                      !is_idle_vcpu(current) )
> +            {
> +                /* PV guest */

/* 64-bit PV guest */?

> +                gregs = guest_cpu_user_regs();
> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> +                       gregs, sizeof(struct cpu_user_regs));
> +            }
> +            else
> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> +                       regs, sizeof(struct cpu_user_regs));
>  
> -            cmp = (void *)&v->arch.vpmu.xenpmu_data->pmu.r.regs;
> -            XLAT_cpu_user_regs(cmp, gregs);
> -            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> -                   &cmp, sizeof(struct compat_cpu_user_regs));
> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +            gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;

Ah, no - you want to modify the structure here. But you could do this
directly on the ->pmu.r.regs field rather than first latching the pointer.

And as said before, it doesn't really look correct to simply set ->cs to
just the RPL, especially without any comment explaining why this is
(a) being done and (b) correct.


>          }
> -        else if ( !is_hardware_domain(current->domain) &&
> -                 !is_idle_vcpu(current) )
> +        else
>          {
> -            /* PV guest */
> +            /* HVM guest */
> +            struct segment_register cs;
> +
>              gregs = guest_cpu_user_regs();
>              memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>                     gregs, sizeof(struct cpu_user_regs));
> +
> +            hvm_get_segment_register(current, x86_seg_cs, &cs);
> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +            gregs->cs = cs.attr.fields.dpl;

Same here then obviously.

Jan


* Re: [PATCH v6 16/19] x86/VPMU: Save VPMU state for PV guests during context switch
  2014-05-13 15:53 ` [PATCH v6 16/19] x86/VPMU: Save VPMU state for PV guests during context switch Boris Ostrovsky
@ 2014-05-26 12:03   ` Jan Beulich
  2014-05-30 21:13     ` Tian, Kevin
  0 siblings, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2014-05-26 12:03 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> Save VPMU state during context switch for both HVM and PV guests unless we
> are in PMU privileged mode (i.e. dom0 is doing all profiling).
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Acked-by: Kevin Tian <kevin.tian@intel.com>

May I ask that you don't put in inapplicable acks - there's no VMX
code being modfied here, i.e. the above either should be dropped
or become a Reviewed-by.

> Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
> Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> ---
>  xen/arch/x86/domain.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 4a122da..a853071 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1478,16 +1478,14 @@ void context_switch(struct vcpu *prev, struct vcpu 
> *next)
>      }
>  
>      if ( prev != next )
> -        _update_runstate_area(prev);
> -
> -    if ( is_hvm_vcpu(prev) )
>      {
> -        if ( (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
> +        _update_runstate_area(prev);
> +        if ( vpmu_mode & XENPMU_MODE_ON )
>              vpmu_save(prev);
> +    }
>  
> -        if ( !list_empty(&prev->arch.hvm_vcpu.tm_list) )
> +    if ( is_hvm_vcpu(prev) &&  !list_empty(&prev->arch.hvm_vcpu.tm_list) )
>              pt_save_timer(prev);
> -    }
>  
>      local_irq_disable();
>  
> @@ -1526,7 +1524,7 @@ void context_switch(struct vcpu *prev, struct vcpu 
> *next)
>                             !is_hardware_domain(next->domain));
>      }
>  
> -    if ( is_hvm_vcpu(next) && (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
> +    if ( (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
>          /* Must be done with interrupts enabled */
>          vpmu_load(next);
>  
> -- 
> 1.8.1.4


* Re: [PATCH v6 17/19] x86/VPMU: NMI-based VPMU support
  2014-05-13 15:53 ` [PATCH v6 17/19] x86/VPMU: NMI-based VPMU support Boris Ostrovsky
@ 2014-05-26 15:55   ` Jan Beulich
  2014-05-27  2:57     ` Boris Ostrovsky
  2014-05-30 21:12   ` Tian, Kevin
  1 sibling, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2014-05-26 15:55 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> With send_guest_vcpu_virq() and hvm_get_segment_register() for PV(H) and
> vlapic accesses for HVM moved to softint, the only routines/macros that
> vpmu_do_interrupt() calls in NMI mode are:
> * memcpy()
> * querying domain type (is_XX_domain())
> * guest_cpu_user_regs()
> * XLAT_cpu_user_regs()
> * raise_softirq()
> * vcpu_vpmu()
> * vpmu_ops->arch_vpmu_save()

With this ...

> --- a/xen/arch/x86/hvm/svm/vpmu.c
> +++ b/xen/arch/x86/hvm/svm/vpmu.c
> @@ -185,6 +185,7 @@ static inline void context_load(struct vcpu *v)
>      }
>  }
>  
> +/* Must be NMI-safe */
>  static void amd_vpmu_load(struct vcpu *v)

... is the comment perhaps misplaced?

> +static void vpmu_send_nmi(struct vcpu *v)
> +{
> +    struct vlapic *vlapic;
> +    u32 vlapic_lvtpc;
> +    unsigned char int_vec;
> +
> +    ASSERT( is_hvm_vcpu(v) );
> +
> +    vlapic = vcpu_vlapic(v);
> +    if ( !is_vlapic_lvtpc_enabled(vlapic) )
> +        return;
> +
> +    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
> +    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
> +
> +    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
> +        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
> +    else
> +        v->nmi_pending = 1;

I realize this is only the result of code movement, but what is this
if/else pair doing?

> +static void pmu_softnmi(void)
> +{
> +    struct cpu_user_regs *regs;
> +    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());

this_cpu().

> +
> +    if ( sampled == NULL )
> +        return;
> +    per_cpu(sampled_vcpu, smp_processor_id()) = NULL;

Again.

> +
> +    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
> +         (sampled->domain->domain_id >= DOMID_FIRST_RESERVED) )
> +        v = hardware_domain->vcpu[smp_processor_id() %
> +                                  hardware_domain->max_vcpus];
> +    else
> +    {
> +        if ( is_hvm_domain(sampled->domain) )
> +        {
> +            vpmu_send_nmi(sampled);
> +            return;
> +        }
> +        v = sampled;
> +    }
> +
> +    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +    if ( !is_pv_domain(sampled->domain) )

Even if it means the same, has_hvm_container_vcpu(sampled)
please: You're guarding an operation here that requires a HVM
container.

> +    {
> +        struct segment_register cs;
> +
> +        hvm_get_segment_register(sampled, x86_seg_cs, &cs);

I hope you understand that what you read here is only implicitly (due
to the softirq getting serviced before the guest gets re-entered) the
current value. Or wait - is it at all? What if a scheduler softirq first
causes the guest to be de-scheduled and another CPU manages to
pick it up before you get here?

> @@ -472,6 +574,21 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
>          return -EINVAL;
>      }
>  
> +    if ( !pvpmu_initted )
> +    {
> +        if (reserve_lapic_nmi() == 0)

Coding style.

> +            set_nmi_callback(pmu_nmi_interrupt);
> +        else
> +        {
> +            printk("Failed to reserve PMU NMI\n");
> +            put_page(page);
> +            return -EBUSY;
> +        }
> +        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
> +
> +        pvpmu_initted = 1;

Is it excluded that you get two racing pvpmu_init() calls (i.e. are
these exclusively coming from e.g. a domctl)? If not, better
serialization would be needed here.

Jan


* Re: [PATCH v6 15/19] x86/VPMU: Add privileged PMU mode
  2014-05-26 11:48   ` Jan Beulich
@ 2014-05-27  2:08     ` Boris Ostrovsky
  2014-05-27  9:10       ` Jan Beulich
  0 siblings, 1 reply; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-27  2:08 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

On 05/26/2014 07:48 AM, Jan Beulich wrote:
>
>> +                gregs = guest_cpu_user_regs();
>> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>> +                       gregs, sizeof(struct cpu_user_regs));
>> +            }
>> +            else
>> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>> +                       regs, sizeof(struct cpu_user_regs));
>>   
>> -            cmp = (void *)&v->arch.vpmu.xenpmu_data->pmu.r.regs;
>> -            XLAT_cpu_user_regs(cmp, gregs);
>> -            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>> -                   &cmp, sizeof(struct compat_cpu_user_regs));
>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>> +            gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
> Ah, no - you want to modify the structure here. But you could do this
> directly on the ->pmu.r.regs field rather than first latching the pointer.
>
> And as said before, it doesn't really look correct to simply set ->cs to
> just the RPL, especially without any comment explaining why this is
> (a) being done and (b) correct.

The reason for passing up only the RPL is that it's the only field the
guest is interested in (whether the interrupt happened in kernel or user
space). I added a comment in the code to this effect.

Do you think that all fields need to be passed?

-boris


>
>>           }
>> -        else if ( !is_hardware_domain(current->domain) &&
>> -                 !is_idle_vcpu(current) )
>> +        else
>>           {
>> -            /* PV guest */
>> +            /* HVM guest */
>> +            struct segment_register cs;
>> +
>>               gregs = guest_cpu_user_regs();
>>               memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>>                      gregs, sizeof(struct cpu_user_regs));
>> +
>> +            hvm_get_segment_register(current, x86_seg_cs, &cs);
>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>> +            gregs->cs = cs.attr.fields.dpl;
> Same here then obviously.
>
> Jan


* Re: [PATCH v6 17/19] x86/VPMU: NMI-based VPMU support
  2014-05-26 15:55   ` Jan Beulich
@ 2014-05-27  2:57     ` Boris Ostrovsky
  0 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-27  2:57 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

On 05/26/2014 11:55 AM, Jan Beulich wrote:
>
>> +static void vpmu_send_nmi(struct vcpu *v)
>> +{
>> +    struct vlapic *vlapic;
>> +    u32 vlapic_lvtpc;
>> +    unsigned char int_vec;
>> +
>> +    ASSERT( is_hvm_vcpu(v) );
>> +
>> +    vlapic = vcpu_vlapic(v);
>> +    if ( !is_vlapic_lvtpc_enabled(vlapic) )
>> +        return;
>> +
>> +    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
>> +    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
>> +
>> +    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
>> +        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
>> +    else
>> +        v->nmi_pending = 1;
> I realize this is only the result of code movement, but what is this
> if/else pair doing?

To be honest, I don't know.  This code has been in VPMU since day one 
and I just moved it here.

I am not sure whether the 'if' clause has ever been tested, since perf
(and before that, oprofile) use NMIs for PMU interrupts. But I should
probably at least rename the routine to something like
send_pmu_interrupt (and move the int_vec calculation into the 'if'
clause).

>> +
>> +    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
>> +         (sampled->domain->domain_id >= DOMID_FIRST_RESERVED) )
>> +        v = hardware_domain->vcpu[smp_processor_id() %
>> +                                  hardware_domain->max_vcpus];
>> +    else
>> +    {
>> +        if ( is_hvm_domain(sampled->domain) )
>> +        {
>> +            vpmu_send_nmi(sampled);
>> +            return;
>> +        }
>> +        v = sampled;
>> +    }
>> +
>> +    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>> +    if ( !is_pv_domain(sampled->domain) )
> Even if it means the same, has_hvm_container_vcpu(sampled)
> please: You're guarding an operation here that requires a HVM
> container.

I am removing the PVH-enabling patch (which currently comes after this
one) so all the is_*_domain() checks will hopefully make sense at the
time they are added.

>
>> +    {
>> +        struct segment_register cs;
>> +
>> +        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
> I hope you understand that what you read here is only implicitly (due
> to the softirq getting serviced before the guest gets re-entered) the
> current value. Or wait - is it at all? What if a scheduler softirq first
> causes the guest to be de-scheduled and another CPU manages to
> pick it up before you get here?

Based on a similar comment from Kevin I added a check for pending PMU
interrupts (i.e. a call to this routine) in vpmu_save(), which is called
during context switch. We should therefore always pick up the current
value of the guest's CS.
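
I.e., at the tail of vpmu_save(), roughly (a sketch):

    /* Consume a sample that arrived while we were saving, before this
     * vCPU can be picked up by another pCPU, so that sampled_vcpu and
     * the CS we read are still the ones that were sampled: */
    if ( per_cpu(sampled_vcpu, smp_processor_id()) != NULL )
        pmu_softnmi();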


>
>> @@ -472,6 +574,21 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
>>           return -EINVAL;
>>       }
>>   
>> +    if ( !pvpmu_initted )
>> +    {
>> +        if (reserve_lapic_nmi() == 0)
> Coding style.
>
>> +            set_nmi_callback(pmu_nmi_interrupt);
>> +        else
>> +        {
>> +            printk("Failed to reserve PMU NMI\n");
>> +            put_page(page);
>> +            return -EBUSY;
>> +        }
>> +        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
>> +
>> +        pvpmu_initted = 1;
> Is it excluded that you get two racing pvpmu_init() calls (i.e. are
> these exclusively coming from e.g. a domctl)? If not, better
> serialization would be needed here.

In Linux the first call (it's not a domctl but a dedicated hypercall) is
done by the boot CPU pre-SMP, so I think we should be safe.

But then there may be non-Linux users, so a lock wouldn't hurt.
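
Something like this, then (a sketch; the lock is new):

    static bool_t pvpmu_initted;
    static DEFINE_SPINLOCK(pvpmu_init_lock);

    /* ... in pvpmu_init(): */
    spin_lock(&pvpmu_init_lock);
    if ( !pvpmu_initted )
    {
        if ( reserve_lapic_nmi() != 0 )
        {
            spin_unlock(&pvpmu_init_lock);
            printk("Failed to reserve PMU NMI\n");
            put_page(page);
            return -EBUSY;
        }
        set_nmi_callback(pmu_nmi_interrupt);
        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
        pvpmu_initted = 1;
    }
    spin_unlock(&pvpmu_init_lock);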

-boris


* Re: [PATCH v6 15/19] x86/VPMU: Add privileged PMU mode
  2014-05-27  2:08     ` Boris Ostrovsky
@ 2014-05-27  9:10       ` Jan Beulich
  2014-05-27 13:31         ` Boris Ostrovsky
  0 siblings, 1 reply; 65+ messages in thread
From: Jan Beulich @ 2014-05-27  9:10 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

>>> On 27.05.14 at 04:08, <boris.ostrovsky@oracle.com> wrote:
> On 05/26/2014 07:48 AM, Jan Beulich wrote:
>>
>>> +                gregs = guest_cpu_user_regs();
>>> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>>> +                       gregs, sizeof(struct cpu_user_regs));
>>> +            }
>>> +            else
>>> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>>> +                       regs, sizeof(struct cpu_user_regs));
>>>   
>>> -            cmp = (void *)&v->arch.vpmu.xenpmu_data->pmu.r.regs;
>>> -            XLAT_cpu_user_regs(cmp, gregs);
>>> -            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>>> -                   &cmp, sizeof(struct compat_cpu_user_regs));
>>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>>> +            gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
>> Ah, no - you want to modify the structure here. But you could do this
>> directly on the ->pmu.r.regs field rather than first latching the pointer.
>>
>> And as said before, it doesn't really look correct to simply set ->cs to
>> just the RPL, especially without any comment explaining why this is
>> (a) being done and (b) correct.
> 
> The reason for passing up only the RPL is that it's the only field the
> guest is interested in (whether the interrupt happened in kernel or user
> space). I added a comment in the code to this effect.
> 
> Do you think that all fields need to be passed?

How do you know what a guest is interested in? Namely 32-bit OSes
may have uses for more than a single flat code segment, and hence
may have an interest in knowing the full selector.

Jan


* Re: [PATCH v6 15/19] x86/VPMU: Add privileged PMU mode
  2014-05-27  9:10       ` Jan Beulich
@ 2014-05-27 13:31         ` Boris Ostrovsky
  0 siblings, 0 replies; 65+ messages in thread
From: Boris Ostrovsky @ 2014-05-27 13:31 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	donald.d.dugger, xen-devel, dietmar.hahn, jun.nakajima

On 05/27/2014 05:10 AM, Jan Beulich wrote:
>>>> On 27.05.14 at 04:08, <boris.ostrovsky@oracle.com> wrote:
>> On 05/26/2014 07:48 AM, Jan Beulich wrote:
>>>> +                gregs = guest_cpu_user_regs();
>>>> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>>>> +                       gregs, sizeof(struct cpu_user_regs));
>>>> +            }
>>>> +            else
>>>> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>>>> +                       regs, sizeof(struct cpu_user_regs));
>>>>    
>>>> -            cmp = (void *)&v->arch.vpmu.xenpmu_data->pmu.r.regs;
>>>> -            XLAT_cpu_user_regs(cmp, gregs);
>>>> -            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>>>> -                   &cmp, sizeof(struct compat_cpu_user_regs));
>>>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>>>> +            gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
>>> Ah, no - you want to modify the structure here. But you could do this
>>> directly on the ->pmu.r.regs field rather than first latching the pointer.
>>>
>>> And as said before, it doesn't really look correct to simply set ->cs to
>>> just the RPL, especially without any comment explaining why this is
>>> (a) being done and (b) correct.
>> The reason for passing up only the RPL is that it's the only field the
>> guest is interested in (whether the interrupt happened in kernel or user
>> space). I added a comment in the code to this effect.
>>
>> Do you think that all fields need to be passed?
> How do you know what a guest is interested in? Namely 32-bit OSes
> may have uses for more than a single flat code segment, and hence
> may have an interest in knowing the full selector.

The registers are used only by the PMU handler in the guest and that 
code only wants to know where the (v)cpu was sampled.

OTOH there is really no reason *not* to pass the full selector value, so
I'll fix this.
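
For 64-bit PV the kernel also runs in ring 3, so the raw selector alone
won't convey kernel vs. user anyway; I'll probably pass the selector
through unmodified and report the mode in a separate flag - a sketch,
with PMU_SAMPLE_USER being a made-up name:

    gregs->cs = guest_cpu_user_regs()->cs;
    if ( !(current->arch.flags & TF_kernel_mode) )
        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_SAMPLE_USER;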

-boris


* Re: [PATCH v6 17/19] x86/VPMU: NMI-based VPMU support
  2014-05-13 15:53 ` [PATCH v6 17/19] x86/VPMU: NMI-based VPMU support Boris Ostrovsky
  2014-05-26 15:55   ` Jan Beulich
@ 2014-05-30 21:12   ` Tian, Kevin
  1 sibling, 0 replies; 65+ messages in thread
From: Tian, Kevin @ 2014-05-30 21:12 UTC (permalink / raw)
  To: Boris Ostrovsky, JBeulich, dietmar.hahn, suravee.suthikulpanit
  Cc: andrew.cooper3, keir, Dugger, Donald D, Nakajima, Jun, xen-devel

> From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> Sent: Tuesday, May 13, 2014 8:54 AM
> 
> Add support for using NMIs as PMU interrupts.
> 
> Most of the processing is still performed by vpmu_do_interrupt(). However,
> since certain operations are not NMI-safe we defer them to a softirq that
> vpmu_do_interrupt() will schedule:
> * For PV guests that would be send_guest_vcpu_virq()
> * For HVM guests it's VLAPIC accesses and hvm_get_segment_register() (the
>   latter can be called in privileged profiling mode when the interrupted
>   guest is an HVM one).
> 
> With send_guest_vcpu_virq() and hvm_get_segment_register() for PV(H) and
> vlapic accesses for HVM moved to the softirq handler, the only
> routines/macros that vpmu_do_interrupt() calls in NMI mode are:
> * memcpy()
> * querying domain type (is_XX_domain())
> * guest_cpu_user_regs()
> * XLAT_cpu_user_regs()
> * raise_softirq()
> * vcpu_vpmu()
> * vpmu_ops->arch_vpmu_save()
> * vpmu_ops->do_interrupt() (in the future for PVH support)
> 
> The latter two only access PMU MSRs with {rd,wr}msrl() (not the _safe
> versions, which would not be NMI-safe).
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
> Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>

The whole concept looks OK to me, except that you need to address several
other comments from Jan.

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  xen/arch/x86/hvm/svm/vpmu.c       |   1 +
>  xen/arch/x86/hvm/vmx/vpmu_core2.c |   1 +
>  xen/arch/x86/hvm/vpmu.c           | 183 +++++++++++++++++++++++++++++++-------
>  3 files changed, 152 insertions(+), 33 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
> index 42c3530..8711e86 100644
> --- a/xen/arch/x86/hvm/svm/vpmu.c
> +++ b/xen/arch/x86/hvm/svm/vpmu.c
> @@ -185,6 +185,7 @@ static inline void context_load(struct vcpu *v)
>      }
>  }
> 
> +/* Must be NMI-safe */
>  static void amd_vpmu_load(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> index 8182dc3..c06b305 100644
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -303,6 +303,7 @@ static inline void __core2_vpmu_save(struct vcpu *v)
>          rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
>  }
> 
> +/* Must be NMI-safe */
>  static int core2_vpmu_save(struct vcpu *v)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(v);
> diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
> index 7cb2231..f73ebbb 100644
> --- a/xen/arch/x86/hvm/vpmu.c
> +++ b/xen/arch/x86/hvm/vpmu.c
> @@ -36,6 +36,7 @@
>  #include <asm/hvm/svm/svm.h>
>  #include <asm/hvm/svm/vmcb.h>
>  #include <asm/apic.h>
> +#include <asm/nmi.h>
>  #include <public/pmu.h>
> 
>  /*
> @@ -48,34 +49,60 @@ uint64_t __read_mostly vpmu_features = 0;
>  static void parse_vpmu_param(char *s);
>  custom_param("vpmu", parse_vpmu_param);
> 
> +static void pmu_softnmi(void);
> +
>  static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
> +static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
> +
> +static uint32_t __read_mostly vpmu_interrupt_type = PMU_APIC_VECTOR;
> 
>  static void __init parse_vpmu_param(char *s)
>  {
> -    switch ( parse_bool(s) )
> -    {
> -    case 0:
> -        break;
> -    default:
> -        if ( !strcmp(s, "bts") )
> -            vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
> -        else if ( *s )
> +    char *ss;
> +
> +    vpmu_mode = XENPMU_MODE_ON;
> +    if (*s == '\0')
> +        return;
> +
> +    do {
> +        ss = strchr(s, ',');
> +        if ( ss )
> +            *ss = '\0';
> +
> +        switch  ( parse_bool(s) )
>          {
> -            printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
> +        case 0:
> +            vpmu_mode = XENPMU_MODE_OFF;
> +            return;
> +        case -1:
> +            if ( !strcmp(s, "nmi") )
> +                vpmu_interrupt_type = APIC_DM_NMI;
> +            else if ( !strcmp(s, "bts") )
> +                vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
> +            else if ( !strcmp(s, "priv") )
> +            {
> +                vpmu_mode &= ~XENPMU_MODE_ON;
> +                vpmu_mode |= XENPMU_MODE_PRIV;
> +            }
> +            else
> +            {
> +                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
> +                vpmu_mode = XENPMU_MODE_OFF;
> +                return;
> +            }
> +        default:
>              break;
>          }
> -        /* fall through */
> -    case 1:
> -        vpmu_mode = XENPMU_MODE_ON;
> -        break;
> -    }
> +
> +        s = ss + 1;
> +    } while ( ss );
>  }
> 
>  void vpmu_lvtpc_update(uint32_t val)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(current);
> 
> -    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
> +    vpmu->hw_lapic_lvtpc = vpmu_interrupt_type | (val & APIC_LVT_MASKED);
> 
>      /* Postpone APIC updates for PV guests if PMU interrupt is pending */
>      if ( !is_pv_domain(current->domain) ||
> @@ -84,6 +111,27 @@ void vpmu_lvtpc_update(uint32_t val)
>          apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
>  }
> 
> +static void vpmu_send_nmi(struct vcpu *v)
> +{
> +    struct vlapic *vlapic;
> +    u32 vlapic_lvtpc;
> +    unsigned char int_vec;
> +
> +    ASSERT( is_hvm_vcpu(v) );
> +
> +    vlapic = vcpu_vlapic(v);
> +    if ( !is_vlapic_lvtpc_enabled(vlapic) )
> +        return;
> +
> +    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
> +    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
> +
> +    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
> +        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
> +    else
> +        v->nmi_pending = 1;
> +}
> +
>  int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw)
>  {
>      struct vpmu_struct *vpmu = vcpu_vpmu(current);
> @@ -125,6 +173,7 @@ int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, uint8_t rw)
>      return 0;
>  }
> 
> +/* This routine may be called in NMI context */
>  int vpmu_do_interrupt(struct cpu_user_regs *regs)
>  {
>      struct vcpu *v = current;
> @@ -209,9 +258,13 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
>              memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>                     gregs, sizeof(struct cpu_user_regs));
> 
> -            hvm_get_segment_register(current, x86_seg_cs, &cs);
> -            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> -            gregs->cs = cs.attr.fields.dpl;
> +            /* This is unsafe in NMI context, we'll do it in softint handler */
> +            if ( !(vpmu_interrupt_type & APIC_DM_NMI ) )
> +            {
> +                hvm_get_segment_register(current, x86_seg_cs, &cs);
> +                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +                gregs->cs = cs.attr.fields.dpl;
> +            }
>          }
> 
>          v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
> @@ -222,30 +275,30 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
>          apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
>          vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
> 
> -        send_guest_vcpu_virq(v, VIRQ_XENPMU);
> +        if ( vpmu_interrupt_type & APIC_DM_NMI )
> +        {
> +            per_cpu(sampled_vcpu, smp_processor_id()) = current;
> +            raise_softirq(PMU_SOFTIRQ);
> +        }
> +        else
> +            send_guest_vcpu_virq(v, VIRQ_XENPMU);
> 
>          return 1;
>      }
> 
>      if ( vpmu->arch_vpmu_ops )
>      {
> -        struct vlapic *vlapic = vcpu_vlapic(v);
> -        u32 vlapic_lvtpc;
> -        unsigned char int_vec;
> -
>          if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
>              return 0;
> 
> -        if ( !is_vlapic_lvtpc_enabled(vlapic) )
> -            return 1;
> -
> -        vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
> -        int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
> -
> -        if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
> -            vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
> +        if ( vpmu_interrupt_type & APIC_DM_NMI )
> +        {
> +            per_cpu(sampled_vcpu, smp_processor_id()) = current;
> +            raise_softirq(PMU_SOFTIRQ);
> +        }
>          else
> -            v->nmi_pending = 1;
> +            vpmu_send_nmi(v);
> +
>          return 1;
>      }
> 
> @@ -276,6 +329,8 @@ static void vpmu_save_force(void *arg)
>      vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
> 
>      per_cpu(last_vcpu, smp_processor_id()) = NULL;
> +
> +    pmu_softnmi();
>  }
> 
>  void vpmu_save(struct vcpu *v)
> @@ -293,7 +348,10 @@ void vpmu_save(struct vcpu *v)
>          if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
>              vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
> 
> -    apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
> +    apic_write(APIC_LVTPC, vpmu_interrupt_type | APIC_LVT_MASKED);
> +
> +    /* Make sure there are no outstanding PMU NMIs */
> +    pmu_softnmi();
>  }
> 
>  void vpmu_load(struct vcpu *v)
> @@ -338,6 +396,8 @@ void vpmu_load(struct vcpu *v)
>          vpmu_save_force(prev);
>          vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
> 
> +        pmu_softnmi();
> +
>          vpmu = vcpu_vpmu(v);
>      }
> 
> @@ -442,11 +502,53 @@ static void vpmu_unload_all(void)
>      }
>  }
> 
> +/* Process the softirq set by PMU NMI handler */
> +static void pmu_softnmi(void)
> +{
> +    struct cpu_user_regs *regs;
> +    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
> +
> +    if ( sampled == NULL )
> +        return;
> +    per_cpu(sampled_vcpu, smp_processor_id()) = NULL;
> +
> +    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
> +         (sampled->domain->domain_id >= DOMID_FIRST_RESERVED) )
> +        v = hardware_domain->vcpu[smp_processor_id() %
> +                                  hardware_domain->max_vcpus];
> +    else
> +    {
> +        if ( is_hvm_domain(sampled->domain) )
> +        {
> +            vpmu_send_nmi(sampled);
> +            return;
> +        }
> +        v = sampled;
> +    }
> +
> +    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +    if ( !is_pv_domain(sampled->domain) )
> +    {
> +        struct segment_register cs;
> +
> +        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
> +        regs->cs = cs.attr.fields.dpl;
> +    }
> +
> +    send_guest_vcpu_virq(v, VIRQ_XENPMU);
> +}
> +
> +int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
> +{
> +    return vpmu_do_interrupt(regs);
> +}
> +
>  static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
>  {
>      struct vcpu *v;
>      struct page_info *page;
>      uint64_t gfn = params->d.val;
> +    static bool_t __read_mostly pvpmu_initted = 0;
> 
>      if ( !is_pv_domain(d) )
>          return -EINVAL;
> @@ -472,6 +574,21 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
>          return -EINVAL;
>      }
> 
> +    if ( !pvpmu_initted )
> +    {
> +        if (reserve_lapic_nmi() == 0)
> +            set_nmi_callback(pmu_nmi_interrupt);
> +        else
> +        {
> +            printk("Failed to reserve PMU NMI\n");
> +            put_page(page);
> +            return -EBUSY;
> +        }
> +        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
> +
> +        pvpmu_initted = 1;
> +    }
> +
>      vpmu_initialise(v);
> 
>      return 0;
> --
> 1.8.1.4
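
(For reference, with the parse_vpmu_param() rework quoted above the
"vpmu" boot option becomes a comma-separated list. A few hypothetical
example command lines, derived only from the parsing code shown here:

    vpmu=1          boolean on: XENPMU_MODE_ON (also the default when set)
    vpmu=0          boolean off: XENPMU_MODE_OFF
    vpmu=nmi        deliver PMU interrupts as NMIs (APIC_DM_NMI)
    vpmu=bts,nmi    enable Intel BTS together with NMI delivery
    vpmu=priv       XENPMU_MODE_PRIV: only the privileged domain profiles

Any unrecognized token disables VPMU altogether, per the printk() in the
quoted patch.)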

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v6 16/19] x86/VPMU: Save VPMU state for PV guests during context switch
  2014-05-26 12:03   ` Jan Beulich
@ 2014-05-30 21:13     ` Tian, Kevin
  0 siblings, 0 replies; 65+ messages in thread
From: Tian, Kevin @ 2014-05-30 21:13 UTC (permalink / raw)
  To: Jan Beulich, Boris Ostrovsky
  Cc: keir, suravee.suthikulpanit, andrew.cooper3, dietmar.hahn,
	xen-devel, Dugger, Donald D, Nakajima, Jun

> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, May 26, 2014 5:04 AM
> 
> >>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> > Save VPMU state during context switch for both HVM and PV guests unless
> > we are in PMU privileged mode (i.e. dom0 is doing all profiling).
> >
> > Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > Acked-by: Kevin Tian <kevin.tian@intel.com>
> 
> May I ask that you don't put in inapplicable acks - there's no VMX
> code being modified here, i.e. the above either should be dropped
> or become a Reviewed-by.

OK, please replace with a Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> 
> > Reviewed-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
> > Tested-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> > ---
> >  xen/arch/x86/domain.c | 12 +++++-------
> >  1 file changed, 5 insertions(+), 7 deletions(-)
> >
> > diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> > index 4a122da..a853071 100644
> > --- a/xen/arch/x86/domain.c
> > +++ b/xen/arch/x86/domain.c
> > @@ -1478,16 +1478,14 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
> >      }
> >
> >      if ( prev != next )
> > -        _update_runstate_area(prev);
> > -
> > -    if ( is_hvm_vcpu(prev) )
> >      {
> > -        if ( (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
> > +        _update_runstate_area(prev);
> > +        if ( vpmu_mode & XENPMU_MODE_ON )
> >              vpmu_save(prev);
> > +    }
> >
> > -        if ( !list_empty(&prev->arch.hvm_vcpu.tm_list) )
> > +    if ( is_hvm_vcpu(prev) && !list_empty(&prev->arch.hvm_vcpu.tm_list) )
> >              pt_save_timer(prev);
> > -    }
> >
> >      local_irq_disable();
> >
> > @@ -1526,7 +1524,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
> >                             !is_hardware_domain(next->domain));
> >      }
> >
> > -    if ( is_hvm_vcpu(next) && (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
> > +    if ( (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
> >          /* Must be done with interrupts enabled */
> >          vpmu_load(next);
> >
> > --
> > 1.8.1.4
> 
> 

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v6 01/19] common/symbols: Export hypervisor symbols to privileged guest
  2014-05-16  8:05   ` Jan Beulich
  2014-05-16 14:58     ` Boris Ostrovsky
@ 2014-06-05 10:29     ` Tim Deegan
  1 sibling, 0 replies; 65+ messages in thread
From: Tim Deegan @ 2014-06-05 10:29 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, keir, suravee.suthikulpanit, andrew.cooper3,
	Ian Jackson, donald.d.dugger, xen-devel, Ian Campbell,
	dietmar.hahn, jun.nakajima, Boris Ostrovsky

At 09:05 +0100 on 16 May (1400227507), Jan Beulich wrote:
> >>> On 13.05.14 at 17:53, <boris.ostrovsky@oracle.com> wrote:
> > Export Xen's symbols as {<address><type><name>} triplet via new XENPF_get_symbol
> > hypercall
> 
> I already voiced my reservations on a very early version of this series.
> While I can see the need of exposing these internals, I also see the
> potential for abuse. I'd clearly want at least one other common code
> maintainer's opinion here; sadly you didn't properly Cc them all (done
> now).

Sorry, didn't see that until my post-hackathon sweep of Xen-devel.
I'm not worried about exposing hypervisor symbols to privileged
guests.  In many cases dom0 will already have access to the binary
from /boot, and in any case with access to the version strings &
dmesg, they're not terribly hard to guess.

Tim.
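
(For context, the symbols under discussion are exported as
{<address><type><name>} triplets, in the spirit of nm(1)/kallsyms
output. A purely hypothetical rendering of two such triplets, with
invented addresses, the type letter marking global/local text symbols:

    ffff82d080112f80 T do_domctl
    ffff82d0801321a0 t vpmu_save_force

The exact on-the-wire layout is defined by the xenpf_symdata structure
mentioned elsewhere in this series, not by this sketch.)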

^ permalink raw reply	[flat|nested] 65+ messages in thread

Thread overview: 65+ messages
2014-05-13 15:53 [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Boris Ostrovsky
2014-05-13 15:53 ` [PATCH v6 01/19] common/symbols: Export hypervisor symbols to privileged guest Boris Ostrovsky
2014-05-16  8:05   ` Jan Beulich
2014-05-16 14:58     ` Boris Ostrovsky
2014-05-16 15:16       ` Jan Beulich
2014-05-16 16:12         ` Boris Ostrovsky
2014-06-05 10:29     ` Tim Deegan
2014-05-13 15:53 ` [PATCH v6 02/19] VPMU: Mark context LOADED before registers are loaded Boris Ostrovsky
2014-05-19 14:18   ` Jan Beulich
2014-05-19 15:28     ` Boris Ostrovsky
2014-05-13 15:53 ` [PATCH v6 03/19] x86/VPMU: Minor VPMU cleanup Boris Ostrovsky
2014-05-19 11:55   ` Tian, Kevin
2014-05-19 14:26   ` Jan Beulich
2014-05-19 15:35     ` Boris Ostrovsky
2014-05-19 15:42       ` Jan Beulich
2014-05-13 15:53 ` [PATCH v6 04/19] intel/VPMU: Clean up Intel VPMU code Boris Ostrovsky
2014-05-19 11:59   ` Tian, Kevin
2014-05-19 14:30   ` Jan Beulich
2014-05-13 15:53 ` [PATCH v6 05/19] vmx: Merge MSR management routines Boris Ostrovsky
2014-05-19 12:00   ` Tian, Kevin
2014-05-22 10:24   ` Dietmar Hahn
2014-05-22 13:48     ` Boris Ostrovsky
2014-05-13 15:53 ` [PATCH v6 06/19] x86/VPMU: Handle APIC_LVTPC accesses Boris Ostrovsky
2014-05-13 15:53 ` [PATCH v6 07/19] intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL should be initialized to zero Boris Ostrovsky
2014-05-13 15:53 ` [PATCH v6 08/19] x86/VPMU: Add public xenpmu.h Boris Ostrovsky
2014-05-19 12:02   ` Tian, Kevin
2014-05-20 15:24   ` Jan Beulich
2014-05-20 17:28     ` Boris Ostrovsky
2014-05-21  7:19   ` Dietmar Hahn
2014-05-21 13:56     ` Boris Ostrovsky
2014-05-13 15:53 ` [PATCH v6 09/19] x86/VPMU: Make vpmu not HVM-specific Boris Ostrovsky
2014-05-13 15:53 ` [PATCH v6 10/19] x86/VPMU: Interface for setting PMU mode and flags Boris Ostrovsky
2014-05-20 15:40   ` Jan Beulich
2014-05-13 15:53 ` [PATCH v6 11/19] x86/VPMU: Initialize PMU for PV guests Boris Ostrovsky
2014-05-20 15:51   ` Jan Beulich
2014-05-20 17:47     ` Boris Ostrovsky
2014-05-21  8:01       ` Jan Beulich
2014-05-21 14:03         ` Boris Ostrovsky
2014-05-20 15:52   ` Jan Beulich
2014-05-13 15:53 ` [PATCH v6 12/19] x86/VPMU: Add support for PMU register handling on " Boris Ostrovsky
2014-05-22 14:50   ` Jan Beulich
2014-05-22 17:16     ` Boris Ostrovsky
2014-05-23  6:27       ` Jan Beulich
2014-05-13 15:53 ` [PATCH v6 13/19] x86/VPMU: Handle PMU interrupts for " Boris Ostrovsky
2014-05-22 15:30   ` Jan Beulich
2014-05-22 17:25     ` Boris Ostrovsky
2014-05-23  6:29       ` Jan Beulich
2014-05-13 15:53 ` [PATCH v6 14/19] x86/VPMU: Merge vpmu_rdmsr and vpmu_wrmsr Boris Ostrovsky
2014-05-19 12:04   ` Tian, Kevin
2014-05-13 15:53 ` [PATCH v6 15/19] x86/VPMU: Add privileged PMU mode Boris Ostrovsky
2014-05-26 11:48   ` Jan Beulich
2014-05-27  2:08     ` Boris Ostrovsky
2014-05-27  9:10       ` Jan Beulich
2014-05-27 13:31         ` Boris Ostrovsky
2014-05-13 15:53 ` [PATCH v6 16/19] x86/VPMU: Save VPMU state for PV guests during context switch Boris Ostrovsky
2014-05-26 12:03   ` Jan Beulich
2014-05-30 21:13     ` Tian, Kevin
2014-05-13 15:53 ` [PATCH v6 17/19] x86/VPMU: NMI-based VPMU support Boris Ostrovsky
2014-05-26 15:55   ` Jan Beulich
2014-05-27  2:57     ` Boris Ostrovsky
2014-05-30 21:12   ` Tian, Kevin
2014-05-13 15:53 ` [PATCH v6 18/19] x86/VPMU: Support for PVH guests Boris Ostrovsky
2014-05-13 15:53 ` [PATCH v6 19/19] x86/VPMU: Move VPMU files up from hvm/ directory Boris Ostrovsky
2014-05-16  7:40 ` [PATCH v6 00/19] x86/PMU: Xen PMU PV(H) support Jan Beulich
2014-05-16 14:57   ` Boris Ostrovsky
