* [PATCH v4 0/3] Support Perf Extensions on AMD KVM guests
@ 2018-01-30 17:32 Janakarajan Natarajan
  2018-01-30 17:32 ` [PATCH v4 1/3] x86/msr: Add AMD Core Perf Extension MSRs Janakarajan Natarajan
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Janakarajan Natarajan @ 2018-01-30 17:32 UTC (permalink / raw)
  To: kvm, x86, linux-kernel
  Cc: Thomas Gleixner, Ingo Molnar, H . Peter Anvin, Paolo Bonzini,
	Radim Krcmar, Len Brown, Kyle Huey, Tom Lendacky,
	Borislav Petkov, Grzegorz Andrejczuk, Kan Liang,
	Janakarajan Natarajan

This patchset adds support for the Perf Extension on AMD KVM guests.

When perf runs on a guest of family 15h or 17h and the Perf Extension
flag is available, the MSRs it accesses differ from the legacy K7 MSRs.
The accesses go to the AMD Core Performance Extension counters, which
provide 2 extra counters and new MSRs for both the event select and
counter registers.

Since the new event select and counter MSRs are interleaved while the K7
MSRs are contiguous, the logic that maps them to gp_counters[] is
changed; a minimal sketch of that mapping follows.
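
A rough sketch of the mapping, using a hypothetical helper (the real
logic lives in arch/x86/kvm/pmu_amd.c, patch 2/3):

    /*
     * Sketch only: counter index -> counter MSR.  With PERFCTR_CORE the
     * CTL/CTR MSRs are interleaved (two MSRs per counter); without it
     * the legacy K7 counter MSRs are contiguous.
     */
    static u32 counter_idx_to_ctr_msr(unsigned int idx, bool core_ext)
    {
            if (core_ext)
                    return MSR_F15H_PERF_CTR0 + 2 * idx;  /* 0xc0010201, 0xc0010203, ... */
            return MSR_K7_PERFCTR0 + idx;                  /* contiguous */
    }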

This patchset has been tested with Family 17h and Opteron G1 guests.

v1->v2:
* Rearranged MSR #defines based on Boris's suggestion.

v2->v3:
* Changed the logic of mapping MSR to gp_counters[] index based on
  Boris's feedback.
* Removed use of family checks based on Radim's feedback.
* Removed KVM bugfix patch since it is already applied.

v3->v4:
* Rebased to latest KVM tree.

Janakarajan Natarajan (3):
  x86/msr: Add AMD Core Perf Extension MSRs
  x86/kvm: Add support for AMD Core Perf Extension in guest
  x86/kvm: Expose AMD Core Perf Extension flag to guests

 arch/x86/include/asm/msr-index.h |  14 ++++
 arch/x86/kvm/cpuid.c             |   8 ++-
 arch/x86/kvm/pmu_amd.c           | 140 +++++++++++++++++++++++++++++++++++----
 arch/x86/kvm/x86.c               |   1 +
 4 files changed, 148 insertions(+), 15 deletions(-)

-- 
2.7.4

* [PATCH v4 1/3] x86/msr: Add AMD Core Perf Extension MSRs
  2018-01-30 17:32 [PATCH v4 0/3] Support Perf Extensions on AMD KVM guests Janakarajan Natarajan
@ 2018-01-30 17:32 ` Janakarajan Natarajan
  2018-01-30 17:32 ` [PATCH v4 2/3] x86/kvm: Add support for AMD Core Perf Extension in guest Janakarajan Natarajan
  2018-01-30 17:32 ` [PATCH v4 3/3] x86/kvm: Expose AMD Core Perf Extension flag to guests Janakarajan Natarajan
  2 siblings, 0 replies; 9+ messages in thread
From: Janakarajan Natarajan @ 2018-01-30 17:32 UTC (permalink / raw)
  To: kvm, x86, linux-kernel
  Cc: Thomas Gleixner, Ingo Molnar, H . Peter Anvin, Paolo Bonzini,
	Radim Krcmar, Len Brown, Kyle Huey, Tom Lendacky,
	Borislav Petkov, Grzegorz Andrejczuk, Kan Liang,
	Janakarajan Natarajan

Add the EventSelect and Counter MSRs for AMD Core Perf Extension.
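
For reference, the offsets used below give the following interleaved
layout (derived directly from the new #defines):

    MSR_F15H_PERF_CTL0  0xc0010200    MSR_F15H_PERF_CTR0  0xc0010201
    MSR_F15H_PERF_CTL1  0xc0010202    MSR_F15H_PERF_CTR1  0xc0010203
    MSR_F15H_PERF_CTL2  0xc0010204    MSR_F15H_PERF_CTR2  0xc0010205
    MSR_F15H_PERF_CTL3  0xc0010206    MSR_F15H_PERF_CTR3  0xc0010207
    MSR_F15H_PERF_CTL4  0xc0010208    MSR_F15H_PERF_CTR4  0xc0010209
    MSR_F15H_PERF_CTL5  0xc001020a    MSR_F15H_PERF_CTR5  0xc001020b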

Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
---
 arch/x86/include/asm/msr-index.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index e7b983a..2885363 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -341,7 +341,21 @@
 
 /* Fam 15h MSRs */
 #define MSR_F15H_PERF_CTL		0xc0010200
+#define MSR_F15H_PERF_CTL0		MSR_F15H_PERF_CTL
+#define MSR_F15H_PERF_CTL1		(MSR_F15H_PERF_CTL + 2)
+#define MSR_F15H_PERF_CTL2		(MSR_F15H_PERF_CTL + 4)
+#define MSR_F15H_PERF_CTL3		(MSR_F15H_PERF_CTL + 6)
+#define MSR_F15H_PERF_CTL4		(MSR_F15H_PERF_CTL + 8)
+#define MSR_F15H_PERF_CTL5		(MSR_F15H_PERF_CTL + 10)
+
 #define MSR_F15H_PERF_CTR		0xc0010201
+#define MSR_F15H_PERF_CTR0		MSR_F15H_PERF_CTR
+#define MSR_F15H_PERF_CTR1		(MSR_F15H_PERF_CTR + 2)
+#define MSR_F15H_PERF_CTR2		(MSR_F15H_PERF_CTR + 4)
+#define MSR_F15H_PERF_CTR3		(MSR_F15H_PERF_CTR + 6)
+#define MSR_F15H_PERF_CTR4		(MSR_F15H_PERF_CTR + 8)
+#define MSR_F15H_PERF_CTR5		(MSR_F15H_PERF_CTR + 10)
+
 #define MSR_F15H_NB_PERF_CTL		0xc0010240
 #define MSR_F15H_NB_PERF_CTR		0xc0010241
 #define MSR_F15H_PTSC			0xc0010280
-- 
2.7.4

* [PATCH v4 2/3] x86/kvm: Add support for AMD Core Perf Extension in guest
  2018-01-30 17:32 [PATCH v4 0/3] Support Perf Extensions on AMD KVM guests Janakarajan Natarajan
  2018-01-30 17:32 ` [PATCH v4 1/3] x86/msr: Add AMD Core Perf Extension MSRs Janakarajan Natarajan
@ 2018-01-30 17:32 ` Janakarajan Natarajan
  2018-01-30 17:32 ` [PATCH v4 3/3] x86/kvm: Expose AMD Core Perf Extension flag to guests Janakarajan Natarajan
  2 siblings, 0 replies; 9+ messages in thread
From: Janakarajan Natarajan @ 2018-01-30 17:32 UTC (permalink / raw)
  To: kvm, x86, linux-kernel
  Cc: Thomas Gleixner, Ingo Molnar, H . Peter Anvin, Paolo Bonzini,
	Radim Krcmar, Len Brown, Kyle Huey, Tom Lendacky,
	Borislav Petkov, Grzegorz Andrejczuk, Kan Liang,
	Janakarajan Natarajan

Add support for the AMD Core Performance Extension counters in the
guest. The base event select and counter MSRs change and, with the core
extension, 2 extra counters become available, for a total of 6.

With the new MSRs, the logic that maps them to gp_counters[] is changed.
New helpers are added to validate the MSRs on the get/set paths.

If the guest has the X86_FEATURE_PERFCTR_CORE CPUID flag set, the number
of counters available to the vcpu is 6; if the flag is not set, it is 4.
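
For the interleaved range, the index mapping added here is equivalent to
simple arithmetic; a sketch (not part of the patch itself):

    /* Sketch: both CTLn and CTRn collapse to gp_counters[n]. */
    if (msr >= MSR_F15H_PERF_CTL0 && msr <= MSR_F15H_PERF_CTR5)
            idx = (msr - MSR_F15H_PERF_CTL0) / 2;  /* CTL0/CTR0 -> 0, CTL1/CTR1 -> 1, ... */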

Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
---
 arch/x86/kvm/pmu_amd.c | 140 ++++++++++++++++++++++++++++++++++++++++++++-----
 arch/x86/kvm/x86.c     |   1 +
 2 files changed, 127 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kvm/pmu_amd.c b/arch/x86/kvm/pmu_amd.c
index cd94443..233354a 100644
--- a/arch/x86/kvm/pmu_amd.c
+++ b/arch/x86/kvm/pmu_amd.c
@@ -19,6 +19,21 @@
 #include "lapic.h"
 #include "pmu.h"
 
+enum pmu_type {
+	PMU_TYPE_COUNTER = 0,
+	PMU_TYPE_EVNTSEL,
+};
+
+enum index {
+	INDEX_ZERO = 0,
+	INDEX_ONE,
+	INDEX_TWO,
+	INDEX_THREE,
+	INDEX_FOUR,
+	INDEX_FIVE,
+	INDEX_ERROR,
+};
+
 /* duplicated from amd_perfmon_event_map, K7 and above should work. */
 static struct kvm_event_hw_type_mapping amd_event_mapping[] = {
 	[0] = { 0x76, 0x00, PERF_COUNT_HW_CPU_CYCLES },
@@ -31,6 +46,88 @@ static struct kvm_event_hw_type_mapping amd_event_mapping[] = {
 	[7] = { 0xd1, 0x00, PERF_COUNT_HW_STALLED_CYCLES_BACKEND },
 };
 
+static unsigned int get_msr_base(struct kvm_pmu *pmu, enum pmu_type type)
+{
+	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
+
+	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
+		if (type == PMU_TYPE_COUNTER)
+			return MSR_F15H_PERF_CTR;
+		else
+			return MSR_F15H_PERF_CTL;
+	} else {
+		if (type == PMU_TYPE_COUNTER)
+			return MSR_K7_PERFCTR0;
+		else
+			return MSR_K7_EVNTSEL0;
+	}
+}
+
+static enum index msr_to_index(u32 msr)
+{
+	switch (msr) {
+	case MSR_F15H_PERF_CTL0:
+	case MSR_F15H_PERF_CTR0:
+	case MSR_K7_EVNTSEL0:
+	case MSR_K7_PERFCTR0:
+		return INDEX_ZERO;
+	case MSR_F15H_PERF_CTL1:
+	case MSR_F15H_PERF_CTR1:
+	case MSR_K7_EVNTSEL1:
+	case MSR_K7_PERFCTR1:
+		return INDEX_ONE;
+	case MSR_F15H_PERF_CTL2:
+	case MSR_F15H_PERF_CTR2:
+	case MSR_K7_EVNTSEL2:
+	case MSR_K7_PERFCTR2:
+		return INDEX_TWO;
+	case MSR_F15H_PERF_CTL3:
+	case MSR_F15H_PERF_CTR3:
+	case MSR_K7_EVNTSEL3:
+	case MSR_K7_PERFCTR3:
+		return INDEX_THREE;
+	case MSR_F15H_PERF_CTL4:
+	case MSR_F15H_PERF_CTR4:
+		return INDEX_FOUR;
+	case MSR_F15H_PERF_CTL5:
+	case MSR_F15H_PERF_CTR5:
+		return INDEX_FIVE;
+	default:
+		return INDEX_ERROR;
+	}
+}
+
+static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,
+					     enum pmu_type type)
+{
+	switch (msr) {
+	case MSR_F15H_PERF_CTL0:
+	case MSR_F15H_PERF_CTL1:
+	case MSR_F15H_PERF_CTL2:
+	case MSR_F15H_PERF_CTL3:
+	case MSR_F15H_PERF_CTL4:
+	case MSR_F15H_PERF_CTL5:
+	case MSR_K7_EVNTSEL0 ... MSR_K7_EVNTSEL3:
+		if (type != PMU_TYPE_EVNTSEL)
+			return NULL;
+		break;
+	case MSR_F15H_PERF_CTR0:
+	case MSR_F15H_PERF_CTR1:
+	case MSR_F15H_PERF_CTR2:
+	case MSR_F15H_PERF_CTR3:
+	case MSR_F15H_PERF_CTR4:
+	case MSR_F15H_PERF_CTR5:
+	case MSR_K7_PERFCTR0 ... MSR_K7_PERFCTR3:
+		if (type != PMU_TYPE_COUNTER)
+			return NULL;
+		break;
+	default:
+		return NULL;
+	}
+
+	return &pmu->gp_counters[msr_to_index(msr)];
+}
+
 static unsigned amd_find_arch_event(struct kvm_pmu *pmu,
 				    u8 event_select,
 				    u8 unit_mask)
@@ -64,7 +161,18 @@ static bool amd_pmc_is_enabled(struct kvm_pmc *pmc)
 
 static struct kvm_pmc *amd_pmc_idx_to_pmc(struct kvm_pmu *pmu, int pmc_idx)
 {
-	return get_gp_pmc(pmu, MSR_K7_EVNTSEL0 + pmc_idx, MSR_K7_EVNTSEL0);
+	unsigned int base = get_msr_base(pmu, PMU_TYPE_COUNTER);
+	struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);
+
+	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE)) {
+		/*
+		 * The idx is contiguous. The MSRs are not. The counter MSRs
+		 * are interleaved with the event select MSRs.
+		 */
+		pmc_idx *= 2;
+	}
+
+	return get_gp_pmc_amd(pmu, base + pmc_idx, PMU_TYPE_COUNTER);
 }
 
 /* returns 0 if idx's corresponding MSR exists; otherwise returns 1. */
@@ -96,8 +204,8 @@ static bool amd_is_valid_msr(struct kvm_vcpu *vcpu, u32 msr)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int ret = false;
 
-	ret = get_gp_pmc(pmu, msr, MSR_K7_PERFCTR0) ||
-		get_gp_pmc(pmu, msr, MSR_K7_EVNTSEL0);
+	ret = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER) ||
+		get_gp_pmc_amd(pmu, msr, PMU_TYPE_EVNTSEL);
 
 	return ret;
 }
@@ -107,14 +215,14 @@ static int amd_pmu_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	struct kvm_pmc *pmc;
 
-	/* MSR_K7_PERFCTRn */
-	pmc = get_gp_pmc(pmu, msr, MSR_K7_PERFCTR0);
+	/* MSR_PERFCTRn */
+	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER);
 	if (pmc) {
 		*data = pmc_read_counter(pmc);
 		return 0;
 	}
-	/* MSR_K7_EVNTSELn */
-	pmc = get_gp_pmc(pmu, msr, MSR_K7_EVNTSEL0);
+	/* MSR_EVNTSELn */
+	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_EVNTSEL);
 	if (pmc) {
 		*data = pmc->eventsel;
 		return 0;
@@ -130,14 +238,14 @@ static int amd_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	u32 msr = msr_info->index;
 	u64 data = msr_info->data;
 
-	/* MSR_K7_PERFCTRn */
-	pmc = get_gp_pmc(pmu, msr, MSR_K7_PERFCTR0);
+	/* MSR_PERFCTRn */
+	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_COUNTER);
 	if (pmc) {
 		pmc->counter += data - pmc_read_counter(pmc);
 		return 0;
 	}
-	/* MSR_K7_EVNTSELn */
-	pmc = get_gp_pmc(pmu, msr, MSR_K7_EVNTSEL0);
+	/* MSR_EVNTSELn */
+	pmc = get_gp_pmc_amd(pmu, msr, PMU_TYPE_EVNTSEL);
 	if (pmc) {
 		if (data == pmc->eventsel)
 			return 0;
@@ -154,7 +262,11 @@ static void amd_pmu_refresh(struct kvm_vcpu *vcpu)
 {
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 
-	pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS;
+	if (guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))
+		pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS_CORE;
+	else
+		pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS;
+
 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
 	pmu->reserved_bits = 0xffffffff00200000ull;
 	/* not applicable to AMD; but clean them to prevent any fall out */
@@ -169,7 +281,7 @@ static void amd_pmu_init(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int i;
 
-	for (i = 0; i < AMD64_NUM_COUNTERS ; i++) {
+	for (i = 0; i < AMD64_NUM_COUNTERS_CORE ; i++) {
 		pmu->gp_counters[i].type = KVM_PMC_GP;
 		pmu->gp_counters[i].vcpu = vcpu;
 		pmu->gp_counters[i].idx = i;
@@ -181,7 +293,7 @@ static void amd_pmu_reset(struct kvm_vcpu *vcpu)
 	struct kvm_pmu *pmu = vcpu_to_pmu(vcpu);
 	int i;
 
-	for (i = 0; i < AMD64_NUM_COUNTERS; i++) {
+	for (i = 0; i < AMD64_NUM_COUNTERS_CORE; i++) {
 		struct kvm_pmc *pmc = &pmu->gp_counters[i];
 
 		pmc_stop_counter(pmc);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c53298d..acfd395 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2451,6 +2451,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_AMD64_DC_CFG:
 		msr_info->data = 0;
 		break;
+	case MSR_F15H_PERF_CTL0 ... MSR_F15H_PERF_CTR5:
 	case MSR_K7_EVNTSEL0 ... MSR_K7_EVNTSEL3:
 	case MSR_K7_PERFCTR0 ... MSR_K7_PERFCTR3:
 	case MSR_P6_PERFCTR0 ... MSR_P6_PERFCTR1:
-- 
2.7.4

* [PATCH v4 3/3] x86/kvm: Expose AMD Core Perf Extension flag to guests
  2018-01-30 17:32 [PATCH v4 0/3] Support Perf Extensions on AMD KVM guests Janakarajan Natarajan
  2018-01-30 17:32 ` [PATCH v4 1/3] x86/msr: Add AMD Core Perf Extension MSRs Janakarajan Natarajan
  2018-01-30 17:32 ` [PATCH v4 2/3] x86/kvm: Add support for AMD Core Perf Extension in guest Janakarajan Natarajan
@ 2018-01-30 17:32 ` Janakarajan Natarajan
  2018-02-02 20:03   ` kbuild test robot
                     ` (2 more replies)
  2 siblings, 3 replies; 9+ messages in thread
From: Janakarajan Natarajan @ 2018-01-30 17:32 UTC (permalink / raw)
  To: kvm, x86, linux-kernel
  Cc: Thomas Gleixner, Ingo Molnar, H . Peter Anvin, Paolo Bonzini,
	Radim Krcmar, Len Brown, Kyle Huey, Tom Lendacky,
	Borislav Petkov, Grzegorz Andrejczuk, Kan Liang,
	Janakarajan Natarajan

Expose the AMD Core Perf Extension flag to the guests.
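
To check the result from inside a guest, a minimal userspace sketch
(assuming PerfCtrExtCore is ECX bit 23 of CPUID leaf 0x80000001; the
flag also shows up as perfctr_core in /proc/cpuinfo):

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            /* CPUID Fn8000_0001 ECX[23] = PerfCtrExtCore (assumed bit position) */
            if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 23)))
                    puts("core performance extensions exposed");
            else
                    puts("legacy K7 counters only");
            return 0;
    }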

Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
---
 arch/x86/kvm/cpuid.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 0099e10..8c95a7c 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -55,6 +55,11 @@ bool kvm_mpx_supported(void)
 }
 EXPORT_SYMBOL_GPL(kvm_mpx_supported);
 
+bool perf_ext_supported(void)
+{
+	return boot_cpu_has(X86_FEATURE_PERFCTR_CORE);
+}
+
 u64 kvm_supported_xcr0(void)
 {
 	u64 xcr0 = KVM_SUPPORTED_XCR0 & host_xcr0;
@@ -327,6 +332,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 	unsigned f_invpcid = kvm_x86_ops->invpcid_supported() ? F(INVPCID) : 0;
 	unsigned f_mpx = kvm_mpx_supported() ? F(MPX) : 0;
 	unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0;
+	unsigned f_perfext = perf_ext_supported() ? F(PERFCTR_CORE) : 0;
 
 	/* cpuid 1.edx */
 	const u32 kvm_cpuid_1_edx_x86_features =
@@ -365,7 +371,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
 		F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
 		F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
 		F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
-		0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM);
+		0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM) | f_perfext;
 
 	/* cpuid 0xC0000001.edx */
 	const u32 kvm_cpuid_C000_0001_edx_x86_features =
-- 
2.7.4

* [RFC PATCH] x86/kvm: perf_ext_supported() can be static
  2018-01-30 17:32 ` [PATCH v4 3/3] x86/kvm: Expose AMD Core Perf Extension flag to guests Janakarajan Natarajan
  2018-02-02 20:03   ` kbuild test robot
@ 2018-02-02 20:03   ` kbuild test robot
  2018-02-05 13:43   ` [PATCH v4 3/3] x86/kvm: Expose AMD Core Perf Extension flag to guests Radim Krcmar
  2 siblings, 0 replies; 9+ messages in thread
From: kbuild test robot @ 2018-02-02 20:03 UTC (permalink / raw)
  To: Janakarajan Natarajan
  Cc: kbuild-all, kvm, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	H . Peter Anvin, Paolo Bonzini, Radim Krcmar, Len Brown,
	Kyle Huey, Tom Lendacky, Borislav Petkov, Grzegorz Andrejczuk,
	Kan Liang, Janakarajan Natarajan


Fixes: 60e6688e74ee ("x86/kvm: Expose AMD Core Perf Extension flag to guests")
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
---
 cpuid.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 0c991c6a..5cfd3c2 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -55,7 +55,7 @@ bool kvm_mpx_supported(void)
 }
 EXPORT_SYMBOL_GPL(kvm_mpx_supported);
 
-bool perf_ext_supported(void)
+static bool perf_ext_supported(void)
 {
 	return boot_cpu_has(X86_FEATURE_PERFCTR_CORE);
 }

* Re: [PATCH v4 3/3] x86/kvm: Expose AMD Core Perf Extension flag to guests
  2018-01-30 17:32 ` [PATCH v4 3/3] x86/kvm: Expose AMD Core Perf Extension flag to guests Janakarajan Natarajan
@ 2018-02-02 20:03   ` kbuild test robot
  2018-02-02 23:26     ` Natarajan, Janakarajan
  2018-02-02 20:03   ` [RFC PATCH] x86/kvm: perf_ext_supported() can be static kbuild test robot
  2018-02-05 13:43   ` [PATCH v4 3/3] x86/kvm: Expose AMD Core Perf Extension flag to guests Radim Krcmar
  2 siblings, 1 reply; 9+ messages in thread
From: kbuild test robot @ 2018-02-02 20:03 UTC (permalink / raw)
  To: Janakarajan Natarajan
  Cc: kbuild-all, kvm, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	H . Peter Anvin, Paolo Bonzini, Radim Krcmar, Len Brown,
	Kyle Huey, Tom Lendacky, Borislav Petkov, Grzegorz Andrejczuk,
	Kan Liang, Janakarajan Natarajan

Hi Janakarajan,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on tip/x86/core]
[also build test WARNING on v4.15]
[cannot apply to kvm/linux-next next-20180202]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Janakarajan-Natarajan/Support-Perf-Extensions-on-AMD-KVM-guests/20180202-231344
reproduce:
        # apt-get install sparse
        make ARCH=x86_64 allmodconfig
        make C=1 CF=-D__CHECK_ENDIAN__


sparse warnings: (new ones prefixed by >>)

>> arch/x86/kvm/cpuid.c:58:6: sparse: symbol 'perf_ext_supported' was not declared. Should it be static?

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

* Re: [PATCH v4 3/3] x86/kvm: Expose AMD Core Perf Extension flag to guests
  2018-02-02 20:03   ` kbuild test robot
@ 2018-02-02 23:26     ` Natarajan, Janakarajan
  0 siblings, 0 replies; 9+ messages in thread
From: Natarajan, Janakarajan @ 2018-02-02 23:26 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, kvm, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	H . Peter Anvin, Paolo Bonzini, Radim Krcmar, Len Brown,
	Kyle Huey, Tom Lendacky, Borislav Petkov, Grzegorz Andrejczuk,
	Kan Liang

On 2/2/2018 2:03 PM, kbuild test robot wrote:
> Hi Janakarajan,
>
> Thank you for the patch! Perhaps something to improve:
>
> [auto build test WARNING on tip/x86/core]
> [also build test WARNING on v4.15]
> [cannot apply to kvm/linux-next next-20180202]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
This patch uses functions defined in commit 
'd6321d493319bfd406c484e8359c6101cbda39d3 KVM: x86: generalize 
guest_cpuid_has_ helpers'.
https://lkml.org/lkml/2017/8/2/811
>
> url:    https://github.com/0day-ci/linux/commits/Janakarajan-Natarajan/Support-Perf-Extensions-on-AMD-KVM-guests/20180202-231344
> reproduce:
>          # apt-get install sparse
>          make ARCH=x86_64 allmodconfig
>          make C=1 CF=-D__CHECK_ENDIAN__
>
>
> sparse warnings: (new ones prefixed by >>)
>
>>> arch/x86/kvm/cpuid.c:58:6: sparse: symbol 'perf_ext_supported' was not declared. Should it be static?
> Please review and possibly fold the followup patch.
>
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

* Re: [PATCH v4 3/3] x86/kvm: Expose AMD Core Perf Extension flag to guests
  2018-01-30 17:32 ` [PATCH v4 3/3] x86/kvm: Expose AMD Core Perf Extension flag to guests Janakarajan Natarajan
  2018-02-02 20:03   ` kbuild test robot
  2018-02-02 20:03   ` [RFC PATCH] x86/kvm: perf_ext_supported() can be static kbuild test robot
@ 2018-02-05 13:43   ` Radim Krcmar
  2018-02-05 17:48     ` Natarajan, Janakarajan
  2 siblings, 1 reply; 9+ messages in thread
From: Radim Krcmar @ 2018-02-05 13:43 UTC (permalink / raw)
  To: Janakarajan Natarajan
  Cc: kvm, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	H . Peter Anvin, Paolo Bonzini, Len Brown, Kyle Huey,
	Tom Lendacky, Borislav Petkov, Grzegorz Andrejczuk, Kan Liang

2018-01-30 11:32-0600, Janakarajan Natarajan:
> Expose the AMD Core Perf Extension flag to the guests.
> 
> Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
> ---
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> @@ -365,7 +371,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>  		F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
>  		F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
>  		F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
> -		0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM);
> +		0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM) | f_perfext;

You can just say F(PERFCTR_CORE) here.  The conditional features are
needed when there is a runtime config option for them.  We are
automatically masking features that the host doesn't support,

thanks.

>  
>  	/* cpuid 0xC0000001.edx */
>  	const u32 kvm_cpuid_C000_0001_edx_x86_features =
> -- 
> 2.7.4
> 

* Re: [PATCH v4 3/3] x86/kvm: Expose AMD Core Perf Extension flag to guests
  2018-02-05 13:43   ` [PATCH v4 3/3] x86/kvm: Expose AMD Core Perf Extension flag to guests Radim Krcmar
@ 2018-02-05 17:48     ` Natarajan, Janakarajan
  0 siblings, 0 replies; 9+ messages in thread
From: Natarajan, Janakarajan @ 2018-02-05 17:48 UTC (permalink / raw)
  To: Radim Krcmar
  Cc: kvm, x86, linux-kernel, Thomas Gleixner, Ingo Molnar,
	H . Peter Anvin, Paolo Bonzini, Len Brown, Kyle Huey,
	Tom Lendacky, Borislav Petkov, Grzegorz Andrejczuk, Kan Liang

On 2/5/2018 7:43 AM, Radim Krcmar wrote:
> 2018-01-30 11:32-0600, Janakarajan Natarajan:
>> Expose the AMD Core Perf Extension flag to the guests.
>>
>> Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
>> ---
>> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
>> @@ -365,7 +371,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>>   		F(LAHF_LM) | F(CMP_LEGACY) | 0 /*SVM*/ | 0 /* ExtApicSpace */ |
>>   		F(CR8_LEGACY) | F(ABM) | F(SSE4A) | F(MISALIGNSSE) |
>>   		F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
>> -		0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM);
>> +		0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM) | f_perfext;
> You can just say F(PERFCTR_CORE) here.  The conditional features are
> needed when there is a runtime config option for them.  We are
> automatically masking features that the host doesn't support,

Okay. I'll send a v5 with the changes.

Thanks.

>
> thanks.
>
>>   
>>   	/* cpuid 0xC0000001.edx */
>>   	const u32 kvm_cpuid_C000_0001_edx_x86_features =
>> -- 
>> 2.7.4
>>
