* [PATCH V2 0/5] x86: Intel Code Data Prioritization Support
@ 2015-10-02  6:33 Fenghua Yu
  2015-10-02  6:33 ` [PATCH V2 1/5] x86/intel_rdt: Intel Code Data Prioritization detection Fenghua Yu
                   ` (5 more replies)
  0 siblings, 6 replies; 8+ messages in thread
From: Fenghua Yu @ 2015-10-02  6:33 UTC (permalink / raw)
  To: H Peter Anvin, Ingo Molnar, Thomas Gleixner, Peter Zijlstra
  Cc: linux-kernel, x86, Fenghua Yu, Vikas Shivappa

This patch set adds support for Intel Code Data Prioritization (CDP),
an extension of cache allocation that allows code and data cache to be
allocated separately. It also includes the cgroup interface for the
user as separate patches. The cgroup interface for cache allocation is
resent as well.

Details of the feature can be found in the Intel SDM, June 2015,
Volume 3, Section 17.16.

*All patches apply on top of the cache allocation patches v15 and
depend on them:*

https://lkml.org/lkml/2015/10/2/72

Changes in v2:
  - Fix some compilation warnings
  - Port to 4.3-rc.

Fenghua Yu (5):
  x86/intel_rdt: Intel Code Data Prioritization detection
  x86/intel_rdt: Adds support to enable Code Data Prioritization
  x86/intel_rdt: Class of service and capacity bitmask management for
    CDP
  x86/intel_rdt: Hot cpu update for code data prioritization
  x86,cgroup/intel_rdt: Add cgroup interface for code data
    prioritization

 arch/x86/include/asm/cpufeature.h |   5 +-
 arch/x86/include/asm/intel_rdt.h  |   7 +
 arch/x86/kernel/cpu/common.c      |   1 +
 arch/x86/kernel/cpu/intel_rdt.c   | 613 ++++++++++++++++++++++++++++++++++----
 4 files changed, 562 insertions(+), 64 deletions(-)

-- 
1.8.1.2



* [PATCH V2 1/5] x86/intel_rdt: Intel Code Data Prioritization detection
  2015-10-02  6:33 [PATCH V2 0/5] x86: Intel Code Data Prioritization Support Fenghua Yu
@ 2015-10-02  6:33 ` Fenghua Yu
  2015-10-02  6:33 ` [PATCH V2 2/5] x86/intel_rdt: Adds support to enable Code Data Prioritization Fenghua Yu
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Fenghua Yu @ 2015-10-02  6:33 UTC (permalink / raw)
  To: H Peter Anvin, Ingo Molnar, Thomas Gleixner, Peter Zijlstra
  Cc: linux-kernel, x86, Fenghua Yu, Vikas Shivappa

This patch adds enumeration support for the Code Data Prioritization
(CDP) feature found in future Intel Xeon processors. It includes the
CPUID enumeration routines for CDP.

CDP is an extension to Cache Allocation and lets threads allocate
subsets of the L3 cache for code and data separately. The allocation
is represented by the code or data cache capacity bit mask (cbm) MSRs
IA32_L3_QOS_MASK_n. Each Class of Service is associated with one
dcache_cbm and one icache_cbm MSR, hence the number of available
CLOSids is halved with CDP. The association for a CLOSid 'n' is shown
below:

data_cbm_address(n) = base + (n << 1)
code_cbm_address(n) = base + (n << 1) + 1

During scheduling the kernel writes the CLOSid of the thread to the
IA32_PQR_ASSOC MSR.
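
A condensed, standalone sketch of the mapping above (illustrative
only; the in-kernel equivalents added later in this series are the
CBM_FROM_INDEX()/__DCBM_MSR_INDEX()/__ICBM_MSR_INDEX() macros, with
the base 0xc90 taken from IA32_L3_CBM_BASE):

  #include <stdio.h>

  #define IA32_L3_CBM_BASE 0xc90  /* base of the L3 mask MSR array */

  /* With CDP, CLOSid 'n' owns two adjacent mask MSRs. */
  unsigned int data_cbm_address(unsigned int n)
  {
          return IA32_L3_CBM_BASE + (n << 1);
  }

  unsigned int code_cbm_address(unsigned int n)
  {
          return IA32_L3_CBM_BASE + (n << 1) + 1;
  }

  int main(void)
  {
          /* CLOSid 2 -> data mask MSR 0xc94, code mask MSR 0xc95 */
          printf("%#x %#x\n", data_cbm_address(2), code_cbm_address(2));
          return 0;
  }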

Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
---
 arch/x86/include/asm/cpufeature.h | 5 ++++-
 arch/x86/kernel/cpu/common.c      | 1 +
 arch/x86/kernel/cpu/intel_rdt.c   | 2 ++
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 4e93006..6dc0701 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -12,7 +12,7 @@
 #include <asm/disabled-features.h>
 #endif
 
-#define NCAPINTS	14	/* N 32-bit words worth of info */
+#define NCAPINTS	15	/* N 32-bit words worth of info */
 #define NBUGINTS	1	/* N 32-bit bug flags */
 
 /*
@@ -259,6 +259,9 @@
 /* Intel-defined CPU features, CPUID level 0x00000010:0 (ebx), word 13 */
 #define X86_FEATURE_CAT_L3	(13*32 + 1) /* Cache Allocation L3 */
 
+/* Intel-defined CPU QoS Sub-leaf, CPUID level 0x00000010:1 (ecx), word 14 */
+#define X86_FEATURE_CDP_L3     (14*32 + 2) /* Code data prioritization L3 */
+
 /*
  * BUG word(s)
  */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 026a416..9886e23 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -666,6 +666,7 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
 			cpuid_count(0x00000010, 1, &eax, &ebx, &ecx, &edx);
 			c->x86_cache_max_closid = edx + 1;
 			c->x86_cache_max_cbm_len = eax + 1;
+			c->x86_capability[14] = ecx;
 		}
 	}
 
diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index cb4d2ef..29c8a19 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -561,6 +561,8 @@ static int __init intel_rdt_late_init(void)
 
 	static_key_slow_inc(&rdt_enable_key);
 	pr_info("Intel cache allocation enabled\n");
+	if (cpu_has(c, X86_FEATURE_CDP_L3))
+		pr_info("Intel code data prioritization detected\n");
 out_err:
 
 	return err;
-- 
1.8.1.2



* [PATCH V2 2/5] x86/intel_rdt: Adds support to enable Code Data Prioritization
  2015-10-02  6:33 [PATCH V2 0/5] x86: Intel Code Data Prioritization Support Fenghua Yu
  2015-10-02  6:33 ` [PATCH V2 1/5] x86/intel_rdt: Intel Code Data Prioritization detection Fenghua Yu
@ 2015-10-02  6:33 ` Fenghua Yu
  2015-10-02  6:33 ` [PATCH V2 3/5] x86/intel_rdt: Class of service and capacity bitmask management for CDP Fenghua Yu
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Fenghua Yu @ 2015-10-02  6:33 UTC (permalink / raw)
  To: H Peter Anvin, Ingo Molnar, Thomas Gleixner, Peter Zijlstra
  Cc: linux-kernel, x86, Fenghua Yu, Vikas Shivappa

On Intel SKUs that support Code Data Prioritization (CDP), intel_rdt
operates in one of two modes: the default legacy cache allocation
mode, or CDP mode.

When CDP is enabled, the number of available CLOSids is halved, so
enabling is only allowed while fewer than half of the available
CLOSids are in use. With CDP enabled, each CLOSid maps to a data cache
mask and an instruction cache mask. The enabling itself is done by
writing to the IA32_PQOS_CFG MSR, and the mode can be switched
dynamically.

CDP can be disabled only when, for every (dcache_cbm, icache_cbm)
pair, dcache_cbm == icache_cbm.
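
A minimal sketch of the two mode-switch preconditions described above
(the state and function names below are illustrative stand-ins; the
actual kernel-side checks land in patch 3/5 as switch_mode_cdp()):

  #include <stdbool.h>

  /* illustrative mirror of the kernel's clos_config state */
  struct cdp_state {
          unsigned int closids_used;
          unsigned int max_closid;  /* CLOSid count, not yet halved */
          bool cdp_enabled;
  };

  /* Enabling requires fewer than half the CLOSids to be in use. */
  bool can_enable_cdp(const struct cdp_state *s)
  {
          return !s->cdp_enabled &&
                 s->closids_used < (s->max_closid >> 1);
  }

  /*
   * Disabling requires every (dcache_cbm, icache_cbm) pair to be
   * equal; masks_equal() stands in for the kernel's table walk.
   */
  bool can_disable_cdp(const struct cdp_state *s, bool (*masks_equal)(void))
  {
          return s->cdp_enabled && masks_equal();
  }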

Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
---
 arch/x86/include/asm/intel_rdt.h |  7 +++++
 arch/x86/kernel/cpu/intel_rdt.c  | 66 ++++++++++++++++++++++++++--------------
 2 files changed, 51 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/intel_rdt.h b/arch/x86/include/asm/intel_rdt.h
index fbe1e00..3080008 100644
--- a/arch/x86/include/asm/intel_rdt.h
+++ b/arch/x86/include/asm/intel_rdt.h
@@ -9,6 +9,7 @@
 #define MAX_CBM_LENGTH			32
 #define IA32_L3_CBM_BASE		0xc90
 #define CBM_FROM_INDEX(x)		(IA32_L3_CBM_BASE + x)
+#define MSR_IA32_PQOS_CFG		0xc81
 
 extern struct static_key rdt_enable_key;
 void __intel_rdt_sched_in(void *dummy);
@@ -23,6 +24,12 @@ struct clos_cbm_table {
 	unsigned int clos_refcnt;
 };
 
+struct clos_config {
+	unsigned long *closmap;
+	u32 max_closid;
+	u32 closids_used;
+};
+
 /*
  * Return rdt group corresponding to this container.
  */
diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index 29c8a19..54a8e29 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -34,10 +34,6 @@
  */
 static struct clos_cbm_table *cctable;
 /*
- * closid availability bit map.
- */
-unsigned long *closmap;
-/*
  * Minimum bits required in Cache bitmask.
  */
 static unsigned int min_bitmask_len = 1;
@@ -52,6 +48,11 @@ static cpumask_t rdt_cpumask;
 static cpumask_t tmp_cpumask;
 static DEFINE_MUTEX(rdt_group_mutex);
 struct static_key __read_mostly rdt_enable_key = STATIC_KEY_INIT_FALSE;
+static struct clos_config cconfig;
+static bool cdp_enabled;
+
+#define __DCBM_TABLE_INDEX(x)	(x << 1)
+#define __ICBM_TABLE_INDEX(x)	((x << 1) + 1)
 
 static struct intel_rdt rdt_root_group;
 #define rdt_for_each_child(pos_css, parent_ir)		\
@@ -148,22 +149,28 @@ static int closid_alloc(u32 *closid)
 
 	lockdep_assert_held(&rdt_group_mutex);
 
-	maxid = boot_cpu_data.x86_cache_max_closid;
-	id = find_first_zero_bit(closmap, maxid);
+	maxid = cconfig.max_closid;
+	id = find_first_zero_bit(cconfig.closmap, maxid);
 	if (id == maxid)
 		return -ENOSPC;
 
-	set_bit(id, closmap);
+	set_bit(id, cconfig.closmap);
 	closid_get(id);
 	*closid = id;
+	cconfig.closids_used++;
 
 	return 0;
 }
 
 static inline void closid_free(u32 closid)
 {
-	clear_bit(closid, closmap);
+	clear_bit(closid, cconfig.closmap);
 	cctable[closid].l3_cbm = 0;
+
+	if (WARN_ON(!cconfig.closids_used))
+		return;
+
+	cconfig.closids_used--;
 }
 
 static void closid_put(u32 closid)
@@ -200,45 +207,45 @@ static bool cbm_validate(unsigned long var)
 	return true;
 }
 
-static int clos_cbm_table_read(u32 closid, unsigned long *l3_cbm)
+static int clos_cbm_table_read(u32 index, unsigned long *l3_cbm)
 {
-	u32 maxid = boot_cpu_data.x86_cache_max_closid;
+	u32 orig_maxid = boot_cpu_data.x86_cache_max_closid;
 
 	lockdep_assert_held(&rdt_group_mutex);
 
-	if (closid >= maxid)
+	if (index >= orig_maxid)
 		return -EINVAL;
 
-	*l3_cbm = cctable[closid].l3_cbm;
+	*l3_cbm = cctable[index].l3_cbm;
 
 	return 0;
 }
 
 /*
  * clos_cbm_table_update() - Update a clos cbm table entry.
- * @closid: the closid whose cbm needs to be updated
+ * @index: index of the table entry whose cbm needs to be updated
  * @cbm: the new cbm value that has to be updated
  *
  * This assumes the cbm is validated as per the interface requirements
  * and the cache allocation requirements(through the cbm_validate).
  */
-static int clos_cbm_table_update(u32 closid, unsigned long cbm)
+static int clos_cbm_table_update(u32 index, unsigned long cbm)
 {
-	u32 maxid = boot_cpu_data.x86_cache_max_closid;
+	u32 orig_maxid = boot_cpu_data.x86_cache_max_closid;
 
 	lockdep_assert_held(&rdt_group_mutex);
 
-	if (closid >= maxid)
+	if (index >= orig_maxid)
 		return -EINVAL;
 
-	cctable[closid].l3_cbm = cbm;
+	cctable[index].l3_cbm = cbm;
 
 	return 0;
 }
 
 static bool cbm_search(unsigned long cbm, u32 *closid)
 {
-	u32 maxid = boot_cpu_data.x86_cache_max_closid;
+	u32 maxid = cconfig.max_closid;
 	u32 i;
 
 	for (i = 0; i < maxid; i++) {
@@ -282,6 +289,21 @@ static inline void msr_update_all(int msr, u64 val)
 	on_each_cpu_mask(&rdt_cpumask, msr_cpu_update, &info, 1);
 }
 
+static bool code_data_mask_equal(void)
+{
+	int i, dindex, iindex;
+
+	for (i = 0; i < cconfig.max_closid; i++) {
+		dindex = __DCBM_TABLE_INDEX(i);
+		iindex = __ICBM_TABLE_INDEX(i);
+		if (cctable[dindex].clos_refcnt &&
+		     (cctable[dindex].l3_cbm != cctable[iindex].l3_cbm))
+			return false;
+	}
+
+	return true;
+}
+
 static inline bool rdt_cpumask_update(int cpu)
 {
 	cpumask_and(&tmp_cpumask, &rdt_cpumask, topology_core_cpumask(cpu));
@@ -299,7 +321,7 @@ static inline bool rdt_cpumask_update(int cpu)
  */
 static void cbm_update_msrs(void *dummy)
 {
-	int maxid = boot_cpu_data.x86_cache_max_closid;
+	int maxid = cconfig.max_closid;
 	struct rdt_remote_data info;
 	unsigned int i;
 
@@ -307,7 +329,7 @@ static void cbm_update_msrs(void *dummy)
 		if (cctable[i].clos_refcnt) {
 			info.msr = CBM_FROM_INDEX(i);
 			info.val = cctable[i].l3_cbm;
-			msr_cpu_update(&info);
+			msr_cpu_update((void *) &info);
 		}
 	}
 }
@@ -542,8 +564,8 @@ static int __init intel_rdt_late_init(void)
 	}
 
 	size = BITS_TO_LONGS(maxid) * sizeof(long);
-	closmap = kzalloc(size, GFP_KERNEL);
-	if (!closmap) {
+	cconfig.closmap = kzalloc(size, GFP_KERNEL);
+	if (!cconfig.closmap) {
 		kfree(cctable);
 		err = -ENOMEM;
 		goto out_err;
-- 
1.8.1.2



* [PATCH V2 3/5] x86/intel_rdt: Class of service and capacity bitmask management for CDP
  2015-10-02  6:33 [PATCH V2 0/5] x86: Intel Code Data Prioritization Support Fenghua Yu
  2015-10-02  6:33 ` [PATCH V2 1/5] x86/intel_rdt: Intel Code Data Prioritization detection Fenghua Yu
  2015-10-02  6:33 ` [PATCH V2 2/5] x86/intel_rdt: Adds support to enable Code Data Prioritization Fenghua Yu
@ 2015-10-02  6:33 ` Fenghua Yu
  2015-10-02  6:33 ` [PATCH V2 4/5] x86/intel_rdt: Hot cpu update for code data prioritization Fenghua Yu
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Fenghua Yu @ 2015-10-02  6:33 UTC (permalink / raw)
  To: H Peter Anvin, Ingo Molnar, Thomas Gleixner, Peter Zijlstra
  Cc: linux-kernel, x86, Fenghua Yu, Vikas Shivappa

Add support to manage the CLOSid (Class of Service id) and capacity
bitmask (cbm) for Code Data Prioritization (CDP).

CLOSid management covers allocating and freeing closids, closid_get
and closid_put, and updating the closid availability map during a mode
switch. CDP keeps a separate cbm for code and for data.

Once the mode is switched to CDP, the number of CLOSids is halved.
The clos_cbm_table is reused to store the dcache_cbm and icache_cbm
entries, and the index is calculated as below:

index of dcache_cbm for CLOSid 'n' = (n << 1)
index of icache_cbm for CLOSid 'n' = (n << 1) + 1

The offset of the IA32_L3_MASK_n MSRs from the base is related to the
CLOSid 'n' in the same way.

In other words, when CDP mode is enabled each closid maps to a
(dcache_cbm, icache_cbm) pair. Support for setting up the
clos_cbm_table and closmap when the mode switch happens is added as
well.
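
A small standalone sketch of the index arithmetic, including the
single-shift trick used by the DCBM_TABLE_INDEX()/ICBM_TABLE_INDEX()
macros in the diff below (the function names here are illustrative):

  #include <stdio.h>

  /*
   * Shifting by cdp_enabled (0 or 1) yields a 1:1 table mapping in
   * legacy mode and the (dcache, icache) pair layout in CDP mode.
   */
  unsigned int dcbm_index(unsigned int closid, int cdp_enabled)
  {
          return closid << cdp_enabled;
  }

  unsigned int icbm_index(unsigned int closid, int cdp_enabled)
  {
          return (closid << cdp_enabled) + cdp_enabled;
  }

  int main(void)
  {
          /* CLOSid 3: legacy mode uses slot 3, CDP mode slots 6 and 7 */
          printf("legacy: %u %u\n", dcbm_index(3, 0), icbm_index(3, 0));
          printf("cdp:    %u %u\n", dcbm_index(3, 1), icbm_index(3, 1));
          return 0;
  }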

Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
---
 arch/x86/kernel/cpu/intel_rdt.c | 194 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 190 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index 54a8e29..eab9d73 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -30,7 +30,13 @@
 #include <asm/intel_rdt.h>
 
 /*
- * cctable maintains 1:1 mapping between CLOSid and cache bitmask.
+ * During cache alloc mode cctable maintains 1:1 mapping between
+ * CLOSid and l3_cbm.
+ *
+ * During CDP mode, the cctable maintains a 1:2 mapping between the closid
+ * and (dcache_cbm, icache_cbm) pair.
+ * index of a dcache_cbm for CLOSid 'n' = n << 1.
+ * index of a icache_cbm for CLOSid 'n' = n << 1 + 1
  */
 static struct clos_cbm_table *cctable;
 /*
@@ -53,6 +59,13 @@ static bool cdp_enabled;
 
 #define __DCBM_TABLE_INDEX(x)	(x << 1)
 #define __ICBM_TABLE_INDEX(x)	((x << 1) + 1)
+#define __DCBM_MSR_INDEX(x)			\
+	CBM_FROM_INDEX(__DCBM_TABLE_INDEX(x))
+#define __ICBM_MSR_INDEX(x)			\
+	CBM_FROM_INDEX(__ICBM_TABLE_INDEX(x))
+
+#define DCBM_TABLE_INDEX(x)	(x << cdp_enabled)
+#define ICBM_TABLE_INDEX(x)	((x << cdp_enabled) + cdp_enabled)
 
 static struct intel_rdt rdt_root_group;
 #define rdt_for_each_child(pos_css, parent_ir)		\
@@ -133,9 +146,12 @@ static inline void closid_tasks_sync(void)
 	on_each_cpu_mask(cpu_online_mask, __intel_rdt_sched_in, NULL, 1);
 }
 
+/*
+ * When cdp mode is enabled, refcnt is maintained in the dcache_cbm entry.
+ */
 static inline void closid_get(u32 closid)
 {
-	struct clos_cbm_table *cct = &cctable[closid];
+	struct clos_cbm_table *cct = &cctable[DCBM_TABLE_INDEX(closid)];
 
 	lockdep_assert_held(&rdt_group_mutex);
 
@@ -165,7 +181,7 @@ static int closid_alloc(u32 *closid)
 static inline void closid_free(u32 closid)
 {
 	clear_bit(closid, cconfig.closmap);
-	cctable[closid].l3_cbm = 0;
+	cctable[DCBM_TABLE_INDEX(closid)].l3_cbm = 0;
 
 	if (WARN_ON(!cconfig.closids_used))
 		return;
@@ -175,7 +191,7 @@ static inline void closid_free(u32 closid)
 
 static void closid_put(u32 closid)
 {
-	struct clos_cbm_table *cct = &cctable[closid];
+	struct clos_cbm_table *cct = &cctable[DCBM_TABLE_INDEX(closid)];
 
 	lockdep_assert_held(&rdt_group_mutex);
 	if (WARN_ON(!cct->clos_refcnt))
@@ -259,6 +275,30 @@ static bool cbm_search(unsigned long cbm, u32 *closid)
 	return false;
 }
 
+static bool cbm_pair_search(unsigned long dcache_cbm, unsigned long icache_cbm,
+			    u32 *closid)
+{
+	u32 maxid = cconfig.max_closid;
+	unsigned long dcbm, icbm;
+	u32 i, dindex, iindex;
+
+	for (i = 0; i < maxid; i++) {
+		dindex = __DCBM_TABLE_INDEX(i);
+		iindex = __ICBM_TABLE_INDEX(i);
+		dcbm = cctable[dindex].l3_cbm;
+		icbm = cctable[iindex].l3_cbm;
+
+		if (cctable[dindex].clos_refcnt &&
+		    bitmap_equal(&dcache_cbm, &dcbm, MAX_CBM_LENGTH) &&
+		    bitmap_equal(&icache_cbm, &icbm, MAX_CBM_LENGTH)) {
+			*closid = i;
+			return true;
+		}
+	}
+
+	return false;
+}
+
 static void closcbm_map_dump(void)
 {
 	u32 i;
@@ -289,6 +329,93 @@ static inline void msr_update_all(int msr, u64 val)
 	on_each_cpu_mask(&rdt_cpumask, msr_cpu_update, &info, 1);
 }
 
+/*
+ * clos_cbm_table_df() - Defragments the clos_cbm_table entries
+ * @ct: The clos_cbm_table to which the defragmented entries are copied.
+ *
+ * The max entries in ct is never > original max closids / 2.
+ */
+static void clos_cbm_table_df(struct clos_cbm_table *ct)
+{
+	u32 orig_maxid = boot_cpu_data.x86_cache_max_closid;
+	int i, j;
+
+	for (i = 0, j = 0; i < orig_maxid; i++) {
+		if (cctable[i].clos_refcnt) {
+			ct[j] = cctable[i];
+			set_bit(j, cconfig.closmap);
+			j++;
+		}
+	}
+}
+
+/*
+ * post_cdp_enable() - This sets up the clos_cbm_table and
+ * IA32_L3_MASK_n MSRs before starting to use CDP.
+ *
+ * The existing l3_cbm entries are retained as dcache_cbm and
+ * icache_cbm entries. The IA32_L3_QOS_n MSRs are also updated
+ * as they were reset to all 1s before mode change.
+ */
+static int post_cdp_enable(void)
+{
+	u32 orig_maxid = boot_cpu_data.x86_cache_max_closid;
+	u32 maxid = cconfig.max_closid;
+	int size, dindex, iindex, i;
+	struct clos_cbm_table *ct;
+
+	maxid = cconfig.max_closid;
+	size = maxid * sizeof(struct clos_cbm_table);
+	ct = kzalloc(size, GFP_KERNEL);
+	if (!ct)
+		return -ENOMEM;
+
+	bitmap_zero(cconfig.closmap, orig_maxid);
+	clos_cbm_table_df(ct);
+
+	for (i = 0; i < maxid; i++) {
+		if (ct[i].clos_refcnt) {
+			msr_update_all(__DCBM_MSR_INDEX(i), ct[i].l3_cbm);
+			msr_update_all(__ICBM_MSR_INDEX(i), ct[i].l3_cbm);
+		}
+		dindex = __DCBM_TABLE_INDEX(i);
+		iindex = __ICBM_TABLE_INDEX(i);
+		cctable[dindex] = cctable[iindex] = ct[i];
+	}
+	kfree(ct);
+
+	return 0;
+}
+
+/*
+ * post_cdp_disable() - Set the state of closmap and clos_cbm_table
+ * before using the cache alloc mode.
+ *
+ * The existing dcache_cbm entries are retained as l3_cbm entries.
+ * The IA32_L3_QOS_n MSRs are also updated
+ * as they were reset to all 1s before mode change.
+ */
+static void post_cdp_disable(void)
+{
+	int dindex, maxid, i;
+
+	maxid = cconfig.max_closid >> 1;
+	for (i = 0; i < maxid; i++) {
+		dindex = __DCBM_TABLE_INDEX(i);
+		if (cctable[dindex].clos_refcnt)
+			msr_update_all(CBM_FROM_INDEX(i),
+					 cctable[dindex].l3_cbm);
+
+		cctable[i] = cctable[dindex];
+	}
+
+	/*
+	 * We updated half of the clos_cbm_table entries, initialize the
+	 * rest of the clos_cbm_table entries.
+	 */
+	memset(&cctable[maxid], 0, maxid * sizeof(struct clos_cbm_table));
+}
+
 static bool code_data_mask_equal(void)
 {
 	int i, dindex, iindex;
@@ -304,6 +431,65 @@ static bool code_data_mask_equal(void)
 	return true;
 }
 
+static void __switch_mode_cdp(bool cdpenable)
+{
+	u32 max_cbm_len = boot_cpu_data.x86_cache_max_cbm_len;
+	u32 orig_maxid = boot_cpu_data.x86_cache_max_closid;
+	u32 max_cbm, i;
+
+	max_cbm = (1ULL << max_cbm_len) - 1;
+	for (i = 0; i < orig_maxid; i++)
+		msr_update_all(CBM_FROM_INDEX(i), max_cbm);
+
+	msr_update_all(MSR_IA32_PQOS_CFG, cdpenable);
+
+	if (cdpenable)
+		cconfig.max_closid = cconfig.max_closid >> 1;
+	else
+		cconfig.max_closid = cconfig.max_closid << 1;
+	cdp_enabled = cdpenable;
+}
+
+/*
+ * switch_mode_cdp() - switch between legacy cache alloc and cdp modes
+ * @cdpmode: '0' to disable cdp and '1' to enable cdp
+ *
+ * cdp is enabled only when the number of closids used is less than half
+ * of available closids. Switch to legacy cache alloc mode when
+ * for each (dcache_cbm,icache_cbm) pair, the dcache_cbm = icache_cbm.
+ */
+static int switch_mode_cdp(bool cdpenable)
+{
+	struct cpuinfo_x86 *c = &boot_cpu_data;
+	u32 maxid = cconfig.max_closid;
+	int err = 0;
+
+	lockdep_assert_held(&rdt_group_mutex);
+
+	if (!cpu_has(c, X86_FEATURE_CDP_L3) || cdpenable == cdp_enabled)
+		return -EINVAL;
+
+	if ((cdpenable && (cconfig.closids_used >= (maxid >> 1))) ||
+	     (!cdpenable && !code_data_mask_equal()))
+		return -ENOSPC;
+
+	__switch_mode_cdp(cdpenable);
+
+	/*
+	 * After mode switch to cdp, the index for IA32_L3_MASK_n from the base
+	 * for a CLOSid 'n' is:
+	 * dcache_cbm_index (n) = (n << 1)
+	 * icache_cbm_index (n) = (n << 1) +1
+	*/
+	if (cdpenable)
+		err = post_cdp_enable();
+	else
+		post_cdp_disable();
+	closcbm_map_dump();
+
+	return err;
+}
+
 static inline bool rdt_cpumask_update(int cpu)
 {
 	cpumask_and(&tmp_cpumask, &rdt_cpumask, topology_core_cpumask(cpu));
-- 
1.8.1.2



* [PATCH V2 4/5] x86/intel_rdt: Hot cpu update for code data prioritization
  2015-10-02  6:33 [PATCH V2 0/5] x86: Intel Code Data Prioritization Support Fenghua Yu
                   ` (2 preceding siblings ...)
  2015-10-02  6:33 ` [PATCH V2 3/5] x86/intel_rdt: Class of service and capacity bitmask management for CDP Fenghua Yu
@ 2015-10-02  6:33 ` Fenghua Yu
  2015-10-02  6:33 ` [PATCH V2 5/5] x86,cgroup/intel_rdt: Add cgroup interface " Fenghua Yu
  2015-10-11 19:51 ` [PATCH V2 0/5] x86: Intel Code Data Prioritization Support Thomas Gleixner
  5 siblings, 0 replies; 8+ messages in thread
From: Fenghua Yu @ 2015-10-02  6:33 UTC (permalink / raw)
  To: H Peter Anvin, Ingo Molnar, Thomas Gleixner, Peter Zijlstra
  Cc: linux-kernel, x86, Fenghua Yu, Vikas Shivappa

Update the hot cpu notification handling for Code Data
Prioritization (CDP). The capacity bitmask (cbm) state is global for
both data and instruction, so a newly onlined package has to be
brought up to date with all the cbms by writing the IA32_L3_QOS_n
MSRs.
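
A rough, self-contained sketch of that per-package restore;
wrmsr_on_new_package() is a hypothetical no-op placeholder for the
kernel's msr_cpu_update() IPI path, and the real loop is
cbm_update_msrs()/cbm_update_msr() in the diff below:

  #include <stdbool.h>
  #include <stdint.h>

  struct cbm_entry { uint64_t cbm; unsigned int refcnt; };

  /* hypothetical: write 'val' to MSR 'msr' on one CPU of the package */
  void wrmsr_on_new_package(uint32_t msr, uint64_t val) { (void)msr; (void)val; }

  /* Reprogram every in-use mask MSR on a package that just came online. */
  void restore_masks(struct cbm_entry *table, unsigned int max_closid,
                     bool cdp_enabled)
  {
          unsigned int i, d;

          for (i = 0; i < max_closid; i++) {
                  d = i << cdp_enabled;  /* dcache (or legacy) slot */
                  if (!table[d].refcnt)
                          continue;
                  wrmsr_on_new_package(0xc90 + d, table[d].cbm);
                  if (cdp_enabled)
                          wrmsr_on_new_package(0xc90 + d + 1, table[d + 1].cbm);
          }
  }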

Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
---
 arch/x86/kernel/cpu/intel_rdt.c | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index eab9d73..fe4d8e3 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -501,6 +501,26 @@ static inline bool rdt_cpumask_update(int cpu)
 	return false;
 }
 
+static void cbm_update_msr(u32 index)
+{
+	struct rdt_remote_data info;
+	int dindex;
+
+	dindex = DCBM_TABLE_INDEX(index);
+	if (cctable[dindex].clos_refcnt) {
+
+		info.msr = CBM_FROM_INDEX(dindex);
+		info.val = cctable[dindex].l3_cbm;
+		msr_cpu_update((void *) &info);
+
+		if (cdp_enabled) {
+			info.msr = __ICBM_MSR_INDEX(index);
+			info.val = cctable[dindex + 1].l3_cbm;
+			msr_cpu_update((void *) &info);
+		}
+	}
+}
+
 /*
  * cbm_update_msrs() - Updates all the existing IA32_L3_MASK_n MSRs
  * which are one per CLOSid on the current package.
@@ -508,15 +528,10 @@ static inline bool rdt_cpumask_update(int cpu)
 static void cbm_update_msrs(void *dummy)
 {
 	int maxid = cconfig.max_closid;
-	struct rdt_remote_data info;
 	unsigned int i;
 
 	for (i = 0; i < maxid; i++) {
-		if (cctable[i].clos_refcnt) {
-			info.msr = CBM_FROM_INDEX(i);
-			info.val = cctable[i].l3_cbm;
-			msr_cpu_update((void *) &info);
-		}
+		cbm_update_msr(i);
 	}
 }
 
-- 
1.8.1.2



* [PATCH V2 5/5] x86,cgroup/intel_rdt: Add cgroup interface for code data prioritization
  2015-10-02  6:33 [PATCH V2 0/5] x86: Intel Code Data Prioritization Support Fenghua Yu
                   ` (3 preceding siblings ...)
  2015-10-02  6:33 ` [PATCH V2 4/5] x86/intel_rdt: Hot cpu update for code data prioritization Fenghua Yu
@ 2015-10-02  6:33 ` Fenghua Yu
  2015-10-11 19:51 ` [PATCH V2 0/5] x86: Intel Code Data Prioritization Support Thomas Gleixner
  5 siblings, 0 replies; 8+ messages in thread
From: Fenghua Yu @ 2015-10-02  6:33 UTC (permalink / raw)
  To: H Peter Anvin, Ingo Molnar, Thomas Gleixner, Peter Zijlstra
  Cc: linux-kernel, x86, Fenghua Yu, Vikas Shivappa

Adds two files, 'dcache_cbm' and 'icache_cbm', to the intel_rdt cgroup
when Code Data Prioritization (CDP) support is present. The files
represent the data capacity bit mask (cbm) and the instruction cbm for
the L3 cache. The user can specify the data and code cbms, and the
threads belonging to the cgroup then fill the portions of the L3 cache
represented by those cbms with data and code respectively.

For example, consider a scenario where the maximum number of cbm bits
is 10 and the L3 cache size is 10MB: specifying dcache_cbm = 0x3 and
icache_cbm = 0xc then gives the tasks 2MB of exclusive cache to fill
with data and another 2MB to fill with code.
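
A quick sanity check of that arithmetic (illustrative only: it assumes
each of the 10 cbm bits covers an equal 1MB slice of the 10MB cache):

  #include <stdio.h>

  int main(void)
  {
          unsigned int max_cbm_bits = 10, l3_size_mb = 10;
          unsigned int dcache_cbm = 0x3, icache_cbm = 0xc;
          unsigned int mb_per_bit = l3_size_mb / max_cbm_bits;

          /* 0x3 and 0xc each have two bits set and do not overlap */
          printf("data:    ~%u MB\n",
                 __builtin_popcount(dcache_cbm) * mb_per_bit);
          printf("code:    ~%u MB\n",
                 __builtin_popcount(icache_cbm) * mb_per_bit);
          printf("overlap: %s\n",
                 (dcache_cbm & icache_cbm) ? "yes" : "no");
          return 0;
  }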

This feature is an extension to cache allocation and lets the user
specify a capacity for code and data separately. Initially these cbms
have the same value as l3_cbm (which represents the common cbm for
code and data). Once the user writes to either dcache_cbm or
icache_cbm, the kernel tries to enable CDP mode in hardware by writing
to the IA32_PQOS_CFG MSR. The switch is only possible if the number of
Class of Service ids (CLOSids) in use is less than half of the total
CLOSids available at the time of the switch. This is because the
number of CLOSids is halved once CDP is enabled and each CLOSid then
maps to a data IA32_L3_QOS_n MSR and a code IA32_L3_QOS_n MSR.
Once CDP is enabled, the user can use dcache_cbm and icache_cbm just
like l3_cbm. The CLOSids are not exposed to the user and are
maintained internally by the kernel.

The kernel automatically tries to switch back to legacy cache alloc
mode if it finds that all the code and data cbms are the same *and*
there is a need for additional CLOSids.

A write to l3_cbm writes to both dcache_cbm and icache_cbm in all
modes.

Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
---
 arch/x86/kernel/cpu/intel_rdt.c | 328 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 295 insertions(+), 33 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index fe4d8e3..389c690 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -329,6 +329,24 @@ static inline void msr_update_all(int msr, u64 val)
 	on_each_cpu_mask(&rdt_cpumask, msr_cpu_update, &info, 1);
 }
 
+static void cgroup_rdt_closid_update(struct intel_rdt *par,
+				     u32 oldclosid, u32 newclosid)
+{
+	struct cgroup_subsys_state *css;
+	struct intel_rdt *c;
+
+	if (par == NULL || oldclosid == newclosid)
+		return;
+
+	if (par->closid == oldclosid)
+		par->closid = newclosid;
+
+	rdt_for_each_child(css, par) {
+		c = css_rdt(css);
+		cgroup_rdt_closid_update(c, oldclosid, newclosid);
+	}
+}
+
 /*
  * clos_cbm_table_df() - Defragments the clos_cbm_table entries
  * @ct: The clos_cbm_table to which the defragmented entries are copied.
@@ -344,6 +362,9 @@ static void clos_cbm_table_df(struct clos_cbm_table *ct)
 		if (cctable[i].clos_refcnt) {
 			ct[j] = cctable[i];
 			set_bit(j, cconfig.closmap);
+			rcu_read_lock();
+			cgroup_rdt_closid_update(&rdt_root_group, i, j);
+			rcu_read_unlock();
 			j++;
 		}
 	}
@@ -621,31 +642,55 @@ static void intel_rdt_css_free(struct cgroup_subsys_state *css)
 	mutex_unlock(&rdt_group_mutex);
 }
 
-static int intel_cache_alloc_cbm_read(struct seq_file *m, void *v)
+static inline u32 dcbm_table_index(u32 closid)
+{
+	return DCBM_TABLE_INDEX(closid);
+}
+
+static inline u32 icbm_table_index(u32 closid)
+{
+	return ICBM_TABLE_INDEX(closid);
+}
+
+static int cbm_read_common(struct seq_file *m, u32 (*clos_fn)(u32))
 {
 	struct intel_rdt *ir = css_rdt(seq_css(m));
 	unsigned long l3_cbm = 0;
 
-	clos_cbm_table_read(ir->closid, &l3_cbm);
+	clos_cbm_table_read(clos_fn(ir->closid), &l3_cbm);
 	seq_printf(m, "%08lx\n", l3_cbm);
 
 	return 0;
 }
 
-static int cbm_validate_rdt_cgroup(struct intel_rdt *ir, unsigned long cbmvalue)
+/*
+ * Reads the dcache_cbm when cdp is enabled.
+ */
+static int intel_cache_alloc_cbm_read(struct seq_file *m, void *v)
+{
+	return cbm_read_common(m, dcbm_table_index);
+}
+
+static int cdp_dcache_cbm_read(struct seq_file *m, void *v)
+{
+	return cbm_read_common(m, dcbm_table_index);
+}
+
+static int cdp_icache_cbm_read(struct seq_file *m, void *v)
+{
+	return cbm_read_common(m, icbm_table_index);
+}
+
+static int cbm_validate_rdt_cgroup(struct intel_rdt *ir, unsigned long cbmvalue,
+				    u32 (*clos_fn)(u32))
 {
 	struct cgroup_subsys_state *css;
 	struct intel_rdt *par, *c;
 	unsigned long cbm_tmp = 0;
 	int err = 0;
 
-	if (!cbm_validate(cbmvalue)) {
-		err = -EINVAL;
-		goto out_err;
-	}
-
 	par = parent_rdt(ir);
-	clos_cbm_table_read(par->closid, &cbm_tmp);
+	clos_cbm_table_read(clos_fn(par->closid), &cbm_tmp);
 	if (!bitmap_subset(&cbmvalue, &cbm_tmp, MAX_CBM_LENGTH)) {
 		err = -EINVAL;
 		goto out_err;
@@ -654,7 +699,7 @@ static int cbm_validate_rdt_cgroup(struct intel_rdt *ir, unsigned long cbmvalue)
 	rcu_read_lock();
 	rdt_for_each_child(css, ir) {
 		c = css_rdt(css);
-		clos_cbm_table_read(par->closid, &cbm_tmp);
+		clos_cbm_table_read(clos_fn(c->closid), &cbm_tmp);
 		if (!bitmap_subset(&cbm_tmp, &cbmvalue, MAX_CBM_LENGTH)) {
 			rcu_read_unlock();
 			err = -EINVAL;
@@ -667,6 +712,76 @@ out_err:
 	return err;
 }
 
+static inline int closid_alloc_try_cdpdisable(struct intel_rdt *ir,
+					       u64 dcache_mask,
+					       u64 icache_mask)
+{
+	int err = 0;
+
+	err = closid_alloc(&ir->closid);
+	if (err && (dcache_mask == icache_mask)) {
+		/*
+		 * Can try allocating again after disabling cdp.
+		 * This is an attempt to see if all the dcache and icache
+		 * masks are going to be same and hence can free more
+		 * closids by switching to cache alloc mode.
+		 */
+		if (!switch_mode_cdp(false))
+			err = closid_alloc(&ir->closid);
+	}
+
+	return err;
+}
+
+static inline void dcache_icache_cbm_read(u32 closid, unsigned long *dcbm,
+			       unsigned long *icbm)
+{
+	clos_cbm_table_read(__DCBM_TABLE_INDEX(closid), dcbm);
+	clos_cbm_table_read(__ICBM_TABLE_INDEX(closid), icbm);
+}
+
+static int cdp_mask_write(struct intel_rdt *ir, u64 dcache_mask,
+			  u64 icache_mask)
+{
+	u32 dindex, iindex;
+	int err = 0;
+	u32 closid;
+
+	/*
+	 * Try to get a reference for a different CLOSid and release the
+	 * reference to the current CLOSid.
+	 * Need to put down the reference here and get it back in case we
+	 * run out of closids. Otherwise we run into a problem when
+	 * we could be using the last closid that could have been available.
+	 */
+	closid_put(ir->closid);
+	if (cbm_pair_search(dcache_mask, icache_mask, &closid)) {
+		ir->closid = closid;
+		closid_get(closid);
+	} else {
+		err = closid_alloc_try_cdpdisable(ir, dcache_mask, icache_mask);
+		if (err) {
+			closid_get(ir->closid);
+			goto out;
+		}
+		closid = ir->closid;
+		dindex = DCBM_TABLE_INDEX(closid);
+		clos_cbm_table_update(dindex, dcache_mask);
+		msr_update_all(CBM_FROM_INDEX(dindex), dcache_mask);
+
+		if (cdp_enabled) {
+			iindex = ICBM_TABLE_INDEX(closid);
+			clos_cbm_table_update(iindex, icache_mask);
+			msr_update_all(CBM_FROM_INDEX(iindex), icache_mask);
+		}
+	}
+	closid_tasks_sync();
+	closcbm_map_dump();
+out:
+
+	return err;
+}
+
 /*
  * intel_cache_alloc_cbm_write() - Validates and writes the
  * cache bit mask(cbm) to the IA32_L3_MASK_n
@@ -693,11 +808,39 @@ static int intel_cache_alloc_cbm_write(struct cgroup_subsys_state *css,
 	 */
 	mutex_lock(&rdt_group_mutex);
 
+	if (!cbm_validate(cbmvalue)) {
+		err = -EINVAL;
+		goto out;
+	}
+
+	/*
+	 * A write to l3_cbm when cdp is enabled writes to both
+	 * dcache_cbm and icache_cbm.
+	 */
+	if (cdp_enabled) {
+		unsigned long cdm = 0, cim = 0;
+
+		dcache_icache_cbm_read(ir->closid, &cdm, &cim);
+		if (cbmvalue == cdm && cbmvalue == cim)
+			goto out;
+
+		err = cbm_validate_rdt_cgroup(ir, cbmvalue, dcbm_table_index);
+		if (err)
+			goto out;
+
+		err = cbm_validate_rdt_cgroup(ir, cbmvalue, icbm_table_index);
+		if (err)
+			goto out;
+
+		err = cdp_mask_write(ir, cbmvalue, cbmvalue);
+		goto out;
+	}
+
 	clos_cbm_table_read(ir->closid, &ccbm);
 	if (cbmvalue == ccbm)
 		goto out;
 
-	err = cbm_validate_rdt_cgroup(ir, cbmvalue);
+	err = cbm_validate_rdt_cgroup(ir, cbmvalue, dcbm_table_index);
 	if (err)
 		goto out;
 
@@ -731,22 +874,154 @@ out:
 	return err;
 }
 
-static void rdt_cgroup_init(void)
+/*
+ * A write to the dcache_cbm may trigger a cdp mode switch
+ */
+static int cdp_dcache_cbm_write(struct cgroup_subsys_state *css,
+				 struct cftype *cft, u64 new_dcache_mask)
+{
+	unsigned long curr_icache_mask = 0, curr_dcache_mask = 0;
+	struct intel_rdt *ir = css_rdt(css);
+	int err = 0;
+
+	if (ir == &rdt_root_group)
+		return -EPERM;
+
+	/*
+	 * Need global mutex as cache mask write may allocate a closid.
+	 */
+	mutex_lock(&rdt_group_mutex);
+
+	if (!cbm_validate(new_dcache_mask)) {
+		err = -EINVAL;
+		goto out;
+	}
+
+	if (!cdp_enabled) {
+		err = switch_mode_cdp(true);
+		if (err)
+			goto out;
+	}
+
+	dcache_icache_cbm_read(ir->closid, &curr_dcache_mask,
+			       &curr_icache_mask);
+	if (new_dcache_mask == curr_dcache_mask)
+		goto out;
+
+	err = cbm_validate_rdt_cgroup(ir, new_dcache_mask, dcbm_table_index);
+	if (err)
+		goto out;
+
+	err = cdp_mask_write(ir, new_dcache_mask, curr_icache_mask);
+out:
+	mutex_unlock(&rdt_group_mutex);
+
+	return err;
+}
+
+/*
+ * A write to the icache_cbm may trigger a cdp mode switch
+ */
+static int cdp_icache_cbm_write(struct cgroup_subsys_state *css,
+				 struct cftype *cft, u64 new_icache_mask)
+{
+	unsigned long curr_icache_mask = 0, curr_dcache_mask = 0;
+	struct intel_rdt *ir = css_rdt(css);
+	int err = 0;
+
+	if (ir == &rdt_root_group)
+		return -EPERM;
+
+	/*
+	 * Need global mutex as cache mask write may allocate a closid.
+	 */
+	mutex_lock(&rdt_group_mutex);
+
+	if (!cbm_validate(new_icache_mask)) {
+		err = -EINVAL;
+		goto out;
+	}
+
+	if (!cdp_enabled) {
+		err = switch_mode_cdp(true);
+		if (err)
+			goto out;
+	}
+
+	dcache_icache_cbm_read(ir->closid, &curr_dcache_mask,
+			       &curr_icache_mask);
+	if (new_icache_mask == curr_icache_mask)
+		goto out;
+
+	err = cbm_validate_rdt_cgroup(ir, new_icache_mask, icbm_table_index);
+	if (err)
+		goto out;
+
+	err = cdp_mask_write(ir, curr_dcache_mask, new_icache_mask);
+out:
+	mutex_unlock(&rdt_group_mutex);
+
+	return err;
+}
+
+static struct cftype *rdt_files;
+
+struct cgroup_subsys intel_rdt_cgrp_subsys = {
+	.css_alloc		= intel_rdt_css_alloc,
+	.css_free		= intel_rdt_css_free,
+	.early_init		= 0,
+};
+
+static int rdt_cgroup_init(bool cdpsupported)
 {
 	int max_cbm_len = boot_cpu_data.x86_cache_max_cbm_len;
+	struct cftype *cft;
+	int size, i = 0;
 	u32 closid;
 
-	closid_alloc(&closid);
+	size = (2 << cdpsupported) * sizeof(struct cftype);
+	rdt_files = kzalloc(size, GFP_KERNEL);
 
-	WARN_ON(closid != 0);
+	if (!rdt_files) {
+		rdt_root_group.css.ss->disabled = 1;
+		return -ENOMEM;
+	}
+
+	cft = &rdt_files[i++];
+	snprintf(cft->name, MAX_CFTYPE_NAME, "l3_cbm");
+	cft->seq_show = intel_cache_alloc_cbm_read;
+	cft->write_u64 = intel_cache_alloc_cbm_write;
+
+	if (cdpsupported) {
+		cft = &rdt_files[i++];
+		snprintf(cft->name, MAX_CFTYPE_NAME, "dcache_cbm");
+		cft->seq_show = cdp_dcache_cbm_read;
+		cft->write_u64 = cdp_dcache_cbm_write;
+
+		cft = &rdt_files[i++];
+		snprintf(cft->name, MAX_CFTYPE_NAME, "icache_cbm");
+		cft->seq_show = cdp_icache_cbm_read;
+		cft->write_u64 = cdp_icache_cbm_write;
+	}
+
+	cft = &rdt_files[i];
+	memset(cft, 0, sizeof(*cft));
 
+	closid_alloc(&closid);
 	rdt_root_group.closid = closid;
 	clos_cbm_table_update(closid, (1ULL << max_cbm_len) - 1);
+
+	WARN_ON(closid != 0);
+	WARN_ON(cgroup_add_legacy_cftypes(&intel_rdt_cgrp_subsys,
+					  rdt_files));
+
+	return 0;
 }
 
 static int __init intel_rdt_late_init(void)
 {
 	struct cpuinfo_x86 *c = &boot_cpu_data;
+	bool cdpsupported = false;
 	u32 maxid, max_cbm_len;
 	int err = 0, size, i;
 
@@ -754,7 +1029,7 @@ static int __init intel_rdt_late_init(void)
 		rdt_root_group.css.ss->disabled = 1;
 		return -ENODEV;
 	}
-	maxid = c->x86_cache_max_closid;
+	maxid = cconfig.max_closid = c->x86_cache_max_closid;
 	max_cbm_len = c->x86_cache_max_cbm_len;
 
 	size = maxid * sizeof(struct clos_cbm_table);
@@ -780,31 +1055,18 @@ static int __init intel_rdt_late_init(void)
 	__hotcpu_notifier(intel_rdt_cpu_notifier, 0);
 
 	cpu_notifier_register_done();
-	rdt_cgroup_init();
 
 	static_key_slow_inc(&rdt_enable_key);
 	pr_info("Intel cache allocation enabled\n");
-	if (cpu_has(c, X86_FEATURE_CDP_L3))
+	if (cpu_has(c, X86_FEATURE_CDP_L3)) {
+		cdpsupported = true;
 		pr_info("Intel code data prioritization detected\n");
+	}
+	rdt_cgroup_init(cdpsupported);
+
 out_err:
 
 	return err;
 }
 
 late_initcall(intel_rdt_late_init);
-
-static struct cftype rdt_files[] = {
-	{
-		.name		= "l3_cbm",
-		.seq_show	= intel_cache_alloc_cbm_read,
-		.write_u64	= intel_cache_alloc_cbm_write,
-	},
-	{ }	/* terminate */
-};
-
-struct cgroup_subsys intel_rdt_cgrp_subsys = {
-	.css_alloc		= intel_rdt_css_alloc,
-	.css_free		= intel_rdt_css_free,
-	.legacy_cftypes		= rdt_files,
-	.early_init		= 0,
-};
-- 
1.8.1.2



* Re: [PATCH V2 0/5] x86: Intel Code Data Prioritization Support
  2015-10-02  6:33 [PATCH V2 0/5] x86: Intel Code Data Prioritization Support Fenghua Yu
                   ` (4 preceding siblings ...)
  2015-10-02  6:33 ` [PATCH V2 5/5] x86,cgroup/intel_rdt: Add cgroup interface " Fenghua Yu
@ 2015-10-11 19:51 ` Thomas Gleixner
  2015-10-12 18:55   ` Yu, Fenghua
  5 siblings, 1 reply; 8+ messages in thread
From: Thomas Gleixner @ 2015-10-11 19:51 UTC (permalink / raw)
  To: Fenghua Yu
  Cc: H Peter Anvin, Ingo Molnar, Peter Zijlstra, linux-kernel, x86,
	Vikas Shivappa, Marcelo Tosatti

On Thu, 1 Oct 2015, Fenghua Yu wrote:

> This patch set adds support for Intel Code Data Prioritization (CDP),
> an extension of cache allocation that allows code and data cache to be
> allocated separately. It also includes the cgroup interface for the
> user as separate patches. The cgroup interface for cache allocation is
> resent as well.

Please sort out the interface discussion of the first set. When that
is settled we can have a look at this part.

Thanks,

	tglx


* RE: [PATCH V2 0/5] x86: Intel Code Data Prioritization Support
  2015-10-11 19:51 ` [PATCH V2 0/5] x86: Intel Code Data Prioritization Support Thomas Gleixner
@ 2015-10-12 18:55   ` Yu, Fenghua
  0 siblings, 0 replies; 8+ messages in thread
From: Yu, Fenghua @ 2015-10-12 18:55 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: H Peter Anvin, Ingo Molnar, Peter Zijlstra, linux-kernel, x86,
	Vikas Shivappa, Marcelo Tosatti

> From: Thomas Gleixner [mailto:tglx@linutronix.de]
> Sent: Sunday, October 11, 2015 12:52 PM
> To: Yu, Fenghua
> Cc: H Peter Anvin; Ingo Molnar; Peter Zijlstra; linux-kernel; x86; Vikas
> Shivappa; Marcelo Tosatti
> Subject: Re: [PATCH V2 0/5] x86: Intel Code Data Prioritization Support
> 
> On Thu, 1 Oct 2015, Fenghua Yu wrote:
> 
> > This patch set adds support for Intel Code Data Prioritization
> > (CDP), an extension of cache allocation that allows code and data
> > cache to be allocated separately. It also includes the cgroup
> > interface for the user as separate patches. The cgroup interface
> > for cache allocation is resent as well.
> 
> Please sort out the interface discussion of the first set. When that is settled
> we can have a look at this part.

Yes, Peter Anvin is working on the interface with Tejun.

Thanks.

-Fenghua


