* [PATCH RFC 0/3] x86: Full support of PAT
@ 2014-08-19 13:25 jgross
  2014-08-19 13:25 ` [PATCH RFC 1/3] x86: Make page cache mode a real type jgross
                   ` (3 more replies)
  0 siblings, 4 replies; 22+ messages in thread
From: jgross @ 2014-08-19 13:25 UTC (permalink / raw)
  To: stefan.bader, toshi.kani, linux-kernel, xen-devel, konrad.wilk,
	ville.syrjala, hpa, x86

The x86 architecture offers, via the PAT (Page Attribute Table), a way to
specify different caching modes in page table entries. The PAT MSR contains
8 entries, each specifying one of 6 possible cache modes. A pte references one
of those entries via 3 bits: _PAGE_PAT, _PAGE_PWT and _PAGE_PCD.
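
For illustration only (this sketch is not part of the patches; bit positions
follow the existing x86 definitions _PAGE_BIT_PWT = 3, _PAGE_BIT_PCD = 4 and
_PAGE_BIT_PAT = 7 for 4k ptes), the hardware selects the PAT entry like this:

	static unsigned int pte_to_pat_index(unsigned long pteval)
	{
		unsigned int idx;

		idx  =  (pteval >> _PAGE_BIT_PWT) & 1;         /* index bit 0 */
		idx |= ((pteval >> _PAGE_BIT_PCD) & 1) << 1;   /* index bit 1 */
		idx |= ((pteval >> _PAGE_BIT_PAT) & 1) << 2;   /* index bit 2 */

		return idx;    /* selects one of the 8 PAT MSR entries */
	}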

The Linux kernel currently supports only 4 different cache modes. The PAT MSR
is set up in a way that the setting of _PAGE_PAT in a pte doesn't matter: the
top 4 entries in the PAT MSR are the same as the 4 lower entries.
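
For reference, this is the layout pat_init() programs today (the upper four
entries mirror the lower four, so the value of _PAGE_PAT never changes the
result):

	pat = PAT(0, WB) | PAT(1, WC) | PAT(2, UC_MINUS) | PAT(3, UC) |
	      PAT(4, WB) | PAT(5, WC) | PAT(6, UC_MINUS) | PAT(7, UC);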

As a result the kernel doesn't support e.g. write-through mode. That cache
mode in particular would speed up video card drivers, which currently have to
use uncached accesses instead.

On the other hand, some old processors (Pentium) don't support PAT at all, and
the Xen hypervisor has been using a different PAT MSR configuration for some
time now; it can't change that configuration, as it is part of its ABI.

This patch set abstracts the cache mode from the pte and introduces tables to
translate between cache mode and pte bits (the default cache mode "write back"
is hard-wired to PAT entry 0). The tables are statically initialized with
values compatible with old processors and current usage. As soon as the PAT
MSR is changed (or - in the Xen case - is read at boot time) the tables are
updated accordingly. Requests for mappings with special cache modes are now
always possible; if a mode isn't supported there is a fallback to a compatible
but slower mode.
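
As a rough usage sketch (simplified, not taken verbatim from the patches), the
two translation tables introduced in patch 1 form a pair of inverse mappings,
with WB hard-wired to index 0:

	enum page_cache_mode mode = _PAGE_CACHE_MODE_UC_MINUS;
	unsigned long bits;
	enum page_cache_mode back;

	bits = protval_cachemode(mode);           /* cache mode -> pte cache bits */
	back = cachemode_pgprot(__pgprot(bits));  /* pte cache bits -> cache mode */
	/* back == mode whenever the current PAT layout provides an entry for it;
	 * otherwise a compatible (slower) mode is returned */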

[PATCH RFC 1/3] x86: Make page cache mode a real type
[PATCH RFC 2/3] x86: Enable PAT to use cache mode translation tables
[PATCH RFC 3/3] Support Xen pv-domains using PAT


* [PATCH RFC 1/3] x86: Make page cache mode a real type
  2014-08-19 13:25 [PATCH RFC 0/3] x86: Full support of PAT jgross
@ 2014-08-19 13:25 ` jgross
  2014-08-20 19:26   ` Toshi Kani
  2014-08-21 22:09   ` Toshi Kani
  2014-08-19 13:25 ` [PATCH RFC 2/3] x86: Enable PAT to use cache mode translation tables jgross
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 22+ messages in thread
From: jgross @ 2014-08-19 13:25 UTC (permalink / raw)
  To: stefan.bader, toshi.kani, linux-kernel, xen-devel, konrad.wilk,
	ville.syrjala, hpa, x86
  Cc: Juergen Gross

From: Juergen Gross <jgross@suse.com>

At the moment there are a lot of places that handle setting or getting
the page cache mode by treating the pgprot bits as equal to the cache mode.
This only works because of a lot of assumptions about the setup of the
PAT MSR. Otherwise the cache type needs to be translated into pgprot bits
and vice versa.

This patch tries to prepare for that by introducing a separate type for the
cache mode and adding functions to translate between cache mode and pgprot
values.

To avoid too much of a performance penalty the translation between cache mode
and pgprot values is done via tables which contain the relevant information.
Write-back cache mode is hard-wired to be 0, all other modes are configurable
via those tables. For large pages there are separate translation functions, as
the PAT bit is located at different positions in the ptes of 4k and large pages.
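
For reference (bit positions per the existing x86 definitions, so this is just
a restatement of why the extra helpers exist, not new behaviour): in a 4k pte
the PAT bit is bit 7 (_PAGE_BIT_PAT), while in a 2M/1G entry bit 7 is PSE and
the PAT bit moves to bit 12 (_PAGE_BIT_PAT_LARGE). A 4k-format pgprot
therefore has to be converted before it is written into a large-page entry,
e.g. (this mirrors the populate_pmd() change further down):

	pmd_pgprot = pgprot_4k_2_large(pgprot);
	set_pmd(pmd, __pmd(cpa->pfn | _PAGE_PSE | massage_pgprot(pmd_pgprot)));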

Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/cacheflush.h         |  34 +++++---
 arch/x86/include/asm/fb.h                 |   6 +-
 arch/x86/include/asm/io.h                 |   2 +-
 arch/x86/include/asm/pat.h                |   6 +-
 arch/x86/include/asm/pgtable.h            |  19 ++---
 arch/x86/include/asm/pgtable_types.h      |  92 +++++++++++++++++-----
 arch/x86/mm/dump_pagetables.c             |  24 +++---
 arch/x86/mm/init.c                        |  29 +++++++
 arch/x86/mm/init_64.c                     |   9 ++-
 arch/x86/mm/iomap_32.c                    |  15 ++--
 arch/x86/mm/ioremap.c                     |  63 ++++++++-------
 arch/x86/mm/pageattr.c                    |  84 ++++++++++++--------
 arch/x86/mm/pat.c                         | 126 +++++++++++++++---------------
 arch/x86/mm/pat_internal.h                |  20 ++---
 arch/x86/mm/pat_rbtree.c                  |   8 +-
 arch/x86/pci/i386.c                       |   4 +-
 drivers/video/fbdev/gbefb.c               |   3 +-
 drivers/video/fbdev/vermilion/vermilion.c |   6 +-
 18 files changed, 339 insertions(+), 211 deletions(-)

diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index 9863ee3..8789280 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -9,10 +9,10 @@
 /*
  * X86 PAT uses page flags WC and Uncached together to keep track of
  * memory type of pages that have backing page struct. X86 PAT supports 3
- * different memory types, _PAGE_CACHE_WB, _PAGE_CACHE_WC and
- * _PAGE_CACHE_UC_MINUS and fourth state where page's memory type has not
+ * different memory types, _PAGE_CACHE_MODE_WB, _PAGE_CACHE_MODE_WC and
+ * _PAGE_CACHE_MODE_UC_MINUS and fourth state where page's memory type has not
  * been changed from its default (value of -1 used to denote this).
- * Note we do not support _PAGE_CACHE_UC here.
+ * Note we do not support _PAGE_CACHE_MODE_UC here.
  */
 
 #define _PGMT_DEFAULT		0
@@ -22,34 +22,36 @@
 #define _PGMT_MASK		(1UL << PG_uncached | 1UL << PG_arch_1)
 #define _PGMT_CLEAR_MASK	(~_PGMT_MASK)
 
-static inline unsigned long get_page_memtype(struct page *pg)
+static inline enum page_cache_mode get_page_memtype(struct page *pg)
 {
 	unsigned long pg_flags = pg->flags & _PGMT_MASK;
 
 	if (pg_flags == _PGMT_DEFAULT)
 		return -1;
 	else if (pg_flags == _PGMT_WC)
-		return _PAGE_CACHE_WC;
+		return _PAGE_CACHE_MODE_WC;
 	else if (pg_flags == _PGMT_UC_MINUS)
-		return _PAGE_CACHE_UC_MINUS;
+		return _PAGE_CACHE_MODE_UC_MINUS;
 	else
-		return _PAGE_CACHE_WB;
+		return _PAGE_CACHE_MODE_WB;
 }
 
-static inline void set_page_memtype(struct page *pg, unsigned long memtype)
+static inline void set_page_memtype(struct page *pg,
+				    enum page_cache_mode memtype)
 {
 	unsigned long memtype_flags = _PGMT_DEFAULT;
 	unsigned long old_flags;
 	unsigned long new_flags;
 
 	switch (memtype) {
-	case _PAGE_CACHE_WC:
+	case _PAGE_CACHE_MODE_WC:
 		memtype_flags = _PGMT_WC;
 		break;
-	case _PAGE_CACHE_UC_MINUS:
+	case _PAGE_CACHE_MODE_UC_MINUS:
 		memtype_flags = _PGMT_UC_MINUS;
 		break;
-	case _PAGE_CACHE_WB:
+	case _PAGE_CACHE_MODE_WB:
+	default:
 		memtype_flags = _PGMT_WB;
 		break;
 	}
@@ -60,8 +62,14 @@ static inline void set_page_memtype(struct page *pg, unsigned long memtype)
 	} while (cmpxchg(&pg->flags, old_flags, new_flags) != old_flags);
 }
 #else
-static inline unsigned long get_page_memtype(struct page *pg) { return -1; }
-static inline void set_page_memtype(struct page *pg, unsigned long memtype) { }
+static inline enum page_cache_mode get_page_memtype(struct page *pg)
+{
+	return -1;
+}
+static inline void set_page_memtype(struct page *pg,
+				    enum page_cache_mode memtype)
+{
+}
 #endif
 
 /*
diff --git a/arch/x86/include/asm/fb.h b/arch/x86/include/asm/fb.h
index 2519d06..c3766d1 100644
--- a/arch/x86/include/asm/fb.h
+++ b/arch/x86/include/asm/fb.h
@@ -8,8 +8,12 @@
 static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
 				unsigned long off)
 {
+	unsigned long prot;
+
+	prot = pgprot_val(vma->vm_page_prot) & ~_PAGE_CACHE_MASK;
 	if (boot_cpu_data.x86 > 3)
-		pgprot_val(vma->vm_page_prot) |= _PAGE_PCD;
+		pgprot_val(vma->vm_page_prot) =
+			prot | protval_cachemode(_PAGE_CACHE_MODE_UC);
 }
 
 extern int fb_is_primary_device(struct fb_info *info);
diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index b8237d8..b7d9804 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -314,7 +314,7 @@ extern void *xlate_dev_mem_ptr(unsigned long phys);
 extern void unxlate_dev_mem_ptr(unsigned long phys, void *addr);
 
 extern int ioremap_change_attr(unsigned long vaddr, unsigned long size,
-				unsigned long prot_val);
+				enum page_cache_mode pct);
 extern void __iomem *ioremap_wc(resource_size_t offset, unsigned long size);
 
 extern bool is_early_ioremap_ptep(pte_t *ptep);
diff --git a/arch/x86/include/asm/pat.h b/arch/x86/include/asm/pat.h
index e2c1668..65b497b 100644
--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -13,14 +13,14 @@ static const int pat_enabled;
 extern void pat_init(void);
 
 extern int reserve_memtype(u64 start, u64 end,
-		unsigned long req_type, unsigned long *ret_type);
+		enum page_cache_mode req_pct, enum page_cache_mode *ret_pct);
 extern int free_memtype(u64 start, u64 end);
 
 extern int kernel_map_sync_memtype(u64 base, unsigned long size,
-		unsigned long flag);
+		enum page_cache_mode pct);
 
 int io_reserve_memtype(resource_size_t start, resource_size_t end,
-			unsigned long *type);
+			enum page_cache_mode *pct);
 
 void io_free_memtype(resource_size_t start, resource_size_t end);
 
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 0ec0560..3d9b07e 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -9,9 +9,10 @@
 /*
  * Macro to mark a page protection value as UC-
  */
-#define pgprot_noncached(prot)					\
-	((boot_cpu_data.x86 > 3)				\
-	 ? (__pgprot(pgprot_val(prot) | _PAGE_CACHE_UC_MINUS))	\
+#define pgprot_noncached(prot)						\
+	((boot_cpu_data.x86 > 3)					\
+	 ? (__pgprot(pgprot_val(prot) |					\
+		     protval_cachemode(_PAGE_CACHE_MODE_UC_MINUS)))	\
 	 : (prot))
 
 #ifndef __ASSEMBLY__
@@ -399,8 +400,8 @@ static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
 #define canon_pgprot(p) __pgprot(massage_pgprot(p))
 
 static inline int is_new_memtype_allowed(u64 paddr, unsigned long size,
-					 unsigned long flags,
-					 unsigned long new_flags)
+					 enum page_cache_mode pct,
+					 enum page_cache_mode new_pct)
 {
 	/*
 	 * PAT type is always WB for untracked ranges, so no need to check.
@@ -414,10 +415,10 @@ static inline int is_new_memtype_allowed(u64 paddr, unsigned long size,
 	 * - request is uncached, return cannot be write-back
 	 * - request is write-combine, return cannot be write-back
 	 */
-	if ((flags == _PAGE_CACHE_UC_MINUS &&
-	     new_flags == _PAGE_CACHE_WB) ||
-	    (flags == _PAGE_CACHE_WC &&
-	     new_flags == _PAGE_CACHE_WB)) {
+	if ((pct == _PAGE_CACHE_MODE_UC_MINUS &&
+	     new_pct == _PAGE_CACHE_MODE_WB) ||
+	    (pct == _PAGE_CACHE_MODE_WC &&
+	     new_pct == _PAGE_CACHE_MODE_WB)) {
 		return 0;
 	}
 
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index f216963..7685b34 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -129,11 +129,28 @@
 			 _PAGE_SOFT_DIRTY | _PAGE_NUMA)
 #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE | _PAGE_NUMA)
 
-#define _PAGE_CACHE_MASK	(_PAGE_PCD | _PAGE_PWT)
-#define _PAGE_CACHE_WB		(0)
-#define _PAGE_CACHE_WC		(_PAGE_PWT)
-#define _PAGE_CACHE_UC_MINUS	(_PAGE_PCD)
-#define _PAGE_CACHE_UC		(_PAGE_PCD | _PAGE_PWT)
+/*
+ * The cache modes defined here are used to translate between pure SW usage
+ * and the HW defined cache mode bits and/or PAT entries.
+ *
+ * The resulting bits for PWT, PCD and PAT should be chosen in a way
+ * to have the WB mode at index 0 (all bits clear). This is the default
+ * right now and likely would break too much if changed.
+ */
+#ifndef __ASSEMBLY__
+enum page_cache_mode {
+	_PAGE_CACHE_MODE_WB = 0,
+	_PAGE_CACHE_MODE_WC = 1,
+	_PAGE_CACHE_MODE_UC_MINUS = 2,
+	_PAGE_CACHE_MODE_UC = 3,
+	_PAGE_CACHE_MODE_WT = 4,
+	_PAGE_CACHE_MODE_WP = 5,
+	_PAGE_CACHE_MODE_NUM = 8
+};
+#endif
+
+#define _PAGE_CACHE_MASK	(_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)
+#define _PAGE_NOCACHE		(protval_cachemode(_PAGE_CACHE_MODE_UC))
 
 #define PAGE_NONE	__pgprot(_PAGE_PROTNONE | _PAGE_ACCESSED)
 #define PAGE_SHARED	__pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
@@ -157,41 +174,27 @@
 
 #define __PAGE_KERNEL_RO		(__PAGE_KERNEL & ~_PAGE_RW)
 #define __PAGE_KERNEL_RX		(__PAGE_KERNEL_EXEC & ~_PAGE_RW)
-#define __PAGE_KERNEL_EXEC_NOCACHE	(__PAGE_KERNEL_EXEC | _PAGE_PCD | _PAGE_PWT)
-#define __PAGE_KERNEL_WC		(__PAGE_KERNEL | _PAGE_CACHE_WC)
-#define __PAGE_KERNEL_NOCACHE		(__PAGE_KERNEL | _PAGE_PCD | _PAGE_PWT)
-#define __PAGE_KERNEL_UC_MINUS		(__PAGE_KERNEL | _PAGE_PCD)
+#define __PAGE_KERNEL_NOCACHE		(__PAGE_KERNEL | _PAGE_NOCACHE)
 #define __PAGE_KERNEL_VSYSCALL		(__PAGE_KERNEL_RX | _PAGE_USER)
 #define __PAGE_KERNEL_VVAR		(__PAGE_KERNEL_RO | _PAGE_USER)
-#define __PAGE_KERNEL_VVAR_NOCACHE	(__PAGE_KERNEL_VVAR | _PAGE_PCD | _PAGE_PWT)
 #define __PAGE_KERNEL_LARGE		(__PAGE_KERNEL | _PAGE_PSE)
-#define __PAGE_KERNEL_LARGE_NOCACHE	(__PAGE_KERNEL | _PAGE_CACHE_UC | _PAGE_PSE)
 #define __PAGE_KERNEL_LARGE_EXEC	(__PAGE_KERNEL_EXEC | _PAGE_PSE)
 
 #define __PAGE_KERNEL_IO		(__PAGE_KERNEL | _PAGE_IOMAP)
 #define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC | _PAGE_IOMAP)
 
 #define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
 #define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
 #define PAGE_KERNEL_EXEC		__pgprot(__PAGE_KERNEL_EXEC)
 #define PAGE_KERNEL_RX			__pgprot(__PAGE_KERNEL_RX)
-#define PAGE_KERNEL_WC			__pgprot(__PAGE_KERNEL_WC)
 #define PAGE_KERNEL_NOCACHE		__pgprot(__PAGE_KERNEL_NOCACHE)
-#define PAGE_KERNEL_UC_MINUS		__pgprot(__PAGE_KERNEL_UC_MINUS)
-#define PAGE_KERNEL_EXEC_NOCACHE	__pgprot(__PAGE_KERNEL_EXEC_NOCACHE)
 #define PAGE_KERNEL_LARGE		__pgprot(__PAGE_KERNEL_LARGE)
-#define PAGE_KERNEL_LARGE_NOCACHE	__pgprot(__PAGE_KERNEL_LARGE_NOCACHE)
 #define PAGE_KERNEL_LARGE_EXEC		__pgprot(__PAGE_KERNEL_LARGE_EXEC)
 #define PAGE_KERNEL_VSYSCALL		__pgprot(__PAGE_KERNEL_VSYSCALL)
 #define PAGE_KERNEL_VVAR		__pgprot(__PAGE_KERNEL_VVAR)
-#define PAGE_KERNEL_VVAR_NOCACHE	__pgprot(__PAGE_KERNEL_VVAR_NOCACHE)
 
 #define PAGE_KERNEL_IO			__pgprot(__PAGE_KERNEL_IO)
 #define PAGE_KERNEL_IO_NOCACHE		__pgprot(__PAGE_KERNEL_IO_NOCACHE)
-#define PAGE_KERNEL_IO_UC_MINUS		__pgprot(__PAGE_KERNEL_IO_UC_MINUS)
-#define PAGE_KERNEL_IO_WC		__pgprot(__PAGE_KERNEL_IO_WC)
 
 /*         xwr */
 #define __P000	PAGE_NONE
@@ -328,6 +331,55 @@ static inline pteval_t pte_flags(pte_t pte)
 #define pgprot_val(x)	((x).pgprot)
 #define __pgprot(x)	((pgprot_t) { (x) } )
 
+extern uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM];
+extern uint8_t __pte2cachemode_tbl[8];
+
+#define __pte2cm_idx(cb)				\
+	((((cb) >> (_PAGE_BIT_PAT - 2)) & 4) |		\
+	 (((cb) >> (_PAGE_BIT_PCD - 1)) & 2) |		\
+	 (((cb) >> _PAGE_BIT_PWT) & 1))
+
+static inline unsigned long protval_cachemode(enum page_cache_mode pct)
+{
+	if (likely(pct == 0))
+		return 0;
+	return __cachemode2pte_tbl[pct];
+}
+static inline pgprot_t pgprot_cachemode(enum page_cache_mode pct)
+{
+	return __pgprot(protval_cachemode(pct));
+}
+static inline enum page_cache_mode cachemode_pgprot(pgprot_t pgprot)
+{
+	unsigned long masked;
+
+	masked = pgprot_val(pgprot) & _PAGE_CACHE_MASK;
+	if (likely(masked == 0))
+		return 0;
+	return __pte2cachemode_tbl[__pte2cm_idx(masked)];
+}
+static inline pgprot_t pgprot_4k_2_large(pgprot_t pgprot)
+{
+	pgprot_t new;
+	unsigned long val;
+
+	val = pgprot_val(pgprot);
+	pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
+		((val & _PAGE_PAT) << (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
+	return new;
+}
+static inline pgprot_t pgprot_large_2_4k(pgprot_t pgprot)
+{
+	pgprot_t new;
+	unsigned long val;
+
+	val = pgprot_val(pgprot);
+	pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
+		((val & _PAGE_PAT_LARGE) >>
+		 (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
+	return new;
+}
+
 
 typedef struct page *pgtable_t;
 
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index 167ffca..52a336a 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -122,7 +122,7 @@ static void printk_prot(struct seq_file *m, pgprot_t prot, int level, bool dmsg)
 
 	if (!pgprot_val(prot)) {
 		/* Not present */
-		pt_dump_cont_printf(m, dmsg, "                          ");
+		pt_dump_cont_printf(m, dmsg, "                              ");
 	} else {
 		if (pr & _PAGE_USER)
 			pt_dump_cont_printf(m, dmsg, "USR ");
@@ -141,18 +141,16 @@ static void printk_prot(struct seq_file *m, pgprot_t prot, int level, bool dmsg)
 		else
 			pt_dump_cont_printf(m, dmsg, "    ");
 
-		/* Bit 9 has a different meaning on level 3 vs 4 */
-		if (level <= 3) {
-			if (pr & _PAGE_PSE)
-				pt_dump_cont_printf(m, dmsg, "PSE ");
-			else
-				pt_dump_cont_printf(m, dmsg, "    ");
-		} else {
-			if (pr & _PAGE_PAT)
-				pt_dump_cont_printf(m, dmsg, "pat ");
-			else
-				pt_dump_cont_printf(m, dmsg, "    ");
-		}
+		/* Bit 7 has a different meaning on level 3 vs 4 */
+		if (level <= 3 && pr & _PAGE_PSE)
+			pt_dump_cont_printf(m, dmsg, "PSE ");
+		else
+			pt_dump_cont_printf(m, dmsg, "    ");
+		if ((level == 4 && pr & _PAGE_PAT) ||
+		    ((level == 3 || level == 2) && pr & _PAGE_PAT_LARGE))
+			pt_dump_cont_printf(m, dmsg, "pat ");
+		else
+			pt_dump_cont_printf(m, dmsg, "    ");
 		if (pr & _PAGE_GLOBAL)
 			pt_dump_cont_printf(m, dmsg, "GLB ");
 		else
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 66dba36..0500124 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -27,6 +27,35 @@
 
 #include "mm_internal.h"
 
+/*
+ * Tables translating between enum page_cache_mode and pte encoding.
+ * Minimal supported modes are defined statically, modified if more supported
+ * cache modes are available.
+ * Index into __cachemode2pte_tbl is the cachemode.
+ * Index into __pte2cachemode_tbl are the caching attribute bits of the pte
+ * (_PAGE_PWT, _PAGE_PCD, _PAGE_PAT) at index bit positions 0, 1, 2.
+ */
+uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM] = {
+	[_PAGE_CACHE_MODE_WB]		= 0,
+	[_PAGE_CACHE_MODE_WC]		= _PAGE_PWT,
+	[_PAGE_CACHE_MODE_UC_MINUS]	= _PAGE_PCD,
+	[_PAGE_CACHE_MODE_UC]		= _PAGE_PCD | _PAGE_PWT,
+	[_PAGE_CACHE_MODE_WT]		= _PAGE_PWT,
+	[_PAGE_CACHE_MODE_WP]		= _PAGE_PWT,
+};
+EXPORT_SYMBOL_GPL(__cachemode2pte_tbl);
+uint8_t __pte2cachemode_tbl[8] = {
+	[__pte2cm_idx(0)] = _PAGE_CACHE_MODE_WB,
+	[__pte2cm_idx(_PAGE_PWT)] = _PAGE_CACHE_MODE_WC,
+	[__pte2cm_idx(_PAGE_PCD)] = _PAGE_CACHE_MODE_UC_MINUS,
+	[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD)] = _PAGE_CACHE_MODE_UC,
+	[__pte2cm_idx(_PAGE_PAT)] = _PAGE_CACHE_MODE_WB,
+	[__pte2cm_idx(_PAGE_PWT | _PAGE_PAT)] = _PAGE_CACHE_MODE_WC,
+	[__pte2cm_idx(_PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC_MINUS,
+	[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC,
+};
+EXPORT_SYMBOL_GPL(__pte2cachemode_tbl);
+
 static unsigned long __initdata pgt_buf_start;
 static unsigned long __initdata pgt_buf_end;
 static unsigned long __initdata pgt_buf_top;
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 5621c47..37ff0b5 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -327,12 +327,15 @@ pte_t * __init populate_extra_pte(unsigned long vaddr)
  * Create large page table mappings for a range of physical addresses.
  */
 static void __init __init_extra_mapping(unsigned long phys, unsigned long size,
-						pgprot_t prot)
+					enum page_cache_mode cache)
 {
 	pgd_t *pgd;
 	pud_t *pud;
 	pmd_t *pmd;
+	pgprot_t prot;
 
+	pgprot_val(prot) = pgprot_val(PAGE_KERNEL_LARGE) |
+		pgprot_val(pgprot_4k_2_large(pgprot_cachemode(cache)));
 	BUG_ON((phys & ~PMD_MASK) || (size & ~PMD_MASK));
 	for (; size; phys += PMD_SIZE, size -= PMD_SIZE) {
 		pgd = pgd_offset_k((unsigned long)__va(phys));
@@ -355,12 +358,12 @@ static void __init __init_extra_mapping(unsigned long phys, unsigned long size,
 
 void __init init_extra_mapping_wb(unsigned long phys, unsigned long size)
 {
-	__init_extra_mapping(phys, size, PAGE_KERNEL_LARGE);
+	__init_extra_mapping(phys, size, _PAGE_CACHE_MODE_WB);
 }
 
 void __init init_extra_mapping_uc(unsigned long phys, unsigned long size)
 {
-	__init_extra_mapping(phys, size, PAGE_KERNEL_LARGE_NOCACHE);
+	__init_extra_mapping(phys, size, _PAGE_CACHE_MODE_UC);
 }
 
 /*
diff --git a/arch/x86/mm/iomap_32.c b/arch/x86/mm/iomap_32.c
index 7b179b4..91a2d3b 100644
--- a/arch/x86/mm/iomap_32.c
+++ b/arch/x86/mm/iomap_32.c
@@ -33,17 +33,17 @@ static int is_io_mapping_possible(resource_size_t base, unsigned long size)
 
 int iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot)
 {
-	unsigned long flag = _PAGE_CACHE_WC;
+	enum page_cache_mode pct = _PAGE_CACHE_MODE_WC;
 	int ret;
 
 	if (!is_io_mapping_possible(base, size))
 		return -EINVAL;
 
-	ret = io_reserve_memtype(base, base + size, &flag);
+	ret = io_reserve_memtype(base, base + size, &pct);
 	if (ret)
 		return ret;
 
-	*prot = __pgprot(__PAGE_KERNEL | flag);
+	*prot = __pgprot(__PAGE_KERNEL | pgprot_val(pgprot_cachemode(pct)));
 	return 0;
 }
 EXPORT_SYMBOL_GPL(iomap_create_wc);
@@ -73,6 +73,9 @@ void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
 /*
  * Map 'pfn' using protections 'prot'
  */
+#define __PAGE_KERNEL_WC	(__PAGE_KERNEL | \
+				 protval_cachemode(_PAGE_CACHE_MODE_WC))
+
 void __iomem *
 iomap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
 {
@@ -82,12 +85,14 @@ iomap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
 	 * MTRR is UC or WC.  UC_MINUS gets the real intention, of the
 	 * user, which is "WC if the MTRR is WC, UC if you can't do that."
 	 */
-	if (!pat_enabled && pgprot_val(prot) == pgprot_val(PAGE_KERNEL_WC))
-		prot = PAGE_KERNEL_UC_MINUS;
+	if (!pat_enabled && pgprot_val(prot) == __PAGE_KERNEL_WC)
+		prot = __pgprot(__PAGE_KERNEL |
+				protval_cachemode(_PAGE_CACHE_MODE_UC_MINUS));
 
 	return (void __force __iomem *) kmap_atomic_prot_pfn(pfn, prot);
 }
 EXPORT_SYMBOL_GPL(iomap_atomic_prot_pfn);
+#undef __PAGE_KERNEL_WC
 
 void
 iounmap_atomic(void __iomem *kvaddr)
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index baff1da..8a86ee8 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -29,20 +29,20 @@
  * conflicts.
  */
 int ioremap_change_attr(unsigned long vaddr, unsigned long size,
-			       unsigned long prot_val)
+			enum page_cache_mode pct)
 {
 	unsigned long nrpages = size >> PAGE_SHIFT;
 	int err;
 
-	switch (prot_val) {
-	case _PAGE_CACHE_UC:
+	switch (pct) {
+	case _PAGE_CACHE_MODE_UC:
 	default:
 		err = _set_memory_uc(vaddr, nrpages);
 		break;
-	case _PAGE_CACHE_WC:
+	case _PAGE_CACHE_MODE_WC:
 		err = _set_memory_wc(vaddr, nrpages);
 		break;
-	case _PAGE_CACHE_WB:
+	case _PAGE_CACHE_MODE_WB:
 		err = _set_memory_wb(vaddr, nrpages);
 		break;
 	}
@@ -75,14 +75,14 @@ static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages,
  * caller shouldn't need to know that small detail.
  */
 static void __iomem *__ioremap_caller(resource_size_t phys_addr,
-		unsigned long size, unsigned long prot_val, void *caller)
+		unsigned long size, enum page_cache_mode pct, void *caller)
 {
 	unsigned long offset, vaddr;
 	resource_size_t pfn, last_pfn, last_addr;
 	const resource_size_t unaligned_phys_addr = phys_addr;
 	const unsigned long unaligned_size = size;
 	struct vm_struct *area;
-	unsigned long new_prot_val;
+	enum page_cache_mode new_pct;
 	pgprot_t prot;
 	int retval;
 	void __iomem *ret_addr;
@@ -122,38 +122,40 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 	size = PAGE_ALIGN(last_addr+1) - phys_addr;
 
 	retval = reserve_memtype(phys_addr, (u64)phys_addr + size,
-						prot_val, &new_prot_val);
+						pct, &new_pct);
 	if (retval) {
 		printk(KERN_ERR "ioremap reserve_memtype failed %d\n", retval);
 		return NULL;
 	}
 
-	if (prot_val != new_prot_val) {
-		if (!is_new_memtype_allowed(phys_addr, size,
-					    prot_val, new_prot_val)) {
+	if (pct != new_pct) {
+		if (!is_new_memtype_allowed(phys_addr, size, pct, new_pct)) {
 			printk(KERN_ERR
-		"ioremap error for 0x%llx-0x%llx, requested 0x%lx, got 0x%lx\n",
+		"ioremap error for 0x%llx-0x%llx, requested 0x%x, got 0x%x\n",
 				(unsigned long long)phys_addr,
 				(unsigned long long)(phys_addr + size),
-				prot_val, new_prot_val);
+				pct, new_pct);
 			goto err_free_memtype;
 		}
-		prot_val = new_prot_val;
+		pct = new_pct;
 	}
 
-	switch (prot_val) {
-	case _PAGE_CACHE_UC:
+	prot = PAGE_KERNEL_IO;
+	switch (pct) {
+	case _PAGE_CACHE_MODE_UC:
 	default:
-		prot = PAGE_KERNEL_IO_NOCACHE;
+		prot = __pgprot(pgprot_val(prot) |
+				protval_cachemode(_PAGE_CACHE_MODE_UC));
 		break;
-	case _PAGE_CACHE_UC_MINUS:
-		prot = PAGE_KERNEL_IO_UC_MINUS;
+	case _PAGE_CACHE_MODE_UC_MINUS:
+		prot = __pgprot(pgprot_val(prot) |
+				protval_cachemode(_PAGE_CACHE_MODE_UC_MINUS));
 		break;
-	case _PAGE_CACHE_WC:
-		prot = PAGE_KERNEL_IO_WC;
+	case _PAGE_CACHE_MODE_WC:
+		prot = __pgprot(pgprot_val(prot) |
+				protval_cachemode(_PAGE_CACHE_MODE_WC));
 		break;
-	case _PAGE_CACHE_WB:
-		prot = PAGE_KERNEL_IO;
+	case _PAGE_CACHE_MODE_WB:
 		break;
 	}
 
@@ -166,7 +168,7 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 	area->phys_addr = phys_addr;
 	vaddr = (unsigned long) area->addr;
 
-	if (kernel_map_sync_memtype(phys_addr, size, prot_val))
+	if (kernel_map_sync_memtype(phys_addr, size, pct))
 		goto err_free_area;
 
 	if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot))
@@ -215,14 +217,14 @@ void __iomem *ioremap_nocache(resource_size_t phys_addr, unsigned long size)
 {
 	/*
 	 * Ideally, this should be:
-	 *	pat_enabled ? _PAGE_CACHE_UC : _PAGE_CACHE_UC_MINUS;
+	 *	pat_enabled ? _PAGE_CACHE_MODE_UC : _PAGE_CACHE_MODE_UC_MINUS;
 	 *
 	 * Till we fix all X drivers to use ioremap_wc(), we will use
 	 * UC MINUS.
 	 */
-	unsigned long val = _PAGE_CACHE_UC_MINUS;
+	enum page_cache_mode pct = _PAGE_CACHE_MODE_UC_MINUS;
 
-	return __ioremap_caller(phys_addr, size, val,
+	return __ioremap_caller(phys_addr, size, pct,
 				__builtin_return_address(0));
 }
 EXPORT_SYMBOL(ioremap_nocache);
@@ -240,7 +242,7 @@ EXPORT_SYMBOL(ioremap_nocache);
 void __iomem *ioremap_wc(resource_size_t phys_addr, unsigned long size)
 {
 	if (pat_enabled)
-		return __ioremap_caller(phys_addr, size, _PAGE_CACHE_WC,
+		return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WC,
 					__builtin_return_address(0));
 	else
 		return ioremap_nocache(phys_addr, size);
@@ -249,7 +251,7 @@ EXPORT_SYMBOL(ioremap_wc);
 
 void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size)
 {
-	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_WB,
+	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB,
 				__builtin_return_address(0));
 }
 EXPORT_SYMBOL(ioremap_cache);
@@ -257,7 +259,8 @@ EXPORT_SYMBOL(ioremap_cache);
 void __iomem *ioremap_prot(resource_size_t phys_addr, unsigned long size,
 				unsigned long prot_val)
 {
-	return __ioremap_caller(phys_addr, size, (prot_val & _PAGE_CACHE_MASK),
+	return __ioremap_caller(phys_addr, size,
+				cachemode_pgprot(__pgprot(prot_val)),
 				__builtin_return_address(0));
 }
 EXPORT_SYMBOL(ioremap_prot);
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index ae242a7..0857b68 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -485,14 +485,23 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 
 	/*
 	 * We are safe now. Check whether the new pgprot is the same:
+	 * Convert protection attributes to 4k-format, as cpa->mask* are set
+	 * up accordingly.
 	 */
 	old_pte = *kpte;
-	old_prot = req_prot = pte_pgprot(old_pte);
+	old_prot = req_prot = pgprot_large_2_4k(pte_pgprot(old_pte));
 
 	pgprot_val(req_prot) &= ~pgprot_val(cpa->mask_clr);
 	pgprot_val(req_prot) |= pgprot_val(cpa->mask_set);
 
 	/*
+	 * req_prot is in format of 4k pages. It must be converted to large
+	 * page format: the caching mode includes the PAT bit located at
+	 * different bit positions in the two formats.
+	 */
+	req_prot = pgprot_4k_2_large(req_prot);
+
+	/*
 	 * Set the PSE and GLOBAL flags only if the PRESENT flag is
 	 * set otherwise pmd_present/pmd_huge will return true even on
 	 * a non present pmd. The canon_pgprot will clear _PAGE_GLOBAL
@@ -585,13 +594,10 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 
 	paravirt_alloc_pte(&init_mm, page_to_pfn(base));
 	ref_prot = pte_pgprot(pte_clrhuge(*kpte));
-	/*
-	 * If we ever want to utilize the PAT bit, we need to
-	 * update this function to make sure it's converted from
-	 * bit 12 to bit 7 when we cross from the 2MB level to
-	 * the 4K level:
-	 */
-	WARN_ON_ONCE(pgprot_val(ref_prot) & _PAGE_PAT_LARGE);
+
+	/* promote PAT bit to correct position */
+	if (level == PG_LEVEL_2M)
+		ref_prot = pgprot_large_2_4k(ref_prot);
 
 #ifdef CONFIG_X86_64
 	if (level == PG_LEVEL_1G) {
@@ -879,6 +885,7 @@ static int populate_pmd(struct cpa_data *cpa,
 {
 	unsigned int cur_pages = 0;
 	pmd_t *pmd;
+	pgprot_t pmd_pgprot;
 
 	/*
 	 * Not on a 2M boundary?
@@ -910,6 +917,8 @@ static int populate_pmd(struct cpa_data *cpa,
 	if (num_pages == cur_pages)
 		return cur_pages;
 
+	pmd_pgprot = pgprot_4k_2_large(pgprot);
+
 	while (end - start >= PMD_SIZE) {
 
 		/*
@@ -921,7 +930,8 @@ static int populate_pmd(struct cpa_data *cpa,
 
 		pmd = pmd_offset(pud, start);
 
-		set_pmd(pmd, __pmd(cpa->pfn | _PAGE_PSE | massage_pgprot(pgprot)));
+		set_pmd(pmd, __pmd(cpa->pfn | _PAGE_PSE |
+				   massage_pgprot(pmd_pgprot)));
 
 		start	  += PMD_SIZE;
 		cpa->pfn  += PMD_SIZE;
@@ -949,6 +959,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, pgd_t *pgd,
 	pud_t *pud;
 	unsigned long end;
 	int cur_pages = 0;
+	pgprot_t pud_pgprot;
 
 	end = start + (cpa->numpages << PAGE_SHIFT);
 
@@ -986,12 +997,14 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, pgd_t *pgd,
 		return cur_pages;
 
 	pud = pud_offset(pgd, start);
+	pud_pgprot = pgprot_4k_2_large(pgprot);
 
 	/*
 	 * Map everything starting from the Gb boundary, possibly with 1G pages
 	 */
 	while (end - start >= PUD_SIZE) {
-		set_pud(pud, __pud(cpa->pfn | _PAGE_PSE | massage_pgprot(pgprot)));
+		set_pud(pud, __pud(cpa->pfn | _PAGE_PSE |
+				   massage_pgprot(pud_pgprot)));
 
 		start	  += PUD_SIZE;
 		cpa->pfn  += PUD_SIZE;
@@ -1304,12 +1317,6 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
 	return 0;
 }
 
-static inline int cache_attr(pgprot_t attr)
-{
-	return pgprot_val(attr) &
-		(_PAGE_PAT | _PAGE_PAT_LARGE | _PAGE_PWT | _PAGE_PCD);
-}
-
 static int change_page_attr_set_clr(unsigned long *addr, int numpages,
 				    pgprot_t mask_set, pgprot_t mask_clr,
 				    int force_split, int in_flag,
@@ -1390,7 +1397,7 @@ static int change_page_attr_set_clr(unsigned long *addr, int numpages,
 	 * No need to flush, when we did not set any of the caching
 	 * attributes:
 	 */
-	cache = cache_attr(mask_set);
+	cache = !!cachemode_pgprot(mask_set);
 
 	/*
 	 * On success we use CLFLUSH, when the CPU supports it to
@@ -1445,7 +1452,8 @@ int _set_memory_uc(unsigned long addr, int numpages)
 	 * for now UC MINUS. see comments in ioremap_nocache()
 	 */
 	return change_page_attr_set(&addr, numpages,
-				    __pgprot(_PAGE_CACHE_UC_MINUS), 0);
+				    pgprot_cachemode(_PAGE_CACHE_MODE_UC_MINUS),
+				    0);
 }
 
 int set_memory_uc(unsigned long addr, int numpages)
@@ -1456,7 +1464,7 @@ int set_memory_uc(unsigned long addr, int numpages)
 	 * for now UC MINUS. see comments in ioremap_nocache()
 	 */
 	ret = reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
-			    _PAGE_CACHE_UC_MINUS, NULL);
+			      _PAGE_CACHE_MODE_UC_MINUS, NULL);
 	if (ret)
 		goto out_err;
 
@@ -1474,7 +1482,7 @@ out_err:
 EXPORT_SYMBOL(set_memory_uc);
 
 static int _set_memory_array(unsigned long *addr, int addrinarray,
-		unsigned long new_type)
+		enum page_cache_mode new_type)
 {
 	int i, j;
 	int ret;
@@ -1490,11 +1498,13 @@ static int _set_memory_array(unsigned long *addr, int addrinarray,
 	}
 
 	ret = change_page_attr_set(addr, addrinarray,
-				    __pgprot(_PAGE_CACHE_UC_MINUS), 1);
+				   pgprot_cachemode(_PAGE_CACHE_MODE_UC_MINUS),
+				   1);
 
-	if (!ret && new_type == _PAGE_CACHE_WC)
+	if (!ret && new_type == _PAGE_CACHE_MODE_WC)
 		ret = change_page_attr_set_clr(addr, addrinarray,
-					       __pgprot(_PAGE_CACHE_WC),
+					       pgprot_cachemode(
+						   _PAGE_CACHE_MODE_WC),
 					       __pgprot(_PAGE_CACHE_MASK),
 					       0, CPA_ARRAY, NULL);
 	if (ret)
@@ -1511,13 +1521,13 @@ out_free:
 
 int set_memory_array_uc(unsigned long *addr, int addrinarray)
 {
-	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_UC_MINUS);
+	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_MODE_UC_MINUS);
 }
 EXPORT_SYMBOL(set_memory_array_uc);
 
 int set_memory_array_wc(unsigned long *addr, int addrinarray)
 {
-	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_WC);
+	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_MODE_WC);
 }
 EXPORT_SYMBOL(set_memory_array_wc);
 
@@ -1527,10 +1537,12 @@ int _set_memory_wc(unsigned long addr, int numpages)
 	unsigned long addr_copy = addr;
 
 	ret = change_page_attr_set(&addr, numpages,
-				    __pgprot(_PAGE_CACHE_UC_MINUS), 0);
+				   pgprot_cachemode(_PAGE_CACHE_MODE_UC_MINUS),
+				   0);
 	if (!ret) {
 		ret = change_page_attr_set_clr(&addr_copy, numpages,
-					       __pgprot(_PAGE_CACHE_WC),
+					       pgprot_cachemode(
+						   _PAGE_CACHE_MODE_WC),
 					       __pgprot(_PAGE_CACHE_MASK),
 					       0, 0, NULL);
 	}
@@ -1545,7 +1557,7 @@ int set_memory_wc(unsigned long addr, int numpages)
 		return set_memory_uc(addr, numpages);
 
 	ret = reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
-		_PAGE_CACHE_WC, NULL);
+		_PAGE_CACHE_MODE_WC, NULL);
 	if (ret)
 		goto out_err;
 
@@ -1564,6 +1576,7 @@ EXPORT_SYMBOL(set_memory_wc);
 
 int _set_memory_wb(unsigned long addr, int numpages)
 {
+	/* WB cache mode is hard wired to all cache attribute bits being 0 */
 	return change_page_attr_clear(&addr, numpages,
 				      __pgprot(_PAGE_CACHE_MASK), 0);
 }
@@ -1586,6 +1599,7 @@ int set_memory_array_wb(unsigned long *addr, int addrinarray)
 	int i;
 	int ret;
 
+	/* WB cache mode is hard wired to all cache attribute bits being 0 */
 	ret = change_page_attr_clear(addr, addrinarray,
 				      __pgprot(_PAGE_CACHE_MASK), 1);
 	if (ret)
@@ -1648,7 +1662,7 @@ int set_pages_uc(struct page *page, int numpages)
 EXPORT_SYMBOL(set_pages_uc);
 
 static int _set_pages_array(struct page **pages, int addrinarray,
-		unsigned long new_type)
+		enum page_cache_mode new_type)
 {
 	unsigned long start;
 	unsigned long end;
@@ -1666,10 +1680,11 @@ static int _set_pages_array(struct page **pages, int addrinarray,
 	}
 
 	ret = cpa_set_pages_array(pages, addrinarray,
-			__pgprot(_PAGE_CACHE_UC_MINUS));
-	if (!ret && new_type == _PAGE_CACHE_WC)
+			pgprot_cachemode(_PAGE_CACHE_MODE_UC_MINUS));
+	if (!ret && new_type == _PAGE_CACHE_MODE_WC)
 		ret = change_page_attr_set_clr(NULL, addrinarray,
-					       __pgprot(_PAGE_CACHE_WC),
+					       pgprot_cachemode(
+						   _PAGE_CACHE_MODE_WC),
 					       __pgprot(_PAGE_CACHE_MASK),
 					       0, CPA_PAGES_ARRAY, pages);
 	if (ret)
@@ -1689,13 +1704,13 @@ err_out:
 
 int set_pages_array_uc(struct page **pages, int addrinarray)
 {
-	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_UC_MINUS);
+	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_MODE_UC_MINUS);
 }
 EXPORT_SYMBOL(set_pages_array_uc);
 
 int set_pages_array_wc(struct page **pages, int addrinarray)
 {
-	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_WC);
+	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_MODE_WC);
 }
 EXPORT_SYMBOL(set_pages_array_wc);
 
@@ -1714,6 +1729,7 @@ int set_pages_array_wb(struct page **pages, int addrinarray)
 	unsigned long end;
 	int i;
 
+	/* WB cache mode is hard wired to all cache attribute bits being 0 */
 	retval = cpa_clear_pages_array(pages, addrinarray,
 			__pgprot(_PAGE_CACHE_MASK));
 	if (retval)
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 6574388..0ba0d79 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -139,20 +139,21 @@ static DEFINE_SPINLOCK(memtype_lock);	/* protects memtype accesses */
  * The intersection is based on "Effective Memory Type" tables in IA-32
  * SDM vol 3a
  */
-static unsigned long pat_x_mtrr_type(u64 start, u64 end, unsigned long req_type)
+static unsigned long pat_x_mtrr_type(u64 start, u64 end,
+				     enum page_cache_mode req_type)
 {
 	/*
 	 * Look for MTRR hint to get the effective type in case where PAT
 	 * request is for WB.
 	 */
-	if (req_type == _PAGE_CACHE_WB) {
+	if (req_type == _PAGE_CACHE_MODE_WB) {
 		u8 mtrr_type;
 
 		mtrr_type = mtrr_type_lookup(start, end);
 		if (mtrr_type != MTRR_TYPE_WRBACK)
-			return _PAGE_CACHE_UC_MINUS;
+			return _PAGE_CACHE_MODE_UC_MINUS;
 
-		return _PAGE_CACHE_WB;
+		return _PAGE_CACHE_MODE_WB;
 	}
 
 	return req_type;
@@ -207,25 +208,26 @@ static int pat_pagerange_is_ram(resource_size_t start, resource_size_t end)
  * - Find the memtype of all the pages in the range, look for any conflicts
  * - In case of no conflicts, set the new memtype for pages in the range
  */
-static int reserve_ram_pages_type(u64 start, u64 end, unsigned long req_type,
-				  unsigned long *new_type)
+static int reserve_ram_pages_type(u64 start, u64 end,
+				  enum page_cache_mode req_type,
+				  enum page_cache_mode *new_type)
 {
 	struct page *page;
 	u64 pfn;
 
-	if (req_type == _PAGE_CACHE_UC) {
+	if (req_type == _PAGE_CACHE_MODE_UC) {
 		/* We do not support strong UC */
 		WARN_ON_ONCE(1);
-		req_type = _PAGE_CACHE_UC_MINUS;
+		req_type = _PAGE_CACHE_MODE_UC_MINUS;
 	}
 
 	for (pfn = (start >> PAGE_SHIFT); pfn < (end >> PAGE_SHIFT); ++pfn) {
-		unsigned long type;
+		enum page_cache_mode type;
 
 		page = pfn_to_page(pfn);
 		type = get_page_memtype(page);
 		if (type != -1) {
-			printk(KERN_INFO "reserve_ram_pages_type failed [mem %#010Lx-%#010Lx], track 0x%lx, req 0x%lx\n",
+			pr_info("reserve_ram_pages_type failed [mem %#010Lx-%#010Lx], track 0x%x, req 0x%x\n",
 				start, end - 1, type, req_type);
 			if (new_type)
 				*new_type = type;
@@ -258,21 +260,21 @@ static int free_ram_pages_type(u64 start, u64 end)
 
 /*
  * req_type typically has one of the:
- * - _PAGE_CACHE_WB
- * - _PAGE_CACHE_WC
- * - _PAGE_CACHE_UC_MINUS
- * - _PAGE_CACHE_UC
+ * - _PAGE_CACHE_MODE_WB
+ * - _PAGE_CACHE_MODE_WC
+ * - _PAGE_CACHE_MODE_UC_MINUS
+ * - _PAGE_CACHE_MODE_UC
  *
  * If new_type is NULL, function will return an error if it cannot reserve the
  * region with req_type. If new_type is non-NULL, function will return
  * available type in new_type in case of no error. In case of any error
  * it will return a negative return value.
  */
-int reserve_memtype(u64 start, u64 end, unsigned long req_type,
-		    unsigned long *new_type)
+int reserve_memtype(u64 start, u64 end, enum page_cache_mode req_type,
+		    enum page_cache_mode *new_type)
 {
 	struct memtype *new;
-	unsigned long actual_type;
+	enum page_cache_mode actual_type;
 	int is_range_ram;
 	int err = 0;
 
@@ -281,10 +283,10 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 	if (!pat_enabled) {
 		/* This is identical to page table setting without PAT */
 		if (new_type) {
-			if (req_type == _PAGE_CACHE_WC)
-				*new_type = _PAGE_CACHE_UC_MINUS;
+			if (req_type == _PAGE_CACHE_MODE_WC)
+				*new_type = _PAGE_CACHE_MODE_UC_MINUS;
 			else
-				*new_type = req_type & _PAGE_CACHE_MASK;
+				*new_type = req_type;
 		}
 		return 0;
 	}
@@ -292,7 +294,7 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 	/* Low ISA region is always mapped WB in page table. No need to track */
 	if (x86_platform.is_untracked_pat_range(start, end)) {
 		if (new_type)
-			*new_type = _PAGE_CACHE_WB;
+			*new_type = _PAGE_CACHE_MODE_WB;
 		return 0;
 	}
 
@@ -302,7 +304,7 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 	 * tools and ACPI tools). Use WB request for WB memory and use
 	 * UC_MINUS otherwise.
 	 */
-	actual_type = pat_x_mtrr_type(start, end, req_type & _PAGE_CACHE_MASK);
+	actual_type = pat_x_mtrr_type(start, end, req_type);
 
 	if (new_type)
 		*new_type = actual_type;
@@ -394,12 +396,12 @@ int free_memtype(u64 start, u64 end)
  *
  * Only to be called when PAT is enabled
  *
- * Returns _PAGE_CACHE_WB, _PAGE_CACHE_WC, _PAGE_CACHE_UC_MINUS or
- * _PAGE_CACHE_UC
+ * Returns _PAGE_CACHE_MODE_WB, _PAGE_CACHE_MODE_WC, _PAGE_CACHE_MODE_UC_MINUS
+ * or _PAGE_CACHE_MODE_UC
  */
-static unsigned long lookup_memtype(u64 paddr)
+static enum page_cache_mode lookup_memtype(u64 paddr)
 {
-	int rettype = _PAGE_CACHE_WB;
+	enum page_cache_mode rettype = _PAGE_CACHE_MODE_WB;
 	struct memtype *entry;
 
 	if (x86_platform.is_untracked_pat_range(paddr, paddr + PAGE_SIZE))
@@ -414,7 +416,7 @@ static unsigned long lookup_memtype(u64 paddr)
 		 * default state and not reserved, and hence of type WB
 		 */
 		if (rettype == -1)
-			rettype = _PAGE_CACHE_WB;
+			rettype = _PAGE_CACHE_MODE_WB;
 
 		return rettype;
 	}
@@ -425,7 +427,7 @@ static unsigned long lookup_memtype(u64 paddr)
 	if (entry != NULL)
 		rettype = entry->type;
 	else
-		rettype = _PAGE_CACHE_UC_MINUS;
+		rettype = _PAGE_CACHE_MODE_UC_MINUS;
 
 	spin_unlock(&memtype_lock);
 	return rettype;
@@ -442,11 +444,11 @@ static unsigned long lookup_memtype(u64 paddr)
  * On failure, returns non-zero
  */
 int io_reserve_memtype(resource_size_t start, resource_size_t end,
-			unsigned long *type)
+			enum page_cache_mode *type)
 {
 	resource_size_t size = end - start;
-	unsigned long req_type = *type;
-	unsigned long new_type;
+	enum page_cache_mode req_type = *type;
+	enum page_cache_mode new_type;
 	int ret;
 
 	WARN_ON_ONCE(iomem_map_sanity_check(start, size));
@@ -520,13 +522,13 @@ static inline int range_is_allowed(unsigned long pfn, unsigned long size)
 int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
 				unsigned long size, pgprot_t *vma_prot)
 {
-	unsigned long flags = _PAGE_CACHE_WB;
+	enum page_cache_mode pct = _PAGE_CACHE_MODE_WB;
 
 	if (!range_is_allowed(pfn, size))
 		return 0;
 
 	if (file->f_flags & O_DSYNC)
-		flags = _PAGE_CACHE_UC_MINUS;
+		pct = _PAGE_CACHE_MODE_UC_MINUS;
 
 #ifdef CONFIG_X86_32
 	/*
@@ -543,12 +545,12 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
 	      boot_cpu_has(X86_FEATURE_CYRIX_ARR) ||
 	      boot_cpu_has(X86_FEATURE_CENTAUR_MCR)) &&
 	    (pfn << PAGE_SHIFT) >= __pa(high_memory)) {
-		flags = _PAGE_CACHE_UC;
+		pct = _PAGE_CACHE_MODE_UC;
 	}
 #endif
 
 	*vma_prot = __pgprot((pgprot_val(*vma_prot) & ~_PAGE_CACHE_MASK) |
-			     flags);
+			     protval_cachemode(pct));
 	return 1;
 }
 
@@ -556,7 +558,8 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
  * Change the memory type for the physial address range in kernel identity
  * mapping space if that range is a part of identity map.
  */
-int kernel_map_sync_memtype(u64 base, unsigned long size, unsigned long flags)
+int kernel_map_sync_memtype(u64 base, unsigned long size,
+			    enum page_cache_mode pct)
 {
 	unsigned long id_sz;
 
@@ -574,11 +577,11 @@ int kernel_map_sync_memtype(u64 base, unsigned long size, unsigned long flags)
 				__pa(high_memory) - base :
 				size;
 
-	if (ioremap_change_attr((unsigned long)__va(base), id_sz, flags) < 0) {
+	if (ioremap_change_attr((unsigned long)__va(base), id_sz, pct) < 0) {
 		printk(KERN_INFO "%s:%d ioremap_change_attr failed %s "
 			"for [mem %#010Lx-%#010Lx]\n",
 			current->comm, current->pid,
-			cattr_name(flags),
+			cattr_name(pct),
 			base, (unsigned long long)(base + size-1));
 		return -EINVAL;
 	}
@@ -595,8 +598,8 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 {
 	int is_ram = 0;
 	int ret;
-	unsigned long want_flags = (pgprot_val(*vma_prot) & _PAGE_CACHE_MASK);
-	unsigned long flags = want_flags;
+	enum page_cache_mode want_pct = cachemode_pgprot(*vma_prot);
+	enum page_cache_mode pct = want_pct;
 
 	is_ram = pat_pagerange_is_ram(paddr, paddr + size);
 
@@ -609,36 +612,36 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 		if (!pat_enabled)
 			return 0;
 
-		flags = lookup_memtype(paddr);
-		if (want_flags != flags) {
+		pct = lookup_memtype(paddr);
+		if (want_pct != pct) {
 			printk(KERN_WARNING "%s:%d map pfn RAM range req %s for [mem %#010Lx-%#010Lx], got %s\n",
 				current->comm, current->pid,
-				cattr_name(want_flags),
+				cattr_name(want_pct),
 				(unsigned long long)paddr,
 				(unsigned long long)(paddr + size - 1),
-				cattr_name(flags));
+				cattr_name(pct));
 			*vma_prot = __pgprot((pgprot_val(*vma_prot) &
-					      (~_PAGE_CACHE_MASK)) |
-					     flags);
+					     (~_PAGE_CACHE_MASK)) |
+					     pgprot_val(pgprot_cachemode(pct)));
 		}
 		return 0;
 	}
 
-	ret = reserve_memtype(paddr, paddr + size, want_flags, &flags);
+	ret = reserve_memtype(paddr, paddr + size, want_pct, &pct);
 	if (ret)
 		return ret;
 
-	if (flags != want_flags) {
+	if (pct != want_pct) {
 		if (strict_prot ||
-		    !is_new_memtype_allowed(paddr, size, want_flags, flags)) {
+		    !is_new_memtype_allowed(paddr, size, want_pct, pct)) {
 			free_memtype(paddr, paddr + size);
 			printk(KERN_ERR "%s:%d map pfn expected mapping type %s"
 				" for [mem %#010Lx-%#010Lx], got %s\n",
 				current->comm, current->pid,
-				cattr_name(want_flags),
+				cattr_name(want_pct),
 				(unsigned long long)paddr,
 				(unsigned long long)(paddr + size - 1),
-				cattr_name(flags));
+				cattr_name(pct));
 			return -EINVAL;
 		}
 		/*
@@ -647,10 +650,10 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 		 */
 		*vma_prot = __pgprot((pgprot_val(*vma_prot) &
 				      (~_PAGE_CACHE_MASK)) |
-				     flags);
+				     pgprot_val(pgprot_cachemode(pct)));
 	}
 
-	if (kernel_map_sync_memtype(paddr, size, flags) < 0) {
+	if (kernel_map_sync_memtype(paddr, size, pct) < 0) {
 		free_memtype(paddr, paddr + size);
 		return -EINVAL;
 	}
@@ -709,7 +712,7 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 		    unsigned long pfn, unsigned long addr, unsigned long size)
 {
 	resource_size_t paddr = (resource_size_t)pfn << PAGE_SHIFT;
-	unsigned long flags;
+	enum page_cache_mode pct;
 
 	/* reserve the whole chunk starting from paddr */
 	if (addr == vma->vm_start && size == (vma->vm_end - vma->vm_start)) {
@@ -728,18 +731,18 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 	 * For anything smaller than the vma size we set prot based on the
 	 * lookup.
 	 */
-	flags = lookup_memtype(paddr);
+	pct = lookup_memtype(paddr);
 
 	/* Check memtype for the remaining pages */
 	while (size > PAGE_SIZE) {
 		size -= PAGE_SIZE;
 		paddr += PAGE_SIZE;
-		if (flags != lookup_memtype(paddr))
+		if (pct != lookup_memtype(paddr))
 			return -EINVAL;
 	}
 
 	*prot = __pgprot((pgprot_val(vma->vm_page_prot) & (~_PAGE_CACHE_MASK)) |
-			 flags);
+			 pgprot_val(pgprot_cachemode(pct)));
 
 	return 0;
 }
@@ -747,15 +750,15 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 int track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
 		     unsigned long pfn)
 {
-	unsigned long flags;
+	enum page_cache_mode pct;
 
 	if (!pat_enabled)
 		return 0;
 
 	/* Set prot based on lookup */
-	flags = lookup_memtype((resource_size_t)pfn << PAGE_SHIFT);
+	pct = lookup_memtype((resource_size_t)pfn << PAGE_SHIFT);
 	*prot = __pgprot((pgprot_val(vma->vm_page_prot) & (~_PAGE_CACHE_MASK)) |
-			 flags);
+			 pgprot_val(pgprot_cachemode(pct)));
 
 	return 0;
 }
@@ -791,7 +794,8 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
 pgprot_t pgprot_writecombine(pgprot_t prot)
 {
 	if (pat_enabled)
-		return __pgprot(pgprot_val(prot) | _PAGE_CACHE_WC);
+		return __pgprot(pgprot_val(prot) |
+				protval_cachemode(_PAGE_CACHE_MODE_WC));
 	else
 		return pgprot_noncached(prot);
 }
diff --git a/arch/x86/mm/pat_internal.h b/arch/x86/mm/pat_internal.h
index 77e5ba1..b7590d4 100644
--- a/arch/x86/mm/pat_internal.h
+++ b/arch/x86/mm/pat_internal.h
@@ -10,30 +10,30 @@ struct memtype {
 	u64			start;
 	u64			end;
 	u64			subtree_max_end;
-	unsigned long		type;
+	enum page_cache_mode	type;
 	struct rb_node		rb;
 };
 
-static inline char *cattr_name(unsigned long flags)
+static inline char *cattr_name(enum page_cache_mode pct)
 {
-	switch (flags & _PAGE_CACHE_MASK) {
-	case _PAGE_CACHE_UC:		return "uncached";
-	case _PAGE_CACHE_UC_MINUS:	return "uncached-minus";
-	case _PAGE_CACHE_WB:		return "write-back";
-	case _PAGE_CACHE_WC:		return "write-combining";
-	default:			return "broken";
+	switch (pct) {
+	case _PAGE_CACHE_MODE_UC:		return "uncached";
+	case _PAGE_CACHE_MODE_UC_MINUS:		return "uncached-minus";
+	case _PAGE_CACHE_MODE_WB:		return "write-back";
+	case _PAGE_CACHE_MODE_WC:		return "write-combining";
+	default:				return "broken";
 	}
 }
 
 #ifdef CONFIG_X86_PAT
 extern int rbt_memtype_check_insert(struct memtype *new,
-					unsigned long *new_type);
+					enum page_cache_mode *new_type);
 extern struct memtype *rbt_memtype_erase(u64 start, u64 end);
 extern struct memtype *rbt_memtype_lookup(u64 addr);
 extern int rbt_memtype_copy_nth_element(struct memtype *out, loff_t pos);
 #else
 static inline int rbt_memtype_check_insert(struct memtype *new,
-					unsigned long *new_type)
+					enum page_cache_mode *new_type)
 { return 0; }
 static inline struct memtype *rbt_memtype_erase(u64 start, u64 end)
 { return NULL; }
diff --git a/arch/x86/mm/pat_rbtree.c b/arch/x86/mm/pat_rbtree.c
index 415f6c4..6582adc 100644
--- a/arch/x86/mm/pat_rbtree.c
+++ b/arch/x86/mm/pat_rbtree.c
@@ -122,11 +122,12 @@ static struct memtype *memtype_rb_exact_match(struct rb_root *root,
 
 static int memtype_rb_check_conflict(struct rb_root *root,
 				u64 start, u64 end,
-				unsigned long reqtype, unsigned long *newtype)
+				enum page_cache_mode reqtype,
+				enum page_cache_mode *newtype)
 {
 	struct rb_node *node;
 	struct memtype *match;
-	int found_type = reqtype;
+	enum page_cache_mode found_type = reqtype;
 
 	match = memtype_rb_lowest_match(&memtype_rbroot, start, end);
 	if (match == NULL)
@@ -187,7 +188,8 @@ static void memtype_rb_insert(struct rb_root *root, struct memtype *newdata)
 	rb_insert_augmented(&newdata->rb, root, &memtype_rb_augment_cb);
 }
 
-int rbt_memtype_check_insert(struct memtype *new, unsigned long *ret_type)
+int rbt_memtype_check_insert(struct memtype *new,
+			     enum page_cache_mode *ret_type)
 {
 	int err = 0;
 
diff --git a/arch/x86/pci/i386.c b/arch/x86/pci/i386.c
index 2ae525e..0951023 100644
--- a/arch/x86/pci/i386.c
+++ b/arch/x86/pci/i386.c
@@ -433,14 +433,14 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
 		return -EINVAL;
 
 	if (pat_enabled && write_combine)
-		prot |= _PAGE_CACHE_WC;
+		prot |= protval_cachemode(_PAGE_CACHE_MODE_WC);
 	else if (pat_enabled || boot_cpu_data.x86 > 3)
 		/*
 		 * ioremap() and ioremap_nocache() defaults to UC MINUS for now.
 		 * To avoid attribute conflicts, request UC MINUS here
 		 * as well.
 		 */
-		prot |= _PAGE_CACHE_UC_MINUS;
+		prot |= protval_cachemode(_PAGE_CACHE_MODE_UC_MINUS);
 
 	prot |= _PAGE_IOMAP;	/* creating a mapping for IO */
 
diff --git a/drivers/video/fbdev/gbefb.c b/drivers/video/fbdev/gbefb.c
index 4aa56ba..703ffcd 100644
--- a/drivers/video/fbdev/gbefb.c
+++ b/drivers/video/fbdev/gbefb.c
@@ -54,7 +54,8 @@ struct gbefb_par {
 #endif
 #endif
 #ifdef CONFIG_X86
-#define pgprot_fb(_prot) ((_prot) | _PAGE_PCD)
+#define pgprot_fb(_prot) (((_prot) & ~_PAGE_CACHE_MASK) |	\
+			  protval_cachemode(_PAGE_CACHE_MODE_UC))
 #endif
 
 /*
diff --git a/drivers/video/fbdev/vermilion/vermilion.c b/drivers/video/fbdev/vermilion/vermilion.c
index 048a666..121f257 100644
--- a/drivers/video/fbdev/vermilion/vermilion.c
+++ b/drivers/video/fbdev/vermilion/vermilion.c
@@ -1004,13 +1004,15 @@ static int vmlfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
 	struct vml_info *vinfo = container_of(info, struct vml_info, info);
 	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
 	int ret;
+	unsigned long prot;
 
 	ret = vmlfb_vram_offset(vinfo, offset);
 	if (ret)
 		return -EINVAL;
 
-	pgprot_val(vma->vm_page_prot) |= _PAGE_PCD;
-	pgprot_val(vma->vm_page_prot) &= ~_PAGE_PWT;
+	prot = pgprot_val(vma->vm_page_prot) & ~_PAGE_CACHE_MASK;
+	pgprot_val(vma->vm_page_prot) =
+		prot | protval_cachemode(_PAGE_CACHE_MODE_UC);
 
 	return vm_iomap_memory(vma, vinfo->vram_start,
 			vinfo->vram_contig_size);
-- 
1.8.4.5



* [PATCH RFC 2/3] x86: Enable PAT to use cache mode translation tables
  2014-08-19 13:25 [PATCH RFC 0/3] x86: Full support of PAT jgross
  2014-08-19 13:25 ` [PATCH RFC 1/3] x86: Make page cache mode a real type jgross
@ 2014-08-19 13:25 ` jgross
  2014-08-22  9:32   ` Jan Beulich
  2014-08-22  9:32   ` [Xen-devel] " Jan Beulich
  2014-08-19 13:25 ` [PATCH RFC 3/3] Support Xen pv-domains using PAT jgross
  2014-08-20 12:05   ` One Thousand Gnomes
  3 siblings, 2 replies; 22+ messages in thread
From: jgross @ 2014-08-19 13:25 UTC (permalink / raw)
  To: stefan.bader, toshi.kani, linux-kernel, xen-devel, konrad.wilk,
	ville.syrjala, hpa, x86
  Cc: Juergen Gross

From: Juergen Gross <jgross@suse.com>

Update the translation tables from cache mode to pgprot values according to
the PAT settings. This enables changing the cache attributes of a PAT index in
just one place without having to change anything at the users of that index.

With this change it is possible to use the same kernel with different PAT
configurations, e.g. to support Xen.
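
As a usage sketch (this just restates the flow visible in the hunks below):
whoever programs or discovers a PAT layout hands the raw MSR value to the new
helper, and all later cache mode <-> pgprot translations follow it:

	u64 pat;

	rdmsrl(MSR_IA32_CR_PAT, pat);   /* or the value about to be written */
	pat_init_cache_modes(pat);      /* rebuilds __cachemode2pte_tbl and
	                                 * __pte2cachemode_tbl via
	                                 * update_cache_mode_entry() */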

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/pat.h           |  1 +
 arch/x86/include/asm/pgtable_types.h |  4 +++
 arch/x86/mm/init.c                   |  8 +++++
 arch/x86/mm/pat.c                    | 57 +++++++++++++++++++++++++++++++++++-
 include/linux/mm.h                   |  1 +
 5 files changed, 70 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/pat.h b/arch/x86/include/asm/pat.h
index 65b497b..a7816e1 100644
--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -11,6 +11,7 @@ static const int pat_enabled;
 #endif
 
 extern void pat_init(void);
+void pat_init_cache_modes(u64 pat);
 
 extern int reserve_memtype(u64 start, u64 end,
 		enum page_cache_mode req_pct, enum page_cache_mode *ret_pct);
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 7685b34..45c720e 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -338,6 +338,10 @@ extern uint8_t __pte2cachemode_tbl[8];
 	((((cb) >> (_PAGE_BIT_PAT - 2)) & 4) |		\
 	 (((cb) >> (_PAGE_BIT_PCD - 1)) & 2) |		\
 	 (((cb) >> _PAGE_BIT_PWT) & 1))
+#define __cm_idx2pte(i)					\
+	((((i) & 4) << (_PAGE_BIT_PAT - 2)) |		\
+	 (((i) & 2) << (_PAGE_BIT_PCD - 1)) |		\
+	 (((i) & 1) << _PAGE_BIT_PWT))
 
 static inline unsigned long protval_cachemode(enum page_cache_mode pct)
 {
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 0500124..d2ebe70 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -716,3 +716,11 @@ void __init zone_sizes_init(void)
 	free_area_init_nodes(max_zone_pfns);
 }
 
+void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache)
+{
+	/* entry 0 MUST be WB (hardwired to speed up translations) */
+	BUG_ON(!entry && cache != _PAGE_CACHE_MODE_WB);
+
+	__cachemode2pte_tbl[cache] = __cm_idx2pte(entry);
+	__pte2cachemode_tbl[entry] = cache;
+}
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 0ba0d79..ac8a5d4 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -75,6 +75,55 @@ enum {
 	PAT_UC_MINUS = 7,	/* UC, but can be overriden by MTRR */
 };
 
+/*
+ * Update the cache mode to pgprot translation tables according to PAT
+ * configuration.
+ * Using lower indices is preferred, so we start with highest index.
+ */
+void pat_init_cache_modes(u64 pat)
+{
+	int i;
+	enum page_cache_mode cache;
+	char pat_msg[33];
+	char *cache_mode;
+
+	pat_msg[32] = 0;
+	for (i = 7; i >= 0; i--) {
+		switch ((pat >> (i * 8)) & 7) {
+		case PAT_UC:
+			cache = _PAGE_CACHE_MODE_UC;
+			cache_mode = "UC  ";
+			break;
+		case PAT_WC:
+			cache = _PAGE_CACHE_MODE_WC;
+			cache_mode = "WC  ";
+			break;
+		case PAT_WT:
+			cache = _PAGE_CACHE_MODE_WT;
+			cache_mode = "WT  ";
+			break;
+		case PAT_WP:
+			cache = _PAGE_CACHE_MODE_WP;
+			cache_mode = "WP  ";
+			break;
+		case PAT_WB:
+			cache = _PAGE_CACHE_MODE_WB;
+			cache_mode = "WB  ";
+			break;
+		case PAT_UC_MINUS:
+			cache = _PAGE_CACHE_MODE_UC_MINUS;
+			cache_mode = "UC- ";
+			break;
+		default:
+			cache = _PAGE_CACHE_MODE_WB;
+			cache_mode = "WB  ";
+		}
+		update_cache_mode_entry(i, cache);
+		memcpy(pat_msg + 4 * i, cache_mode, 4);
+	}
+	pr_info("PAT configuration [0-7]: %s\n", pat_msg);
+}
+
 #define PAT(x, y)	((u64)PAT_ ## y << ((x)*8))
 
 void pat_init(void)
@@ -118,8 +167,14 @@ void pat_init(void)
 	      PAT(4, WB) | PAT(5, WC) | PAT(6, UC_MINUS) | PAT(7, UC);
 
 	/* Boot CPU check */
-	if (!boot_pat_state)
+	if (!boot_pat_state) {
 		rdmsrl(MSR_IA32_CR_PAT, boot_pat_state);
+		/*
+		 * Init cache mode tables before writing MSR to give Xen a
+		 * chance to correct the changes when doing the write.
+		 */
+		pat_init_cache_modes(pat);
+	}
 
 	wrmsrl(MSR_IA32_CR_PAT, pat);
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8981cc8..d7bf551 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1574,6 +1574,7 @@ extern void free_area_init(unsigned long * zones_size);
 extern void free_area_init_node(int nid, unsigned long * zones_size,
 		unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
+extern void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache);
 
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
-- 
1.8.4.5


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH RFC 3/3] Support Xen pv-domains using PAT
  2014-08-19 13:25 [PATCH RFC 0/3] x86: Full support of PAT jgross
  2014-08-19 13:25 ` [PATCH RFC 1/3] x86: Make page cache mode a real type jgross
  2014-08-19 13:25 ` [PATCH RFC 2/3] x86: Enable PAT to use cache mode translation tables jgross
@ 2014-08-19 13:25 ` jgross
  2014-08-20 12:05   ` One Thousand Gnomes
  3 siblings, 0 replies; 22+ messages in thread
From: jgross @ 2014-08-19 13:25 UTC (permalink / raw)
  To: stefan.bader, toshi.kani, linux-kernel, xen-devel, konrad.wilk,
	ville.syrjala, hpa, x86
  Cc: Juergen Gross

From: Juergen Gross <jgross@suse.com>

With the dynamic mapping between cache modes and pgprot values it is now
possible to use all cache modes via the Xen hypervisor PAT settings in a
pv domain.

All that has to be done is to read the PAT configuration MSR and set up the
translation tables accordingly.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/xen/enlighten.c | 12 ++++++----
 arch/x86/xen/mmu.c       | 60 +++++++++++++++++-------------------------------
 arch/x86/xen/xen-ops.h   |  1 +
 3 files changed, 30 insertions(+), 43 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index c0cb11f..ef705a3 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1552,12 +1552,16 @@ asmlinkage __visible void __init xen_start_kernel(void)
 
 	xen_init_mmu_ops();
 
+	/*
+	 * Modify the cache mode translation tables to match Xen's PAT
+	 * configuration.
+	 */
+
+	if (xen_init_cache_types())
+		__supported_pte_mask &= ~(_PAGE_PWT | _PAGE_PCD);
+
 	/* Prevent unwanted bits from being set in PTEs. */
 	__supported_pte_mask &= ~_PAGE_GLOBAL;
-#if 0
-	if (!xen_initial_domain())
-#endif
-		__supported_pte_mask &= ~(_PAGE_PWT | _PAGE_PCD);
 
 	__supported_pte_mask |= _PAGE_IOMAP;
 
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index e8a1201..49830c0 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -434,13 +434,7 @@ static pteval_t iomap_pte(pteval_t val)
 __visible pteval_t xen_pte_val(pte_t pte)
 {
 	pteval_t pteval = pte.pte;
-#if 0
-	/* If this is a WC pte, convert back from Xen WC to Linux WC */
-	if ((pteval & (_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)) == _PAGE_PAT) {
-		WARN_ON(!pat_enabled);
-		pteval = (pteval & ~_PAGE_PAT) | _PAGE_PWT;
-	}
-#endif
+
 	if (xen_initial_domain() && (pteval & _PAGE_IOMAP))
 		return pteval;
 
@@ -455,47 +449,35 @@ __visible pgdval_t xen_pgd_val(pgd_t pgd)
 PV_CALLEE_SAVE_REGS_THUNK(xen_pgd_val);
 
 /*
- * Xen's PAT setup is part of its ABI, though I assume entries 6 & 7
- * are reserved for now, to correspond to the Intel-reserved PAT
- * types.
- *
- * We expect Linux's PAT set as follows:
- *
- * Idx  PTE flags        Linux    Xen    Default
- * 0                     WB       WB     WB
- * 1            PWT      WC       WT     WT
- * 2        PCD          UC-      UC-    UC-
- * 3        PCD PWT      UC       UC     UC
- * 4    PAT              WB       WC     WB
- * 5    PAT     PWT      WC       WP     WT
- * 6    PAT PCD          UC-      rsv    UC-
- * 7    PAT PCD PWT      UC       rsv    UC
+ * Xen's PAT setup is part of its ABI.
+ * We don't care what Linux wants to use, just fall back to the Xen PAT
+ * configuration. All we have to do in case of Linux's own PAT configuration
+ * is to overwrite the cache mode translation tables with the correct
+ * values for the Xen configuration.
  */
 
+int xen_init_cache_types(void)
+{
+	u64 pat;
+	int err;
+
+	err = rdmsrl_safe(MSR_IA32_CR_PAT, &pat);
+	if (!err)
+		pat_init_cache_modes(pat);
+	return err;
+}
+
 void xen_set_pat(u64 pat)
 {
-	/* We expect Linux to use a PAT setting of
-	 * UC UC- WC WB (ignoring the PAT flag) */
-	WARN_ON(pat != 0x0007010600070106ull);
+	if (xen_init_cache_types())
+		/* Domain configured PAT, but we can't adapt to the changes */
+		BUG();
 }
 
 __visible pte_t xen_make_pte(pteval_t pte)
 {
 	phys_addr_t addr = (pte & PTE_PFN_MASK);
-#if 0
-	/* If Linux is trying to set a WC pte, then map to the Xen WC.
-	 * If _PAGE_PAT is set, then it probably means it is really
-	 * _PAGE_PSE, so avoid fiddling with the PAT mapping and hope
-	 * things work out OK...
-	 *
-	 * (We should never see kernel mappings with _PAGE_PSE set,
-	 * but we could see hugetlbfs mappings, I think.).
-	 */
-	if (pat_enabled && !WARN_ON(pte & _PAGE_PAT)) {
-		if ((pte & (_PAGE_PCD | _PAGE_PWT)) == _PAGE_PWT)
-			pte = (pte & ~(_PAGE_PCD | _PAGE_PWT)) | _PAGE_PAT;
-	}
-#endif
+
 	/*
 	 * Unprivileged domains are allowed to do IOMAPpings for
 	 * PCI passthrough, but not map ISA space.  The ISA
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 28c7e0b..da7e666 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -34,6 +34,7 @@ extern unsigned long xen_max_p2m_pfn;
 void xen_mm_pin_all(void);
 void xen_mm_unpin_all(void);
 void xen_set_pat(u64);
+int xen_init_cache_types(void);
 
 char * __init xen_memory_setup(void);
 char * xen_auto_xlated_memory_setup(void);
-- 
1.8.4.5


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH RFC 0/3] x86: Full support of PAT
  2014-08-19 13:25 [PATCH RFC 0/3] x86: Full support of PAT jgross
@ 2014-08-20 12:05   ` One Thousand Gnomes
  2014-08-19 13:25 ` [PATCH RFC 2/3] x86: Enable PAT to use cache mode translation tables jgross
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 22+ messages in thread
From: One Thousand Gnomes @ 2014-08-20 12:05 UTC (permalink / raw)
  To: jgross
  Cc: stefan.bader, toshi.kani, linux-kernel, xen-devel, konrad.wilk,
	ville.syrjala, hpa, x86

> The Linux kernel currently supports only 4 different cache modes. The PAT MSR
> is set up in a way that the setting of _PAGE_PAT in a pte doesn't matter: the
> top 4 entries in the PAT MSR are the same as the 4 lower entries.
> 
> This results in the kernel not supporting e.g. write-through mode. Especially
> this cache mode would speed up drivers of video cards which now have to use
> uncached accesses.


Pentium II erratum A52 (and similar on quite a few other processors)

Problem: The Page Attribute Table (PAT) contains eight entries, which
must all be initialized and considered when setting up memory types for
the Pentium II processor. However, in Mode B or Mode C paging, the upper
four entries do not function correctly for 4-Kbyte pages. Specifically,
bit seven of page table entries that translate addresses to 4-Kbyte pages
should be used as the upper bit of a three-bit index to determine the PAT
entry that specifies the memory type for the page. When Mode B (CR4.PSE =
1) and/or Mode C (CR4.PAE) are enabled, the processor forces this bit to
zero when determining the memory type regardless of the value in the page
table entry. The upper four entries of the PAT function correctly for
2-Mbyte and 4-Mbyte large pages (specified by bit 12 of the page
directory entry for those translations). Implication: Only the lower four
PAT entries are useful for 4 KB translations when Mode B or C paging is
used. In Mode A paging (4-Kbyte pages only), all eight entries may be
used. All eight entries may be used for large pages in Mode B or C paging.



Doing this stuff for Xen also IMHO makes no sense at all. We shouldn't
have a kernel full of crap to deal with Xen-isms. IFF it means the
changes can also implement a mix of four PAT entries on Pentium-M or
earlier CPUs with PAT errata, and the full PAT on processors without
errata then IMHO it becomes a whole world more interesting.

Alan

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Xen-devel] [PATCH RFC 0/3] x86: Full support of PAT
  2014-08-20 12:05   ` One Thousand Gnomes
@ 2014-08-20 12:21     ` Jan Beulich
  -1 siblings, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2014-08-20 12:21 UTC (permalink / raw)
  To: gnomes
  Cc: stefan.bader, toshi.kani, x86, ville.syrjala, xen-devel,
	Juergen Gross, linux-kernel, hpa

>>> One Thousand Gnomes <gnomes@lxorguk.ukuu.org.uk> 08/20/14 2:08 PM >>>
>> The Linux kernel currently supports only 4 different cache modes. The PAT MSR
>> is set up in a way that the setting of _PAGE_PAT in a pte doesn't matter: the
>> top 4 entries in the PAT MSR are the same as the 4 lower entries.
>> 
>> This results in the kernel not supporting e.g. write-through mode. Especially
>> this cache mode would speed up drivers of video cards which now have to use
>> uncached accesses.
>
>Pentium II erratum A52 (and similar on quite a few other processors)

But you're aware that for the last two major releases Xen hasn't been supporting
32-bit CPUs at all anymore? I.e. why should we limit functionality under Xen
based on errata on 64-bit-incapable CPUs?

Jan


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH RFC 0/3] x86: Full support of PAT
  2014-08-20 12:05   ` One Thousand Gnomes
  (?)
  (?)
@ 2014-08-20 12:35   ` Juergen Gross
  -1 siblings, 0 replies; 22+ messages in thread
From: Juergen Gross @ 2014-08-20 12:35 UTC (permalink / raw)
  To: One Thousand Gnomes
  Cc: stefan.bader, toshi.kani, linux-kernel, xen-devel, konrad.wilk,
	ville.syrjala, hpa, x86

On 08/20/2014 02:05 PM, One Thousand Gnomes wrote:
>> The Linux kernel currently supports only 4 different cache modes. The PAT MSR
>> is set up in a way that the setting of _PAGE_PAT in a pte doesn't matter: the
>> top 4 entries in the PAT MSR are the same as the 4 lower entries.
>>
>> This results in the kernel not supporting e.g. write-through mode. Especially
>> this cache mode would speed up drivers of video cards which now have to use
>> uncached accesses.
>
>
> Pentium II erratum A52 (and similar on quite a few other processors)
>
> Problem: The Page Attribute Table (PAT) contains eight entries, which
> must all be initialized and considered when setting up memory types for
> the Pentium II processor. However, in Mode B or Mode C paging, the upper
> four entries do not function correctly for 4-Kbyte pages. Specifically,
> bit seven of page table entries that translate addresses to 4-Kbyte pages
> should be used as the upper bit of a three-bit index to determine the PAT
> entry that specifies the memory type for the page. When Mode B (CR4.PSE =
> 1) and/or Mode C (CR4.PAE) are enabled, the processor forces this bit to
> zero when determining the memory type regardless of the value in the page
> table entry. The upper four entries of the PAT function correctly for
> 2-Mbyte and 4-Mbyte large pages (specified by bit 12 of the page
> directory entry for those translations). Implication: Only the lower four
> PAT entries are useful for 4 KB translations when Mode B or C paging is
> used. In Mode A paging (4-Kbyte pages only), all eight entries may be
> used. All eight entries may be used for large pages in Mode B or C paging.
>
>
>
> Doing this stuff for Xen also IMHO makes no sense at all. We shouldn't
> have a kernel full of crap to deal with Xen-isms. IFF it means the
> changes can also implement a mix of four PAT entries on Pentium-M or
> earlier CPUs with PAT errata, and the full PAT on processors without
> errata then IMHO it becomes a whole world more interesting.

This is the case.

The patches as posted don't change PAT MSR settings at all. Using all 8
PAT MSR entries on processors without errata is easy, while limiting usage
to only 4 entries is possible as well. It is possible to make the
decision dynamically while booting the system. The only difference is
the value written to the PAT MSR, as this value is used to modify the
tables used for translating between cache modes and pte bits. If a
driver wants to set write-through mode on a processor with the PAT
erratum, the translation will result in uncached mode, which is okay.
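
For illustration, a boot-time choice along those lines might look roughly
like the sketch below (cpu_has_pat_erratum() is a hypothetical helper and
the second layout is only an example; neither is part of the posted series):

	if (cpu_has_pat_erratum())
		/* erratum: upper 4 entries unusable for 4k pages, keep legacy layout */
		pat = PAT(0, WB) | PAT(1, WC) | PAT(2, UC_MINUS) | PAT(3, UC) |
		      PAT(4, WB) | PAT(5, WC) | PAT(6, UC_MINUS) | PAT(7, UC);
	else
		/* all 8 entries usable, so further cache modes can be offered */
		pat = PAT(0, WB) | PAT(1, WC) | PAT(2, UC_MINUS) | PAT(3, UC) |
		      PAT(4, WB) | PAT(5, WT) | PAT(6, WP) | PAT(7, UC);
	pat_init_cache_modes(pat);
	wrmsrl(MSR_IA32_CR_PAT, pat);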

Regarding Xen: The patches don't introduce further Xen-isms. They reduce
the Xen specialties by making it possible to use the same translation
mechanisms on a native system as on a Xen-based system. The only differences
are the values in the translation tables, as Xen uses a different PAT
configuration than native Linux (with my patches Linux could use the same
PAT configuration as Xen, of course).

Juergen


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH RFC 1/3] x86: Make page cache mode a real type
  2014-08-19 13:25 ` [PATCH RFC 1/3] x86: Make page cache mode a real type jgross
@ 2014-08-20 19:26   ` Toshi Kani
  2014-08-21  9:30     ` [Xen-devel] " Juergen Gross
  2014-08-21 22:09   ` Toshi Kani
  1 sibling, 1 reply; 22+ messages in thread
From: Toshi Kani @ 2014-08-20 19:26 UTC (permalink / raw)
  To: jgross
  Cc: stefan.bader, linux-kernel, xen-devel, konrad.wilk,
	ville.syrjala, hpa, x86

On Tue, 2014-08-19 at 15:25 +0200, jgross@suse.com wrote:
> From: Juergen Gross <jgross@suse.com>
> 
> At the moment there are a lot of places that handle setting or getting
> the page cache mode by treating the pgprot bits equal to the cache mode.
> This is only true because there are a lot of assumptions about the setup
> of the PAT MSR. Otherwise the cache type needs to get translated into
> pgprot bits and vice versa.
> 
> This patch tries to prepare for that by introducing a seperate type
> for the cache mode and adding functions to translate between those and pgprot
> values.
> 
> To avoid too much performance penalty the translation between cache mode
> and pgprot values is done via tables which contain the relevant information.
> Write-back cache mode is hard-wired to be 0, all other modes are configurable
> via those tables. For large pages there are translation functions as the
> PAT bit is located at different positions in the ptes of 4k and large pages.

Hi Juergen,

Thanks for driving this.  As we talked before, the changes look good to
me.  I will post a patchset to enable WT on top of your patchset once
this is settled a bit.

I have a couple of minor comments below.

> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index f216963..7685b34 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
 :
>  /*         xwr */
>  #define __P000	PAGE_NONE
> @@ -328,6 +331,55 @@ static inline pteval_t pte_flags(pte_t pte)
>  #define pgprot_val(x)	((x).pgprot)
>  #define __pgprot(x)	((pgprot_t) { (x) } )
>  
> +extern uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM];
> +extern uint8_t __pte2cachemode_tbl[8];
> +
> +#define __pte2cm_idx(cb)				\
> +	((((cb) >> (_PAGE_BIT_PAT - 2)) & 4) |		\
> +	 (((cb) >> (_PAGE_BIT_PCD - 1)) & 2) |		\
> +	 (((cb) >> _PAGE_BIT_PWT) & 1))
> +
> +static inline unsigned long protval_cachemode(enum page_cache_mode pct)
> +{
> +	if (likely(pct == 0))
> +		return 0;
> +	return __cachemode2pte_tbl[pct];
> +}

I think this function name is not intuitive. pgprot_val() works as
pgprot-to-protval, but protval_cachemode() works the other way around as
cachemode-to-protval.

How about renaming to cachemode_protval()?

Also, "pct" should probably be changed to "pcm".

> +static inline pgprot_t pgprot_cachemode(enum page_cache_mode pct)
> +{
> +	return __pgprot(protval_cachemode(pct));
> +}

Ditto.

> +static inline enum page_cache_mode cachemode_pgprot(pgprot_t pgprot)
> +{
> +	unsigned long masked;
> +
> +	masked = pgprot_val(pgprot) & _PAGE_CACHE_MASK;
> +	if (likely(masked == 0))
> +		return 0;
> +	return __pte2cachemode_tbl[__pte2cm_idx(masked)];
> +}

Ditto.

> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index 66dba36..0500124 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -27,6 +27,35 @@
>  
>  #include "mm_internal.h"
>  
> +/*
> + * Tables translating between page_cache_type_t and pte encoding.
> + * Minimal supported modes are defined statically, modified if more supported
> + * cache modes are available.
> + * Index into __cachemode2pte_tbl is the cachemode.
> + * Index into __pte2cachemode_tbl are the caching attribute bits of the pte
> + * (_PAGE_PWT, _PAGE_PCD, _PAGE_PAT) at index bit positions 0, 1, 2.
> + */
> +uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM] = {
> +	[_PAGE_CACHE_MODE_WB]		= 0,
> +	[_PAGE_CACHE_MODE_WC]		= _PAGE_PWT,
> +	[_PAGE_CACHE_MODE_UC_MINUS]	= _PAGE_PCD,
> +	[_PAGE_CACHE_MODE_UC]		= _PAGE_PCD | _PAGE_PWT,
> +	[_PAGE_CACHE_MODE_WT]		= _PAGE_PWT,
> +	[_PAGE_CACHE_MODE_WP]		= _PAGE_PWT,
> +};

I think WT and WP should be set to _PAGE_PCD (UC_MINUS) to be safe.

> +EXPORT_SYMBOL_GPL(__cachemode2pte_tbl);
> +uint8_t __pte2cachemode_tbl[8] = {
> +	[__pte2cm_idx(0)] = _PAGE_CACHE_MODE_WB,
> +	[__pte2cm_idx(_PAGE_PWT)] = _PAGE_CACHE_MODE_WC,
> +	[__pte2cm_idx(_PAGE_PCD)] = _PAGE_CACHE_MODE_UC_MINUS,
> +	[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD)] = _PAGE_CACHE_MODE_UC,
> +	[__pte2cm_idx(_PAGE_PAT)] = _PAGE_CACHE_MODE_WB,
> +	[__pte2cm_idx(_PAGE_PWT | _PAGE_PAT)] = _PAGE_CACHE_MODE_WC,
> +	[__pte2cm_idx(_PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC_MINUS,
> +	[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC,
> +};
 :
> diff --git a/arch/x86/mm/iomap_32.c b/arch/x86/mm/iomap_32.c
> index 7b179b4..91a2d3b 100644
> --- a/arch/x86/mm/iomap_32.c
> +++ b/arch/x86/mm/iomap_32.c
> @@ -33,17 +33,17 @@ static int is_io_mapping_possible(resource_size_t base, unsigned long size)
>  
>  int iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot)
>  {
> -	unsigned long flag = _PAGE_CACHE_WC;
> +	enum page_cache_mode pct = _PAGE_CACHE_MODE_WC;
>  	int ret;
>  
>  	if (!is_io_mapping_possible(base, size))
>  		return -EINVAL;
>  
> -	ret = io_reserve_memtype(base, base + size, &flag);
> +	ret = io_reserve_memtype(base, base + size, &pct);
>  	if (ret)
>  		return ret;
>  
> -	*prot = __pgprot(__PAGE_KERNEL | flag);
> +	*prot = __pgprot(__PAGE_KERNEL | pgprot_val(pgprot_cachemode(pct)));

pgprot_val(pgprot_cachemode(pct)) can be simply protval_cachemode(pct).
(again, this should be renamed as cachemode_protval().)

There are other places similar to this.
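
For instance (sketch only, using the names prior to the proposed renaming):

	*prot = __pgprot(__PAGE_KERNEL | protval_cachemode(pct));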

>  	return 0;
>  }
>  EXPORT_SYMBOL_GPL(iomap_create_wc);
> @@ -73,6 +73,9 @@ void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
>  /*
>   * Map 'pfn' using protections 'prot'
>   */
> +#define __PAGE_KERNEL_WC	(__PAGE_KERNEL | \
> +				 protval_cachemode(_PAGE_CACHE_TYPE_WC))

_PAGE_CACHE_TYPE_WC should be _PAGE_CACHE_MODE_WC.

There are a few other places where _PAGE_CACHE_TYPE_* are still used, so
please fix for that.


Thanks,
-Toshi


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH RFC 0/3] x86: Full support of PAT
  2014-08-20 12:05   ` One Thousand Gnomes
                     ` (2 preceding siblings ...)
  (?)
@ 2014-08-20 21:59   ` H. Peter Anvin
  -1 siblings, 0 replies; 22+ messages in thread
From: H. Peter Anvin @ 2014-08-20 21:59 UTC (permalink / raw)
  To: One Thousand Gnomes, jgross
  Cc: stefan.bader, toshi.kani, linux-kernel, xen-devel, konrad.wilk,
	ville.syrjala, x86

On 08/20/2014 07:05 AM, One Thousand Gnomes wrote:
> Doing this stuff for Xen also IMHO makes no sense at all. We shouldn't
> have a kernel full of crap to deal with Xen-isms.

We already *have* a kernel full of crap to deal with Xen-isms.  We
screwed that pooch some 10 years ago.

> IFF it means the changes can also implement a mix of four PAT entries
> on Pentium-M or earlier CPUs with PAT errata, and the full PAT on
> processors without errata then IMHO it becomes a whole world more
> interesting.

That is one of the wins, indeed, as well as handle pre-PAT processors.

	-hpa



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Xen-devel] [PATCH RFC 0/3] x86: Full support of PAT
  2014-08-20 12:21     ` Jan Beulich
  (?)
@ 2014-08-20 22:00     ` H. Peter Anvin
  -1 siblings, 0 replies; 22+ messages in thread
From: H. Peter Anvin @ 2014-08-20 22:00 UTC (permalink / raw)
  To: Jan Beulich, gnomes
  Cc: stefan.bader, toshi.kani, x86, ville.syrjala, xen-devel,
	Juergen Gross, linux-kernel

On 08/20/2014 07:21 AM, Jan Beulich wrote:
> 
> But you're aware that for the last two major releases Xen hasn't been supporting
> 32-bit CPUs at all anymore? I.e. why should we limit functionality under Xen
> based on errata on 64-bit-incapable CPUs?
> 

Because there is no fscking way we're doing this differently on 32 and
64 bits.  Quite frankly the issue is if we should support this under Xen
at all, but I think we can do it cleanly.

	-hpa



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Xen-devel] [PATCH RFC 1/3] x86: Make page cache mode a real type
  2014-08-20 19:26   ` Toshi Kani
@ 2014-08-21  9:30     ` Juergen Gross
  2014-08-22  9:24         ` Jan Beulich
  0 siblings, 1 reply; 22+ messages in thread
From: Juergen Gross @ 2014-08-21  9:30 UTC (permalink / raw)
  To: Toshi Kani; +Cc: xen-devel, x86, linux-kernel, stefan.bader, hpa, ville.syrjala

On 08/20/2014 09:26 PM, Toshi Kani wrote:
> On Tue, 2014-08-19 at 15:25 +0200, jgross@suse.com wrote:
>> From: Juergen Gross <jgross@suse.com>
>>
>> At the moment there are a lot of places that handle setting or getting
>> the page cache mode by treating the pgprot bits equal to the cache mode.
>> This is only true because there are a lot of assumptions about the setup
>> of the PAT MSR. Otherwise the cache type needs to get translated into
>> pgprot bits and vice versa.
>>
>> This patch tries to prepare for that by introducing a seperate type
>> for the cache mode and adding functions to translate between those and pgprot
>> values.
>>
>> To avoid too much performance penalty the translation between cache mode
>> and pgprot values is done via tables which contain the relevant information.
>> Write-back cache mode is hard-wired to be 0, all other modes are configurable
>> via those tables. For large pages there are translation functions as the
>> PAT bit is located at different positions in the ptes of 4k and large pages.
>
> Hi Juergen,
>
> Thanks for driving this.  As we talked before, the changes look good to
> me.  I will post a patchset to enable WT on top of your patchset once
> this is settled a bit.
>
> I have a couple of minor comments below.
>
>> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
>> index f216963..7685b34 100644
>> --- a/arch/x86/include/asm/pgtable_types.h
>> +++ b/arch/x86/include/asm/pgtable_types.h
>   :
>>   /*         xwr */
>>   #define __P000	PAGE_NONE
>> @@ -328,6 +331,55 @@ static inline pteval_t pte_flags(pte_t pte)
>>   #define pgprot_val(x)	((x).pgprot)
>>   #define __pgprot(x)	((pgprot_t) { (x) } )
>>
>> +extern uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM];
>> +extern uint8_t __pte2cachemode_tbl[8];
>> +
>> +#define __pte2cm_idx(cb)				\
>> +	((((cb) >> (_PAGE_BIT_PAT - 2)) & 4) |		\
>> +	 (((cb) >> (_PAGE_BIT_PCD - 1)) & 2) |		\
>> +	 (((cb) >> _PAGE_BIT_PWT) & 1))
>> +
>> +static inline unsigned long protval_cachemode(enum page_cache_mode pct)
>> +{
>> +	if (likely(pct == 0))
>> +		return 0;
>> +	return __cachemode2pte_tbl[pct];
>> +}
>
> I think this function name is not intuitive. pgprot_val() works as
> pgprot-to-protval, but protval_cachemode() works the other way around as
> cachemode-to-protval.
>
> How about renaming to cachemode_protval()?

I think I'll use cachemode2protval().

>
> Also, "pct" should probably be changed to "pcm".

Yeah.

>
>> +static inline pgprot_t pgprot_cachemode(enum page_cache_mode pct)
>> +{
>> +	return __pgprot(protval_cachemode(pct));
>> +}
>
> Ditto.

Will be cachemode2pgprot().

>
>> +static inline enum page_cache_mode cachemode_pgprot(pgprot_t pgprot)
>> +{
>> +	unsigned long masked;
>> +
>> +	masked = pgprot_val(pgprot) & _PAGE_CACHE_MASK;
>> +	if (likely(masked == 0))
>> +		return 0;
>> +	return __pte2cachemode_tbl[__pte2cm_idx(masked)];
>> +}
>
> Ditto.

Will be pgprot2cachemode().

>
>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>> index 66dba36..0500124 100644
>> --- a/arch/x86/mm/init.c
>> +++ b/arch/x86/mm/init.c
>> @@ -27,6 +27,35 @@
>>
>>   #include "mm_internal.h"
>>
>> +/*
>> + * Tables translating between page_cache_type_t and pte encoding.
>> + * Minimal supported modes are defined statically, modified if more supported
>> + * cache modes are available.
>> + * Index into __cachemode2pte_tbl is the cachemode.
>> + * Index into __pte2cachemode_tbl are the caching attribute bits of the pte
>> + * (_PAGE_PWT, _PAGE_PCD, _PAGE_PAT) at index bit positions 0, 1, 2.
>> + */
>> +uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM] = {
>> +	[_PAGE_CACHE_MODE_WB]		= 0,
>> +	[_PAGE_CACHE_MODE_WC]		= _PAGE_PWT,
>> +	[_PAGE_CACHE_MODE_UC_MINUS]	= _PAGE_PCD,
>> +	[_PAGE_CACHE_MODE_UC]		= _PAGE_PCD | _PAGE_PWT,
>> +	[_PAGE_CACHE_MODE_WT]		= _PAGE_PWT,
>> +	[_PAGE_CACHE_MODE_WP]		= _PAGE_PWT,
>> +};
>
> I think WT and WP should be set to _PAGE_PCD (UC_MINUS) for safe.

Oh, you are right.

>
>> +EXPORT_SYMBOL_GPL(__cachemode2pte_tbl);
>> +uint8_t __pte2cachemode_tbl[8] = {
>> +	[__pte2cm_idx(0)] = _PAGE_CACHE_MODE_WB,
>> +	[__pte2cm_idx(_PAGE_PWT)] = _PAGE_CACHE_MODE_WC,
>> +	[__pte2cm_idx(_PAGE_PCD)] = _PAGE_CACHE_MODE_UC_MINUS,
>> +	[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD)] = _PAGE_CACHE_MODE_UC,
>> +	[__pte2cm_idx(_PAGE_PAT)] = _PAGE_CACHE_MODE_WB,
>> +	[__pte2cm_idx(_PAGE_PWT | _PAGE_PAT)] = _PAGE_CACHE_MODE_WC,
>> +	[__pte2cm_idx(_PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC_MINUS,
>> +	[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC,
>> +};
>   :
>> diff --git a/arch/x86/mm/iomap_32.c b/arch/x86/mm/iomap_32.c
>> index 7b179b4..91a2d3b 100644
>> --- a/arch/x86/mm/iomap_32.c
>> +++ b/arch/x86/mm/iomap_32.c
>> @@ -33,17 +33,17 @@ static int is_io_mapping_possible(resource_size_t base, unsigned long size)
>>
>>   int iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot)
>>   {
>> -	unsigned long flag = _PAGE_CACHE_WC;
>> +	enum page_cache_mode pct = _PAGE_CACHE_MODE_WC;
>>   	int ret;
>>
>>   	if (!is_io_mapping_possible(base, size))
>>   		return -EINVAL;
>>
>> -	ret = io_reserve_memtype(base, base + size, &flag);
>> +	ret = io_reserve_memtype(base, base + size, &pct);
>>   	if (ret)
>>   		return ret;
>>
>> -	*prot = __pgprot(__PAGE_KERNEL | flag);
>> +	*prot = __pgprot(__PAGE_KERNEL | pgprot_val(pgprot_cachemode(pct)));
>
> pgprot_val(pgprot_cachemode(pct)) can be simply protval_cachemode(pct).
> (again, this should be renamed as cachemode_protval().)

... or cachemode2protval().

>
> There are other places similar to this.
>
>>   	return 0;
>>   }
>>   EXPORT_SYMBOL_GPL(iomap_create_wc);
>> @@ -73,6 +73,9 @@ void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
>>   /*
>>    * Map 'pfn' using protections 'prot'
>>    */
>> +#define __PAGE_KERNEL_WC	(__PAGE_KERNEL | \
>> +				 protval_cachemode(_PAGE_CACHE_TYPE_WC))
>
> _PAGE_CACHE_TYPE_WC should be _PAGE_CACHE_MODE_WC.
>
> There are a few other places where _PAGE_CACHE_TYPE_* are still used, so
> please fix for that.

Oops.


Thanks, Juergen

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH RFC 1/3] x86: Make page cache mode a real type
  2014-08-19 13:25 ` [PATCH RFC 1/3] x86: Make page cache mode a real type jgross
  2014-08-20 19:26   ` Toshi Kani
@ 2014-08-21 22:09   ` Toshi Kani
  2014-08-22  5:25     ` Juergen Gross
  1 sibling, 1 reply; 22+ messages in thread
From: Toshi Kani @ 2014-08-21 22:09 UTC (permalink / raw)
  To: jgross
  Cc: stefan.bader, linux-kernel, xen-devel, konrad.wilk,
	ville.syrjala, hpa, x86

On Tue, 2014-08-19 at 15:25 +0200, jgross@suse.com wrote:
> From: Juergen Gross <jgross@suse.com>
> 
> At the moment there are a lot of places that handle setting or getting
> the page cache mode by treating the pgprot bits equal to the cache mode.
> This is only true because there are a lot of assumptions about the setup
> of the PAT MSR. Otherwise the cache type needs to get translated into
> pgprot bits and vice versa.
> 
> This patch tries to prepare for that by introducing a seperate type
> for the cache mode and adding functions to translate between those and pgprot
> values.
> 
> To avoid too much performance penalty the translation between cache mode
> and pgprot values is done via tables which contain the relevant information.
> Write-back cache mode is hard-wired to be 0, all other modes are configurable
> via those tables. For large pages there are translation functions as the
> PAT bit is located at different positions in the ptes of 4k and large pages.

One more comment below..

> diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
 :
> -static inline void set_page_memtype(struct page *pg, unsigned long memtype)
> +static inline void set_page_memtype(struct page *pg,
> +				    enum page_cache_mode memtype)
>  {
>  	unsigned long memtype_flags = _PGMT_DEFAULT;
>  	unsigned long old_flags;
>  	unsigned long new_flags;
>  
>  	switch (memtype) {
> -	case _PAGE_CACHE_WC:
> +	case _PAGE_CACHE_MODE_WC:
>  		memtype_flags = _PGMT_WC;
>  		break;
> -	case _PAGE_CACHE_UC_MINUS:
> +	case _PAGE_CACHE_MODE_UC_MINUS:
>  		memtype_flags = _PGMT_UC_MINUS;
>  		break;
> -	case _PAGE_CACHE_WB:
> +	case _PAGE_CACHE_MODE_WB:
> +	default:
>  		memtype_flags = _PGMT_WB;
>  		break;
>  	}

Adding a "default" case that is handled as _PGMT_WB is not correct here.
free_ram_pages_type() calls set_page_memtype() with -1, which needs to
be set to _PGMT_DEFAULT.
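
For illustration, keeping the -1 case distinct could look roughly like this
(a sketch, assuming set_page_memtype() keeps its current callers):

	switch (memtype) {
	case _PAGE_CACHE_MODE_WC:
		memtype_flags = _PGMT_WC;
		break;
	case _PAGE_CACHE_MODE_UC_MINUS:
		memtype_flags = _PGMT_UC_MINUS;
		break;
	case _PAGE_CACHE_MODE_WB:
		memtype_flags = _PGMT_WB;
		break;
	default:
		/* e.g. the -1 passed by free_ram_pages_type() */
		memtype_flags = _PGMT_DEFAULT;
		break;
	}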

Thanks,
-Toshi





^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH RFC 1/3] x86: Make page cache mode a real type
  2014-08-21 22:09   ` Toshi Kani
@ 2014-08-22  5:25     ` Juergen Gross
  0 siblings, 0 replies; 22+ messages in thread
From: Juergen Gross @ 2014-08-22  5:25 UTC (permalink / raw)
  To: Toshi Kani
  Cc: stefan.bader, linux-kernel, xen-devel, konrad.wilk,
	ville.syrjala, hpa, x86

On 08/22/2014 12:09 AM, Toshi Kani wrote:
> On Tue, 2014-08-19 at 15:25 +0200, jgross@suse.com wrote:
>> From: Juergen Gross <jgross@suse.com>
>>
>> At the moment there are a lot of places that handle setting or getting
>> the page cache mode by treating the pgprot bits equal to the cache mode.
>> This is only true because there are a lot of assumptions about the setup
>> of the PAT MSR. Otherwise the cache type needs to get translated into
>> pgprot bits and vice versa.
>>
>> This patch tries to prepare for that by introducing a seperate type
>> for the cache mode and adding functions to translate between those and pgprot
>> values.
>>
>> To avoid too much performance penalty the translation between cache mode
>> and pgprot values is done via tables which contain the relevant information.
>> Write-back cache mode is hard-wired to be 0, all other modes are configurable
>> via those tables. For large pages there are translation functions as the
>> PAT bit is located at different positions in the ptes of 4k and large pages.
>
> One more comment below..
>
>> diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
>   :
>> -static inline void set_page_memtype(struct page *pg, unsigned long memtype)
>> +static inline void set_page_memtype(struct page *pg,
>> +				    enum page_cache_mode memtype)
>>   {
>>   	unsigned long memtype_flags = _PGMT_DEFAULT;
>>   	unsigned long old_flags;
>>   	unsigned long new_flags;
>>
>>   	switch (memtype) {
>> -	case _PAGE_CACHE_WC:
>> +	case _PAGE_CACHE_MODE_WC:
>>   		memtype_flags = _PGMT_WC;
>>   		break;
>> -	case _PAGE_CACHE_UC_MINUS:
>> +	case _PAGE_CACHE_MODE_UC_MINUS:
>>   		memtype_flags = _PGMT_UC_MINUS;
>>   		break;
>> -	case _PAGE_CACHE_WB:
>> +	case _PAGE_CACHE_MODE_WB:
>> +	default:
>>   		memtype_flags = _PGMT_WB;
>>   		break;
>>   	}
>
> Adding the "default" case handled as _PGMT_WB is not correct here.
> free_ram_pages_type() calls set_page_memtype() with -1, which needs to
> be set to _PGMT_DEFAULT.

It says so in the comment above. I'll correct it, thanks.

Juergen


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Xen-devel] [PATCH RFC 1/3] x86: Make page cache mode a real type
  2014-08-21  9:30     ` [Xen-devel] " Juergen Gross
@ 2014-08-22  9:24         ` Jan Beulich
  0 siblings, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2014-08-22  9:24 UTC (permalink / raw)
  To: Toshi Kani, Juergen Gross
  Cc: stefan.bader, x86, ville.syrjala, xen-devel, linux-kernel, hpa

>>> On 21.08.14 at 11:30, <JGross@suse.com> wrote:
> On 08/20/2014 09:26 PM, Toshi Kani wrote:
>> On Tue, 2014-08-19 at 15:25 +0200, jgross@suse.com wrote:
>>> --- a/arch/x86/mm/init.c
>>> +++ b/arch/x86/mm/init.c
>>> @@ -27,6 +27,35 @@
>>>
>>>   #include "mm_internal.h"
>>>
>>> +/*
>>> + * Tables translating between page_cache_type_t and pte encoding.
>>> + * Minimal supported modes are defined statically, modified if more supported
>>> + * cache modes are available.
>>> + * Index into __cachemode2pte_tbl is the cachemode.
>>> + * Index into __pte2cachemode_tbl are the caching attribute bits of the pte
>>> + * (_PAGE_PWT, _PAGE_PCD, _PAGE_PAT) at index bit positions 0, 1, 2.
>>> + */
>>> +uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM] = {
>>> +	[_PAGE_CACHE_MODE_WB]		= 0,
>>> +	[_PAGE_CACHE_MODE_WC]		= _PAGE_PWT,
>>> +	[_PAGE_CACHE_MODE_UC_MINUS]	= _PAGE_PCD,
>>> +	[_PAGE_CACHE_MODE_UC]		= _PAGE_PCD | _PAGE_PWT,
>>> +	[_PAGE_CACHE_MODE_WT]		= _PAGE_PWT,
>>> +	[_PAGE_CACHE_MODE_WP]		= _PAGE_PWT,
>>> +};
>>
>> I think WT and WP should be set to _PAGE_PCD (UC_MINUS) for safe.
> 
> Oh, you are right.

Actually I suppose the original comment was about WC and WP;
defaulting WT to _PAGE_PWT seems quite correct to me.

Jan


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Xen-devel] [PATCH RFC 2/3] x86: Enable PAT to use cache mode translation tables
  2014-08-19 13:25 ` [PATCH RFC 2/3] x86: Enable PAT to use cache mode translation tables jgross
  2014-08-22  9:32   ` Jan Beulich
@ 2014-08-22  9:32   ` Jan Beulich
  2014-08-25 12:22     ` Juergen Gross
  2014-08-25 12:22     ` [Xen-devel] " Juergen Gross
  1 sibling, 2 replies; 22+ messages in thread
From: Jan Beulich @ 2014-08-22  9:32 UTC (permalink / raw)
  To: Juergen Gross
  Cc: stefan.bader, toshi.kani, x86, ville.syrjala, xen-devel,
	konrad.wilk, linux-kernel, hpa

>>> On 19.08.14 at 15:25, <JGross@suse.com> wrote:
> @@ -118,8 +167,14 @@ void pat_init(void)
>  	      PAT(4, WB) | PAT(5, WC) | PAT(6, UC_MINUS) | PAT(7, UC);
>  
>  	/* Boot CPU check */
> -	if (!boot_pat_state)
> +	if (!boot_pat_state) {
>  		rdmsrl(MSR_IA32_CR_PAT, boot_pat_state);
> +		/*
> +		 * Init cache mode tables before writing MSR to give Xen a
> +		 * chance to correct the changes when doing the write.
> +		 */

This comment seems pretty odd to me: For one, a PV guest on Xen
shouldn't be trying to write PAT MSR at all under the current ABI
(the write will be ignored, yes, but accompanied with a warning
message, which PV kernels - by the mere fact that they're PV -
should try to avoid). And then "correct the changes" both gives
the impression as if they were wrong and as if some of what the
kernel writes may be under the kernel's control. Hence I think this
code and comment should either be consistently assuming that the
kernel has no control at all, or should read back the value after
having written it, and set the internal tables based on the value
read back.

Jan

> +		pat_init_cache_modes(pat);
> +	}
>  
>  	wrmsrl(MSR_IA32_CR_PAT, pat);
>  



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Xen-devel] [PATCH RFC 1/3] x86: Make page cache mode a real type
  2014-08-22  9:24         ` Jan Beulich
  (?)
@ 2014-08-22 17:43         ` Toshi Kani
  -1 siblings, 0 replies; 22+ messages in thread
From: Toshi Kani @ 2014-08-22 17:43 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Juergen Gross, stefan.bader, x86, ville.syrjala, xen-devel,
	linux-kernel, hpa

On Fri, 2014-08-22 at 10:24 +0100, Jan Beulich wrote:
> >>> On 21.08.14 at 11:30, <JGross@suse.com> wrote:
> > On 08/20/2014 09:26 PM, Toshi Kani wrote:
> >> On Tue, 2014-08-19 at 15:25 +0200, jgross@suse.com wrote:
> >>> --- a/arch/x86/mm/init.c
> >>> +++ b/arch/x86/mm/init.c
> >>> @@ -27,6 +27,35 @@
> >>>
> >>>   #include "mm_internal.h"
> >>>
> >>> +/*
> >>> + * Tables translating between page_cache_type_t and pte encoding.
> >>> + * Minimal supported modes are defined statically, modified if more supported
> >>> + * cache modes are available.
> >>> + * Index into __cachemode2pte_tbl is the cachemode.
> >>> + * Index into __pte2cachemode_tbl are the caching attribute bits of the pte
> >>> + * (_PAGE_PWT, _PAGE_PCD, _PAGE_PAT) at index bit positions 0, 1, 2.
> >>> + */
> >>> +uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM] = {
> >>> +	[_PAGE_CACHE_MODE_WB]		= 0,
> >>> +	[_PAGE_CACHE_MODE_WC]		= _PAGE_PWT,
> >>> +	[_PAGE_CACHE_MODE_UC_MINUS]	= _PAGE_PCD,
> >>> +	[_PAGE_CACHE_MODE_UC]		= _PAGE_PCD | _PAGE_PWT,
> >>> +	[_PAGE_CACHE_MODE_WT]		= _PAGE_PWT,
> >>> +	[_PAGE_CACHE_MODE_WP]		= _PAGE_PWT,
> >>> +};
> >>
> >> I think WT and WP should be set to _PAGE_PCD (UC_MINUS) for safe.
> > 
> > Oh, you are right.
> 
> Actually I suppose the original comment was about WC and WP;
> defaulting WT to _PAGE_PWT seems quite correct to me.

My comment was about WT and WP, whose cache modes are defined here but
are not supported in this patchset.  _PAGE_PWT is used by WC, which has
weakly ordered writes.  WT and WP have strongly ordered writes, so they
should be redirected to UC- to be safe.
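
A sketch of the static defaults with that fallback applied (WT and WP map to
UC- until the PAT MSR actually provides those modes):

	uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM] = {
		[_PAGE_CACHE_MODE_WB]		= 0,
		[_PAGE_CACHE_MODE_WC]		= _PAGE_PWT,
		[_PAGE_CACHE_MODE_UC_MINUS]	= _PAGE_PCD,
		[_PAGE_CACHE_MODE_UC]		= _PAGE_PCD | _PAGE_PWT,
		[_PAGE_CACHE_MODE_WT]		= _PAGE_PCD,	/* UC- fallback */
		[_PAGE_CACHE_MODE_WP]		= _PAGE_PCD,	/* UC- fallback */
	};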

Thanks,
-Toshi


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [Xen-devel] [PATCH RFC 2/3] x86: Enable PAT to use cache mode translation tables
  2014-08-22  9:32   ` [Xen-devel] " Jan Beulich
  2014-08-25 12:22     ` Juergen Gross
@ 2014-08-25 12:22     ` Juergen Gross
  1 sibling, 0 replies; 22+ messages in thread
From: Juergen Gross @ 2014-08-25 12:22 UTC (permalink / raw)
  To: Jan Beulich
  Cc: stefan.bader, toshi.kani, x86, ville.syrjala, xen-devel,
	konrad.wilk, linux-kernel, hpa

On 08/22/2014 11:32 AM, Jan Beulich wrote:
>>>> On 19.08.14 at 15:25, <JGross@suse.com> wrote:
>> @@ -118,8 +167,14 @@ void pat_init(void)
>>   	      PAT(4, WB) | PAT(5, WC) | PAT(6, UC_MINUS) | PAT(7, UC);
>>
>>   	/* Boot CPU check */
>> -	if (!boot_pat_state)
>> +	if (!boot_pat_state) {
>>   		rdmsrl(MSR_IA32_CR_PAT, boot_pat_state);
>> +		/*
>> +		 * Init cache mode tables before writing MSR to give Xen a
>> +		 * chance to correct the changes when doing the write.
>> +		 */
>
> This comment seems pretty odd to me: For one, a PV guest on Xen
> shouldn't be trying to write PAT MSR at all under the current ABI
> (the write will be ignored, yes, but accompanied with a warning
> message, which PV kernels - by the mere fact that they're PV -
> should try to avoid). And then "correct the changes" both gives
> the impression as if they were wrong and as if some of what the
> kernel writes may be under the kernel's control. Hence I think this
> code and comment should either be consistently assuming that the
> kernel has no control at all, or should read back the value after
> having written it, and set the internal tables based on the value
> read back.

I think the latter alternative is the better one. I'll change the
patch.
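
For illustration, the read-back variant could look roughly like this (a
sketch only, not the code that was posted later):

	wrmsrl(MSR_IA32_CR_PAT, pat);
	/* base the translation tables on what actually took effect */
	rdmsrl(MSR_IA32_CR_PAT, pat);
	pat_init_cache_modes(pat);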

Juergen


^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2014-08-25 12:22 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-08-19 13:25 [PATCH RFC 0/3] x86: Full support of PAT jgross
2014-08-19 13:25 ` [PATCH RFC 1/3] x86: Make page cache mode a real type jgross
2014-08-20 19:26   ` Toshi Kani
2014-08-21  9:30     ` [Xen-devel] " Juergen Gross
2014-08-22  9:24       ` Jan Beulich
2014-08-22  9:24         ` Jan Beulich
2014-08-22 17:43         ` Toshi Kani
2014-08-21 22:09   ` Toshi Kani
2014-08-22  5:25     ` Juergen Gross
2014-08-19 13:25 ` [PATCH RFC 2/3] x86: Enable PAT to use cache mode translation tables jgross
2014-08-22  9:32   ` Jan Beulich
2014-08-22  9:32   ` [Xen-devel] " Jan Beulich
2014-08-25 12:22     ` Juergen Gross
2014-08-25 12:22     ` [Xen-devel] " Juergen Gross
2014-08-19 13:25 ` [PATCH RFC 3/3] Support Xen pv-domains using PAT jgross
2014-08-20 12:05 ` [PATCH RFC 0/3] x86: Full support of PAT One Thousand Gnomes
2014-08-20 12:05   ` One Thousand Gnomes
2014-08-20 12:21   ` [Xen-devel] " Jan Beulich
2014-08-20 12:21     ` Jan Beulich
2014-08-20 22:00     ` [Xen-devel] " H. Peter Anvin
2014-08-20 12:35   ` Juergen Gross
2014-08-20 21:59   ` H. Peter Anvin
