* [PATCH V6 00/18] x86: Full support of PAT
@ 2014-11-03 13:01 Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 01/18] x86: Make page cache mode a real type Juergen Gross
                   ` (20 more replies)
  0 siblings, 21 replies; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

The x86 architecture offers, via the PAT (Page Attribute Table), a way to
specify different caching modes in page table entries. The PAT MSR contains
8 entries, each specifying one of 6 possible cache modes. A pte references one
of those entries via 3 bits: _PAGE_PAT, _PAGE_PWT and _PAGE_PCD.
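
For illustration, the PAT MSR entry used for a 4k mapping is simply the
3-bit index formed from those pte bits (a minimal sketch, not code from
this series; bit positions as defined in pgtable_types.h):

	/* PAT MSR entry index (0-7) selected by a 4k pte (sketch) */
	static unsigned int pte_pat_index(unsigned long pte)
	{
		return (((pte >> 3) & 1) |		/* _PAGE_PWT -> bit 0 */
			(((pte >> 4) & 1) << 1) |	/* _PAGE_PCD -> bit 1 */
			(((pte >> 7) & 1) << 2));	/* _PAGE_PAT -> bit 2 */
	}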

The Linux kernel currently supports only 4 different cache modes. The PAT MSR
is set up so that the setting of _PAGE_PAT in a pte doesn't matter: the top 4
entries in the PAT MSR are the same as the lower 4 entries.

As a result the kernel doesn't support e.g. write-through mode. This cache
mode in particular would speed up drivers of video cards, which currently
have to use uncached accesses.

OTOH some old processors (Pentium) don't support PAT correctly, and the Xen
hypervisor has been using a different PAT MSR configuration for some time now
and can't change it, as this setting is part of its ABI.

This patch set abstracts the cache mode from the pte and introduces tables to
translate between cache mode and pte bits (the default cache mode "write back"
is hard-wired to PAT entry 0). The tables are statically initialized with
values compatible with old processors and current usage. As soon as the PAT
MSR is changed (or - in the case of Xen - is read at boot time) the tables are
updated accordingly. Requests for mappings with special cache modes are always
possible now; if a mode is not supported there will be a fallback to a
compatible but slower mode.
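
Expressed as code, the caller's view after this series is roughly the
following (a sketch using the helpers introduced in patch 1; the
fallback values follow the static tables defined there):

	/* a driver asks for WT and gets the best supported mode */
	pgprot_t prot = cachemode2pgprot(_PAGE_CACHE_MODE_WT);
	/* full PAT support: the pte bits select a real WT entry   */
	/* old CPU or default PAT MSR: fallback to UC- (_PAGE_PCD) */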

Summing it up, this patch set adds the following features:
- capability to support WT and WP cache modes on processors with full PAT
  support
- processors with no or incorrect PAT support keep working as they do today,
  even if the WT or WP cache mode is selected by drivers for some pages
- reduction of Xen special handling regarding cache mode

Changes in V6:
- add new patch 10 (x86: Remove looking for setting of _PAGE_PAT_LARGE in
  pageattr.c) as suggested by Thomas Gleixner
- replaced SOB of Stefan Bader by "Based-on-patch-by:" as suggested by
  Borislav Petkov

Changes in V5:
- split up first patch as requested by Ingo Molnar and Thomas Gleixner
- add a helper function in pat_init_cache_modes() as requested by Ingo Molnar

Changes in V4:
- rebased to 3.18-rc2

Changes in V3:
- corrected two minor nits (UC_MINUS, again) detected by Toshi Kani

Changes in V2:
- simplified handling of PAT MSR write under Xen as suggested by David Vrabel
- removed resetting of pat_enabled under Xen
- two small corrections requested by Toshi Kani (UC_MINUS cache mode in
  vermilion driver, fix 32 bit kernel build failure)
- correct build error on non-x86 arch by moving definition of
  update_cache_mode_entry() to x86 specific header

Changes since RFC:
- renamed functions and variables as suggested by Toshi Kani
- corrected cache mode bits for WT and WP
- modified handling of PAT MSR write under Xen as suggested by Jan Beulich


Juergen Gross (18):
  x86: Make page cache mode a real type
  x86: Use new cache mode type in include/asm/fb.h
  x86: Use new cache mode type in drivers/video/fbdev/gbefb.c
  x86: Use new cache mode type in drivers/video/fbdev/vermilion
  x86: Use new cache mode type in arch/x86/pci
  x86: Use new cache mode type in arch/x86/mm/init_64.c
  x86: Use new cache mode type in asm/pgtable.h
  x86: Use new cache mode type in mm/iomap_32.c
  x86: Use new cache mode type in track_pfn_remap() and
    track_pfn_insert()
  x86: Remove looking for setting of _PAGE_PAT_LARGE in pageattr.c
  x86: Use new cache mode type in setting page attributes
  x86: Use new cache mode type in mm/ioremap.c
  x86: Use new cache mode type in memtype related functions
  x86: Clean up pgtable_types.h
  x86: Support PAT bit in pagetable dump for lower levels
  x86: Respect PAT bit when copying pte values between large and normal
    pages
  x86: Enable PAT to use cache mode translation tables
  xen: Support Xen pv-domains using PAT

 arch/x86/include/asm/cacheflush.h         |  38 ++++---
 arch/x86/include/asm/fb.h                 |   6 +-
 arch/x86/include/asm/io.h                 |   2 +-
 arch/x86/include/asm/pat.h                |   7 +-
 arch/x86/include/asm/pgtable.h            |  19 ++--
 arch/x86/include/asm/pgtable_types.h      |  96 ++++++++++++----
 arch/x86/mm/dump_pagetables.c             |  24 ++--
 arch/x86/mm/init.c                        |  37 +++++++
 arch/x86/mm/init_64.c                     |   9 +-
 arch/x86/mm/iomap_32.c                    |  12 +-
 arch/x86/mm/ioremap.c                     |  63 ++++++-----
 arch/x86/mm/mm_internal.h                 |   2 +
 arch/x86/mm/pageattr.c                    |  84 ++++++++------
 arch/x86/mm/pat.c                         | 176 +++++++++++++++++++-----------
 arch/x86/mm/pat_internal.h                |  22 ++--
 arch/x86/mm/pat_rbtree.c                  |   8 +-
 arch/x86/pci/i386.c                       |   4 +-
 arch/x86/xen/enlighten.c                  |  25 ++---
 arch/x86/xen/mmu.c                        |  47 +-------
 arch/x86/xen/xen-ops.h                    |   1 -
 drivers/video/fbdev/gbefb.c               |   3 +-
 drivers/video/fbdev/vermilion/vermilion.c |   6 +-
 22 files changed, 412 insertions(+), 279 deletions(-)

-- 
1.8.4.5


* [PATCH V6 01/18] x86: Make page cache mode a real type
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
@ 2014-11-03 13:01 ` Juergen Gross
  2014-11-16 10:54   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2015-01-22  7:11   ` [PATCH V6 01/18] " Steven Noonan
  2014-11-03 13:01 ` [PATCH V6 02/18] x86: Use new cache mode type in include/asm/fb.h Juergen Gross
                   ` (19 subsequent siblings)
  20 siblings, 2 replies; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

At the moment there are a lot of places that handle setting or getting
the page cache mode by treating the pgprot bits as equal to the cache mode.
This only works because of numerous assumptions about the setup of the
PAT MSR. Otherwise the cache type would need to be translated into pgprot
bits and vice versa.

This patch tries to prepare for that by introducing a separate type
for the cache mode and adding functions to translate between those and
pgprot values.

To avoid too large a performance penalty the translation between cache mode
and pgprot values is done via tables which contain the relevant
information. Write-back cache mode is hard-wired to be 0; all other
modes are configurable via those tables. For large pages there are
separate translation functions, as the PAT bit is located at a different
position in the ptes of 4k and large pages.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/pgtable_types.h | 73 +++++++++++++++++++++++++++++++++++-
 arch/x86/mm/init.c                   | 29 ++++++++++++++
 2 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 0778964..5124642 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -128,12 +128,34 @@
 			 _PAGE_SOFT_DIRTY | _PAGE_NUMA)
 #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE | _PAGE_NUMA)
 
-#define _PAGE_CACHE_MASK	(_PAGE_PCD | _PAGE_PWT)
 #define _PAGE_CACHE_WB		(0)
 #define _PAGE_CACHE_WC		(_PAGE_PWT)
 #define _PAGE_CACHE_UC_MINUS	(_PAGE_PCD)
 #define _PAGE_CACHE_UC		(_PAGE_PCD | _PAGE_PWT)
 
+/*
+ * The cache modes defined here are used to translate between pure SW usage
+ * and the HW defined cache mode bits and/or PAT entries.
+ *
+ * The resulting bits for PWT, PCD and PAT should be chosen in a way
+ * to have the WB mode at index 0 (all bits clear). This is the default
+ * right now and likely would break too much if changed.
+ */
+#ifndef __ASSEMBLY__
+enum page_cache_mode {
+	_PAGE_CACHE_MODE_WB = 0,
+	_PAGE_CACHE_MODE_WC = 1,
+	_PAGE_CACHE_MODE_UC_MINUS = 2,
+	_PAGE_CACHE_MODE_UC = 3,
+	_PAGE_CACHE_MODE_WT = 4,
+	_PAGE_CACHE_MODE_WP = 5,
+	_PAGE_CACHE_MODE_NUM = 8
+};
+#endif
+
+#define _PAGE_CACHE_MASK	(_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)
+#define _PAGE_NOCACHE		(cachemode2protval(_PAGE_CACHE_MODE_UC))
+
 #define PAGE_NONE	__pgprot(_PAGE_PROTNONE | _PAGE_ACCESSED)
 #define PAGE_SHARED	__pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
 				 _PAGE_ACCESSED | _PAGE_NX)
@@ -341,6 +363,55 @@ static inline pmdval_t pmdnuma_flags(pmd_t pmd)
 #define pgprot_val(x)	((x).pgprot)
 #define __pgprot(x)	((pgprot_t) { (x) } )
 
+extern uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM];
+extern uint8_t __pte2cachemode_tbl[8];
+
+#define __pte2cm_idx(cb)				\
+	((((cb) >> (_PAGE_BIT_PAT - 2)) & 4) |		\
+	 (((cb) >> (_PAGE_BIT_PCD - 1)) & 2) |		\
+	 (((cb) >> _PAGE_BIT_PWT) & 1))
+
+static inline unsigned long cachemode2protval(enum page_cache_mode pcm)
+{
+	if (likely(pcm == 0))
+		return 0;
+	return __cachemode2pte_tbl[pcm];
+}
+static inline pgprot_t cachemode2pgprot(enum page_cache_mode pcm)
+{
+	return __pgprot(cachemode2protval(pcm));
+}
+static inline enum page_cache_mode pgprot2cachemode(pgprot_t pgprot)
+{
+	unsigned long masked;
+
+	masked = pgprot_val(pgprot) & _PAGE_CACHE_MASK;
+	if (likely(masked == 0))
+		return 0;
+	return __pte2cachemode_tbl[__pte2cm_idx(masked)];
+}
+static inline pgprot_t pgprot_4k_2_large(pgprot_t pgprot)
+{
+	pgprot_t new;
+	unsigned long val;
+
+	val = pgprot_val(pgprot);
+	pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
+		((val & _PAGE_PAT) << (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
+	return new;
+}
+static inline pgprot_t pgprot_large_2_4k(pgprot_t pgprot)
+{
+	pgprot_t new;
+	unsigned long val;
+
+	val = pgprot_val(pgprot);
+	pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
+			  ((val & _PAGE_PAT_LARGE) >>
+			   (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
+	return new;
+}
+
 
 typedef struct page *pgtable_t;
 
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 66dba36..a9776ba 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -27,6 +27,35 @@
 
 #include "mm_internal.h"
 
+/*
+ * Tables translating between page_cache_type_t and pte encoding.
+ * Minimal supported modes are defined statically, modified if more supported
+ * cache modes are available.
+ * Index into __cachemode2pte_tbl is the cachemode.
+ * Index into __pte2cachemode_tbl are the caching attribute bits of the pte
+ * (_PAGE_PWT, _PAGE_PCD, _PAGE_PAT) at index bit positions 0, 1, 2.
+ */
+uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM] = {
+	[_PAGE_CACHE_MODE_WB]		= 0,
+	[_PAGE_CACHE_MODE_WC]		= _PAGE_PWT,
+	[_PAGE_CACHE_MODE_UC_MINUS]	= _PAGE_PCD,
+	[_PAGE_CACHE_MODE_UC]		= _PAGE_PCD | _PAGE_PWT,
+	[_PAGE_CACHE_MODE_WT]		= _PAGE_PCD,
+	[_PAGE_CACHE_MODE_WP]		= _PAGE_PCD,
+};
+EXPORT_SYMBOL_GPL(__cachemode2pte_tbl);
+uint8_t __pte2cachemode_tbl[8] = {
+	[__pte2cm_idx(0)] = _PAGE_CACHE_MODE_WB,
+	[__pte2cm_idx(_PAGE_PWT)] = _PAGE_CACHE_MODE_WC,
+	[__pte2cm_idx(_PAGE_PCD)] = _PAGE_CACHE_MODE_UC_MINUS,
+	[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD)] = _PAGE_CACHE_MODE_UC,
+	[__pte2cm_idx(_PAGE_PAT)] = _PAGE_CACHE_MODE_WB,
+	[__pte2cm_idx(_PAGE_PWT | _PAGE_PAT)] = _PAGE_CACHE_MODE_WC,
+	[__pte2cm_idx(_PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC_MINUS,
+	[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC,
+};
+EXPORT_SYMBOL_GPL(__pte2cachemode_tbl);
+
 static unsigned long __initdata pgt_buf_start;
 static unsigned long __initdata pgt_buf_end;
 static unsigned long __initdata pgt_buf_top;
-- 
1.8.4.5
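
For reference, a minimal usage sketch of the helpers added above
(illustrative only, not part of the patch):

	enum page_cache_mode pcm = _PAGE_CACHE_MODE_WC;
	pgprot_t prot = cachemode2pgprot(pcm);	/* mode -> pte bits */
	BUG_ON(pgprot2cachemode(prot) != pcm);	/* pte bits -> mode */

	/* the PAT bit moves from bit 7 (4k) to bit 12 (2M/1G) */
	pgprot_t large = pgprot_4k_2_large(prot);
	BUG_ON(pgprot_val(pgprot_large_2_4k(large)) != pgprot_val(prot));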


* [PATCH V6 02/18] x86: Use new cache mode type in include/asm/fb.h
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 01/18] x86: Make page cache mode a real type Juergen Gross
@ 2014-11-03 13:01 ` Juergen Gross
  2014-11-16 10:54   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 03/18] x86: Use new cache mode type in drivers/video/fbdev/gbefb.c Juergen Gross
                   ` (18 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Instead of directly using the cache mode bits in the pte, switch to using
the new cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/fb.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/fb.h b/arch/x86/include/asm/fb.h
index 2519d06..c3dd5e7 100644
--- a/arch/x86/include/asm/fb.h
+++ b/arch/x86/include/asm/fb.h
@@ -8,8 +8,12 @@
 static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
 				unsigned long off)
 {
+	unsigned long prot;
+
+	prot = pgprot_val(vma->vm_page_prot) & ~_PAGE_CACHE_MASK;
 	if (boot_cpu_data.x86 > 3)
-		pgprot_val(vma->vm_page_prot) |= _PAGE_PCD;
+		pgprot_val(vma->vm_page_prot) =
+			prot | cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS);
 }
 
 extern int fb_is_primary_device(struct fb_info *info);
-- 
1.8.4.5


* [PATCH V6 03/18] x86: Use new cache mode type in drivers/video/fbdev/gbefb.c
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 01/18] x86: Make page cache mode a real type Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 02/18] x86: Use new cache mode type in include/asm/fb.h Juergen Gross
@ 2014-11-03 13:01 ` Juergen Gross
  2014-11-07  8:16   ` Tomi Valkeinen
  2014-11-16 10:55   ` [tip:x86/mm] x86: Use new cache mode type in drivers/video/fbdev/gbefb.c tip-bot for Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 04/18] x86: Use new cache mode type in drivers/video/fbdev/vermilion Juergen Gross
                   ` (17 subsequent siblings)
  20 siblings, 2 replies; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 drivers/video/fbdev/gbefb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/video/fbdev/gbefb.c b/drivers/video/fbdev/gbefb.c
index 4aa56ba..6d9ef39 100644
--- a/drivers/video/fbdev/gbefb.c
+++ b/drivers/video/fbdev/gbefb.c
@@ -54,7 +54,8 @@ struct gbefb_par {
 #endif
 #endif
 #ifdef CONFIG_X86
-#define pgprot_fb(_prot) ((_prot) | _PAGE_PCD)
+#define pgprot_fb(_prot) (((_prot) & ~_PAGE_CACHE_MASK) |	\
+			  cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS))
 #endif
 
 /*
-- 
1.8.4.5


* [PATCH V6 04/18] x86: Use new cache mode type in drivers/video/fbdev/vermilion
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (2 preceding siblings ...)
  2014-11-03 13:01 ` [PATCH V6 03/18] x86: Use new cache mode type in drivers/video/fbdev/gbefb.c Juergen Gross
@ 2014-11-03 13:01 ` Juergen Gross
  2014-11-16 10:55   ` [tip:x86/mm] x86: Use new cache mode type in drivers/video/fbdev/vermilion tip-bot for Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 05/18] x86: Use new cache mode type in arch/x86/pci Juergen Gross
                   ` (16 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 drivers/video/fbdev/vermilion/vermilion.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/video/fbdev/vermilion/vermilion.c b/drivers/video/fbdev/vermilion/vermilion.c
index 5f930ae..6b70d7f 100644
--- a/drivers/video/fbdev/vermilion/vermilion.c
+++ b/drivers/video/fbdev/vermilion/vermilion.c
@@ -1003,13 +1003,15 @@ static int vmlfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
 	struct vml_info *vinfo = container_of(info, struct vml_info, info);
 	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
 	int ret;
+	unsigned long prot;
 
 	ret = vmlfb_vram_offset(vinfo, offset);
 	if (ret)
 		return -EINVAL;
 
-	pgprot_val(vma->vm_page_prot) |= _PAGE_PCD;
-	pgprot_val(vma->vm_page_prot) &= ~_PAGE_PWT;
+	prot = pgprot_val(vma->vm_page_prot) & ~_PAGE_CACHE_MASK;
+	pgprot_val(vma->vm_page_prot) =
+		prot | cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS);
 
 	return vm_iomap_memory(vma, vinfo->vram_start,
 			vinfo->vram_contig_size);
-- 
1.8.4.5


* [PATCH V6 05/18] x86: Use new cache mode type in arch/x86/pci
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (3 preceding siblings ...)
  2014-11-03 13:01 ` [PATCH V6 04/18] x86: Use new cache mode type in drivers/video/fbdev/vermilion Juergen Gross
@ 2014-11-03 13:01 ` Juergen Gross
  2014-11-16 10:55   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 06/18] x86: Use new cache mode type in arch/x86/mm/init_64.c Juergen Gross
                   ` (15 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/pci/i386.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/pci/i386.c b/arch/x86/pci/i386.c
index 37c1435..9b18ef3 100644
--- a/arch/x86/pci/i386.c
+++ b/arch/x86/pci/i386.c
@@ -433,14 +433,14 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
 		return -EINVAL;
 
 	if (pat_enabled && write_combine)
-		prot |= _PAGE_CACHE_WC;
+		prot |= cachemode2protval(_PAGE_CACHE_MODE_WC);
 	else if (pat_enabled || boot_cpu_data.x86 > 3)
 		/*
 		 * ioremap() and ioremap_nocache() defaults to UC MINUS for now.
 		 * To avoid attribute conflicts, request UC MINUS here
 		 * as well.
 		 */
-		prot |= _PAGE_CACHE_UC_MINUS;
+		prot |= cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS);
 
 	vma->vm_page_prot = __pgprot(prot);
 
-- 
1.8.4.5


* [PATCH V6 06/18] x86: Use new cache mode type in arch/x86/mm/init_64.c
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (4 preceding siblings ...)
  2014-11-03 13:01 ` [PATCH V6 05/18] x86: Use new cache mode type in arch/x86/pci Juergen Gross
@ 2014-11-03 13:01 ` Juergen Gross
  2014-11-16 10:55   ` [tip:x86/mm] x86: Use new cache mode type in arch/x86/mm/init_64.c tip-bot for Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 07/18] x86: Use new cache mode type in asm/pgtable.h Juergen Gross
                   ` (14 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/mm/init_64.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 4cb8763..5390a5f 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -338,12 +338,15 @@ pte_t * __init populate_extra_pte(unsigned long vaddr)
  * Create large page table mappings for a range of physical addresses.
  */
 static void __init __init_extra_mapping(unsigned long phys, unsigned long size,
-						pgprot_t prot)
+					enum page_cache_mode cache)
 {
 	pgd_t *pgd;
 	pud_t *pud;
 	pmd_t *pmd;
+	pgprot_t prot;
 
+	pgprot_val(prot) = pgprot_val(PAGE_KERNEL_LARGE) |
+		pgprot_val(pgprot_4k_2_large(cachemode2pgprot(cache)));
 	BUG_ON((phys & ~PMD_MASK) || (size & ~PMD_MASK));
 	for (; size; phys += PMD_SIZE, size -= PMD_SIZE) {
 		pgd = pgd_offset_k((unsigned long)__va(phys));
@@ -366,12 +369,12 @@ static void __init __init_extra_mapping(unsigned long phys, unsigned long size,
 
 void __init init_extra_mapping_wb(unsigned long phys, unsigned long size)
 {
-	__init_extra_mapping(phys, size, PAGE_KERNEL_LARGE);
+	__init_extra_mapping(phys, size, _PAGE_CACHE_MODE_WB);
 }
 
 void __init init_extra_mapping_uc(unsigned long phys, unsigned long size)
 {
-	__init_extra_mapping(phys, size, PAGE_KERNEL_LARGE_NOCACHE);
+	__init_extra_mapping(phys, size, _PAGE_CACHE_MODE_UC);
 }
 
 /*
-- 
1.8.4.5


* [PATCH V6 07/18] x86: Use new cache mode type in asm/pgtable.h
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (5 preceding siblings ...)
  2014-11-03 13:01 ` [PATCH V6 06/18] x86: Use new cache mode type in arch/x86/mm/init_64.c Juergen Gross
@ 2014-11-03 13:01 ` Juergen Gross
  2014-11-16 10:56   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 08/18] x86: Use new cache mode type in mm/iomap_32.c Juergen Gross
                   ` (13 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type. This requires changing some callers of
is_new_memtype_allowed() as well.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/pgtable.h | 19 ++++++++++---------
 arch/x86/mm/ioremap.c          |  3 ++-
 arch/x86/mm/pat.c              |  8 ++++++--
 3 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index aa97a07..c112ea6 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -9,9 +9,10 @@
 /*
  * Macro to mark a page protection value as UC-
  */
-#define pgprot_noncached(prot)					\
-	((boot_cpu_data.x86 > 3)				\
-	 ? (__pgprot(pgprot_val(prot) | _PAGE_CACHE_UC_MINUS))	\
+#define pgprot_noncached(prot)						\
+	((boot_cpu_data.x86 > 3)					\
+	 ? (__pgprot(pgprot_val(prot) |					\
+		     cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS)))	\
 	 : (prot))
 
 #ifndef __ASSEMBLY__
@@ -404,8 +405,8 @@ static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
 #define canon_pgprot(p) __pgprot(massage_pgprot(p))
 
 static inline int is_new_memtype_allowed(u64 paddr, unsigned long size,
-					 unsigned long flags,
-					 unsigned long new_flags)
+					 enum page_cache_mode pcm,
+					 enum page_cache_mode new_pcm)
 {
 	/*
 	 * PAT type is always WB for untracked ranges, so no need to check.
@@ -419,10 +420,10 @@ static inline int is_new_memtype_allowed(u64 paddr, unsigned long size,
 	 * - request is uncached, return cannot be write-back
 	 * - request is write-combine, return cannot be write-back
 	 */
-	if ((flags == _PAGE_CACHE_UC_MINUS &&
-	     new_flags == _PAGE_CACHE_WB) ||
-	    (flags == _PAGE_CACHE_WC &&
-	     new_flags == _PAGE_CACHE_WB)) {
+	if ((pcm == _PAGE_CACHE_MODE_UC_MINUS &&
+	     new_pcm == _PAGE_CACHE_MODE_WB) ||
+	    (pcm == _PAGE_CACHE_MODE_WC &&
+	     new_pcm == _PAGE_CACHE_MODE_WB)) {
 		return 0;
 	}
 
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index af78e50..3a81eb9 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -142,7 +142,8 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 
 	if (prot_val != new_prot_val) {
 		if (!is_new_memtype_allowed(phys_addr, size,
-					    prot_val, new_prot_val)) {
+				pgprot2cachemode(__pgprot(prot_val)),
+				pgprot2cachemode(__pgprot(new_prot_val)))) {
 			printk(KERN_ERR
 		"ioremap error for 0x%llx-0x%llx, requested 0x%lx, got 0x%lx\n",
 				(unsigned long long)phys_addr,
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 6574388..47282c2 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -455,7 +455,9 @@ int io_reserve_memtype(resource_size_t start, resource_size_t end,
 	if (ret)
 		goto out_err;
 
-	if (!is_new_memtype_allowed(start, size, req_type, new_type))
+	if (!is_new_memtype_allowed(start, size,
+				    pgprot2cachemode(__pgprot(req_type)),
+				    pgprot2cachemode(__pgprot(new_type))))
 		goto out_free;
 
 	if (kernel_map_sync_memtype(start, size, new_type) < 0)
@@ -630,7 +632,9 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 
 	if (flags != want_flags) {
 		if (strict_prot ||
-		    !is_new_memtype_allowed(paddr, size, want_flags, flags)) {
+		    !is_new_memtype_allowed(paddr, size,
+				pgprot2cachemode(__pgprot(want_flags)),
+				pgprot2cachemode(__pgprot(flags)))) {
 			free_memtype(paddr, paddr + size);
 			printk(KERN_ERR "%s:%d map pfn expected mapping type %s"
 				" for [mem %#010Lx-%#010Lx], got %s\n",
-- 
1.8.4.5
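
To illustrate the rules encoded in is_new_memtype_allowed() (a sketch;
paddr and size stand for any tracked range):

	/* UC- requested, tracker would hand back WB: not allowed */
	is_new_memtype_allowed(paddr, size, _PAGE_CACHE_MODE_UC_MINUS,
			       _PAGE_CACHE_MODE_WB);		/* == 0 */

	/* WB requested, tracker hands back UC-: allowed (stricter) */
	is_new_memtype_allowed(paddr, size, _PAGE_CACHE_MODE_WB,
			       _PAGE_CACHE_MODE_UC_MINUS);	/* == 1 */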


* [PATCH V6 08/18] x86: Use new cache mode type in mm/iomap_32.c
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (6 preceding siblings ...)
  2014-11-03 13:01 ` [PATCH V6 07/18] x86: Use new cache mode type in asm/pgtable.h Juergen Gross
@ 2014-11-03 13:01 ` Juergen Gross
  2014-11-16 10:56   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 09/18] x86: Use new cache mode type in track_pfn_remap() and track_pfn_insert() Juergen Gross
                   ` (12 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type. This requires changing io_reserve_memtype()
as well.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/pat.h |  2 +-
 arch/x86/mm/iomap_32.c     | 12 +++++++-----
 arch/x86/mm/pat.c          | 18 ++++++++++--------
 3 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/pat.h b/arch/x86/include/asm/pat.h
index e2c1668..a8438bc 100644
--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -20,7 +20,7 @@ extern int kernel_map_sync_memtype(u64 base, unsigned long size,
 		unsigned long flag);
 
 int io_reserve_memtype(resource_size_t start, resource_size_t end,
-			unsigned long *type);
+			enum page_cache_mode *pcm);
 
 void io_free_memtype(resource_size_t start, resource_size_t end);
 
diff --git a/arch/x86/mm/iomap_32.c b/arch/x86/mm/iomap_32.c
index 7b179b49..9ca35fc 100644
--- a/arch/x86/mm/iomap_32.c
+++ b/arch/x86/mm/iomap_32.c
@@ -33,17 +33,17 @@ static int is_io_mapping_possible(resource_size_t base, unsigned long size)
 
 int iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot)
 {
-	unsigned long flag = _PAGE_CACHE_WC;
+	enum page_cache_mode pcm = _PAGE_CACHE_MODE_WC;
 	int ret;
 
 	if (!is_io_mapping_possible(base, size))
 		return -EINVAL;
 
-	ret = io_reserve_memtype(base, base + size, &flag);
+	ret = io_reserve_memtype(base, base + size, &pcm);
 	if (ret)
 		return ret;
 
-	*prot = __pgprot(__PAGE_KERNEL | flag);
+	*prot = __pgprot(__PAGE_KERNEL | cachemode2protval(pcm));
 	return 0;
 }
 EXPORT_SYMBOL_GPL(iomap_create_wc);
@@ -82,8 +82,10 @@ iomap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
 	 * MTRR is UC or WC.  UC_MINUS gets the real intention, of the
 	 * user, which is "WC if the MTRR is WC, UC if you can't do that."
 	 */
-	if (!pat_enabled && pgprot_val(prot) == pgprot_val(PAGE_KERNEL_WC))
-		prot = PAGE_KERNEL_UC_MINUS;
+	if (!pat_enabled && pgprot_val(prot) ==
+	    (__PAGE_KERNEL | cachemode2protval(_PAGE_CACHE_MODE_WC)))
+		prot = __pgprot(__PAGE_KERNEL |
+				cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS));
 
 	return (void __force __iomem *) kmap_atomic_prot_pfn(pfn, prot);
 }
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 47282c2..6d5a8e3 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -442,25 +442,27 @@ static unsigned long lookup_memtype(u64 paddr)
  * On failure, returns non-zero
  */
 int io_reserve_memtype(resource_size_t start, resource_size_t end,
-			unsigned long *type)
+			enum page_cache_mode *type)
 {
 	resource_size_t size = end - start;
-	unsigned long req_type = *type;
-	unsigned long new_type;
+	enum page_cache_mode req_type = *type;
+	enum page_cache_mode new_type;
+	unsigned long new_prot;
 	int ret;
 
 	WARN_ON_ONCE(iomem_map_sanity_check(start, size));
 
-	ret = reserve_memtype(start, end, req_type, &new_type);
+	ret = reserve_memtype(start, end, cachemode2protval(req_type),
+				&new_prot);
 	if (ret)
 		goto out_err;
 
-	if (!is_new_memtype_allowed(start, size,
-				    pgprot2cachemode(__pgprot(req_type)),
-				    pgprot2cachemode(__pgprot(new_type))))
+	new_type = pgprot2cachemode(__pgprot(new_prot));
+
+	if (!is_new_memtype_allowed(start, size, req_type, new_type))
 		goto out_free;
 
-	if (kernel_map_sync_memtype(start, size, new_type) < 0)
+	if (kernel_map_sync_memtype(start, size, new_prot) < 0)
 		goto out_free;
 
 	*type = new_type;
-- 
1.8.4.5


* [PATCH V6 09/18] x86: Use new cache mode type in track_pfn_remap() and track_pfn_insert()
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (7 preceding siblings ...)
  2014-11-03 13:01 ` [PATCH V6 08/18] x86: Use new cache mode type in mm/iomap_32.c Juergen Gross
@ 2014-11-03 13:01 ` Juergen Gross
  2014-11-16 10:56   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 10/18] x86: Remove looking for setting of _PAGE_PAT_LARGE in pageattr.c Juergen Gross
                   ` (11 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type. As these functions are the main callers of
lookup_memtype(), change that function as well.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/mm/pat.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 6d5a8e3..2f3744f 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -394,12 +394,12 @@ int free_memtype(u64 start, u64 end)
  *
  * Only to be called when PAT is enabled
  *
- * Returns _PAGE_CACHE_WB, _PAGE_CACHE_WC, _PAGE_CACHE_UC_MINUS or
- * _PAGE_CACHE_UC
+ * Returns _PAGE_CACHE_MODE_WB, _PAGE_CACHE_MODE_WC, _PAGE_CACHE_MODE_UC_MINUS
+ * or _PAGE_CACHE_MODE_UC
  */
-static unsigned long lookup_memtype(u64 paddr)
+static enum page_cache_mode lookup_memtype(u64 paddr)
 {
-	int rettype = _PAGE_CACHE_WB;
+	enum page_cache_mode rettype = _PAGE_CACHE_MODE_WB;
 	struct memtype *entry;
 
 	if (x86_platform.is_untracked_pat_range(paddr, paddr + PAGE_SIZE))
@@ -408,13 +408,13 @@ static unsigned long lookup_memtype(u64 paddr)
 	if (pat_pagerange_is_ram(paddr, paddr + PAGE_SIZE)) {
 		struct page *page;
 		page = pfn_to_page(paddr >> PAGE_SHIFT);
-		rettype = get_page_memtype(page);
+		rettype = pgprot2cachemode(__pgprot(get_page_memtype(page)));
 		/*
 		 * -1 from get_page_memtype() implies RAM page is in its
 		 * default state and not reserved, and hence of type WB
 		 */
 		if (rettype == -1)
-			rettype = _PAGE_CACHE_WB;
+			rettype = _PAGE_CACHE_MODE_WB;
 
 		return rettype;
 	}
@@ -423,9 +423,9 @@ static unsigned long lookup_memtype(u64 paddr)
 
 	entry = rbt_memtype_lookup(paddr);
 	if (entry != NULL)
-		rettype = entry->type;
+		rettype = pgprot2cachemode(__pgprot(entry->type));
 	else
-		rettype = _PAGE_CACHE_UC_MINUS;
+		rettype = _PAGE_CACHE_MODE_UC_MINUS;
 
 	spin_unlock(&memtype_lock);
 	return rettype;
@@ -613,7 +613,7 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 		if (!pat_enabled)
 			return 0;
 
-		flags = lookup_memtype(paddr);
+		flags = cachemode2protval(lookup_memtype(paddr));
 		if (want_flags != flags) {
 			printk(KERN_WARNING "%s:%d map pfn RAM range req %s for [mem %#010Lx-%#010Lx], got %s\n",
 				current->comm, current->pid,
@@ -715,7 +715,7 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 		    unsigned long pfn, unsigned long addr, unsigned long size)
 {
 	resource_size_t paddr = (resource_size_t)pfn << PAGE_SHIFT;
-	unsigned long flags;
+	enum page_cache_mode pcm;
 
 	/* reserve the whole chunk starting from paddr */
 	if (addr == vma->vm_start && size == (vma->vm_end - vma->vm_start)) {
@@ -734,18 +734,18 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 	 * For anything smaller than the vma size we set prot based on the
 	 * lookup.
 	 */
-	flags = lookup_memtype(paddr);
+	pcm = lookup_memtype(paddr);
 
 	/* Check memtype for the remaining pages */
 	while (size > PAGE_SIZE) {
 		size -= PAGE_SIZE;
 		paddr += PAGE_SIZE;
-		if (flags != lookup_memtype(paddr))
+		if (pcm != lookup_memtype(paddr))
 			return -EINVAL;
 	}
 
 	*prot = __pgprot((pgprot_val(vma->vm_page_prot) & (~_PAGE_CACHE_MASK)) |
-			 flags);
+			 cachemode2protval(pcm));
 
 	return 0;
 }
@@ -753,15 +753,15 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 int track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
 		     unsigned long pfn)
 {
-	unsigned long flags;
+	enum page_cache_mode pcm;
 
 	if (!pat_enabled)
 		return 0;
 
 	/* Set prot based on lookup */
-	flags = lookup_memtype((resource_size_t)pfn << PAGE_SHIFT);
+	pcm = lookup_memtype((resource_size_t)pfn << PAGE_SHIFT);
 	*prot = __pgprot((pgprot_val(vma->vm_page_prot) & (~_PAGE_CACHE_MASK)) |
-			 flags);
+			 cachemode2protval(pcm));
 
 	return 0;
 }
-- 
1.8.4.5


* [PATCH V6 10/18] x86: Remove looking for setting of _PAGE_PAT_LARGE in pageattr.c
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (8 preceding siblings ...)
  2014-11-03 13:01 ` [PATCH V6 09/18] x86: Use new cache mode type in track_pfn_remap() and track_pfn_insert() Juergen Gross
@ 2014-11-03 13:01 ` Juergen Gross
  2014-11-03 16:44   ` Thomas Gleixner
  2014-11-16 10:57   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 11/18] x86: Use new cache mode type in setting page attributes Juergen Gross
                   ` (10 subsequent siblings)
  20 siblings, 2 replies; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

When modifying page attributes via change_page_attr_set_clr() don't
test for setting of _PAGE_PAT_LARGE, as this
- is never done
- would be pointless anyway, since PAT support for large pages is not
  included in the kernel up to now

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/mm/pageattr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index ae242a7..87c0d36 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1307,7 +1307,7 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
 static inline int cache_attr(pgprot_t attr)
 {
 	return pgprot_val(attr) &
-		(_PAGE_PAT | _PAGE_PAT_LARGE | _PAGE_PWT | _PAGE_PCD);
+		(_PAGE_PAT | _PAGE_PWT | _PAGE_PCD);
 }
 
 static int change_page_attr_set_clr(unsigned long *addr, int numpages,
-- 
1.8.4.5


* [PATCH V6 11/18] x86: Use new cache mode type in setting page attributes
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (9 preceding siblings ...)
  2014-11-03 13:01 ` [PATCH V6 10/18] x86: Remove looking for setting of _PAGE_PAT_LARGE in pageattr.c Juergen Gross
@ 2014-11-03 13:01 ` Juergen Gross
  2014-11-16 10:57   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 12/18] x86: Use new cache mode type in mm/ioremap.c Juergen Gross
                   ` (9 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type in the functions for modifying page
attributes.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/mm/pageattr.c | 52 +++++++++++++++++++++++++++-----------------------
 1 file changed, 28 insertions(+), 24 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 87c0d36..9f7e1b4 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1304,12 +1304,6 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
 	return 0;
 }
 
-static inline int cache_attr(pgprot_t attr)
-{
-	return pgprot_val(attr) &
-		(_PAGE_PAT | _PAGE_PWT | _PAGE_PCD);
-}
-
 static int change_page_attr_set_clr(unsigned long *addr, int numpages,
 				    pgprot_t mask_set, pgprot_t mask_clr,
 				    int force_split, int in_flag,
@@ -1390,7 +1384,7 @@ static int change_page_attr_set_clr(unsigned long *addr, int numpages,
 	 * No need to flush, when we did not set any of the caching
 	 * attributes:
 	 */
-	cache = cache_attr(mask_set);
+	cache = !!pgprot2cachemode(mask_set);
 
 	/*
 	 * On success we use CLFLUSH, when the CPU supports it to
@@ -1445,7 +1439,8 @@ int _set_memory_uc(unsigned long addr, int numpages)
 	 * for now UC MINUS. see comments in ioremap_nocache()
 	 */
 	return change_page_attr_set(&addr, numpages,
-				    __pgprot(_PAGE_CACHE_UC_MINUS), 0);
+				    cachemode2pgprot(_PAGE_CACHE_MODE_UC_MINUS),
+				    0);
 }
 
 int set_memory_uc(unsigned long addr, int numpages)
@@ -1474,7 +1469,7 @@ out_err:
 EXPORT_SYMBOL(set_memory_uc);
 
 static int _set_memory_array(unsigned long *addr, int addrinarray,
-		unsigned long new_type)
+		enum page_cache_mode new_type)
 {
 	int i, j;
 	int ret;
@@ -1484,17 +1479,19 @@ static int _set_memory_array(unsigned long *addr, int addrinarray,
 	 */
 	for (i = 0; i < addrinarray; i++) {
 		ret = reserve_memtype(__pa(addr[i]), __pa(addr[i]) + PAGE_SIZE,
-					new_type, NULL);
+					cachemode2protval(new_type), NULL);
 		if (ret)
 			goto out_free;
 	}
 
 	ret = change_page_attr_set(addr, addrinarray,
-				    __pgprot(_PAGE_CACHE_UC_MINUS), 1);
+				   cachemode2pgprot(_PAGE_CACHE_MODE_UC_MINUS),
+				   1);
 
-	if (!ret && new_type == _PAGE_CACHE_WC)
+	if (!ret && new_type == _PAGE_CACHE_MODE_WC)
 		ret = change_page_attr_set_clr(addr, addrinarray,
-					       __pgprot(_PAGE_CACHE_WC),
+					       cachemode2pgprot(
+						_PAGE_CACHE_MODE_WC),
 					       __pgprot(_PAGE_CACHE_MASK),
 					       0, CPA_ARRAY, NULL);
 	if (ret)
@@ -1511,13 +1508,13 @@ out_free:
 
 int set_memory_array_uc(unsigned long *addr, int addrinarray)
 {
-	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_UC_MINUS);
+	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_MODE_UC_MINUS);
 }
 EXPORT_SYMBOL(set_memory_array_uc);
 
 int set_memory_array_wc(unsigned long *addr, int addrinarray)
 {
-	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_WC);
+	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_MODE_WC);
 }
 EXPORT_SYMBOL(set_memory_array_wc);
 
@@ -1527,10 +1524,12 @@ int _set_memory_wc(unsigned long addr, int numpages)
 	unsigned long addr_copy = addr;
 
 	ret = change_page_attr_set(&addr, numpages,
-				    __pgprot(_PAGE_CACHE_UC_MINUS), 0);
+				   cachemode2pgprot(_PAGE_CACHE_MODE_UC_MINUS),
+				   0);
 	if (!ret) {
 		ret = change_page_attr_set_clr(&addr_copy, numpages,
-					       __pgprot(_PAGE_CACHE_WC),
+					       cachemode2pgprot(
+						_PAGE_CACHE_MODE_WC),
 					       __pgprot(_PAGE_CACHE_MASK),
 					       0, 0, NULL);
 	}
@@ -1564,6 +1563,7 @@ EXPORT_SYMBOL(set_memory_wc);
 
 int _set_memory_wb(unsigned long addr, int numpages)
 {
+	/* WB cache mode is hard wired to all cache attribute bits being 0 */
 	return change_page_attr_clear(&addr, numpages,
 				      __pgprot(_PAGE_CACHE_MASK), 0);
 }
@@ -1586,6 +1586,7 @@ int set_memory_array_wb(unsigned long *addr, int addrinarray)
 	int i;
 	int ret;
 
+	/* WB cache mode is hard wired to all cache attribute bits being 0 */
 	ret = change_page_attr_clear(addr, addrinarray,
 				      __pgprot(_PAGE_CACHE_MASK), 1);
 	if (ret)
@@ -1648,7 +1649,7 @@ int set_pages_uc(struct page *page, int numpages)
 EXPORT_SYMBOL(set_pages_uc);
 
 static int _set_pages_array(struct page **pages, int addrinarray,
-		unsigned long new_type)
+		enum page_cache_mode new_type)
 {
 	unsigned long start;
 	unsigned long end;
@@ -1661,15 +1662,17 @@ static int _set_pages_array(struct page **pages, int addrinarray,
 			continue;
 		start = page_to_pfn(pages[i]) << PAGE_SHIFT;
 		end = start + PAGE_SIZE;
-		if (reserve_memtype(start, end, new_type, NULL))
+		if (reserve_memtype(start, end, cachemode2protval(new_type),
+				    NULL))
 			goto err_out;
 	}
 
 	ret = cpa_set_pages_array(pages, addrinarray,
-			__pgprot(_PAGE_CACHE_UC_MINUS));
-	if (!ret && new_type == _PAGE_CACHE_WC)
+			cachemode2pgprot(_PAGE_CACHE_MODE_UC_MINUS));
+	if (!ret && new_type == _PAGE_CACHE_MODE_WC)
 		ret = change_page_attr_set_clr(NULL, addrinarray,
-					       __pgprot(_PAGE_CACHE_WC),
+					       cachemode2pgprot(
+						_PAGE_CACHE_MODE_WC),
 					       __pgprot(_PAGE_CACHE_MASK),
 					       0, CPA_PAGES_ARRAY, pages);
 	if (ret)
@@ -1689,13 +1692,13 @@ err_out:
 
 int set_pages_array_uc(struct page **pages, int addrinarray)
 {
-	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_UC_MINUS);
+	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_MODE_UC_MINUS);
 }
 EXPORT_SYMBOL(set_pages_array_uc);
 
 int set_pages_array_wc(struct page **pages, int addrinarray)
 {
-	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_WC);
+	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_MODE_WC);
 }
 EXPORT_SYMBOL(set_pages_array_wc);
 
@@ -1714,6 +1717,7 @@ int set_pages_array_wb(struct page **pages, int addrinarray)
 	unsigned long end;
 	int i;
 
+	/* WB cache mode is hard wired to all cache attribute bits being 0 */
 	retval = cpa_clear_pages_array(pages, addrinarray,
 			__pgprot(_PAGE_CACHE_MASK));
 	if (retval)
-- 
1.8.4.5
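
One detail worth noting in the hunk above: replacing cache_attr() with

	cache = !!pgprot2cachemode(mask_set);

works only because WB is hard-wired to cache mode 0, so any request for
a non-WB caching attribute yields a non-zero mode and triggers the
cache flush.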


* [PATCH V6 12/18] x86: Use new cache mode type in mm/ioremap.c
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (10 preceding siblings ...)
  2014-11-03 13:01 ` [PATCH V6 11/18] x86: Use new cache mode type in setting page attributes Juergen Gross
@ 2014-11-03 13:01 ` Juergen Gross
  2014-11-16 10:57   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 13:01 ` [PATCH V6 13/18] x86: Use new cache mode type in memtype related functions Juergen Gross
                   ` (8 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/io.h  |  2 +-
 arch/x86/include/asm/pat.h |  2 +-
 arch/x86/mm/ioremap.c      | 65 +++++++++++++++++++++++++---------------------
 arch/x86/mm/pat.c          | 12 +++++----
 4 files changed, 44 insertions(+), 37 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index b8237d8..71b9e65 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -314,7 +314,7 @@ extern void *xlate_dev_mem_ptr(unsigned long phys);
 extern void unxlate_dev_mem_ptr(unsigned long phys, void *addr);
 
 extern int ioremap_change_attr(unsigned long vaddr, unsigned long size,
-				unsigned long prot_val);
+				enum page_cache_mode pcm);
 extern void __iomem *ioremap_wc(resource_size_t offset, unsigned long size);
 
 extern bool is_early_ioremap_ptep(pte_t *ptep);
diff --git a/arch/x86/include/asm/pat.h b/arch/x86/include/asm/pat.h
index a8438bc..d35ee2d 100644
--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -17,7 +17,7 @@ extern int reserve_memtype(u64 start, u64 end,
 extern int free_memtype(u64 start, u64 end);
 
 extern int kernel_map_sync_memtype(u64 base, unsigned long size,
-		unsigned long flag);
+		enum page_cache_mode pcm);
 
 int io_reserve_memtype(resource_size_t start, resource_size_t end,
 			enum page_cache_mode *pcm);
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 3a81eb9..f31507f 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -29,20 +29,20 @@
  * conflicts.
  */
 int ioremap_change_attr(unsigned long vaddr, unsigned long size,
-			       unsigned long prot_val)
+			enum page_cache_mode pcm)
 {
 	unsigned long nrpages = size >> PAGE_SHIFT;
 	int err;
 
-	switch (prot_val) {
-	case _PAGE_CACHE_UC:
+	switch (pcm) {
+	case _PAGE_CACHE_MODE_UC:
 	default:
 		err = _set_memory_uc(vaddr, nrpages);
 		break;
-	case _PAGE_CACHE_WC:
+	case _PAGE_CACHE_MODE_WC:
 		err = _set_memory_wc(vaddr, nrpages);
 		break;
-	case _PAGE_CACHE_WB:
+	case _PAGE_CACHE_MODE_WB:
 		err = _set_memory_wb(vaddr, nrpages);
 		break;
 	}
@@ -75,13 +75,14 @@ static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages,
  * caller shouldn't need to know that small detail.
  */
 static void __iomem *__ioremap_caller(resource_size_t phys_addr,
-		unsigned long size, unsigned long prot_val, void *caller)
+		unsigned long size, enum page_cache_mode pcm, void *caller)
 {
 	unsigned long offset, vaddr;
 	resource_size_t pfn, last_pfn, last_addr;
 	const resource_size_t unaligned_phys_addr = phys_addr;
 	const unsigned long unaligned_size = size;
 	struct vm_struct *area;
+	enum page_cache_mode new_pcm;
 	unsigned long new_prot_val;
 	pgprot_t prot;
 	int retval;
@@ -134,39 +135,42 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 	size = PAGE_ALIGN(last_addr+1) - phys_addr;
 
 	retval = reserve_memtype(phys_addr, (u64)phys_addr + size,
-						prot_val, &new_prot_val);
+				 cachemode2protval(pcm), &new_prot_val);
 	if (retval) {
 		printk(KERN_ERR "ioremap reserve_memtype failed %d\n", retval);
 		return NULL;
 	}
 
-	if (prot_val != new_prot_val) {
-		if (!is_new_memtype_allowed(phys_addr, size,
-				pgprot2cachemode(__pgprot(prot_val)),
-				pgprot2cachemode(__pgprot(new_prot_val)))) {
+	new_pcm = pgprot2cachemode(__pgprot(new_prot_val));
+
+	if (pcm != new_pcm) {
+		if (!is_new_memtype_allowed(phys_addr, size, pcm, new_pcm)) {
 			printk(KERN_ERR
-		"ioremap error for 0x%llx-0x%llx, requested 0x%lx, got 0x%lx\n",
+		"ioremap error for 0x%llx-0x%llx, requested 0x%x, got 0x%x\n",
 				(unsigned long long)phys_addr,
 				(unsigned long long)(phys_addr + size),
-				prot_val, new_prot_val);
+				pcm, new_pcm);
 			goto err_free_memtype;
 		}
-		prot_val = new_prot_val;
+		pcm = new_pcm;
 	}
 
-	switch (prot_val) {
-	case _PAGE_CACHE_UC:
+	prot = PAGE_KERNEL_IO;
+	switch (pcm) {
+	case _PAGE_CACHE_MODE_UC:
 	default:
-		prot = PAGE_KERNEL_IO_NOCACHE;
+		prot = __pgprot(pgprot_val(prot) |
+				cachemode2protval(_PAGE_CACHE_MODE_UC));
 		break;
-	case _PAGE_CACHE_UC_MINUS:
-		prot = PAGE_KERNEL_IO_UC_MINUS;
+	case _PAGE_CACHE_MODE_UC_MINUS:
+		prot = __pgprot(pgprot_val(prot) |
+				cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS));
 		break;
-	case _PAGE_CACHE_WC:
-		prot = PAGE_KERNEL_IO_WC;
+	case _PAGE_CACHE_MODE_WC:
+		prot = __pgprot(pgprot_val(prot) |
+				cachemode2protval(_PAGE_CACHE_MODE_WC));
 		break;
-	case _PAGE_CACHE_WB:
-		prot = PAGE_KERNEL_IO;
+	case _PAGE_CACHE_MODE_WB:
 		break;
 	}
 
@@ -179,7 +183,7 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 	area->phys_addr = phys_addr;
 	vaddr = (unsigned long) area->addr;
 
-	if (kernel_map_sync_memtype(phys_addr, size, prot_val))
+	if (kernel_map_sync_memtype(phys_addr, size, pcm))
 		goto err_free_area;
 
 	if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot))
@@ -228,14 +232,14 @@ void __iomem *ioremap_nocache(resource_size_t phys_addr, unsigned long size)
 {
 	/*
 	 * Ideally, this should be:
-	 *	pat_enabled ? _PAGE_CACHE_UC : _PAGE_CACHE_UC_MINUS;
+	 *	pat_enabled ? _PAGE_CACHE_MODE_UC : _PAGE_CACHE_MODE_UC_MINUS;
 	 *
 	 * Till we fix all X drivers to use ioremap_wc(), we will use
 	 * UC MINUS.
 	 */
-	unsigned long val = _PAGE_CACHE_UC_MINUS;
+	enum page_cache_mode pcm = _PAGE_CACHE_MODE_UC_MINUS;
 
-	return __ioremap_caller(phys_addr, size, val,
+	return __ioremap_caller(phys_addr, size, pcm,
 				__builtin_return_address(0));
 }
 EXPORT_SYMBOL(ioremap_nocache);
@@ -253,7 +257,7 @@ EXPORT_SYMBOL(ioremap_nocache);
 void __iomem *ioremap_wc(resource_size_t phys_addr, unsigned long size)
 {
 	if (pat_enabled)
-		return __ioremap_caller(phys_addr, size, _PAGE_CACHE_WC,
+		return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WC,
 					__builtin_return_address(0));
 	else
 		return ioremap_nocache(phys_addr, size);
@@ -262,7 +266,7 @@ EXPORT_SYMBOL(ioremap_wc);
 
 void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size)
 {
-	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_WB,
+	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB,
 				__builtin_return_address(0));
 }
 EXPORT_SYMBOL(ioremap_cache);
@@ -270,7 +274,8 @@ EXPORT_SYMBOL(ioremap_cache);
 void __iomem *ioremap_prot(resource_size_t phys_addr, unsigned long size,
 				unsigned long prot_val)
 {
-	return __ioremap_caller(phys_addr, size, (prot_val & _PAGE_CACHE_MASK),
+	return __ioremap_caller(phys_addr, size,
+				pgprot2cachemode(__pgprot(prot_val)),
 				__builtin_return_address(0));
 }
 EXPORT_SYMBOL(ioremap_prot);
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 2f3744f..8f68a83 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -462,7 +462,7 @@ int io_reserve_memtype(resource_size_t start, resource_size_t end,
 	if (!is_new_memtype_allowed(start, size, req_type, new_type))
 		goto out_free;
 
-	if (kernel_map_sync_memtype(start, size, new_prot) < 0)
+	if (kernel_map_sync_memtype(start, size, new_type) < 0)
 		goto out_free;
 
 	*type = new_type;
@@ -560,7 +560,8 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
  * Change the memory type for the physial address range in kernel identity
  * mapping space if that range is a part of identity map.
  */
-int kernel_map_sync_memtype(u64 base, unsigned long size, unsigned long flags)
+int kernel_map_sync_memtype(u64 base, unsigned long size,
+			    enum page_cache_mode pcm)
 {
 	unsigned long id_sz;
 
@@ -578,11 +579,11 @@ int kernel_map_sync_memtype(u64 base, unsigned long size, unsigned long flags)
 				__pa(high_memory) - base :
 				size;
 
-	if (ioremap_change_attr((unsigned long)__va(base), id_sz, flags) < 0) {
+	if (ioremap_change_attr((unsigned long)__va(base), id_sz, pcm) < 0) {
 		printk(KERN_INFO "%s:%d ioremap_change_attr failed %s "
 			"for [mem %#010Lx-%#010Lx]\n",
 			current->comm, current->pid,
-			cattr_name(flags),
+			cattr_name(cachemode2protval(pcm)),
 			base, (unsigned long long)(base + size-1));
 		return -EINVAL;
 	}
@@ -656,7 +657,8 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 				     flags);
 	}
 
-	if (kernel_map_sync_memtype(paddr, size, flags) < 0) {
+	if (kernel_map_sync_memtype(paddr, size,
+				    pgprot2cachemode(__pgprot(flags))) < 0) {
 		free_memtype(paddr, paddr + size);
 		return -EINVAL;
 	}
-- 
1.8.4.5


* [PATCH V6 13/18] x86: Use new cache mode type in memtype related functions
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (11 preceding siblings ...)
  2014-11-03 13:01 ` [PATCH V6 12/18] x86: Use new cache mode type in mm/ioremap.c Juergen Gross
@ 2014-11-03 13:01 ` Juergen Gross
  2014-11-16 10:57   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 13:02 ` [PATCH V6 14/18] x86: Clean up pgtable_types.h Juergen Gross
                   ` (7 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:01 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/cacheflush.h |  38 ++++++++------
 arch/x86/include/asm/pat.h        |   2 +-
 arch/x86/mm/ioremap.c             |   5 +-
 arch/x86/mm/pageattr.c            |   9 ++--
 arch/x86/mm/pat.c                 | 102 ++++++++++++++++++--------------------
 arch/x86/mm/pat_internal.h        |  22 ++++----
 arch/x86/mm/pat_rbtree.c          |   8 +--
 7 files changed, 96 insertions(+), 90 deletions(-)

diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index 9863ee3..157644b 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -9,10 +9,10 @@
 /*
  * X86 PAT uses page flags WC and Uncached together to keep track of
  * memory type of pages that have backing page struct. X86 PAT supports 3
- * different memory types, _PAGE_CACHE_WB, _PAGE_CACHE_WC and
- * _PAGE_CACHE_UC_MINUS and fourth state where page's memory type has not
+ * different memory types, _PAGE_CACHE_MODE_WB, _PAGE_CACHE_MODE_WC and
+ * _PAGE_CACHE_MODE_UC_MINUS and fourth state where page's memory type has not
  * been changed from its default (value of -1 used to denote this).
- * Note we do not support _PAGE_CACHE_UC here.
+ * Note we do not support _PAGE_CACHE_MODE_UC here.
  */
 
 #define _PGMT_DEFAULT		0
@@ -22,36 +22,40 @@
 #define _PGMT_MASK		(1UL << PG_uncached | 1UL << PG_arch_1)
 #define _PGMT_CLEAR_MASK	(~_PGMT_MASK)
 
-static inline unsigned long get_page_memtype(struct page *pg)
+static inline enum page_cache_mode get_page_memtype(struct page *pg)
 {
 	unsigned long pg_flags = pg->flags & _PGMT_MASK;
 
 	if (pg_flags == _PGMT_DEFAULT)
 		return -1;
 	else if (pg_flags == _PGMT_WC)
-		return _PAGE_CACHE_WC;
+		return _PAGE_CACHE_MODE_WC;
 	else if (pg_flags == _PGMT_UC_MINUS)
-		return _PAGE_CACHE_UC_MINUS;
+		return _PAGE_CACHE_MODE_UC_MINUS;
 	else
-		return _PAGE_CACHE_WB;
+		return _PAGE_CACHE_MODE_WB;
 }
 
-static inline void set_page_memtype(struct page *pg, unsigned long memtype)
+static inline void set_page_memtype(struct page *pg,
+				    enum page_cache_mode memtype)
 {
-	unsigned long memtype_flags = _PGMT_DEFAULT;
+	unsigned long memtype_flags;
 	unsigned long old_flags;
 	unsigned long new_flags;
 
 	switch (memtype) {
-	case _PAGE_CACHE_WC:
+	case _PAGE_CACHE_MODE_WC:
 		memtype_flags = _PGMT_WC;
 		break;
-	case _PAGE_CACHE_UC_MINUS:
+	case _PAGE_CACHE_MODE_UC_MINUS:
 		memtype_flags = _PGMT_UC_MINUS;
 		break;
-	case _PAGE_CACHE_WB:
+	case _PAGE_CACHE_MODE_WB:
 		memtype_flags = _PGMT_WB;
 		break;
+	default:
+		memtype_flags = _PGMT_DEFAULT;
+		break;
 	}
 
 	do {
@@ -60,8 +64,14 @@ static inline void set_page_memtype(struct page *pg, unsigned long memtype)
 	} while (cmpxchg(&pg->flags, old_flags, new_flags) != old_flags);
 }
 #else
-static inline unsigned long get_page_memtype(struct page *pg) { return -1; }
-static inline void set_page_memtype(struct page *pg, unsigned long memtype) { }
+static inline enum page_cache_mode get_page_memtype(struct page *pg)
+{
+	return -1;
+}
+static inline void set_page_memtype(struct page *pg,
+				    enum page_cache_mode memtype)
+{
+}
 #endif
 
 /*
diff --git a/arch/x86/include/asm/pat.h b/arch/x86/include/asm/pat.h
index d35ee2d..150407a 100644
--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -13,7 +13,7 @@ static const int pat_enabled;
 extern void pat_init(void);
 
 extern int reserve_memtype(u64 start, u64 end,
-		unsigned long req_type, unsigned long *ret_type);
+		enum page_cache_mode req_pcm, enum page_cache_mode *ret_pcm);
 extern int free_memtype(u64 start, u64 end);
 
 extern int kernel_map_sync_memtype(u64 base, unsigned long size,
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index f31507f..8832e51 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -83,7 +83,6 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 	const unsigned long unaligned_size = size;
 	struct vm_struct *area;
 	enum page_cache_mode new_pcm;
-	unsigned long new_prot_val;
 	pgprot_t prot;
 	int retval;
 	void __iomem *ret_addr;
@@ -135,14 +134,12 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 	size = PAGE_ALIGN(last_addr+1) - phys_addr;
 
 	retval = reserve_memtype(phys_addr, (u64)phys_addr + size,
-				 cachemode2protval(pcm), &new_prot_val);
+						pcm, &new_pcm);
 	if (retval) {
 		printk(KERN_ERR "ioremap reserve_memtype failed %d\n", retval);
 		return NULL;
 	}
 
-	new_pcm = pgprot2cachemode(__pgprot(new_prot_val));
-
 	if (pcm != new_pcm) {
 		if (!is_new_memtype_allowed(phys_addr, size, pcm, new_pcm)) {
 			printk(KERN_ERR
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 9f7e1b4..de807c9 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1451,7 +1451,7 @@ int set_memory_uc(unsigned long addr, int numpages)
 	 * for now UC MINUS. see comments in ioremap_nocache()
 	 */
 	ret = reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
-			    _PAGE_CACHE_UC_MINUS, NULL);
+			      _PAGE_CACHE_MODE_UC_MINUS, NULL);
 	if (ret)
 		goto out_err;
 
@@ -1479,7 +1479,7 @@ static int _set_memory_array(unsigned long *addr, int addrinarray,
 	 */
 	for (i = 0; i < addrinarray; i++) {
 		ret = reserve_memtype(__pa(addr[i]), __pa(addr[i]) + PAGE_SIZE,
-					cachemode2protval(new_type), NULL);
+					new_type, NULL);
 		if (ret)
 			goto out_free;
 	}
@@ -1544,7 +1544,7 @@ int set_memory_wc(unsigned long addr, int numpages)
 		return set_memory_uc(addr, numpages);
 
 	ret = reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
-		_PAGE_CACHE_WC, NULL);
+		_PAGE_CACHE_MODE_WC, NULL);
 	if (ret)
 		goto out_err;
 
@@ -1662,8 +1662,7 @@ static int _set_pages_array(struct page **pages, int addrinarray,
 			continue;
 		start = page_to_pfn(pages[i]) << PAGE_SHIFT;
 		end = start + PAGE_SIZE;
-		if (reserve_memtype(start, end, cachemode2protval(new_type),
-				    NULL))
+		if (reserve_memtype(start, end, new_type, NULL))
 			goto err_out;
 	}
 
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 8f68a83..ef75f3f 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -139,20 +139,21 @@ static DEFINE_SPINLOCK(memtype_lock);	/* protects memtype accesses */
  * The intersection is based on "Effective Memory Type" tables in IA-32
  * SDM vol 3a
  */
-static unsigned long pat_x_mtrr_type(u64 start, u64 end, unsigned long req_type)
+static unsigned long pat_x_mtrr_type(u64 start, u64 end,
+				     enum page_cache_mode req_type)
 {
 	/*
 	 * Look for MTRR hint to get the effective type in case where PAT
 	 * request is for WB.
 	 */
-	if (req_type == _PAGE_CACHE_WB) {
+	if (req_type == _PAGE_CACHE_MODE_WB) {
 		u8 mtrr_type;
 
 		mtrr_type = mtrr_type_lookup(start, end);
 		if (mtrr_type != MTRR_TYPE_WRBACK)
-			return _PAGE_CACHE_UC_MINUS;
+			return _PAGE_CACHE_MODE_UC_MINUS;
 
-		return _PAGE_CACHE_WB;
+		return _PAGE_CACHE_MODE_WB;
 	}
 
 	return req_type;
@@ -207,25 +208,26 @@ static int pat_pagerange_is_ram(resource_size_t start, resource_size_t end)
  * - Find the memtype of all the pages in the range, look for any conflicts
  * - In case of no conflicts, set the new memtype for pages in the range
  */
-static int reserve_ram_pages_type(u64 start, u64 end, unsigned long req_type,
-				  unsigned long *new_type)
+static int reserve_ram_pages_type(u64 start, u64 end,
+				  enum page_cache_mode req_type,
+				  enum page_cache_mode *new_type)
 {
 	struct page *page;
 	u64 pfn;
 
-	if (req_type == _PAGE_CACHE_UC) {
+	if (req_type == _PAGE_CACHE_MODE_UC) {
 		/* We do not support strong UC */
 		WARN_ON_ONCE(1);
-		req_type = _PAGE_CACHE_UC_MINUS;
+		req_type = _PAGE_CACHE_MODE_UC_MINUS;
 	}
 
 	for (pfn = (start >> PAGE_SHIFT); pfn < (end >> PAGE_SHIFT); ++pfn) {
-		unsigned long type;
+		enum page_cache_mode type;
 
 		page = pfn_to_page(pfn);
 		type = get_page_memtype(page);
 		if (type != -1) {
-			printk(KERN_INFO "reserve_ram_pages_type failed [mem %#010Lx-%#010Lx], track 0x%lx, req 0x%lx\n",
+			pr_info("reserve_ram_pages_type failed [mem %#010Lx-%#010Lx], track 0x%x, req 0x%x\n",
 				start, end - 1, type, req_type);
 			if (new_type)
 				*new_type = type;
@@ -258,21 +260,21 @@ static int free_ram_pages_type(u64 start, u64 end)
 
 /*
  * req_type typically has one of the:
- * - _PAGE_CACHE_WB
- * - _PAGE_CACHE_WC
- * - _PAGE_CACHE_UC_MINUS
- * - _PAGE_CACHE_UC
+ * - _PAGE_CACHE_MODE_WB
+ * - _PAGE_CACHE_MODE_WC
+ * - _PAGE_CACHE_MODE_UC_MINUS
+ * - _PAGE_CACHE_MODE_UC
  *
  * If new_type is NULL, function will return an error if it cannot reserve the
  * region with req_type. If new_type is non-NULL, function will return
  * available type in new_type in case of no error. In case of any error
  * it will return a negative return value.
  */
-int reserve_memtype(u64 start, u64 end, unsigned long req_type,
-		    unsigned long *new_type)
+int reserve_memtype(u64 start, u64 end, enum page_cache_mode req_type,
+		    enum page_cache_mode *new_type)
 {
 	struct memtype *new;
-	unsigned long actual_type;
+	enum page_cache_mode actual_type;
 	int is_range_ram;
 	int err = 0;
 
@@ -281,10 +283,10 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 	if (!pat_enabled) {
 		/* This is identical to page table setting without PAT */
 		if (new_type) {
-			if (req_type == _PAGE_CACHE_WC)
-				*new_type = _PAGE_CACHE_UC_MINUS;
+			if (req_type == _PAGE_CACHE_MODE_WC)
+				*new_type = _PAGE_CACHE_MODE_UC_MINUS;
 			else
-				*new_type = req_type & _PAGE_CACHE_MASK;
+				*new_type = req_type;
 		}
 		return 0;
 	}
@@ -292,7 +294,7 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 	/* Low ISA region is always mapped WB in page table. No need to track */
 	if (x86_platform.is_untracked_pat_range(start, end)) {
 		if (new_type)
-			*new_type = _PAGE_CACHE_WB;
+			*new_type = _PAGE_CACHE_MODE_WB;
 		return 0;
 	}
 
@@ -302,7 +304,7 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 	 * tools and ACPI tools). Use WB request for WB memory and use
 	 * UC_MINUS otherwise.
 	 */
-	actual_type = pat_x_mtrr_type(start, end, req_type & _PAGE_CACHE_MASK);
+	actual_type = pat_x_mtrr_type(start, end, req_type);
 
 	if (new_type)
 		*new_type = actual_type;
@@ -408,7 +410,7 @@ static enum page_cache_mode lookup_memtype(u64 paddr)
 	if (pat_pagerange_is_ram(paddr, paddr + PAGE_SIZE)) {
 		struct page *page;
 		page = pfn_to_page(paddr >> PAGE_SHIFT);
-		rettype = pgprot2cachemode(__pgprot(get_page_memtype(page)));
+		rettype = get_page_memtype(page);
 		/*
 		 * -1 from get_page_memtype() implies RAM page is in its
 		 * default state and not reserved, and hence of type WB
@@ -423,7 +425,7 @@ static enum page_cache_mode lookup_memtype(u64 paddr)
 
 	entry = rbt_memtype_lookup(paddr);
 	if (entry != NULL)
-		rettype = pgprot2cachemode(__pgprot(entry->type));
+		rettype = entry->type;
 	else
 		rettype = _PAGE_CACHE_MODE_UC_MINUS;
 
@@ -447,18 +449,14 @@ int io_reserve_memtype(resource_size_t start, resource_size_t end,
 	resource_size_t size = end - start;
 	enum page_cache_mode req_type = *type;
 	enum page_cache_mode new_type;
-	unsigned long new_prot;
 	int ret;
 
 	WARN_ON_ONCE(iomem_map_sanity_check(start, size));
 
-	ret = reserve_memtype(start, end, cachemode2protval(req_type),
-				&new_prot);
+	ret = reserve_memtype(start, end, req_type, &new_type);
 	if (ret)
 		goto out_err;
 
-	new_type = pgprot2cachemode(__pgprot(new_prot));
-
 	if (!is_new_memtype_allowed(start, size, req_type, new_type))
 		goto out_free;
 
@@ -524,13 +522,13 @@ static inline int range_is_allowed(unsigned long pfn, unsigned long size)
 int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
 				unsigned long size, pgprot_t *vma_prot)
 {
-	unsigned long flags = _PAGE_CACHE_WB;
+	enum page_cache_mode pcm = _PAGE_CACHE_MODE_WB;
 
 	if (!range_is_allowed(pfn, size))
 		return 0;
 
 	if (file->f_flags & O_DSYNC)
-		flags = _PAGE_CACHE_UC_MINUS;
+		pcm = _PAGE_CACHE_MODE_UC_MINUS;
 
 #ifdef CONFIG_X86_32
 	/*
@@ -547,12 +545,12 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
 	      boot_cpu_has(X86_FEATURE_CYRIX_ARR) ||
 	      boot_cpu_has(X86_FEATURE_CENTAUR_MCR)) &&
 	    (pfn << PAGE_SHIFT) >= __pa(high_memory)) {
-		flags = _PAGE_CACHE_UC;
+		pcm = _PAGE_CACHE_MODE_UC;
 	}
 #endif
 
 	*vma_prot = __pgprot((pgprot_val(*vma_prot) & ~_PAGE_CACHE_MASK) |
-			     flags);
+			     cachemode2protval(pcm));
 	return 1;
 }
 
@@ -583,7 +581,7 @@ int kernel_map_sync_memtype(u64 base, unsigned long size,
 		printk(KERN_INFO "%s:%d ioremap_change_attr failed %s "
 			"for [mem %#010Lx-%#010Lx]\n",
 			current->comm, current->pid,
-			cattr_name(cachemode2protval(pcm)),
+			cattr_name(pcm),
 			base, (unsigned long long)(base + size-1));
 		return -EINVAL;
 	}
@@ -600,8 +598,8 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 {
 	int is_ram = 0;
 	int ret;
-	unsigned long want_flags = (pgprot_val(*vma_prot) & _PAGE_CACHE_MASK);
-	unsigned long flags = want_flags;
+	enum page_cache_mode want_pcm = pgprot2cachemode(*vma_prot);
+	enum page_cache_mode pcm = want_pcm;
 
 	is_ram = pat_pagerange_is_ram(paddr, paddr + size);
 
@@ -614,38 +612,36 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 		if (!pat_enabled)
 			return 0;
 
-		flags = cachemode2protval(lookup_memtype(paddr));
-		if (want_flags != flags) {
+		pcm = lookup_memtype(paddr);
+		if (want_pcm != pcm) {
 			printk(KERN_WARNING "%s:%d map pfn RAM range req %s for [mem %#010Lx-%#010Lx], got %s\n",
 				current->comm, current->pid,
-				cattr_name(want_flags),
+				cattr_name(want_pcm),
 				(unsigned long long)paddr,
 				(unsigned long long)(paddr + size - 1),
-				cattr_name(flags));
+				cattr_name(pcm));
 			*vma_prot = __pgprot((pgprot_val(*vma_prot) &
-					      (~_PAGE_CACHE_MASK)) |
-					     flags);
+					     (~_PAGE_CACHE_MASK)) |
+					     cachemode2protval(pcm));
 		}
 		return 0;
 	}
 
-	ret = reserve_memtype(paddr, paddr + size, want_flags, &flags);
+	ret = reserve_memtype(paddr, paddr + size, want_pcm, &pcm);
 	if (ret)
 		return ret;
 
-	if (flags != want_flags) {
+	if (pcm != want_pcm) {
 		if (strict_prot ||
-		    !is_new_memtype_allowed(paddr, size,
-				pgprot2cachemode(__pgprot(want_flags)),
-				pgprot2cachemode(__pgprot(flags)))) {
+		    !is_new_memtype_allowed(paddr, size, want_pcm, pcm)) {
 			free_memtype(paddr, paddr + size);
 			printk(KERN_ERR "%s:%d map pfn expected mapping type %s"
 				" for [mem %#010Lx-%#010Lx], got %s\n",
 				current->comm, current->pid,
-				cattr_name(want_flags),
+				cattr_name(want_pcm),
 				(unsigned long long)paddr,
 				(unsigned long long)(paddr + size - 1),
-				cattr_name(flags));
+				cattr_name(pcm));
 			return -EINVAL;
 		}
 		/*
@@ -654,11 +650,10 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 		 */
 		*vma_prot = __pgprot((pgprot_val(*vma_prot) &
 				      (~_PAGE_CACHE_MASK)) |
-				     flags);
+				     cachemode2protval(pcm));
 	}
 
-	if (kernel_map_sync_memtype(paddr, size,
-				    pgprot2cachemode(__pgprot(flags))) < 0) {
+	if (kernel_map_sync_memtype(paddr, size, pcm) < 0) {
 		free_memtype(paddr, paddr + size);
 		return -EINVAL;
 	}
@@ -799,7 +794,8 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
 pgprot_t pgprot_writecombine(pgprot_t prot)
 {
 	if (pat_enabled)
-		return __pgprot(pgprot_val(prot) | _PAGE_CACHE_WC);
+		return __pgprot(pgprot_val(prot) |
+				cachemode2protval(_PAGE_CACHE_MODE_WC));
 	else
 		return pgprot_noncached(prot);
 }
diff --git a/arch/x86/mm/pat_internal.h b/arch/x86/mm/pat_internal.h
index 77e5ba1..f641162 100644
--- a/arch/x86/mm/pat_internal.h
+++ b/arch/x86/mm/pat_internal.h
@@ -10,30 +10,32 @@ struct memtype {
 	u64			start;
 	u64			end;
 	u64			subtree_max_end;
-	unsigned long		type;
+	enum page_cache_mode	type;
 	struct rb_node		rb;
 };
 
-static inline char *cattr_name(unsigned long flags)
+static inline char *cattr_name(enum page_cache_mode pcm)
 {
-	switch (flags & _PAGE_CACHE_MASK) {
-	case _PAGE_CACHE_UC:		return "uncached";
-	case _PAGE_CACHE_UC_MINUS:	return "uncached-minus";
-	case _PAGE_CACHE_WB:		return "write-back";
-	case _PAGE_CACHE_WC:		return "write-combining";
-	default:			return "broken";
+	switch (pcm) {
+	case _PAGE_CACHE_MODE_UC:		return "uncached";
+	case _PAGE_CACHE_MODE_UC_MINUS:		return "uncached-minus";
+	case _PAGE_CACHE_MODE_WB:		return "write-back";
+	case _PAGE_CACHE_MODE_WC:		return "write-combining";
+	case _PAGE_CACHE_MODE_WT:		return "write-through";
+	case _PAGE_CACHE_MODE_WP:		return "write-protected";
+	default:				return "broken";
 	}
 }
 
 #ifdef CONFIG_X86_PAT
 extern int rbt_memtype_check_insert(struct memtype *new,
-					unsigned long *new_type);
+					enum page_cache_mode *new_type);
 extern struct memtype *rbt_memtype_erase(u64 start, u64 end);
 extern struct memtype *rbt_memtype_lookup(u64 addr);
 extern int rbt_memtype_copy_nth_element(struct memtype *out, loff_t pos);
 #else
 static inline int rbt_memtype_check_insert(struct memtype *new,
-					unsigned long *new_type)
+					enum page_cache_mode *new_type)
 { return 0; }
 static inline struct memtype *rbt_memtype_erase(u64 start, u64 end)
 { return NULL; }
diff --git a/arch/x86/mm/pat_rbtree.c b/arch/x86/mm/pat_rbtree.c
index 415f6c4..6582adc 100644
--- a/arch/x86/mm/pat_rbtree.c
+++ b/arch/x86/mm/pat_rbtree.c
@@ -122,11 +122,12 @@ static struct memtype *memtype_rb_exact_match(struct rb_root *root,
 
 static int memtype_rb_check_conflict(struct rb_root *root,
 				u64 start, u64 end,
-				unsigned long reqtype, unsigned long *newtype)
+				enum page_cache_mode reqtype,
+				enum page_cache_mode *newtype)
 {
 	struct rb_node *node;
 	struct memtype *match;
-	int found_type = reqtype;
+	enum page_cache_mode found_type = reqtype;
 
 	match = memtype_rb_lowest_match(&memtype_rbroot, start, end);
 	if (match == NULL)
@@ -187,7 +188,8 @@ static void memtype_rb_insert(struct rb_root *root, struct memtype *newdata)
 	rb_insert_augmented(&newdata->rb, root, &memtype_rb_augment_cb);
 }
 
-int rbt_memtype_check_insert(struct memtype *new, unsigned long *ret_type)
+int rbt_memtype_check_insert(struct memtype *new,
+			     enum page_cache_mode *ret_type)
 {
 	int err = 0;
 
-- 
1.8.4.5


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH V6 14/18] x86: Clean up pgtable_types.h
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (12 preceding siblings ...)
  2014-11-03 13:01 ` [PATCH V6 13/18] x86: Use new cache mode type in memtype related functions Juergen Gross
@ 2014-11-03 13:02 ` Juergen Gross
  2014-11-16 10:58   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 13:02 ` [PATCH V6 15/18] x86: Support PAT bit in pagetable dump for lower levels Juergen Gross
                   ` (6 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:02 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Remove the now unused cache attribute defines from pgtable_types.h.

Switch __PAGE_KERNEL_NOCACHE to use the cache mode type instead of pte
bits.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
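For reference: _PAGE_NOCACHE was introduced in patch 1 as

	#define _PAGE_NOCACHE	(cachemode2protval(_PAGE_CACHE_MODE_UC))

so __PAGE_KERNEL_NOCACHE now picks up whatever pte encoding the
translation tables currently assign to UC, instead of the hard-coded
_PAGE_PCD | _PAGE_PWT combination removed here.
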
 arch/x86/include/asm/pgtable_types.h | 21 +--------------------
 1 file changed, 1 insertion(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 5124642..6d5f6d1 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -128,11 +128,6 @@
 			 _PAGE_SOFT_DIRTY | _PAGE_NUMA)
 #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE | _PAGE_NUMA)
 
-#define _PAGE_CACHE_WB		(0)
-#define _PAGE_CACHE_WC		(_PAGE_PWT)
-#define _PAGE_CACHE_UC_MINUS	(_PAGE_PCD)
-#define _PAGE_CACHE_UC		(_PAGE_PCD | _PAGE_PWT)
-
 /*
  * The cache modes defined here are used to translate between pure SW usage
  * and the HW defined cache mode bits and/or PAT entries.
@@ -178,41 +173,27 @@ enum page_cache_mode {
 
 #define __PAGE_KERNEL_RO		(__PAGE_KERNEL & ~_PAGE_RW)
 #define __PAGE_KERNEL_RX		(__PAGE_KERNEL_EXEC & ~_PAGE_RW)
-#define __PAGE_KERNEL_EXEC_NOCACHE	(__PAGE_KERNEL_EXEC | _PAGE_PCD | _PAGE_PWT)
-#define __PAGE_KERNEL_WC		(__PAGE_KERNEL | _PAGE_CACHE_WC)
-#define __PAGE_KERNEL_NOCACHE		(__PAGE_KERNEL | _PAGE_PCD | _PAGE_PWT)
-#define __PAGE_KERNEL_UC_MINUS		(__PAGE_KERNEL | _PAGE_PCD)
+#define __PAGE_KERNEL_NOCACHE		(__PAGE_KERNEL | _PAGE_NOCACHE)
 #define __PAGE_KERNEL_VSYSCALL		(__PAGE_KERNEL_RX | _PAGE_USER)
 #define __PAGE_KERNEL_VVAR		(__PAGE_KERNEL_RO | _PAGE_USER)
-#define __PAGE_KERNEL_VVAR_NOCACHE	(__PAGE_KERNEL_VVAR | _PAGE_PCD | _PAGE_PWT)
 #define __PAGE_KERNEL_LARGE		(__PAGE_KERNEL | _PAGE_PSE)
-#define __PAGE_KERNEL_LARGE_NOCACHE	(__PAGE_KERNEL | _PAGE_CACHE_UC | _PAGE_PSE)
 #define __PAGE_KERNEL_LARGE_EXEC	(__PAGE_KERNEL_EXEC | _PAGE_PSE)
 
 #define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
 #define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
-#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS)
-#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC)
 
 #define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
 #define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
 #define PAGE_KERNEL_EXEC		__pgprot(__PAGE_KERNEL_EXEC)
 #define PAGE_KERNEL_RX			__pgprot(__PAGE_KERNEL_RX)
-#define PAGE_KERNEL_WC			__pgprot(__PAGE_KERNEL_WC)
 #define PAGE_KERNEL_NOCACHE		__pgprot(__PAGE_KERNEL_NOCACHE)
-#define PAGE_KERNEL_UC_MINUS		__pgprot(__PAGE_KERNEL_UC_MINUS)
-#define PAGE_KERNEL_EXEC_NOCACHE	__pgprot(__PAGE_KERNEL_EXEC_NOCACHE)
 #define PAGE_KERNEL_LARGE		__pgprot(__PAGE_KERNEL_LARGE)
-#define PAGE_KERNEL_LARGE_NOCACHE	__pgprot(__PAGE_KERNEL_LARGE_NOCACHE)
 #define PAGE_KERNEL_LARGE_EXEC		__pgprot(__PAGE_KERNEL_LARGE_EXEC)
 #define PAGE_KERNEL_VSYSCALL		__pgprot(__PAGE_KERNEL_VSYSCALL)
 #define PAGE_KERNEL_VVAR		__pgprot(__PAGE_KERNEL_VVAR)
-#define PAGE_KERNEL_VVAR_NOCACHE	__pgprot(__PAGE_KERNEL_VVAR_NOCACHE)
 
 #define PAGE_KERNEL_IO			__pgprot(__PAGE_KERNEL_IO)
 #define PAGE_KERNEL_IO_NOCACHE		__pgprot(__PAGE_KERNEL_IO_NOCACHE)
-#define PAGE_KERNEL_IO_UC_MINUS		__pgprot(__PAGE_KERNEL_IO_UC_MINUS)
-#define PAGE_KERNEL_IO_WC		__pgprot(__PAGE_KERNEL_IO_WC)
 
 /*         xwr */
 #define __P000	PAGE_NONE
-- 
1.8.4.5


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH V6 15/18] x86: Support PAT bit in pagetable dump for lower levels
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (13 preceding siblings ...)
  2014-11-03 13:02 ` [PATCH V6 14/18] x86: Clean up pgtable_types.h Juergen Gross
@ 2014-11-03 13:02 ` Juergen Gross
  2014-11-16 10:58   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 13:02 ` [PATCH V6 16/18] x86: Respect PAT bit when copying pte values between large and normal pages Juergen Gross
                   ` (5 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:02 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Dumping page table protection bits is not correct for entries on levels
2 and 3 regarding the PAT bit, which sits at a different position than
on level 4.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
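For background: in 4k ptes the PAT bit is bit 7 (_PAGE_BIT_PAT), while
in 2M/1G entries bit 7 is PSE and the PAT bit sits at bit 12
(_PAGE_BIT_PAT_LARGE). The dump therefore has to test a different flag
depending on the level, as done below:

	if ((level == 4 && pr & _PAGE_PAT) ||
	    ((level == 3 || level == 2) && pr & _PAGE_PAT_LARGE))
		pt_dump_cont_printf(m, dmsg, "pat ");
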
 arch/x86/mm/dump_pagetables.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index 95a427e..6c2ca03 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -126,7 +126,7 @@ static void printk_prot(struct seq_file *m, pgprot_t prot, int level, bool dmsg)
 
 	if (!pgprot_val(prot)) {
 		/* Not present */
-		pt_dump_cont_printf(m, dmsg, "                          ");
+		pt_dump_cont_printf(m, dmsg, "                              ");
 	} else {
 		if (pr & _PAGE_USER)
 			pt_dump_cont_printf(m, dmsg, "USR ");
@@ -145,18 +145,16 @@ static void printk_prot(struct seq_file *m, pgprot_t prot, int level, bool dmsg)
 		else
 			pt_dump_cont_printf(m, dmsg, "    ");
 
-		/* Bit 9 has a different meaning on level 3 vs 4 */
-		if (level <= 3) {
-			if (pr & _PAGE_PSE)
-				pt_dump_cont_printf(m, dmsg, "PSE ");
-			else
-				pt_dump_cont_printf(m, dmsg, "    ");
-		} else {
-			if (pr & _PAGE_PAT)
-				pt_dump_cont_printf(m, dmsg, "pat ");
-			else
-				pt_dump_cont_printf(m, dmsg, "    ");
-		}
+		/* Bit 7 has a different meaning on level 3 vs 4 */
+		if (level <= 3 && pr & _PAGE_PSE)
+			pt_dump_cont_printf(m, dmsg, "PSE ");
+		else
+			pt_dump_cont_printf(m, dmsg, "    ");
+		if ((level == 4 && pr & _PAGE_PAT) ||
+		    ((level == 3 || level == 2) && pr & _PAGE_PAT_LARGE))
+			pt_dump_cont_printf(m, dmsg, "pat ");
+		else
+			pt_dump_cont_printf(m, dmsg, "    ");
 		if (pr & _PAGE_GLOBAL)
 			pt_dump_cont_printf(m, dmsg, "GLB ");
 		else
-- 
1.8.4.5


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH V6 16/18] x86: Respect PAT bit when copying pte values between large and normal pages
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (14 preceding siblings ...)
  2014-11-03 13:02 ` [PATCH V6 15/18] x86: Support PAT bit in pagetable dump for lower levels Juergen Gross
@ 2014-11-03 13:02 ` Juergen Gross
  2014-11-16 10:58   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 13:02 ` [PATCH V6 17/18] x86: Enable PAT to use cache mode translation tables Juergen Gross
                   ` (4 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:02 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

The PAT bit in the ptes is not moved to the correct position when
copying page protection attributes between entries of differently sized
pages. Translate the ptes according to their page size.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
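A minimal sketch of the conversion (helpers introduced in patch 1),
assuming a 2M mapping whose attributes are carried over to 4k ptes and
back:

	pgprot_t prot;

	/* large -> 4k: PAT moves from bit 12 down to bit 7 */
	prot = pgprot_large_2_4k(pte_pgprot(pte_clrhuge(*kpte)));
	/* 4k -> large: PAT moves from bit 7 up to bit 12 */
	prot = pgprot_4k_2_large(prot);
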
 arch/x86/mm/pageattr.c | 33 +++++++++++++++++++++++----------
 1 file changed, 23 insertions(+), 10 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index de807c9..6c8e3fd 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -485,14 +485,23 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 
 	/*
 	 * We are safe now. Check whether the new pgprot is the same:
+	 * Convert protection attributes to 4k-format, as cpa->mask* are set
+	 * up accordingly.
 	 */
 	old_pte = *kpte;
-	old_prot = req_prot = pte_pgprot(old_pte);
+	old_prot = req_prot = pgprot_large_2_4k(pte_pgprot(old_pte));
 
 	pgprot_val(req_prot) &= ~pgprot_val(cpa->mask_clr);
 	pgprot_val(req_prot) |= pgprot_val(cpa->mask_set);
 
 	/*
+	 * req_prot is in format of 4k pages. It must be converted to large
+	 * page format: the caching mode includes the PAT bit located at
+	 * different bit positions in the two formats.
+	 */
+	req_prot = pgprot_4k_2_large(req_prot);
+
+	/*
 	 * Set the PSE and GLOBAL flags only if the PRESENT flag is
 	 * set otherwise pmd_present/pmd_huge will return true even on
 	 * a non present pmd. The canon_pgprot will clear _PAGE_GLOBAL
@@ -585,13 +594,10 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 
 	paravirt_alloc_pte(&init_mm, page_to_pfn(base));
 	ref_prot = pte_pgprot(pte_clrhuge(*kpte));
-	/*
-	 * If we ever want to utilize the PAT bit, we need to
-	 * update this function to make sure it's converted from
-	 * bit 12 to bit 7 when we cross from the 2MB level to
-	 * the 4K level:
-	 */
-	WARN_ON_ONCE(pgprot_val(ref_prot) & _PAGE_PAT_LARGE);
+
+	/* promote PAT bit to correct position */
+	if (level == PG_LEVEL_2M)
+		ref_prot = pgprot_large_2_4k(ref_prot);
 
 #ifdef CONFIG_X86_64
 	if (level == PG_LEVEL_1G) {
@@ -879,6 +885,7 @@ static int populate_pmd(struct cpa_data *cpa,
 {
 	unsigned int cur_pages = 0;
 	pmd_t *pmd;
+	pgprot_t pmd_pgprot;
 
 	/*
 	 * Not on a 2M boundary?
@@ -910,6 +917,8 @@ static int populate_pmd(struct cpa_data *cpa,
 	if (num_pages == cur_pages)
 		return cur_pages;
 
+	pmd_pgprot = pgprot_4k_2_large(pgprot);
+
 	while (end - start >= PMD_SIZE) {
 
 		/*
@@ -921,7 +930,8 @@ static int populate_pmd(struct cpa_data *cpa,
 
 		pmd = pmd_offset(pud, start);
 
-		set_pmd(pmd, __pmd(cpa->pfn | _PAGE_PSE | massage_pgprot(pgprot)));
+		set_pmd(pmd, __pmd(cpa->pfn | _PAGE_PSE |
+				   massage_pgprot(pmd_pgprot)));
 
 		start	  += PMD_SIZE;
 		cpa->pfn  += PMD_SIZE;
@@ -949,6 +959,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, pgd_t *pgd,
 	pud_t *pud;
 	unsigned long end;
 	int cur_pages = 0;
+	pgprot_t pud_pgprot;
 
 	end = start + (cpa->numpages << PAGE_SHIFT);
 
@@ -986,12 +997,14 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, pgd_t *pgd,
 		return cur_pages;
 
 	pud = pud_offset(pgd, start);
+	pud_pgprot = pgprot_4k_2_large(pgprot);
 
 	/*
 	 * Map everything starting from the Gb boundary, possibly with 1G pages
 	 */
 	while (end - start >= PUD_SIZE) {
-		set_pud(pud, __pud(cpa->pfn | _PAGE_PSE | massage_pgprot(pgprot)));
+		set_pud(pud, __pud(cpa->pfn | _PAGE_PSE |
+				   massage_pgprot(pud_pgprot)));
 
 		start	  += PUD_SIZE;
 		cpa->pfn  += PUD_SIZE;
-- 
1.8.4.5


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH V6 17/18] x86: Enable PAT to use cache mode translation tables
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (15 preceding siblings ...)
  2014-11-03 13:02 ` [PATCH V6 16/18] x86: Respect PAT bit when copying pte values between large and normal pages Juergen Gross
@ 2014-11-03 13:02 ` Juergen Gross
  2014-11-16 10:58   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 13:02 ` [PATCH V6 18/18] xen: Support Xen pv-domains using PAT Juergen Gross
                   ` (3 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:02 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

Update the translation tables from cache mode to pgprot values
according to the PAT settings. This enables changing the cache
attributes of a PAT index in just one place without having to change
all its users.

With this change it is possible to use the same kernel with different
PAT configurations, e.g. supporting Xen.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Toshi Kani <toshi.kani@hp.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
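A worked example (not part of the patch): with the kernel's default PAT
layout (WB WC UC- UC WB WC UC- UC) the read-back below simply re-derives
the static tables, e.g. WC stays at PAT entry 1:

	/* after pat_init_cache_modes() with the default layout: */
	cachemode2protval(_PAGE_CACHE_MODE_WC);	/* == _PAGE_PWT */

WT and WP do not occur in this layout, so their table entries keep the
static _PAGE_PCD (UC-) fallback until a PAT MSR containing WT/WP
entries is programmed.
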
 arch/x86/include/asm/pat.h           |  1 +
 arch/x86/include/asm/pgtable_types.h |  4 +++
 arch/x86/mm/init.c                   |  8 ++++++
 arch/x86/mm/mm_internal.h            |  2 ++
 arch/x86/mm/pat.c                    | 50 ++++++++++++++++++++++++++++++++++--
 5 files changed, 63 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/pat.h b/arch/x86/include/asm/pat.h
index 150407a..91bc4ba 100644
--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -11,6 +11,7 @@ static const int pat_enabled;
 #endif
 
 extern void pat_init(void);
+void pat_init_cache_modes(void);
 
 extern int reserve_memtype(u64 start, u64 end,
 		enum page_cache_mode req_pcm, enum page_cache_mode *ret_pcm);
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 6d5f6d1..af447f9 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -351,6 +351,10 @@ extern uint8_t __pte2cachemode_tbl[8];
 	((((cb) >> (_PAGE_BIT_PAT - 2)) & 4) |		\
 	 (((cb) >> (_PAGE_BIT_PCD - 1)) & 2) |		\
 	 (((cb) >> _PAGE_BIT_PWT) & 1))
+#define __cm_idx2pte(i)					\
+	((((i) & 4) << (_PAGE_BIT_PAT - 2)) |		\
+	 (((i) & 2) << (_PAGE_BIT_PCD - 1)) |		\
+	 (((i) & 1) << _PAGE_BIT_PWT))
 
 static inline unsigned long cachemode2protval(enum page_cache_mode pcm)
 {
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index a9776ba..82b41d5 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -716,3 +716,11 @@ void __init zone_sizes_init(void)
 	free_area_init_nodes(max_zone_pfns);
 }
 
+void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache)
+{
+	/* entry 0 MUST be WB (hardwired to speed up translations) */
+	BUG_ON(!entry && cache != _PAGE_CACHE_MODE_WB);
+
+	__cachemode2pte_tbl[cache] = __cm_idx2pte(entry);
+	__pte2cachemode_tbl[entry] = cache;
+}
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 6b563a1..62474ba 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -16,4 +16,6 @@ void zone_sizes_init(void);
 
 extern int after_bootmem;
 
+void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache);
+
 #endif	/* __X86_MM_INTERNAL_H */
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index ef75f3f..4c60127 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -31,6 +31,7 @@
 #include <asm/io.h>
 
 #include "pat_internal.h"
+#include "mm_internal.h"
 
 #ifdef CONFIG_X86_PAT
 int __read_mostly pat_enabled = 1;
@@ -75,6 +76,52 @@ enum {
 	PAT_UC_MINUS = 7,	/* UC, but can be overridden by MTRR */
 };
 
+#define CM(c) (_PAGE_CACHE_MODE_ ## c)
+
+static enum page_cache_mode pat_get_cache_mode(unsigned pat_val, char *msg)
+{
+	enum page_cache_mode cache;
+	char *cache_mode;
+
+	switch (pat_val) {
+	case PAT_UC:       cache = CM(UC);       cache_mode = "UC  "; break;
+	case PAT_WC:       cache = CM(WC);       cache_mode = "WC  "; break;
+	case PAT_WT:       cache = CM(WT);       cache_mode = "WT  "; break;
+	case PAT_WP:       cache = CM(WP);       cache_mode = "WP  "; break;
+	case PAT_WB:       cache = CM(WB);       cache_mode = "WB  "; break;
+	case PAT_UC_MINUS: cache = CM(UC_MINUS); cache_mode = "UC- "; break;
+	default:           cache = CM(WB);       cache_mode = "WB  "; break;
+	}
+
+	memcpy(msg, cache_mode, 4);
+
+	return cache;
+}
+
+#undef CM
+
+/*
+ * Update the cache mode to pgprot translation tables according to PAT
+ * configuration.
+ * Using lower indices is preferred, so we start with highest index.
+ */
+void pat_init_cache_modes(void)
+{
+	int i;
+	enum page_cache_mode cache;
+	char pat_msg[33];
+	u64 pat;
+
+	rdmsrl(MSR_IA32_CR_PAT, pat);
+	pat_msg[32] = 0;
+	for (i = 7; i >= 0; i--) {
+		cache = pat_get_cache_mode((pat >> (i * 8)) & 7,
+					   pat_msg + 4 * i);
+		update_cache_mode_entry(i, cache);
+	}
+	pr_info("PAT configuration [0-7]: %s\n", pat_msg);
+}
+
 #define PAT(x, y)	((u64)PAT_ ## y << ((x)*8))
 
 void pat_init(void)
@@ -124,8 +171,7 @@ void pat_init(void)
 	wrmsrl(MSR_IA32_CR_PAT, pat);
 
 	if (boot_cpu)
-		printk(KERN_INFO "x86 PAT enabled: cpu %d, old 0x%Lx, new 0x%Lx\n",
-		       smp_processor_id(), boot_pat_state, pat);
+		pat_init_cache_modes();
 }
 
 #undef PAT
-- 
1.8.4.5


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH V6 18/18] xen: Support Xen pv-domains using PAT
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (16 preceding siblings ...)
  2014-11-03 13:02 ` [PATCH V6 17/18] x86: Enable PAT to use cache mode translation tables Juergen Gross
@ 2014-11-03 13:02 ` Juergen Gross
  2014-11-16 10:59   ` [tip:x86/mm] " tip-bot for Juergen Gross
  2014-11-03 16:43 ` [PATCH V6 00/18] x86: Full support of PAT Toshi Kani
                   ` (2 subsequent siblings)
  20 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2014-11-03 13:02 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas
  Cc: Juergen Gross

With the dynamic mapping between cache modes and pgprot values it is
now possible to use all cache modes via the Xen hypervisor PAT settings
in a pv domain.

All that needs to be done is to read the PAT configuration MSR and set
up the translation tables accordingly.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
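A sketch of the effect, using the Xen PAT layout documented in the
comment removed from mmu.c below (WB WT UC- UC WC WP rsv rsv): after
pat_init_cache_modes() has read the hypervisor's MSR, WC requests
translate to PAT entry 4 and WT requests to entry 1:

	/* in a pv domain, after pat_init_cache_modes(): */
	cachemode2protval(_PAGE_CACHE_MODE_WC);	/* == _PAGE_PAT */
	cachemode2protval(_PAGE_CACHE_MODE_WT);	/* == _PAGE_PWT */
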
 arch/x86/xen/enlighten.c | 25 +++++++------------------
 arch/x86/xen/mmu.c       | 47 +----------------------------------------------
 arch/x86/xen/xen-ops.h   |  1 -
 3 files changed, 8 insertions(+), 65 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index fac5e4f..6bf3a13 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1100,12 +1100,7 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
 		/* Fast syscall setup is all done in hypercalls, so
 		   these are all ignored.  Stub them out here to stop
 		   Xen console noise. */
 		break;
-
-	case MSR_IA32_CR_PAT:
-		if (smp_processor_id() == 0)
-			xen_set_pat(((u64)high << 32) | low);
-		break;
 
 	default:
 		ret = native_write_msr_safe(msr, low, high);
@@ -1561,10 +1555,6 @@ asmlinkage __visible void __init xen_start_kernel(void)
 
 	/* Prevent unwanted bits from being set in PTEs. */
 	__supported_pte_mask &= ~_PAGE_GLOBAL;
-#if 0
-	if (!xen_initial_domain())
-#endif
-		__supported_pte_mask &= ~(_PAGE_PWT | _PAGE_PCD);
 
 	/*
 	 * Prevent page tables from being allocated in highmem, even
@@ -1618,14 +1608,6 @@ asmlinkage __visible void __init xen_start_kernel(void)
 	 */
 	acpi_numa = -1;
 #endif
-#ifdef CONFIG_X86_PAT
-	/*
-	 * For right now disable the PAT. We should remove this once
-	 * git commit 8eaffa67b43e99ae581622c5133e20b0f48bcef1
-	 * (xen/pat: Disable PAT support for now) is reverted.
-	 */
-	pat_enabled = 0;
-#endif
 	/* Don't do the full vcpu_info placement stuff until we have a
 	   possible map and a non-dummy shared_info. */
 	per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
@@ -1636,6 +1618,13 @@ asmlinkage __visible void __init xen_start_kernel(void)
 	xen_raw_console_write("mapping kernel into physical memory\n");
 	xen_setup_kernel_pagetable((pgd_t *)xen_start_info->pt_base, xen_start_info->nr_pages);
 
+	/*
+	 * Modify the cache mode translation tables to match Xen's PAT
+	 * configuration.
+	 */
+
+	pat_init_cache_modes();
+
 	/* keep using Xen gdt for now; no urgent need to change it */
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index a8a1a3d..9855eb8 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -410,13 +410,7 @@ static pteval_t pte_pfn_to_mfn(pteval_t val)
 __visible pteval_t xen_pte_val(pte_t pte)
 {
 	pteval_t pteval = pte.pte;
-#if 0
-	/* If this is a WC pte, convert back from Xen WC to Linux WC */
-	if ((pteval & (_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)) == _PAGE_PAT) {
-		WARN_ON(!pat_enabled);
-		pteval = (pteval & ~_PAGE_PAT) | _PAGE_PWT;
-	}
-#endif
+
 	return pte_mfn_to_pfn(pteval);
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_pte_val);
@@ -427,47 +421,8 @@ __visible pgdval_t xen_pgd_val(pgd_t pgd)
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_pgd_val);
 
-/*
- * Xen's PAT setup is part of its ABI, though I assume entries 6 & 7
- * are reserved for now, to correspond to the Intel-reserved PAT
- * types.
- *
- * We expect Linux's PAT set as follows:
- *
- * Idx  PTE flags        Linux    Xen    Default
- * 0                     WB       WB     WB
- * 1            PWT      WC       WT     WT
- * 2        PCD          UC-      UC-    UC-
- * 3        PCD PWT      UC       UC     UC
- * 4    PAT              WB       WC     WB
- * 5    PAT     PWT      WC       WP     WT
- * 6    PAT PCD          UC-      rsv    UC-
- * 7    PAT PCD PWT      UC       rsv    UC
- */
-
-void xen_set_pat(u64 pat)
-{
-	/* We expect Linux to use a PAT setting of
-	 * UC UC- WC WB (ignoring the PAT flag) */
-	WARN_ON(pat != 0x0007010600070106ull);
-}
-
 __visible pte_t xen_make_pte(pteval_t pte)
 {
-#if 0
-	/* If Linux is trying to set a WC pte, then map to the Xen WC.
-	 * If _PAGE_PAT is set, then it probably means it is really
-	 * _PAGE_PSE, so avoid fiddling with the PAT mapping and hope
-	 * things work out OK...
-	 *
-	 * (We should never see kernel mappings with _PAGE_PSE set,
-	 * but we could see hugetlbfs mappings, I think.).
-	 */
-	if (pat_enabled && !WARN_ON(pte & _PAGE_PAT)) {
-		if ((pte & (_PAGE_PCD | _PAGE_PWT)) == _PAGE_PWT)
-			pte = (pte & ~(_PAGE_PCD | _PAGE_PWT)) | _PAGE_PAT;
-	}
-#endif
 	pte = pte_pfn_to_mfn(pte);
 
 	return native_make_pte(pte);
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 28c7e0b..4ab9298 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -33,7 +33,6 @@ extern unsigned long xen_max_p2m_pfn;
 
 void xen_mm_pin_all(void);
 void xen_mm_unpin_all(void);
-void xen_set_pat(u64);
 
 char * __init xen_memory_setup(void);
 char * xen_auto_xlated_memory_setup(void);
-- 
1.8.4.5


^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [PATCH V6 00/18] x86: Full support of PAT
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (17 preceding siblings ...)
  2014-11-03 13:02 ` [PATCH V6 18/18] xen: Support Xen pv-domains using PAT Juergen Gross
@ 2014-11-03 16:43 ` Toshi Kani
  2014-11-14  6:30 ` Juergen Gross
  2014-11-16 13:08 ` Ingo Molnar
  20 siblings, 0 replies; 47+ messages in thread
From: Toshi Kani @ 2014-11-03 16:43 UTC (permalink / raw)
  To: Juergen Gross
  Cc: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, plagnioj,
	tomi.valkeinen, bhelgaas

On Mon, 2014-11-03 at 14:01 +0100, Juergen Gross wrote:
> [...]

For patch 01/18 to 16/18:

Reviewed-by: Toshi Kani <toshi.kani@hp.com>

Thanks,
-Toshi



^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH V6 10/18] x86: Remove looking for setting of _PAGE_PAT_LARGE in pageattr.c
  2014-11-03 13:01 ` [PATCH V6 10/18] x86: Remove looking for setting of _PAGE_PAT_LARGE in pageattr.c Juergen Gross
@ 2014-11-03 16:44   ` Thomas Gleixner
  2014-11-16 10:57   ` [tip:x86/mm] " tip-bot for Juergen Gross
  1 sibling, 0 replies; 47+ messages in thread
From: Thomas Gleixner @ 2014-11-03 16:44 UTC (permalink / raw)
  To: Juergen Gross
  Cc: hpa, x86, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas

On Mon, 3 Nov 2014, Juergen Gross wrote:

> When modifying page attributes via change_page_attr_set_clr() don't
> test for setting _PAGE_PAT_LARGE, as this is
> - never done
> - PAT support for large pages is not included in the kernel up to now
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH V6 03/18] x86: Use new cache mode type in drivers/video/fbdev/gbefb.c
  2014-11-03 13:01 ` [PATCH V6 03/18] x86: Use new cache mode type in drivers/video/fbdev/gbefb.c Juergen Gross
@ 2014-11-07  8:16   ` Tomi Valkeinen
  2014-11-16 10:55   ` [tip:x86/mm] x86: Use new cache mode type in drivers/video/fbdev/ gbefb.c tip-bot for Juergen Gross
  1 sibling, 0 replies; 47+ messages in thread
From: Tomi Valkeinen @ 2014-11-07  8:16 UTC (permalink / raw)
  To: Juergen Gross
  Cc: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, bhelgaas


On 03/11/14 15:01, Juergen Gross wrote:
> Instead of directly using the cache mode bits in the pte switch to
> using the cache mode type.
> 
> Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  drivers/video/fbdev/gbefb.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/video/fbdev/gbefb.c b/drivers/video/fbdev/gbefb.c
> index 4aa56ba..6d9ef39 100644
> --- a/drivers/video/fbdev/gbefb.c
> +++ b/drivers/video/fbdev/gbefb.c
> @@ -54,7 +54,8 @@ struct gbefb_par {
>  #endif
>  #endif
>  #ifdef CONFIG_X86
> -#define pgprot_fb(_prot) ((_prot) | _PAGE_PCD)
> +#define pgprot_fb(_prot) (((_prot) & ~_PAGE_CACHE_MASK) |	\
> +			  cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS))
>  #endif
>  
>  /*
> 

For this and vermilion fb:

Acked-by: Tomi Valkeinen <tomi.valkeinen@ti.com>

 Tomi




^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH V6 00/18] x86: Full support of PAT
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (18 preceding siblings ...)
  2014-11-03 16:43 ` [PATCH V6 00/18] x86: Full support of PAT Toshi Kani
@ 2014-11-14  6:30 ` Juergen Gross
  2014-11-16 13:08 ` Ingo Molnar
  20 siblings, 0 replies; 47+ messages in thread
From: Juergen Gross @ 2014-11-14  6:30 UTC (permalink / raw)
  To: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas

Ingo,

could you take the patches, please?


Juergen

On 11/03/2014 02:01 PM, Juergen Gross wrote:
> [...]


^ permalink raw reply	[flat|nested] 47+ messages in thread

* [tip:x86/mm] x86: Make page cache mode a real type
  2014-11-03 13:01 ` [PATCH V6 01/18] x86: Make page cache mode a real type Juergen Gross
@ 2014-11-16 10:54   ` tip-bot for Juergen Gross
  2015-01-22  7:11   ` [PATCH V6 01/18] " Steven Noonan
  1 sibling, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:54 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: mingo, stefan.bader, hpa, jgross, tglx, linux-kernel

Commit-ID:  281d4078bec366d60990add9d91a952953bd0d72
Gitweb:     http://git.kernel.org/tip/281d4078bec366d60990add9d91a952953bd0d72
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:01:47 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:24 +0100

x86: Make page cache mode a real type

At the moment there are a lot of places that handle setting or getting
the page cache mode by treating the pgprot bits as equal to the cache
mode. This is only true because there are a lot of assumptions about
the setup of the PAT MSR. Otherwise the cache type needs to get
translated into pgprot bits and vice versa.

This patch tries to prepare for that by introducing a separate type
for the cache mode and adding functions to translate between those and
pgprot values.

To keep the performance penalty small, the translation between cache
mode and pgprot values is done via tables which contain the relevant
information. Write-back cache mode is hard-wired to be 0; all other
modes are configurable via those tables. For large pages there are
translation functions, as the PAT bit is located at different positions
in the ptes of 4k and large pages.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-2-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
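A short usage sketch of the helpers added below (illustrative only):

	pgprot_t prot;
	enum page_cache_mode pcm;

	prot = cachemode2pgprot(_PAGE_CACHE_MODE_UC_MINUS);
	pcm  = pgprot2cachemode(prot);	/* back to UC_MINUS */

	/* PAT bit moves between bit 7 (4k) and bit 12 (large): */
	prot = pgprot_4k_2_large(prot);
	prot = pgprot_large_2_4k(prot);
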
 arch/x86/include/asm/pgtable_types.h | 73 +++++++++++++++++++++++++++++++++++-
 arch/x86/mm/init.c                   | 29 ++++++++++++++
 2 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 0778964..5124642 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -128,12 +128,34 @@
 			 _PAGE_SOFT_DIRTY | _PAGE_NUMA)
 #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE | _PAGE_NUMA)
 
-#define _PAGE_CACHE_MASK	(_PAGE_PCD | _PAGE_PWT)
 #define _PAGE_CACHE_WB		(0)
 #define _PAGE_CACHE_WC		(_PAGE_PWT)
 #define _PAGE_CACHE_UC_MINUS	(_PAGE_PCD)
 #define _PAGE_CACHE_UC		(_PAGE_PCD | _PAGE_PWT)
 
+/*
+ * The cache modes defined here are used to translate between pure SW usage
+ * and the HW defined cache mode bits and/or PAT entries.
+ *
+ * The resulting bits for PWT, PCD and PAT should be chosen in a way
+ * to have the WB mode at index 0 (all bits clear). This is the default
+ * right now and likely would break too much if changed.
+ */
+#ifndef __ASSEMBLY__
+enum page_cache_mode {
+	_PAGE_CACHE_MODE_WB = 0,
+	_PAGE_CACHE_MODE_WC = 1,
+	_PAGE_CACHE_MODE_UC_MINUS = 2,
+	_PAGE_CACHE_MODE_UC = 3,
+	_PAGE_CACHE_MODE_WT = 4,
+	_PAGE_CACHE_MODE_WP = 5,
+	_PAGE_CACHE_MODE_NUM = 8
+};
+#endif
+
+#define _PAGE_CACHE_MASK	(_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)
+#define _PAGE_NOCACHE		(cachemode2protval(_PAGE_CACHE_MODE_UC))
+
 #define PAGE_NONE	__pgprot(_PAGE_PROTNONE | _PAGE_ACCESSED)
 #define PAGE_SHARED	__pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
 				 _PAGE_ACCESSED | _PAGE_NX)
@@ -341,6 +363,55 @@ static inline pmdval_t pmdnuma_flags(pmd_t pmd)
 #define pgprot_val(x)	((x).pgprot)
 #define __pgprot(x)	((pgprot_t) { (x) } )
 
+extern uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM];
+extern uint8_t __pte2cachemode_tbl[8];
+
+#define __pte2cm_idx(cb)				\
+	((((cb) >> (_PAGE_BIT_PAT - 2)) & 4) |		\
+	 (((cb) >> (_PAGE_BIT_PCD - 1)) & 2) |		\
+	 (((cb) >> _PAGE_BIT_PWT) & 1))
+
+static inline unsigned long cachemode2protval(enum page_cache_mode pcm)
+{
+	if (likely(pcm == 0))
+		return 0;
+	return __cachemode2pte_tbl[pcm];
+}
+static inline pgprot_t cachemode2pgprot(enum page_cache_mode pcm)
+{
+	return __pgprot(cachemode2protval(pcm));
+}
+static inline enum page_cache_mode pgprot2cachemode(pgprot_t pgprot)
+{
+	unsigned long masked;
+
+	masked = pgprot_val(pgprot) & _PAGE_CACHE_MASK;
+	if (likely(masked == 0))
+		return 0;
+	return __pte2cachemode_tbl[__pte2cm_idx(masked)];
+}
+static inline pgprot_t pgprot_4k_2_large(pgprot_t pgprot)
+{
+	pgprot_t new;
+	unsigned long val;
+
+	val = pgprot_val(pgprot);
+	pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
+		((val & _PAGE_PAT) << (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
+	return new;
+}
+static inline pgprot_t pgprot_large_2_4k(pgprot_t pgprot)
+{
+	pgprot_t new;
+	unsigned long val;
+
+	val = pgprot_val(pgprot);
+	pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
+			  ((val & _PAGE_PAT_LARGE) >>
+			   (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
+	return new;
+}
+
 
 typedef struct page *pgtable_t;
 
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 66dba36..a9776ba 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -27,6 +27,35 @@
 
 #include "mm_internal.h"
 
+/*
+ * Tables translating between enum page_cache_mode and pte encoding.
+ * The minimal set of supported modes is defined statically; the tables
+ * are modified if more cache modes become available.
+ * The index into __cachemode2pte_tbl is the cache mode.
+ * The index into __pte2cachemode_tbl is formed from the caching attribute
+ * bits of the pte (_PAGE_PWT, _PAGE_PCD, _PAGE_PAT) at bit positions 0, 1, 2.
+ */
+uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM] = {
+	[_PAGE_CACHE_MODE_WB]		= 0,
+	[_PAGE_CACHE_MODE_WC]		= _PAGE_PWT,
+	[_PAGE_CACHE_MODE_UC_MINUS]	= _PAGE_PCD,
+	[_PAGE_CACHE_MODE_UC]		= _PAGE_PCD | _PAGE_PWT,
+	[_PAGE_CACHE_MODE_WT]		= _PAGE_PCD,
+	[_PAGE_CACHE_MODE_WP]		= _PAGE_PCD,
+};
+EXPORT_SYMBOL_GPL(__cachemode2pte_tbl);
+uint8_t __pte2cachemode_tbl[8] = {
+	[__pte2cm_idx(0)] = _PAGE_CACHE_MODE_WB,
+	[__pte2cm_idx(_PAGE_PWT)] = _PAGE_CACHE_MODE_WC,
+	[__pte2cm_idx(_PAGE_PCD)] = _PAGE_CACHE_MODE_UC_MINUS,
+	[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD)] = _PAGE_CACHE_MODE_UC,
+	[__pte2cm_idx(_PAGE_PAT)] = _PAGE_CACHE_MODE_WB,
+	[__pte2cm_idx(_PAGE_PWT | _PAGE_PAT)] = _PAGE_CACHE_MODE_WC,
+	[__pte2cm_idx(_PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC_MINUS,
+	[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC,
+};
+EXPORT_SYMBOL_GPL(__pte2cachemode_tbl);
+
 static unsigned long __initdata pgt_buf_start;
 static unsigned long __initdata pgt_buf_end;
 static unsigned long __initdata pgt_buf_top;
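
To see how the pieces fit together, here is a minimal round-trip sketch
(illustration only, not part of the patch; it assumes the statically
initialized tables above):

        /* Round-trip a cache mode through both translation tables. */
        enum page_cache_mode pcm = _PAGE_CACHE_MODE_WC;
        pgprot_t prot = cachemode2pgprot(pcm);  /* WC -> _PAGE_PWT */

        /* __pte2cm_idx() packs PWT, PCD and PAT into index bits 0, 1
         * and 2, so the reverse lookup yields _PAGE_CACHE_MODE_WC again.
         */
        BUG_ON(pgprot2cachemode(prot) != pcm);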

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [tip:x86/mm] x86: Use new cache mode type in include/asm/fb.h
  2014-11-03 13:01 ` [PATCH V6 02/18] x86: Use new cache mode type in include/asm/fb.h Juergen Gross
@ 2014-11-16 10:54   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:54 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, stefan.bader, tglx, jgross, mingo, hpa

Commit-ID:  c27ce0af896b7cc1718995f2b3e66e2892d5081c
Gitweb:     http://git.kernel.org/tip/c27ce0af896b7cc1718995f2b3e66e2892d5081c
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:01:48 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:24 +0100

x86: Use new cache mode type in include/asm/fb.h

Instead of directly using the cache mode bits in the pte, switch to
using the new cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-3-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/fb.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/fb.h b/arch/x86/include/asm/fb.h
index 2519d06..c3dd5e7 100644
--- a/arch/x86/include/asm/fb.h
+++ b/arch/x86/include/asm/fb.h
@@ -8,8 +8,12 @@
 static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
 				unsigned long off)
 {
+	unsigned long prot;
+
+	prot = pgprot_val(vma->vm_page_prot) & ~_PAGE_CACHE_MASK;
 	if (boot_cpu_data.x86 > 3)
-		pgprot_val(vma->vm_page_prot) |= _PAGE_PCD;
+		pgprot_val(vma->vm_page_prot) =
+			prot | cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS);
 }
 
 extern int fb_is_primary_device(struct fb_info *info);
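
The conversion idiom above recurs in the gbefb, vermilion and
arch/x86/pci patches that follow: clear all caching bits, then OR in
the translated mode. As a sketch (the helper name is hypothetical and
not part of the series):

        static inline pgprot_t pgprot_set_cachemode(pgprot_t prot,
                                                    enum page_cache_mode pcm)
        {
                /* Drop old caching bits, set bits for the wanted mode. */
                return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_MASK) |
                                cachemode2protval(pcm));
        }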

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [tip:x86/mm] x86: Use new cache mode type in drivers/video/fbdev/gbefb.c
  2014-11-03 13:01 ` [PATCH V6 03/18] x86: Use new cache mode type in drivers/video/fbdev/gbefb.c Juergen Gross
  2014-11-07  8:16   ` Tomi Valkeinen
@ 2014-11-16 10:55   ` tip-bot for Juergen Gross
  1 sibling, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:55 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: mingo, linux-kernel, jgross, tglx, stefan.bader, hpa

Commit-ID:  2d85ebf8e12e14694d6f9a4f34359c19f0738ace
Gitweb:     http://git.kernel.org/tip/2d85ebf8e12e14694d6f9a4f34359c19f0738ace
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:01:49 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:25 +0100

x86: Use new cache mode type in drivers/video/fbdev/gbefb.c

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-4-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 drivers/video/fbdev/gbefb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/video/fbdev/gbefb.c b/drivers/video/fbdev/gbefb.c
index 4aa56ba..6d9ef39 100644
--- a/drivers/video/fbdev/gbefb.c
+++ b/drivers/video/fbdev/gbefb.c
@@ -54,7 +54,8 @@ struct gbefb_par {
 #endif
 #endif
 #ifdef CONFIG_X86
-#define pgprot_fb(_prot) ((_prot) | _PAGE_PCD)
+#define pgprot_fb(_prot) (((_prot) & ~_PAGE_CACHE_MASK) |	\
+			  cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS))
 #endif
 
 /*

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [tip:x86/mm] x86: Use new cache mode type in drivers/video/fbdev/vermilion
  2014-11-03 13:01 ` [PATCH V6 04/18] x86: Use new cache mode type in drivers/video/fbdev/vermilion Juergen Gross
@ 2014-11-16 10:55   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:55 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: stefan.bader, tglx, mingo, hpa, linux-kernel, jgross

Commit-ID:  5006e45a6bc293b490638210d1a88ac391d2eb92
Gitweb:     http://git.kernel.org/tip/5006e45a6bc293b490638210d1a88ac391d2eb92
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:01:50 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:25 +0100

x86: Use new cache mode type in drivers/video/fbdev/vermilion

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-5-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 drivers/video/fbdev/vermilion/vermilion.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/video/fbdev/vermilion/vermilion.c b/drivers/video/fbdev/vermilion/vermilion.c
index 5f930ae..6b70d7f 100644
--- a/drivers/video/fbdev/vermilion/vermilion.c
+++ b/drivers/video/fbdev/vermilion/vermilion.c
@@ -1003,13 +1003,15 @@ static int vmlfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
 	struct vml_info *vinfo = container_of(info, struct vml_info, info);
 	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
 	int ret;
+	unsigned long prot;
 
 	ret = vmlfb_vram_offset(vinfo, offset);
 	if (ret)
 		return -EINVAL;
 
-	pgprot_val(vma->vm_page_prot) |= _PAGE_PCD;
-	pgprot_val(vma->vm_page_prot) &= ~_PAGE_PWT;
+	prot = pgprot_val(vma->vm_page_prot) & ~_PAGE_CACHE_MASK;
+	pgprot_val(vma->vm_page_prot) =
+		prot | cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS);
 
 	return vm_iomap_memory(vma, vinfo->vram_start,
 			vinfo->vram_contig_size);

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [tip:x86/mm] x86: Use new cache mode type in arch/x86/pci
  2014-11-03 13:01 ` [PATCH V6 05/18] x86: Use new cache mode type in arch/x86/pci Juergen Gross
@ 2014-11-16 10:55   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:55 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: stefan.bader, mingo, tglx, linux-kernel, hpa, jgross

Commit-ID:  1c64216be16404df7fab33c793890bb5076e8123
Gitweb:     http://git.kernel.org/tip/1c64216be16404df7fab33c793890bb5076e8123
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:01:51 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:25 +0100

x86: Use new cache mode type in arch/x86/pci

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-6-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/pci/i386.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/pci/i386.c b/arch/x86/pci/i386.c
index 37c1435..9b18ef3 100644
--- a/arch/x86/pci/i386.c
+++ b/arch/x86/pci/i386.c
@@ -433,14 +433,14 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
 		return -EINVAL;
 
 	if (pat_enabled && write_combine)
-		prot |= _PAGE_CACHE_WC;
+		prot |= cachemode2protval(_PAGE_CACHE_MODE_WC);
 	else if (pat_enabled || boot_cpu_data.x86 > 3)
 		/*
 		 * ioremap() and ioremap_nocache() defaults to UC MINUS for now.
 		 * To avoid attribute conflicts, request UC MINUS here
 		 * as well.
 		 */
-		prot |= _PAGE_CACHE_UC_MINUS;
+		prot |= cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS);
 
 	vma->vm_page_prot = __pgprot(prot);
 

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [tip:x86/mm] x86: Use new cache mode type in arch/x86/mm/init_64.c
  2014-11-03 13:01 ` [PATCH V6 06/18] x86: Use new cache mode type in arch/x86/mm/init_64.c Juergen Gross
@ 2014-11-16 10:55   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:55 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, tglx, hpa, stefan.bader, mingo, jgross

Commit-ID:  2df58b6d35306e8dab48923c9fbe9e1ad17537e2
Gitweb:     http://git.kernel.org/tip/2df58b6d35306e8dab48923c9fbe9e1ad17537e2
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:01:52 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:25 +0100

x86: Use new cache mode type in arch/x86/mm/init_64.c

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-7-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/mm/init_64.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index ebca30f..bd42786 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -337,12 +337,15 @@ pte_t * __init populate_extra_pte(unsigned long vaddr)
  * Create large page table mappings for a range of physical addresses.
  */
 static void __init __init_extra_mapping(unsigned long phys, unsigned long size,
-						pgprot_t prot)
+					enum page_cache_mode cache)
 {
 	pgd_t *pgd;
 	pud_t *pud;
 	pmd_t *pmd;
+	pgprot_t prot;
 
+	pgprot_val(prot) = pgprot_val(PAGE_KERNEL_LARGE) |
+		pgprot_val(pgprot_4k_2_large(cachemode2pgprot(cache)));
 	BUG_ON((phys & ~PMD_MASK) || (size & ~PMD_MASK));
 	for (; size; phys += PMD_SIZE, size -= PMD_SIZE) {
 		pgd = pgd_offset_k((unsigned long)__va(phys));
@@ -365,12 +368,12 @@ static void __init __init_extra_mapping(unsigned long phys, unsigned long size,
 
 void __init init_extra_mapping_wb(unsigned long phys, unsigned long size)
 {
-	__init_extra_mapping(phys, size, PAGE_KERNEL_LARGE);
+	__init_extra_mapping(phys, size, _PAGE_CACHE_MODE_WB);
 }
 
 void __init init_extra_mapping_uc(unsigned long phys, unsigned long size)
 {
-	__init_extra_mapping(phys, size, PAGE_KERNEL_LARGE_NOCACHE);
+	__init_extra_mapping(phys, size, _PAGE_CACHE_MODE_UC);
 }
 
 /*
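
The detour through pgprot_4k_2_large() matters here: cachemode2pgprot()
returns the 4k encoding with the PAT bit at position 7, while 2M/1G
entries keep _PAGE_PSE there and expect PAT at bit 12. A sketch of the
composition (illustration only, mirroring the hunk above):

        pgprot_t prot;

        /* Large-page protection value for an uncached extra mapping. */
        pgprot_val(prot) = pgprot_val(PAGE_KERNEL_LARGE) |
                pgprot_val(pgprot_4k_2_large(
                                cachemode2pgprot(_PAGE_CACHE_MODE_UC)));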

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [tip:x86/mm] x86: Use new cache mode type in asm/pgtable.h
  2014-11-03 13:01 ` [PATCH V6 07/18] x86: Use new cache mode type in asm/pgtable.h Juergen Gross
@ 2014-11-16 10:56   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:56 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: mingo, tglx, hpa, jgross, linux-kernel, stefan.bader

Commit-ID:  d85f33342a0f57acfbe078cdd0c4f590d5608bb7
Gitweb:     http://git.kernel.org/tip/d85f33342a0f57acfbe078cdd0c4f590d5608bb7
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:01:53 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:25 +0100

x86: Use new cache mode type in asm/pgtable.h

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type. This requires some callers of
is_new_memtype_allowed() to be changed as well.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-8-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/pgtable.h | 19 ++++++++++---------
 arch/x86/mm/ioremap.c          |  3 ++-
 arch/x86/mm/pat.c              |  8 ++++++--
 3 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index aa97a07..c112ea6 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -9,9 +9,10 @@
 /*
  * Macro to mark a page protection value as UC-
  */
-#define pgprot_noncached(prot)					\
-	((boot_cpu_data.x86 > 3)				\
-	 ? (__pgprot(pgprot_val(prot) | _PAGE_CACHE_UC_MINUS))	\
+#define pgprot_noncached(prot)						\
+	((boot_cpu_data.x86 > 3)					\
+	 ? (__pgprot(pgprot_val(prot) |					\
+		     cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS)))	\
 	 : (prot))
 
 #ifndef __ASSEMBLY__
@@ -404,8 +405,8 @@ static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
 #define canon_pgprot(p) __pgprot(massage_pgprot(p))
 
 static inline int is_new_memtype_allowed(u64 paddr, unsigned long size,
-					 unsigned long flags,
-					 unsigned long new_flags)
+					 enum page_cache_mode pcm,
+					 enum page_cache_mode new_pcm)
 {
 	/*
 	 * PAT type is always WB for untracked ranges, so no need to check.
@@ -419,10 +420,10 @@ static inline int is_new_memtype_allowed(u64 paddr, unsigned long size,
 	 * - request is uncached, return cannot be write-back
 	 * - request is write-combine, return cannot be write-back
 	 */
-	if ((flags == _PAGE_CACHE_UC_MINUS &&
-	     new_flags == _PAGE_CACHE_WB) ||
-	    (flags == _PAGE_CACHE_WC &&
-	     new_flags == _PAGE_CACHE_WB)) {
+	if ((pcm == _PAGE_CACHE_MODE_UC_MINUS &&
+	     new_pcm == _PAGE_CACHE_MODE_WB) ||
+	    (pcm == _PAGE_CACHE_MODE_WC &&
+	     new_pcm == _PAGE_CACHE_MODE_WB)) {
 		return 0;
 	}
 
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index af78e50..3a81eb9 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -142,7 +142,8 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 
 	if (prot_val != new_prot_val) {
 		if (!is_new_memtype_allowed(phys_addr, size,
-					    prot_val, new_prot_val)) {
+				pgprot2cachemode(__pgprot(prot_val)),
+				pgprot2cachemode(__pgprot(new_prot_val)))) {
 			printk(KERN_ERR
 		"ioremap error for 0x%llx-0x%llx, requested 0x%lx, got 0x%lx\n",
 				(unsigned long long)phys_addr,
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 6574388..47282c2 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -455,7 +455,9 @@ int io_reserve_memtype(resource_size_t start, resource_size_t end,
 	if (ret)
 		goto out_err;
 
-	if (!is_new_memtype_allowed(start, size, req_type, new_type))
+	if (!is_new_memtype_allowed(start, size,
+				    pgprot2cachemode(__pgprot(req_type)),
+				    pgprot2cachemode(__pgprot(new_type))))
 		goto out_free;
 
 	if (kernel_map_sync_memtype(start, size, new_type) < 0)
@@ -630,7 +632,9 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 
 	if (flags != want_flags) {
 		if (strict_prot ||
-		    !is_new_memtype_allowed(paddr, size, want_flags, flags)) {
+		    !is_new_memtype_allowed(paddr, size,
+				pgprot2cachemode(__pgprot(want_flags)),
+				pgprot2cachemode(__pgprot(flags)))) {
 			free_memtype(paddr, paddr + size);
 			printk(KERN_ERR "%s:%d map pfn expected mapping type %s"
 				" for [mem %#010Lx-%#010Lx], got %s\n",

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [tip:x86/mm] x86: Use new cache mode type in mm/iomap_32.c
  2014-11-03 13:01 ` [PATCH V6 08/18] x86: Use new cache mode type in mm/iomap_32.c Juergen Gross
@ 2014-11-16 10:56   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:56 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, jgross, stefan.bader, tglx, mingo

Commit-ID:  49a3b3cbdf1621678a39bd95a3e67c0f858539c7
Gitweb:     http://git.kernel.org/tip/49a3b3cbdf1621678a39bd95a3e67c0f858539c7
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:01:54 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:25 +0100

x86: Use new cache mode type in mm/iomap_32.c

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type. This requires changing
io_reserve_memtype() as well.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-9-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/pat.h |  2 +-
 arch/x86/mm/iomap_32.c     | 12 +++++++-----
 arch/x86/mm/pat.c          | 18 ++++++++++--------
 3 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/pat.h b/arch/x86/include/asm/pat.h
index e2c1668..a8438bc 100644
--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -20,7 +20,7 @@ extern int kernel_map_sync_memtype(u64 base, unsigned long size,
 		unsigned long flag);
 
 int io_reserve_memtype(resource_size_t start, resource_size_t end,
-			unsigned long *type);
+			enum page_cache_mode *pcm);
 
 void io_free_memtype(resource_size_t start, resource_size_t end);
 
diff --git a/arch/x86/mm/iomap_32.c b/arch/x86/mm/iomap_32.c
index 7b179b49..9ca35fc 100644
--- a/arch/x86/mm/iomap_32.c
+++ b/arch/x86/mm/iomap_32.c
@@ -33,17 +33,17 @@ static int is_io_mapping_possible(resource_size_t base, unsigned long size)
 
 int iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot)
 {
-	unsigned long flag = _PAGE_CACHE_WC;
+	enum page_cache_mode pcm = _PAGE_CACHE_MODE_WC;
 	int ret;
 
 	if (!is_io_mapping_possible(base, size))
 		return -EINVAL;
 
-	ret = io_reserve_memtype(base, base + size, &flag);
+	ret = io_reserve_memtype(base, base + size, &pcm);
 	if (ret)
 		return ret;
 
-	*prot = __pgprot(__PAGE_KERNEL | flag);
+	*prot = __pgprot(__PAGE_KERNEL | cachemode2protval(pcm));
 	return 0;
 }
 EXPORT_SYMBOL_GPL(iomap_create_wc);
@@ -82,8 +82,10 @@ iomap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
 	 * MTRR is UC or WC.  UC_MINUS gets the real intention, of the
 	 * user, which is "WC if the MTRR is WC, UC if you can't do that."
 	 */
-	if (!pat_enabled && pgprot_val(prot) == pgprot_val(PAGE_KERNEL_WC))
-		prot = PAGE_KERNEL_UC_MINUS;
+	if (!pat_enabled && pgprot_val(prot) ==
+	    (__PAGE_KERNEL | cachemode2protval(_PAGE_CACHE_MODE_WC)))
+		prot = __pgprot(__PAGE_KERNEL |
+				cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS));
 
 	return (void __force __iomem *) kmap_atomic_prot_pfn(pfn, prot);
 }
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 47282c2..6d5a8e3 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -442,25 +442,27 @@ static unsigned long lookup_memtype(u64 paddr)
  * On failure, returns non-zero
  */
 int io_reserve_memtype(resource_size_t start, resource_size_t end,
-			unsigned long *type)
+			enum page_cache_mode *type)
 {
 	resource_size_t size = end - start;
-	unsigned long req_type = *type;
-	unsigned long new_type;
+	enum page_cache_mode req_type = *type;
+	enum page_cache_mode new_type;
+	unsigned long new_prot;
 	int ret;
 
 	WARN_ON_ONCE(iomem_map_sanity_check(start, size));
 
-	ret = reserve_memtype(start, end, req_type, &new_type);
+	ret = reserve_memtype(start, end, cachemode2protval(req_type),
+				&new_prot);
 	if (ret)
 		goto out_err;
 
-	if (!is_new_memtype_allowed(start, size,
-				    pgprot2cachemode(__pgprot(req_type)),
-				    pgprot2cachemode(__pgprot(new_type))))
+	new_type = pgprot2cachemode(__pgprot(new_prot));
+
+	if (!is_new_memtype_allowed(start, size, req_type, new_type))
 		goto out_free;
 
-	if (kernel_map_sync_memtype(start, size, new_type) < 0)
+	if (kernel_map_sync_memtype(start, size, new_prot) < 0)
 		goto out_free;
 
 	*type = new_type;
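
A hypothetical caller of the reworked interface (sketch, modeled on
iomap_create_wc() above; base, size and prot are assumed to exist in
the caller):

        enum page_cache_mode pcm = _PAGE_CACHE_MODE_WC;

        /* *pcm may come back downgraded, e.g. to UC- without PAT. */
        if (io_reserve_memtype(base, base + size, &pcm))
                return -EINVAL;
        *prot = __pgprot(__PAGE_KERNEL | cachemode2protval(pcm));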

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [tip:x86/mm] x86: Use new cache mode type in track_pfn_remap() and track_pfn_insert()
  2014-11-03 13:01 ` [PATCH V6 09/18] x86: Use new cache mode type in track_pfn_remap() and track_pfn_insert() Juergen Gross
@ 2014-11-16 10:56   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:56 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, jgross, tglx, mingo, stefan.bader, hpa

Commit-ID:  2a3746984c98b17b565e6a2c2bbaaaef757db1b4
Gitweb:     http://git.kernel.org/tip/2a3746984c98b17b565e6a2c2bbaaaef757db1b4
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:01:55 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:25 +0100

x86: Use new cache mode type in track_pfn_remap() and track_pfn_insert()

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type. As these are the main callers of
lookup_memtype(), change that function as well.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-10-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/mm/pat.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 6d5a8e3..2f3744f 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -394,12 +394,12 @@ int free_memtype(u64 start, u64 end)
  *
  * Only to be called when PAT is enabled
  *
- * Returns _PAGE_CACHE_WB, _PAGE_CACHE_WC, _PAGE_CACHE_UC_MINUS or
- * _PAGE_CACHE_UC
+ * Returns _PAGE_CACHE_MODE_WB, _PAGE_CACHE_MODE_WC, _PAGE_CACHE_MODE_UC_MINUS
+ * or _PAGE_CACHE_MODE_UC
  */
-static unsigned long lookup_memtype(u64 paddr)
+static enum page_cache_mode lookup_memtype(u64 paddr)
 {
-	int rettype = _PAGE_CACHE_WB;
+	enum page_cache_mode rettype = _PAGE_CACHE_MODE_WB;
 	struct memtype *entry;
 
 	if (x86_platform.is_untracked_pat_range(paddr, paddr + PAGE_SIZE))
@@ -408,13 +408,13 @@ static unsigned long lookup_memtype(u64 paddr)
 	if (pat_pagerange_is_ram(paddr, paddr + PAGE_SIZE)) {
 		struct page *page;
 		page = pfn_to_page(paddr >> PAGE_SHIFT);
-		rettype = get_page_memtype(page);
+		rettype = pgprot2cachemode(__pgprot(get_page_memtype(page)));
 		/*
 		 * -1 from get_page_memtype() implies RAM page is in its
 		 * default state and not reserved, and hence of type WB
 		 */
 		if (rettype == -1)
-			rettype = _PAGE_CACHE_WB;
+			rettype = _PAGE_CACHE_MODE_WB;
 
 		return rettype;
 	}
@@ -423,9 +423,9 @@ static unsigned long lookup_memtype(u64 paddr)
 
 	entry = rbt_memtype_lookup(paddr);
 	if (entry != NULL)
-		rettype = entry->type;
+		rettype = pgprot2cachemode(__pgprot(entry->type));
 	else
-		rettype = _PAGE_CACHE_UC_MINUS;
+		rettype = _PAGE_CACHE_MODE_UC_MINUS;
 
 	spin_unlock(&memtype_lock);
 	return rettype;
@@ -613,7 +613,7 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 		if (!pat_enabled)
 			return 0;
 
-		flags = lookup_memtype(paddr);
+		flags = cachemode2protval(lookup_memtype(paddr));
 		if (want_flags != flags) {
 			printk(KERN_WARNING "%s:%d map pfn RAM range req %s for [mem %#010Lx-%#010Lx], got %s\n",
 				current->comm, current->pid,
@@ -715,7 +715,7 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 		    unsigned long pfn, unsigned long addr, unsigned long size)
 {
 	resource_size_t paddr = (resource_size_t)pfn << PAGE_SHIFT;
-	unsigned long flags;
+	enum page_cache_mode pcm;
 
 	/* reserve the whole chunk starting from paddr */
 	if (addr == vma->vm_start && size == (vma->vm_end - vma->vm_start)) {
@@ -734,18 +734,18 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 	 * For anything smaller than the vma size we set prot based on the
 	 * lookup.
 	 */
-	flags = lookup_memtype(paddr);
+	pcm = lookup_memtype(paddr);
 
 	/* Check memtype for the remaining pages */
 	while (size > PAGE_SIZE) {
 		size -= PAGE_SIZE;
 		paddr += PAGE_SIZE;
-		if (flags != lookup_memtype(paddr))
+		if (pcm != lookup_memtype(paddr))
 			return -EINVAL;
 	}
 
 	*prot = __pgprot((pgprot_val(vma->vm_page_prot) & (~_PAGE_CACHE_MASK)) |
-			 flags);
+			 cachemode2protval(pcm));
 
 	return 0;
 }
@@ -753,15 +753,15 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 int track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
 		     unsigned long pfn)
 {
-	unsigned long flags;
+	enum page_cache_mode pcm;
 
 	if (!pat_enabled)
 		return 0;
 
 	/* Set prot based on lookup */
-	flags = lookup_memtype((resource_size_t)pfn << PAGE_SHIFT);
+	pcm = lookup_memtype((resource_size_t)pfn << PAGE_SHIFT);
 	*prot = __pgprot((pgprot_val(vma->vm_page_prot) & (~_PAGE_CACHE_MASK)) |
-			 flags);
+			 cachemode2protval(pcm));
 
 	return 0;
 }
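
With lookup_memtype() returning the enum, memtype comparisons happen in
enum space and are independent of how the PAT MSR encodes the modes.
Sketch (illustration only):

        enum page_cache_mode pcm = lookup_memtype(paddr);

        /* Mixed cache modes within one mapping are rejected. */
        if (pcm != lookup_memtype(paddr + PAGE_SIZE))
                return -EINVAL;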

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [tip:x86/mm] x86: Remove looking for setting of _PAGE_PAT_LARGE in pageattr.c
  2014-11-03 13:01 ` [PATCH V6 10/18] x86: Remove looking for setting of _PAGE_PAT_LARGE in pageattr.c Juergen Gross
  2014-11-03 16:44   ` Thomas Gleixner
@ 2014-11-16 10:57   ` tip-bot for Juergen Gross
  1 sibling, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:57 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: hpa, tglx, jgross, linux-kernel, mingo

Commit-ID:  102e19e1955d85f31475416b1ee22980c6462cf8
Gitweb:     http://git.kernel.org/tip/102e19e1955d85f31475416b1ee22980c6462cf8
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:01:56 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:25 +0100

x86: Remove looking for setting of _PAGE_PAT_LARGE in pageattr.c

When modifying page attributes via change_page_attr_set_clr(), don't
test for setting _PAGE_PAT_LARGE, as this is
- never done
- moot anyway, since PAT support for large pages is not yet included
  in the kernel

Signed-off-by: Juergen Gross <jgross@suse.com>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-11-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/mm/pageattr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index ae242a7..87c0d36 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1307,7 +1307,7 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
 static inline int cache_attr(pgprot_t attr)
 {
 	return pgprot_val(attr) &
-		(_PAGE_PAT | _PAGE_PAT_LARGE | _PAGE_PWT | _PAGE_PCD);
+		(_PAGE_PAT | _PAGE_PWT | _PAGE_PCD);
 }
 
 static int change_page_attr_set_clr(unsigned long *addr, int numpages,

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [tip:x86/mm] x86: Use new cache mode type in setting page attributes
  2014-11-03 13:01 ` [PATCH V6 11/18] x86: Use new cache mode type in setting page attributes Juergen Gross
@ 2014-11-16 10:57   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:57 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, stefan.bader, jgross, tglx, hpa, mingo

Commit-ID:  c06814d8419a74528500f85faf5fc01f67f8e7e6
Gitweb:     http://git.kernel.org/tip/c06814d8419a74528500f85faf5fc01f67f8e7e6
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:01:57 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:25 +0100

x86: Use new cache mode type in setting page attributes

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type in the functions for modifying page
attributes.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-12-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/mm/pageattr.c | 52 +++++++++++++++++++++++++++-----------------------
 1 file changed, 28 insertions(+), 24 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 87c0d36..9f7e1b4 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1304,12 +1304,6 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
 	return 0;
 }
 
-static inline int cache_attr(pgprot_t attr)
-{
-	return pgprot_val(attr) &
-		(_PAGE_PAT | _PAGE_PWT | _PAGE_PCD);
-}
-
 static int change_page_attr_set_clr(unsigned long *addr, int numpages,
 				    pgprot_t mask_set, pgprot_t mask_clr,
 				    int force_split, int in_flag,
@@ -1390,7 +1384,7 @@ static int change_page_attr_set_clr(unsigned long *addr, int numpages,
 	 * No need to flush, when we did not set any of the caching
 	 * attributes:
 	 */
-	cache = cache_attr(mask_set);
+	cache = !!pgprot2cachemode(mask_set);
 
 	/*
 	 * On success we use CLFLUSH, when the CPU supports it to
@@ -1445,7 +1439,8 @@ int _set_memory_uc(unsigned long addr, int numpages)
 	 * for now UC MINUS. see comments in ioremap_nocache()
 	 */
 	return change_page_attr_set(&addr, numpages,
-				    __pgprot(_PAGE_CACHE_UC_MINUS), 0);
+				    cachemode2pgprot(_PAGE_CACHE_MODE_UC_MINUS),
+				    0);
 }
 
 int set_memory_uc(unsigned long addr, int numpages)
@@ -1474,7 +1469,7 @@ out_err:
 EXPORT_SYMBOL(set_memory_uc);
 
 static int _set_memory_array(unsigned long *addr, int addrinarray,
-		unsigned long new_type)
+		enum page_cache_mode new_type)
 {
 	int i, j;
 	int ret;
@@ -1484,17 +1479,19 @@ static int _set_memory_array(unsigned long *addr, int addrinarray,
 	 */
 	for (i = 0; i < addrinarray; i++) {
 		ret = reserve_memtype(__pa(addr[i]), __pa(addr[i]) + PAGE_SIZE,
-					new_type, NULL);
+					cachemode2protval(new_type), NULL);
 		if (ret)
 			goto out_free;
 	}
 
 	ret = change_page_attr_set(addr, addrinarray,
-				    __pgprot(_PAGE_CACHE_UC_MINUS), 1);
+				   cachemode2pgprot(_PAGE_CACHE_MODE_UC_MINUS),
+				   1);
 
-	if (!ret && new_type == _PAGE_CACHE_WC)
+	if (!ret && new_type == _PAGE_CACHE_MODE_WC)
 		ret = change_page_attr_set_clr(addr, addrinarray,
-					       __pgprot(_PAGE_CACHE_WC),
+					       cachemode2pgprot(
+						_PAGE_CACHE_MODE_WC),
 					       __pgprot(_PAGE_CACHE_MASK),
 					       0, CPA_ARRAY, NULL);
 	if (ret)
@@ -1511,13 +1508,13 @@ out_free:
 
 int set_memory_array_uc(unsigned long *addr, int addrinarray)
 {
-	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_UC_MINUS);
+	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_MODE_UC_MINUS);
 }
 EXPORT_SYMBOL(set_memory_array_uc);
 
 int set_memory_array_wc(unsigned long *addr, int addrinarray)
 {
-	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_WC);
+	return _set_memory_array(addr, addrinarray, _PAGE_CACHE_MODE_WC);
 }
 EXPORT_SYMBOL(set_memory_array_wc);
 
@@ -1527,10 +1524,12 @@ int _set_memory_wc(unsigned long addr, int numpages)
 	unsigned long addr_copy = addr;
 
 	ret = change_page_attr_set(&addr, numpages,
-				    __pgprot(_PAGE_CACHE_UC_MINUS), 0);
+				   cachemode2pgprot(_PAGE_CACHE_MODE_UC_MINUS),
+				   0);
 	if (!ret) {
 		ret = change_page_attr_set_clr(&addr_copy, numpages,
-					       __pgprot(_PAGE_CACHE_WC),
+					       cachemode2pgprot(
+						_PAGE_CACHE_MODE_WC),
 					       __pgprot(_PAGE_CACHE_MASK),
 					       0, 0, NULL);
 	}
@@ -1564,6 +1563,7 @@ EXPORT_SYMBOL(set_memory_wc);
 
 int _set_memory_wb(unsigned long addr, int numpages)
 {
+	/* WB cache mode is hard wired to all cache attribute bits being 0 */
 	return change_page_attr_clear(&addr, numpages,
 				      __pgprot(_PAGE_CACHE_MASK), 0);
 }
@@ -1586,6 +1586,7 @@ int set_memory_array_wb(unsigned long *addr, int addrinarray)
 	int i;
 	int ret;
 
+	/* WB cache mode is hard wired to all cache attribute bits being 0 */
 	ret = change_page_attr_clear(addr, addrinarray,
 				      __pgprot(_PAGE_CACHE_MASK), 1);
 	if (ret)
@@ -1648,7 +1649,7 @@ int set_pages_uc(struct page *page, int numpages)
 EXPORT_SYMBOL(set_pages_uc);
 
 static int _set_pages_array(struct page **pages, int addrinarray,
-		unsigned long new_type)
+		enum page_cache_mode new_type)
 {
 	unsigned long start;
 	unsigned long end;
@@ -1661,15 +1662,17 @@ static int _set_pages_array(struct page **pages, int addrinarray,
 			continue;
 		start = page_to_pfn(pages[i]) << PAGE_SHIFT;
 		end = start + PAGE_SIZE;
-		if (reserve_memtype(start, end, new_type, NULL))
+		if (reserve_memtype(start, end, cachemode2protval(new_type),
+				    NULL))
 			goto err_out;
 	}
 
 	ret = cpa_set_pages_array(pages, addrinarray,
-			__pgprot(_PAGE_CACHE_UC_MINUS));
-	if (!ret && new_type == _PAGE_CACHE_WC)
+			cachemode2pgprot(_PAGE_CACHE_MODE_UC_MINUS));
+	if (!ret && new_type == _PAGE_CACHE_MODE_WC)
 		ret = change_page_attr_set_clr(NULL, addrinarray,
-					       __pgprot(_PAGE_CACHE_WC),
+					       cachemode2pgprot(
+						_PAGE_CACHE_MODE_WC),
 					       __pgprot(_PAGE_CACHE_MASK),
 					       0, CPA_PAGES_ARRAY, pages);
 	if (ret)
@@ -1689,13 +1692,13 @@ err_out:
 
 int set_pages_array_uc(struct page **pages, int addrinarray)
 {
-	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_UC_MINUS);
+	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_MODE_UC_MINUS);
 }
 EXPORT_SYMBOL(set_pages_array_uc);
 
 int set_pages_array_wc(struct page **pages, int addrinarray)
 {
-	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_WC);
+	return _set_pages_array(pages, addrinarray, _PAGE_CACHE_MODE_WC);
 }
 EXPORT_SYMBOL(set_pages_array_wc);
 
@@ -1714,6 +1717,7 @@ int set_pages_array_wb(struct page **pages, int addrinarray)
 	unsigned long end;
 	int i;
 
+	/* WB cache mode is hard wired to all cache attribute bits being 0 */
 	retval = cpa_clear_pages_array(pages, addrinarray,
 			__pgprot(_PAGE_CACHE_MASK));
 	if (retval)
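
The new flush test in change_page_attr_set_clr() relies on the
hard-wired WB encoding: since WB is cache mode 0, pgprot2cachemode()
is nonzero exactly when a non-WB caching attribute is being set.
Restating the line from the hunk above with a comment (illustration
only):

        /* Nonzero iff mask_set contains a non-WB caching attribute. */
        cache = !!pgprot2cachemode(mask_set);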

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [tip:x86/mm] x86: Use new cache mode type in mm/ioremap.c
  2014-11-03 13:01 ` [PATCH V6 12/18] x86: Use new cache mode type in mm/ioremap.c Juergen Gross
@ 2014-11-16 10:57   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:57 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: hpa, jgross, stefan.bader, tglx, linux-kernel, mingo

Commit-ID:  b14097bd911c2554b0b5271b3a6b2d84044d1843
Gitweb:     http://git.kernel.org/tip/b14097bd911c2554b0b5271b3a6b2d84044d1843
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:01:58 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:26 +0100

x86: Use new cache mode type in mm/ioremap.c

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-13-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/io.h  |  2 +-
 arch/x86/include/asm/pat.h |  2 +-
 arch/x86/mm/ioremap.c      | 65 +++++++++++++++++++++++++---------------------
 arch/x86/mm/pat.c          | 12 +++++----
 4 files changed, 44 insertions(+), 37 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index b8237d8..71b9e65 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -314,7 +314,7 @@ extern void *xlate_dev_mem_ptr(unsigned long phys);
 extern void unxlate_dev_mem_ptr(unsigned long phys, void *addr);
 
 extern int ioremap_change_attr(unsigned long vaddr, unsigned long size,
-				unsigned long prot_val);
+				enum page_cache_mode pcm);
 extern void __iomem *ioremap_wc(resource_size_t offset, unsigned long size);
 
 extern bool is_early_ioremap_ptep(pte_t *ptep);
diff --git a/arch/x86/include/asm/pat.h b/arch/x86/include/asm/pat.h
index a8438bc..d35ee2d 100644
--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -17,7 +17,7 @@ extern int reserve_memtype(u64 start, u64 end,
 extern int free_memtype(u64 start, u64 end);
 
 extern int kernel_map_sync_memtype(u64 base, unsigned long size,
-		unsigned long flag);
+		enum page_cache_mode pcm);
 
 int io_reserve_memtype(resource_size_t start, resource_size_t end,
 			enum page_cache_mode *pcm);
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 3a81eb9..f31507f 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -29,20 +29,20 @@
  * conflicts.
  */
 int ioremap_change_attr(unsigned long vaddr, unsigned long size,
-			       unsigned long prot_val)
+			enum page_cache_mode pcm)
 {
 	unsigned long nrpages = size >> PAGE_SHIFT;
 	int err;
 
-	switch (prot_val) {
-	case _PAGE_CACHE_UC:
+	switch (pcm) {
+	case _PAGE_CACHE_MODE_UC:
 	default:
 		err = _set_memory_uc(vaddr, nrpages);
 		break;
-	case _PAGE_CACHE_WC:
+	case _PAGE_CACHE_MODE_WC:
 		err = _set_memory_wc(vaddr, nrpages);
 		break;
-	case _PAGE_CACHE_WB:
+	case _PAGE_CACHE_MODE_WB:
 		err = _set_memory_wb(vaddr, nrpages);
 		break;
 	}
@@ -75,13 +75,14 @@ static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages,
  * caller shouldn't need to know that small detail.
  */
 static void __iomem *__ioremap_caller(resource_size_t phys_addr,
-		unsigned long size, unsigned long prot_val, void *caller)
+		unsigned long size, enum page_cache_mode pcm, void *caller)
 {
 	unsigned long offset, vaddr;
 	resource_size_t pfn, last_pfn, last_addr;
 	const resource_size_t unaligned_phys_addr = phys_addr;
 	const unsigned long unaligned_size = size;
 	struct vm_struct *area;
+	enum page_cache_mode new_pcm;
 	unsigned long new_prot_val;
 	pgprot_t prot;
 	int retval;
@@ -134,39 +135,42 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 	size = PAGE_ALIGN(last_addr+1) - phys_addr;
 
 	retval = reserve_memtype(phys_addr, (u64)phys_addr + size,
-						prot_val, &new_prot_val);
+				 cachemode2protval(pcm), &new_prot_val);
 	if (retval) {
 		printk(KERN_ERR "ioremap reserve_memtype failed %d\n", retval);
 		return NULL;
 	}
 
-	if (prot_val != new_prot_val) {
-		if (!is_new_memtype_allowed(phys_addr, size,
-				pgprot2cachemode(__pgprot(prot_val)),
-				pgprot2cachemode(__pgprot(new_prot_val)))) {
+	new_pcm = pgprot2cachemode(__pgprot(new_prot_val));
+
+	if (pcm != new_pcm) {
+		if (!is_new_memtype_allowed(phys_addr, size, pcm, new_pcm)) {
 			printk(KERN_ERR
-		"ioremap error for 0x%llx-0x%llx, requested 0x%lx, got 0x%lx\n",
+		"ioremap error for 0x%llx-0x%llx, requested 0x%x, got 0x%x\n",
 				(unsigned long long)phys_addr,
 				(unsigned long long)(phys_addr + size),
-				prot_val, new_prot_val);
+				pcm, new_pcm);
 			goto err_free_memtype;
 		}
-		prot_val = new_prot_val;
+		pcm = new_pcm;
 	}
 
-	switch (prot_val) {
-	case _PAGE_CACHE_UC:
+	prot = PAGE_KERNEL_IO;
+	switch (pcm) {
+	case _PAGE_CACHE_MODE_UC:
 	default:
-		prot = PAGE_KERNEL_IO_NOCACHE;
+		prot = __pgprot(pgprot_val(prot) |
+				cachemode2protval(_PAGE_CACHE_MODE_UC));
 		break;
-	case _PAGE_CACHE_UC_MINUS:
-		prot = PAGE_KERNEL_IO_UC_MINUS;
+	case _PAGE_CACHE_MODE_UC_MINUS:
+		prot = __pgprot(pgprot_val(prot) |
+				cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS));
 		break;
-	case _PAGE_CACHE_WC:
-		prot = PAGE_KERNEL_IO_WC;
+	case _PAGE_CACHE_MODE_WC:
+		prot = __pgprot(pgprot_val(prot) |
+				cachemode2protval(_PAGE_CACHE_MODE_WC));
 		break;
-	case _PAGE_CACHE_WB:
-		prot = PAGE_KERNEL_IO;
+	case _PAGE_CACHE_MODE_WB:
 		break;
 	}
 
@@ -179,7 +183,7 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 	area->phys_addr = phys_addr;
 	vaddr = (unsigned long) area->addr;
 
-	if (kernel_map_sync_memtype(phys_addr, size, prot_val))
+	if (kernel_map_sync_memtype(phys_addr, size, pcm))
 		goto err_free_area;
 
 	if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot))
@@ -228,14 +232,14 @@ void __iomem *ioremap_nocache(resource_size_t phys_addr, unsigned long size)
 {
 	/*
 	 * Ideally, this should be:
-	 *	pat_enabled ? _PAGE_CACHE_UC : _PAGE_CACHE_UC_MINUS;
+	 *	pat_enabled ? _PAGE_CACHE_MODE_UC : _PAGE_CACHE_MODE_UC_MINUS;
 	 *
 	 * Till we fix all X drivers to use ioremap_wc(), we will use
 	 * UC MINUS.
 	 */
-	unsigned long val = _PAGE_CACHE_UC_MINUS;
+	enum page_cache_mode pcm = _PAGE_CACHE_MODE_UC_MINUS;
 
-	return __ioremap_caller(phys_addr, size, val,
+	return __ioremap_caller(phys_addr, size, pcm,
 				__builtin_return_address(0));
 }
 EXPORT_SYMBOL(ioremap_nocache);
@@ -253,7 +257,7 @@ EXPORT_SYMBOL(ioremap_nocache);
 void __iomem *ioremap_wc(resource_size_t phys_addr, unsigned long size)
 {
 	if (pat_enabled)
-		return __ioremap_caller(phys_addr, size, _PAGE_CACHE_WC,
+		return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WC,
 					__builtin_return_address(0));
 	else
 		return ioremap_nocache(phys_addr, size);
@@ -262,7 +266,7 @@ EXPORT_SYMBOL(ioremap_wc);
 
 void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size)
 {
-	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_WB,
+	return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB,
 				__builtin_return_address(0));
 }
 EXPORT_SYMBOL(ioremap_cache);
@@ -270,7 +274,8 @@ EXPORT_SYMBOL(ioremap_cache);
 void __iomem *ioremap_prot(resource_size_t phys_addr, unsigned long size,
 				unsigned long prot_val)
 {
-	return __ioremap_caller(phys_addr, size, (prot_val & _PAGE_CACHE_MASK),
+	return __ioremap_caller(phys_addr, size,
+				pgprot2cachemode(__pgprot(prot_val)),
 				__builtin_return_address(0));
 }
 EXPORT_SYMBOL(ioremap_prot);
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 2f3744f..8f68a83 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -462,7 +462,7 @@ int io_reserve_memtype(resource_size_t start, resource_size_t end,
 	if (!is_new_memtype_allowed(start, size, req_type, new_type))
 		goto out_free;
 
-	if (kernel_map_sync_memtype(start, size, new_prot) < 0)
+	if (kernel_map_sync_memtype(start, size, new_type) < 0)
 		goto out_free;
 
 	*type = new_type;
@@ -560,7 +560,8 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
  * Change the memory type for the physical address range in kernel identity
  * mapping space if that range is a part of identity map.
  */
-int kernel_map_sync_memtype(u64 base, unsigned long size, unsigned long flags)
+int kernel_map_sync_memtype(u64 base, unsigned long size,
+			    enum page_cache_mode pcm)
 {
 	unsigned long id_sz;
 
@@ -578,11 +579,11 @@ int kernel_map_sync_memtype(u64 base, unsigned long size, unsigned long flags)
 				__pa(high_memory) - base :
 				size;
 
-	if (ioremap_change_attr((unsigned long)__va(base), id_sz, flags) < 0) {
+	if (ioremap_change_attr((unsigned long)__va(base), id_sz, pcm) < 0) {
 		printk(KERN_INFO "%s:%d ioremap_change_attr failed %s "
 			"for [mem %#010Lx-%#010Lx]\n",
 			current->comm, current->pid,
-			cattr_name(flags),
+			cattr_name(cachemode2protval(pcm)),
 			base, (unsigned long long)(base + size-1));
 		return -EINVAL;
 	}
@@ -656,7 +657,8 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 				     flags);
 	}
 
-	if (kernel_map_sync_memtype(paddr, size, flags) < 0) {
+	if (kernel_map_sync_memtype(paddr, size,
+				    pgprot2cachemode(__pgprot(flags))) < 0) {
 		free_memtype(paddr, paddr + size);
 		return -EINVAL;
 	}
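
A hypothetical caller of the enum-based ioremap_change_attr() (sketch;
vaddr and size are assumed to exist in the caller):

        /* Switch an already mapped identity range to write-combining. */
        if (ioremap_change_attr(vaddr, size, _PAGE_CACHE_MODE_WC) < 0)
                pr_warn("could not set WC on identity mapping\n");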

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [tip:x86/mm] x86: Use new cache mode type in memtype related functions
  2014-11-03 13:01 ` [PATCH V6 13/18] x86: Use new cache mode type in memtype related functions Juergen Gross
@ 2014-11-16 10:57   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:57 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: hpa, mingo, linux-kernel, jgross, stefan.bader, tglx

Commit-ID:  e00c8cc93c1ac01ecd5049929a50fb47b62bb041
Gitweb:     http://git.kernel.org/tip/e00c8cc93c1ac01ecd5049929a50fb47b62bb041
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:01:59 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:26 +0100

x86: Use new cache mode type in memtype related functions

Instead of directly using the cache mode bits in the pte, switch to
using the cache mode type.

Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-14-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/cacheflush.h |  38 ++++++++------
 arch/x86/include/asm/pat.h        |   2 +-
 arch/x86/mm/ioremap.c             |   5 +-
 arch/x86/mm/pageattr.c            |   9 ++--
 arch/x86/mm/pat.c                 | 102 ++++++++++++++++++--------------------
 arch/x86/mm/pat_internal.h        |  22 ++++----
 arch/x86/mm/pat_rbtree.c          |   8 +--
 7 files changed, 96 insertions(+), 90 deletions(-)

diff --git a/arch/x86/include/asm/cacheflush.h b/arch/x86/include/asm/cacheflush.h
index 9863ee3..157644b 100644
--- a/arch/x86/include/asm/cacheflush.h
+++ b/arch/x86/include/asm/cacheflush.h
@@ -9,10 +9,10 @@
 /*
  * X86 PAT uses page flags WC and Uncached together to keep track of
  * memory type of pages that have backing page struct. X86 PAT supports 3
- * different memory types, _PAGE_CACHE_WB, _PAGE_CACHE_WC and
- * _PAGE_CACHE_UC_MINUS and fourth state where page's memory type has not
+ * different memory types, _PAGE_CACHE_MODE_WB, _PAGE_CACHE_MODE_WC and
+ * _PAGE_CACHE_MODE_UC_MINUS and fourth state where page's memory type has not
  * been changed from its default (value of -1 used to denote this).
- * Note we do not support _PAGE_CACHE_UC here.
+ * Note we do not support _PAGE_CACHE_MODE_UC here.
  */
 
 #define _PGMT_DEFAULT		0
@@ -22,36 +22,40 @@
 #define _PGMT_MASK		(1UL << PG_uncached | 1UL << PG_arch_1)
 #define _PGMT_CLEAR_MASK	(~_PGMT_MASK)
 
-static inline unsigned long get_page_memtype(struct page *pg)
+static inline enum page_cache_mode get_page_memtype(struct page *pg)
 {
 	unsigned long pg_flags = pg->flags & _PGMT_MASK;
 
 	if (pg_flags == _PGMT_DEFAULT)
 		return -1;
 	else if (pg_flags == _PGMT_WC)
-		return _PAGE_CACHE_WC;
+		return _PAGE_CACHE_MODE_WC;
 	else if (pg_flags == _PGMT_UC_MINUS)
-		return _PAGE_CACHE_UC_MINUS;
+		return _PAGE_CACHE_MODE_UC_MINUS;
 	else
-		return _PAGE_CACHE_WB;
+		return _PAGE_CACHE_MODE_WB;
 }
 
-static inline void set_page_memtype(struct page *pg, unsigned long memtype)
+static inline void set_page_memtype(struct page *pg,
+				    enum page_cache_mode memtype)
 {
-	unsigned long memtype_flags = _PGMT_DEFAULT;
+	unsigned long memtype_flags;
 	unsigned long old_flags;
 	unsigned long new_flags;
 
 	switch (memtype) {
-	case _PAGE_CACHE_WC:
+	case _PAGE_CACHE_MODE_WC:
 		memtype_flags = _PGMT_WC;
 		break;
-	case _PAGE_CACHE_UC_MINUS:
+	case _PAGE_CACHE_MODE_UC_MINUS:
 		memtype_flags = _PGMT_UC_MINUS;
 		break;
-	case _PAGE_CACHE_WB:
+	case _PAGE_CACHE_MODE_WB:
 		memtype_flags = _PGMT_WB;
 		break;
+	default:
+		memtype_flags = _PGMT_DEFAULT;
+		break;
 	}
 
 	do {
@@ -60,8 +64,14 @@ static inline void set_page_memtype(struct page *pg, unsigned long memtype)
 	} while (cmpxchg(&pg->flags, old_flags, new_flags) != old_flags);
 }
 #else
-static inline unsigned long get_page_memtype(struct page *pg) { return -1; }
-static inline void set_page_memtype(struct page *pg, unsigned long memtype) { }
+static inline enum page_cache_mode get_page_memtype(struct page *pg)
+{
+	return -1;
+}
+static inline void set_page_memtype(struct page *pg,
+				    enum page_cache_mode memtype)
+{
+}
 #endif
 
 /*
diff --git a/arch/x86/include/asm/pat.h b/arch/x86/include/asm/pat.h
index d35ee2d..150407a 100644
--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -13,7 +13,7 @@ static const int pat_enabled;
 extern void pat_init(void);
 
 extern int reserve_memtype(u64 start, u64 end,
-		unsigned long req_type, unsigned long *ret_type);
+		enum page_cache_mode req_pcm, enum page_cache_mode *ret_pcm);
 extern int free_memtype(u64 start, u64 end);
 
 extern int kernel_map_sync_memtype(u64 base, unsigned long size,
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index f31507f..8832e51 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -83,7 +83,6 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 	const unsigned long unaligned_size = size;
 	struct vm_struct *area;
 	enum page_cache_mode new_pcm;
-	unsigned long new_prot_val;
 	pgprot_t prot;
 	int retval;
 	void __iomem *ret_addr;
@@ -135,14 +134,12 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 	size = PAGE_ALIGN(last_addr+1) - phys_addr;
 
 	retval = reserve_memtype(phys_addr, (u64)phys_addr + size,
-				 cachemode2protval(pcm), &new_prot_val);
+						pcm, &new_pcm);
 	if (retval) {
 		printk(KERN_ERR "ioremap reserve_memtype failed %d\n", retval);
 		return NULL;
 	}
 
-	new_pcm = pgprot2cachemode(__pgprot(new_prot_val));
-
 	if (pcm != new_pcm) {
 		if (!is_new_memtype_allowed(phys_addr, size, pcm, new_pcm)) {
 			printk(KERN_ERR
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 9f7e1b4..de807c9 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1451,7 +1451,7 @@ int set_memory_uc(unsigned long addr, int numpages)
 	 * for now UC MINUS. see comments in ioremap_nocache()
 	 */
 	ret = reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
-			    _PAGE_CACHE_UC_MINUS, NULL);
+			      _PAGE_CACHE_MODE_UC_MINUS, NULL);
 	if (ret)
 		goto out_err;
 
@@ -1479,7 +1479,7 @@ static int _set_memory_array(unsigned long *addr, int addrinarray,
 	 */
 	for (i = 0; i < addrinarray; i++) {
 		ret = reserve_memtype(__pa(addr[i]), __pa(addr[i]) + PAGE_SIZE,
-					cachemode2protval(new_type), NULL);
+					new_type, NULL);
 		if (ret)
 			goto out_free;
 	}
@@ -1544,7 +1544,7 @@ int set_memory_wc(unsigned long addr, int numpages)
 		return set_memory_uc(addr, numpages);
 
 	ret = reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
-		_PAGE_CACHE_WC, NULL);
+		_PAGE_CACHE_MODE_WC, NULL);
 	if (ret)
 		goto out_err;
 
@@ -1662,8 +1662,7 @@ static int _set_pages_array(struct page **pages, int addrinarray,
 			continue;
 		start = page_to_pfn(pages[i]) << PAGE_SHIFT;
 		end = start + PAGE_SIZE;
-		if (reserve_memtype(start, end, cachemode2protval(new_type),
-				    NULL))
+		if (reserve_memtype(start, end, new_type, NULL))
 			goto err_out;
 	}
 
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 8f68a83..ef75f3f 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -139,20 +139,21 @@ static DEFINE_SPINLOCK(memtype_lock);	/* protects memtype accesses */
  * The intersection is based on "Effective Memory Type" tables in IA-32
  * SDM vol 3a
  */
-static unsigned long pat_x_mtrr_type(u64 start, u64 end, unsigned long req_type)
+static unsigned long pat_x_mtrr_type(u64 start, u64 end,
+				     enum page_cache_mode req_type)
 {
 	/*
 	 * Look for MTRR hint to get the effective type in case where PAT
 	 * request is for WB.
 	 */
-	if (req_type == _PAGE_CACHE_WB) {
+	if (req_type == _PAGE_CACHE_MODE_WB) {
 		u8 mtrr_type;
 
 		mtrr_type = mtrr_type_lookup(start, end);
 		if (mtrr_type != MTRR_TYPE_WRBACK)
-			return _PAGE_CACHE_UC_MINUS;
+			return _PAGE_CACHE_MODE_UC_MINUS;
 
-		return _PAGE_CACHE_WB;
+		return _PAGE_CACHE_MODE_WB;
 	}
 
 	return req_type;
@@ -207,25 +208,26 @@ static int pat_pagerange_is_ram(resource_size_t start, resource_size_t end)
  * - Find the memtype of all the pages in the range, look for any conflicts
  * - In case of no conflicts, set the new memtype for pages in the range
  */
-static int reserve_ram_pages_type(u64 start, u64 end, unsigned long req_type,
-				  unsigned long *new_type)
+static int reserve_ram_pages_type(u64 start, u64 end,
+				  enum page_cache_mode req_type,
+				  enum page_cache_mode *new_type)
 {
 	struct page *page;
 	u64 pfn;
 
-	if (req_type == _PAGE_CACHE_UC) {
+	if (req_type == _PAGE_CACHE_MODE_UC) {
 		/* We do not support strong UC */
 		WARN_ON_ONCE(1);
-		req_type = _PAGE_CACHE_UC_MINUS;
+		req_type = _PAGE_CACHE_MODE_UC_MINUS;
 	}
 
 	for (pfn = (start >> PAGE_SHIFT); pfn < (end >> PAGE_SHIFT); ++pfn) {
-		unsigned long type;
+		enum page_cache_mode type;
 
 		page = pfn_to_page(pfn);
 		type = get_page_memtype(page);
 		if (type != -1) {
-			printk(KERN_INFO "reserve_ram_pages_type failed [mem %#010Lx-%#010Lx], track 0x%lx, req 0x%lx\n",
+			pr_info("reserve_ram_pages_type failed [mem %#010Lx-%#010Lx], track 0x%x, req 0x%x\n",
 				start, end - 1, type, req_type);
 			if (new_type)
 				*new_type = type;
@@ -258,21 +260,21 @@ static int free_ram_pages_type(u64 start, u64 end)
 
 /*
  * req_type typically has one of the:
- * - _PAGE_CACHE_WB
- * - _PAGE_CACHE_WC
- * - _PAGE_CACHE_UC_MINUS
- * - _PAGE_CACHE_UC
+ * - _PAGE_CACHE_MODE_WB
+ * - _PAGE_CACHE_MODE_WC
+ * - _PAGE_CACHE_MODE_UC_MINUS
+ * - _PAGE_CACHE_MODE_UC
  *
  * If new_type is NULL, function will return an error if it cannot reserve the
  * region with req_type. If new_type is non-NULL, function will return
  * available type in new_type in case of no error. In case of any error
  * it will return a negative return value.
  */
-int reserve_memtype(u64 start, u64 end, unsigned long req_type,
-		    unsigned long *new_type)
+int reserve_memtype(u64 start, u64 end, enum page_cache_mode req_type,
+		    enum page_cache_mode *new_type)
 {
 	struct memtype *new;
-	unsigned long actual_type;
+	enum page_cache_mode actual_type;
 	int is_range_ram;
 	int err = 0;
 
@@ -281,10 +283,10 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 	if (!pat_enabled) {
 		/* This is identical to page table setting without PAT */
 		if (new_type) {
-			if (req_type == _PAGE_CACHE_WC)
-				*new_type = _PAGE_CACHE_UC_MINUS;
+			if (req_type == _PAGE_CACHE_MODE_WC)
+				*new_type = _PAGE_CACHE_MODE_UC_MINUS;
 			else
-				*new_type = req_type & _PAGE_CACHE_MASK;
+				*new_type = req_type;
 		}
 		return 0;
 	}
@@ -292,7 +294,7 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 	/* Low ISA region is always mapped WB in page table. No need to track */
 	if (x86_platform.is_untracked_pat_range(start, end)) {
 		if (new_type)
-			*new_type = _PAGE_CACHE_WB;
+			*new_type = _PAGE_CACHE_MODE_WB;
 		return 0;
 	}
 
@@ -302,7 +304,7 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 	 * tools and ACPI tools). Use WB request for WB memory and use
 	 * UC_MINUS otherwise.
 	 */
-	actual_type = pat_x_mtrr_type(start, end, req_type & _PAGE_CACHE_MASK);
+	actual_type = pat_x_mtrr_type(start, end, req_type);
 
 	if (new_type)
 		*new_type = actual_type;
@@ -408,7 +410,7 @@ static enum page_cache_mode lookup_memtype(u64 paddr)
 	if (pat_pagerange_is_ram(paddr, paddr + PAGE_SIZE)) {
 		struct page *page;
 		page = pfn_to_page(paddr >> PAGE_SHIFT);
-		rettype = pgprot2cachemode(__pgprot(get_page_memtype(page)));
+		rettype = get_page_memtype(page);
 		/*
 		 * -1 from get_page_memtype() implies RAM page is in its
 		 * default state and not reserved, and hence of type WB
@@ -423,7 +425,7 @@ static enum page_cache_mode lookup_memtype(u64 paddr)
 
 	entry = rbt_memtype_lookup(paddr);
 	if (entry != NULL)
-		rettype = pgprot2cachemode(__pgprot(entry->type));
+		rettype = entry->type;
 	else
 		rettype = _PAGE_CACHE_MODE_UC_MINUS;
 
@@ -447,18 +449,14 @@ int io_reserve_memtype(resource_size_t start, resource_size_t end,
 	resource_size_t size = end - start;
 	enum page_cache_mode req_type = *type;
 	enum page_cache_mode new_type;
-	unsigned long new_prot;
 	int ret;
 
 	WARN_ON_ONCE(iomem_map_sanity_check(start, size));
 
-	ret = reserve_memtype(start, end, cachemode2protval(req_type),
-				&new_prot);
+	ret = reserve_memtype(start, end, req_type, &new_type);
 	if (ret)
 		goto out_err;
 
-	new_type = pgprot2cachemode(__pgprot(new_prot));
-
 	if (!is_new_memtype_allowed(start, size, req_type, new_type))
 		goto out_free;
 
@@ -524,13 +522,13 @@ static inline int range_is_allowed(unsigned long pfn, unsigned long size)
 int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
 				unsigned long size, pgprot_t *vma_prot)
 {
-	unsigned long flags = _PAGE_CACHE_WB;
+	enum page_cache_mode pcm = _PAGE_CACHE_MODE_WB;
 
 	if (!range_is_allowed(pfn, size))
 		return 0;
 
 	if (file->f_flags & O_DSYNC)
-		flags = _PAGE_CACHE_UC_MINUS;
+		pcm = _PAGE_CACHE_MODE_UC_MINUS;
 
 #ifdef CONFIG_X86_32
 	/*
@@ -547,12 +545,12 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
 	      boot_cpu_has(X86_FEATURE_CYRIX_ARR) ||
 	      boot_cpu_has(X86_FEATURE_CENTAUR_MCR)) &&
 	    (pfn << PAGE_SHIFT) >= __pa(high_memory)) {
-		flags = _PAGE_CACHE_UC;
+		pcm = _PAGE_CACHE_MODE_UC;
 	}
 #endif
 
 	*vma_prot = __pgprot((pgprot_val(*vma_prot) & ~_PAGE_CACHE_MASK) |
-			     flags);
+			     cachemode2protval(pcm));
 	return 1;
 }
 
@@ -583,7 +581,7 @@ int kernel_map_sync_memtype(u64 base, unsigned long size,
 		printk(KERN_INFO "%s:%d ioremap_change_attr failed %s "
 			"for [mem %#010Lx-%#010Lx]\n",
 			current->comm, current->pid,
-			cattr_name(cachemode2protval(pcm)),
+			cattr_name(pcm),
 			base, (unsigned long long)(base + size-1));
 		return -EINVAL;
 	}
@@ -600,8 +598,8 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 {
 	int is_ram = 0;
 	int ret;
-	unsigned long want_flags = (pgprot_val(*vma_prot) & _PAGE_CACHE_MASK);
-	unsigned long flags = want_flags;
+	enum page_cache_mode want_pcm = pgprot2cachemode(*vma_prot);
+	enum page_cache_mode pcm = want_pcm;
 
 	is_ram = pat_pagerange_is_ram(paddr, paddr + size);
 
@@ -614,38 +612,36 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 		if (!pat_enabled)
 			return 0;
 
-		flags = cachemode2protval(lookup_memtype(paddr));
-		if (want_flags != flags) {
+		pcm = lookup_memtype(paddr);
+		if (want_pcm != pcm) {
 			printk(KERN_WARNING "%s:%d map pfn RAM range req %s for [mem %#010Lx-%#010Lx], got %s\n",
 				current->comm, current->pid,
-				cattr_name(want_flags),
+				cattr_name(want_pcm),
 				(unsigned long long)paddr,
 				(unsigned long long)(paddr + size - 1),
-				cattr_name(flags));
+				cattr_name(pcm));
 			*vma_prot = __pgprot((pgprot_val(*vma_prot) &
-					      (~_PAGE_CACHE_MASK)) |
-					     flags);
+					     (~_PAGE_CACHE_MASK)) |
+					     cachemode2protval(pcm));
 		}
 		return 0;
 	}
 
-	ret = reserve_memtype(paddr, paddr + size, want_flags, &flags);
+	ret = reserve_memtype(paddr, paddr + size, want_pcm, &pcm);
 	if (ret)
 		return ret;
 
-	if (flags != want_flags) {
+	if (pcm != want_pcm) {
 		if (strict_prot ||
-		    !is_new_memtype_allowed(paddr, size,
-				pgprot2cachemode(__pgprot(want_flags)),
-				pgprot2cachemode(__pgprot(flags)))) {
+		    !is_new_memtype_allowed(paddr, size, want_pcm, pcm)) {
 			free_memtype(paddr, paddr + size);
 			printk(KERN_ERR "%s:%d map pfn expected mapping type %s"
 				" for [mem %#010Lx-%#010Lx], got %s\n",
 				current->comm, current->pid,
-				cattr_name(want_flags),
+				cattr_name(want_pcm),
 				(unsigned long long)paddr,
 				(unsigned long long)(paddr + size - 1),
-				cattr_name(flags));
+				cattr_name(pcm));
 			return -EINVAL;
 		}
 		/*
@@ -654,11 +650,10 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 		 */
 		*vma_prot = __pgprot((pgprot_val(*vma_prot) &
 				      (~_PAGE_CACHE_MASK)) |
-				     flags);
+				     cachemode2protval(pcm));
 	}
 
-	if (kernel_map_sync_memtype(paddr, size,
-				    pgprot2cachemode(__pgprot(flags))) < 0) {
+	if (kernel_map_sync_memtype(paddr, size, pcm) < 0) {
 		free_memtype(paddr, paddr + size);
 		return -EINVAL;
 	}
@@ -799,7 +794,8 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
 pgprot_t pgprot_writecombine(pgprot_t prot)
 {
 	if (pat_enabled)
-		return __pgprot(pgprot_val(prot) | _PAGE_CACHE_WC);
+		return __pgprot(pgprot_val(prot) |
+				cachemode2protval(_PAGE_CACHE_MODE_WC));
 	else
 		return pgprot_noncached(prot);
 }
diff --git a/arch/x86/mm/pat_internal.h b/arch/x86/mm/pat_internal.h
index 77e5ba1..f641162 100644
--- a/arch/x86/mm/pat_internal.h
+++ b/arch/x86/mm/pat_internal.h
@@ -10,30 +10,32 @@ struct memtype {
 	u64			start;
 	u64			end;
 	u64			subtree_max_end;
-	unsigned long		type;
+	enum page_cache_mode	type;
 	struct rb_node		rb;
 };
 
-static inline char *cattr_name(unsigned long flags)
+static inline char *cattr_name(enum page_cache_mode pcm)
 {
-	switch (flags & _PAGE_CACHE_MASK) {
-	case _PAGE_CACHE_UC:		return "uncached";
-	case _PAGE_CACHE_UC_MINUS:	return "uncached-minus";
-	case _PAGE_CACHE_WB:		return "write-back";
-	case _PAGE_CACHE_WC:		return "write-combining";
-	default:			return "broken";
+	switch (pcm) {
+	case _PAGE_CACHE_MODE_UC:		return "uncached";
+	case _PAGE_CACHE_MODE_UC_MINUS:		return "uncached-minus";
+	case _PAGE_CACHE_MODE_WB:		return "write-back";
+	case _PAGE_CACHE_MODE_WC:		return "write-combining";
+	case _PAGE_CACHE_MODE_WT:		return "write-through";
+	case _PAGE_CACHE_MODE_WP:		return "write-protected";
+	default:				return "broken";
 	}
 }
 
 #ifdef CONFIG_X86_PAT
 extern int rbt_memtype_check_insert(struct memtype *new,
-					unsigned long *new_type);
+					enum page_cache_mode *new_type);
 extern struct memtype *rbt_memtype_erase(u64 start, u64 end);
 extern struct memtype *rbt_memtype_lookup(u64 addr);
 extern int rbt_memtype_copy_nth_element(struct memtype *out, loff_t pos);
 #else
 static inline int rbt_memtype_check_insert(struct memtype *new,
-					unsigned long *new_type)
+					enum page_cache_mode *new_type)
 { return 0; }
 static inline struct memtype *rbt_memtype_erase(u64 start, u64 end)
 { return NULL; }
diff --git a/arch/x86/mm/pat_rbtree.c b/arch/x86/mm/pat_rbtree.c
index 415f6c4..6582adc 100644
--- a/arch/x86/mm/pat_rbtree.c
+++ b/arch/x86/mm/pat_rbtree.c
@@ -122,11 +122,12 @@ static struct memtype *memtype_rb_exact_match(struct rb_root *root,
 
 static int memtype_rb_check_conflict(struct rb_root *root,
 				u64 start, u64 end,
-				unsigned long reqtype, unsigned long *newtype)
+				enum page_cache_mode reqtype,
+				enum page_cache_mode *newtype)
 {
 	struct rb_node *node;
 	struct memtype *match;
-	int found_type = reqtype;
+	enum page_cache_mode found_type = reqtype;
 
 	match = memtype_rb_lowest_match(&memtype_rbroot, start, end);
 	if (match == NULL)
@@ -187,7 +188,8 @@ static void memtype_rb_insert(struct rb_root *root, struct memtype *newdata)
 	rb_insert_augmented(&newdata->rb, root, &memtype_rb_augment_cb);
 }
 
-int rbt_memtype_check_insert(struct memtype *new, unsigned long *ret_type)
+int rbt_memtype_check_insert(struct memtype *new,
+			     enum page_cache_mode *ret_type)
 {
 	int err = 0;
 


* [tip:x86/mm] x86: Clean up pgtable_types.h
  2014-11-03 13:02 ` [PATCH V6 14/18] x86: Clean up pgtable_types.h Juergen Gross
@ 2014-11-16 10:58   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:58 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: mingo, tglx, hpa, jgross, linux-kernel, stefan.bader

Commit-ID:  87ad0b713b1034b6caf559976c35ce47f6d1d1e9
Gitweb:     http://git.kernel.org/tip/87ad0b713b1034b6caf559976c35ce47f6d1d1e9
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:02:00 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:26 +0100

x86: Clean up pgtable_types.h

Remove the defines from pgtable_types.h which are no longer used.

Switch __PAGE_KERNEL_NOCACHE to use cache mode type instead of pte
bits.

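With the _PAGE_NOCACHE define introduced in patch 1 the switch
amounts to (repeated here for reference):

	#define _PAGE_NOCACHE	(cachemode2protval(_PAGE_CACHE_MODE_UC))
	#define __PAGE_KERNEL_NOCACHE	(__PAGE_KERNEL | _PAGE_NOCACHE)

so the resulting pte bits follow the PAT configuration instead of
being hard-coded to _PAGE_PCD | _PAGE_PWT.
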
Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-15-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/pgtable_types.h | 21 +--------------------
 1 file changed, 1 insertion(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 5124642..6d5f6d1 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -128,11 +128,6 @@
 			 _PAGE_SOFT_DIRTY | _PAGE_NUMA)
 #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE | _PAGE_NUMA)
 
-#define _PAGE_CACHE_WB		(0)
-#define _PAGE_CACHE_WC		(_PAGE_PWT)
-#define _PAGE_CACHE_UC_MINUS	(_PAGE_PCD)
-#define _PAGE_CACHE_UC		(_PAGE_PCD | _PAGE_PWT)
-
 /*
  * The cache modes defined here are used to translate between pure SW usage
  * and the HW defined cache mode bits and/or PAT entries.
@@ -178,41 +173,27 @@ enum page_cache_mode {
 
 #define __PAGE_KERNEL_RO		(__PAGE_KERNEL & ~_PAGE_RW)
 #define __PAGE_KERNEL_RX		(__PAGE_KERNEL_EXEC & ~_PAGE_RW)
-#define __PAGE_KERNEL_EXEC_NOCACHE	(__PAGE_KERNEL_EXEC | _PAGE_PCD | _PAGE_PWT)
-#define __PAGE_KERNEL_WC		(__PAGE_KERNEL | _PAGE_CACHE_WC)
-#define __PAGE_KERNEL_NOCACHE		(__PAGE_KERNEL | _PAGE_PCD | _PAGE_PWT)
-#define __PAGE_KERNEL_UC_MINUS		(__PAGE_KERNEL | _PAGE_PCD)
+#define __PAGE_KERNEL_NOCACHE		(__PAGE_KERNEL | _PAGE_NOCACHE)
 #define __PAGE_KERNEL_VSYSCALL		(__PAGE_KERNEL_RX | _PAGE_USER)
 #define __PAGE_KERNEL_VVAR		(__PAGE_KERNEL_RO | _PAGE_USER)
-#define __PAGE_KERNEL_VVAR_NOCACHE	(__PAGE_KERNEL_VVAR | _PAGE_PCD | _PAGE_PWT)
 #define __PAGE_KERNEL_LARGE		(__PAGE_KERNEL | _PAGE_PSE)
-#define __PAGE_KERNEL_LARGE_NOCACHE	(__PAGE_KERNEL | _PAGE_CACHE_UC | _PAGE_PSE)
 #define __PAGE_KERNEL_LARGE_EXEC	(__PAGE_KERNEL_EXEC | _PAGE_PSE)
 
 #define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
 #define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
-#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS)
-#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC)
 
 #define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
 #define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
 #define PAGE_KERNEL_EXEC		__pgprot(__PAGE_KERNEL_EXEC)
 #define PAGE_KERNEL_RX			__pgprot(__PAGE_KERNEL_RX)
-#define PAGE_KERNEL_WC			__pgprot(__PAGE_KERNEL_WC)
 #define PAGE_KERNEL_NOCACHE		__pgprot(__PAGE_KERNEL_NOCACHE)
-#define PAGE_KERNEL_UC_MINUS		__pgprot(__PAGE_KERNEL_UC_MINUS)
-#define PAGE_KERNEL_EXEC_NOCACHE	__pgprot(__PAGE_KERNEL_EXEC_NOCACHE)
 #define PAGE_KERNEL_LARGE		__pgprot(__PAGE_KERNEL_LARGE)
-#define PAGE_KERNEL_LARGE_NOCACHE	__pgprot(__PAGE_KERNEL_LARGE_NOCACHE)
 #define PAGE_KERNEL_LARGE_EXEC		__pgprot(__PAGE_KERNEL_LARGE_EXEC)
 #define PAGE_KERNEL_VSYSCALL		__pgprot(__PAGE_KERNEL_VSYSCALL)
 #define PAGE_KERNEL_VVAR		__pgprot(__PAGE_KERNEL_VVAR)
-#define PAGE_KERNEL_VVAR_NOCACHE	__pgprot(__PAGE_KERNEL_VVAR_NOCACHE)
 
 #define PAGE_KERNEL_IO			__pgprot(__PAGE_KERNEL_IO)
 #define PAGE_KERNEL_IO_NOCACHE		__pgprot(__PAGE_KERNEL_IO_NOCACHE)
-#define PAGE_KERNEL_IO_UC_MINUS		__pgprot(__PAGE_KERNEL_IO_UC_MINUS)
-#define PAGE_KERNEL_IO_WC		__pgprot(__PAGE_KERNEL_IO_WC)
 
 /*         xwr */
 #define __P000	PAGE_NONE


* [tip:x86/mm] x86: Support PAT bit in pagetable dump for lower levels
  2014-11-03 13:02 ` [PATCH V6 15/18] x86: Support PAT bit in pagetable dump for lower levels Juergen Gross
@ 2014-11-16 10:58   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:58 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: tglx, hpa, mingo, linux-kernel, stefan.bader, jgross

Commit-ID:  f439c429c320981943f8b64b2a4049d946cb492b
Gitweb:     http://git.kernel.org/tip/f439c429c320981943f8b64b2a4049d946cb492b
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:02:01 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:26 +0100

x86: Support PAT bit in pagetable dump for lower levels

Dumping page table protection bits is not correct for entries on levels
2 and 3 regarding the PAT bit, which is at a different position than
on level 4.

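A minimal sketch of the resulting check (helper name illustrative,
not part of the patch):

	static bool entry_has_pat(pgprotval_t pr, int level)
	{
		if (level == 4)
			return pr & _PAGE_PAT;		/* bit 7 */
		if (level == 3 || level == 2)
			return pr & _PAGE_PAT_LARGE;	/* bit 12; bit 7 is PSE */
		return false;
	}
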
Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-16-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/mm/dump_pagetables.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index 95a427e..6c2ca03 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -126,7 +126,7 @@ static void printk_prot(struct seq_file *m, pgprot_t prot, int level, bool dmsg)
 
 	if (!pgprot_val(prot)) {
 		/* Not present */
-		pt_dump_cont_printf(m, dmsg, "                          ");
+		pt_dump_cont_printf(m, dmsg, "                              ");
 	} else {
 		if (pr & _PAGE_USER)
 			pt_dump_cont_printf(m, dmsg, "USR ");
@@ -145,18 +145,16 @@ static void printk_prot(struct seq_file *m, pgprot_t prot, int level, bool dmsg)
 		else
 			pt_dump_cont_printf(m, dmsg, "    ");
 
-		/* Bit 9 has a different meaning on level 3 vs 4 */
-		if (level <= 3) {
-			if (pr & _PAGE_PSE)
-				pt_dump_cont_printf(m, dmsg, "PSE ");
-			else
-				pt_dump_cont_printf(m, dmsg, "    ");
-		} else {
-			if (pr & _PAGE_PAT)
-				pt_dump_cont_printf(m, dmsg, "pat ");
-			else
-				pt_dump_cont_printf(m, dmsg, "    ");
-		}
+		/* Bit 7 has a different meaning on level 3 vs 4 */
+		if (level <= 3 && pr & _PAGE_PSE)
+			pt_dump_cont_printf(m, dmsg, "PSE ");
+		else
+			pt_dump_cont_printf(m, dmsg, "    ");
+		if ((level == 4 && pr & _PAGE_PAT) ||
+		    ((level == 3 || level == 2) && pr & _PAGE_PAT_LARGE))
+			pt_dump_cont_printf(m, dmsg, "pat ");
+		else
+			pt_dump_cont_printf(m, dmsg, "    ");
 		if (pr & _PAGE_GLOBAL)
 			pt_dump_cont_printf(m, dmsg, "GLB ");
 		else


* [tip:x86/mm] x86: Respect PAT bit when copying pte values between large and normal pages
  2014-11-03 13:02 ` [PATCH V6 16/18] x86: Respect PAT bit when copying pte values between large and normal pages Juergen Gross
@ 2014-11-16 10:58   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:58 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: mingo, tglx, jgross, linux-kernel, hpa, stefan.bader

Commit-ID:  f5b2831d654167d77da8afbef4d2584897b12d0c
Gitweb:     http://git.kernel.org/tip/f5b2831d654167d77da8afbef4d2584897b12d0c
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:02:02 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:26 +0100

x86: Respect PAT bit when copying pte values between large and normal pages

The PAT bit in the ptes is not moved to the correct position when
copying page protection attributes between entries of differently
sized pages. Translate the ptes according to their page size.

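A minimal sketch of the conversion, using the helpers introduced in
patch 1 (values illustrative):

	pgprot_t large = __pgprot(_PAGE_PAT_LARGE);	/* PAT at bit 12 */
	pgprot_t small = pgprot_large_2_4k(large);	/* PAT at bit 7 */
	/* pgprot_4k_2_large() performs the inverse translation */
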
Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: konrad.wilk@oracle.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-17-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/mm/pageattr.c | 33 +++++++++++++++++++++++----------
 1 file changed, 23 insertions(+), 10 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index de807c9..6c8e3fd 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -485,14 +485,23 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 
 	/*
 	 * We are safe now. Check whether the new pgprot is the same:
+	 * Convert protection attributes to 4k-format, as cpa->mask* are set
+	 * up accordingly.
 	 */
 	old_pte = *kpte;
-	old_prot = req_prot = pte_pgprot(old_pte);
+	old_prot = req_prot = pgprot_large_2_4k(pte_pgprot(old_pte));
 
 	pgprot_val(req_prot) &= ~pgprot_val(cpa->mask_clr);
 	pgprot_val(req_prot) |= pgprot_val(cpa->mask_set);
 
 	/*
+	 * req_prot is in format of 4k pages. It must be converted to large
+	 * page format: the caching mode includes the PAT bit located at
+	 * different bit positions in the two formats.
+	 */
+	req_prot = pgprot_4k_2_large(req_prot);
+
+	/*
 	 * Set the PSE and GLOBAL flags only if the PRESENT flag is
 	 * set otherwise pmd_present/pmd_huge will return true even on
 	 * a non present pmd. The canon_pgprot will clear _PAGE_GLOBAL
@@ -585,13 +594,10 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 
 	paravirt_alloc_pte(&init_mm, page_to_pfn(base));
 	ref_prot = pte_pgprot(pte_clrhuge(*kpte));
-	/*
-	 * If we ever want to utilize the PAT bit, we need to
-	 * update this function to make sure it's converted from
-	 * bit 12 to bit 7 when we cross from the 2MB level to
-	 * the 4K level:
-	 */
-	WARN_ON_ONCE(pgprot_val(ref_prot) & _PAGE_PAT_LARGE);
+
+	/* promote PAT bit to correct position */
+	if (level == PG_LEVEL_2M)
+		ref_prot = pgprot_large_2_4k(ref_prot);
 
 #ifdef CONFIG_X86_64
 	if (level == PG_LEVEL_1G) {
@@ -879,6 +885,7 @@ static int populate_pmd(struct cpa_data *cpa,
 {
 	unsigned int cur_pages = 0;
 	pmd_t *pmd;
+	pgprot_t pmd_pgprot;
 
 	/*
 	 * Not on a 2M boundary?
@@ -910,6 +917,8 @@ static int populate_pmd(struct cpa_data *cpa,
 	if (num_pages == cur_pages)
 		return cur_pages;
 
+	pmd_pgprot = pgprot_4k_2_large(pgprot);
+
 	while (end - start >= PMD_SIZE) {
 
 		/*
@@ -921,7 +930,8 @@ static int populate_pmd(struct cpa_data *cpa,
 
 		pmd = pmd_offset(pud, start);
 
-		set_pmd(pmd, __pmd(cpa->pfn | _PAGE_PSE | massage_pgprot(pgprot)));
+		set_pmd(pmd, __pmd(cpa->pfn | _PAGE_PSE |
+				   massage_pgprot(pmd_pgprot)));
 
 		start	  += PMD_SIZE;
 		cpa->pfn  += PMD_SIZE;
@@ -949,6 +959,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, pgd_t *pgd,
 	pud_t *pud;
 	unsigned long end;
 	int cur_pages = 0;
+	pgprot_t pud_pgprot;
 
 	end = start + (cpa->numpages << PAGE_SHIFT);
 
@@ -986,12 +997,14 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, pgd_t *pgd,
 		return cur_pages;
 
 	pud = pud_offset(pgd, start);
+	pud_pgprot = pgprot_4k_2_large(pgprot);
 
 	/*
 	 * Map everything starting from the Gb boundary, possibly with 1G pages
 	 */
 	while (end - start >= PUD_SIZE) {
-		set_pud(pud, __pud(cpa->pfn | _PAGE_PSE | massage_pgprot(pgprot)));
+		set_pud(pud, __pud(cpa->pfn | _PAGE_PSE |
+				   massage_pgprot(pud_pgprot)));
 
 		start	  += PUD_SIZE;
 		cpa->pfn  += PUD_SIZE;


* [tip:x86/mm] x86: Enable PAT to use cache mode translation tables
  2014-11-03 13:02 ` [PATCH V6 17/18] x86: Enable PAT to use cache mode translation tables Juergen Gross
@ 2014-11-16 10:58   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:58 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, jgross, toshi.kani, konrad.wilk, linux-kernel, tglx, hpa

Commit-ID:  bd809af16e3ab1f8d55b3e2928c47c67e2a865d2
Gitweb:     http://git.kernel.org/tip/bd809af16e3ab1f8d55b3e2928c47c67e2a865d2
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:02:03 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:26 +0100

x86: Enable PAT to use cache mode translation tables

Update the translation tables from cache mode to pgprot values
according to the PAT settings. This enables changing the cache
attributes of a PAT index in just one place without requiring
changes on the users' side.

With this change it is possible to use the same kernel with different
PAT configurations, e.g. supporting Xen.

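A minimal sketch of the effect (PAT layout illustrative): after
pat_init_cache_modes() has run, a request resolves through the
updated table, e.g.

	unsigned long prot = cachemode2protval(_PAGE_CACHE_MODE_WC);

With WC at PAT entry 1 this yields _PAGE_PWT, with a configuration
placing WC at entry 4 it yields _PAGE_PAT - callers need no change
either way.
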
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Toshi Kani <toshi.kani@hp.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: ville.syrjala@linux.intel.com
Cc: david.vrabel@citrix.com
Cc: jbeulich@suse.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-18-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/pat.h           |  1 +
 arch/x86/include/asm/pgtable_types.h |  4 +++
 arch/x86/mm/init.c                   |  8 ++++++
 arch/x86/mm/mm_internal.h            |  2 ++
 arch/x86/mm/pat.c                    | 50 ++++++++++++++++++++++++++++++++++--
 5 files changed, 63 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/pat.h b/arch/x86/include/asm/pat.h
index 150407a..91bc4ba 100644
--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -11,6 +11,7 @@ static const int pat_enabled;
 #endif
 
 extern void pat_init(void);
+void pat_init_cache_modes(void);
 
 extern int reserve_memtype(u64 start, u64 end,
 		enum page_cache_mode req_pcm, enum page_cache_mode *ret_pcm);
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 6d5f6d1..af447f9 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -351,6 +351,10 @@ extern uint8_t __pte2cachemode_tbl[8];
 	((((cb) >> (_PAGE_BIT_PAT - 2)) & 4) |		\
 	 (((cb) >> (_PAGE_BIT_PCD - 1)) & 2) |		\
 	 (((cb) >> _PAGE_BIT_PWT) & 1))
+#define __cm_idx2pte(i)					\
+	((((i) & 4) << (_PAGE_BIT_PAT - 2)) |		\
+	 (((i) & 2) << (_PAGE_BIT_PCD - 1)) |		\
+	 (((i) & 1) << _PAGE_BIT_PWT))
 
 static inline unsigned long cachemode2protval(enum page_cache_mode pcm)
 {
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index a9776ba..82b41d5 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -716,3 +716,11 @@ void __init zone_sizes_init(void)
 	free_area_init_nodes(max_zone_pfns);
 }
 
+void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache)
+{
+	/* entry 0 MUST be WB (hardwired to speed up translations) */
+	BUG_ON(!entry && cache != _PAGE_CACHE_MODE_WB);
+
+	__cachemode2pte_tbl[cache] = __cm_idx2pte(entry);
+	__pte2cachemode_tbl[entry] = cache;
+}
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 6b563a1..62474ba 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -16,4 +16,6 @@ void zone_sizes_init(void);
 
 extern int after_bootmem;
 
+void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache);
+
 #endif	/* __X86_MM_INTERNAL_H */
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index ef75f3f..4c60127 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -31,6 +31,7 @@
 #include <asm/io.h>
 
 #include "pat_internal.h"
+#include "mm_internal.h"
 
 #ifdef CONFIG_X86_PAT
 int __read_mostly pat_enabled = 1;
@@ -75,6 +76,52 @@ enum {
+	PAT_UC_MINUS = 7,	/* UC, but can be overridden by MTRR */
 };
 
+#define CM(c) (_PAGE_CACHE_MODE_ ## c)
+
+static enum page_cache_mode pat_get_cache_mode(unsigned pat_val, char *msg)
+{
+	enum page_cache_mode cache;
+	char *cache_mode;
+
+	switch (pat_val) {
+	case PAT_UC:       cache = CM(UC);       cache_mode = "UC  "; break;
+	case PAT_WC:       cache = CM(WC);       cache_mode = "WC  "; break;
+	case PAT_WT:       cache = CM(WT);       cache_mode = "WT  "; break;
+	case PAT_WP:       cache = CM(WP);       cache_mode = "WP  "; break;
+	case PAT_WB:       cache = CM(WB);       cache_mode = "WB  "; break;
+	case PAT_UC_MINUS: cache = CM(UC_MINUS); cache_mode = "UC- "; break;
+	default:           cache = CM(WB);       cache_mode = "WB  "; break;
+	}
+
+	memcpy(msg, cache_mode, 4);
+
+	return cache;
+}
+
+#undef CM
+
+/*
+ * Update the cache mode to pgprot translation tables according to PAT
+ * configuration.
+ * Using lower indices is preferred, so we start with highest index.
+ */
+void pat_init_cache_modes(void)
+{
+	int i;
+	enum page_cache_mode cache;
+	char pat_msg[33];
+	u64 pat;
+
+	rdmsrl(MSR_IA32_CR_PAT, pat);
+	pat_msg[32] = 0;
+	for (i = 7; i >= 0; i--) {
+		cache = pat_get_cache_mode((pat >> (i * 8)) & 7,
+					   pat_msg + 4 * i);
+		update_cache_mode_entry(i, cache);
+	}
+	pr_info("PAT configuration [0-7]: %s\n", pat_msg);
+}
+
 #define PAT(x, y)	((u64)PAT_ ## y << ((x)*8))
 
 void pat_init(void)
@@ -124,8 +171,7 @@ void pat_init(void)
 	wrmsrl(MSR_IA32_CR_PAT, pat);
 
 	if (boot_cpu)
-		printk(KERN_INFO "x86 PAT enabled: cpu %d, old 0x%Lx, new 0x%Lx\n",
-		       smp_processor_id(), boot_pat_state, pat);
+		pat_init_cache_modes();
 }
 
 #undef PAT

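To illustrate why the loop in pat_init_cache_modes() above runs from
the highest index down: lower entries are written last and thus win
when a cache mode occurs more than once.

	/* The power-up PAT value 0x0007040600070406 holds UC- at
	 * entries 2 and 6; after the loop _PAGE_CACHE_MODE_UC_MINUS
	 * maps to entry 2, i.e.
	 * cachemode2protval(_PAGE_CACHE_MODE_UC_MINUS) == _PAGE_PCD.
	 */
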

* [tip:x86/mm] xen: Support Xen pv-domains using PAT
  2014-11-03 13:02 ` [PATCH V6 18/18] xen: Support Xen pv-domains using PAT Juergen Gross
@ 2014-11-16 10:59   ` tip-bot for Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: tip-bot for Juergen Gross @ 2014-11-16 10:59 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: jgross, konrad.wilk, david.vrabel, tglx, linux-kernel, hpa, mingo

Commit-ID:  47591df505129c9774af6cca2debf283a6e56ed7
Gitweb:     http://git.kernel.org/tip/47591df505129c9774af6cca2debf283a6e56ed7
Author:     Juergen Gross <jgross@suse.com>
AuthorDate: Mon, 3 Nov 2014 14:02:04 +0100
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 16 Nov 2014 11:04:26 +0100

xen: Support Xen pv-domains using PAT

With the dynamic mapping between cache modes and pgprot values it is
now possible to use all cache modes via the Xen hypervisor PAT settings
in a pv domain.

All that needs to be done is to read the PAT configuration MSR and
set up the translation tables accordingly.

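As a concrete example (Xen's layout taken from the table removed in
mmu.c below): Xen keeps WC at PAT entry 4 instead of entry 1, so
after pat_init_cache_modes() has read the MSR

	cachemode2protval(_PAGE_CACHE_MODE_WC)	/* _PAGE_PAT under Xen */

yields _PAGE_PAT rather than _PAGE_PWT, and the WC special casing
removed from mmu.c below is no longer needed.
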
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stefan.bader@canonical.com
Cc: xen-devel@lists.xensource.com
Cc: ville.syrjala@linux.intel.com
Cc: jbeulich@suse.com
Cc: toshi.kani@hp.com
Cc: plagnioj@jcrosoft.com
Cc: tomi.valkeinen@ti.com
Cc: bhelgaas@google.com
Link: http://lkml.kernel.org/r/1415019724-4317-19-git-send-email-jgross@suse.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/xen/enlighten.c | 25 +++++++------------------
 arch/x86/xen/mmu.c       | 47 +----------------------------------------------
 arch/x86/xen/xen-ops.h   |  1 -
 3 files changed, 8 insertions(+), 65 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index fac5e4f..6bf3a13 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1100,12 +1100,6 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
 		/* Fast syscall setup is all done in hypercalls, so
 		   these are all ignored.  Stub them out here to stop
 		   Xen console noise. */
-		break;
-
-	case MSR_IA32_CR_PAT:
-		if (smp_processor_id() == 0)
-			xen_set_pat(((u64)high << 32) | low);
-		break;
 
 	default:
 		ret = native_write_msr_safe(msr, low, high);
@@ -1561,10 +1555,6 @@ asmlinkage __visible void __init xen_start_kernel(void)
 
 	/* Prevent unwanted bits from being set in PTEs. */
 	__supported_pte_mask &= ~_PAGE_GLOBAL;
-#if 0
-	if (!xen_initial_domain())
-#endif
-		__supported_pte_mask &= ~(_PAGE_PWT | _PAGE_PCD);
 
 	/*
 	 * Prevent page tables from being allocated in highmem, even
@@ -1618,14 +1608,6 @@ asmlinkage __visible void __init xen_start_kernel(void)
 	 */
 	acpi_numa = -1;
 #endif
-#ifdef CONFIG_X86_PAT
-	/*
-	 * For right now disable the PAT. We should remove this once
-	 * git commit 8eaffa67b43e99ae581622c5133e20b0f48bcef1
-	 * (xen/pat: Disable PAT support for now) is reverted.
-	 */
-	pat_enabled = 0;
-#endif
 	/* Don't do the full vcpu_info placement stuff until we have a
 	   possible map and a non-dummy shared_info. */
 	per_cpu(xen_vcpu, 0) = &HYPERVISOR_shared_info->vcpu_info[0];
@@ -1636,6 +1618,13 @@ asmlinkage __visible void __init xen_start_kernel(void)
 	xen_raw_console_write("mapping kernel into physical memory\n");
 	xen_setup_kernel_pagetable((pgd_t *)xen_start_info->pt_base, xen_start_info->nr_pages);
 
+	/*
+	 * Modify the cache mode translation tables to match Xen's PAT
+	 * configuration.
+	 */
+
+	pat_init_cache_modes();
+
 	/* keep using Xen gdt for now; no urgent need to change it */
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index a8a1a3d..9855eb8 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -410,13 +410,7 @@ static pteval_t pte_pfn_to_mfn(pteval_t val)
 __visible pteval_t xen_pte_val(pte_t pte)
 {
 	pteval_t pteval = pte.pte;
-#if 0
-	/* If this is a WC pte, convert back from Xen WC to Linux WC */
-	if ((pteval & (_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)) == _PAGE_PAT) {
-		WARN_ON(!pat_enabled);
-		pteval = (pteval & ~_PAGE_PAT) | _PAGE_PWT;
-	}
-#endif
+
 	return pte_mfn_to_pfn(pteval);
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_pte_val);
@@ -427,47 +421,8 @@ __visible pgdval_t xen_pgd_val(pgd_t pgd)
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_pgd_val);
 
-/*
- * Xen's PAT setup is part of its ABI, though I assume entries 6 & 7
- * are reserved for now, to correspond to the Intel-reserved PAT
- * types.
- *
- * We expect Linux's PAT set as follows:
- *
- * Idx  PTE flags        Linux    Xen    Default
- * 0                     WB       WB     WB
- * 1            PWT      WC       WT     WT
- * 2        PCD          UC-      UC-    UC-
- * 3        PCD PWT      UC       UC     UC
- * 4    PAT              WB       WC     WB
- * 5    PAT     PWT      WC       WP     WT
- * 6    PAT PCD          UC-      rsv    UC-
- * 7    PAT PCD PWT      UC       rsv    UC
- */
-
-void xen_set_pat(u64 pat)
-{
-	/* We expect Linux to use a PAT setting of
-	 * UC UC- WC WB (ignoring the PAT flag) */
-	WARN_ON(pat != 0x0007010600070106ull);
-}
-
 __visible pte_t xen_make_pte(pteval_t pte)
 {
-#if 0
-	/* If Linux is trying to set a WC pte, then map to the Xen WC.
-	 * If _PAGE_PAT is set, then it probably means it is really
-	 * _PAGE_PSE, so avoid fiddling with the PAT mapping and hope
-	 * things work out OK...
-	 *
-	 * (We should never see kernel mappings with _PAGE_PSE set,
-	 * but we could see hugetlbfs mappings, I think.).
-	 */
-	if (pat_enabled && !WARN_ON(pte & _PAGE_PAT)) {
-		if ((pte & (_PAGE_PCD | _PAGE_PWT)) == _PAGE_PWT)
-			pte = (pte & ~(_PAGE_PCD | _PAGE_PWT)) | _PAGE_PAT;
-	}
-#endif
 	pte = pte_pfn_to_mfn(pte);
 
 	return native_make_pte(pte);
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 28c7e0b..4ab9298 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -33,7 +33,6 @@ extern unsigned long xen_max_p2m_pfn;
 
 void xen_mm_pin_all(void);
 void xen_mm_unpin_all(void);
-void xen_set_pat(u64);
 
 char * __init xen_memory_setup(void);
 char * xen_auto_xlated_memory_setup(void);


* Re: [PATCH V6 00/18] x86: Full support of PAT
  2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
                   ` (19 preceding siblings ...)
  2014-11-14  6:30 ` Juergen Gross
@ 2014-11-16 13:08 ` Ingo Molnar
  2014-11-17  4:42   ` Jürgen Groß
  20 siblings, 1 reply; 47+ messages in thread
From: Ingo Molnar @ 2014-11-16 13:08 UTC (permalink / raw)
  To: Juergen Gross
  Cc: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas


* Juergen Gross <jgross@suse.com> wrote:

>  arch/x86/include/asm/cacheflush.h         |  38 ++++---

FYI, this series breaks the UML build:

In file included from /home/mingo/tip/include/linux/highmem.h:11:0,
                 from /home/mingo/tip/include/linux/pagemap.h:10,
                 from /home/mingo/tip/include/linux/mempolicy.h:14,
                 from /home/mingo/tip/include/linux/shmem_fs.h:6,
                 from /home/mingo/tip/init/do_mounts.c:30:
/home/mingo/tip/arch/x86/include/asm/cacheflush.h:67:36: error: return type is an incomplete type
 static inline enum page_cache_mode get_page_memtype(struct page *pg)
                                    ^
/home/mingo/tip/arch/x86/include/asm/cacheflush.h: In function ‘get_page_memtype’:
/home/mingo/tip/arch/x86/include/asm/cacheflush.h:69:2: warning: ‘return’ with a value, in function returning void [enabled by default]
  return -1;
  ^
/home/mingo/tip/arch/x86/include/asm/cacheflush.h: At top level:
/home/mingo/tip/arch/x86/include/asm/cacheflush.h:72:30: error: parameter 2 (‘memtype’) has incomplete type
         enum page_cache_mode memtype)
                              ^
/home/mingo/tip/arch/x86/include/asm/cacheflush.h:71:20: error: function declaration isn’t a prototype [-Werror=strict-prototypes]

Thanks,

	Ingo


* Re: [PATCH V6 00/18] x86: Full support of PAT
  2014-11-16 13:08 ` Ingo Molnar
@ 2014-11-17  4:42   ` Jürgen Groß
  0 siblings, 0 replies; 47+ messages in thread
From: Jürgen Groß @ 2014-11-17  4:42 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: hpa, x86, tglx, mingo, stefan.bader, linux-kernel, xen-devel,
	konrad.wilk, ville.syrjala, david.vrabel, jbeulich, toshi.kani,
	plagnioj, tomi.valkeinen, bhelgaas

On 11/16/2014 02:08 PM, Ingo Molnar wrote:
>
> * Juergen Gross <jgross@suse.com> wrote:
>
>>   arch/x86/include/asm/cacheflush.h         |  38 ++++---
>
> FYI, this series breaks the UML build:
>
> In file included from /home/mingo/tip/include/linux/highmem.h:11:0,
>                   from /home/mingo/tip/include/linux/pagemap.h:10,
>                   from /home/mingo/tip/include/linux/mempolicy.h:14,
>                   from /home/mingo/tip/include/linux/shmem_fs.h:6,
>                   from /home/mingo/tip/init/do_mounts.c:30:
> /home/mingo/tip/arch/x86/include/asm/cacheflush.h:67:36: error: return type is an incomplete type
>   static inline enum page_cache_mode get_page_memtype(struct page *pg)
>                                      ^
> /home/mingo/tip/arch/x86/include/asm/cacheflush.h: In function ‘get_page_memtype’:
> /home/mingo/tip/arch/x86/include/asm/cacheflush.h:69:2: warning: ‘return’ with a value, in function returning void [enabled by default]
>    return -1;
>    ^
> /home/mingo/tip/arch/x86/include/asm/cacheflush.h: At top level:
> /home/mingo/tip/arch/x86/include/asm/cacheflush.h:72:30: error: parameter 2 (‘memtype’) has incomplete type
>           enum page_cache_mode memtype)
>                                ^
> /home/mingo/tip/arch/x86/include/asm/cacheflush.h:71:20: error: function declaration isn’t a prototype [-Werror=strict-prototypes]

Thomas already committed a fixup. Thank you, Thomas.


Juergen



* Re: [PATCH V6 01/18] x86: Make page cache mode a real type
  2014-11-03 13:01 ` [PATCH V6 01/18] x86: Make page cache mode a real type Juergen Gross
  2014-11-16 10:54   ` [tip:x86/mm] " tip-bot for Juergen Gross
@ 2015-01-22  7:11   ` Steven Noonan
  2015-01-22 10:15     ` Juergen Gross
  1 sibling, 1 reply; 47+ messages in thread
From: Steven Noonan @ 2015-01-22  7:11 UTC (permalink / raw)
  To: Juergen Gross
  Cc: H. Peter Anvin, Linux-X86, Thomas Gleixner, Ingo Molnar,
	stefan.bader, Linux Kernel mailing List, xen-devel,
	Konrad Rzeszutek Wilk, ville.syrjala, David Vrabel, Jan Beulich,
	toshi.kani, plagnioj, tomi.valkeinen, bhelgaas

On Mon, Nov 3, 2014 at 5:01 AM, Juergen Gross <jgross@suse.com> wrote:
> At the moment there are a lot of places that handle setting or getting
> the page cache mode by treating the pgprot bits equal to the cache mode.
> This is only true because there are a lot of assumptions about the setup
> of the PAT MSR. Otherwise the cache type needs to get translated into
> pgprot bits and vice versa.
>
> This patch tries to prepare for that by introducing a separate type
> for the cache mode and adding functions to translate between those and
> pgprot values.
>
> To avoid too much performance penalty the translation between cache mode
> and pgprot values is done via tables which contain the relevant
> information.  Write-back cache mode is hard-wired to be 0, all other
> modes are configurable via those tables. For large pages there are
> translation functions as the PAT bit is located at different positions
> in the ptes of 4k and large pages.
>
> Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/include/asm/pgtable_types.h | 73 +++++++++++++++++++++++++++++++++++-
>  arch/x86/mm/init.c                   | 29 ++++++++++++++
>  2 files changed, 101 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index 0778964..5124642 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
> @@ -128,12 +128,34 @@
>                          _PAGE_SOFT_DIRTY | _PAGE_NUMA)
>  #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE | _PAGE_NUMA)
>
> -#define _PAGE_CACHE_MASK       (_PAGE_PCD | _PAGE_PWT)
>  #define _PAGE_CACHE_WB         (0)
>  #define _PAGE_CACHE_WC         (_PAGE_PWT)
>  #define _PAGE_CACHE_UC_MINUS   (_PAGE_PCD)
>  #define _PAGE_CACHE_UC         (_PAGE_PCD | _PAGE_PWT)
>
> +/*
> + * The cache modes defined here are used to translate between pure SW usage
> + * and the HW defined cache mode bits and/or PAT entries.
> + *
> + * The resulting bits for PWT, PCD and PAT should be chosen in a way
> + * to have the WB mode at index 0 (all bits clear). This is the default
> + * right now and likely would break too much if changed.
> + */
> +#ifndef __ASSEMBLY__
> +enum page_cache_mode {
> +       _PAGE_CACHE_MODE_WB = 0,
> +       _PAGE_CACHE_MODE_WC = 1,
> +       _PAGE_CACHE_MODE_UC_MINUS = 2,
> +       _PAGE_CACHE_MODE_UC = 3,
> +       _PAGE_CACHE_MODE_WT = 4,
> +       _PAGE_CACHE_MODE_WP = 5,
> +       _PAGE_CACHE_MODE_NUM = 8
> +};
> +#endif
> +
> +#define _PAGE_CACHE_MASK       (_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)
> +#define _PAGE_NOCACHE          (cachemode2protval(_PAGE_CACHE_MODE_UC))
> +
>  #define PAGE_NONE      __pgprot(_PAGE_PROTNONE | _PAGE_ACCESSED)
>  #define PAGE_SHARED    __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
>                                  _PAGE_ACCESSED | _PAGE_NX)
> @@ -341,6 +363,55 @@ static inline pmdval_t pmdnuma_flags(pmd_t pmd)
>  #define pgprot_val(x)  ((x).pgprot)
>  #define __pgprot(x)    ((pgprot_t) { (x) } )
>
> +extern uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM];
> +extern uint8_t __pte2cachemode_tbl[8];
> +
> +#define __pte2cm_idx(cb)                               \
> +       ((((cb) >> (_PAGE_BIT_PAT - 2)) & 4) |          \
> +        (((cb) >> (_PAGE_BIT_PCD - 1)) & 2) |          \
> +        (((cb) >> _PAGE_BIT_PWT) & 1))
> +
> +static inline unsigned long cachemode2protval(enum page_cache_mode pcm)
> +{
> +       if (likely(pcm == 0))
> +               return 0;
> +       return __cachemode2pte_tbl[pcm];
> +}
> +static inline pgprot_t cachemode2pgprot(enum page_cache_mode pcm)
> +{
> +       return __pgprot(cachemode2protval(pcm));
> +}
> +static inline enum page_cache_mode pgprot2cachemode(pgprot_t pgprot)
> +{
> +       unsigned long masked;
> +
> +       masked = pgprot_val(pgprot) & _PAGE_CACHE_MASK;
> +       if (likely(masked == 0))
> +               return 0;
> +       return __pte2cachemode_tbl[__pte2cm_idx(masked)];
> +}
> +static inline pgprot_t pgprot_4k_2_large(pgprot_t pgprot)
> +{
> +       pgprot_t new;
> +       unsigned long val;
> +
> +       val = pgprot_val(pgprot);
> +       pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
> +               ((val & _PAGE_PAT) << (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
> +       return new;
> +}
> +static inline pgprot_t pgprot_large_2_4k(pgprot_t pgprot)
> +{
> +       pgprot_t new;
> +       unsigned long val;
> +
> +       val = pgprot_val(pgprot);
> +       pgprot_val(new) = (val & ~(_PAGE_PAT | _PAGE_PAT_LARGE)) |
> +                         ((val & _PAGE_PAT_LARGE) >>
> +                          (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
> +       return new;
> +}
> +
>
>  typedef struct page *pgtable_t;
>
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index 66dba36..a9776ba 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -27,6 +27,35 @@
>
>  #include "mm_internal.h"
>
> +/*
> + * Tables translating between page_cache_type_t and pte encoding.
> + * Minimal supported modes are defined statically, modified if more supported
> + * cache modes are available.
> + * Index into __cachemode2pte_tbl is the cachemode.
> + * Index into __pte2cachemode_tbl are the caching attribute bits of the pte
> + * (_PAGE_PWT, _PAGE_PCD, _PAGE_PAT) at index bit positions 0, 1, 2.
> + */
> +uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM] = {
> +       [_PAGE_CACHE_MODE_WB]           = 0,
> +       [_PAGE_CACHE_MODE_WC]           = _PAGE_PWT,
> +       [_PAGE_CACHE_MODE_UC_MINUS]     = _PAGE_PCD,
> +       [_PAGE_CACHE_MODE_UC]           = _PAGE_PCD | _PAGE_PWT,
> +       [_PAGE_CACHE_MODE_WT]           = _PAGE_PCD,
> +       [_PAGE_CACHE_MODE_WP]           = _PAGE_PCD,
> +};
> +EXPORT_SYMBOL_GPL(__cachemode2pte_tbl);
> +uint8_t __pte2cachemode_tbl[8] = {
> +       [__pte2cm_idx(0)] = _PAGE_CACHE_MODE_WB,
> +       [__pte2cm_idx(_PAGE_PWT)] = _PAGE_CACHE_MODE_WC,
> +       [__pte2cm_idx(_PAGE_PCD)] = _PAGE_CACHE_MODE_UC_MINUS,
> +       [__pte2cm_idx(_PAGE_PWT | _PAGE_PCD)] = _PAGE_CACHE_MODE_UC,
> +       [__pte2cm_idx(_PAGE_PAT)] = _PAGE_CACHE_MODE_WB,
> +       [__pte2cm_idx(_PAGE_PWT | _PAGE_PAT)] = _PAGE_CACHE_MODE_WC,
> +       [__pte2cm_idx(_PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC_MINUS,
> +       [__pte2cm_idx(_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC,
> +};
> +EXPORT_SYMBOL_GPL(__pte2cachemode_tbl);
> +

I notice these two symbols are exported GPL-only. This breaks builds
of several out-of-tree non-GPL modules such as the NVIDIA driver, and
VMware modules, etc. What is the appropriate code path for proprietary
modules to use when setting page cache mode flags? Alternatively, is
it possible for these EXPORT_SYMBOL_GPLs to be changed to
EXPORT_SYMBOL?

>  static unsigned long __initdata pgt_buf_start;
>  static unsigned long __initdata pgt_buf_end;
>  static unsigned long __initdata pgt_buf_top;
> --
> 1.8.4.5
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/


* Re: [PATCH V6 01/18] x86: Make page cache mode a real type
  2015-01-22  7:11   ` [PATCH V6 01/18] " Steven Noonan
@ 2015-01-22 10:15     ` Juergen Gross
  2015-01-22 11:06       ` Thomas Gleixner
  0 siblings, 1 reply; 47+ messages in thread
From: Juergen Gross @ 2015-01-22 10:15 UTC (permalink / raw)
  To: Steven Noonan
  Cc: H. Peter Anvin, Linux-X86, Thomas Gleixner, Ingo Molnar,
	stefan.bader, Linux Kernel mailing List, xen-devel,
	Konrad Rzeszutek Wilk, ville.syrjala, David Vrabel, Jan Beulich,
	toshi.kani, plagnioj, tomi.valkeinen, bhelgaas

On 01/22/2015 08:11 AM, Steven Noonan wrote:
> On Mon, Nov 3, 2014 at 5:01 AM, Juergen Gross <jgross@suse.com> wrote:
>> At the moment there are a lot of places that handle setting or getting
>> the page cache mode by treating the pgprot bits equal to the cache mode.
>> This is only true because there are a lot of assumptions about the setup
>> of the PAT MSR. Otherwise the cache type needs to get translated into
>> pgprot bits and vice versa.
>>
>> This patch tries to prepare for that by introducing a separate type
>> for the cache mode and adding functions to translate between those and
>> pgprot values.
>>
>> To avoid too much performance penalty the translation between cache mode
>> and pgprot values is done via tables which contain the relevant
>> information.  Write-back cache mode is hard-wired to be 0, all other
>> modes are configurable via those tables. For large pages there are
>> translation functions as the PAT bit is located at different positions
>> in the ptes of 4k and large pages.
>>
>> Based-on-patch-by: Stefan Bader <stefan.bader@canonical.com>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
>> ---
...
>> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
>> index 66dba36..a9776ba 100644
>> --- a/arch/x86/mm/init.c
>> +++ b/arch/x86/mm/init.c
>> @@ -27,6 +27,35 @@
>>
>>   #include "mm_internal.h"
>>
>> +/*
>> + * Tables translating between page_cache_type_t and pte encoding.
>> + * Minimal supported modes are defined statically, modified if more supported
>> + * cache modes are available.
>> + * Index into __cachemode2pte_tbl is the cachemode.
>> + * Index into __pte2cachemode_tbl are the caching attribute bits of the pte
>> + * (_PAGE_PWT, _PAGE_PCD, _PAGE_PAT) at index bit positions 0, 1, 2.
>> + */
>> +uint16_t __cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM] = {
>> +       [_PAGE_CACHE_MODE_WB]           = 0,
>> +       [_PAGE_CACHE_MODE_WC]           = _PAGE_PWT,
>> +       [_PAGE_CACHE_MODE_UC_MINUS]     = _PAGE_PCD,
>> +       [_PAGE_CACHE_MODE_UC]           = _PAGE_PCD | _PAGE_PWT,
>> +       [_PAGE_CACHE_MODE_WT]           = _PAGE_PCD,
>> +       [_PAGE_CACHE_MODE_WP]           = _PAGE_PCD,
>> +};
>> +EXPORT_SYMBOL_GPL(__cachemode2pte_tbl);
>> +uint8_t __pte2cachemode_tbl[8] = {
>> +       [__pte2cm_idx(0)] = _PAGE_CACHE_MODE_WB,
>> +       [__pte2cm_idx(_PAGE_PWT)] = _PAGE_CACHE_MODE_WC,
>> +       [__pte2cm_idx(_PAGE_PCD)] = _PAGE_CACHE_MODE_UC_MINUS,
>> +       [__pte2cm_idx(_PAGE_PWT | _PAGE_PCD)] = _PAGE_CACHE_MODE_UC,
>> +       [__pte2cm_idx(_PAGE_PAT)] = _PAGE_CACHE_MODE_WB,
>> +       [__pte2cm_idx(_PAGE_PWT | _PAGE_PAT)] = _PAGE_CACHE_MODE_WC,
>> +       [__pte2cm_idx(_PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC_MINUS,
>> +       [__pte2cm_idx(_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC,
>> +};
>> +EXPORT_SYMBOL_GPL(__pte2cachemode_tbl);
>> +
>
> I notice these two symbols are exported GPL-only. This breaks builds
> of several out-of-tree non-GPL modules such as the NVIDIA driver, and
> VMware modules, etc. What is the appropriate code path for proprietary
> modules to use when setting page cache mode flags? Alternatively, is
> it possible for these EXPORT_SYMBOL_GPLs to be changed to
> EXPORT_SYMBOL?

I don't mind you sending a patch to change this. I won't object to
such a patch. OTOH this is more of a political question and I don't
want to spend my time arguing.

Juergen



* Re: [PATCH V6 01/18] x86: Make page cache mode a real type
  2015-01-22 10:15     ` Juergen Gross
@ 2015-01-22 11:06       ` Thomas Gleixner
  2015-01-22 11:11         ` Juergen Gross
  0 siblings, 1 reply; 47+ messages in thread
From: Thomas Gleixner @ 2015-01-22 11:06 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Steven Noonan, H. Peter Anvin, Linux-X86, Ingo Molnar,
	stefan.bader, Linux Kernel mailing List, xen-devel,
	Konrad Rzeszutek Wilk, ville.syrjala, David Vrabel, Jan Beulich,
	toshi.kani, plagnioj, tomi.valkeinen, bhelgaas

On Thu, 22 Jan 2015, Juergen Gross wrote:
> On 01/22/2015 08:11 AM, Steven Noonan wrote:
> > I notice these two symbols are exported GPL-only. This breaks builds
> > of several out-of-tree non-GPL modules such as the NVIDIA driver, and
> > VMware modules, etc. What is the appropriate code path for proprietary
> > modules to use when setting page cache mode flags? Alternatively, is
> > it possible for these EXPORT_SYMBOL_GPLs to be changed to
> > EXPORT_SYMBOL?
> 
> I don't mind you sending a patch to change this. I won't object to
> such a patch. OTOH this is more of a political question and I don't
> want to spend my time arguing.

It's rather simple. If the new cache stuff replaces code which was
available under EXPORT_SYMBOL, we either need a compat export or make
the new exports non-GPL.

Not that I like it, but that has been the policy so far. We only break
out-of-tree stuff if the export is broken or security relevant, like
we did with the init_mm export a few years ago.

Thanks,

	tglx


* Re: [PATCH V6 01/18] x86: Make page cache mode a real type
  2015-01-22 11:06       ` Thomas Gleixner
@ 2015-01-22 11:11         ` Juergen Gross
  0 siblings, 0 replies; 47+ messages in thread
From: Juergen Gross @ 2015-01-22 11:11 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Steven Noonan, H. Peter Anvin, Linux-X86, Ingo Molnar,
	stefan.bader, Linux Kernel mailing List, xen-devel,
	Konrad Rzeszutek Wilk, ville.syrjala, David Vrabel, Jan Beulich,
	toshi.kani, plagnioj, tomi.valkeinen, bhelgaas

On 01/22/2015 12:06 PM, Thomas Gleixner wrote:
> On Thu, 22 Jan 2015, Juergen Gross wrote:
>> On 01/22/2015 08:11 AM, Steven Noonan wrote:
>>> I notice these two symbols are exported GPL-only. This breaks builds
>>> of several out-of-tree non-GPL modules such as the NVIDIA driver, and
>>> VMware modules, etc. What is the appropriate code path for proprietary
>>> modules to use when setting page cache mode flags? Alternatively, is
>>> it possible for these EXPORT_SYMBOL_GPLs to be changed to
>>> EXPORT_SYMBOL?
>>
>> I don't mind you sending a patch to change this. I won't object to
>> such a patch. OTOH this is more of a political question and I don't
>> want to spend my time arguing.
>
> It's rather simple. If the new cache stuff replaces code which was
> available under EXPORT_SYMBOL, we either need a compat export or make
> the new exports non-GPL.
>
> Not that I like it, but that has been the policy so far. We only break
> out-of-tree stuff if the export is broken or security relevant, like
> we did with the init_mm export a few years ago.

Okay, in this case I'll write a patch.

Thanks for clarifying this.


Juergen



Thread overview: 47+ messages
2014-11-03 13:01 [PATCH V6 00/18] x86: Full support of PAT Juergen Gross
2014-11-03 13:01 ` [PATCH V6 01/18] x86: Make page cache mode a real type Juergen Gross
2014-11-16 10:54   ` [tip:x86/mm] " tip-bot for Juergen Gross
2015-01-22  7:11   ` [PATCH V6 01/18] " Steven Noonan
2015-01-22 10:15     ` Juergen Gross
2015-01-22 11:06       ` Thomas Gleixner
2015-01-22 11:11         ` Juergen Gross
2014-11-03 13:01 ` [PATCH V6 02/18] x86: Use new cache mode type in include/asm/fb.h Juergen Gross
2014-11-16 10:54   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 13:01 ` [PATCH V6 03/18] x86: Use new cache mode type in drivers/video/fbdev/gbefb.c Juergen Gross
2014-11-07  8:16   ` Tomi Valkeinen
2014-11-16 10:55   ` [tip:x86/mm] x86: Use new cache mode type in drivers/video/fbdev/gbefb.c tip-bot for Juergen Gross
2014-11-03 13:01 ` [PATCH V6 04/18] x86: Use new cache mode type in drivers/video/fbdev/vermilion Juergen Gross
2014-11-16 10:55   ` [tip:x86/mm] x86: Use new cache mode type in drivers/video/fbdev/vermilion tip-bot for Juergen Gross
2014-11-03 13:01 ` [PATCH V6 05/18] x86: Use new cache mode type in arch/x86/pci Juergen Gross
2014-11-16 10:55   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 13:01 ` [PATCH V6 06/18] x86: Use new cache mode type in arch/x86/mm/init_64.c Juergen Gross
2014-11-16 10:55   ` [tip:x86/mm] x86: Use new cache mode type in arch/x86/mm/init_64.c tip-bot for Juergen Gross
2014-11-03 13:01 ` [PATCH V6 07/18] x86: Use new cache mode type in asm/pgtable.h Juergen Gross
2014-11-16 10:56   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 13:01 ` [PATCH V6 08/18] x86: Use new cache mode type in mm/iomap_32.c Juergen Gross
2014-11-16 10:56   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 13:01 ` [PATCH V6 09/18] x86: Use new cache mode type in track_pfn_remap() and track_pfn_insert() Juergen Gross
2014-11-16 10:56   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 13:01 ` [PATCH V6 10/18] x86: Remove looking for setting of _PAGE_PAT_LARGE in pageattr.c Juergen Gross
2014-11-03 16:44   ` Thomas Gleixner
2014-11-16 10:57   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 13:01 ` [PATCH V6 11/18] x86: Use new cache mode type in setting page attributes Juergen Gross
2014-11-16 10:57   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 13:01 ` [PATCH V6 12/18] x86: Use new cache mode type in mm/ioremap.c Juergen Gross
2014-11-16 10:57   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 13:01 ` [PATCH V6 13/18] x86: Use new cache mode type in memtype related functions Juergen Gross
2014-11-16 10:57   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 13:02 ` [PATCH V6 14/18] x86: Clean up pgtable_types.h Juergen Gross
2014-11-16 10:58   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 13:02 ` [PATCH V6 15/18] x86: Support PAT bit in pagetable dump for lower levels Juergen Gross
2014-11-16 10:58   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 13:02 ` [PATCH V6 16/18] x86: Respect PAT bit when copying pte values between large and normal pages Juergen Gross
2014-11-16 10:58   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 13:02 ` [PATCH V6 17/18] x86: Enable PAT to use cache mode translation tables Juergen Gross
2014-11-16 10:58   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 13:02 ` [PATCH V6 18/18] xen: Support Xen pv-domains using PAT Juergen Gross
2014-11-16 10:59   ` [tip:x86/mm] " tip-bot for Juergen Gross
2014-11-03 16:43 ` [PATCH V6 00/18] x86: Full support of PAT Toshi Kani
2014-11-14  6:30 ` Juergen Gross
2014-11-16 13:08 ` Ingo Molnar
2014-11-17  4:42   ` Jürgen Groß
