linux-kernel.vger.kernel.org archive mirror
* [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes
@ 2009-04-09 21:26 venkatesh.pallipadi
  2009-04-09 21:26 ` [patch 1/6] x86, CPA: Change idmap attribute before ioremap attribute setup venkatesh.pallipadi
                   ` (7 more replies)
  0 siblings, 8 replies; 18+ messages in thread
From: venkatesh.pallipadi @ 2009-04-09 21:26 UTC (permalink / raw)
  To: mingo, tglx, hpa; +Cc: linux-kernel, Suresh Siddha, Venkatesh Pallipadi

This patchset contains cleanups and minor bug fixes in x86 PAT and CPA
related code. The bugs were mostly found by code inspection. There
should not be any functionality changes with this patchset.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>

-- 


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [patch 1/6] x86, CPA: Change idmap attribute before ioremap attribute setup
  2009-04-09 21:26 [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes venkatesh.pallipadi
@ 2009-04-09 21:26 ` venkatesh.pallipadi
  2009-04-10 12:33   ` [tip:x86/pat] " Suresh Siddha
  2009-04-09 21:26 ` [patch 2/6] x86, PAT: Change order of cpa and free in set_memory_wb venkatesh.pallipadi
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: venkatesh.pallipadi @ 2009-04-09 21:26 UTC (permalink / raw)
  To: mingo, tglx, hpa; +Cc: linux-kernel, Suresh Siddha, Venkatesh Pallipadi

[-- Attachment #1: 0001-x86-CPA-Change-idmap-attribute-before-ioremap-attr.patch --]
[-- Type: text/plain, Size: 1589 bytes --]

From: Suresh Siddha <suresh.b.siddha@intel.com>
Subject: [patch 1/6] x86, CPA: Change idmap attribute before ioremap attribute setup

Change the identity mapping to the requested attribute first, before
we set up the virtual memory mapping with the new requested attribute.
This makes sure that there is no window during which the identity-mapped
attribute may disagree with the ioremap range's attribute type.
This also avoids doing cpa on the ioremap'ed address twice (first in
ioremap_page_range and then in ioremap_change_attr using vaddr), and
should improve ioremap performance a bit.
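
For reference, here is the resulting order of operations in
__ioremap_caller(), shown as a simplified sketch (the actual change is
the diff below):

        /* Sync the kernel identity mapping to the requested type first. */
        if (kernel_map_sync_memtype(phys_addr, size, prot_val)) {
                free_memtype(phys_addr, phys_addr + size);
                free_vm_area(area);
                return NULL;
        }

        /*
         * Only then create the vmalloc-space mapping with the same type,
         * so there is no window where the two mappings disagree and no
         * second cpa pass over vaddr is needed.
         */
        if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot)) {
                free_memtype(phys_addr, phys_addr + size);
                free_vm_area(area);
                return NULL;
        }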

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
---
 arch/x86/mm/ioremap.c |    7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 0dfa09d..329387e 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -280,15 +280,16 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 		return NULL;
 	area->phys_addr = phys_addr;
 	vaddr = (unsigned long) area->addr;
-	if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot)) {
+
+	if (kernel_map_sync_memtype(phys_addr, size, prot_val)) {
 		free_memtype(phys_addr, phys_addr + size);
 		free_vm_area(area);
 		return NULL;
 	}
 
-	if (ioremap_change_attr(vaddr, size, prot_val) < 0) {
+	if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot)) {
 		free_memtype(phys_addr, phys_addr + size);
-		vunmap(area->addr);
+		free_vm_area(area);
 		return NULL;
 	}
 
-- 
1.6.0.6

-- 


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [patch 2/6] x86, PAT: Change order of cpa and free in set_memory_wb
  2009-04-09 21:26 [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes venkatesh.pallipadi
  2009-04-09 21:26 ` [patch 1/6] x86, CPA: Change idmap attribute before ioremap attribute setup venkatesh.pallipadi
@ 2009-04-09 21:26 ` venkatesh.pallipadi
  2009-04-10 12:34   ` [tip:x86/pat] " venkatesh.pallipadi
  2009-04-09 21:26 ` [patch 3/6] x86, PAT: Handle faults cleanly in set_memory_ APIs venkatesh.pallipadi
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: venkatesh.pallipadi @ 2009-04-09 21:26 UTC (permalink / raw)
  To: mingo, tglx, hpa; +Cc: linux-kernel, Venkatesh Pallipadi, Suresh Siddha

[-- Attachment #1: 0002-x86-PAT-Change-order-of-cpa-and-free-in-set_memory.patch --]
[-- Type: text/plain, Size: 1429 bytes --]

To be free of aliasing due to races, the set_memory_* interfaces should
follow this ordering: reserve the memtype, change the pages to UC/WC,
change them back to WB, and only then free the reservation.
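
A minimal caller-side sketch of that lifecycle (the function and
variable names are illustrative; vaddr is a page-aligned kernel
direct-mapping address and npages its length in pages):

        int example_uc_buffer_use(unsigned long vaddr, int npages)
        {
                int ret;

                /* reserve the memtype and switch the pages to UC */
                ret = set_memory_uc(vaddr, npages);
                if (ret)
                        return ret;

                /* ... use the uncached buffer ... */

                /*
                 * Switch back to WB: with this patch the attribute change
                 * (cpa) happens before the memtype reservation is freed.
                 */
                return set_memory_wb(vaddr, npages);
        }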

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
---
 arch/x86/mm/pageattr.c |   11 +++++++----
 1 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 1224865..38dc61f 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1007,15 +1007,19 @@ int _set_memory_wb(unsigned long addr, int numpages)
 
 int set_memory_wb(unsigned long addr, int numpages)
 {
+	int ret = _set_memory_wb(addr, numpages);
 	free_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE);
-
-	return _set_memory_wb(addr, numpages);
+	return ret;
 }
 EXPORT_SYMBOL(set_memory_wb);
 
 int set_memory_array_wb(unsigned long *addr, int addrinarray)
 {
 	int i;
+	int ret;
+
+	ret = change_page_attr_clear(addr, addrinarray,
+				      __pgprot(_PAGE_CACHE_MASK), 1);
 
 	for (i = 0; i < addrinarray; i++) {
 		unsigned long start = __pa(addr[i]);
@@ -1028,8 +1032,7 @@ int set_memory_array_wb(unsigned long *addr, int addrinarray)
 		}
 		free_memtype(start, end);
 	}
-	return change_page_attr_clear(addr, addrinarray,
-				      __pgprot(_PAGE_CACHE_MASK), 1);
+	return ret;
 }
 EXPORT_SYMBOL(set_memory_array_wb);
 
-- 
1.6.0.6

-- 


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [patch 3/6] x86, PAT: Handle faults cleanly in set_memory_ APIs
  2009-04-09 21:26 [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes venkatesh.pallipadi
  2009-04-09 21:26 ` [patch 1/6] x86, CPA: Change idmap attribute before ioremap attribute setup venkatesh.pallipadi
  2009-04-09 21:26 ` [patch 2/6] x86, PAT: Change order of cpa and free in set_memory_wb venkatesh.pallipadi
@ 2009-04-09 21:26 ` venkatesh.pallipadi
  2009-04-10 12:34   ` [tip:x86/pat] " venkatesh.pallipadi
  2009-04-09 21:26 ` [patch 4/6] x86, PAT: Changing memtype to WC ensuring no WB alias venkatesh.pallipadi
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: venkatesh.pallipadi @ 2009-04-09 21:26 UTC (permalink / raw)
  To: mingo, tglx, hpa; +Cc: linux-kernel, Venkatesh Pallipadi, Suresh Siddha

[-- Attachment #1: 0003-x86-PAT-Handle-faults-cleanly-in-set_memory_-APIs.patch --]
[-- Type: text/plain, Size: 4953 bytes --]

Handle faults and do proper cleanups in the set_memory_*() functions. In
some cases, these functions were not doing a proper free on failure paths.

With the change to tracking the memtype of RAM pages in struct page
instead of on the PAT list, we no longer need the changes in commit
c5e147. This patch reverts that change.
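
The cleanup pattern being applied, as a standalone sketch (the helper
name is made up for illustration; the reworked set_memory_array_uc()
below does the same thing): reserve each page's memtype and, on any
failure, release exactly what was reserved so far.

        static int reserve_uc_or_rollback(unsigned long *addr, int count)
        {
                int i, j, ret;

                for (i = 0; i < count; i++) {
                        ret = reserve_memtype(__pa(addr[i]),
                                              __pa(addr[i]) + PAGE_SIZE,
                                              _PAGE_CACHE_UC_MINUS, NULL);
                        if (ret)
                                goto out_free;
                }
                return 0;

        out_free:
                for (j = 0; j < i; j++)
                        free_memtype(__pa(addr[j]),
                                     __pa(addr[j]) + PAGE_SIZE);
                return ret;
        }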

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
---
 arch/x86/mm/pageattr.c |  113 +++++++++++++++++++++++++++--------------------
 1 files changed, 65 insertions(+), 48 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 38dc61f..3226504 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -931,52 +931,56 @@ int _set_memory_uc(unsigned long addr, int numpages)
 
 int set_memory_uc(unsigned long addr, int numpages)
 {
+	int ret;
+
 	/*
 	 * for now UC MINUS. see comments in ioremap_nocache()
 	 */
-	if (reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
-			    _PAGE_CACHE_UC_MINUS, NULL))
-		return -EINVAL;
+	ret = reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
+			    _PAGE_CACHE_UC_MINUS, NULL);
+	if (ret)
+		goto out_err;
+
+	ret = _set_memory_uc(addr, numpages);
+	if (ret)
+		goto out_free;
+
+	return 0;
 
-	return _set_memory_uc(addr, numpages);
+out_free:
+	free_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE);
+out_err:
+	return ret;
 }
 EXPORT_SYMBOL(set_memory_uc);
 
 int set_memory_array_uc(unsigned long *addr, int addrinarray)
 {
-	unsigned long start;
-	unsigned long end;
-	int i;
+	int i, j;
+	int ret;
+
 	/*
 	 * for now UC MINUS. see comments in ioremap_nocache()
 	 */
 	for (i = 0; i < addrinarray; i++) {
-		start = __pa(addr[i]);
-		for (end = start + PAGE_SIZE; i < addrinarray - 1; end += PAGE_SIZE) {
-			if (end != __pa(addr[i + 1]))
-				break;
-			i++;
-		}
-		if (reserve_memtype(start, end, _PAGE_CACHE_UC_MINUS, NULL))
-			goto out;
+		ret = reserve_memtype(__pa(addr[i]), __pa(addr[i]) + PAGE_SIZE,
+					_PAGE_CACHE_UC_MINUS, NULL);
+		if (ret)
+			goto out_free;
 	}
 
-	return change_page_attr_set(addr, addrinarray,
+	ret = change_page_attr_set(addr, addrinarray,
 				    __pgprot(_PAGE_CACHE_UC_MINUS), 1);
-out:
-	for (i = 0; i < addrinarray; i++) {
-		unsigned long tmp = __pa(addr[i]);
-
-		if (tmp == start)
-			break;
-		for (end = tmp + PAGE_SIZE; i < addrinarray - 1; end += PAGE_SIZE) {
-			if (end != __pa(addr[i + 1]))
-				break;
-			i++;
-		}
-		free_memtype(tmp, end);
-	}
-	return -EINVAL;
+	if (ret)
+		goto out_free;
+
+	return 0;
+
+out_free:
+	for (j = 0; j < i; j++)
+		free_memtype(__pa(addr[j]), __pa(addr[j]) + PAGE_SIZE);
+
+	return ret;
 }
 EXPORT_SYMBOL(set_memory_array_uc);
 
@@ -988,14 +992,26 @@ int _set_memory_wc(unsigned long addr, int numpages)
 
 int set_memory_wc(unsigned long addr, int numpages)
 {
+	int ret;
+
 	if (!pat_enabled)
 		return set_memory_uc(addr, numpages);
 
-	if (reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
-		_PAGE_CACHE_WC, NULL))
-		return -EINVAL;
+	ret = reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
+		_PAGE_CACHE_WC, NULL);
+	if (ret)
+		goto out_err;
 
-	return _set_memory_wc(addr, numpages);
+	ret = _set_memory_wc(addr, numpages);
+	if (ret)
+		goto out_free;
+
+	return 0;
+
+out_free:
+	free_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE);
+out_err:
+	return ret;
 }
 EXPORT_SYMBOL(set_memory_wc);
 
@@ -1007,9 +1023,14 @@ int _set_memory_wb(unsigned long addr, int numpages)
 
 int set_memory_wb(unsigned long addr, int numpages)
 {
-	int ret = _set_memory_wb(addr, numpages);
+	int ret;
+
+	ret = _set_memory_wb(addr, numpages);
+	if (ret)
+		return ret;
+
 	free_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE);
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(set_memory_wb);
 
@@ -1020,19 +1041,13 @@ int set_memory_array_wb(unsigned long *addr, int addrinarray)
 
 	ret = change_page_attr_clear(addr, addrinarray,
 				      __pgprot(_PAGE_CACHE_MASK), 1);
+	if (ret)
+		return ret;
 
-	for (i = 0; i < addrinarray; i++) {
-		unsigned long start = __pa(addr[i]);
-		unsigned long end;
+	for (i = 0; i < addrinarray; i++)
+		free_memtype(__pa(addr[i]), __pa(addr[i]) + PAGE_SIZE);
 
-		for (end = start + PAGE_SIZE; i < addrinarray - 1; end += PAGE_SIZE) {
-			if (end != __pa(addr[i + 1]))
-				break;
-			i++;
-		}
-		free_memtype(start, end);
-	}
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(set_memory_array_wb);
 
@@ -1125,6 +1140,8 @@ int set_pages_array_wb(struct page **pages, int addrinarray)
 
 	retval = cpa_clear_pages_array(pages, addrinarray,
 			__pgprot(_PAGE_CACHE_MASK));
+	if (retval)
+		return retval;
 
 	for (i = 0; i < addrinarray; i++) {
 		start = (unsigned long)page_address(pages[i]);
@@ -1132,7 +1149,7 @@ int set_pages_array_wb(struct page **pages, int addrinarray)
 		free_memtype(start, end);
 	}
 
-	return retval;
+	return 0;
 }
 EXPORT_SYMBOL(set_pages_array_wb);
 
-- 
1.6.0.6

-- 


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [patch 4/6] x86, PAT: Changing memtype to WC ensuring no WB alias
  2009-04-09 21:26 [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes venkatesh.pallipadi
                   ` (2 preceding siblings ...)
  2009-04-09 21:26 ` [patch 3/6] x86, PAT: Handle faults cleanly in set_memory_ APIs venkatesh.pallipadi
@ 2009-04-09 21:26 ` venkatesh.pallipadi
  2009-04-10 12:34   ` [tip:x86/pat] " venkatesh.pallipadi
  2009-04-09 21:26 ` [patch 5/6] x86, PAT: Consolidate code in pat_x_mtrr_type() and reserve_memtype() venkatesh.pallipadi
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: venkatesh.pallipadi @ 2009-04-09 21:26 UTC (permalink / raw)
  To: mingo, tglx, hpa; +Cc: linux-kernel, Venkatesh Pallipadi, Suresh Siddha

[-- Attachment #1: 0004-x86-PAT-Changing-memtype-to-WC-ensuring-no-WB-alia.patch --]
[-- Type: text/plain, Size: 1449 bytes --]

As per the SDM, there should not be any aliasing of a WC mapping with any
cacheable type across CPUs. That is, if one CPU is changing the identity
map memtype to WC, no other CPU should, at the time of this change, have
a TLB entry for this page that carries a WB attribute. The SDM suggests
making the page not present, but then we would have to handle any page
faults that can potentially happen due to these pages being not present.

Another way to deal with this without having any WB mapping is to change
the page first to UC and then to WC. This ensures that we meet the SDM
requirement of no cacheable alias to a WC page, and it has the same or
lower overhead than marking the page not present and making it present
later.
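
Concretely, _set_memory_wc() becomes a two-step attribute change, shown
here as a simplified excerpt of the diff below:

        /* Step 1: WB -> UC_MINUS, removing any cacheable alias first. */
        ret = change_page_attr_set(&addr, numpages,
                                   __pgprot(_PAGE_CACHE_UC_MINUS), 0);

        /* Step 2: UC_MINUS -> WC, now that no WB mapping remains. */
        if (!ret)
                ret = change_page_attr_set(&addr, numpages,
                                           __pgprot(_PAGE_CACHE_WC), 0);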

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
---

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 3226504..4fa8996 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -986,8 +986,15 @@ EXPORT_SYMBOL(set_memory_array_uc);
 
 int _set_memory_wc(unsigned long addr, int numpages)
 {
-	return change_page_attr_set(&addr, numpages,
+	int ret;
+	ret = change_page_attr_set(&addr, numpages,
+				    __pgprot(_PAGE_CACHE_UC_MINUS), 0);
+
+	if (!ret) {
+		ret = change_page_attr_set(&addr, numpages,
 				    __pgprot(_PAGE_CACHE_WC), 0);
+	}
+	return ret;
 }
 
 int set_memory_wc(unsigned long addr, int numpages)
-- 
1.6.0.6

-- 


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [patch 5/6] x86, PAT: Consolidate code in pat_x_mtrr_type() and reserve_memtype()
  2009-04-09 21:26 [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes venkatesh.pallipadi
                   ` (3 preceding siblings ...)
  2009-04-09 21:26 ` [patch 4/6] x86, PAT: Changing memtype to WC ensuring no WB alias venkatesh.pallipadi
@ 2009-04-09 21:26 ` venkatesh.pallipadi
  2009-04-10 12:34   ` [tip:x86/pat] " Suresh Siddha
  2009-04-09 21:26 ` [patch 6/6] x86, PAT: Remove duplicate memtype reserve in devmem mmap venkatesh.pallipadi
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: venkatesh.pallipadi @ 2009-04-09 21:26 UTC (permalink / raw)
  To: mingo, tglx, hpa; +Cc: linux-kernel, Suresh Siddha, Venkatesh Pallipadi

[-- Attachment #1: 0005-x86-PAT-Consolidate-code-in-pat_x_mtrr_type-and.patch --]
[-- Type: text/plain, Size: 3158 bytes --]

From: Suresh Siddha <suresh.b.siddha@intel.com>
Subject: [patch 5/6] x86, PAT: Consolidate code in pat_x_mtrr_type() and reserve_memtype()

Fix pat_x_mtrr_type() to use UC_MINUS when the MTRR type is UC. This is
consistent with ioremap() and ioremap_nocache(), which use UC_MINUS.

Consolidate the code so that reserve_memtype() also uses
pat_x_mtrr_type() when the caller doesn't specify any special (non-WB)
attribute.
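
After the consolidation the helper reads roughly as follows (simplified;
only a WB request consults the MTRRs, every other request type is passed
through unchanged):

        static unsigned long pat_x_mtrr_type(u64 start, u64 end,
                                             unsigned long req_type)
        {
                if (req_type == _PAGE_CACHE_WB) {
                        u8 mtrr_type = mtrr_type_lookup(start, end);

                        if (mtrr_type != MTRR_TYPE_WRBACK)
                                return _PAGE_CACHE_UC_MINUS;

                        return _PAGE_CACHE_WB;
                }

                return req_type;
        }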

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
---
 arch/x86/mm/ioremap.c |    3 ++-
 arch/x86/mm/pat.c     |   35 +++++++++++++----------------------
 2 files changed, 15 insertions(+), 23 deletions(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 329387e..d4c4b2c 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -375,7 +375,8 @@ static void __iomem *ioremap_default(resource_size_t phys_addr,
 	 * - UC_MINUS for non-WB-able memory with no other conflicting mappings
 	 * - Inherit from confliting mappings otherwise
 	 */
-	err = reserve_memtype(phys_addr, phys_addr + size, -1, &flags);
+	err = reserve_memtype(phys_addr, phys_addr + size,
+				_PAGE_CACHE_WB, &flags);
 	if (err < 0)
 		return NULL;
 
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 95d3b1a..3c8624c 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -182,10 +182,10 @@ static unsigned long pat_x_mtrr_type(u64 start, u64 end, unsigned long req_type)
 		u8 mtrr_type;
 
 		mtrr_type = mtrr_type_lookup(start, end);
-		if (mtrr_type == MTRR_TYPE_UNCACHABLE)
-			return _PAGE_CACHE_UC;
-		if (mtrr_type == MTRR_TYPE_WRCOMB)
-			return _PAGE_CACHE_WC;
+		if (mtrr_type != MTRR_TYPE_WRBACK)
+			return _PAGE_CACHE_UC_MINUS;
+
+		return _PAGE_CACHE_WB;
 	}
 
 	return req_type;
@@ -352,23 +352,13 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 		return 0;
 	}
 
-	if (req_type == -1) {
-		/*
-		 * Call mtrr_lookup to get the type hint. This is an
-		 * optimization for /dev/mem mmap'ers into WB memory (BIOS
-		 * tools and ACPI tools). Use WB request for WB memory and use
-		 * UC_MINUS otherwise.
-		 */
-		u8 mtrr_type = mtrr_type_lookup(start, end);
-
-		if (mtrr_type == MTRR_TYPE_WRBACK)
-			actual_type = _PAGE_CACHE_WB;
-		else
-			actual_type = _PAGE_CACHE_UC_MINUS;
-	} else {
-		actual_type = pat_x_mtrr_type(start, end,
-					      req_type & _PAGE_CACHE_MASK);
-	}
+	/*
+	 * Call mtrr_lookup to get the type hint. This is an
+	 * optimization for /dev/mem mmap'ers into WB memory (BIOS
+	 * tools and ACPI tools). Use WB request for WB memory and use
+	 * UC_MINUS otherwise.
+	 */
+	actual_type = pat_x_mtrr_type(start, end, req_type & _PAGE_CACHE_MASK);
 
 	if (new_type)
 		*new_type = actual_type;
@@ -587,7 +577,8 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
 	if (flags != -1) {
 		retval = reserve_memtype(offset, offset + size, flags, NULL);
 	} else {
-		retval = reserve_memtype(offset, offset + size, -1, &flags);
+		retval = reserve_memtype(offset, offset + size,
+					_PAGE_CACHE_WB, &flags);
 	}
 
 	if (retval < 0)
-- 
1.6.0.6

-- 


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [patch 6/6] x86, PAT: Remove duplicate memtype reserve in devmem mmap
  2009-04-09 21:26 [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes venkatesh.pallipadi
                   ` (4 preceding siblings ...)
  2009-04-09 21:26 ` [patch 5/6] x86, PAT: Consolidate code in pat_x_mtrr_type() and reserve_memtype() venkatesh.pallipadi
@ 2009-04-09 21:26 ` venkatesh.pallipadi
  2009-04-10 12:34   ` [tip:x86/pat] " Suresh Siddha
  2009-04-10 11:53 ` [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes Ingo Molnar
  2009-04-10 20:41 ` Eric Anholt
  7 siblings, 1 reply; 18+ messages in thread
From: venkatesh.pallipadi @ 2009-04-09 21:26 UTC (permalink / raw)
  To: mingo, tglx, hpa; +Cc: linux-kernel, Suresh Siddha, Venkatesh Pallipadi

[-- Attachment #1: 0006-x86-PAT-Remove-duplicate-memtype-reserve-in-devmem.patch --]
[-- Type: text/plain, Size: 5398 bytes --]

From: Suresh Siddha <suresh.b.siddha@intel.com>
Subject: [patch 6/6] x86, PAT: Remove duplicate memtype reserve in devmem mmap

The /dev/mem mmap code has been doing its own memtype reserve/free for a
while now. Recently we added memtype tracking in remap_pfn_range(), and
/dev/mem mmap uses it indirectly. So we no longer need separate tracking
in the /dev/mem code. That means another ~100 lines of code removed :-).
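
A simplified view of the /dev/mem mmap path after this change (sketch
only; the real mmap_mem() also validates the range and fixes up
vma->vm_page_prot first):

        static int mmap_mem_sketch(struct file *file,
                                   struct vm_area_struct *vma)
        {
                size_t size = vma->vm_end - vma->vm_start;

                /*
                 * The memtype reserve/free now happens once, inside the
                 * PAT tracking that remap_pfn_range() already performs.
                 */
                if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
                                    size, vma->vm_page_prot))
                        return -EAGAIN;

                return 0;
        }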

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
---
 arch/x86/include/asm/pat.h |    4 ---
 arch/x86/mm/pat.c          |   60 +------------------------------------------
 drivers/char/mem.c         |   27 -------------------
 3 files changed, 2 insertions(+), 89 deletions(-)

diff --git a/arch/x86/include/asm/pat.h b/arch/x86/include/asm/pat.h
index 2cd07b9..7af14e5 100644
--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -18,9 +18,5 @@ extern int free_memtype(u64 start, u64 end);
 
 extern int kernel_map_sync_memtype(u64 base, unsigned long size,
 		unsigned long flag);
-extern void map_devmem(unsigned long pfn, unsigned long size,
-		       struct pgprot vma_prot);
-extern void unmap_devmem(unsigned long pfn, unsigned long size,
-			 struct pgprot vma_prot);
 
 #endif /* _ASM_X86_PAT_H */
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 3c8624c..c5d01cf 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -536,9 +536,7 @@ static inline int range_is_allowed(unsigned long pfn, unsigned long size)
 int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
 				unsigned long size, pgprot_t *vma_prot)
 {
-	u64 offset = ((u64) pfn) << PAGE_SHIFT;
-	unsigned long flags = -1;
-	int retval;
+	unsigned long flags = _PAGE_CACHE_WB;
 
 	if (!range_is_allowed(pfn, size))
 		return 0;
@@ -566,65 +564,11 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
 	}
 #endif
 
-	/*
-	 * With O_SYNC, we can only take UC_MINUS mapping. Fail if we cannot.
-	 *
-	 * Without O_SYNC, we want to get
-	 * - WB for WB-able memory and no other conflicting mappings
-	 * - UC_MINUS for non-WB-able memory with no other conflicting mappings
-	 * - Inherit from confliting mappings otherwise
-	 */
-	if (flags != -1) {
-		retval = reserve_memtype(offset, offset + size, flags, NULL);
-	} else {
-		retval = reserve_memtype(offset, offset + size,
-					_PAGE_CACHE_WB, &flags);
-	}
-
-	if (retval < 0)
-		return 0;
-
-	if (((pfn < max_low_pfn_mapped) ||
-	     (pfn >= (1UL<<(32 - PAGE_SHIFT)) && pfn < max_pfn_mapped)) &&
-	    ioremap_change_attr((unsigned long)__va(offset), size, flags) < 0) {
-		free_memtype(offset, offset + size);
-		printk(KERN_INFO
-		"%s:%d /dev/mem ioremap_change_attr failed %s for %Lx-%Lx\n",
-			current->comm, current->pid,
-			cattr_name(flags),
-			offset, (unsigned long long)(offset + size));
-		return 0;
-	}
-
 	*vma_prot = __pgprot((pgprot_val(*vma_prot) & ~_PAGE_CACHE_MASK) |
 			     flags);
 	return 1;
 }
 
-void map_devmem(unsigned long pfn, unsigned long size, pgprot_t vma_prot)
-{
-	unsigned long want_flags = (pgprot_val(vma_prot) & _PAGE_CACHE_MASK);
-	u64 addr = (u64)pfn << PAGE_SHIFT;
-	unsigned long flags;
-
-	reserve_memtype(addr, addr + size, want_flags, &flags);
-	if (flags != want_flags) {
-		printk(KERN_INFO
-		"%s:%d /dev/mem expected mapping type %s for %Lx-%Lx, got %s\n",
-			current->comm, current->pid,
-			cattr_name(want_flags),
-			addr, (unsigned long long)(addr + size),
-			cattr_name(flags));
-	}
-}
-
-void unmap_devmem(unsigned long pfn, unsigned long size, pgprot_t vma_prot)
-{
-	u64 addr = (u64)pfn << PAGE_SHIFT;
-
-	free_memtype(addr, addr + size);
-}
-
 /*
  * Change the memory type for the physial address range in kernel identity
  * mapping space if that range is a part of identity map.
@@ -668,8 +612,8 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 {
 	int is_ram = 0;
 	int ret;
-	unsigned long flags;
 	unsigned long want_flags = (pgprot_val(*vma_prot) & _PAGE_CACHE_MASK);
+	unsigned long flags = want_flags;
 
 	is_ram = pat_pagerange_is_ram(paddr, paddr + size);
 
diff --git a/drivers/char/mem.c b/drivers/char/mem.c
index 3586b3b..8f05c38 100644
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -301,33 +301,7 @@ static inline int private_mapping_ok(struct vm_area_struct *vma)
 }
 #endif
 
-void __attribute__((weak))
-map_devmem(unsigned long pfn, unsigned long len, pgprot_t prot)
-{
-	/* nothing. architectures can override. */
-}
-
-void __attribute__((weak))
-unmap_devmem(unsigned long pfn, unsigned long len, pgprot_t prot)
-{
-	/* nothing. architectures can override. */
-}
-
-static void mmap_mem_open(struct vm_area_struct *vma)
-{
-	map_devmem(vma->vm_pgoff,  vma->vm_end - vma->vm_start,
-			vma->vm_page_prot);
-}
-
-static void mmap_mem_close(struct vm_area_struct *vma)
-{
-	unmap_devmem(vma->vm_pgoff,  vma->vm_end - vma->vm_start,
-			vma->vm_page_prot);
-}
-
 static struct vm_operations_struct mmap_mem_ops = {
-	.open  = mmap_mem_open,
-	.close = mmap_mem_close,
 #ifdef CONFIG_HAVE_IOREMAP_PROT
 	.access = generic_access_phys
 #endif
@@ -362,7 +336,6 @@ static int mmap_mem(struct file * file, struct vm_area_struct * vma)
 			    vma->vm_pgoff,
 			    size,
 			    vma->vm_page_prot)) {
-		unmap_devmem(vma->vm_pgoff, size, vma->vm_page_prot);
 		return -EAGAIN;
 	}
 	return 0;
-- 
1.6.0.6

-- 


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes
  2009-04-09 21:26 [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes venkatesh.pallipadi
                   ` (5 preceding siblings ...)
  2009-04-09 21:26 ` [patch 6/6] x86, PAT: Remove duplicate memtype reserve in devmem mmap venkatesh.pallipadi
@ 2009-04-10 11:53 ` Ingo Molnar
  2009-04-10 17:31   ` Pallipadi, Venkatesh
  2009-04-10 20:41 ` Eric Anholt
  7 siblings, 1 reply; 18+ messages in thread
From: Ingo Molnar @ 2009-04-10 11:53 UTC (permalink / raw)
  To: venkatesh.pallipadi; +Cc: tglx, hpa, linux-kernel, Suresh Siddha


* venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com> wrote:

> This patchset contains cleanups and minor bug fixes in x86 PAT and 
> CPA related code. The bugs were mostly found by code inspection. 
> There should not be any functionality changes with this patchset.
> 
> Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
> Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>

Great, this series looks really nice!

Does this solve the problems reported in the "2.6.29 git master and 
PAT problems" thread and addressed via an earlier patch of yours:

 Subject: [PATCH] x86, PAT: Remove page granularity tracking for vm_insert_pfn maps

I am worried about this particular patch - it looks more like a 
workaround than a true fix for the underlying bug.

	Ingo

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [tip:x86/pat] x86, CPA: Change idmap attribute before ioremap attribute setup
  2009-04-09 21:26 ` [patch 1/6] x86, CPA: Change idmap attribute before ioremap attribute setup venkatesh.pallipadi
@ 2009-04-10 12:33   ` Suresh Siddha
  0 siblings, 0 replies; 18+ messages in thread
From: Suresh Siddha @ 2009-04-10 12:33 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, venkatesh.pallipadi, suresh.b.siddha,
	tglx, mingo

Commit-ID:  43a432b1559798d33970261f710030f787770231
Gitweb:     http://git.kernel.org/tip/43a432b1559798d33970261f710030f787770231
Author:     Suresh Siddha <suresh.b.siddha@intel.com>
AuthorDate: Thu, 9 Apr 2009 14:26:47 -0700
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Fri, 10 Apr 2009 13:55:46 +0200

x86, CPA: Change idmap attribute before ioremap attribute setup

Change the identity mapping to the requested attribute first, before
we set up the virtual memory mapping with the new requested attribute.

This makes sure that there is no window during which the identity-mapped
attribute may disagree with the ioremap range's attribute type.

This also avoids doing cpa on the ioremap'ed address twice (first in
ioremap_page_range and then in ioremap_change_attr using vaddr), and
should improve ioremap performance a bit.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
LKML-Reference: <20090409212708.373330000@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 arch/x86/mm/ioremap.c |    7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 0dfa09d..329387e 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -280,15 +280,16 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 		return NULL;
 	area->phys_addr = phys_addr;
 	vaddr = (unsigned long) area->addr;
-	if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot)) {
+
+	if (kernel_map_sync_memtype(phys_addr, size, prot_val)) {
 		free_memtype(phys_addr, phys_addr + size);
 		free_vm_area(area);
 		return NULL;
 	}
 
-	if (ioremap_change_attr(vaddr, size, prot_val) < 0) {
+	if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot)) {
 		free_memtype(phys_addr, phys_addr + size);
-		vunmap(area->addr);
+		free_vm_area(area);
 		return NULL;
 	}
 

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [tip:x86/pat] x86, PAT: Change order of cpa and free in set_memory_wb
  2009-04-09 21:26 ` [patch 2/6] x86, PAT: Change order of cpa and free in set_memory_wb venkatesh.pallipadi
@ 2009-04-10 12:34   ` venkatesh.pallipadi
  0 siblings, 0 replies; 18+ messages in thread
From: venkatesh.pallipadi @ 2009-04-10 12:34 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, venkatesh.pallipadi, suresh.b.siddha,
	tglx, mingo

Commit-ID:  a5593e0b329a14dea41ea173380dbf1533de2bd2
Gitweb:     http://git.kernel.org/tip/a5593e0b329a14dea41ea173380dbf1533de2bd2
Author:     venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>
AuthorDate: Thu, 9 Apr 2009 14:26:48 -0700
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Fri, 10 Apr 2009 13:55:46 +0200

x86, PAT: Change order of cpa and free in set_memory_wb

To be free of aliasing due to races, the set_memory_* interfaces should
follow this ordering: reserve the memtype, change the pages to UC/WC,
change them back to WB, and only then free the reservation.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <20090409212708.512280000@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 arch/x86/mm/pageattr.c |   11 +++++++----
 1 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index d71e1b6..d487eaa 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1021,15 +1021,19 @@ int _set_memory_wb(unsigned long addr, int numpages)
 
 int set_memory_wb(unsigned long addr, int numpages)
 {
+	int ret = _set_memory_wb(addr, numpages);
 	free_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE);
-
-	return _set_memory_wb(addr, numpages);
+	return ret;
 }
 EXPORT_SYMBOL(set_memory_wb);
 
 int set_memory_array_wb(unsigned long *addr, int addrinarray)
 {
 	int i;
+	int ret;
+
+	ret = change_page_attr_clear(addr, addrinarray,
+				      __pgprot(_PAGE_CACHE_MASK), 1);
 
 	for (i = 0; i < addrinarray; i++) {
 		unsigned long start = __pa(addr[i]);
@@ -1042,8 +1046,7 @@ int set_memory_array_wb(unsigned long *addr, int addrinarray)
 		}
 		free_memtype(start, end);
 	}
-	return change_page_attr_clear(addr, addrinarray,
-				      __pgprot(_PAGE_CACHE_MASK), 1);
+	return ret;
 }
 EXPORT_SYMBOL(set_memory_array_wb);
 

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [tip:x86/pat] x86, PAT: Handle faults cleanly in set_memory_ APIs
  2009-04-09 21:26 ` [patch 3/6] x86, PAT: Handle faults cleanly in set_memory_ APIs venkatesh.pallipadi
@ 2009-04-10 12:34   ` venkatesh.pallipadi
  0 siblings, 0 replies; 18+ messages in thread
From: venkatesh.pallipadi @ 2009-04-10 12:34 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, venkatesh.pallipadi, suresh.b.siddha,
	tglx, mingo

Commit-ID:  9fa3ab390abfc8b49fc0dd7c845b0ad224ec429f
Gitweb:     http://git.kernel.org/tip/9fa3ab390abfc8b49fc0dd7c845b0ad224ec429f
Author:     venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>
AuthorDate: Thu, 9 Apr 2009 14:26:49 -0700
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Fri, 10 Apr 2009 13:55:47 +0200

x86, PAT: Handle faults cleanly in set_memory_ APIs

Handle faults and do proper cleanups in the set_memory_*() functions. In
some cases, these functions were not doing a proper free on failure paths.

With the change to tracking the memtype of RAM pages in struct page
instead of on the PAT list, we no longer need the changes in commit
c5e147. This patch reverts that change.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <20090409212708.653222000@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 arch/x86/mm/pageattr.c |  113 +++++++++++++++++++++++++++--------------------
 1 files changed, 65 insertions(+), 48 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index d487eaa..985eef8 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -945,52 +945,56 @@ int _set_memory_uc(unsigned long addr, int numpages)
 
 int set_memory_uc(unsigned long addr, int numpages)
 {
+	int ret;
+
 	/*
 	 * for now UC MINUS. see comments in ioremap_nocache()
 	 */
-	if (reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
-			    _PAGE_CACHE_UC_MINUS, NULL))
-		return -EINVAL;
+	ret = reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
+			    _PAGE_CACHE_UC_MINUS, NULL);
+	if (ret)
+		goto out_err;
+
+	ret = _set_memory_uc(addr, numpages);
+	if (ret)
+		goto out_free;
+
+	return 0;
 
-	return _set_memory_uc(addr, numpages);
+out_free:
+	free_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE);
+out_err:
+	return ret;
 }
 EXPORT_SYMBOL(set_memory_uc);
 
 int set_memory_array_uc(unsigned long *addr, int addrinarray)
 {
-	unsigned long start;
-	unsigned long end;
-	int i;
+	int i, j;
+	int ret;
+
 	/*
 	 * for now UC MINUS. see comments in ioremap_nocache()
 	 */
 	for (i = 0; i < addrinarray; i++) {
-		start = __pa(addr[i]);
-		for (end = start + PAGE_SIZE; i < addrinarray - 1; end += PAGE_SIZE) {
-			if (end != __pa(addr[i + 1]))
-				break;
-			i++;
-		}
-		if (reserve_memtype(start, end, _PAGE_CACHE_UC_MINUS, NULL))
-			goto out;
+		ret = reserve_memtype(__pa(addr[i]), __pa(addr[i]) + PAGE_SIZE,
+					_PAGE_CACHE_UC_MINUS, NULL);
+		if (ret)
+			goto out_free;
 	}
 
-	return change_page_attr_set(addr, addrinarray,
+	ret = change_page_attr_set(addr, addrinarray,
 				    __pgprot(_PAGE_CACHE_UC_MINUS), 1);
-out:
-	for (i = 0; i < addrinarray; i++) {
-		unsigned long tmp = __pa(addr[i]);
-
-		if (tmp == start)
-			break;
-		for (end = tmp + PAGE_SIZE; i < addrinarray - 1; end += PAGE_SIZE) {
-			if (end != __pa(addr[i + 1]))
-				break;
-			i++;
-		}
-		free_memtype(tmp, end);
-	}
-	return -EINVAL;
+	if (ret)
+		goto out_free;
+
+	return 0;
+
+out_free:
+	for (j = 0; j < i; j++)
+		free_memtype(__pa(addr[j]), __pa(addr[j]) + PAGE_SIZE);
+
+	return ret;
 }
 EXPORT_SYMBOL(set_memory_array_uc);
 
@@ -1002,14 +1006,26 @@ int _set_memory_wc(unsigned long addr, int numpages)
 
 int set_memory_wc(unsigned long addr, int numpages)
 {
+	int ret;
+
 	if (!pat_enabled)
 		return set_memory_uc(addr, numpages);
 
-	if (reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
-		_PAGE_CACHE_WC, NULL))
-		return -EINVAL;
+	ret = reserve_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE,
+		_PAGE_CACHE_WC, NULL);
+	if (ret)
+		goto out_err;
 
-	return _set_memory_wc(addr, numpages);
+	ret = _set_memory_wc(addr, numpages);
+	if (ret)
+		goto out_free;
+
+	return 0;
+
+out_free:
+	free_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE);
+out_err:
+	return ret;
 }
 EXPORT_SYMBOL(set_memory_wc);
 
@@ -1021,9 +1037,14 @@ int _set_memory_wb(unsigned long addr, int numpages)
 
 int set_memory_wb(unsigned long addr, int numpages)
 {
-	int ret = _set_memory_wb(addr, numpages);
+	int ret;
+
+	ret = _set_memory_wb(addr, numpages);
+	if (ret)
+		return ret;
+
 	free_memtype(__pa(addr), __pa(addr) + numpages * PAGE_SIZE);
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(set_memory_wb);
 
@@ -1034,19 +1055,13 @@ int set_memory_array_wb(unsigned long *addr, int addrinarray)
 
 	ret = change_page_attr_clear(addr, addrinarray,
 				      __pgprot(_PAGE_CACHE_MASK), 1);
+	if (ret)
+		return ret;
 
-	for (i = 0; i < addrinarray; i++) {
-		unsigned long start = __pa(addr[i]);
-		unsigned long end;
+	for (i = 0; i < addrinarray; i++)
+		free_memtype(__pa(addr[i]), __pa(addr[i]) + PAGE_SIZE);
 
-		for (end = start + PAGE_SIZE; i < addrinarray - 1; end += PAGE_SIZE) {
-			if (end != __pa(addr[i + 1]))
-				break;
-			i++;
-		}
-		free_memtype(start, end);
-	}
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(set_memory_array_wb);
 
@@ -1139,6 +1154,8 @@ int set_pages_array_wb(struct page **pages, int addrinarray)
 
 	retval = cpa_clear_pages_array(pages, addrinarray,
 			__pgprot(_PAGE_CACHE_MASK));
+	if (retval)
+		return retval;
 
 	for (i = 0; i < addrinarray; i++) {
 		start = (unsigned long)page_address(pages[i]);
@@ -1146,7 +1163,7 @@ int set_pages_array_wb(struct page **pages, int addrinarray)
 		free_memtype(start, end);
 	}
 
-	return retval;
+	return 0;
 }
 EXPORT_SYMBOL(set_pages_array_wb);
 

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [tip:x86/pat] x86, PAT: Changing memtype to WC ensuring no WB alias
  2009-04-09 21:26 ` [patch 4/6] x86, PAT: Changing memtype to WC ensuring no WB alias venkatesh.pallipadi
@ 2009-04-10 12:34   ` venkatesh.pallipadi
  0 siblings, 0 replies; 18+ messages in thread
From: venkatesh.pallipadi @ 2009-04-10 12:34 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, venkatesh.pallipadi, suresh.b.siddha,
	tglx, mingo

Commit-ID:  3869c4aa18835c8c61b44bd0f3ace36e9d3b5bd0
Gitweb:     http://git.kernel.org/tip/3869c4aa18835c8c61b44bd0f3ace36e9d3b5bd0
Author:     venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com>
AuthorDate: Thu, 9 Apr 2009 14:26:50 -0700
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Fri, 10 Apr 2009 13:55:47 +0200

x86, PAT: Changing memtype to WC ensuring no WB alias

As per the SDM, there should not be any aliasing of a WC mapping with any
cacheable type across CPUs. That is, if one CPU is changing the identity
map memtype to WC, no other CPU should, at the time of this change, have
a TLB entry for this page that carries a WB attribute. The SDM suggests
making the page not present, but then we would have to handle any page
faults that can potentially happen due to these pages being not present.

Another way to deal with this without having any WB mapping is to change
the page first to UC and then to WC. This ensures that we meet the SDM
requirement of no cacheable alias to a WC page, and it has the same or
lower overhead than marking the page not present and making it present
later.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
LKML-Reference: <20090409212708.797481000@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 arch/x86/mm/pageattr.c |    9 ++++++++-
 1 files changed, 8 insertions(+), 1 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 985eef8..797f9f1 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1000,8 +1000,15 @@ EXPORT_SYMBOL(set_memory_array_uc);
 
 int _set_memory_wc(unsigned long addr, int numpages)
 {
-	return change_page_attr_set(&addr, numpages,
+	int ret;
+	ret = change_page_attr_set(&addr, numpages,
+				    __pgprot(_PAGE_CACHE_UC_MINUS), 0);
+
+	if (!ret) {
+		ret = change_page_attr_set(&addr, numpages,
 				    __pgprot(_PAGE_CACHE_WC), 0);
+	}
+	return ret;
 }
 
 int set_memory_wc(unsigned long addr, int numpages)

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [tip:x86/pat] x86, PAT: Consolidate code in pat_x_mtrr_type() and reserve_memtype()
  2009-04-09 21:26 ` [patch 5/6] x86, PAT: Consolidate code in pat_x_mtrr_type() and reserve_memtype() venkatesh.pallipadi
@ 2009-04-10 12:34   ` Suresh Siddha
  0 siblings, 0 replies; 18+ messages in thread
From: Suresh Siddha @ 2009-04-10 12:34 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, venkatesh.pallipadi, suresh.b.siddha,
	tglx, mingo

Commit-ID:  b6ff32d9aaeeeecf98f9a852d715569183585312
Gitweb:     http://git.kernel.org/tip/b6ff32d9aaeeeecf98f9a852d715569183585312
Author:     Suresh Siddha <suresh.b.siddha@intel.com>
AuthorDate: Thu, 9 Apr 2009 14:26:51 -0700
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Fri, 10 Apr 2009 13:55:48 +0200

x86, PAT: Consolidate code in pat_x_mtrr_type() and reserve_memtype()

Fix pat_x_mtrr_type() to use UC_MINUS when the MTRR type is UC. This is
consistent with ioremap() and ioremap_nocache(), which use UC_MINUS.

Consolidate the code so that reserve_memtype() also uses
pat_x_mtrr_type() when the caller doesn't specify any special (non-WB)
attribute.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
LKML-Reference: <20090409212708.939936000@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 arch/x86/mm/ioremap.c |    3 ++-
 arch/x86/mm/pat.c     |   35 +++++++++++++----------------------
 2 files changed, 15 insertions(+), 23 deletions(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 329387e..d4c4b2c 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -375,7 +375,8 @@ static void __iomem *ioremap_default(resource_size_t phys_addr,
 	 * - UC_MINUS for non-WB-able memory with no other conflicting mappings
 	 * - Inherit from confliting mappings otherwise
 	 */
-	err = reserve_memtype(phys_addr, phys_addr + size, -1, &flags);
+	err = reserve_memtype(phys_addr, phys_addr + size,
+				_PAGE_CACHE_WB, &flags);
 	if (err < 0)
 		return NULL;
 
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 640339e..8d3de95 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -182,10 +182,10 @@ static unsigned long pat_x_mtrr_type(u64 start, u64 end, unsigned long req_type)
 		u8 mtrr_type;
 
 		mtrr_type = mtrr_type_lookup(start, end);
-		if (mtrr_type == MTRR_TYPE_UNCACHABLE)
-			return _PAGE_CACHE_UC;
-		if (mtrr_type == MTRR_TYPE_WRCOMB)
-			return _PAGE_CACHE_WC;
+		if (mtrr_type != MTRR_TYPE_WRBACK)
+			return _PAGE_CACHE_UC_MINUS;
+
+		return _PAGE_CACHE_WB;
 	}
 
 	return req_type;
@@ -352,23 +352,13 @@ int reserve_memtype(u64 start, u64 end, unsigned long req_type,
 		return 0;
 	}
 
-	if (req_type == -1) {
-		/*
-		 * Call mtrr_lookup to get the type hint. This is an
-		 * optimization for /dev/mem mmap'ers into WB memory (BIOS
-		 * tools and ACPI tools). Use WB request for WB memory and use
-		 * UC_MINUS otherwise.
-		 */
-		u8 mtrr_type = mtrr_type_lookup(start, end);
-
-		if (mtrr_type == MTRR_TYPE_WRBACK)
-			actual_type = _PAGE_CACHE_WB;
-		else
-			actual_type = _PAGE_CACHE_UC_MINUS;
-	} else {
-		actual_type = pat_x_mtrr_type(start, end,
-					      req_type & _PAGE_CACHE_MASK);
-	}
+	/*
+	 * Call mtrr_lookup to get the type hint. This is an
+	 * optimization for /dev/mem mmap'ers into WB memory (BIOS
+	 * tools and ACPI tools). Use WB request for WB memory and use
+	 * UC_MINUS otherwise.
+	 */
+	actual_type = pat_x_mtrr_type(start, end, req_type & _PAGE_CACHE_MASK);
 
 	if (new_type)
 		*new_type = actual_type;
@@ -587,7 +577,8 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
 	if (flags != -1) {
 		retval = reserve_memtype(offset, offset + size, flags, NULL);
 	} else {
-		retval = reserve_memtype(offset, offset + size, -1, &flags);
+		retval = reserve_memtype(offset, offset + size,
+					_PAGE_CACHE_WB, &flags);
 	}
 
 	if (retval < 0)

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [tip:x86/pat] x86, PAT: Remove duplicate memtype reserve in devmem mmap
  2009-04-09 21:26 ` [patch 6/6] x86, PAT: Remove duplicate memtype reserve in devmem mmap venkatesh.pallipadi
@ 2009-04-10 12:34   ` Suresh Siddha
  0 siblings, 0 replies; 18+ messages in thread
From: Suresh Siddha @ 2009-04-10 12:34 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, venkatesh.pallipadi, suresh.b.siddha,
	tglx, mingo

Commit-ID:  0c3c8a18361a636069f5a5d9d0d0f9c2124e6b94
Gitweb:     http://git.kernel.org/tip/0c3c8a18361a636069f5a5d9d0d0f9c2124e6b94
Author:     Suresh Siddha <suresh.b.siddha@intel.com>
AuthorDate: Thu, 9 Apr 2009 14:26:52 -0700
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Fri, 10 Apr 2009 13:55:48 +0200

x86, PAT: Remove duplicate memtype reserve in devmem mmap

The /dev/mem mmap code has been doing its own memtype reserve/free for a
while now. Recently we added memtype tracking in remap_pfn_range(), and
/dev/mem mmap uses it indirectly. So we no longer need separate tracking
in the /dev/mem code. That means another ~100 lines of code removed :-).

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
LKML-Reference: <20090409212709.085210000@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 arch/x86/include/asm/pat.h |    4 ---
 arch/x86/mm/pat.c          |   60 +------------------------------------------
 drivers/char/mem.c         |   27 -------------------
 3 files changed, 2 insertions(+), 89 deletions(-)

diff --git a/arch/x86/include/asm/pat.h b/arch/x86/include/asm/pat.h
index 2cd07b9..7af14e5 100644
--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -18,9 +18,5 @@ extern int free_memtype(u64 start, u64 end);
 
 extern int kernel_map_sync_memtype(u64 base, unsigned long size,
 		unsigned long flag);
-extern void map_devmem(unsigned long pfn, unsigned long size,
-		       struct pgprot vma_prot);
-extern void unmap_devmem(unsigned long pfn, unsigned long size,
-			 struct pgprot vma_prot);
 
 #endif /* _ASM_X86_PAT_H */
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index 8d3de95..cc5e0e2 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -536,9 +536,7 @@ static inline int range_is_allowed(unsigned long pfn, unsigned long size)
 int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
 				unsigned long size, pgprot_t *vma_prot)
 {
-	u64 offset = ((u64) pfn) << PAGE_SHIFT;
-	unsigned long flags = -1;
-	int retval;
+	unsigned long flags = _PAGE_CACHE_WB;
 
 	if (!range_is_allowed(pfn, size))
 		return 0;
@@ -566,65 +564,11 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
 	}
 #endif
 
-	/*
-	 * With O_SYNC, we can only take UC_MINUS mapping. Fail if we cannot.
-	 *
-	 * Without O_SYNC, we want to get
-	 * - WB for WB-able memory and no other conflicting mappings
-	 * - UC_MINUS for non-WB-able memory with no other conflicting mappings
-	 * - Inherit from confliting mappings otherwise
-	 */
-	if (flags != -1) {
-		retval = reserve_memtype(offset, offset + size, flags, NULL);
-	} else {
-		retval = reserve_memtype(offset, offset + size,
-					_PAGE_CACHE_WB, &flags);
-	}
-
-	if (retval < 0)
-		return 0;
-
-	if (((pfn < max_low_pfn_mapped) ||
-	     (pfn >= (1UL<<(32 - PAGE_SHIFT)) && pfn < max_pfn_mapped)) &&
-	    ioremap_change_attr((unsigned long)__va(offset), size, flags) < 0) {
-		free_memtype(offset, offset + size);
-		printk(KERN_INFO
-		"%s:%d /dev/mem ioremap_change_attr failed %s for %Lx-%Lx\n",
-			current->comm, current->pid,
-			cattr_name(flags),
-			offset, (unsigned long long)(offset + size));
-		return 0;
-	}
-
 	*vma_prot = __pgprot((pgprot_val(*vma_prot) & ~_PAGE_CACHE_MASK) |
 			     flags);
 	return 1;
 }
 
-void map_devmem(unsigned long pfn, unsigned long size, pgprot_t vma_prot)
-{
-	unsigned long want_flags = (pgprot_val(vma_prot) & _PAGE_CACHE_MASK);
-	u64 addr = (u64)pfn << PAGE_SHIFT;
-	unsigned long flags;
-
-	reserve_memtype(addr, addr + size, want_flags, &flags);
-	if (flags != want_flags) {
-		printk(KERN_INFO
-		"%s:%d /dev/mem expected mapping type %s for %Lx-%Lx, got %s\n",
-			current->comm, current->pid,
-			cattr_name(want_flags),
-			addr, (unsigned long long)(addr + size),
-			cattr_name(flags));
-	}
-}
-
-void unmap_devmem(unsigned long pfn, unsigned long size, pgprot_t vma_prot)
-{
-	u64 addr = (u64)pfn << PAGE_SHIFT;
-
-	free_memtype(addr, addr + size);
-}
-
 /*
  * Change the memory type for the physial address range in kernel identity
  * mapping space if that range is a part of identity map.
@@ -662,8 +606,8 @@ static int reserve_pfn_range(u64 paddr, unsigned long size, pgprot_t *vma_prot,
 {
 	int is_ram = 0;
 	int ret;
-	unsigned long flags;
 	unsigned long want_flags = (pgprot_val(*vma_prot) & _PAGE_CACHE_MASK);
+	unsigned long flags = want_flags;
 
 	is_ram = pat_pagerange_is_ram(paddr, paddr + size);
 
diff --git a/drivers/char/mem.c b/drivers/char/mem.c
index 3586b3b..8f05c38 100644
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -301,33 +301,7 @@ static inline int private_mapping_ok(struct vm_area_struct *vma)
 }
 #endif
 
-void __attribute__((weak))
-map_devmem(unsigned long pfn, unsigned long len, pgprot_t prot)
-{
-	/* nothing. architectures can override. */
-}
-
-void __attribute__((weak))
-unmap_devmem(unsigned long pfn, unsigned long len, pgprot_t prot)
-{
-	/* nothing. architectures can override. */
-}
-
-static void mmap_mem_open(struct vm_area_struct *vma)
-{
-	map_devmem(vma->vm_pgoff,  vma->vm_end - vma->vm_start,
-			vma->vm_page_prot);
-}
-
-static void mmap_mem_close(struct vm_area_struct *vma)
-{
-	unmap_devmem(vma->vm_pgoff,  vma->vm_end - vma->vm_start,
-			vma->vm_page_prot);
-}
-
 static struct vm_operations_struct mmap_mem_ops = {
-	.open  = mmap_mem_open,
-	.close = mmap_mem_close,
 #ifdef CONFIG_HAVE_IOREMAP_PROT
 	.access = generic_access_phys
 #endif
@@ -362,7 +336,6 @@ static int mmap_mem(struct file * file, struct vm_area_struct * vma)
 			    vma->vm_pgoff,
 			    size,
 			    vma->vm_page_prot)) {
-		unmap_devmem(vma->vm_pgoff, size, vma->vm_page_prot);
 		return -EAGAIN;
 	}
 	return 0;

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes
  2009-04-10 11:53 ` [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes Ingo Molnar
@ 2009-04-10 17:31   ` Pallipadi, Venkatesh
  0 siblings, 0 replies; 18+ messages in thread
From: Pallipadi, Venkatesh @ 2009-04-10 17:31 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: tglx, hpa, linux-kernel, Siddha, Suresh B

On Fri, 2009-04-10 at 04:53 -0700, Ingo Molnar wrote:
> * venkatesh.pallipadi@intel.com <venkatesh.pallipadi@intel.com> wrote:
> 
> > This patchset contains cleanups and minor bug fixes in x86 PAT and 
> > CPA related code. The bugs were mostly found by code inspection. 
> > There should not be any functionality changes with this patchset.
> > 
> > Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
> > Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
> 
> Great, this series looks really nice!
> 
> Does this solve the problems reported in the "2.6.29 git master and 
> PAT problems" thread and addressed via an earlier patch of yours:
> 
>  Subject: [PATCH] x86, PAT: Remove page granularity tracking for vm_insert_pfn maps

> I am worried about this particular patch - it looks more like a 
> workaround than a true fix for the underlying bug.
> 

No. This patchset does not fix that problem. The "Remove page granularity
tracking" patch is still needed with this patchset.

Yes. That patch is more of a workaround, or a direction change in how we
want to handle vm_insert_pfn with PAT.

When we added the page-level tracking, there were no in-kernel users of
vm_insert_pfn() and we did not consider the usage where the same address
space/vma is used to map different physical addresses over time, with
unmap_mapping_range() followed by re-inserting different pfns into the
same vma. That is what the X i915 driver is doing now.

You are right in saying that the patch does not handle the bug behind the
"freeing invalid memtype" errors. Those happen due to unbalanced
reserve/free (more frees than reserves) in the code path that tracks
vm_insert_pfn pages. I couldn't really reproduce the unbalanced frees as
reported in that bug report. But that bug report also points to another
issue: thousands of single-page UC_MINUS or WC mappings from the X
driver. Even though it is not a functionality problem, tracking thousands
of memtypes will have a major performance impact. We don't really have to
track memtypes of such small chunks, and the driver wants WC (or
UC_MINUS) for all those mappings anyway. So the plan is to stop tracking
individual pages. That automatically takes care of the unbalanced
reserve/free problem.

We are adding a new API where a driver can ask for a memtype for a big
address range, something like the entire PCI map range, and then use that
type for each individual page that it may map on demand. This way we
don't have to keep track of the individual pages that the driver may map
and unmap. This is still in the works; I should have a patch for it soon
(a week or so).
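
Purely as illustration, the interface described above might look
something like this; pat_reserve_range() and example_reserve_aperture()
are hypothetical placeholder names, not an existing kernel API:

        static int example_reserve_aperture(u64 bar_base, u64 bar_size)
        {
                /* one WC reservation for the whole aperture */
                int ret = pat_reserve_range(bar_base, bar_base + bar_size,
                                            _PAGE_CACHE_WC);
                if (ret)
                        return ret;

                /*
                 * Individual pages inside the range can then be mapped
                 * with vm_insert_pfn() and unmapped on demand, with no
                 * per-page memtype bookkeeping.
                 */
                return 0;
        }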

Thanks,
Venki



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes
  2009-04-09 21:26 [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes venkatesh.pallipadi
                   ` (6 preceding siblings ...)
  2009-04-10 11:53 ` [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes Ingo Molnar
@ 2009-04-10 20:41 ` Eric Anholt
  2009-04-11  7:00   ` Ingo Molnar
  7 siblings, 1 reply; 18+ messages in thread
From: Eric Anholt @ 2009-04-10 20:41 UTC (permalink / raw)
  To: venkatesh.pallipadi; +Cc: mingo, tglx, hpa, linux-kernel, Suresh Siddha

[-- Attachment #1: Type: text/plain, Size: 953 bytes --]

On Thu, 2009-04-09 at 14:26 -0700, venkatesh.pallipadi@intel.com wrote:
> This patchset contains cleanups and minor bug fixes in x86 PAT and CPA
> related code. The bugs were mostly found by code inspection. There
> should not be any functionality changes with this patchset.

I've been curious, what are you using to test PAT changes for
regressions?  I've got some regression tests at:

http://cgit.freedesktop.org/xorg/app/intel-gpu-tools/

Requires KMS enabled and master of libdrm, but after that you can sudo
make check, and it tests execution of several DRM paths without
requiring X.  In benchmarks/ there are a few microbenchmarks of various
mapping types, which has been useful in making sure that we're ending up
with the right PTEs.

It should work on all Intel GPUs from the 830 through the GM45, though I
haven't tested pre-915 much.

-- 
Eric Anholt
eric@anholt.net                         eric.anholt@intel.com




^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes
  2009-04-10 20:41 ` Eric Anholt
@ 2009-04-11  7:00   ` Ingo Molnar
  2009-04-13 20:11     ` Eric Anholt
  0 siblings, 1 reply; 18+ messages in thread
From: Ingo Molnar @ 2009-04-11  7:00 UTC (permalink / raw)
  To: Eric Anholt; +Cc: venkatesh.pallipadi, tglx, hpa, linux-kernel, Suresh Siddha


* Eric Anholt <eric@anholt.net> wrote:

> On Thu, 2009-04-09 at 14:26 -0700, venkatesh.pallipadi@intel.com wrote:
> > This patchset contains cleanups and minor bug fixes in x86 PAT and CPA
> > related code. The bugs were mostly found by code inspection. There
> > should not be any functionality changes with this patchset.
> 
> I've been curious, what are you using to test PAT changes for 
> regressions?  I've got some regression tests at:
> 
> http://cgit.freedesktop.org/xorg/app/intel-gpu-tools/
> 
> Requires KMS enabled and master of libdrm, but after that you can 
> sudo make check, and it tests execution of several DRM paths 
> without requiring X.  In benchmarks/ there are a few 
> microbenchmarks of various mapping types, which has been useful in 
> making sure that we're ending up with the right PTEs.

Looks really nice! Regarding libdrm, is there a version cutoff from 
where it is expected to work just fine? I've got this version: 
libdrm-2.4.6-3.fc11.x86_64.

	Ingo

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes
  2009-04-11  7:00   ` Ingo Molnar
@ 2009-04-13 20:11     ` Eric Anholt
  0 siblings, 0 replies; 18+ messages in thread
From: Eric Anholt @ 2009-04-13 20:11 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: venkatesh.pallipadi, tglx, hpa, linux-kernel, Suresh Siddha

[-- Attachment #1: Type: text/plain, Size: 1356 bytes --]

On Sat, 2009-04-11 at 09:00 +0200, Ingo Molnar wrote:
> * Eric Anholt <eric@anholt.net> wrote:
> 
> > On Thu, 2009-04-09 at 14:26 -0700, venkatesh.pallipadi@intel.com wrote:
> > > This patchset contains cleanups and minor bug fixes in x86 PAT and CPA
> > > related code. The bugs were mostly found by code inspection. There
> > > should not be any functionality changes with this patchset.
> > 
> > I've been curious, what are you using to test PAT changes for 
> > regressions?  I've got some regression tests at:
> > 
> > http://cgit.freedesktop.org/xorg/app/intel-gpu-tools/
> > 
> > Requires KMS enabled and master of libdrm, but after that you can 
> > sudo make check, and it tests execution of several DRM paths 
> > without requiring X.  In benchmarks/ there are a few 
> > microbenchmarks of various mapping types, which has been useful in 
> > making sure that we're ending up with the right PTEs.
> 
> Looks really nice! Regarding libdrm, is there a version cutoff from 
> where it is expected to work just fine? I've got this version: 
> libdrm-2.4.6-3.fc11.x86_64.

I should keep the pkgconfig check up to date, but in the worst case it
doesn't compile and you go get new libdrm.  The 2.4.6 check right now
appears to be correct.

-- 
Eric Anholt
eric@anholt.net                         eric.anholt@intel.com




^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2009-04-13 20:12 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-04-09 21:26 [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes venkatesh.pallipadi
2009-04-09 21:26 ` [patch 1/6] x86, CPA: Change idmap attribute before ioremap attribute setup venkatesh.pallipadi
2009-04-10 12:33   ` [tip:x86/pat] " Suresh Siddha
2009-04-09 21:26 ` [patch 2/6] x86, PAT: Change order of cpa and free in set_memory_wb venkatesh.pallipadi
2009-04-10 12:34   ` [tip:x86/pat] " venkatesh.pallipadi
2009-04-09 21:26 ` [patch 3/6] x86, PAT: Handle faults cleanly in set_memory_ APIs venkatesh.pallipadi
2009-04-10 12:34   ` [tip:x86/pat] " venkatesh.pallipadi
2009-04-09 21:26 ` [patch 4/6] x86, PAT: Changing memtype to WC ensuring no WB alias venkatesh.pallipadi
2009-04-10 12:34   ` [tip:x86/pat] " venkatesh.pallipadi
2009-04-09 21:26 ` [patch 5/6] x86, PAT: Consolidate code in pat_x_mtrr_type() and reserve_memtype() venkatesh.pallipadi
2009-04-10 12:34   ` [tip:x86/pat] " Suresh Siddha
2009-04-09 21:26 ` [patch 6/6] x86, PAT: Remove duplicate memtype reserve in devmem mmap venkatesh.pallipadi
2009-04-10 12:34   ` [tip:x86/pat] " Suresh Siddha
2009-04-10 11:53 ` [patch 0/6] x86, PAT, CPA: Cleanups and minor bug fixes Ingo Molnar
2009-04-10 17:31   ` Pallipadi, Venkatesh
2009-04-10 20:41 ` Eric Anholt
2009-04-11  7:00   ` Ingo Molnar
2009-04-13 20:11     ` Eric Anholt
