* [PATCH v4 0/2] mm/highmem: make kmap cache coloring aware
@ 2014-08-02  1:11 ` Max Filippov
  0 siblings, 0 replies; 24+ messages in thread
From: Max Filippov @ 2014-08-02  1:11 UTC (permalink / raw)
  To: linux-xtensa
  Cc: Chris Zankel, Marc Gauthier, linux-mm, linux-arch, linux-mips,
	linux-kernel, David Rientjes, Andrew Morton, Leonid Yegoshin,
	Steven Hill, Max Filippov

Hi,

this series adds mapping color control to the generic kmap code, allowing
architectures with an aliasing VIPT cache to use high memory. It also
includes an example use of the new interface by xtensa.

Changes since v3:
- drop #include <asm/highmem.h> from mm/highmem.c as it's done in
  linux/highmem.h;
- add 'User-visible effect' section to changelog.

Max Filippov (2):
  mm/highmem: make kmap cache coloring aware
  xtensa: support aliasing cache in kmap

 arch/xtensa/include/asm/highmem.h | 40 +++++++++++++++++-
 arch/xtensa/mm/highmem.c          | 18 ++++++++
 mm/highmem.c                      | 86 ++++++++++++++++++++++++++++++++++-----
 3 files changed, 131 insertions(+), 13 deletions(-)

-- 
1.8.1.4




* [PATCH v4 1/2] mm/highmem: make kmap cache coloring aware
  2014-08-02  1:11 ` Max Filippov
@ 2014-08-02  1:11   ` Max Filippov
  -1 siblings, 0 replies; 24+ messages in thread
From: Max Filippov @ 2014-08-02  1:11 UTC (permalink / raw)
  To: linux-xtensa
  Cc: Chris Zankel, Marc Gauthier, linux-mm, linux-arch, linux-mips,
	linux-kernel, David Rientjes, Andrew Morton, Leonid Yegoshin,
	Steven Hill, Max Filippov

User-visible effect:
Architectures that choose this method of maintaining cache coherency
(currently MIPS and xtensa) can use high memory on cores with an
aliasing data cache. Without this fix such architectures cannot use
high memory (for xtensa this means that at most 128 MBytes of
physical memory is usable).

The problem:
A VIPT cache with a way size larger than the MMU page size may suffer
from an aliasing problem: a single physical address accessed via
different virtual addresses may end up in multiple locations in the
cache. Virtual mappings of a physical address that always get cached
in different cache locations are said to have different colors.
L1 caching hardware usually doesn't handle this situation, leaving it
up to software. Software must avoid this situation, as it leads to
data corruption.

What can be done:
One way to handle this is to flush and invalidate the data cache every
time a page mapping changes color. The other way is to always map a
physical page at virtual addresses of a single color. Low memory pages
already have this property. Giving the architecture a way to control
the color of high memory page mappings allows the existing low memory
cache alias handling code to be reused.

How this is done with this patch:
Provide hooks that allow architectures with an aliasing cache to align
the mapping addresses of high pages according to their color. Such
architectures may enforce similar coloring of low- and high-memory page
mappings and reuse existing cache management functions to support
highmem.

This code is based on the implementation of a similar feature for MIPS
by Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
---
Changes v3->v4:
- drop #include <asm/highmem.h> from mm/highmem.c as it's done in
  linux/highmem.h;
- add 'User-visible effect' section to changelog.

Changes v2->v3:
- drop ARCH_PKMAP_COLORING, check whether get_pkmap_color is defined instead;
- add comment stating that arch should place definitions into
  asm/highmem.h, include it directly to mm/highmem.c;
- replace macros with inline functions, change set_pkmap_color to
  get_pkmap_color which better fits inline function model;
- drop get_last_pkmap_nr;
- replace get_next_pkmap_counter with get_pkmap_entries_count, leave
  original counting code;
- introduce get_pkmap_wait_queue_head and make sleeping/waking dependent
  on mapping color;
- move file-scope static variables last_pkmap_nr and pkmap_map_wait into
  get_next_pkmap_nr and get_pkmap_wait_queue_head respectively;
- document new functions;
- expand patch description and change authorship.

Changes v1->v2:
- define set_pkmap_color(pg, cl) as do { } while (0) instead of /* */;
- rename is_no_more_pkmaps to no_more_pkmaps;
- change 'if (count > 0)' to 'if (count)' to better match the original
  code behavior;

 mm/highmem.c | 86 ++++++++++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 75 insertions(+), 11 deletions(-)

diff --git a/mm/highmem.c b/mm/highmem.c
index b32b70c..123bcd3 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -44,6 +44,66 @@ DEFINE_PER_CPU(int, __kmap_atomic_idx);
  */
 #ifdef CONFIG_HIGHMEM
 
+/*
+ * Architecture with aliasing data cache may define the following family of
+ * helper functions in its asm/highmem.h to control cache color of virtual
+ * addresses where physical memory pages are mapped by kmap.
+ */
+#ifndef get_pkmap_color
+
+/*
+ * Determine color of virtual address where the page should be mapped.
+ */
+static inline unsigned int get_pkmap_color(struct page *page)
+{
+	return 0;
+}
+#define get_pkmap_color get_pkmap_color
+
+/*
+ * Get next index for mapping inside PKMAP region for page with given color.
+ */
+static inline unsigned int get_next_pkmap_nr(unsigned int color)
+{
+	static unsigned int last_pkmap_nr;
+
+	last_pkmap_nr = (last_pkmap_nr + 1) & LAST_PKMAP_MASK;
+	return last_pkmap_nr;
+}
+
+/*
+ * Determine if page index inside PKMAP region (pkmap_nr) of given color
+ * has wrapped around PKMAP region end. When this happens an attempt to
+ * flush all unused PKMAP slots is made.
+ */
+static inline int no_more_pkmaps(unsigned int pkmap_nr, unsigned int color)
+{
+	return pkmap_nr == 0;
+}
+
+/*
+ * Get the number of PKMAP entries of the given color. If no free slot is
+ * found after checking that many entries, kmap will sleep waiting for
+ * someone to call kunmap and free PKMAP slot.
+ */
+static inline int get_pkmap_entries_count(unsigned int color)
+{
+	return LAST_PKMAP;
+}
+
+/*
+ * Get head of a wait queue for PKMAP entries of the given color.
+ * Wait queues for different mapping colors should be independent to avoid
+ * unnecessary wakeups caused by freeing of slots of other colors.
+ */
+static inline wait_queue_head_t *get_pkmap_wait_queue_head(unsigned int color)
+{
+	static DECLARE_WAIT_QUEUE_HEAD(pkmap_map_wait);
+
+	return &pkmap_map_wait;
+}
+#endif
+
 unsigned long totalhigh_pages __read_mostly;
 EXPORT_SYMBOL(totalhigh_pages);
 
@@ -68,13 +128,10 @@ unsigned int nr_free_highpages (void)
 }
 
 static int pkmap_count[LAST_PKMAP];
-static unsigned int last_pkmap_nr;
 static  __cacheline_aligned_in_smp DEFINE_SPINLOCK(kmap_lock);
 
 pte_t * pkmap_page_table;
 
-static DECLARE_WAIT_QUEUE_HEAD(pkmap_map_wait);
-
 /*
  * Most architectures have no use for kmap_high_get(), so let's abstract
  * the disabling of IRQ out of the locking in that case to save on a
@@ -161,15 +218,17 @@ static inline unsigned long map_new_virtual(struct page *page)
 {
 	unsigned long vaddr;
 	int count;
+	unsigned int last_pkmap_nr;
+	unsigned int color = get_pkmap_color(page);
 
 start:
-	count = LAST_PKMAP;
+	count = get_pkmap_entries_count(color);
 	/* Find an empty entry */
 	for (;;) {
-		last_pkmap_nr = (last_pkmap_nr + 1) & LAST_PKMAP_MASK;
-		if (!last_pkmap_nr) {
+		last_pkmap_nr = get_next_pkmap_nr(color);
+		if (no_more_pkmaps(last_pkmap_nr, color)) {
 			flush_all_zero_pkmaps();
-			count = LAST_PKMAP;
+			count = get_pkmap_entries_count(color);
 		}
 		if (!pkmap_count[last_pkmap_nr])
 			break;	/* Found a usable entry */
@@ -181,12 +240,14 @@ start:
 		 */
 		{
 			DECLARE_WAITQUEUE(wait, current);
+			wait_queue_head_t *pkmap_map_wait =
+				get_pkmap_wait_queue_head(color);
 
 			__set_current_state(TASK_UNINTERRUPTIBLE);
-			add_wait_queue(&pkmap_map_wait, &wait);
+			add_wait_queue(pkmap_map_wait, &wait);
 			unlock_kmap();
 			schedule();
-			remove_wait_queue(&pkmap_map_wait, &wait);
+			remove_wait_queue(pkmap_map_wait, &wait);
 			lock_kmap();
 
 			/* Somebody else might have mapped it while we slept */
@@ -274,6 +335,8 @@ void kunmap_high(struct page *page)
 	unsigned long nr;
 	unsigned long flags;
 	int need_wakeup;
+	unsigned int color = get_pkmap_color(page);
+	wait_queue_head_t *pkmap_map_wait;
 
 	lock_kmap_any(flags);
 	vaddr = (unsigned long)page_address(page);
@@ -299,13 +362,14 @@ void kunmap_high(struct page *page)
 		 * no need for the wait-queue-head's lock.  Simply
 		 * test if the queue is empty.
 		 */
-		need_wakeup = waitqueue_active(&pkmap_map_wait);
+		pkmap_map_wait = get_pkmap_wait_queue_head(color);
+		need_wakeup = waitqueue_active(pkmap_map_wait);
 	}
 	unlock_kmap_any(flags);
 
 	/* do wake-up, if needed, race-free outside of the spin lock */
 	if (need_wakeup)
-		wake_up(&pkmap_map_wait);
+		wake_up(pkmap_map_wait);
 }
 
 EXPORT_SYMBOL(kunmap_high);
-- 
1.8.1.4



see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v4 2/2] xtensa: support aliasing cache in kmap
  2014-08-02  1:11 ` Max Filippov
@ 2014-08-02  1:11   ` Max Filippov
  -1 siblings, 0 replies; 24+ messages in thread
From: Max Filippov @ 2014-08-02  1:11 UTC (permalink / raw)
  To: linux-xtensa
  Cc: Chris Zankel, Marc Gauthier, linux-mm, linux-arch, linux-mips,
	linux-kernel, David Rientjes, Andrew Morton, Leonid Yegoshin,
	Steven Hill, Max Filippov

Define ARCH_PKMAP_COLORING and provide corresponding macro definitions
on cores with aliasing data cache.

Instead of single last_pkmap_nr maintain an array last_pkmap_nr_arr of
pkmap counters for each page color. Make sure that kmap maps physical
page at virtual address with color matching its physical address.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
---
This patch is only a demonstration of the new interface usage; it depends
on other patches from the xtensa tree. Please don't commit it.

Changes v3->v4:
- none.

Changes v2->v3:
- switch to new function names/prototypes;
- implement get_pkmap_wait_queue_head, add kmap_waitqueues_init.

Changes v1->v2:
- new file.

 arch/xtensa/include/asm/highmem.h | 40 +++++++++++++++++++++++++++++++++++++--
 arch/xtensa/mm/highmem.c          | 18 ++++++++++++++++++
 2 files changed, 56 insertions(+), 2 deletions(-)

diff --git a/arch/xtensa/include/asm/highmem.h b/arch/xtensa/include/asm/highmem.h
index 2653ef5..2c7901e 100644
--- a/arch/xtensa/include/asm/highmem.h
+++ b/arch/xtensa/include/asm/highmem.h
@@ -12,19 +12,55 @@
 #ifndef _XTENSA_HIGHMEM_H
 #define _XTENSA_HIGHMEM_H
 
+#include <linux/wait.h>
 #include <asm/cacheflush.h>
 #include <asm/fixmap.h>
 #include <asm/kmap_types.h>
 #include <asm/pgtable.h>
 
-#define PKMAP_BASE		(FIXADDR_START - PMD_SIZE)
-#define LAST_PKMAP		PTRS_PER_PTE
+#define PKMAP_BASE		((FIXADDR_START - \
+				  (LAST_PKMAP + 1) * PAGE_SIZE) & PMD_MASK)
+#define LAST_PKMAP		(PTRS_PER_PTE * DCACHE_N_COLORS)
 #define LAST_PKMAP_MASK		(LAST_PKMAP - 1)
 #define PKMAP_NR(virt)		(((virt) - PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr)		(PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
 #define kmap_prot		PAGE_KERNEL
 
+#if DCACHE_WAY_SIZE > PAGE_SIZE
+#define get_pkmap_color get_pkmap_color
+static inline int get_pkmap_color(struct page *page)
+{
+	return DCACHE_ALIAS(page_to_phys(page));
+}
+
+extern unsigned int last_pkmap_nr_arr[];
+
+static inline unsigned int get_next_pkmap_nr(unsigned int color)
+{
+	last_pkmap_nr_arr[color] =
+		(last_pkmap_nr_arr[color] + DCACHE_N_COLORS) & LAST_PKMAP_MASK;
+	return last_pkmap_nr_arr[color] + color;
+}
+
+static inline int no_more_pkmaps(unsigned int pkmap_nr, unsigned int color)
+{
+	return pkmap_nr < DCACHE_N_COLORS;
+}
+
+static inline int get_pkmap_entries_count(unsigned int color)
+{
+	return LAST_PKMAP / DCACHE_N_COLORS;
+}
+
+extern wait_queue_head_t pkmap_map_wait_arr[];
+
+static inline wait_queue_head_t *get_pkmap_wait_queue_head(unsigned int color)
+{
+	return pkmap_map_wait_arr + color;
+}
+#endif
+
 extern pte_t *pkmap_page_table;
 
 void *kmap_high(struct page *page);
diff --git a/arch/xtensa/mm/highmem.c b/arch/xtensa/mm/highmem.c
index 466abae..8cfb71e 100644
--- a/arch/xtensa/mm/highmem.c
+++ b/arch/xtensa/mm/highmem.c
@@ -14,6 +14,23 @@
 
 static pte_t *kmap_pte;
 
+#if DCACHE_WAY_SIZE > PAGE_SIZE
+unsigned int last_pkmap_nr_arr[DCACHE_N_COLORS];
+wait_queue_head_t pkmap_map_wait_arr[DCACHE_N_COLORS];
+
+static void __init kmap_waitqueues_init(void)
+{
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(pkmap_map_wait_arr); ++i)
+		init_waitqueue_head(pkmap_map_wait_arr + i);
+}
+#else
+static inline void kmap_waitqueues_init(void)
+{
+}
+#endif
+
 static inline enum fixed_addresses kmap_idx(int type, unsigned long color)
 {
 	return (type + KM_TYPE_NR * smp_processor_id()) * DCACHE_N_COLORS +
@@ -72,4 +89,5 @@ void __init kmap_init(void)
 	/* cache the first kmap pte */
 	kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
 	kmap_pte = kmap_get_fixmap_pte(kmap_vstart);
+	kmap_waitqueues_init();
 }
-- 
1.8.1.4


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 0/2] mm/highmem: make kmap cache coloring aware
  2014-08-02  1:11 ` Max Filippov
@ 2014-08-25 17:16   ` Ralf Baechle
  -1 siblings, 0 replies; 24+ messages in thread
From: Ralf Baechle @ 2014-08-25 17:16 UTC (permalink / raw)
  To: Max Filippov
  Cc: linux-xtensa, Chris Zankel, Marc Gauthier, linux-mm, linux-arch,
	linux-mips, linux-kernel, David Rientjes, Andrew Morton,
	Leonid Yegoshin, Steven Hill

On Sat, Aug 02, 2014 at 05:11:37AM +0400, Max Filippov wrote:

> this series adds mapping color control to the generic kmap code, allowing
> architectures with aliasing VIPT cache to use high memory. There's also
> use example of this new interface by xtensa.

I haven't actually ported this to MIPS but it certainly appears to be
the right framework to get highmem aliases handled on MIPS, too.

Though I still consider increasing PAGE_SIZE to 16k the preferable
solution because it will entirely do away with cache aliases.

Thanks,

  Ralf

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 0/2] mm/highmem: make kmap cache coloring aware
  2014-08-25 17:16   ` Ralf Baechle
@ 2014-08-25 23:55     ` Joshua Kinard
  -1 siblings, 0 replies; 24+ messages in thread
From: Joshua Kinard @ 2014-08-25 23:55 UTC (permalink / raw)
  To: Ralf Baechle, Max Filippov
  Cc: linux-xtensa, Chris Zankel, Marc Gauthier, linux-mm, linux-arch,
	linux-mips, linux-kernel, David Rientjes, Andrew Morton,
	Leonid Yegoshin, Steven Hill

On 08/25/2014 13:16, Ralf Baechle wrote:
> On Sat, Aug 02, 2014 at 05:11:37AM +0400, Max Filippov wrote:
> 
>> this series adds mapping color control to the generic kmap code, allowing
>> architectures with aliasing VIPT cache to use high memory. There's also
>> use example of this new interface by xtensa.
> 
> I haven't actually ported this to MIPS but it certainly appears to be
> the right framework to get highmem aliases handled on MIPS, too.
> 
> Though I still consider increasing PAGE_SIZE to 16k the preferable
> solution because it will entirely do away with cache aliases.

Won't setting PAGE_SIZE to 16k break some existing userlands (o32)?  I use a
4k PAGE_SIZE because the last few times I've tried 16k or 64k, init won't
load (SIGSEGVs or such, which panics the kernel).

-- 
Joshua Kinard
Gentoo/MIPS
kumba@gentoo.org
4096R/D25D95E3 2011-03-28

"The past tempts us, the present confuses us, the future frightens us.  And
our lives slip away, moment by moment, lost in that vast, terrible in-between."

--Emperor Turhan, Centauri Republic

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 0/2] mm/highmem: make kmap cache coloring aware
  2014-08-25 23:55     ` Joshua Kinard
@ 2014-08-26  0:36       ` David Daney
  -1 siblings, 0 replies; 24+ messages in thread
From: David Daney @ 2014-08-26  0:36 UTC (permalink / raw)
  To: Joshua Kinard
  Cc: Ralf Baechle, Max Filippov, linux-xtensa, Chris Zankel,
	Marc Gauthier, linux-mm, linux-arch, linux-mips, linux-kernel,
	David Rientjes, Andrew Morton, Leonid Yegoshin, Steven Hill

On 08/25/2014 04:55 PM, Joshua Kinard wrote:
> On 08/25/2014 13:16, Ralf Baechle wrote:
>> On Sat, Aug 02, 2014 at 05:11:37AM +0400, Max Filippov wrote:
>>
>>> this series adds mapping color control to the generic kmap code, allowing
>>> architectures with aliasing VIPT cache to use high memory. There's also
>>> use example of this new interface by xtensa.
>>
>> I haven't actually ported this to MIPS but it certainly appears to be
>> the right framework to get highmem aliases handled on MIPS, too.
>>
>> Though I still consider increasing PAGE_SIZE to 16k the preferable
>> solution because it will entirely do away with cache aliases.
>
> Won't setting PAGE_SIZE to 16k break some existing userlands (o32)?  I use a
> 4k PAGE_SIZE because the last few times I've tried 16k or 64k, init won't
> load (SIGSEGVs or such, which panics the kernel).
>

It isn't supposed to break things.  Using "stock" toolchains should 
result in executables that will run with any page size.

In the past, some geniuses came up with some linker (ld) patches that, 
in order to save a few KB of RAM, produced executables that ran only on 
4K pages.

There were some equally astute Debian emacs package maintainers that 
were carrying emacs patches into Debian that would not work on non-4K 
page size systems.

That said, I think such thinking should be punished.  The punishment 
should be to not have their software run when we select non-4K page 
sizes.  The vast majority of prepackaged software runs just fine with a 
larger page size.

David Daney

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 0/2] mm/highmem: make kmap cache coloring aware
  2014-08-26  0:36       ` David Daney
@ 2014-08-26  2:41         ` Joshua Kinard
  -1 siblings, 0 replies; 24+ messages in thread
From: Joshua Kinard @ 2014-08-26  2:41 UTC (permalink / raw)
  To: David Daney
  Cc: Ralf Baechle, Max Filippov, linux-xtensa, Chris Zankel,
	Marc Gauthier, linux-mm, linux-arch, linux-mips, linux-kernel,
	David Rientjes, Andrew Morton, Leonid Yegoshin, Steven Hill

On 08/25/2014 20:36, David Daney wrote:
> On 08/25/2014 04:55 PM, Joshua Kinard wrote:
>> On 08/25/2014 13:16, Ralf Baechle wrote:
>>> On Sat, Aug 02, 2014 at 05:11:37AM +0400, Max Filippov wrote:
>>>
>>>> this series adds mapping color control to the generic kmap code, allowing
>>>> architectures with aliasing VIPT cache to use high memory. There's also
>>>> use example of this new interface by xtensa.
>>>
>>> I haven't actually ported this to MIPS but it certainly appears to be
>>> the right framework to get highmem aliases handled on MIPS, too.
>>>
>>> Though I still consider increasing PAGE_SIZE to 16k the preferable
>>> solution because it will entirly do away with cache aliases.
>>
>> Won't setting PAGE_SIZE to 16k break some existing userlands (o32)?  I use a
>> 4k PAGE_SIZE because the last few times I've tried 16k or 64k, init won't
>> load (SIGSEGVs or such, which panics the kernel).
>>
> 
> It isn't supposed to break things.  Using "stock" toolchains should result
> in executables that will run with any page size.
> 
> In the past, some geniuses came up with some linker (ld) patches that, in
> order to save a few KB of RAM, produced executables that ran only on 4K pages.
> 
> There were some equally astute Debian emacs package maintainers that were
> carrying emacs patches into Debian that would not work on non-4K page size
> systems.
> 
> That said, I think such thinking should be punished.  The punishment should
> be to not have their software run when we select non-4K page sizes.  The
> vast majority of prepackaged software runs just fine with a larger page size.

Well, it does appear to mostly work now w/ 16k PAGE_SIZE.  The Octane booted
into userland with just a couple of "illegal instruction" errors from 'rm'
and 'mdadm'.  I wonder if that's tied to a hardcoded PAGE_SIZE somewhere.
Have to dig around and find something that reproduces the problem on demand.

-- 
Joshua Kinard
Gentoo/MIPS
kumba@gentoo.org
4096R/D25D95E3 2011-03-28

"The past tempts us, the present confuses us, the future frightens us.  And
our lives slip away, moment by moment, lost in that vast, terrible in-between."

--Emperor Turhan, Centauri Republic

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 0/2] mm/highmem: make kmap cache coloring aware
  2014-08-26  2:41         ` Joshua Kinard
@ 2014-08-26 17:45           ` David Daney
  -1 siblings, 0 replies; 24+ messages in thread
From: David Daney @ 2014-08-26 17:45 UTC (permalink / raw)
  To: Joshua Kinard
  Cc: Ralf Baechle, Max Filippov, linux-xtensa, Chris Zankel,
	Marc Gauthier, linux-mm, linux-arch, linux-mips, linux-kernel,
	David Rientjes, Andrew Morton, Leonid Yegoshin, Steven Hill

On 08/25/2014 07:41 PM, Joshua Kinard wrote:
> On 08/25/2014 20:36, David Daney wrote:
>> On 08/25/2014 04:55 PM, Joshua Kinard wrote:
>>> On 08/25/2014 13:16, Ralf Baechle wrote:
>>>> On Sat, Aug 02, 2014 at 05:11:37AM +0400, Max Filippov wrote:
>>>>
>>>>> this series adds mapping color control to the generic kmap code, allowing
>>>>> architectures with aliasing VIPT cache to use high memory. There's also
>>>>> a usage example of this new interface for xtensa.
>>>>
>>>> I haven't actually ported this to MIPS but it certainly appears to be
>>>> the right framework to get highmem aliases handled on MIPS, too.
>>>>
>>>> Though I still consider increasing PAGE_SIZE to 16k the preferable
>>>> solution because it will entirely do away with cache aliases.
>>>
>>> Won't setting PAGE_SIZE to 16k break some existing userlands (o32)?  I use a
>>> 4k PAGE_SIZE because the last few times I've tried 16k or 64k, init won't
>>> load (SIGSEGVs or such, which panics the kernel).
>>>
>>
>> It isn't supposed to break things.  Using "stock" toolchains should result
>> in executables that will run with any page size.
>>
>> In the past, some geniuses came up with some linker (ld) patches that, in
>> order to save a few KB of RAM, produced executables that ran only on 4K pages.
>>
>> There were some equally astute Debian emacs package maintainers that were
>> carrying emacs patches into Debian that would not work on non-4K page size
>> systems.
>>
>> That said, I think such thinking should be punished.  The punishment should
>> be to not have their software run when we select non-4K page sizes.  The
>> vast majority of prepackaged software runs just fine with a larger page size.
>
> Well, it does appear to mostly work now w/ 16k PAGE_SIZE.  The Octane booted
> into userland with just a couple of "illegal instruction" errors from 'rm'
> and 'mdadm'.  I wonder if that's tied to a hardcoded PAGE_SIZE somewhere.
> Have to dig around and find something that reproduces the problem on demand.
>

What does the output of "readelf -lW" look like for the failing 
programs?  If the "Offset" and "VirtAddr" constraints for the LOAD 
Program Headers are not possible to achieve with the selected PAGE_SIZE, 
you will see problems.  A "correct" toolchain will generate binaries 
that work with any PAGE_SIZE up to 64K.
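To make that constraint concrete: the mmap()-based ELF loader can only place a LOAD segment when its file offset and virtual address are congruent modulo the runtime page size. A minimal sketch of that check (the helper name is made up for illustration; the two segment values are the LOAD headers from the readelf output quoted later in this thread):

```python
# Sketch: can each ELF LOAD segment be mmap()ed at a given page size?
# The loader needs p_offset and p_vaddr to be congruent modulo the
# runtime page size (hypothetical helper, not actual kernel code).

def load_segments_mappable(segments, page_size):
    """segments: list of (p_offset, p_vaddr) pairs from `readelf -lW`."""
    return all(off % page_size == vaddr % page_size for off, vaddr in segments)

# LOAD headers from the readelf output shown in this thread (Align 0x10000):
segs = [(0x000000, 0x00400000), (0x0728C8, 0x004828C8)]

for ps in (4096, 16384, 65536):
    ok = load_segments_mappable(segs, ps)
    print(f"page size {ps >> 10}K: {'ok' if ok else 'broken'}")
```

Both segments here stay congruent even modulo 64K (0x28c8 in each case), consistent with the claim that a correctly linked binary runs with any PAGE_SIZE up to 64K.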


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 0/2] mm/highmem: make kmap cache coloring aware
  2014-08-25 17:16   ` Ralf Baechle
@ 2014-08-26 18:37     ` Leonid Yegoshin
  -1 siblings, 0 replies; 24+ messages in thread
From: Leonid Yegoshin @ 2014-08-26 18:37 UTC (permalink / raw)
  To: Ralf Baechle
  Cc: Max Filippov, linux-xtensa, Chris Zankel, Marc Gauthier,
	linux-mm, linux-arch, linux-mips, linux-kernel, David Rientjes,
	Andrew Morton, Steven J. Hill

A 16KB page size may only sometimes be a solution:

1) in microcontroller environments, small pages have the advantage
in the small-applications world.

2) some kernel drivers may not cope well with a different page size, especially
if the HW has embedded memory translation: GPUs, video/audio decoders,
supplemental accelerators.

3) finally, somebody can increase cache size faster than page size;
that race never finishes.
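The aliasing tradeoff being weighed here can be quantified: a VIPT cache aliases whenever its way size exceeds the page size, and the number of page colors is way_size / PAGE_SIZE. A rough sketch (the 32 KB 2-way and 4-way geometries below are illustrative, not any specific MIPS or xtensa core):

```python
# Sketch: page colors of a VIPT cache for illustrative geometries.
# Aliasing exists whenever way_size > page_size, i.e. colors > 1.

def cache_colors(cache_size, ways, page_size):
    way_size = cache_size // ways          # bytes indexed by virtual address
    return max(way_size // page_size, 1)   # number of distinct page colors

for cache_size, ways in ((32 * 1024, 2), (32 * 1024, 4)):
    for page_kb in (4, 16):
        c = cache_colors(cache_size, ways, page_kb * 1024)
        print(f"{cache_size >> 10}K {ways}-way, {page_kb}K pages: "
              f"{c} color(s){' -> aliasing' if c > 1 else ''}")
```

For these geometries a 16K page leaves a single color (no aliases, Ralf's point), while 4K pages leave 2-4 colors that a color-aware kmap must keep apart.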



Ralf Baechle <ralf@linux-mips.org> wrote:


On Sat, Aug 02, 2014 at 05:11:37AM +0400, Max Filippov wrote:

> this series adds mapping color control to the generic kmap code, allowing
> architectures with aliasing VIPT cache to use high memory. There's also
> a usage example of this new interface for xtensa.

I haven't actually ported this to MIPS but it certainly appears to be
the right framework to get highmem aliases handled on MIPS, too.

Though I still consider increasing PAGE_SIZE to 16k the preferable
solution because it will entirely do away with cache aliases.

Thanks,

  Ralf

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v4 0/2] mm/highmem: make kmap cache coloring aware
  2014-08-26 17:45           ` David Daney
@ 2014-08-27  1:04             ` Joshua Kinard
  -1 siblings, 0 replies; 24+ messages in thread
From: Joshua Kinard @ 2014-08-27  1:04 UTC (permalink / raw)
  To: David Daney
  Cc: Ralf Baechle, Max Filippov, linux-xtensa, Chris Zankel,
	Marc Gauthier, linux-mm, linux-arch, linux-mips, linux-kernel,
	David Rientjes, Andrew Morton, Leonid Yegoshin, Steven Hill

On 08/26/2014 13:45, David Daney wrote:
> On 08/25/2014 07:41 PM, Joshua Kinard wrote:
>> On 08/25/2014 20:36, David Daney wrote:
>>> On 08/25/2014 04:55 PM, Joshua Kinard wrote:
>>>> On 08/25/2014 13:16, Ralf Baechle wrote:
>>>>> On Sat, Aug 02, 2014 at 05:11:37AM +0400, Max Filippov wrote:
>>>>>
>>>>>> this series adds mapping color control to the generic kmap code, allowing
>>>>>> architectures with aliasing VIPT cache to use high memory. There's also
>>>>>> a usage example of this new interface for xtensa.
>>>>>
>>>>> I haven't actually ported this to MIPS but it certainly appears to be
>>>>> the right framework to get highmem aliases handled on MIPS, too.
>>>>>
>>>>> Though I still consider increasing PAGE_SIZE to 16k the preferable
>>>>> solution because it will entirely do away with cache aliases.
>>>>
>>>> Won't setting PAGE_SIZE to 16k break some existing userlands (o32)?  I
>>>> use a
>>>> 4k PAGE_SIZE because the last few times I've tried 16k or 64k, init won't
>>>> load (SIGSEGVs or such, which panics the kernel).
>>>>
>>>
>>> It isn't supposed to break things.  Using "stock" toolchains should result
>>> in executables that will run with any page size.
>>>
>>> In the past, some geniuses came up with some linker (ld) patches that, in
>>> order to save a few KB of RAM, produced executables that ran only on 4K
>>> pages.
>>>
>>> There were some equally astute Debian emacs package maintainers that were
>>> carrying emacs patches into Debian that would not work on non-4K page size
>>> systems.
>>>
>>> That said, I think such thinking should be punished.  The punishment should
>>> be to not have their software run when we select non-4K page sizes.  The
>>> vast majority of prepackaged software runs just fine with a larger page
>>> size.
>>
>> Well, it does appear to mostly work now w/ 16k PAGE_SIZE.  The Octane booted
>> into userland with just a couple of "illegal instruction" errors from 'rm'
>> and 'mdadm'.  I wonder if that's tied to a hardcoded PAGE_SIZE somewhere.
>> Have to dig around and find something that reproduces the problem on demand.
>>
> 
> What does the output of "readelf -lW" look like for the failing programs? 
> If the "Offset" and "VirtAddr" constraints for the LOAD Program Headers are
> not possible to achieve with the selected PAGE_SIZE, you will see problems. 
> A "correct" toolchain will generate binaries that work with any PAGE_SIZE up
> to 64K.

Well, I recently rebuilt shash, so that might've changed things.  But
running readelf -lW on shash made readelf itself core dump on the first
invocation (with a SIGBUS instead of a SIGILL).  So I instead ran readelf -lW
on readelf itself, which hasn't been rebuilt recently:

# readelf -lW /usr/bin/shash
Bus error (core dumped)
# readelf -lW /usr/bin/readelf

Elf file type is EXEC (Executable file)
Entry point 0x402590
There are 11 program headers, starting at offset 52

Program Headers:
  Type           Offset   VirtAddr   PhysAddr   FileSiz MemSiz  Flg Align
  PHDR           0x000034 0x00400034 0x00400034 0x00160 0x00160 R E 0x4
  INTERP         0x000194 0x00400194 0x00400194 0x0000d 0x0000d R   0x1
      [Requesting program interpreter: /lib/ld.so.1]
  REGINFO        0x0001c4 0x004001c4 0x004001c4 0x00018 0x00018 R   0x4
  LOAD           0x000000 0x00400000 0x00400000 0x72338 0x72338 R E 0x10000
  LOAD           0x0728c8 0x004828c8 0x004828c8 0x01834 0x03d88 RW  0x10000
  DYNAMIC        0x0001dc 0x004001dc 0x004001dc 0x000e0 0x000e0 RWE 0x4
  NOTE           0x0001a4 0x004001a4 0x004001a4 0x00020 0x00020 R   0x4
  GNU_EH_FRAME   0x0722c0 0x004722c0 0x004722c0 0x00024 0x00024 R   0x4
  GNU_RELRO      0x0728c8 0x004828c8 0x004828c8 0x00738 0x00738 R   0x1
  PAX_FLAGS      0x000000 0x00000000 0x00000000 0x00000 0x00000     0x4
  NULL           0x000000 0x00000000 0x00000000 0x00000 0x00000     0x4

 Section to Segment mapping:
  Segment Sections...
   00
   01     .interp
   02     .reginfo
   03     .interp .note.ABI-tag .reginfo .dynamic .hash .dynsym .dynstr
.gnu.version .gnu.version_r .init .text .MIPS.stubs .fini .rodata
.eh_frame_hdr .eh_frame
   04     .ctors .dtors .jcr .data.rel.ro .data .rld_map .got .sdata .sbss .bss
   05     .dynamic
   06     .note.ABI-tag
   07     .eh_frame_hdr
   08     .ctors .dtors .jcr .data.rel.ro
   09
   10

-- 
Joshua Kinard
Gentoo/MIPS
kumba@gentoo.org
4096R/D25D95E3 2011-03-28

"The past tempts us, the present confuses us, the future frightens us.  And
our lives slip away, moment by moment, lost in that vast, terrible in-between."

--Emperor Turhan, Centauri Republic

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2014-08-27  1:10 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-08-02  1:11 [PATCH v4 0/2] mm/highmem: make kmap cache coloring aware Max Filippov
2014-08-02  1:11 ` Max Filippov
2014-08-02  1:11 ` [PATCH v4 1/2] " Max Filippov
2014-08-02  1:11 ` Max Filippov
2014-08-02  1:11 ` Max Filippov
2014-08-02  1:11   ` Max Filippov
2014-08-02  1:11 ` [PATCH v4 2/2] xtensa: support aliasing cache in kmap Max Filippov
2014-08-02  1:11 ` Max Filippov
2014-08-02  1:11 ` Max Filippov
2014-08-02  1:11   ` Max Filippov
2014-08-25 17:16 ` [PATCH v4 0/2] mm/highmem: make kmap cache coloring aware Ralf Baechle
2014-08-25 17:16   ` Ralf Baechle
2014-08-25 23:55   ` Joshua Kinard
2014-08-25 23:55     ` Joshua Kinard
2014-08-26  0:36     ` David Daney
2014-08-26  0:36       ` David Daney
2014-08-26  2:41       ` Joshua Kinard
2014-08-26  2:41         ` Joshua Kinard
2014-08-26 17:45         ` David Daney
2014-08-26 17:45           ` David Daney
2014-08-27  1:04           ` Joshua Kinard
2014-08-27  1:04             ` Joshua Kinard
2014-08-26 18:37   ` Leonid Yegoshin
2014-08-26 18:37     ` Leonid Yegoshin
