* [PATCH 0/3] mm: vmacache updates
@ 2014-04-14 23:57 ` Davidlohr Bueso
  0 siblings, 0 replies; 10+ messages in thread
From: Davidlohr Bueso @ 2014-04-14 23:57 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, davidlohr, aswin

Two real additions here: one patch adds some needed debugging info,
and another includes an optimization suggested by Oleg. I preferred
to wait until 3.15 for these, giving the code a chance to settle a bit.

Thanks!

Davidlohr Bueso (3):
  mm: fix CONFIG_DEBUG_VM_RB description
  mm,vmacache: add debug data
  mm,vmacache: optimize overflow system-wide flushing

 include/linux/vm_event_item.h |  4 ++++
 include/linux/vmstat.h        |  6 ++++++
 lib/Kconfig.debug             | 13 +++++++++++--
 mm/vmacache.c                 | 19 ++++++++++++++++++-
 mm/vmstat.c                   |  4 ++++
 5 files changed, 43 insertions(+), 3 deletions(-)

-- 
1.8.1.4


* [PATCH 1/3] mm: fix CONFIG_DEBUG_VM_RB description
  2014-04-14 23:57 ` Davidlohr Bueso
@ 2014-04-14 23:57   ` Davidlohr Bueso
  -1 siblings, 0 replies; 10+ messages in thread
From: Davidlohr Bueso @ 2014-04-14 23:57 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, davidlohr, aswin

This appears to be a copy/paste error. Update the description
to reflect the extra rbtree debugging and checks this option
enables, instead of duplicating the CONFIG_DEBUG_VM text.

Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
---
 lib/Kconfig.debug | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 140b66a..819ac51 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -505,8 +505,7 @@ config DEBUG_VM_RB
 	bool "Debug VM red-black trees"
 	depends on DEBUG_VM
 	help
-	  Enable this to turn on more extended checks in the virtual-memory
-	  system that may impact performance.
+	  Enable VM red-black tree debugging information and extra validations.
 
 	  If unsure, say N.
 
-- 
1.8.1.4


* [PATCH 2/3] mm,vmacache: add debug data
  2014-04-14 23:57 ` Davidlohr Bueso
@ 2014-04-14 23:57   ` Davidlohr Bueso
  -1 siblings, 0 replies; 10+ messages in thread
From: Davidlohr Bueso @ 2014-04-14 23:57 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, davidlohr, aswin

Introduce a CONFIG_DEBUG_VM_VMACACHE option to enable
counting the cache hit rate -- exported in /proc/vmstat.

Any update to the caching scheme needs this kind of data,
so having the counters in-tree saves re-implementing
them every time.

Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
---
 include/linux/vm_event_item.h |  4 ++++
 include/linux/vmstat.h        |  6 ++++++
 lib/Kconfig.debug             | 10 ++++++++++
 mm/vmacache.c                 |  9 ++++++++-
 mm/vmstat.c                   |  4 ++++
 5 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 486c397..ced9234 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -80,6 +80,10 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		NR_TLB_LOCAL_FLUSH_ALL,
 		NR_TLB_LOCAL_FLUSH_ONE,
 #endif /* CONFIG_DEBUG_TLBFLUSH */
+#ifdef CONFIG_DEBUG_VM_VMACACHE
+		VMACACHE_FIND_CALLS,
+		VMACACHE_FIND_HITS,
+#endif
 		NR_VM_EVENT_ITEMS
 };
 
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 45c9cd1..82e7db7 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -95,6 +95,12 @@ static inline void vm_events_fold_cpu(int cpu)
 #define count_vm_tlb_events(x, y) do { (void)(y); } while (0)
 #endif
 
+#ifdef CONFIG_DEBUG_VM_VMACACHE
+#define count_vm_vmacache_event(x) count_vm_event(x)
+#else
+#define count_vm_vmacache_event(x) do {} while (0)
+#endif
+
 #define __count_zone_vm_events(item, zone, delta) \
 		__count_vm_events(item##_NORMAL - ZONE_NORMAL + \
 		zone_idx(zone), delta)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 819ac51..9ed3d9b 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -501,6 +501,16 @@ config DEBUG_VM
 
 	  If unsure, say N.
 
+config DEBUG_VM_VMACACHE
+	bool "Debug VMA caching"
+	depends on DEBUG_VM
+	help
+	  Enable this to turn on VMA caching debug information. Doing so
+	  can cause significant overhead, so only enable it in non-production
+	  environments.
+
+	  If unsure, say N.
+
 config DEBUG_VM_RB
 	bool "Debug VM red-black trees"
 	depends on DEBUG_VM
diff --git a/mm/vmacache.c b/mm/vmacache.c
index d4224b3..e167da2 100644
--- a/mm/vmacache.c
+++ b/mm/vmacache.c
@@ -78,11 +78,14 @@ struct vm_area_struct *vmacache_find(struct mm_struct *mm, unsigned long addr)
 	if (!vmacache_valid(mm))
 		return NULL;
 
+	count_vm_vmacache_event(VMACACHE_FIND_CALLS);
+
 	for (i = 0; i < VMACACHE_SIZE; i++) {
 		struct vm_area_struct *vma = current->vmacache[i];
 
 		if (vma && vma->vm_start <= addr && vma->vm_end > addr) {
 			BUG_ON(vma->vm_mm != mm);
+			count_vm_vmacache_event(VMACACHE_FIND_HITS);
 			return vma;
 		}
 	}
@@ -100,11 +103,15 @@ struct vm_area_struct *vmacache_find_exact(struct mm_struct *mm,
 	if (!vmacache_valid(mm))
 		return NULL;
 
+	count_vm_vmacache_event(VMACACHE_FIND_CALLS);
+
 	for (i = 0; i < VMACACHE_SIZE; i++) {
 		struct vm_area_struct *vma = current->vmacache[i];
 
-		if (vma && vma->vm_start == start && vma->vm_end == end)
+		if (vma && vma->vm_start == start && vma->vm_end == end) {
+			count_vm_vmacache_event(VMACACHE_FIND_HITS);
 			return vma;
+		}
 	}
 
 	return NULL;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 302dd07..82ce17c 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -866,6 +866,10 @@ const char * const vmstat_text[] = {
 	"nr_tlb_local_flush_one",
 #endif /* CONFIG_DEBUG_TLBFLUSH */
 
+#ifdef CONFIG_DEBUG_VM_VMACACHE
+	"vmacache_find_calls",
+	"vmacache_find_hits",
+#endif
 #endif /* CONFIG_VM_EVENTS_COUNTERS */
 };
 #endif /* CONFIG_PROC_FS || CONFIG_SYSFS || CONFIG_NUMA */
-- 
1.8.1.4


* [PATCH 3/3] mm,vmacache: optimize overflow system-wide flushing
  2014-04-14 23:57 ` Davidlohr Bueso
@ 2014-04-14 23:57   ` Davidlohr Bueso
  -1 siblings, 0 replies; 10+ messages in thread
From: Davidlohr Bueso @ 2014-04-14 23:57 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, davidlohr, aswin

For single-threaded workloads, we can avoid flushing
and iterating through the entire list of tasks, making
the whole function a lot faster: only a single atomic
read of mm_users is required.

Suggested-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
---
 mm/vmacache.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/mm/vmacache.c b/mm/vmacache.c
index e167da2..61c38ae 100644
--- a/mm/vmacache.c
+++ b/mm/vmacache.c
@@ -17,6 +17,16 @@ void vmacache_flush_all(struct mm_struct *mm)
 {
 	struct task_struct *g, *p;
 
+	/*
+	 * Single-threaded tasks need not iterate the entire
+	 * list of processes. We can skip the flushing as well,
+	 * since the mm's seqnum was already increased and we
+	 * need not worry about other threads' seqnums. The
+	 * current task's flush will occur upon the next lookup.
+	 */
+	if (atomic_read(&mm->mm_users) == 1)
+		return;
+
 	rcu_read_lock();
 	for_each_process_thread(g, p) {
 		/*
-- 
1.8.1.4


* Re: [PATCH 3/3] mm,vmacache: optimize overflow system-wide flushing
  2014-04-14 23:57   ` Davidlohr Bueso
@ 2014-04-15  0:02     ` Davidlohr Bueso
  -1 siblings, 0 replies; 10+ messages in thread
From: Davidlohr Bueso @ 2014-04-15  0:02 UTC (permalink / raw)
  To: akpm; +Cc: linux-mm, linux-kernel, aswin, Oleg Nesterov

Stupid script... Cc'ing Oleg.

On Mon, 2014-04-14 at 16:57 -0700, Davidlohr Bueso wrote:
> For single-threaded workloads, we can avoid flushing
> and iterating through the entire list of tasks, making
> the whole function a lot faster: only a single atomic
> read of mm_users is required.
> 
> Suggested-by: Oleg Nesterov <oleg@redhat.com>
> Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
> ---
>  mm/vmacache.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/mm/vmacache.c b/mm/vmacache.c
> index e167da2..61c38ae 100644
> --- a/mm/vmacache.c
> +++ b/mm/vmacache.c
> @@ -17,6 +17,16 @@ void vmacache_flush_all(struct mm_struct *mm)
>  {
>  	struct task_struct *g, *p;
>  
> +	/*
> +	 * Single-threaded tasks need not iterate the entire
> +	 * list of processes. We can skip the flushing as well,
> +	 * since the mm's seqnum was already increased and we
> +	 * need not worry about other threads' seqnums. The
> +	 * current task's flush will occur upon the next lookup.
> +	 */
> +	if (atomic_read(&mm->mm_users) == 1)
> +		return;
> +
>  	rcu_read_lock();
>  	for_each_process_thread(g, p) {
>  		/*



Thread overview: 10+ messages (newest: 2014-04-15  0:02 UTC)
2014-04-14 23:57 [PATCH 0/3] mm: vmacache updates Davidlohr Bueso
2014-04-14 23:57 ` [PATCH 1/3] mm: fix CONFIG_DEBUG_VM_RB description Davidlohr Bueso
2014-04-14 23:57 ` [PATCH 2/3] mm,vmacache: add debug data Davidlohr Bueso
2014-04-14 23:57 ` [PATCH 3/3] mm,vmacache: optimize overflow system-wide flushing Davidlohr Bueso
2014-04-15  0:02   ` Re: [PATCH 3/3] mm,vmacache: optimize overflow system-wide flushing Davidlohr Bueso
