From: Glauber Costa <glommer@parallels.com>
To: <cgroups@vger.kernel.org>
Cc: <linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
	<devel@openvz.org>, <kamezawa.hiroyu@jp.fujitsu.com>,
	Michal Hocko <mhocko@suse.cz>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Frederic Weisbecker <fweisbec@gmail.com>,
	Greg Thelen <gthelen@google.com>,
	Suleiman Souhlal <suleiman@google.com>,
	Suleiman Souhlal <ssouhlal@FreeBSD.org>
Subject: [PATCH 05/23] memcg: Reclaim when more than one page needed.
Date: Fri, 20 Apr 2012 18:57:13 -0300	[thread overview]
Message-ID: <1334959051-18203-6-git-send-email-glommer@parallels.com> (raw)
In-Reply-To: <1334959051-18203-1-git-send-email-glommer@parallels.com>

From: Suleiman Souhlal <ssouhlal@FreeBSD.org>

mem_cgroup_do_charge() was written before slab accounting, and expects
three cases: being called for 1 page, being called for a stock of 32
pages, or being called for a hugepage.  If we call it for 2 pages (and
several slabs used in process creation are that size, at least with the
debug options I had), it assumes it is being called for the stock and
just retries without reclaiming.

Fix that by passing down a minsize argument in addition to the csize.

And what to do about that (csize == PAGE_SIZE && ret) retry?  If it's
needed at all (and presumably it is, since it's there, perhaps to
handle races), then it should be extended to more than PAGE_SIZE; but
how far?  And should there be a retry count limit, and if so, what?
For now, retry allocations up to PAGE_ALLOC_COSTLY_ORDER (as
page_alloc.c does), stay safe with a cond_resched(), and make sure not
to retry at all if __GFP_NORETRY is set.
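The resulting decision ladder can be sketched as a small userspace
model.  This is a sketch, not kernel code: charge_decision() and the
GFP_*/CHARGE_* stand-in values are mine, COSTLY_ORDER mirrors
PAGE_ALLOC_COSTLY_ORDER, and the successful-reclaim early return
(mem_cgroup_margin() >= nr_pages) is omitted for brevity:

```c
#include <stdbool.h>

/* Simplified model of mem_cgroup_do_charge()'s decision order after
 * this patch.  The enum values and flag bits below are stand-ins for
 * the kernel's definitions. */
enum charge_result { CHARGE_OK, CHARGE_RETRY, CHARGE_WOULDBLOCK, CHARGE_NOMEM };

#define GFP_WAIT     0x1u        /* stand-in for __GFP_WAIT */
#define GFP_NORETRY  0x2u        /* stand-in for __GFP_NORETRY */
#define COSTLY_ORDER 3u          /* mirrors PAGE_ALLOC_COSTLY_ORDER */

/* made_progress models "mem_cgroup_reclaim() freed something, but the
 * margin check (omitted here) still found too little headroom". */
static enum charge_result charge_decision(unsigned int nr_pages,
                                          unsigned int min_pages,
                                          unsigned int gfp,
                                          bool made_progress)
{
	/* Never reclaim on behalf of optional batching: the caller
	 * retries with the minimum size instead. */
	if (nr_pages > min_pages)
		return CHARGE_RETRY;

	if (!(gfp & GFP_WAIT))
		return CHARGE_WOULDBLOCK;

	/* New in this patch: __GFP_NORETRY bails out before reclaim. */
	if (gfp & GFP_NORETRY)
		return CHARGE_NOMEM;

	/* Reclaim ran; retry non-costly charges if it made progress,
	 * instead of only single-page ones as before. */
	if (nr_pages <= (1u << COSTLY_ORDER) && made_progress)
		return CHARGE_RETRY;

	return CHARGE_NOMEM;
}
```

For example, a 2-page charge with GFP_WAIT set now reaches the retry
path (2 <= 1 << COSTLY_ORDER), where the old nr_pages == 1 test would
have failed it straight to NOMEM.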

Signed-off-by: Suleiman Souhlal <suleiman@google.com>
---
 mm/memcontrol.c |   18 +++++++++++-------
 1 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 4b94b2d..cbffc4c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2187,7 +2187,8 @@ enum {
 };
 
 static int mem_cgroup_do_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
-				unsigned int nr_pages, bool oom_check)
+				unsigned int nr_pages, unsigned int min_pages,
+				bool oom_check)
 {
 	unsigned long csize = nr_pages * PAGE_SIZE;
 	struct mem_cgroup *mem_over_limit;
@@ -2210,18 +2211,18 @@ static int mem_cgroup_do_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	} else
 		mem_over_limit = mem_cgroup_from_res_counter(fail_res, res);
 	/*
-	 * nr_pages can be either a huge page (HPAGE_PMD_NR), a batch
-	 * of regular pages (CHARGE_BATCH), or a single regular page (1).
-	 *
 	 * Never reclaim on behalf of optional batching, retry with a
 	 * single page instead.
 	 */
-	if (nr_pages == CHARGE_BATCH)
+	if (nr_pages > min_pages)
 		return CHARGE_RETRY;
 
 	if (!(gfp_mask & __GFP_WAIT))
 		return CHARGE_WOULDBLOCK;
 
+	if (gfp_mask & __GFP_NORETRY)
+		return CHARGE_NOMEM;
+
 	ret = mem_cgroup_reclaim(mem_over_limit, gfp_mask, flags);
 	if (mem_cgroup_margin(mem_over_limit) >= nr_pages)
 		return CHARGE_RETRY;
@@ -2234,8 +2235,10 @@ static int mem_cgroup_do_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	 * unlikely to succeed so close to the limit, and we fall back
 	 * to regular pages anyway in case of failure.
 	 */
-	if (nr_pages == 1 && ret)
+	if (nr_pages <= (1 << PAGE_ALLOC_COSTLY_ORDER) && ret) {
+		cond_resched();
 		return CHARGE_RETRY;
+	}
 
 	/*
 	 * At task move, charge accounts can be doubly counted. So, it's
@@ -2369,7 +2372,8 @@ again:
 			nr_oom_retries = MEM_CGROUP_RECLAIM_RETRIES;
 		}
 
-		ret = mem_cgroup_do_charge(memcg, gfp_mask, batch, oom_check);
+		ret = mem_cgroup_do_charge(memcg, gfp_mask, batch, nr_pages,
+		    oom_check);
 		switch (ret) {
 		case CHARGE_OK:
 			break;
-- 
1.7.7.6

