* [RFC][PATCH 1/2] add res_counter_usage_safe
@ 2012-06-28 10:20 Kamezawa Hiroyuki
  2012-06-28 10:23 ` [RFC][PATCH 2/2] memcg : remove -ENOMEM at page migration Kamezawa Hiroyuki
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Kamezawa Hiroyuki @ 2012-06-28 10:20 UTC (permalink / raw)
  To: linux-mm
  Cc: Michal Hocko, David Rientjes, Johannes Weiner, Andrew Morton, Tejun Heo

This series contains the cleaned-up patches discussed a few days ago; the
topic was how to make compaction work well even when there is a memcg under OOM.
==
memcg: add res_counter_usage_safe()

I think usage > limit is a sign of a BUG. But sometimes
res_counter_charge_nofail() is very convenient; tcp_memcg uses it,
and I'd like to use it to help page migration.

This patch adds res_counter_usage_safe(), which returns min(usage, limit).
With this we can use res_counter_charge_nofail() without breaking the
user experience.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 include/linux/res_counter.h |    2 ++
 kernel/res_counter.c        |   15 +++++++++++++++
 net/ipv4/tcp_memcontrol.c   |    2 +-
 3 files changed, 18 insertions(+), 1 deletions(-)

diff --git a/include/linux/res_counter.h b/include/linux/res_counter.h
index 7d7fbe2..a6f8cc5 100644
--- a/include/linux/res_counter.h
+++ b/include/linux/res_counter.h
@@ -226,4 +226,6 @@ res_counter_set_soft_limit(struct res_counter *cnt,
 	return 0;
 }
 
+u64 res_counter_usage_safe(struct res_counter *cnt);
+
 #endif
diff --git a/kernel/res_counter.c b/kernel/res_counter.c
index ad581aa..e84149b 100644
--- a/kernel/res_counter.c
+++ b/kernel/res_counter.c
@@ -171,6 +171,21 @@ u64 res_counter_read_u64(struct res_counter *counter, int member)
 }
 #endif
 
+/*
+ * Returns usage. If usage > limit, limit is returned.
+ * This is useful not to break user experiance if the excess
+ * is temporal.
+ */
+u64 res_counter_usage_safe(struct res_counter *counter)
+{
+	u64 usage, limit;
+
+	limit = res_counter_read_u64(counter, RES_LIMIT);
+	usage = res_counter_read_u64(counter, RES_USAGE);
+
+	return min(usage, limit);
+}
+
 int res_counter_memparse_write_strategy(const char *buf,
 					unsigned long long *res)
 {
diff --git a/net/ipv4/tcp_memcontrol.c b/net/ipv4/tcp_memcontrol.c
index b6f3583..a73dce6 100644
--- a/net/ipv4/tcp_memcontrol.c
+++ b/net/ipv4/tcp_memcontrol.c
@@ -180,7 +180,7 @@ static u64 tcp_read_usage(struct mem_cgroup *memcg)
 		return atomic_long_read(&tcp_memory_allocated) << PAGE_SHIFT;
 
 	tcp = tcp_from_cgproto(cg_proto);
-	return res_counter_read_u64(&tcp->tcp_memory_allocated, RES_USAGE);
+	return res_counter_usage_safe(&tcp->tcp_memory_allocated);
 }
 
 static u64 tcp_cgroup_read(struct cgroup *cont, struct cftype *cft)
-- 
1.7.4.1


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>


* [RFC][PATCH 2/2] memcg : remove -ENOMEM at page migration.
  2012-06-28 10:20 [RFC][PATCH 1/2] add res_counter_usage_safe Kamezawa Hiroyuki
@ 2012-06-28 10:23 ` Kamezawa Hiroyuki
  2012-06-29 21:41   ` David Rientjes
  2012-07-02 16:48   ` Michal Hocko
  2012-06-28 10:57 ` [RFC][PATCH 1/2] add res_counter_usage_safe Glauber Costa
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 9+ messages in thread
From: Kamezawa Hiroyuki @ 2012-06-28 10:23 UTC (permalink / raw)
  To: linux-mm
  Cc: Michal Hocko, David Rientjes, Johannes Weiner, Andrew Morton, Tejun Heo

To handle many kinds of races, memcg adds an extra charge to the
page's memcg at page migration. But this affects page compaction
and makes it fail if the memcg is under OOM.

This patch uses res_counter_charge_nofail() in the page migration path
and removes the -ENOMEM return. With this, page migration will no
longer fail because of the status of the memcg.

Reported-by: David Rientjes <rientjes@google.com>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 mm/memcontrol.c |   26 +++++++-------------------
 1 files changed, 7 insertions(+), 19 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a2677e0..7424fab 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3168,6 +3168,7 @@ int mem_cgroup_prepare_migration(struct page *page,
 	struct page *newpage, struct mem_cgroup **memcgp, gfp_t gfp_mask)
 {
 	struct mem_cgroup *memcg = NULL;
+	struct res_counter *dummy;
 	struct page_cgroup *pc;
 	enum charge_type ctype;
 	int ret = 0;
@@ -3222,29 +3223,16 @@ int mem_cgroup_prepare_migration(struct page *page,
 	 */
 	if (!memcg)
 		return 0;
-
-	*memcgp = memcg;
-	ret = __mem_cgroup_try_charge(NULL, gfp_mask, 1, memcgp, false);
-	css_put(&memcg->css);/* drop extra refcnt */
-	if (ret) {
-		if (PageAnon(page)) {
-			lock_page_cgroup(pc);
-			ClearPageCgroupMigration(pc);
-			unlock_page_cgroup(pc);
-			/*
-			 * The old page may be fully unmapped while we kept it.
-			 */
-			mem_cgroup_uncharge_page(page);
-		}
-		/* we'll need to revisit this error code (we have -EINTR) */
-		return -ENOMEM;
-	}
 	/*
 	 * We charge new page before it's used/mapped. So, even if unlock_page()
 	 * is called before end_migration, we can catch all events on this new
 	 * page. In the case new page is migrated but not remapped, new page's
 	 * mapcount will be finally 0 and we call uncharge in end_migration().
 	 */
+	res_counter_charge_nofail(&memcg->res, PAGE_SIZE, &dummy);
+	if (do_swap_account)
+		res_counter_charge_nofail(&memcg->memsw, PAGE_SIZE, &dummy);
+
 	if (PageAnon(page))
 		ctype = MEM_CGROUP_CHARGE_TYPE_ANON;
 	else if (page_is_file_cache(page))
@@ -3807,9 +3795,9 @@ static inline u64 mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
 
 	if (!mem_cgroup_is_root(memcg)) {
 		if (!swap)
-			return res_counter_read_u64(&memcg->res, RES_USAGE);
+			return res_counter_usage_safe(&memcg->res);
 		else
-			return res_counter_read_u64(&memcg->memsw, RES_USAGE);
+			return res_counter_usage_safe(&memcg->memsw);
 	}
 
 	val = mem_cgroup_recursive_stat(memcg, MEM_CGROUP_STAT_CACHE);
-- 
1.7.4.1



* Re: [RFC][PATCH 1/2] add res_counter_usage_safe
  2012-06-28 10:20 [RFC][PATCH 1/2] add res_counter_usage_safe Kamezawa Hiroyuki
  2012-06-28 10:23 ` [RFC][PATCH 2/2] memcg : remove -ENOMEM at page migration Kamezawa Hiroyuki
@ 2012-06-28 10:57 ` Glauber Costa
  2012-06-29  2:35   ` Kamezawa Hiroyuki
  2012-06-29 21:34 ` David Rientjes
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Glauber Costa @ 2012-06-28 10:57 UTC (permalink / raw)
  To: Kamezawa Hiroyuki
  Cc: linux-mm, Michal Hocko, David Rientjes, Johannes Weiner,
	Andrew Morton, Tejun Heo

On 06/28/2012 02:20 PM, Kamezawa Hiroyuki wrote:
> I think usage > limit is a sign of a BUG. But sometimes
> res_counter_charge_nofail() is very convenient; tcp_memcg uses it,
> and I'd like to use it to help page migration.
> 
> This patch adds res_counter_usage_safe(), which returns min(usage, limit).
> With this we can use res_counter_charge_nofail() without breaking the
> user experience.
> 
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

I totally agree.

It would be very nice never to go over the limit, but the truth is, sometimes
we're forced to - for a limited time. In those circumstances, it is
better to actually charge the memcg, so the charges won't become unbalanced
and disappear. Every workaround proposed so far for this has been to
basically add some form of "extra_charge" to the memcg, which would
effectively charge to it but not display it.

The good fix is in the display side.

We should just be careful to always have a good justification for nofail
usage. It should be reserved for those situations where we really need
it, but that's on us in future reviews.

For the idea:

Acked-by: Glauber Costa <glommer@parallels.com>

For the patch itself: I believe we can take the lock once in
res_counter_usage_safe, and then read the value and the limit under it.

Calling res_counter_read_u64 two times seems not only wasteful but
potentially wrong, since the values can change under our nose.


* Re: [RFC][PATCH 1/2] add res_counter_usage_safe
  2012-06-28 10:57 ` [RFC][PATCH 1/2] add res_counter_usage_safe Glauber Costa
@ 2012-06-29  2:35   ` Kamezawa Hiroyuki
  0 siblings, 0 replies; 9+ messages in thread
From: Kamezawa Hiroyuki @ 2012-06-29  2:35 UTC (permalink / raw)
  To: Glauber Costa
  Cc: linux-mm, Michal Hocko, David Rientjes, Johannes Weiner,
	Andrew Morton, Tejun Heo

(2012/06/28 19:57), Glauber Costa wrote:
> On 06/28/2012 02:20 PM, Kamezawa Hiroyuki wrote:
>> I think usage > limit is a sign of a BUG. But sometimes
>> res_counter_charge_nofail() is very convenient; tcp_memcg uses it,
>> and I'd like to use it to help page migration.
>>
>> This patch adds res_counter_usage_safe(), which returns min(usage, limit).
>> With this we can use res_counter_charge_nofail() without breaking the
>> user experience.
>>
>> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> 
> I totally agree.
> 
> It would be very nice never to go over the limit, but the truth is, sometimes
> we're forced to - for a limited time. In those circumstances, it is
> better to actually charge the memcg, so the charges won't become unbalanced
> and disappear. Every workaround proposed so far for this has been to
> basically add some form of "extra_charge" to the memcg, which would
> effectively charge to it but not display it.
> 
> The good fix is in the display side.
> 
> We should just be careful to always have a good justification for nofail
> usage. It should be reserved for those situations where we really need
> it, but that's on us in future reviews.
> 
> For the idea:
> 
> Acked-by: Glauber Costa <glommer@parallels.com>
> 
> For the patch itself: I believe we can take the lock once in
> res_counter_usage_safe, and then read the value and the limit under it.
> 
> Calling res_counter_read_u64 two times seems not only wasteful but
> potentially wrong, since the values can change under our nose.
> 
Thank you for comments.

I'll update the patch using that way.

Thanks,
-Kame




* Re: [RFC][PATCH 1/2] add res_counter_usage_safe
  2012-06-28 10:20 [RFC][PATCH 1/2] add res_counter_usage_safe Kamezawa Hiroyuki
  2012-06-28 10:23 ` [RFC][PATCH 2/2] memcg : remove -ENOMEM at page migration Kamezawa Hiroyuki
  2012-06-28 10:57 ` [RFC][PATCH 1/2] add res_counter_usage_safe Glauber Costa
@ 2012-06-29 21:34 ` David Rientjes
  2012-07-02 16:52 ` Michal Hocko
  2012-07-04 13:19 ` Wanpeng Li
  4 siblings, 0 replies; 9+ messages in thread
From: David Rientjes @ 2012-06-29 21:34 UTC (permalink / raw)
  To: Kamezawa Hiroyuki
  Cc: linux-mm, Michal Hocko, Johannes Weiner, Andrew Morton, Tejun Heo

On Thu, 28 Jun 2012, Kamezawa Hiroyuki wrote:

> diff --git a/include/linux/res_counter.h b/include/linux/res_counter.h
> index 7d7fbe2..a6f8cc5 100644
> --- a/include/linux/res_counter.h
> +++ b/include/linux/res_counter.h
> @@ -226,4 +226,6 @@ res_counter_set_soft_limit(struct res_counter *cnt,
>  	return 0;
>  }
>  
> +u64 res_counter_usage_safe(struct res_counter *cnt);
> +
>  #endif
> diff --git a/kernel/res_counter.c b/kernel/res_counter.c
> index ad581aa..e84149b 100644
> --- a/kernel/res_counter.c
> +++ b/kernel/res_counter.c
> @@ -171,6 +171,21 @@ u64 res_counter_read_u64(struct res_counter *counter, int member)
>  }
>  #endif
>  
> +/*
> + * Returns usage. If usage > limit, limit is returned.
> + * This is useful not to break user experiance if the excess
> + * is temporal.

s/temporal/temporary/

> + */
> +u64 res_counter_usage_safe(struct res_counter *counter)
> +{
> +	u64 usage, limit;
> +
> +	limit = res_counter_read_u64(counter, RES_LIMIT);
> +	usage = res_counter_read_u64(counter, RES_USAGE);
> +
> +	return min(usage, limit);
> +}
> +
>  int res_counter_memparse_write_strategy(const char *buf,
>  					unsigned long long *res)
>  {
> diff --git a/net/ipv4/tcp_memcontrol.c b/net/ipv4/tcp_memcontrol.c
> index b6f3583..a73dce6 100644
> --- a/net/ipv4/tcp_memcontrol.c
> +++ b/net/ipv4/tcp_memcontrol.c
> @@ -180,7 +180,7 @@ static u64 tcp_read_usage(struct mem_cgroup *memcg)
>  		return atomic_long_read(&tcp_memory_allocated) << PAGE_SHIFT;
>  
>  	tcp = tcp_from_cgproto(cg_proto);
> -	return res_counter_read_u64(&tcp->tcp_memory_allocated, RES_USAGE);
> +	return res_counter_usage_safe(&tcp->tcp_memory_allocated);
>  }
>  
>  static u64 tcp_cgroup_read(struct cgroup *cont, struct cftype *cft)

Acked-by: David Rientjes <rientjes@google.com>


* Re: [RFC][PATCH 2/2] memcg : remove -ENOMEM at page migration.
  2012-06-28 10:23 ` [RFC][PATCH 2/2] memcg : remove -ENOMEM at page migration Kamezawa Hiroyuki
@ 2012-06-29 21:41   ` David Rientjes
  2012-07-02 16:48   ` Michal Hocko
  1 sibling, 0 replies; 9+ messages in thread
From: David Rientjes @ 2012-06-29 21:41 UTC (permalink / raw)
  To: Kamezawa Hiroyuki
  Cc: linux-mm, Michal Hocko, Johannes Weiner, Andrew Morton, Tejun Heo

On Thu, 28 Jun 2012, Kamezawa Hiroyuki wrote:

> To handle many kinds of races, memcg adds an extra charge to the
> page's memcg at page migration. But this affects page compaction
> and makes it fail if the memcg is under OOM.
> 
> This patch uses res_counter_charge_nofail() in the page migration path
> and removes the -ENOMEM return. With this, page migration will no
> longer fail because of the status of the memcg.
> 
> Reported-by: David Rientjes <rientjes@google.com>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Acked-by: David Rientjes <rientjes@google.com>

This is a very good improvement for page migration under memory compaction
and increases the likelihood that it will do useful work for transparent
hugepage allocations, thanks!


* Re: [RFC][PATCH 2/2] memcg : remove -ENOMEM at page migration.
  2012-06-28 10:23 ` [RFC][PATCH 2/2] memcg : remove -ENOMEM at page migration Kamezawa Hiroyuki
  2012-06-29 21:41   ` David Rientjes
@ 2012-07-02 16:48   ` Michal Hocko
  1 sibling, 0 replies; 9+ messages in thread
From: Michal Hocko @ 2012-07-02 16:48 UTC (permalink / raw)
  To: Kamezawa Hiroyuki
  Cc: linux-mm, David Rientjes, Johannes Weiner, Andrew Morton, Tejun Heo

On Thu 28-06-12 19:23:11, KAMEZAWA Hiroyuki wrote:
> To handle many kinds of races, memcg adds an extra charge to the
> page's memcg at page migration. But this affects page compaction
> and makes it fail if the memcg is under OOM.
> 
> This patch uses res_counter_charge_nofail() in the page migration path
> and removes the -ENOMEM return. With this, page migration will no
> longer fail because of the status of the memcg.

Maybe we could add something like below to the changelog as well.
"
Even though res_counter_charge_nofail can silently go over the memcg
limit, mem_cgroup_usage compensates for that, so the excess is never
exposed to userspace.
Excessive charges are only temporary and amount to a single page per CPU
in the worst case. This sounds tolerable and actually consumes fewer
charges than the current per-cpu memcg_stock.
"

> Reported-by: David Rientjes <rientjes@google.com>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Acked-by: Michal Hocko <mhocko@suse.cz>

Thanks!

> ---
>  mm/memcontrol.c |   26 +++++++-------------------
>  1 files changed, 7 insertions(+), 19 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index a2677e0..7424fab 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3168,6 +3168,7 @@ int mem_cgroup_prepare_migration(struct page *page,
>  	struct page *newpage, struct mem_cgroup **memcgp, gfp_t gfp_mask)
>  {
>  	struct mem_cgroup *memcg = NULL;
> +	struct res_counter *dummy;
>  	struct page_cgroup *pc;
>  	enum charge_type ctype;
>  	int ret = 0;
> @@ -3222,29 +3223,16 @@ int mem_cgroup_prepare_migration(struct page *page,
>  	 */
>  	if (!memcg)
>  		return 0;
> -
> -	*memcgp = memcg;
> -	ret = __mem_cgroup_try_charge(NULL, gfp_mask, 1, memcgp, false);
> -	css_put(&memcg->css);/* drop extra refcnt */
> -	if (ret) {
> -		if (PageAnon(page)) {
> -			lock_page_cgroup(pc);
> -			ClearPageCgroupMigration(pc);
> -			unlock_page_cgroup(pc);
> -			/*
> -			 * The old page may be fully unmapped while we kept it.
> -			 */
> -			mem_cgroup_uncharge_page(page);
> -		}
> -		/* we'll need to revisit this error code (we have -EINTR) */
> -		return -ENOMEM;
> -	}
>  	/*
>  	 * We charge new page before it's used/mapped. So, even if unlock_page()
>  	 * is called before end_migration, we can catch all events on this new
>  	 * page. In the case new page is migrated but not remapped, new page's
>  	 * mapcount will be finally 0 and we call uncharge in end_migration().
>  	 */
> +	res_counter_charge_nofail(&memcg->res, PAGE_SIZE, &dummy);
> +	if (do_swap_account)
> +		res_counter_charge_nofail(&memcg->memsw, PAGE_SIZE, &dummy);
> +
>  	if (PageAnon(page))
>  		ctype = MEM_CGROUP_CHARGE_TYPE_ANON;
>  	else if (page_is_file_cache(page))
> @@ -3807,9 +3795,9 @@ static inline u64 mem_cgroup_usage(struct mem_cgroup *memcg, bool swap)
>  
>  	if (!mem_cgroup_is_root(memcg)) {
>  		if (!swap)
> -			return res_counter_read_u64(&memcg->res, RES_USAGE);
> +			return res_counter_usage_safe(&memcg->res);
>  		else
> -			return res_counter_read_u64(&memcg->memsw, RES_USAGE);
> +			return res_counter_usage_safe(&memcg->memsw);
>  	}
>  
>  	val = mem_cgroup_recursive_stat(memcg, MEM_CGROUP_STAT_CACHE);
> -- 
> 1.7.4.1
> 
> 

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic


* Re: [RFC][PATCH 1/2] add res_counter_usage_safe
  2012-06-28 10:20 [RFC][PATCH 1/2] add res_counter_usage_safe Kamezawa Hiroyuki
                   ` (2 preceding siblings ...)
  2012-06-29 21:34 ` David Rientjes
@ 2012-07-02 16:52 ` Michal Hocko
  2012-07-04 13:19 ` Wanpeng Li
  4 siblings, 0 replies; 9+ messages in thread
From: Michal Hocko @ 2012-07-02 16:52 UTC (permalink / raw)
  To: Kamezawa Hiroyuki
  Cc: linux-mm, David Rientjes, Johannes Weiner, Andrew Morton, Tejun Heo

On Thu 28-06-12 19:20:58, KAMEZAWA Hiroyuki wrote:
> This series contains the cleaned-up patches discussed a few days ago; the
> topic was how to make compaction work well even when there is a memcg under OOM.
> ==
> memcg: add res_counter_usage_safe()
> 
> I think usage > limit is a sign of a BUG. But sometimes
> res_counter_charge_nofail() is very convenient; tcp_memcg uses it,
> and I'd like to use it to help page migration.
> 
> This patch adds res_counter_usage_safe(), which returns min(usage, limit).
> With this we can use res_counter_charge_nofail() without breaking the
> user experience.
> 
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Reviewed-by: Michal Hocko <mhocko@suse.cz>

> ---
>  include/linux/res_counter.h |    2 ++
>  kernel/res_counter.c        |   15 +++++++++++++++
>  net/ipv4/tcp_memcontrol.c   |    2 +-
>  3 files changed, 18 insertions(+), 1 deletions(-)
> 
> diff --git a/include/linux/res_counter.h b/include/linux/res_counter.h
> index 7d7fbe2..a6f8cc5 100644
> --- a/include/linux/res_counter.h
> +++ b/include/linux/res_counter.h
> @@ -226,4 +226,6 @@ res_counter_set_soft_limit(struct res_counter *cnt,
>  	return 0;
>  }
>  
> +u64 res_counter_usage_safe(struct res_counter *cnt);
> +
>  #endif
> diff --git a/kernel/res_counter.c b/kernel/res_counter.c
> index ad581aa..e84149b 100644
> --- a/kernel/res_counter.c
> +++ b/kernel/res_counter.c
> @@ -171,6 +171,21 @@ u64 res_counter_read_u64(struct res_counter *counter, int member)
>  }
>  #endif
>  
> +/*
> + * Returns usage. If usage > limit, limit is returned.
> + * This is useful not to break user experiance if the excess
> + * is temporal.
> + */
> +u64 res_counter_usage_safe(struct res_counter *counter)
> +{
> +	u64 usage, limit;
> +
> +	limit = res_counter_read_u64(counter, RES_LIMIT);
> +	usage = res_counter_read_u64(counter, RES_USAGE);
> +
> +	return min(usage, limit);
> +}
> +
>  int res_counter_memparse_write_strategy(const char *buf,
>  					unsigned long long *res)
>  {
> diff --git a/net/ipv4/tcp_memcontrol.c b/net/ipv4/tcp_memcontrol.c
> index b6f3583..a73dce6 100644
> --- a/net/ipv4/tcp_memcontrol.c
> +++ b/net/ipv4/tcp_memcontrol.c
> @@ -180,7 +180,7 @@ static u64 tcp_read_usage(struct mem_cgroup *memcg)
>  		return atomic_long_read(&tcp_memory_allocated) << PAGE_SHIFT;
>  
>  	tcp = tcp_from_cgproto(cg_proto);
> -	return res_counter_read_u64(&tcp->tcp_memory_allocated, RES_USAGE);
> +	return res_counter_usage_safe(&tcp->tcp_memory_allocated);
>  }
>  
>  static u64 tcp_cgroup_read(struct cgroup *cont, struct cftype *cft)
> -- 
> 1.7.4.1
> 
> 

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic


* Re: [RFC][PATCH 1/2] add res_counter_usage_safe
  2012-06-28 10:20 [RFC][PATCH 1/2] add res_counter_usage_safe Kamezawa Hiroyuki
                   ` (3 preceding siblings ...)
  2012-07-02 16:52 ` Michal Hocko
@ 2012-07-04 13:19 ` Wanpeng Li
  4 siblings, 0 replies; 9+ messages in thread
From: Wanpeng Li @ 2012-07-04 13:19 UTC (permalink / raw)
  To: Kamezawa Hiroyuki
  Cc: linux-mm, Michal Hocko, David Rientjes, Johannes Weiner,
	Andrew Morton, Tejun Heo, Wanpeng Li

On Thu, Jun 28, 2012 at 07:20:58PM +0900, Kamezawa Hiroyuki wrote:
>This series contains the cleaned-up patches discussed a few days ago; the
>topic was how to make compaction work well even when there is a memcg under OOM.
>==
>memcg: add res_counter_usage_safe()
>
>I think usage > limit is a sign of a BUG. But sometimes
>res_counter_charge_nofail() is very convenient; tcp_memcg uses it,
>and I'd like to use it to help page migration.
>
>This patch adds res_counter_usage_safe(), which returns min(usage, limit).
>With this we can use res_counter_charge_nofail() without breaking the
>user experience.
>
>Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
>---
> include/linux/res_counter.h |    2 ++
> kernel/res_counter.c        |   15 +++++++++++++++
> net/ipv4/tcp_memcontrol.c   |    2 +-
> 3 files changed, 18 insertions(+), 1 deletions(-)
>
>diff --git a/include/linux/res_counter.h b/include/linux/res_counter.h
>index 7d7fbe2..a6f8cc5 100644
>--- a/include/linux/res_counter.h
>+++ b/include/linux/res_counter.h
>@@ -226,4 +226,6 @@ res_counter_set_soft_limit(struct res_counter *cnt,
> 	return 0;
> }
> 
>+u64 res_counter_usage_safe(struct res_counter *cnt);
>+
> #endif
>diff --git a/kernel/res_counter.c b/kernel/res_counter.c
>index ad581aa..e84149b 100644
>--- a/kernel/res_counter.c
>+++ b/kernel/res_counter.c
>@@ -171,6 +171,21 @@ u64 res_counter_read_u64(struct res_counter *counter, int member)
> }
> #endif
> 
>+/*
>+ * Returns usage. If usage > limit, limit is returned.
>+ * This is useful not to break user experiance if the excess
                                      ^^^^^^^^
/experiance/experience
>+ * is temporal.
>+ */
>+u64 res_counter_usage_safe(struct res_counter *counter)
>+{
>+	u64 usage, limit;
>+
>+	limit = res_counter_read_u64(counter, RES_LIMIT);
>+	usage = res_counter_read_u64(counter, RES_USAGE);
>+
>+	return min(usage, limit);
>+}
>+
> int res_counter_memparse_write_strategy(const char *buf,
> 					unsigned long long *res)
> {
>diff --git a/net/ipv4/tcp_memcontrol.c b/net/ipv4/tcp_memcontrol.c
>index b6f3583..a73dce6 100644
>--- a/net/ipv4/tcp_memcontrol.c
>+++ b/net/ipv4/tcp_memcontrol.c
>@@ -180,7 +180,7 @@ static u64 tcp_read_usage(struct mem_cgroup *memcg)
> 		return atomic_long_read(&tcp_memory_allocated) << PAGE_SHIFT;
> 
> 	tcp = tcp_from_cgproto(cg_proto);
>-	return res_counter_read_u64(&tcp->tcp_memory_allocated, RES_USAGE);
>+	return res_counter_usage_safe(&tcp->tcp_memory_allocated);
> }
> 
> static u64 tcp_cgroup_read(struct cgroup *cont, struct cftype *cft)
>-- 
>1.7.4.1
>
>

