* [patch] mm, hugetlb_cgroup: align hugetlb cgroup limit to hugepage size
@ 2014-08-07 20:34 ` David Rientjes
0 siblings, 0 replies; 12+ messages in thread
From: David Rientjes @ 2014-08-07 20:34 UTC (permalink / raw)
To: Andrew Morton; +Cc: Tejun Heo, Li Zefan, Michal Hocko, linux-kernel, linux-mm
Memcg aligns memory.limit_in_bytes to PAGE_SIZE as part of the resource counter
since it makes no sense to allow a partial page to be charged.
Since the hugetlb cgroup also uses the resource counter, its limit is likewise
aligned to PAGE_SIZE; that alignment makes no sense unless the limit is a
multiple of the size of the hugepage being limited.
Align the hugetlb cgroup limit to the hugepage size.
Signed-off-by: David Rientjes <rientjes@google.com>
---
mm/hugetlb_cgroup.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -275,6 +275,8 @@ static ssize_t hugetlb_cgroup_write(struct kernfs_open_file *of,
ret = res_counter_memparse_write_strategy(buf, &val);
if (ret)
break;
+ val = ALIGN(val, 1 << (huge_page_order(&hstates[idx]) +
+ PAGE_SHIFT));
ret = res_counter_set_limit(&h_cg->hugepage[idx], val);
break;
default:
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* Re: [patch] mm, hugetlb_cgroup: align hugetlb cgroup limit to hugepage size
2014-08-07 20:34 ` David Rientjes
@ 2014-08-08 5:47 ` Aneesh Kumar K.V
0 siblings, 0 replies; 12+ messages in thread
From: Aneesh Kumar K.V @ 2014-08-08 5:47 UTC (permalink / raw)
To: David Rientjes, Andrew Morton
Cc: Tejun Heo, Li Zefan, Michal Hocko, linux-kernel, linux-mm
David Rientjes <rientjes@google.com> writes:
> Memcg aligns memory.limit_in_bytes to PAGE_SIZE as part of the resource counter
> since it makes no sense to allow a partial page to be charged.
>
> Since the hugetlb cgroup also uses the resource counter, its limit is likewise
> aligned to PAGE_SIZE; that alignment makes no sense unless the limit is a
> multiple of the size of the hugepage being limited.
>
> Align the hugetlb cgroup limit to the hugepage size.
>
> Signed-off-by: David Rientjes <rientjes@google.com>
> ---
> mm/hugetlb_cgroup.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
> --- a/mm/hugetlb_cgroup.c
> +++ b/mm/hugetlb_cgroup.c
> @@ -275,6 +275,8 @@ static ssize_t hugetlb_cgroup_write(struct kernfs_open_file *of,
> ret = res_counter_memparse_write_strategy(buf, &val);
> if (ret)
> break;
> + val = ALIGN(val, 1 << (huge_page_order(&hstates[idx]) +
> + PAGE_SHIFT));
You can use 1UL << huge_page_shift(&hstates[idx]) instead?
> ret = res_counter_set_limit(&h_cg->hugepage[idx], val);
> break;
> default:
>
-aneesh
* Re: [patch] mm, hugetlb_cgroup: align hugetlb cgroup limit to hugepage size
2014-08-07 20:34 ` David Rientjes
@ 2014-08-08 8:21 ` Michal Hocko
0 siblings, 0 replies; 12+ messages in thread
From: Michal Hocko @ 2014-08-08 8:21 UTC (permalink / raw)
To: David Rientjes; +Cc: Andrew Morton, Tejun Heo, Li Zefan, linux-kernel, linux-mm
On Thu 07-08-14 13:34:12, David Rientjes wrote:
> Memcg aligns memory.limit_in_bytes to PAGE_SIZE as part of the resource counter
> since it makes no sense to allow a partial page to be charged.
>
> Since the hugetlb cgroup also uses the resource counter, its limit is likewise
> aligned to PAGE_SIZE; that alignment makes no sense unless the limit is a
> multiple of the size of the hugepage being limited.
>
> Align the hugetlb cgroup limit to the hugepage size.
>
> Signed-off-by: David Rientjes <rientjes@google.com>
huge_page_shift as proposed by Aneesh looks better.
Acked-by: Michal Hocko <mhocko@suse.cz>
> ---
> mm/hugetlb_cgroup.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
> --- a/mm/hugetlb_cgroup.c
> +++ b/mm/hugetlb_cgroup.c
> @@ -275,6 +275,8 @@ static ssize_t hugetlb_cgroup_write(struct kernfs_open_file *of,
> ret = res_counter_memparse_write_strategy(buf, &val);
> if (ret)
> break;
> + val = ALIGN(val, 1 << (huge_page_order(&hstates[idx]) +
> + PAGE_SHIFT));
> ret = res_counter_set_limit(&h_cg->hugepage[idx], val);
> break;
> default:
--
Michal Hocko
SUSE Labs
* [patch v2] mm, hugetlb_cgroup: align hugetlb cgroup limit to hugepage size
2014-08-07 20:34 ` David Rientjes
@ 2014-08-08 22:07 ` David Rientjes
0 siblings, 0 replies; 12+ messages in thread
From: David Rientjes @ 2014-08-08 22:07 UTC (permalink / raw)
To: Andrew Morton
Cc: Aneesh Kumar K.V, Tejun Heo, Li Zefan, Michal Hocko,
linux-kernel, linux-mm
Memcg aligns memory.limit_in_bytes to PAGE_SIZE as part of the resource counter
since it makes no sense to allow a partial page to be charged.
Since the hugetlb cgroup also uses the resource counter, its limit is likewise
aligned to PAGE_SIZE; that alignment makes no sense unless the limit is a
multiple of the size of the hugepage being limited.
Align the hugetlb cgroup limit to the hugepage size.
Acked-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: David Rientjes <rientjes@google.com>
---
v2: use huge_page_shift() per Aneesh
Sorry for not cc'ing you initially, get_maintainer.pl failed me
mm/hugetlb_cgroup.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -275,6 +275,7 @@ static ssize_t hugetlb_cgroup_write(struct kernfs_open_file *of,
ret = res_counter_memparse_write_strategy(buf, &val);
if (ret)
break;
+ val = ALIGN(val, 1ULL << huge_page_shift(&hstates[idx]));
ret = res_counter_set_limit(&h_cg->hugepage[idx], val);
break;
default:
* Re: [patch v2] mm, hugetlb_cgroup: align hugetlb cgroup limit to hugepage size
2014-08-08 22:07 ` David Rientjes
@ 2014-08-09 10:05 ` Aneesh Kumar K.V
0 siblings, 0 replies; 12+ messages in thread
From: Aneesh Kumar K.V @ 2014-08-09 10:05 UTC (permalink / raw)
To: David Rientjes, Andrew Morton
Cc: Tejun Heo, Li Zefan, Michal Hocko, linux-kernel, linux-mm
David Rientjes <rientjes@google.com> writes:
> Memcg aligns memory.limit_in_bytes to PAGE_SIZE as part of the resource counter
> since it makes no sense to allow a partial page to be charged.
>
> Since the hugetlb cgroup also uses the resource counter, its limit is likewise
> aligned to PAGE_SIZE; that alignment makes no sense unless the limit is a
> multiple of the size of the hugepage being limited.
>
> Align the hugetlb cgroup limit to the hugepage size.
>
> Acked-by: Michal Hocko <mhocko@suse.cz>
> Signed-off-by: David Rientjes <rientjes@google.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
> v2: use huge_page_shift() per Aneesh
> Sorry for not cc'ing you initially, get_maintainer.pl failed me
>
> mm/hugetlb_cgroup.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
> --- a/mm/hugetlb_cgroup.c
> +++ b/mm/hugetlb_cgroup.c
> @@ -275,6 +275,7 @@ static ssize_t hugetlb_cgroup_write(struct kernfs_open_file *of,
> ret = res_counter_memparse_write_strategy(buf, &val);
> if (ret)
> break;
> + val = ALIGN(val, 1ULL << huge_page_shift(&hstates[idx]));
Do we really need ULL? The max value should fit in unsigned long, right?
> ret = res_counter_set_limit(&h_cg->hugepage[idx], val);
> break;
> default:
* Re: [patch v2] mm, hugetlb_cgroup: align hugetlb cgroup limit to hugepage size
2014-08-09 10:05 ` Aneesh Kumar K.V
@ 2014-08-09 21:34 ` David Rientjes
0 siblings, 0 replies; 12+ messages in thread
From: David Rientjes @ 2014-08-09 21:34 UTC (permalink / raw)
To: Aneesh Kumar K.V
Cc: Andrew Morton, Tejun Heo, Li Zefan, Michal Hocko, linux-kernel, linux-mm
On Sat, 9 Aug 2014, Aneesh Kumar K.V wrote:
> > diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
> > --- a/mm/hugetlb_cgroup.c
> > +++ b/mm/hugetlb_cgroup.c
> > @@ -275,6 +275,7 @@ static ssize_t hugetlb_cgroup_write(struct kernfs_open_file *of,
> > ret = res_counter_memparse_write_strategy(buf, &val);
> > if (ret)
> > break;
> > + val = ALIGN(val, 1ULL << huge_page_shift(&hstates[idx]));
>
> Do we really need ULL? The max value should fit in unsigned long, right?
>
I usually just go for type agreement.
> > ret = res_counter_set_limit(&h_cg->hugepage[idx], val);
> > break;
> > default:
>
>