* [PATCH -V3 0/4] hugetlb: Fixes for hugetlb controller patches
@ 2012-06-15 12:41 ` Aneesh Kumar K.V
0 siblings, 0 replies; 12+ messages in thread
From: Aneesh Kumar K.V @ 2012-06-15 12:41 UTC (permalink / raw)
To: linux-mm, kamezawa.hiroyu, dhillf, rientjes, mhocko, akpm, hannes
Cc: linux-kernel
Hi Andrew,
This series contains fixes based on review feedback on top of the
hugetlb controller patches already in -mm. Please apply.
-aneesh
^ permalink raw reply [flat|nested] 12+ messages in thread
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
* [PATCH -V3 1/4] hugetlb: Mark hugetlb_max_hstate __read_mostly
2012-06-15 12:41 ` Aneesh Kumar K.V
@ 2012-06-15 12:41 ` Aneesh Kumar K.V
0 siblings, 0 replies; 12+ messages in thread
From: Aneesh Kumar K.V @ 2012-06-15 12:41 UTC (permalink / raw)
To: linux-mm, kamezawa.hiroyu, dhillf, rientjes, mhocko, akpm, hannes
Cc: linux-kernel, Aneesh Kumar K.V
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
We set this value only during boot.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
include/linux/hugetlb.h | 2 +-
mm/hugetlb.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 9650bb1..0f0877e 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -23,7 +23,7 @@ struct hugepage_subpool {
};
extern spinlock_t hugetlb_lock;
-extern int hugetlb_max_hstate;
+extern int hugetlb_max_hstate __read_mostly;
#define for_each_hstate(h) \
for ((h) = hstates; (h) < &hstates[hugetlb_max_hstate]; (h)++)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a5a30bf..c57740b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -37,7 +37,7 @@ const unsigned long hugetlb_zero = 0, hugetlb_infinity = ~0UL;
static gfp_t htlb_alloc_mask = GFP_HIGHUSER;
unsigned long hugepages_treat_as_movable;
-int hugetlb_max_hstate;
+int hugetlb_max_hstate __read_mostly;
unsigned int default_hstate_idx;
struct hstate hstates[HUGE_MAX_HSTATE];
--
1.7.10
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH -V3 2/4] hugetlb: Move all the in use pages to active list
2012-06-15 12:41 ` Aneesh Kumar K.V
@ 2012-06-15 12:41 ` Aneesh Kumar K.V
0 siblings, 0 replies; 12+ messages in thread
From: Aneesh Kumar K.V @ 2012-06-15 12:41 UTC (permalink / raw)
To: linux-mm, kamezawa.hiroyu, dhillf, rientjes, mhocko, akpm, hannes
Cc: linux-kernel, Aneesh Kumar K.V
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
When we fail to allocate huge pages from the reserve pool, hugetlb
tries to allocate them using alloc_buddy_huge_page. Add these pages
to the active list. We also need to add to the active list the huge
page we allocate when we soft offline the old page.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
mm/hugetlb.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c57740b..ec7b86e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -928,8 +928,14 @@ struct page *alloc_huge_page_node(struct hstate *h, int nid)
page = dequeue_huge_page_node(h, nid);
spin_unlock(&hugetlb_lock);
- if (!page)
+ if (!page) {
page = alloc_buddy_huge_page(h, nid);
+ if (page) {
+ spin_lock(&hugetlb_lock);
+ list_move(&page->lru, &h->hugepage_activelist);
+ spin_unlock(&hugetlb_lock);
+ }
+ }
return page;
}
@@ -1155,6 +1161,9 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
hugepage_subpool_put_pages(spool, chg);
return ERR_PTR(-ENOSPC);
}
+ spin_lock(&hugetlb_lock);
+ list_move(&page->lru, &h->hugepage_activelist);
+ spin_unlock(&hugetlb_lock);
}
set_page_private(page, (unsigned long)spool);
--
1.7.10
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH -V3 3/4] hugetlb/cgroup: Assign the page hugetlb cgroup when we move the page to active list.
2012-06-15 12:41 ` Aneesh Kumar K.V
@ 2012-06-15 12:41 ` Aneesh Kumar K.V
0 siblings, 0 replies; 12+ messages in thread
From: Aneesh Kumar K.V @ 2012-06-15 12:41 UTC (permalink / raw)
To: linux-mm, kamezawa.hiroyu, dhillf, rientjes, mhocko, akpm, hannes
Cc: linux-kernel, Aneesh Kumar K.V
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Assigning a page's hugetlb cgroup and moving the page to the active list
should happen with hugetlb_lock held. Otherwise, when we remove the
hugetlb cgroup, we would iterate the active list and find pages with a
NULL hugetlb cgroup value.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
mm/hugetlb.c | 22 ++++++++++------------
mm/hugetlb_cgroup.c | 5 +++--
2 files changed, 13 insertions(+), 14 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ec7b86e..c39e4be 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -928,14 +928,8 @@ struct page *alloc_huge_page_node(struct hstate *h, int nid)
page = dequeue_huge_page_node(h, nid);
spin_unlock(&hugetlb_lock);
- if (!page) {
+ if (!page)
page = alloc_buddy_huge_page(h, nid);
- if (page) {
- spin_lock(&hugetlb_lock);
- list_move(&page->lru, &h->hugepage_activelist);
- spin_unlock(&hugetlb_lock);
- }
- }
return page;
}
@@ -1150,9 +1144,13 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
}
spin_lock(&hugetlb_lock);
page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve);
- spin_unlock(&hugetlb_lock);
-
- if (!page) {
+ if (page) {
+ /* update page cgroup details */
+ hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h),
+ h_cg, page);
+ spin_unlock(&hugetlb_lock);
+ } else {
+ spin_unlock(&hugetlb_lock);
page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
if (!page) {
hugetlb_cgroup_uncharge_cgroup(idx,
@@ -1162,6 +1160,8 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
return ERR_PTR(-ENOSPC);
}
spin_lock(&hugetlb_lock);
+ hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h),
+ h_cg, page);
list_move(&page->lru, &h->hugepage_activelist);
spin_unlock(&hugetlb_lock);
}
@@ -1169,8 +1169,6 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
set_page_private(page, (unsigned long)spool);
vma_commit_reservation(h, vma, addr);
- /* update page cgroup details */
- hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page);
return page;
}
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 8e7ca0a..55e109a 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -218,6 +218,7 @@ done:
return ret;
}
+/* Should be called with hugetlb_lock held */
void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
struct hugetlb_cgroup *h_cg,
struct page *page)
@@ -225,9 +226,7 @@ void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
if (hugetlb_cgroup_disabled() || !h_cg)
return;
- spin_lock(&hugetlb_lock);
set_hugetlb_cgroup(page, h_cg);
- spin_unlock(&hugetlb_lock);
return;
}
@@ -391,6 +390,7 @@ int __init hugetlb_cgroup_file_init(int idx)
void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
{
struct hugetlb_cgroup *h_cg;
+ struct hstate *h = page_hstate(oldhpage);
if (hugetlb_cgroup_disabled())
return;
@@ -403,6 +403,7 @@ void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
/* move the h_cg details to new cgroup */
set_hugetlb_cgroup(newhpage, h_cg);
+ list_move(&newhpage->lru, &h->hugepage_activelist);
spin_unlock(&hugetlb_lock);
cgroup_release_and_wakeup_rmdir(&h_cg->css);
return;
--
1.7.10
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH -V3 4/4] hugetlb/cgroup: Remove exclude and wakeup rmdir calls from migrate
2012-06-15 12:41 ` Aneesh Kumar K.V
@ 2012-06-15 12:41 ` Aneesh Kumar K.V
0 siblings, 0 replies; 12+ messages in thread
From: Aneesh Kumar K.V @ 2012-06-15 12:41 UTC (permalink / raw)
To: linux-mm, kamezawa.hiroyu, dhillf, rientjes, mhocko, akpm, hannes
Cc: linux-kernel, Aneesh Kumar K.V
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
We already hold hugetlb_lock. That should prevent a parallel cgroup
rmdir from touching the page's hugetlb cgroup, so remove the exclude
and wakeup calls.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
mm/hugetlb_cgroup.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 55e109a..a7a0a79 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -387,6 +387,10 @@ int __init hugetlb_cgroup_file_init(int idx)
return 0;
}
+/*
+ * hugetlb_lock will make sure a parallel cgroup rmdir won't happen
+ * when we migrate hugepages
+ */
void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
{
struct hugetlb_cgroup *h_cg;
@@ -399,13 +403,11 @@ void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
spin_lock(&hugetlb_lock);
h_cg = hugetlb_cgroup_from_page(oldhpage);
set_hugetlb_cgroup(oldhpage, NULL);
- cgroup_exclude_rmdir(&h_cg->css);
/* move the h_cg details to new cgroup */
set_hugetlb_cgroup(newhpage, h_cg);
list_move(&newhpage->lru, &h->hugepage_activelist);
spin_unlock(&hugetlb_lock);
- cgroup_release_and_wakeup_rmdir(&h_cg->css);
return;
}
--
1.7.10
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH -V3 0/4] hugetlb: Fixes for hugetlb controller patches
2012-06-15 12:41 ` Aneesh Kumar K.V
@ 2012-06-15 12:50 ` Michal Hocko
0 siblings, 0 replies; 12+ messages in thread
From: Michal Hocko @ 2012-06-15 12:50 UTC (permalink / raw)
To: Aneesh Kumar K.V
Cc: linux-mm, kamezawa.hiroyu, dhillf, rientjes, akpm, hannes, linux-kernel
On Fri 15-06-12 18:11:18, Aneesh Kumar K.V wrote:
> Hi Andrew,
>
> This series contain fixes based on review feedback on top of the
> hugetlb controller patches already in -mm. Please apply.
>
> -aneesh
>
You can add my Reviewed-by to all 4 patches.
Reviewed-by: Michal Hocko <mhocko@suse.cz>
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2012-06-15 12:50 UTC | newest]
Thread overview: 12+ messages
2012-06-15 12:41 [PATCH -V3 0/4] hugetlb: Fixes for hugetlb controller patches Aneesh Kumar K.V
2012-06-15 12:41 ` [PATCH -V3 1/4] hugetlb: Mark hugetlb_max_hstate __read_mostly Aneesh Kumar K.V
2012-06-15 12:41 ` [PATCH -V3 2/4] hugetlb: Move all the in use pages to active list Aneesh Kumar K.V
2012-06-15 12:41 ` [PATCH -V3 3/4] hugetlb/cgroup: Assign the page hugetlb cgroup when we move the page " Aneesh Kumar K.V
2012-06-15 12:41 ` [PATCH -V3 4/4] hugetlb/cgroup: Remove exclude and wakeup rmdir calls from migrate Aneesh Kumar K.V
2012-06-15 12:50 ` [PATCH -V3 0/4] hugeltb: Fixes for hugetlb controller patches Michal Hocko