* [PATCH v2 0/5] A few cleanup patches for hugetlb_cgroup
From: Miaohe Lin @ 2022-07-29 8:01 UTC (permalink / raw)
To: akpm; +Cc: mike.kravetz, almasrymina, linux-mm, linux-kernel, linmiaohe
Hi everyone,
This series contains a few cleanup patches to remove unneeded checks,
use helper macros, remove unneeded return values, and so on. More details
can be found in the respective changelogs.
Thanks!
---
v2:
drop patch 2/6 per Mina
collect Reviewed-by tags per Mina and Mike. Thanks for the reviews!
---
Miaohe Lin (5):
hugetlb_cgroup: remove unneeded nr_pages > 0 check
hugetlb_cgroup: hugetlbfs: use helper macro SZ_1{K,M,G}
hugetlb_cgroup: remove unneeded return value
hugetlb_cgroup: use helper macro NUMA_NO_NODE
hugetlb_cgroup: use helper for_each_hstate and hstate_index
include/linux/hugetlb_cgroup.h | 19 ++++++++-----------
mm/hugetlb_cgroup.c | 27 ++++++++++++---------------
2 files changed, 20 insertions(+), 26 deletions(-)
--
2.23.0
* [PATCH v2 1/5] hugetlb_cgroup: remove unneeded nr_pages > 0 check
From: Miaohe Lin @ 2022-07-29 8:01 UTC (permalink / raw)
To: akpm; +Cc: mike.kravetz, almasrymina, linux-mm, linux-kernel, linmiaohe
When the code reaches this point, nr_pages must be > 0, as the earlier
!nr_pages check has already returned. Remove the unneeded nr_pages > 0
check to simplify the code.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Mina Almasry <almasrymina@google.com>
---
mm/hugetlb_cgroup.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index c86691c431fd..d16eb00c947d 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -442,7 +442,7 @@ void hugetlb_cgroup_uncharge_file_region(struct resv_map *resv,
if (hugetlb_cgroup_disabled() || !resv || !rg || !nr_pages)
return;
- if (rg->reservation_counter && resv->pages_per_hpage && nr_pages > 0 &&
+ if (rg->reservation_counter && resv->pages_per_hpage &&
!resv->reservation_counter) {
page_counter_uncharge(rg->reservation_counter,
nr_pages * resv->pages_per_hpage);
--
2.23.0
* [PATCH v2 2/5] hugetlb_cgroup: hugetlbfs: use helper macro SZ_1{K,M,G}
From: Miaohe Lin @ 2022-07-29 8:01 UTC (permalink / raw)
To: akpm; +Cc: mike.kravetz, almasrymina, linux-mm, linux-kernel, linmiaohe
Use the helper macros SZ_1K, SZ_1M and SZ_1G for the size conversions.
Minor readability improvement.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Mina Almasry <almasrymina@google.com>
---
mm/hugetlb_cgroup.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index d16eb00c947d..01a709468937 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -675,12 +675,12 @@ static ssize_t hugetlb_cgroup_reset(struct kernfs_open_file *of,
static char *mem_fmt(char *buf, int size, unsigned long hsize)
{
- if (hsize >= (1UL << 30))
- snprintf(buf, size, "%luGB", hsize >> 30);
- else if (hsize >= (1UL << 20))
- snprintf(buf, size, "%luMB", hsize >> 20);
+ if (hsize >= SZ_1G)
+ snprintf(buf, size, "%luGB", hsize / SZ_1G);
+ else if (hsize >= SZ_1M)
+ snprintf(buf, size, "%luMB", hsize / SZ_1M);
else
- snprintf(buf, size, "%luKB", hsize >> 10);
+ snprintf(buf, size, "%luKB", hsize / SZ_1K);
return buf;
}
--
2.23.0
* [PATCH v2 3/5] hugetlb_cgroup: remove unneeded return value
From: Miaohe Lin @ 2022-07-29 8:01 UTC (permalink / raw)
To: akpm; +Cc: mike.kravetz, almasrymina, linux-mm, linux-kernel, linmiaohe
The return values of set_hugetlb_cgroup() and set_hugetlb_cgroup_rsvd()
are always ignored. Remove them to clean up the code.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
include/linux/hugetlb_cgroup.h | 19 ++++++++-----------
1 file changed, 8 insertions(+), 11 deletions(-)
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 379344828e78..630cd255d0cf 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -90,32 +90,31 @@ hugetlb_cgroup_from_page_rsvd(struct page *page)
return __hugetlb_cgroup_from_page(page, true);
}
-static inline int __set_hugetlb_cgroup(struct page *page,
+static inline void __set_hugetlb_cgroup(struct page *page,
struct hugetlb_cgroup *h_cg, bool rsvd)
{
VM_BUG_ON_PAGE(!PageHuge(page), page);
if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
- return -1;
+ return;
if (rsvd)
set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
(unsigned long)h_cg);
else
set_page_private(page + SUBPAGE_INDEX_CGROUP,
(unsigned long)h_cg);
- return 0;
}
-static inline int set_hugetlb_cgroup(struct page *page,
+static inline void set_hugetlb_cgroup(struct page *page,
struct hugetlb_cgroup *h_cg)
{
- return __set_hugetlb_cgroup(page, h_cg, false);
+ __set_hugetlb_cgroup(page, h_cg, false);
}
-static inline int set_hugetlb_cgroup_rsvd(struct page *page,
+static inline void set_hugetlb_cgroup_rsvd(struct page *page,
struct hugetlb_cgroup *h_cg)
{
- return __set_hugetlb_cgroup(page, h_cg, true);
+ __set_hugetlb_cgroup(page, h_cg, true);
}
static inline bool hugetlb_cgroup_disabled(void)
@@ -199,16 +198,14 @@ hugetlb_cgroup_from_page_rsvd(struct page *page)
return NULL;
}
-static inline int set_hugetlb_cgroup(struct page *page,
+static inline void set_hugetlb_cgroup(struct page *page,
struct hugetlb_cgroup *h_cg)
{
- return 0;
}
-static inline int set_hugetlb_cgroup_rsvd(struct page *page,
+static inline void set_hugetlb_cgroup_rsvd(struct page *page,
struct hugetlb_cgroup *h_cg)
{
- return 0;
}
static inline bool hugetlb_cgroup_disabled(void)
--
2.23.0
* [PATCH v2 4/5] hugetlb_cgroup: use helper macro NUMA_NO_NODE
From: Miaohe Lin @ 2022-07-29 8:01 UTC (permalink / raw)
To: akpm; +Cc: mike.kravetz, almasrymina, linux-mm, linux-kernel, linmiaohe
Use the helper macro NUMA_NO_NODE instead of the magic number -1. Minor
readability improvement.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Mina Almasry <almasrymina@google.com>
---
mm/hugetlb_cgroup.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 01a709468937..2affccfe59f1 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -154,9 +154,9 @@ hugetlb_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
* function.
*/
for_each_node(node) {
- /* Set node_to_alloc to -1 for offline nodes. */
+ /* Set node_to_alloc to NUMA_NO_NODE for offline nodes. */
int node_to_alloc =
- node_state(node, N_NORMAL_MEMORY) ? node : -1;
+ node_state(node, N_NORMAL_MEMORY) ? node : NUMA_NO_NODE;
h_cgroup->nodeinfo[node] =
kzalloc_node(sizeof(struct hugetlb_cgroup_per_node),
GFP_KERNEL, node_to_alloc);
--
2.23.0
* [PATCH v2 5/5] hugetlb_cgroup: use helper for_each_hstate and hstate_index
From: Miaohe Lin @ 2022-07-29 8:01 UTC (permalink / raw)
To: akpm; +Cc: mike.kravetz, almasrymina, linux-mm, linux-kernel, linmiaohe
Use the helpers for_each_hstate() and hstate_index() to iterate over the
hstates and obtain the hstate index. Minor readability improvement.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Mina Almasry <almasrymina@google.com>
---
mm/hugetlb_cgroup.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 2affccfe59f1..f61d132df52b 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -75,11 +75,11 @@ parent_hugetlb_cgroup(struct hugetlb_cgroup *h_cg)
static inline bool hugetlb_cgroup_have_usage(struct hugetlb_cgroup *h_cg)
{
- int idx;
+ struct hstate *h;
- for (idx = 0; idx < hugetlb_max_hstate; idx++) {
+ for_each_hstate(h) {
if (page_counter_read(
- hugetlb_cgroup_counter_from_cgroup(h_cg, idx)))
+ hugetlb_cgroup_counter_from_cgroup(h_cg, hstate_index(h))))
return true;
}
return false;
@@ -225,17 +225,14 @@ static void hugetlb_cgroup_css_offline(struct cgroup_subsys_state *css)
struct hugetlb_cgroup *h_cg = hugetlb_cgroup_from_css(css);
struct hstate *h;
struct page *page;
- int idx;
do {
- idx = 0;
for_each_hstate(h) {
spin_lock_irq(&hugetlb_lock);
list_for_each_entry(page, &h->hugepage_activelist, lru)
- hugetlb_cgroup_move_parent(idx, h_cg, page);
+ hugetlb_cgroup_move_parent(hstate_index(h), h_cg, page);
spin_unlock_irq(&hugetlb_lock);
- idx++;
}
cond_resched();
} while (hugetlb_cgroup_have_usage(h_cg));
--
2.23.0