* [PATCH v3] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()
@ 2019-02-22 14:01 Jing Xiangfeng
2019-02-22 18:56 ` Mike Kravetz
0 siblings, 1 reply; 2+ messages in thread
From: Jing Xiangfeng @ 2019-02-22 14:01 UTC (permalink / raw)
To: mike.kravetz, mhocko, akpm
Cc: hughd, linux-mm, n-horiguchi, aarcange, kirill.shutemov,
linux-kernel, Jing Xiangfeng
A user can change the node-specific hugetlb count, e.g. via
/sys/devices/system/node/node1/hugepages/hugepages-2048kB
The calculated value of count is the global total number of huge pages.
It can overflow when the user enters an extremely large value, in which
case the total number of huge pages ends up as a small value the user
did not expect. Fix this simply by clamping count to ULONG_MAX on
overflow and continuing. This is also more in line with the user's
intention of allocating as many huge pages as possible.
Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
---
mm/hugetlb.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index afef616..18fa7d7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2423,7 +2423,10 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
* per node hstate attribute: adjust count to global,
* but restrict alloc/free to the specified node.
*/
+ unsigned long old_count = count;
count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
+ if (count < old_count)
+ count = ULONG_MAX;
init_nodemask_of_node(nodes_allowed, nid);
} else
nodes_allowed = &node_states[N_MEMORY];
--
2.7.4
* Re: [PATCH v3] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()
2019-02-22 14:01 [PATCH v3] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common() Jing Xiangfeng
@ 2019-02-22 18:56 ` Mike Kravetz
0 siblings, 0 replies; 2+ messages in thread
From: Mike Kravetz @ 2019-02-22 18:56 UTC (permalink / raw)
To: Jing Xiangfeng, mhocko, akpm
Cc: hughd, linux-mm, n-horiguchi, aarcange, kirill.shutemov, linux-kernel
On 2/22/19 6:01 AM, Jing Xiangfeng wrote:
Thanks, just a couple small changes.
> A user can change the node-specific hugetlb count, e.g. via
> /sys/devices/system/node/node1/hugepages/hugepages-2048kB
Please make that,
/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> The calculated value of count is the global total number of huge pages.
> It can overflow when the user enters an extremely large value, in which
> case the total number of huge pages ends up as a small value the user
> did not expect. Fix this simply by clamping count to ULONG_MAX on
> overflow and continuing. This is also more in line with the user's
> intention of allocating as many huge pages as possible.
>
> Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
> ---
> mm/hugetlb.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index afef616..18fa7d7 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2423,7 +2423,10 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
> * per node hstate attribute: adjust count to global,
> * but restrict alloc/free to the specified node.
> */
> + unsigned long old_count = count;
> count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
Also, adding a comment here about checking for overflow would help people
reading the code. Something like,
/*
* If user specified count causes overflow, set to
* largest possible value.
*/
--
Mike Kravetz
> + if (count < old_count)
> + count = ULONG_MAX;
> init_nodemask_of_node(nodes_allowed, nid);
> } else
> nodes_allowed = &node_states[N_MEMORY];
>