* re: mm: memory-hotplug: enable memory hotplug to handle hugepage
@ 2015-05-11 11:17 Dan Carpenter
  2015-05-11 11:29 ` Dan Carpenter
  2015-05-11 23:54 ` Naoya Horiguchi
  0 siblings, 2 replies; 13+ messages in thread
From: Dan Carpenter @ 2015-05-11 11:17 UTC (permalink / raw)
  To: n-horiguchi; +Cc: linux-mm

Hello Naoya Horiguchi,

The patch c8721bbbdd36: "mm: memory-hotplug: enable memory hotplug to
handle hugepage" from Sep 11, 2013, leads to the following static
checker warning:

	mm/hugetlb.c:1203 dissolve_free_huge_pages()
	warn: potential right shift more than type allows '9,18,64'

mm/hugetlb.c
  1189  void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
  1190  {
  1191          unsigned int order = 8 * sizeof(void *);
                                     ^^^^^^^^^^^^^^^^^^
Let's say order is 64.

  1192          unsigned long pfn;
  1193          struct hstate *h;
  1194  
  1195          if (!hugepages_supported())
  1196                  return;
  1197  
  1198          /* Set scan step to minimum hugepage size */
  1199          for_each_hstate(h)
  1200                  if (order > huge_page_order(h))
  1201                          order = huge_page_order(h);
  1202          VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
  1203          for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
                                                            ^^^^^^^^^^
1 << 64 is undefined behavior, but let's say it evaluates to zero, which
is typical for GCC.  That makes this an endless loop.

  1204                  dissolve_free_huge_page(pfn_to_page(pfn));
  1205  }

regards,
dan carpenter

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href="mailto:dont@kvack.org">email@kvack.org</a>


* Re: mm: memory-hotplug: enable memory hotplug to handle hugepage
  2015-05-11 11:17 mm: memory-hotplug: enable memory hotplug to handle hugepage Dan Carpenter
@ 2015-05-11 11:29 ` Dan Carpenter
  2015-05-11 23:56   ` Naoya Horiguchi
  2015-05-11 23:54 ` Naoya Horiguchi
  1 sibling, 1 reply; 13+ messages in thread
From: Dan Carpenter @ 2015-05-11 11:29 UTC (permalink / raw)
  To: n-horiguchi; +Cc: linux-mm

On Mon, May 11, 2015 at 02:17:48PM +0300, Dan Carpenter wrote:
> Hello Naoya Horiguchi,
> 
> The patch c8721bbbdd36: "mm: memory-hotplug: enable memory hotplug to
> handle hugepage" from Sep 11, 2013, leads to the following static
> checker warning:
> 
> 	mm/hugetlb.c:1203 dissolve_free_huge_pages()
> 	warn: potential right shift more than type allows '9,18,64'
> 
> mm/hugetlb.c
>   1189  void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
>   1190  {
>   1191          unsigned int order = 8 * sizeof(void *);
>                                      ^^^^^^^^^^^^^^^^^^
> Let's say order is 64.

Actually, the 64 here is just chosen to be an impossibly high number,
isn't it?  It's a bit hard to understand that at first glance.

regards,
dan carpenter


* Re: mm: memory-hotplug: enable memory hotplug to handle hugepage
  2015-05-11 11:17 mm: memory-hotplug: enable memory hotplug to handle hugepage Dan Carpenter
  2015-05-11 11:29 ` Dan Carpenter
@ 2015-05-11 23:54 ` Naoya Horiguchi
  2015-05-12  8:43   ` Dan Carpenter
  1 sibling, 1 reply; 13+ messages in thread
From: Naoya Horiguchi @ 2015-05-11 23:54 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: linux-mm, Andrew Morton

On Mon, May 11, 2015 at 02:17:48PM +0300, Dan Carpenter wrote:
> Hello Naoya Horiguchi,
> 
> The patch c8721bbbdd36: "mm: memory-hotplug: enable memory hotplug to
> handle hugepage" from Sep 11, 2013, leads to the following static
> checker warning:
> 
> 	mm/hugetlb.c:1203 dissolve_free_huge_pages()
> 	warn: potential right shift more than type allows '9,18,64'
> 
> mm/hugetlb.c
>   1189  void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
>   1190  {
>   1191          unsigned int order = 8 * sizeof(void *);
>                                      ^^^^^^^^^^^^^^^^^^
> Let's say order is 64.

Hi Dan, thank you for reporting.

order is supposed to be capped by walking over all hstates and finding the
minimum hugepage order, as done in the code below, and I intended this
initialization to give the potential maximum. I guess keeping it at 64
doesn't solve the above warning, so should we use 8 * sizeof(void *) - 1,
i.e. 63? I haven't tested on a 32-bit system, so I'm not sure whether this
code can be used on one, but considering that case, keeping sizeof(void *)
seems better.

> 
>   1192          unsigned long pfn;
>   1193          struct hstate *h;
>   1194  
>   1195          if (!hugepages_supported())
>   1196                  return;
>   1197  
>   1198          /* Set scan step to minimum hugepage size */
>   1199          for_each_hstate(h)
>   1200                  if (order > huge_page_order(h))
>   1201                          order = huge_page_order(h);
>   1202          VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
>   1203          for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
>                                                             ^^^^^^^^^^
> 1 << 64 is undefined but let's say it's zero because that's normal for
> GCC.  This is an endless loop.

That never happens if the hstates are properly initialized, but we had
better avoid the potential risk.

How about the following patch?

Thanks,
Naoya Horiguchi
---
From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Date: Tue, 12 May 2015 08:17:10 +0900
Subject: [PATCH] mm/hugetlb: decrement initial value of order in
 dissolve_free_huge_pages

Currently the initial value of order in dissolve_free_huge_pages is 64 or 32,
which leads to the following static checker warning:

  mm/hugetlb.c:1203 dissolve_free_huge_pages()
  warn: potential right shift more than type allows '9,18,64'

This is a potential infinite loop, because 1 << order (== 0) is used
in a for-loop like this:

  for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
      ...

So this patch simply avoids the risk by decrementing the initial value.

Fixes: c8721bbbdd36 ("mm: memory-hotplug: enable memory hotplug to handle hugepage")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 mm/hugetlb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c41b2a0ee273..74abfb44e4d0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1086,7 +1086,8 @@ static void dissolve_free_huge_page(struct page *page)
  */
 void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
-	unsigned int order = 8 * sizeof(void *);
+	/* Initialized to "high enough" value which is capped later */
+	unsigned int order = 8 * sizeof(void *) - 1;
 	unsigned long pfn;
 	struct hstate *h;
 
-- 
2.1.0


* Re: mm: memory-hotplug: enable memory hotplug to handle hugepage
  2015-05-11 11:29 ` Dan Carpenter
@ 2015-05-11 23:56   ` Naoya Horiguchi
  0 siblings, 0 replies; 13+ messages in thread
From: Naoya Horiguchi @ 2015-05-11 23:56 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: linux-mm

On Mon, May 11, 2015 at 02:29:24PM +0300, Dan Carpenter wrote:
> On Mon, May 11, 2015 at 02:17:48PM +0300, Dan Carpenter wrote:
> > Hello Naoya Horiguchi,
> > 
> > The patch c8721bbbdd36: "mm: memory-hotplug: enable memory hotplug to
> > handle hugepage" from Sep 11, 2013, leads to the following static
> > checker warning:
> > 
> > 	mm/hugetlb.c:1203 dissolve_free_huge_pages()
> > 	warn: potential right shift more than type allows '9,18,64'
> > 
> > mm/hugetlb.c
> >   1189  void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> >   1190  {
> >   1191          unsigned int order = 8 * sizeof(void *);
> >                                      ^^^^^^^^^^^^^^^^^^
> > Let's say order is 64.
> 
> Actually, the 64 here is just chosen to be an impossibly high number
> isn't it?

Right.

>  It's a bit complicated to understand that at first glance.

OK, so I added oneline comment in the patch in another email.

Thanks,
Naoya Horiguchi

* Re: mm: memory-hotplug: enable memory hotplug to handle hugepage
  2015-05-11 23:54 ` Naoya Horiguchi
@ 2015-05-12  8:43   ` Dan Carpenter
  2015-05-12  9:04     ` Naoya Horiguchi
  0 siblings, 1 reply; 13+ messages in thread
From: Dan Carpenter @ 2015-05-12  8:43 UTC (permalink / raw)
  To: Naoya Horiguchi; +Cc: linux-mm, Andrew Morton

On Mon, May 11, 2015 at 11:54:44PM +0000, Naoya Horiguchi wrote:
> @@ -1086,7 +1086,8 @@ static void dissolve_free_huge_page(struct page *page)
>   */
>  void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
>  {
> -	unsigned int order = 8 * sizeof(void *);
> +	/* Initialized to "high enough" value which is capped later */
> +	unsigned int order = 8 * sizeof(void *) - 1;

Why not use UINT_MAX?  That way it's clearer that the value is not valid.
Otherwise, doing a complicated calculation makes it seem like we will
actually use the variable.

regards,
dan carpenter


* Re: mm: memory-hotplug: enable memory hotplug to handle hugepage
  2015-05-12  8:43   ` Dan Carpenter
@ 2015-05-12  9:04     ` Naoya Horiguchi
  2015-05-12  9:13       ` Dan Carpenter
  0 siblings, 1 reply; 13+ messages in thread
From: Naoya Horiguchi @ 2015-05-12  9:04 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: linux-mm, Andrew Morton

On Tue, May 12, 2015 at 11:43:39AM +0300, Dan Carpenter wrote:
> On Mon, May 11, 2015 at 11:54:44PM +0000, Naoya Horiguchi wrote:
> > @@ -1086,7 +1086,8 @@ static void dissolve_free_huge_page(struct page *page)
> >   */
> >  void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> >  {
> > -	unsigned int order = 8 * sizeof(void *);
> > +	/* Initialized to "high enough" value which is capped later */
> > +	unsigned int order = 8 * sizeof(void *) - 1;
> 
> Why not use UINT_MAX?  It's more clear that it's not valid that way.

It's OK as long as the code checker doesn't show the "too much right shift"
warning. With UINT_MAX, inserting VM_BUG_ON(order == UINT_MAX) after the
for_each_hstate loop might be safer (1 << UINT_MAX is clearly wrong).

> Otherwise doing a complicated calculation it makes it seem like we will
> use the variable.

OK.

Is the below OK for you?
---
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 271e4432734c..804437505a84 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1188,7 +1188,7 @@ static void dissolve_free_huge_page(struct page *page)
  */
 void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
-	unsigned int order = 8 * sizeof(void *);
+	unsigned int order = UINT_MAX;
 	unsigned long pfn;
 	struct hstate *h;
 
@@ -1200,6 +1200,7 @@ void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 		if (order > huge_page_order(h))
 			order = huge_page_order(h);
 	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
+	VM_BUG_ON(order == UINT_MAX);
 	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
 		dissolve_free_huge_page(pfn_to_page(pfn));
 }

* Re: mm: memory-hotplug: enable memory hotplug to handle hugepage
  2015-05-12  9:04     ` Naoya Horiguchi
@ 2015-05-12  9:13       ` Dan Carpenter
  2015-05-12  9:16         ` Naoya Horiguchi
  0 siblings, 1 reply; 13+ messages in thread
From: Dan Carpenter @ 2015-05-12  9:13 UTC (permalink / raw)
  To: Naoya Horiguchi; +Cc: linux-mm, Andrew Morton

On Tue, May 12, 2015 at 02:04:55AM -0700, Naoya Horiguchi wrote:
> On Tue, May 12, 2015 at 11:43:39AM +0300, Dan Carpenter wrote:
> > On Mon, May 11, 2015 at 11:54:44PM +0000, Naoya Horiguchi wrote:
> > > @@ -1086,7 +1086,8 @@ static void dissolve_free_huge_page(struct page *page)
> > >   */
> > >  void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> > >  {
> > > -	unsigned int order = 8 * sizeof(void *);
> > > +	/* Initialized to "high enough" value which is capped later */
> > > +	unsigned int order = 8 * sizeof(void *) - 1;
> > 
> > Why not use UINT_MAX?  It's more clear that it's not valid that way.
> 
> It's OK if code checker doesn't show "too much right shift" warning.

It's a complicated question to answer, but with the new VM_BUG_ON() it
won't warn.

regards,
dan carpenter


* Re: mm: memory-hotplug: enable memory hotplug to handle hugepage
  2015-05-12  9:13       ` Dan Carpenter
@ 2015-05-12  9:16         ` Naoya Horiguchi
  2015-05-12  9:20           ` [PATCH] mm/hugetlb: initialize order with UINT_MAX in dissolve_free_huge_pages() Naoya Horiguchi
  0 siblings, 1 reply; 13+ messages in thread
From: Naoya Horiguchi @ 2015-05-12  9:16 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: linux-mm, Andrew Morton

On Tue, May 12, 2015 at 12:13:49PM +0300, Dan Carpenter wrote:
> On Tue, May 12, 2015 at 02:04:55AM -0700, Naoya Horiguchi wrote:
> > On Tue, May 12, 2015 at 11:43:39AM +0300, Dan Carpenter wrote:
> > > On Mon, May 11, 2015 at 11:54:44PM +0000, Naoya Horiguchi wrote:
> > > > @@ -1086,7 +1086,8 @@ static void dissolve_free_huge_page(struct page *page)
> > > >   */
> > > >  void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> > > >  {
> > > > -	unsigned int order = 8 * sizeof(void *);
> > > > +	/* Initialized to "high enough" value which is capped later */
> > > > +	unsigned int order = 8 * sizeof(void *) - 1;
> > > 
> > > Why not use UINT_MAX?  It's more clear that it's not valid that way.
> > 
> > It's OK if code checker doesn't show "too much right shift" warning.
> 
> It's a comlicated question to answer but with the new VM_BUG_ON() then
> it won't warn.

Good, so I'll repost the patch soon with a revised description.

Thanks,
Naoya

* [PATCH] mm/hugetlb: initialize order with UINT_MAX in dissolve_free_huge_pages()
  2015-05-12  9:16         ` Naoya Horiguchi
@ 2015-05-12  9:20           ` Naoya Horiguchi
  2015-05-12 23:15             ` Andrew Morton
  0 siblings, 1 reply; 13+ messages in thread
From: Naoya Horiguchi @ 2015-05-12  9:20 UTC (permalink / raw)
  To: Dan Carpenter; +Cc: linux-mm, Andrew Morton

Currently the initial value of order in dissolve_free_huge_pages is 64 or 32,
which leads to the following static checker warning:

  mm/hugetlb.c:1203 dissolve_free_huge_pages()
  warn: potential right shift more than type allows '9,18,64'

This is a potential infinite loop, because 1 << order (== 0) is used
in a for-loop like this:

  for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
      ...

So this patch simply avoids the risk by initializing with UINT_MAX.

Fixes: c8721bbbdd36 ("mm: memory-hotplug: enable memory hotplug to handle hugepage")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 mm/hugetlb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 271e4432734c..804437505a84 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1188,7 +1188,7 @@ static void dissolve_free_huge_page(struct page *page)
  */
 void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
-	unsigned int order = 8 * sizeof(void *);
+	unsigned int order = UINT_MAX;
 	unsigned long pfn;
 	struct hstate *h;
 
@@ -1200,6 +1200,7 @@ void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 		if (order > huge_page_order(h))
 			order = huge_page_order(h);
 	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
+	VM_BUG_ON(order == UINT_MAX);
 	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
 		dissolve_free_huge_page(pfn_to_page(pfn));
 }
-- 
2.1.0


* Re: [PATCH] mm/hugetlb: initialize order with UINT_MAX in dissolve_free_huge_pages()
  2015-05-12  9:20           ` [PATCH] mm/hugetlb: initialize order with UINT_MAX in dissolve_free_huge_pages() Naoya Horiguchi
@ 2015-05-12 23:15             ` Andrew Morton
  2015-05-13  1:44               ` Naoya Horiguchi
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Morton @ 2015-05-12 23:15 UTC (permalink / raw)
  To: Naoya Horiguchi; +Cc: Dan Carpenter, linux-mm

On Tue, 12 May 2015 09:20:35 +0000 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:

> Currently the initial value of order in dissolve_free_huge_page is 64 or 32,
> which leads to the following warning in static checker:
> 
>   mm/hugetlb.c:1203 dissolve_free_huge_pages()
>   warn: potential right shift more than type allows '9,18,64'
> 
> This is a potential risk of infinite loop, because 1 << order (== 0) is used
> in for-loop like this:
> 
>   for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
>       ...
> 
> So this patch simply avoids the risk by initializing with UINT_MAX.
> 
> ..
>
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1188,7 +1188,7 @@ static void dissolve_free_huge_page(struct page *page)
>   */
>  void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
>  {
> -	unsigned int order = 8 * sizeof(void *);
> +	unsigned int order = UINT_MAX;
>  	unsigned long pfn;
>  	struct hstate *h;
>  
> @@ -1200,6 +1200,7 @@ void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
>  		if (order > huge_page_order(h))
>  			order = huge_page_order(h);
>  	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
> +	VM_BUG_ON(order == UINT_MAX);
>  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
>  		dissolve_free_huge_page(pfn_to_page(pfn));

Do we need to calculate this each time?  Can it be done in
hugetlb_init_hstates(), save the result in a global?


* Re: [PATCH] mm/hugetlb: initialize order with UINT_MAX in dissolve_free_huge_pages()
  2015-05-12 23:15             ` Andrew Morton
@ 2015-05-13  1:44               ` Naoya Horiguchi
  2015-05-13 20:55                 ` Andrew Morton
  0 siblings, 1 reply; 13+ messages in thread
From: Naoya Horiguchi @ 2015-05-13  1:44 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Dan Carpenter, linux-mm

On Tue, May 12, 2015 at 04:15:11PM -0700, Andrew Morton wrote:
> On Tue, 12 May 2015 09:20:35 +0000 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:
> 
> > Currently the initial value of order in dissolve_free_huge_page is 64 or 32,
> > which leads to the following warning in static checker:
> > 
> >   mm/hugetlb.c:1203 dissolve_free_huge_pages()
> >   warn: potential right shift more than type allows '9,18,64'
> > 
> > This is a potential risk of infinite loop, because 1 << order (== 0) is used
> > in for-loop like this:
> > 
> >   for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
> >       ...
> > 
> > So this patch simply avoids the risk by initializing with UINT_MAX.
> > 
> > ..
> >
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1188,7 +1188,7 @@ static void dissolve_free_huge_page(struct page *page)
> >   */
> >  void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> >  {
> > -	unsigned int order = 8 * sizeof(void *);
> > +	unsigned int order = UINT_MAX;
> >  	unsigned long pfn;
> >  	struct hstate *h;
> >  
> > @@ -1200,6 +1200,7 @@ void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> >  		if (order > huge_page_order(h))
> >  			order = huge_page_order(h);
> >  	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
> > +	VM_BUG_ON(order == UINT_MAX);
> >  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
> >  		dissolve_free_huge_page(pfn_to_page(pfn));
> 
> Do we need to calculate this each time?  Can it be done in
> hugetlb_init_hstates(), save the result in a global?

Yes, it should work. How about the following?
This adds 4 bytes to .data due to a new global variable, but reduces .text
by 47 bytes, so it's a win in total.

   text    data     bss     dec     hex filename                         
  28313     469   84236  113018   1b97a mm/hugetlb.o (above patch)
  28266     473   84236  112975   1b94f mm/hugetlb.o (below patch)

---
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 271e4432734c..fecb8bbfe11e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -40,6 +40,7 @@ int hugepages_treat_as_movable;
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
 struct hstate hstates[HUGE_MAX_HSTATE];
+unsigned int minimum_order __read_mostly;
 
 __initdata LIST_HEAD(huge_boot_pages);
 
@@ -1188,19 +1189,13 @@ static void dissolve_free_huge_page(struct page *page)
  */
 void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
-	unsigned int order = 8 * sizeof(void *);
 	unsigned long pfn;
-	struct hstate *h;
 
 	if (!hugepages_supported())
 		return;
 
-	/* Set scan step to minimum hugepage size */
-	for_each_hstate(h)
-		if (order > huge_page_order(h))
-			order = huge_page_order(h);
-	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
-	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
+	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << minimum_order));
+	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
 		dissolve_free_huge_page(pfn_to_page(pfn));
 }
 
@@ -1626,11 +1621,16 @@ static void __init hugetlb_init_hstates(void)
 {
 	struct hstate *h;
 
+	minimum_order = UINT_MAX;
 	for_each_hstate(h) {
+		if (minimum_order > huge_page_order(h))
+			minimum_order = huge_page_order(h);
+
 		/* oversize hugepages were init'ed in early boot */
 		if (!hstate_is_gigantic(h))
 			hugetlb_hstate_alloc_pages(h);
 	}
+	VM_BUG_ON(minimum_order == UINT_MAX);
 }
 
 static char * __init memfmt(char *buf, unsigned long n)
-- 
2.1.0


* Re: [PATCH] mm/hugetlb: initialize order with UINT_MAX in dissolve_free_huge_pages()
  2015-05-13  1:44               ` Naoya Horiguchi
@ 2015-05-13 20:55                 ` Andrew Morton
  2015-05-14  6:15                   ` [PATCH] mm/hugetlb: introduce minimum hugepage order Naoya Horiguchi
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Morton @ 2015-05-13 20:55 UTC (permalink / raw)
  To: Naoya Horiguchi; +Cc: Dan Carpenter, linux-mm

On Wed, 13 May 2015 01:44:22 +0000 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:

> > >  			order = huge_page_order(h);
> > >  	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
> > > +	VM_BUG_ON(order == UINT_MAX);
> > >  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
> > >  		dissolve_free_huge_page(pfn_to_page(pfn));
> > 
> > Do we need to calculate this each time?  Can it be done in
> > hugetlb_init_hstates(), save the result in a global?
> 
> Yes, it should work. How about the following?
> This adds 4bytes to .data due to a new global variable, but reduces 47 bytes
> .text size of code reduces, so it's a win in total.
> 
>    text    data     bss     dec     hex filename                         
>   28313     469   84236  113018   1b97a mm/hugetlb.o (above patch)
>   28266     473   84236  112975   1b94f mm/hugetlb.o (below patch)

Looks good.  Please turn it into a real patch and send it over when
convenient?

> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -40,6 +40,7 @@ int hugepages_treat_as_movable;
>  int hugetlb_max_hstate __read_mostly;
>  unsigned int default_hstate_idx;
>  struct hstate hstates[HUGE_MAX_HSTATE];
> +unsigned int minimum_order __read_mostly;

static.

And a comment would be nice ;)

>
> ...
>
> @@ -1626,11 +1621,16 @@ static void __init hugetlb_init_hstates(void)
>  {
>  	struct hstate *h;
>  
> +	minimum_order = UINT_MAX;

Do this at compile time.

>  	for_each_hstate(h) {
> +		if (minimum_order > huge_page_order(h))
> +			minimum_order = huge_page_order(h);
> +
>  		/* oversize hugepages were init'ed in early boot */
>  		if (!hstate_is_gigantic(h))
>  			hugetlb_hstate_alloc_pages(h);
>  	}
> +	VM_BUG_ON(minimum_order == UINT_MAX);

Is the system hopelessly screwed up when this happens, or will it still
be able to boot up and do useful things?

If the system is hopelessly broken then BUG_ON or, better, panic should
be used here.  But if there's still potential to do useful things then
I guess VM_BUG_ON is appropriate.


>  }


* [PATCH] mm/hugetlb: introduce minimum hugepage order
  2015-05-13 20:55                 ` Andrew Morton
@ 2015-05-14  6:15                   ` Naoya Horiguchi
  0 siblings, 0 replies; 13+ messages in thread
From: Naoya Horiguchi @ 2015-05-14  6:15 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Dan Carpenter, linux-mm

On Wed, May 13, 2015 at 01:55:56PM -0700, Andrew Morton wrote:
> On Wed, 13 May 2015 01:44:22 +0000 Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> wrote:
> 
> > > >  			order = huge_page_order(h);
> > > >  	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
> > > > +	VM_BUG_ON(order == UINT_MAX);
> > > >  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
> > > >  		dissolve_free_huge_page(pfn_to_page(pfn));
> > > 
> > > Do we need to calculate this each time?  Can it be done in
> > > hugetlb_init_hstates(), save the result in a global?
> > 
> > Yes, it should work. How about the following?
> > This adds 4bytes to .data due to a new global variable, but reduces 47 bytes
> > .text size of code reduces, so it's a win in total.
> > 
> >    text    data     bss     dec     hex filename                         
> >   28313     469   84236  113018   1b97a mm/hugetlb.o (above patch)
> >   28266     473   84236  112975   1b94f mm/hugetlb.o (below patch)
> 
> Looks good.  Please turn it into a real patch and send it over when
> convenient?

Yes, the patch is attached below.

> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -40,6 +40,7 @@ int hugepages_treat_as_movable;
> >  int hugetlb_max_hstate __read_mostly;
> >  unsigned int default_hstate_idx;
> >  struct hstate hstates[HUGE_MAX_HSTATE];
> > +unsigned int minimum_order __read_mostly;
> 
> static.
> 
> And a comment would be nice ;)

OK, fixed.

> 
> >
> > ...
> >
> > @@ -1626,11 +1621,16 @@ static void __init hugetlb_init_hstates(void)
> >  {
> >  	struct hstate *h;
> >  
> > +	minimum_order = UINT_MAX;
> 
> Do this at compile time.

OK.

> >  	for_each_hstate(h) {
> > +		if (minimum_order > huge_page_order(h))
> > +			minimum_order = huge_page_order(h);
> > +
> >  		/* oversize hugepages were init'ed in early boot */
> >  		if (!hstate_is_gigantic(h))
> >  			hugetlb_hstate_alloc_pages(h);
> >  	}
> > +	VM_BUG_ON(minimum_order == UINT_MAX);
> 
> Is the system hopelessly screwed up when this happens, or will it still
> be able to boot up and do useful things?
> 
> If the system is hopelessly broken then BUG_ON or, better, panic should
> be used here.  But if there's still potential to do useful things then
> I guess VM_BUG_ON is appropriate.

When this happens, the hugetlb subsystem is broken, but the system can run
as usual, so this is not critical. So I think VM_BUG_ON is OK.

Thanks,
Naoya Horiguchi
---
From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Subject: [PATCH] mm/hugetlb: introduce minimum hugepage order

Currently the initial value of order in dissolve_free_huge_pages is 64 or 32,
which leads to the following static checker warning:

  mm/hugetlb.c:1203 dissolve_free_huge_pages()
  warn: potential right shift more than type allows '9,18,64'

This is a potential infinite loop, because 1 << order (== 0) is used
in a for-loop like this:

  for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
      ...

So this patch fixes it by using a global minimum_order calculated at boot time.

    text    data     bss     dec     hex filename
   28313     469   84236  113018   1b97a mm/hugetlb.o
   28256     473   84236  112965   1b945 mm/hugetlb.o (patched)

Fixes: c8721bbbdd36 ("mm: memory-hotplug: enable memory hotplug to handle hugepage")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
---
 mm/hugetlb.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 271e4432734c..8c4c1f9f9a9a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -40,6 +40,11 @@ int hugepages_treat_as_movable;
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
 struct hstate hstates[HUGE_MAX_HSTATE];
+/*
+ * Minimum page order among possible hugepage sizes, set to a proper value
+ * at boot time.
+ */
+static unsigned int minimum_order __read_mostly = UINT_MAX;
 
 __initdata LIST_HEAD(huge_boot_pages);
 
@@ -1188,19 +1193,13 @@ static void dissolve_free_huge_page(struct page *page)
  */
 void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
-	unsigned int order = 8 * sizeof(void *);
 	unsigned long pfn;
-	struct hstate *h;
 
 	if (!hugepages_supported())
 		return;
 
-	/* Set scan step to minimum hugepage size */
-	for_each_hstate(h)
-		if (order > huge_page_order(h))
-			order = huge_page_order(h);
-	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
-	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
+	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << minimum_order));
+	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
 		dissolve_free_huge_page(pfn_to_page(pfn));
 }
 
@@ -1627,10 +1626,14 @@ static void __init hugetlb_init_hstates(void)
 	struct hstate *h;
 
 	for_each_hstate(h) {
+		if (minimum_order > huge_page_order(h))
+			minimum_order = huge_page_order(h);
+
 		/* oversize hugepages were init'ed in early boot */
 		if (!hstate_is_gigantic(h))
 			hugetlb_hstate_alloc_pages(h);
 	}
+	VM_BUG_ON(minimum_order == UINT_MAX);
 }
 
 static char * __init memfmt(char *buf, unsigned long n)
-- 
2.1.0

