linux-kernel.vger.kernel.org archive mirror
* weird allocation pattern in alloc_ila_locks
@ 2017-01-06  9:51 Michal Hocko
  2017-01-06 10:04 ` Michal Hocko
  0 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2017-01-06  9:51 UTC (permalink / raw)
  To: Tom Herbert; +Cc: linux-mm, LKML

Hi Tom,
I am currently looking at kmalloc with vmalloc fallback users [1]
and came across alloc_ila_locks, which is using a pretty unusual
allocation pattern - it seems to be a c&p of alloc_bucket_locks, which
is doing a similar thing - except the latter has to support GFP_ATOMIC.

I am really wondering what is the point of 
#ifdef CONFIG_NUMA
		if (size * sizeof(spinlock_t) > PAGE_SIZE)
			ilan->locks = vmalloc(size * sizeof(spinlock_t));
		else
#endif

there doesn't seem to be any NUMA awareness in the ifdef code so I can
only assume that the intention is to reflect that NUMA machines tend to
have more CPUs. On the other hand nr_pcpus is limited to 32 so this
doesn't seem to be the case here...
Can we just get rid of this ugly and confusing code and do something as
simple as
diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
index af8f52ee7180..1d86ceae61b3 100644
--- a/net/ipv6/ila/ila_xlat.c
+++ b/net/ipv6/ila/ila_xlat.c
@@ -41,13 +41,11 @@ static int alloc_ila_locks(struct ila_net *ilan)
 	size = roundup_pow_of_two(nr_pcpus * LOCKS_PER_CPU);
 
 	if (sizeof(spinlock_t) != 0) {
-#ifdef CONFIG_NUMA
-		if (size * sizeof(spinlock_t) > PAGE_SIZE)
-			ilan->locks = vmalloc(size * sizeof(spinlock_t));
-		else
-#endif
 		ilan->locks = kmalloc_array(size, sizeof(spinlock_t),
-					    GFP_KERNEL);
+					    GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
+		if (!ilan->locks)
+			ilan->locks = vmalloc(size * sizeof(spinlock_t));
+
 		if (!ilan->locks)
 			return -ENOMEM;
 		for (i = 0; i < size; i++)

which I would then simply turn into kvmalloc()?


[1] http://lkml.kernel.org/r/20170102133700.1734-1-mhocko@kernel.org
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: weird allocation pattern in alloc_ila_locks
  2017-01-06  9:51 weird allocation pattern in alloc_ila_locks Michal Hocko
@ 2017-01-06 10:04 ` Michal Hocko
  2017-01-06 12:16   ` Michal Hocko
  0 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2017-01-06 10:04 UTC (permalink / raw)
  To: Tom Herbert; +Cc: linux-mm, LKML

On Fri 06-01-17 10:51:15, Michal Hocko wrote:
> Hi Tom,
> I am currently looking at kmalloc with vmalloc fallback users [1]
> and came across alloc_ila_locks, which is using a pretty unusual
> allocation pattern - it seems to be a c&p of alloc_bucket_locks, which
> is doing a similar thing - except the latter has to support GFP_ATOMIC.
> 
> I am really wondering what is the point of 
> #ifdef CONFIG_NUMA
> 		if (size * sizeof(spinlock_t) > PAGE_SIZE)
> 			ilan->locks = vmalloc(size * sizeof(spinlock_t));
> 		else
> #endif
> 
> there doesn't seem to be any NUMA awareness in the ifdef code so I can
> only assume that the intention is to reflect that NUMA machines tend to
> have more CPUs. On the other hand nr_pcpus is limited to 32 so this
> doesn't seem to be the case here...
> Can we just get rid of this ugly and confusing code and do something as
> simple as
> diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
> index af8f52ee7180..1d86ceae61b3 100644
> --- a/net/ipv6/ila/ila_xlat.c
> +++ b/net/ipv6/ila/ila_xlat.c
> @@ -41,13 +41,11 @@ static int alloc_ila_locks(struct ila_net *ilan)
>  	size = roundup_pow_of_two(nr_pcpus * LOCKS_PER_CPU);
>  
>  	if (sizeof(spinlock_t) != 0) {
> -#ifdef CONFIG_NUMA
> -		if (size * sizeof(spinlock_t) > PAGE_SIZE)
> -			ilan->locks = vmalloc(size * sizeof(spinlock_t));
> -		else
> -#endif
>  		ilan->locks = kmalloc_array(size, sizeof(spinlock_t),
> -					    GFP_KERNEL);
> +					    GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
> +		if (!ilan->locks)
> +			ilan->locks = vmalloc(size * sizeof(spinlock_t));
> +
>  		if (!ilan->locks)
>  			return -ENOMEM;
>  		for (i = 0; i < size; i++)
> 
> which I would then simply turn into kvmalloc()?

The patch would look as follows:
---
From 37f4478c6d3540664741c5172b29a5a5f6ee3a14 Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@suse.com>
Date: Fri, 6 Jan 2017 10:52:20 +0100
Subject: [PATCH] ila: simplify a strange allocation pattern

alloc_ila_locks seems to have been c&p'd from the alloc_bucket_locks
allocation pattern, which is quite unusual. We prefer vmalloc when
CONFIG_NUMA is enabled, which doesn't make much sense because there is
no special NUMA locality handled in that code path. Let's just simplify
the code and use the kvmalloc helper, which is a transparent way to use
kmalloc with a vmalloc fallback.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 net/ipv6/ila/ila_xlat.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
index af8f52ee7180..2fd5ca151dcf 100644
--- a/net/ipv6/ila/ila_xlat.c
+++ b/net/ipv6/ila/ila_xlat.c
@@ -41,13 +41,7 @@ static int alloc_ila_locks(struct ila_net *ilan)
 	size = roundup_pow_of_two(nr_pcpus * LOCKS_PER_CPU);
 
 	if (sizeof(spinlock_t) != 0) {
-#ifdef CONFIG_NUMA
-		if (size * sizeof(spinlock_t) > PAGE_SIZE)
-			ilan->locks = vmalloc(size * sizeof(spinlock_t));
-		else
-#endif
-		ilan->locks = kmalloc_array(size, sizeof(spinlock_t),
-					    GFP_KERNEL);
+		ilan->locks = kvmalloc(size * sizeof(spinlock_t), GFP_KERNEL);
 		if (!ilan->locks)
 			return -ENOMEM;
 		for (i = 0; i < size; i++)
-- 
2.11.0

-- 
Michal Hocko
SUSE Labs


* Re: weird allocation pattern in alloc_ila_locks
  2017-01-06 10:04 ` Michal Hocko
@ 2017-01-06 12:16   ` Michal Hocko
  2017-01-06 22:14     ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2017-01-06 12:16 UTC (permalink / raw)
  To: Tom Herbert; +Cc: linux-mm, LKML

I was thinking about the rhashtable which was the source of the c&p and
it can be simplified as well.
---
From 555543604f5f020284ea85d928d52f6a55fde7ca Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@suse.com>
Date: Fri, 6 Jan 2017 13:12:31 +0100
Subject: [PATCH] rhashtable: simplify a strange allocation pattern

The alloc_bucket_locks allocation pattern is quite unusual. We prefer
vmalloc when CONFIG_NUMA is enabled, which doesn't make much sense
because there is no special NUMA locality handled in that code path.
Let's just simplify the code and use the kvmalloc helper, which is a
transparent way to use kmalloc with a vmalloc fallback, when the caller
is allowed to block, and fall back to kmalloc_array with the given
flags otherwise.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 lib/rhashtable.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 32d0ad058380..4d3886b6ab7d 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -77,16 +77,9 @@ static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl,
 	size = min_t(unsigned int, size, tbl->size >> 1);
 
 	if (sizeof(spinlock_t) != 0) {
-		tbl->locks = NULL;
-#ifdef CONFIG_NUMA
-		if (size * sizeof(spinlock_t) > PAGE_SIZE &&
-		    gfp == GFP_KERNEL)
-			tbl->locks = vmalloc(size * sizeof(spinlock_t));
-#endif
-		if (gfp != GFP_KERNEL)
-			gfp |= __GFP_NOWARN | __GFP_NORETRY;
-
-		if (!tbl->locks)
+		if (gfpflags_allow_blocking(gfp))
+			tbl->locks = kvmalloc(size * sizeof(spinlock_t), gfp);
+		else
 			tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
 						   gfp);
 		if (!tbl->locks)
-- 
2.11.0

-- 
Michal Hocko
SUSE Labs


* Re: weird allocation pattern in alloc_ila_locks
  2017-01-06 12:16   ` Michal Hocko
@ 2017-01-06 22:14     ` Eric Dumazet
  2017-01-07  9:27       ` Michal Hocko
  0 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2017-01-06 22:14 UTC (permalink / raw)
  To: Michal Hocko; +Cc: Tom Herbert, linux-mm, LKML, edumazet

On Fri, 2017-01-06 at 13:16 +0100, Michal Hocko wrote:
> I was thinking about the rhashtable which was the source of the c&p and
> it can be simplified as well.
> ---
> From 555543604f5f020284ea85d928d52f6a55fde7ca Mon Sep 17 00:00:00 2001
> From: Michal Hocko <mhocko@suse.com>
> Date: Fri, 6 Jan 2017 13:12:31 +0100
> Subject: [PATCH] rhashtable: simplify a strange allocation pattern
> 
> alloc_bucket_locks allocation pattern is quite unusual. We are
> preferring vmalloc when CONFIG_NUMA is enabled which doesn't make much
> sense because there is no special NUMA locality handled in that code
> path. Let's just simplify the code and use kvmalloc helper, which is a
> transparent way to use kmalloc with a vmalloc fallback, when the caller
> is allowed to block, and fall back to kmalloc_array otherwise.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  lib/rhashtable.c | 13 +++----------
>  1 file changed, 3 insertions(+), 10 deletions(-)
> 
> diff --git a/lib/rhashtable.c b/lib/rhashtable.c
> index 32d0ad058380..4d3886b6ab7d 100644
> --- a/lib/rhashtable.c
> +++ b/lib/rhashtable.c
> @@ -77,16 +77,9 @@ static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl,
>  	size = min_t(unsigned int, size, tbl->size >> 1);
>  
>  	if (sizeof(spinlock_t) != 0) {
> -		tbl->locks = NULL;
> -#ifdef CONFIG_NUMA
> -		if (size * sizeof(spinlock_t) > PAGE_SIZE &&
> -		    gfp == GFP_KERNEL)
> -			tbl->locks = vmalloc(size * sizeof(spinlock_t));
> -#endif
> -		if (gfp != GFP_KERNEL)
> -			gfp |= __GFP_NOWARN | __GFP_NORETRY;
> -
> -		if (!tbl->locks)
> +		if (gfpflags_allow_blocking(gfp))
> +			tbl->locks = kvmalloc(size * sizeof(spinlock_t), gfp);
> +		else
>  			tbl->locks = kmalloc_array(size, sizeof(spinlock_t),


I believe the intent was to get NUMA spreading, a bit like what we have
in alloc_large_system_hash() when hashdist == HASHDIST_DEFAULT

For hash tables that are not attached to a single NUMA node, this might
make sense.


* Re: weird allocation pattern in alloc_ila_locks
  2017-01-06 22:14     ` Eric Dumazet
@ 2017-01-07  9:27       ` Michal Hocko
  2017-01-07 18:37         ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2017-01-07  9:27 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Tom Herbert, linux-mm, LKML, edumazet

On Fri 06-01-17 14:14:49, Eric Dumazet wrote:
> On Fri, 2017-01-06 at 13:16 +0100, Michal Hocko wrote:
> > I was thinking about the rhashtable which was the source of the c&p and
> > it can be simplified as well.
> > ---
> > From 555543604f5f020284ea85d928d52f6a55fde7ca Mon Sep 17 00:00:00 2001
> > From: Michal Hocko <mhocko@suse.com>
> > Date: Fri, 6 Jan 2017 13:12:31 +0100
> > Subject: [PATCH] rhashtable: simplify a strange allocation pattern
> > 
> > alloc_bucket_locks allocation pattern is quite unusual. We are
> > preferring vmalloc when CONFIG_NUMA is enabled which doesn't make much
> > sense because there is no special NUMA locality handled in that code
> > path. Let's just simplify the code and use kvmalloc helper, which is a
> > transparent way to use kmalloc with a vmalloc fallback, when the caller
> > is allowed to block, and fall back to kmalloc_array otherwise.
> > 
> > Signed-off-by: Michal Hocko <mhocko@suse.com>
> > ---
> >  lib/rhashtable.c | 13 +++----------
> >  1 file changed, 3 insertions(+), 10 deletions(-)
> > 
> > diff --git a/lib/rhashtable.c b/lib/rhashtable.c
> > index 32d0ad058380..4d3886b6ab7d 100644
> > --- a/lib/rhashtable.c
> > +++ b/lib/rhashtable.c
> > @@ -77,16 +77,9 @@ static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl,
> >  	size = min_t(unsigned int, size, tbl->size >> 1);
> >  
> >  	if (sizeof(spinlock_t) != 0) {
> > -		tbl->locks = NULL;
> > -#ifdef CONFIG_NUMA
> > -		if (size * sizeof(spinlock_t) > PAGE_SIZE &&
> > -		    gfp == GFP_KERNEL)
> > -			tbl->locks = vmalloc(size * sizeof(spinlock_t));
> > -#endif
> > -		if (gfp != GFP_KERNEL)
> > -			gfp |= __GFP_NOWARN | __GFP_NORETRY;
> > -
> > -		if (!tbl->locks)
> > +		if (gfpflags_allow_blocking(gfp))
> > +			tbl->locks = kvmalloc(size * sizeof(spinlock_t), gfp);
> > +		else
> >  			tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
> 
> 
> I believe the intent was to get NUMA spreading, a bit like what we have
> in alloc_large_system_hash() when hashdist == HASHDIST_DEFAULT

Hmm, I am not sure this works as expected then. Because it is more
likely that all pages backing the vmallocked area will come from the
local node than spread around more nodes. Or did I miss your point?
-- 
Michal Hocko
SUSE Labs


* Re: weird allocation pattern in alloc_ila_locks
  2017-01-07  9:27       ` Michal Hocko
@ 2017-01-07 18:37         ` Eric Dumazet
  2017-01-09  9:58           ` Michal Hocko
  0 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2017-01-07 18:37 UTC (permalink / raw)
  To: Michal Hocko; +Cc: Eric Dumazet, Tom Herbert, linux-mm, LKML

On Sat, Jan 7, 2017 at 1:27 AM, Michal Hocko <mhocko@kernel.org> wrote:
> On Fri 06-01-17 14:14:49, Eric Dumazet wrote:

>> I believe the intent was to get NUMA spreading, a bit like what we have
>> in alloc_large_system_hash() when hashdist == HASHDIST_DEFAULT
>
> Hmm, I am not sure this works as expected then. Because it is more
> likely that all pages backing the vmallocked area will come from the
> local node than spread around more nodes. Or did I miss your point?

Well, you missed that vmalloc() is aware of NUMA policies.

If current process has requested interleave on 2 nodes (as it is done
at boot time on a dual node system),
then vmalloc() of 8 pages will allocate 4 pages on each node.

If you force/attempt a kmalloc() of one order-3 page, chances are very
high to get all memory on one single node.


* Re: weird allocation pattern in alloc_ila_locks
  2017-01-07 18:37         ` Eric Dumazet
@ 2017-01-09  9:58           ` Michal Hocko
  2017-01-09 14:31             ` Eric Dumazet
  0 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2017-01-09  9:58 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Eric Dumazet, Tom Herbert, linux-mm, LKML

On Sat 07-01-17 10:37:41, Eric Dumazet wrote:
> On Sat, Jan 7, 2017 at 1:27 AM, Michal Hocko <mhocko@kernel.org> wrote:
> > On Fri 06-01-17 14:14:49, Eric Dumazet wrote:
> 
> >> I believe the intent was to get NUMA spreading, a bit like what we have
> >> in alloc_large_system_hash() when hashdist == HASHDIST_DEFAULT
> >
> > Hmm, I am not sure this works as expected then. Because it is more
> > likely that all pages backing the vmallocked area will come from the
> > local node than spread around more nodes. Or did I miss your point?
> 
> Well, you missed that vmalloc() is aware of NUMA policies.

You are right. I have missed that alloc_page ends up using
alloc_pages_current for CONFIG_NUMA rather than alloc_pages_node.

> If current process has requested interleave on 2 nodes (as it is done
> at boot time on a dual node system),
> then vmalloc() of 8 pages will allocate 4 pages on each node.

On the other hand alloc_ila_locks only goes over a single page when
lockdep is enabled, and I am wondering whether this subtle NUMA magic
is any win...

Also this seems to be init code, so I assume a modprobe would have to
set a non-default policy to make use of it. Does anybody do that out
there?

alloc_bucket_locks is a bit more complicated because it is not only
called from the init context. But considering that rhashtable_shrink is
called from a worker context - where no mem policy can be assumed - I
am wondering whether the code really works as expected. To me it sounds
like it is trying to be clever while the outcome doesn't really do what
is intended.

Would you mind if I just convert it to kvmalloc and make it easier to
understand?
-- 
Michal Hocko
SUSE Labs


* Re: weird allocation pattern in alloc_ila_locks
  2017-01-09  9:58           ` Michal Hocko
@ 2017-01-09 14:31             ` Eric Dumazet
  2017-01-09 14:41               ` Michal Hocko
  0 siblings, 1 reply; 11+ messages in thread
From: Eric Dumazet @ 2017-01-09 14:31 UTC (permalink / raw)
  To: Michal Hocko; +Cc: Eric Dumazet, Tom Herbert, linux-mm, LKML

On Mon, Jan 9, 2017 at 1:58 AM, Michal Hocko <mhocko@kernel.org> wrote:

>
> Also this seems to be an init code so I assume a modprobe would have to
> set a non-default policy to make use of it. Does anybody do that out
> there?

This is not init code. Whole point of rhashtable is that the resizes
can happen anytime.
At boot time, most rhashtable would be tiny.
Then, when load permits, hashtables grow in size.

Yes, some applications make some specific choices about NUMA policies.

It would be perfectly possible to amend rhashtable to make sure that
allocations can respect this strategy.
(ie the NUMA policy could be an attribute of the rhashtable, instead
of being implicitly given by current process)


* Re: weird allocation pattern in alloc_ila_locks
  2017-01-09 14:31             ` Eric Dumazet
@ 2017-01-09 14:41               ` Michal Hocko
  2017-01-09 14:43                 ` [PATCH 1/2] rhashtable: simplify a strange allocation pattern Michal Hocko
  0 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2017-01-09 14:41 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Eric Dumazet, Tom Herbert, linux-mm, LKML

On Mon 09-01-17 06:31:50, Eric Dumazet wrote:
> On Mon, Jan 9, 2017 at 1:58 AM, Michal Hocko <mhocko@kernel.org> wrote:
> 
> >
> > Also this seems to be an init code so I assume a modprobe would have to
> > set a non-default policy to make use of it. Does anybody do that out
> > there?
> 
> This is not init code. Whole point of rhashtable is that the resizes
> can happen anytime.
> At boot time, most rhashtable would be tiny.
> Then, when load permits, hashtables grow in size.

OK, we are mixing two things here. I was talking about alloc_ila_locks,
which is init code AFAIU.

If you are talking about alloc_bucket_locks then I would argue that the
current code doesn't work as expected as the rehash happens from a
kernel worker context and so the numa policy is out of control.

I will reply to this email with the patches I have pending here and plan
to post just to make sure we are at the same page.

-- 
Michal Hocko
SUSE Labs


* [PATCH 1/2] rhashtable: simplify a strange allocation pattern
  2017-01-09 14:41               ` Michal Hocko
@ 2017-01-09 14:43                 ` Michal Hocko
  2017-01-09 14:43                   ` [PATCH 2/2] ila: " Michal Hocko
  0 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2017-01-09 14:43 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Tom Herbert, linux-mm, LKML, Michal Hocko

From: Michal Hocko <mhocko@suse.com>

The alloc_bucket_locks allocation pattern is quite unusual. We prefer
vmalloc when CONFIG_NUMA is enabled. The rationale is that vmalloc will
respect the memory policy of the current process and so the backing
memory will get distributed over multiple nodes if the requester is
configured properly. At least that is the intention; in reality
rhashtable is shrunk and expanded from a kernel worker, so no mempolicy
can be assumed.

Let's just simplify the code and use the kvmalloc helper, which is a
transparent way to use kmalloc with a vmalloc fallback, when the caller
is allowed to block, and fall back to kmalloc_array with the given
flags otherwise.

Cc: Tom Herbert <tom@herbertland.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 lib/rhashtable.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 32d0ad058380..1a487ea70829 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -77,16 +77,9 @@ static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl,
 	size = min_t(unsigned int, size, tbl->size >> 1);
 
 	if (sizeof(spinlock_t) != 0) {
-		tbl->locks = NULL;
-#ifdef CONFIG_NUMA
-		if (size * sizeof(spinlock_t) > PAGE_SIZE &&
-		    gfp == GFP_KERNEL)
-			tbl->locks = vmalloc(size * sizeof(spinlock_t));
-#endif
-		if (gfp != GFP_KERNEL)
-			gfp |= __GFP_NOWARN | __GFP_NORETRY;
-
-		if (!tbl->locks)
+		if (gfpflags_allow_blocking(gfp))
+			tbl->locks = kvmalloc(size * sizeof(spinlock_t), gfp);
+		else
 			tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
 						   gfp);
 		if (!tbl->locks)
-- 
2.11.0


* [PATCH 2/2] ila: simplify a strange allocation pattern
  2017-01-09 14:43                 ` [PATCH 1/2] rhashtable: simplify a strange allocation pattern Michal Hocko
@ 2017-01-09 14:43                   ` Michal Hocko
  0 siblings, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2017-01-09 14:43 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Tom Herbert, linux-mm, LKML, Michal Hocko

From: Michal Hocko <mhocko@suse.com>

alloc_ila_locks seems to have been c&p'd from the alloc_bucket_locks
allocation pattern, which is quite unusual. The default allocation size
is 320 * sizeof(spinlock_t), which is sub page unless lockdep is
enabled, in which case the performance benefit is really questionable
and not worth the subtle code IMHO. Also note that the context calling
ila_init_net (modprobe or a task creating a net namespace) would have
to set a non-default NUMA policy for the vmalloc path to be of any use.

Let's just simplify the code and use kvmalloc helper which is a
transparent way to use kmalloc with vmalloc fallback.

Cc: Tom Herbert <tom@herbertland.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 net/ipv6/ila/ila_xlat.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
index af8f52ee7180..2fd5ca151dcf 100644
--- a/net/ipv6/ila/ila_xlat.c
+++ b/net/ipv6/ila/ila_xlat.c
@@ -41,13 +41,7 @@ static int alloc_ila_locks(struct ila_net *ilan)
 	size = roundup_pow_of_two(nr_pcpus * LOCKS_PER_CPU);
 
 	if (sizeof(spinlock_t) != 0) {
-#ifdef CONFIG_NUMA
-		if (size * sizeof(spinlock_t) > PAGE_SIZE)
-			ilan->locks = vmalloc(size * sizeof(spinlock_t));
-		else
-#endif
-		ilan->locks = kmalloc_array(size, sizeof(spinlock_t),
-					    GFP_KERNEL);
+		ilan->locks = kvmalloc(size * sizeof(spinlock_t), GFP_KERNEL);
 		if (!ilan->locks)
 			return -ENOMEM;
 		for (i = 0; i < size; i++)
-- 
2.11.0


end of thread, other threads:[~2017-01-09 14:43 UTC | newest]

Thread overview: 11+ messages
2017-01-06  9:51 weird allocation pattern in alloc_ila_locks Michal Hocko
2017-01-06 10:04 ` Michal Hocko
2017-01-06 12:16   ` Michal Hocko
2017-01-06 22:14     ` Eric Dumazet
2017-01-07  9:27       ` Michal Hocko
2017-01-07 18:37         ` Eric Dumazet
2017-01-09  9:58           ` Michal Hocko
2017-01-09 14:31             ` Eric Dumazet
2017-01-09 14:41               ` Michal Hocko
2017-01-09 14:43                 ` [PATCH 1/2] rhashtable: simplify a strange allocation pattern Michal Hocko
2017-01-09 14:43                   ` [PATCH 2/2] ila: " Michal Hocko
