* [PATCH] slub: avoid false-positive warning
@ 2016-10-24 15:56 ` Arnd Bergmann
  0 siblings, 0 replies; 4+ messages in thread
From: Arnd Bergmann @ 2016-10-24 15:56 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Arnd Bergmann, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vladimir Davydov, Jesper Dangaard Brouer,
	Johannes Weiner, Laura Abbott, Alexander Potapenko, linux-mm,
	linux-kernel

The slub allocator gives us some false-positive warnings when
CONFIG_PROFILE_ANNOTATED_BRANCHES is set, as the instrumented
unlikely() macro prevents gcc from seeing that df is fully
initialized whenever the df.page check lets execution continue:

mm/slub.c: In function ‘kmem_cache_free_bulk’:
mm/slub.c:262:23: error: ‘df.s’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
mm/slub.c:2943:3: error: ‘df.cnt’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
mm/slub.c:2933:4470: error: ‘df.freelist’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
mm/slub.c:2943:3: error: ‘df.tail’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
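
For reference, this is what the branch profiling does to the test:
with CONFIG_PROFILE_ANNOTATED_BRANCHES, unlikely() expands to roughly
the sketch below (simplified from include/linux/compiler.h; the exact
macro and the ftrace_likely_update() arguments differ between kernel
versions), so the tested value goes through a statement expression
and an out-of-line call, which is enough to hide the data flow from
gcc's -Wmaybe-uninitialized analysis:

/*
 * Simplified sketch only, not the verbatim kernel macro; some kernel
 * versions pass an extra argument to ftrace_likely_update().
 */
#define unlikely(x) ({						\
	static struct ftrace_branch_data			\
		__attribute__((__aligned__(4)))			\
		__attribute__((section("_ftrace_annotated_branch"))) \
		______f = {					\
			.func = __func__,			\
			.file = __FILE__,			\
			.line = __LINE__,			\
		};						\
	int ______r = __builtin_expect(!!(x), 0);		\
	ftrace_likely_update(&______f, ______r, 0);		\
	______r;						\
})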

I have not been able to come up with a perfect way of dealing with
this; the three options I see are:

- add a bogus initialization, which would increase the runtime overhead
- replace unlikely() with unlikely_notrace()
- remove the unlikely() annotation completely

I checked the object code for a typical x86 configuration and the
last two cases produce the same result, so I went for the last
one, which is the simplest.
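
For comparison, the first two options would have looked something like
the lines below (illustrative sketch only, not part of this patch):

	/* option 1: zero-initialize df, at the cost of extra stores
	 * in the bulk free fast path
	 */
	struct detached_freelist df = { };

	/* option 2: keep the branch hint but bypass the profiling
	 * instrumentation, so gcc can follow the data flow again
	 */
	if (unlikely_notrace(!df.page))
		continue;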

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 2b3e740609e9..68b84f93d38d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3076,7 +3076,7 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 		struct detached_freelist df;
 
 		size = build_detached_freelist(s, size, p, &df);
-		if (unlikely(!df.page))
+		if (!df.page)
 			continue;
 
 		slab_free(df.s, df.page, df.freelist, df.tail, df.cnt,_RET_IP_);
-- 
2.9.0


* Re: [PATCH] slub: avoid false-positive warning
  2016-10-24 15:56 ` Arnd Bergmann
@ 2016-10-25 11:33   ` Jesper Dangaard Brouer
  -1 siblings, 0 replies; 4+ messages in thread
From: Jesper Dangaard Brouer @ 2016-10-25 11:33 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vladimir Davydov, Johannes Weiner, Laura Abbott,
	Alexander Potapenko, linux-mm, linux-kernel, brouer,
	Alexander Duyck


On Mon, 24 Oct 2016 17:56:13 +0200 Arnd Bergmann <arnd@arndb.de> wrote:

> The slub allocator gives us some false-positive warnings when
> CONFIG_PROFILE_ANNOTATED_BRANCHES is set, as the instrumented
> unlikely() macro prevents gcc from seeing that df is fully
> initialized whenever the df.page check lets execution continue:
> 
> mm/slub.c: In function ‘kmem_cache_free_bulk’:
> mm/slub.c:262:23: error: ‘df.s’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
> mm/slub.c:2943:3: error: ‘df.cnt’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
> mm/slub.c:2933:4470: error: ‘df.freelist’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
> mm/slub.c:2943:3: error: ‘df.tail’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
> 
> I have not been able to come up with a perfect way of dealing with
> this; the three options I see are:
> 
> - add a bogus initialization, which would increase the runtime overhead
> - replace unlikely() with unlikely_notrace()
> - remove the unlikely() annotation completely
> 
> I checked the object code for a typical x86 configuration and the
> last two cases produce the same result, so I went for the last
> one, which is the simplest.

If the object code is the same, then I'm fine with this solution, as
the performance should then also be the same.

I do have a micro-benchmark module available to verify the performance:
 https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/slab_bulk_test01.c
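
For anyone without that module handy, the measurement essentially
boils down to timing paired bulk calls, roughly like the sketch below
(an illustration, not the code in slab_bulk_test01.c; the cache and
bulk size here are arbitrary):

/* Illustrative sketch of a bulk alloc+free timing loop. */
#include <linux/slab.h>
#include <linux/ktime.h>
#include <linux/printk.h>

#define BULK_SIZE 64

static void time_bulk_roundtrip(struct kmem_cache *cache)
{
	void *objs[BULK_SIZE];
	ktime_t start, end;
	int got;

	start = ktime_get();
	got = kmem_cache_alloc_bulk(cache, GFP_KERNEL, BULK_SIZE, objs);
	if (got)
		kmem_cache_free_bulk(cache, got, objs);
	end = ktime_get();

	pr_info("bulk %d alloc+free: %lld ns\n", got,
		ktime_to_ns(ktime_sub(end, start)));
}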

Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>


> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> ---
>  mm/slub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 2b3e740609e9..68b84f93d38d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3076,7 +3076,7 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
>  		struct detached_freelist df;
>  
>  		size = build_detached_freelist(s, size, p, &df);
> -		if (unlikely(!df.page))
> +		if (!df.page)
>  			continue;
>  
>  		slab_free(df.s, df.page, df.freelist, df.tail, df.cnt,_RET_IP_);



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer


