linux-numa.vger.kernel.org archive mirror
* memory leaks in numa_run_on_node
From: Andrew J. Schorr @ 2013-06-27  0:05 UTC
  To: linux-numa

Hi,

According to valgrind, this trivial program has a bunch of memory leaks:

bash$ cat numaleak.c 
#include <numa.h>

int
main(int argc, char **argv)
{
   numa_run_on_node(0);
   return 0;
}

bash-4.2$ valgrind --leak-check=full --show-reachable=yes numaleak
==17943== Memcheck, a memory error detector
==17943== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
==17943== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info
==17943== Command: numaleak
==17943== 
==17943== 
==17943== HEAP SUMMARY:
==17943==     in use at exit: 4,144 bytes in 3 blocks
==17943==   total heap usage: 35 allocs, 32 frees, 74,720 bytes allocated
==17943== 
==17943== 16 bytes in 1 blocks are still reachable in loss record 1 of 3
==17943==    at 0x4C2B78F: malloc (vg_replace_malloc.c:270)
==17943==    by 0x4E36F55: numa_bitmask_alloc (in /usr/lib64/libnuma.so.1)
==17943==    by 0x4E38B2E: numa_node_to_cpus (in /usr/lib64/libnuma.so.1)
==17943==    by 0x4E39449: numa_run_on_node (in /usr/lib64/libnuma.so.1)
==17943==    by 0x40055A: main (numaleak.c:6)
==17943== 
==17943== 32 bytes in 1 blocks are still reachable in loss record 2 of 3
==17943==    at 0x4C29A84: calloc (vg_replace_malloc.c:593)
==17943==    by 0x4E36F72: numa_bitmask_alloc (in /usr/lib64/libnuma.so.1)
==17943==    by 0x4E38B2E: numa_node_to_cpus (in /usr/lib64/libnuma.so.1)
==17943==    by 0x4E39449: numa_run_on_node (in /usr/lib64/libnuma.so.1)
==17943==    by 0x40055A: main (numaleak.c:6)
==17943== 
==17943== 4,096 bytes in 1 blocks are still reachable in loss record 3 of 3
==17943==    at 0x4C29A84: calloc (vg_replace_malloc.c:593)
==17943==    by 0x4E38668: ??? (in /usr/lib64/libnuma.so.1)
==17943==    by 0x4E38B18: numa_node_to_cpus (in /usr/lib64/libnuma.so.1)
==17943==    by 0x4E39449: numa_run_on_node (in /usr/lib64/libnuma.so.1)
==17943==    by 0x40055A: main (numaleak.c:6)
==17943== 
==17943== LEAK SUMMARY:
==17943==    definitely lost: 0 bytes in 0 blocks
==17943==    indirectly lost: 0 bytes in 0 blocks
==17943==      possibly lost: 0 bytes in 0 blocks
==17943==    still reachable: 4,144 bytes in 3 blocks
==17943==         suppressed: 0 bytes in 0 blocks
==17943== 
==17943== For counts of detected and suppressed errors, rerun with: -v
==17943== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)

This is using version 2.0.8 of numactl on a Fedora 16 system.
There are similar memory leaks in numa_run_on_node_mask().

Regards,
Andy


* Re: memory leaks in numa_run_on_node
From: Andi Kleen @ 2013-06-27  0:31 UTC
  To: Andrew J. Schorr; +Cc: linux-numa

On Wed, Jun 26, 2013 at 08:05:12PM -0400, Andrew J. Schorr wrote:
> Hi,
> 
> According to valgrind, this trivial program has a bunch of memory leaks:

libnuma caches the CPU masks of the nodes (the numa_node_to_cpus()
allocations in your trace). This is a global cache, bounded by the
number of nodes and CPUs in the system. It's not a leak that will
grow over time.

So it's a false positive.
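
A quick way to convince yourself (an untested sketch, built the same
way as numaleak.c above): call it in a loop and check that valgrind's
"in use at exit" total stays at the same few blocks instead of growing
with the iteration count:

   #include <numa.h>

   int
   main(void)
   {
      int i;

      /* A true leak would scale with the loop count; a bounded
         per-node cache leaves the totals identical to a single
         call. */
      for (i = 0; i < 1000; i++)
         numa_run_on_node(0);
      return 0;
   }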

-Andi


* Re: memory leaks in numa_run_on_node
From: Andrew J. Schorr @ 2013-06-27  0:53 UTC
  To: Andi Kleen; +Cc: linux-numa

Hi Andi,

On Thu, Jun 27, 2013 at 02:31:41AM +0200, Andi Kleen wrote:
> libnuma caches the node masks of CPUs. This is a global cache that
> is bounded by the number of CPUs. It's not a leak that will grow
> over time.
> 
> So it's a false positive.

Thanks for the quick response, and sorry for the false bug report.

What's the proper forum for asking questions about how to use the API?
The documentation is rather slender. :-)

In particular, I'd like to know whether this is valid code for binding
a process's memory to node 0, regardless of the number of nodes on the
system:

   {
      unsigned long mask = 1;
      struct bitmask nodemask;

      nodemask.size = 1;
      nodemask.maskp = &mask;
      numa_set_membind(&nodemask);
   }
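
For comparison, here is what I take to be the intended route through
the bitmask helpers (a sketch from my reading of numa.h, not verified
to be equivalent; numa_allocate_nodemask() sizes the mask for all
possible nodes, which would sidestep the size question):

   {
      struct bitmask *nm = numa_allocate_nodemask();

      numa_bitmask_setbit(nm, 0);   /* node 0 */
      numa_set_membind(nm);
      numa_bitmask_free(nm);
   }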

I tried calling set_mempolicy directly for this trivial case, but I get
EINVAL when I try:

   {
      unsigned long mask = 1;
      set_mempolicy(MPOL_BIND, &mask, 1);
   }

Increasing the 3rd arg from 1 to 2 seems to solve the problem, but I cannot
understand why from "man 2 set_mempolicy".

Any guidance on these issues or pointers to where to find help would
be appreciated.

Thanks,
Andy


* Re: memory leaks in numa_run_on_node
From: Andrew J. Schorr @ 2013-06-27  1:15 UTC
  To: Andi Kleen; +Cc: linux-numa

On Wed, Jun 26, 2013 at 08:53:36PM -0400, Andrew J. Schorr wrote:
> I tried calling set_mempolicy directly for this trivial case, but I get
> EINVAL when I try:
> 
>    {
>       unsigned long mask = 1;
>       set_mempolicy(MPOL_BIND, &mask, 1);
>    }
> 
> Increasing the 3rd arg from 1 to 2 seems to solve the problem, but I cannot
> understand why from "man 2 set_mempolicy".

In linux-3.6/mm/mempolicy.c:get_nodes(), maxnode is decremented
immediately.  That seems to explain the EINVAL I ran into, and I guess
it is also why libnuma.c:setpol adds 1 to the bitmask size.  But I
still cannot see where this off-by-one is documented in "man 2
set_mempolicy".  If anybody can enlighten me, I'd appreciate it.
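
For the record, the variant below works for me once the extra bit is
accounted for (a minimal sketch; set_mempolicy() is declared in
<numaif.h> and the wrapper needs -lnuma):

   #include <numaif.h>
   #include <stdio.h>

   int
   main(void)
   {
      unsigned long mask = 1;   /* bit 0 set => node 0 */

      /* get_nodes() decrements maxnode first, so pass the number of
         meaningful bits plus one: 2 rather than 1. */
      if (set_mempolicy(MPOL_BIND, &mask, 2) < 0)
         perror("set_mempolicy");
      return 0;
   }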

Perhaps this is why it is not recommended to use this API directly. :-)

Regards,
Andy

