All of lore.kernel.org
* Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
       [not found] <bug-201699-27@https.bugzilla.kernel.org/>
@ 2018-11-15 21:06 ` Andrew Morton
  2018-11-16  2:23   ` dong
  2018-11-16 17:50   ` Vladimir Davydov
  0 siblings, 2 replies; 23+ messages in thread
From: Andrew Morton @ 2018-11-15 21:06 UTC (permalink / raw)
  To: Vladimir Davydov, Michal Hocko, Johannes Weiner
  Cc: bugzilla-daemon, linux-mm, bauers


(switched to email.  Please respond via emailed reply-to-all, not via the
bugzilla web interface).

On Thu, 15 Nov 2018 06:31:19 +0000 bugzilla-daemon@bugzilla.kernel.org wrote:

> https://bugzilla.kernel.org/show_bug.cgi?id=201699
> 
>             Bug ID: 201699
>            Summary: kmemleak in memcg_create_kmem_cache
>            Product: Memory Management
>            Version: 2.5
>     Kernel Version: 4.20.0-rc2 (other versions include 4.14.52, etc.)
>           Hardware: Intel
>                 OS: Linux
>               Tree: Mainline
>             Status: NEW
>           Severity: high
>           Priority: P1
>          Component: Slab Allocator
>           Assignee: akpm@linux-foundation.org
>           Reporter: bauers@126.com
>         Regression: No
> 
> On Debian, when systemd restarts a failed service periodically, it causes a
> memory leak. When I enable kmemleak, the following messages come up.
> 
> 
> [ 4658.065578] kmemleak: Found object by alias at 0xffff9d84ba868808
> [ 4658.065581] CPU: 8 PID: 5194 Comm: kworker/8:3 Not tainted 4.20.0-rc2.bm.1+
> #1
> [ 4658.065582] Hardware name: Dell Inc. PowerEdge C6320/082F9M, BIOS 2.1.5
> 04/12/2016
> [ 4658.065586] Workqueue: memcg_kmem_cache memcg_kmem_cache_create_func
> [ 4658.065587] Call Trace:
> [ 4658.065590]  dump_stack+0x5c/0x7b
> [ 4658.065594]  lookup_object+0x5e/0x80
> [ 4658.065596]  find_and_get_object+0x29/0x80
> [ 4658.065598]  kmemleak_no_scan+0x31/0xc0
> [ 4658.065600]  setup_kmem_cache_node+0x271/0x350
> [ 4658.065602]  __do_tune_cpucache+0x18c/0x220
> [ 4658.065603]  do_tune_cpucache+0x27/0xb0
> [ 4658.065605]  enable_cpucache+0x80/0x110
> [ 4658.065606]  __kmem_cache_create+0x217/0x3a0
> [ 4658.065609]  ? kmem_cache_alloc+0x1aa/0x280
> [ 4658.065612]  create_cache+0xd9/0x200
> [ 4658.065614]  memcg_create_kmem_cache+0xef/0x120
> [ 4658.065616]  memcg_kmem_cache_create_func+0x1b/0x60
> [ 4658.065619]  process_one_work+0x1d1/0x3d0
> [ 4658.065621]  worker_thread+0x4f/0x3b0
> [ 4658.065623]  ? rescuer_thread+0x360/0x360
> [ 4658.065625]  kthread+0xf8/0x130
> [ 4658.065627]  ? kthread_create_worker_on_cpu+0x70/0x70
> [ 4658.065628]  ret_from_fork+0x35/0x40
> [ 4658.065630] kmemleak: Object 0xffff9d84ba868800 (size 128):
> [ 4658.065631] kmemleak:   comm "kworker/8:3", pid 5194, jiffies 4296056196
> [ 4658.065631] kmemleak:   min_count = 1
> [ 4658.065632] kmemleak:   count = 0
> [ 4658.065632] kmemleak:   flags = 0x1
> [ 4658.065633] kmemleak:   checksum = 0
> [ 4658.065633] kmemleak:   backtrace:
> [ 4658.065635]      __do_tune_cpucache+0x18c/0x220
> [ 4658.065636]      do_tune_cpucache+0x27/0xb0
> [ 4658.065637]      enable_cpucache+0x80/0x110
> [ 4658.065638]      __kmem_cache_create+0x217/0x3a0
> [ 4658.065640]      create_cache+0xd9/0x200
> [ 4658.065641]      memcg_create_kmem_cache+0xef/0x120
> [ 4658.065642]      memcg_kmem_cache_create_func+0x1b/0x60
> [ 4658.065644]      process_one_work+0x1d1/0x3d0
> [ 4658.065646]      worker_thread+0x4f/0x3b0
> [ 4658.065647]      kthread+0xf8/0x130
> [ 4658.065648]      ret_from_fork+0x35/0x40
> [ 4658.065649]      0xffffffffffffffff
> [ 4658.065650] kmemleak: Not scanning unknown object at 0xffff9d84ba868808
> [ 4658.065651] CPU: 8 PID: 5194 Comm: kworker/8:3 Not tainted 4.20.0-rc2.bm.1+
> #1
> [ 4658.065652] Hardware name: Dell Inc. PowerEdge C6320/082F9M, BIOS 2.1.5
> 04/12/2016
> [ 4658.065653] Workqueue: memcg_kmem_cache memcg_kmem_cache_create_func
> [ 4658.065654] Call Trace:
> [ 4658.065656]  dump_stack+0x5c/0x7b
> [ 4658.065657]  kmemleak_no_scan+0xa0/0xc0
> [ 4658.065659]  setup_kmem_cache_node+0x271/0x350
> [ 4658.065660]  __do_tune_cpucache+0x18c/0x220
> [ 4658.065662]  do_tune_cpucache+0x27/0xb0
> [ 4658.065663]  enable_cpucache+0x80/0x110
> [ 4658.065664]  __kmem_cache_create+0x217/0x3a0
> [ 4658.065667]  ? kmem_cache_alloc+0x1aa/0x280
> [ 4658.065668]  create_cache+0xd9/0x200
> [ 4658.065670]  memcg_create_kmem_cache+0xef/0x120
> [ 4658.065671]  memcg_kmem_cache_create_func+0x1b/0x60
> [ 4658.065673]  process_one_work+0x1d1/0x3d0
> [ 4658.065675]  worker_thread+0x4f/0x3b0
> [ 4658.065677]  ? rescuer_thread+0x360/0x360
> [ 4658.065679]  kthread+0xf8/0x130
> [ 4658.065681]  ? kthread_create_worker_on_cpu+0x70/0x70
> [ 4658.065682]  ret_from_fork+0x35/0x40
> [ 4658.065718] kmemleak: Found object by alias at 0xffff9d8cb36bd288
> [ 4658.065720] CPU: 8 PID: 5194 Comm: kworker/8:3 Not tainted 4.20.0-rc2.bm.1+
> #1
> [ 4658.065721] Hardware name: Dell Inc. PowerEdge C6320/082F9M, BIOS 2.1.5
> 04/12/2016
> [ 4658.065722] Workqueue: memcg_kmem_cache memcg_kmem_cache_create_func
> [ 4658.065722] Call Trace:
> [ 4658.065724]  dump_stack+0x5c/0x7b
> [ 4658.065726]  lookup_object+0x5e/0x80
> [ 4658.065728]  find_and_get_object+0x29/0x80
> [ 4658.065729]  kmemleak_no_scan+0x31/0xc0
> [ 4658.065730]  setup_kmem_cache_node+0x271/0x350
> [ 4658.065732]  __do_tune_cpucache+0x18c/0x220
> [ 4658.065734]  do_tune_cpucache+0x27/0xb0
> [ 4658.065735]  enable_cpucache+0x80/0x110
> [ 4658.065737]  __kmem_cache_create+0x217/0x3a0
> [ 4658.065739]  ? kmem_cache_alloc+0x1aa/0x280
> [ 4658.065740]  create_cache+0xd9/0x200
> [ 4658.065742]  memcg_create_kmem_cache+0xef/0x120
> [ 4658.065743]  memcg_kmem_cache_create_func+0x1b/0x60
> [ 4658.065745]  process_one_work+0x1d1/0x3d0
> [ 4658.065747]  worker_thread+0x4f/0x3b0
> [ 4658.065750]  ? rescuer_thread+0x360/0x360
> [ 4658.065751]  kthread+0xf8/0x130
> [ 4658.065753]  ? kthread_create_worker_on_cpu+0x70/0x70
> [ 4658.065754]  ret_from_fork+0x35/0x40
> [ 4658.065755] kmemleak: Object 0xffff9d8cb36bd280 (size 128):
> [ 4658.065756] kmemleak:   comm "kworker/8:3", pid 5194, jiffies 4296056196
> [ 4658.065757] kmemleak:   min_count = 1
> [ 4658.065757] kmemleak:   count = 0
> [ 4658.065757] kmemleak:   flags = 0x1
> [ 4658.065758] kmemleak:   checksum = 0
> [ 4658.065758] kmemleak:   backtrace:
> [ 4658.065759]      __do_tune_cpucache+0x18c/0x220
> [ 4658.065760]      do_tune_cpucache+0x27/0xb0
> [ 4658.065762]      enable_cpucache+0x80/0x110
> [ 4658.065763]      __kmem_cache_create+0x217/0x3a0
> [ 4658.065764]      create_cache+0xd9/0x200
> [ 4658.065765]      memcg_create_kmem_cache+0xef/0x120
> [ 4658.065766]      memcg_kmem_cache_create_func+0x1b/0x60
> [ 4658.065768]      process_one_work+0x1d1/0x3d0
> [ 4658.065770]      worker_thread+0x4f/0x3b0
> [ 4658.065771]      kthread+0xf8/0x130
> [ 4658.065772]      ret_from_fork+0x35/0x40
> [ 4658.065773]      0xffffffffffffffff
> [ 4658.065774] kmemleak: Not scanning unknown object at 0xffff9d8cb36bd288
> [ 4658.065775] CPU: 8 PID: 5194 Comm: kworker/8:3 Not tainted 4.20.0-rc2.bm.1+
> #1
> [ 4658.065775] Hardware name: Dell Inc. PowerEdge C6320/082F9M, BIOS 2.1.5
> 04/12/2016
> [ 4658.065776] Workqueue: memcg_kmem_cache memcg_kmem_cache_create_func
> [ 4658.065777] Call Trace:
> [ 4658.065779]  dump_stack+0x5c/0x7b
> [ 4658.065780]  kmemleak_no_scan+0xa0/0xc0
> [ 4658.065781]  setup_kmem_cache_node+0x271/0x350
> [ 4658.065783]  __do_tune_cpucache+0x18c/0x220
> [ 4658.065784]  do_tune_cpucache+0x27/0xb0
> [ 4658.065785]  enable_cpucache+0x80/0x110
> [ 4658.065787]  __kmem_cache_create+0x217/0x3a0
> [ 4658.065789]  ? kmem_cache_alloc+0x1aa/0x280
> [ 4658.065790]  create_cache+0xd9/0x200
> [ 4658.065792]  memcg_create_kmem_cache+0xef/0x120
> [ 4658.065793]  memcg_kmem_cache_create_func+0x1b/0x60
> [ 4658.065795]  process_one_work+0x1d1/0x3d0
> [ 4658.065797]  worker_thread+0x4f/0x3b0
> [ 4658.065799]  ? rescuer_thread+0x360/0x360
> [ 4658.065801]  kthread+0xf8/0x130
> [ 4658.065802]  ? kthread_create_worker_on_cpu+0x70/0x70
> [ 4658.065804]  ret_from_fork+0x35/0x40
> 
> -- 
> You are receiving this mail because:
> You are the assignee for the bug.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re:Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-15 21:06 ` [Bug 201699] New: kmemleak in memcg_create_kmem_cache Andrew Morton
@ 2018-11-16  2:23   ` dong
  2018-11-16  3:04     ` dong
  2018-11-16 17:50   ` Vladimir Davydov
  1 sibling, 1 reply; 23+ messages in thread
From: dong @ 2018-11-16  2:23 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vladimir Davydov, Michal Hocko, Johannes Weiner, bugzilla-daemon,
	linux-mm

[-- Attachment #1: Type: text/plain, Size: 8831 bytes --]

When I straced systemd, I found the unusual system call ‘kcmp’. Can that explain anything?





% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 29.06    0.000077          19         4           close
 16.98    0.000045          23         2           read
 15.47    0.000041          21         2           open
 10.94    0.000029          15         2           recvmsg
  9.43    0.000025           6         4           epoll_wait
  9.06    0.000024           6         4           epoll_ctl
  6.42    0.000017           0        54           kcmp
  2.26    0.000006           2         4           clock_gettime
  0.38    0.000001           1         2           fstat
------ ----------- ----------- --------- --------- ----------------
100.00    0.000265                    78           total
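For reference, kcmp can be exercised directly; systemd uses it to test whether two file descriptors refer to the same open file description. A minimal sketch (assumptions: x86-64, where __NR_kcmp is 312, and a kernel/seccomp policy that permits kcmp — the helper name `same_file` is just for illustration):

```python
import ctypes, os

libc = ctypes.CDLL(None, use_errno=True)
SYS_kcmp = 312   # x86-64 syscall number; other architectures differ
KCMP_FILE = 0    # compare struct file pointers behind two fds

def same_file(fd1, fd2):
    """True if fd1/fd2 share an open file description, False if not,
    None if the kernel lacks kcmp or seccomp blocks it."""
    pid = os.getpid()
    ret = libc.syscall(SYS_kcmp, pid, pid, KCMP_FILE, fd1, fd2)
    if ret < 0:
        return None  # e.g. ENOSYS or EPERM
    return ret == 0  # 0 means equal; 1/2/3 encode an ordering

r, w = os.pipe()
dup_r = os.dup(r)
print(same_file(r, dup_r))  # dup() shares the description
print(same_file(r, w))      # read and write ends are distinct
```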


Sincerely

At 2018-11-16 05:06:46, "Andrew Morton" <akpm@linux-foundation.org> wrote:
>
>(switched to email.  Please respond via emailed reply-to-all, not via the
>bugzilla web interface).
>
>On Thu, 15 Nov 2018 06:31:19 +0000 bugzilla-daemon@bugzilla.kernel.org wrote:
>
>> [...]

[-- Attachment #2: Type: text/html, Size: 11404 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re:Re:Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-16  2:23   ` dong
@ 2018-11-16  3:04     ` dong
  2018-11-16  3:37       ` dong
  0 siblings, 1 reply; 23+ messages in thread
From: dong @ 2018-11-16  3:04 UTC (permalink / raw)
  To: dong
  Cc: Andrew Morton, Vladimir Davydov, Michal Hocko, Johannes Weiner,
	bugzilla-daemon, linux-mm

[-- Attachment #1: Type: text/plain, Size: 9359 bytes --]

When I ran `crash /proc/kcore` to inspect the leaked object pointer, I got the following. Is there anything else I can provide?


crash> struct alien_cache -x 0xffff88f914ddc180
struct alien_cache {
  lock = {
    {
      rlock = {
        raw_lock = {
          val = {
            counter = 0x0
          }
        }
      }
    }
  },
  ac = {
    avail = 0x0,
    limit = 0xc,
    batchcount = 0xbaadf00d,
    touched = 0x0,
    entry = 0xffff88f914ddc198
  }
}
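A quick check of the two addresses in the kmemleak output above is consistent with this: the lookup address sits 8 bytes past the kmalloc'd base, which would be &alc->ac if the array_cache is embedded right after the alien_cache spinlock (an inference from the layout shown — note `entry` above also starts at base + 0x18, i.e. after the lock plus four 32-bit fields):

```python
# "kmemleak: Object 0x...800" is the tracked allocation base;
# "Found object by alias at 0x...808" is the pointer later passed
# to kmemleak_no_scan() — a pointer into the middle of the object,
# which is exactly what "found by alias" means.
base  = 0xffff9d84ba868800
alias = 0xffff9d84ba868808
print(hex(alias - base))  # prints 0x8
```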

Sincerely



At 2018-11-16 10:23:03, "dong" <bauers@126.com> wrote:

[...]
[-- Attachment #2: Type: text/html, Size: 13505 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re:Re:Re:Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-16  3:04     ` dong
@ 2018-11-16  3:37       ` dong
  0 siblings, 0 replies; 23+ messages in thread
From: dong @ 2018-11-16  3:37 UTC (permalink / raw)
  To: dong
  Cc: Andrew Morton, Vladimir Davydov, Michal Hocko, Johannes Weiner,
	bugzilla-daemon, linux-mm, zhangyongsu, liuxian.1, liuxiaozhou,
	duanxiongchun

[-- Attachment #1: Type: text/plain, Size: 9530 bytes --]

cc






At 2018-11-16 11:04:21, "dong" <bauers@126.com> wrote:

[...]
>> [ 4658.065720] CPU: 8 PID: 5194 Comm: kworker/8:3 Not tainted 4.20.0-rc2.bm.1+
>> #1
>> [ 4658.065721] Hardware name: Dell Inc. PowerEdge C6320/082F9M, BIOS 2.1.5
>> 04/12/2016
>> [ 4658.065722] Workqueue: memcg_kmem_cache memcg_kmem_cache_create_func
>> [ 4658.065722] Call Trace:
>> [ 4658.065724]  dump_stack+0x5c/0x7b
>> [ 4658.065726]  lookup_object+0x5e/0x80
>> [ 4658.065728]  find_and_get_object+0x29/0x80
>> [ 4658.065729]  kmemleak_no_scan+0x31/0xc0
>> [ 4658.065730]  setup_kmem_cache_node+0x271/0x350
>> [ 4658.065732]  __do_tune_cpucache+0x18c/0x220
>> [ 4658.065734]  do_tune_cpucache+0x27/0xb0
>> [ 4658.065735]  enable_cpucache+0x80/0x110
>> [ 4658.065737]  __kmem_cache_create+0x217/0x3a0
>> [ 4658.065739]  ? kmem_cache_alloc+0x1aa/0x280
>> [ 4658.065740]  create_cache+0xd9/0x200
>> [ 4658.065742]  memcg_create_kmem_cache+0xef/0x120
>> [ 4658.065743]  memcg_kmem_cache_create_func+0x1b/0x60
>> [ 4658.065745]  process_one_work+0x1d1/0x3d0
>> [ 4658.065747]  worker_thread+0x4f/0x3b0
>> [ 4658.065750]  ? rescuer_thread+0x360/0x360
>> [ 4658.065751]  kthread+0xf8/0x130
>> [ 4658.065753]  ? kthread_create_worker_on_cpu+0x70/0x70
>> [ 4658.065754]  ret_from_fork+0x35/0x40
>> [ 4658.065755] kmemleak: Object 0xffff9d8cb36bd280 (size 128):
>> [ 4658.065756] kmemleak:   comm "kworker/8:3", pid 5194, jiffies 4296056196
>> [ 4658.065757] kmemleak:   min_count = 1
>> [ 4658.065757] kmemleak:   count = 0
>> [ 4658.065757] kmemleak:   flags = 0x1
>> [ 4658.065758] kmemleak:   checksum = 0
>> [ 4658.065758] kmemleak:   backtrace:
>> [ 4658.065759]      __do_tune_cpucache+0x18c/0x220
>> [ 4658.065760]      do_tune_cpucache+0x27/0xb0
>> [ 4658.065762]      enable_cpucache+0x80/0x110
>> [ 4658.065763]      __kmem_cache_create+0x217/0x3a0
>> [ 4658.065764]      create_cache+0xd9/0x200
>> [ 4658.065765]      memcg_create_kmem_cache+0xef/0x120
>> [ 4658.065766]      memcg_kmem_cache_create_func+0x1b/0x60
>> [ 4658.065768]      process_one_work+0x1d1/0x3d0
>> [ 4658.065770]      worker_thread+0x4f/0x3b0
>> [ 4658.065771]      kthread+0xf8/0x130
>> [ 4658.065772]      ret_from_fork+0x35/0x40
>> [ 4658.065773]      0xffffffffffffffff
>> [ 4658.065774] kmemleak: Not scanning unknown object at 0xffff9d8cb36bd288
>> [ 4658.065775] CPU: 8 PID: 5194 Comm: kworker/8:3 Not tainted 4.20.0-rc2.bm.1+
>> #1
>> [ 4658.065775] Hardware name: Dell Inc. PowerEdge C6320/082F9M, BIOS 2.1.5
>> 04/12/2016
>> [ 4658.065776] Workqueue: memcg_kmem_cache memcg_kmem_cache_create_func
>> [ 4658.065777] Call Trace:
>> [ 4658.065779]  dump_stack+0x5c/0x7b
>> [ 4658.065780]  kmemleak_no_scan+0xa0/0xc0
>> [ 4658.065781]  setup_kmem_cache_node+0x271/0x350
>> [ 4658.065783]  __do_tune_cpucache+0x18c/0x220
>> [ 4658.065784]  do_tune_cpucache+0x27/0xb0
>> [ 4658.065785]  enable_cpucache+0x80/0x110
>> [ 4658.065787]  __kmem_cache_create+0x217/0x3a0
>> [ 4658.065789]  ? kmem_cache_alloc+0x1aa/0x280
>> [ 4658.065790]  create_cache+0xd9/0x200
>> [ 4658.065792]  memcg_create_kmem_cache+0xef/0x120
>> [ 4658.065793]  memcg_kmem_cache_create_func+0x1b/0x60
>> [ 4658.065795]  process_one_work+0x1d1/0x3d0
>> [ 4658.065797]  worker_thread+0x4f/0x3b0
>> [ 4658.065799]  ? rescuer_thread+0x360/0x360
>> [ 4658.065801]  kthread+0xf8/0x130
>> [ 4658.065802]  ? kthread_create_worker_on_cpu+0x70/0x70
>> [ 4658.065804]  ret_from_fork+0x35/0x40
>> 
>> -- 
>> You are receiving this mail because:
>> You are the assignee for the bug.





 





[-- Attachment #2: Type: text/html, Size: 14350 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-15 21:06 ` [Bug 201699] New: kmemleak in memcg_create_kmem_cache Andrew Morton
  2018-11-16  2:23   ` dong
@ 2018-11-16 17:50   ` Vladimir Davydov
  2018-11-18  0:44     ` dong
  1 sibling, 1 reply; 23+ messages in thread
From: Vladimir Davydov @ 2018-11-16 17:50 UTC (permalink / raw)
  To: bauers
  Cc: Michal Hocko, Johannes Weiner, bugzilla-daemon, linux-mm, Andrew Morton

On Thu, Nov 15, 2018 at 01:06:46PM -0800, Andrew Morton wrote:
> > On a Debian OS, when systemd restarts a failed service periodically, it causes a
> > memory leak. When I enable kmemleak, the messages below come up.

What made you think there was a memory leak in the first place?

> > 
> > 
> > [ 4658.065578] kmemleak: Found object by alias at 0xffff9d84ba868808
> > [ 4658.065581] CPU: 8 PID: 5194 Comm: kworker/8:3 Not tainted 4.20.0-rc2.bm.1+
> > #1
> > [ 4658.065582] Hardware name: Dell Inc. PowerEdge C6320/082F9M, BIOS 2.1.5
> > 04/12/2016
> > [ 4658.065586] Workqueue: memcg_kmem_cache memcg_kmem_cache_create_func
> > [ 4658.065587] Call Trace:
> > [ 4658.065590]  dump_stack+0x5c/0x7b
> > [ 4658.065594]  lookup_object+0x5e/0x80
> > [ 4658.065596]  find_and_get_object+0x29/0x80
> > [ 4658.065598]  kmemleak_no_scan+0x31/0xc0
> > [ 4658.065600]  setup_kmem_cache_node+0x271/0x350
> > [ 4658.065602]  __do_tune_cpucache+0x18c/0x220
> > [ 4658.065603]  do_tune_cpucache+0x27/0xb0
> > [ 4658.065605]  enable_cpucache+0x80/0x110
> > [ 4658.065606]  __kmem_cache_create+0x217/0x3a0
> > [ 4658.065609]  ? kmem_cache_alloc+0x1aa/0x280
> > [ 4658.065612]  create_cache+0xd9/0x200
> > [ 4658.065614]  memcg_create_kmem_cache+0xef/0x120
> > [ 4658.065616]  memcg_kmem_cache_create_func+0x1b/0x60
> > [ 4658.065619]  process_one_work+0x1d1/0x3d0
> > [ 4658.065621]  worker_thread+0x4f/0x3b0
> > [ 4658.065623]  ? rescuer_thread+0x360/0x360
> > [ 4658.065625]  kthread+0xf8/0x130
> > [ 4658.065627]  ? kthread_create_worker_on_cpu+0x70/0x70
> > [ 4658.065628]  ret_from_fork+0x35/0x40
> > [ 4658.065630] kmemleak: Object 0xffff9d84ba868800 (size 128):
> > [ 4658.065631] kmemleak:   comm "kworker/8:3", pid 5194, jiffies 4296056196
> > [ 4658.065631] kmemleak:   min_count = 1
> > [ 4658.065632] kmemleak:   count = 0
> > [ 4658.065632] kmemleak:   flags = 0x1
> > [ 4658.065633] kmemleak:   checksum = 0
> > [ 4658.065633] kmemleak:   backtrace:
> > [ 4658.065635]      __do_tune_cpucache+0x18c/0x220
> > [ 4658.065636]      do_tune_cpucache+0x27/0xb0
> > [ 4658.065637]      enable_cpucache+0x80/0x110
> > [ 4658.065638]      __kmem_cache_create+0x217/0x3a0
> > [ 4658.065640]      create_cache+0xd9/0x200
> > [ 4658.065641]      memcg_create_kmem_cache+0xef/0x120
> > [ 4658.065642]      memcg_kmem_cache_create_func+0x1b/0x60
> > [ 4658.065644]      process_one_work+0x1d1/0x3d0
> > [ 4658.065646]      worker_thread+0x4f/0x3b0
> > [ 4658.065647]      kthread+0xf8/0x130
> > [ 4658.065648]      ret_from_fork+0x35/0x40
> > [ 4658.065649]      0xffffffffffffffff
> > [ 4658.065650] kmemleak: Not scanning unknown object at 0xffff9d84ba868808

This doesn't look like kmemleak reporting a leak to me, although this
does look weird. What does /sys/kernel/debug/kmemleak show?

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re:Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-16 17:50   ` Vladimir Davydov
@ 2018-11-18  0:44     ` dong
  2018-11-19  8:30       ` Vladimir Davydov
  0 siblings, 1 reply; 23+ messages in thread
From: dong @ 2018-11-18  0:44 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Michal Hocko, Johannes Weiner, bugzilla-daemon, linux-mm, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 3533 bytes --]

First of all, I can see a memory leak when I run the ‘free -g’ command. So I enabled kmemleak and got the messages above. When I ran ‘cat /sys/kernel/debug/kmemleak’, nothing came up; instead, the ‘dmesg’ command showed me the leak messages. So the messages are not the leak reason? How can I detect the real memory leak? Thanks!
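(A minimal sketch of the step that is easy to miss here: kmemleak only reports after an explicit scan, so trigger one before reading the results file. The helper function name is made up; the debugfs paths are the standard kmemleak interface, but need root, CONFIG_DEBUG_KMEMLEAK, and a mounted debugfs.)

```shell
# Hedged sketch: force a kmemleak scan, then dump any recorded leaks.
check_kmemleak() {
    f=/sys/kernel/debug/kmemleak
    if [ -w "$f" ]; then
        echo scan > "$f"   # trigger a scan; can take a while on big boxes
        cat "$f"           # empty output means no leaks were recorded
    else
        echo "kmemleak interface not available"
    fi
}
check_kmemleak
```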






----- Original Message -----
From: "Vladimir Davydov" <vdavydov.dev@gmail.com>
To: bauers@126.com
Cc: "Michal Hocko" <mhocko@kernel.org>, "Johannes Weiner" <hannes@cmpxchg.org>, bugzilla-daemon@bugzilla.kernel.org, linux-mm@kvack.org, "Andrew Morton" <akpm@linux-foundation.org>
Sent: Fri, 16 Nov 2018 20:50:05 +0300
Subject: Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache

On Thu, Nov 15, 2018 at 01:06:46PM -0800, Andrew Morton wrote:
> > On a Debian OS, when systemd restarts a failed service periodically, it causes a
> > memory leak. When I enable kmemleak, the messages below come up.

What made you think there was a memory leak in the first place?

> > 
> > 
> > [ 4658.065578] kmemleak: Found object by alias at 0xffff9d84ba868808
> > [ 4658.065581] CPU: 8 PID: 5194 Comm: kworker/8:3 Not tainted 4.20.0-rc2.bm.1+
> > #1
> > [ 4658.065582] Hardware name: Dell Inc. PowerEdge C6320/082F9M, BIOS 2.1.5
> > 04/12/2016
> > [ 4658.065586] Workqueue: memcg_kmem_cache memcg_kmem_cache_create_func
> > [ 4658.065587] Call Trace:
> > [ 4658.065590]  dump_stack+0x5c/0x7b
> > [ 4658.065594]  lookup_object+0x5e/0x80
> > [ 4658.065596]  find_and_get_object+0x29/0x80
> > [ 4658.065598]  kmemleak_no_scan+0x31/0xc0
> > [ 4658.065600]  setup_kmem_cache_node+0x271/0x350
> > [ 4658.065602]  __do_tune_cpucache+0x18c/0x220
> > [ 4658.065603]  do_tune_cpucache+0x27/0xb0
> > [ 4658.065605]  enable_cpucache+0x80/0x110
> > [ 4658.065606]  __kmem_cache_create+0x217/0x3a0
> > [ 4658.065609]  ? kmem_cache_alloc+0x1aa/0x280
> > [ 4658.065612]  create_cache+0xd9/0x200
> > [ 4658.065614]  memcg_create_kmem_cache+0xef/0x120
> > [ 4658.065616]  memcg_kmem_cache_create_func+0x1b/0x60
> > [ 4658.065619]  process_one_work+0x1d1/0x3d0
> > [ 4658.065621]  worker_thread+0x4f/0x3b0
> > [ 4658.065623]  ? rescuer_thread+0x360/0x360
> > [ 4658.065625]  kthread+0xf8/0x130
> > [ 4658.065627]  ? kthread_create_worker_on_cpu+0x70/0x70
> > [ 4658.065628]  ret_from_fork+0x35/0x40
> > [ 4658.065630] kmemleak: Object 0xffff9d84ba868800 (size 128):
> > [ 4658.065631] kmemleak:   comm "kworker/8:3", pid 5194, jiffies 4296056196
> > [ 4658.065631] kmemleak:   min_count = 1
> > [ 4658.065632] kmemleak:   count = 0
> > [ 4658.065632] kmemleak:   flags = 0x1
> > [ 4658.065633] kmemleak:   checksum = 0
> > [ 4658.065633] kmemleak:   backtrace:
> > [ 4658.065635]      __do_tune_cpucache+0x18c/0x220
> > [ 4658.065636]      do_tune_cpucache+0x27/0xb0
> > [ 4658.065637]      enable_cpucache+0x80/0x110
> > [ 4658.065638]      __kmem_cache_create+0x217/0x3a0
> > [ 4658.065640]      create_cache+0xd9/0x200
> > [ 4658.065641]      memcg_create_kmem_cache+0xef/0x120
> > [ 4658.065642]      memcg_kmem_cache_create_func+0x1b/0x60
> > [ 4658.065644]      process_one_work+0x1d1/0x3d0
> > [ 4658.065646]      worker_thread+0x4f/0x3b0
> > [ 4658.065647]      kthread+0xf8/0x130
> > [ 4658.065648]      ret_from_fork+0x35/0x40
> > [ 4658.065649]      0xffffffffffffffff
> > [ 4658.065650] kmemleak: Not scanning unknown object at 0xffff9d84ba868808

This doesn't look like kmemleak reporting a leak to me, although this
does look weird. What does /sys/kernel/debug/kmemleak show?

[-- Attachment #2: Type: text/html, Size: 6405 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-18  0:44     ` dong
@ 2018-11-19  8:30       ` Vladimir Davydov
  2018-11-19 10:24         ` Michal Hocko
  2018-11-19 11:56         ` dong
  0 siblings, 2 replies; 23+ messages in thread
From: Vladimir Davydov @ 2018-11-19  8:30 UTC (permalink / raw)
  To: dong
  Cc: Michal Hocko, Johannes Weiner, bugzilla-daemon, linux-mm, Andrew Morton

On Sun, Nov 18, 2018 at 08:44:14AM +0800, dong wrote:
> First of all, I can see a memory leak when I run the ‘free -g’ command.

This doesn't mean there's a leak. The kernel may postpone freeing memory
until there's memory pressure. In particular cgroup objects are not
released until there are objects allocated from the corresponding kmem
caches. Those objects may be inodes or dentries, which are freed lazily.
Looks like restarting a service causes recreation of a memory cgroup and
hence piling up dead cgroups. Try to drop caches.
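(A hypothetical way to check this suggestion: count the memory cgroups before and after dropping clean caches; if the dead memcgs were only pinned by page cache, the count should shrink. The helper name is illustrative; `/proc/cgroups` and `drop_caches` are standard procfs interfaces, and the write needs root.)

```shell
# Hedged sketch: compare memcg counts before/after dropping clean caches.
memcg_count() {
    # /proc/cgroups columns: subsys_name hierarchy num_cgroups enabled
    c=$(awk '$1 == "memory" { print $3 }' /proc/cgroups 2>/dev/null)
    echo "${c:-unknown}"
}
echo "memcgs before: $(memcg_count)"
if [ -w /proc/sys/vm/drop_caches ]; then
    sync
    echo 3 > /proc/sys/vm/drop_caches   # free pagecache + dentries/inodes
fi
echo "memcgs after: $(memcg_count)"
```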

>>So I enabled kmemleak and got the messages above. When I ran ‘cat
>>/sys/kernel/debug/kmemleak’, nothing came up; instead, the ‘dmesg’
>>command showed me the leak messages. So the messages are not the leak
>>reason? How can I detect the real memory leak? Thanks!

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-19  8:30       ` Vladimir Davydov
@ 2018-11-19 10:24         ` Michal Hocko
  2018-11-19 11:56         ` dong
  1 sibling, 0 replies; 23+ messages in thread
From: Michal Hocko @ 2018-11-19 10:24 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: dong, Johannes Weiner, bugzilla-daemon, linux-mm, Andrew Morton,
	Roman Gushchin

[Cc Roman - the email thread starts
http://lkml.kernel.org/r/20181115130646.6de1029eb1f3b8d7276c3543@linux-foundation.org]

On Mon 19-11-18 11:30:45, Vladimir Davydov wrote:
> On Sun, Nov 18, 2018 at 08:44:14AM +0800, dong wrote:
> > First of all, I can see a memory leak when I run the ‘free -g’ command.
> 
> This doesn't mean there's a leak. The kernel may postpone freeing memory
> until there's memory pressure. In particular cgroup objects are not
> released until there are objects allocated from the corresponding kmem
> caches. Those objects may be inodes or dentries, which are freed lazily.
> Looks like restarting a service causes recreation of a memory cgroup and
> hence piling up dead cgroups. Try to drop caches.

This seems similar to what Roman was looking recently. All the fixes
should be merged in the current Linus tree IIRC.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re:Re: Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-19  8:30       ` Vladimir Davydov
  2018-11-19 10:24         ` Michal Hocko
@ 2018-11-19 11:56         ` dong
  2018-11-21  8:46           ` dong
  2018-11-21  8:52           ` Re: " Vladimir Davydov
  1 sibling, 2 replies; 23+ messages in thread
From: dong @ 2018-11-19 11:56 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Michal Hocko, Johannes Weiner, bugzilla-daemon, linux-mm, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 1210 bytes --]

Sorry, there is a leak indeed. The memory was leaking all the time, and I tried running `echo 3 > /proc/sys/vm/drop_caches`, but it didn't help.

But when I deleted the log files that were created by the failed systemd service, the leaked (cached) memory was released.
I suspect the leak is related to the inode objects.





At 2018-11-19 16:30:45, "Vladimir Davydov" <vdavydov.dev@gmail.com> wrote:
>On Sun, Nov 18, 2018 at 08:44:14AM +0800, dong wrote:
> First of all, I can see a memory leak when I run the ‘free -g’ command.
>
>This doesn't mean there's a leak. The kernel may postpone freeing memory
>until there's memory pressure. In particular cgroup objects are not
>released until there are objects allocated from the corresponding kmem
>caches. Those objects may be inodes or dentries, which are freed lazily.
>Looks like restarting a service causes recreation of a memory cgroup and
>hence piling up dead cgroups. Try to drop caches.
>
>>So I enabled kmemleak and got the messages above. When I ran ‘cat
>>/sys/kernel/debug/kmemleak’, nothing came up; instead, the ‘dmesg’
>>command showed me the leak messages. So the messages are not the leak
>>reason? How can I detect the real memory leak? Thanks!

[-- Attachment #2: Type: text/html, Size: 1538 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re:Re:Re: Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-19 11:56         ` dong
@ 2018-11-21  8:46           ` dong
  2018-11-21  8:56             ` Vladimir Davydov
  2018-11-21  9:10             ` Michal Hocko
  2018-11-21  8:52           ` Re: " Vladimir Davydov
  1 sibling, 2 replies; 23+ messages in thread
From: dong @ 2018-11-21  8:46 UTC (permalink / raw)
  To: dong
  Cc: Vladimir Davydov, Michal Hocko, Johannes Weiner, bugzilla-daemon,
	linux-mm, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 1749 bytes --]

Sorry, I found that when I ran `echo 3 > /proc/sys/vm/drop_caches`, the leaked memory was released, just very slowly.

The page cache of the opened log file is what causes the leak, because each `struct page` holds a `struct mem_cgroup *mem_cgroup` pointer which keeps a large chunk of memory pinned. Thanks, everyone, for helping me solve the problem.

One last question: if I allocate many small pages and never free them, will I exhaust memory (because every page references a `mem_cgroup`)?




At 2018-11-19 19:56:53, "dong" <bauers@126.com> wrote:

Sorry, there is a leak indeed. The memory was leaking all the time, and I tried running `echo 3 > /proc/sys/vm/drop_caches`, but it didn't help.

But when I deleted the log files that were created by the failed systemd service, the leaked (cached) memory was released.
I suspect the leak is related to the inode objects.





At 2018-11-19 16:30:45, "Vladimir Davydov" <vdavydov.dev@gmail.com> wrote:
>On Sun, Nov 18, 2018 at 08:44:14AM +0800, dong wrote:
>> First of all, I can see a memory leak when I run the ‘free -g’ command.
>
>This doesn't mean there's a leak. The kernel may postpone freeing memory
>until there's memory pressure. In particular cgroup objects are not
>released until there are objects allocated from the corresponding kmem
>caches. Those objects may be inodes or dentries, which are freed lazily.
>Looks like restarting a service causes recreation of a memory cgroup and
>hence piling up dead cgroups. Try to drop caches.
>
>>So I enabled kmemleak and got the messages above. When I ran ‘cat
>>/sys/kernel/debug/kmemleak’, nothing came up; instead, the ‘dmesg’
>>command showed me the leak messages. So the messages are not the leak
>>reason? How can I detect the real memory leak? Thanks!





 

[-- Attachment #2: Type: text/html, Size: 2914 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Re: Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-19 11:56         ` dong
  2018-11-21  8:46           ` dong
@ 2018-11-21  8:52           ` Vladimir Davydov
  1 sibling, 0 replies; 23+ messages in thread
From: Vladimir Davydov @ 2018-11-21  8:52 UTC (permalink / raw)
  To: dong
  Cc: Michal Hocko, Johannes Weiner, bugzilla-daemon, linux-mm, Andrew Morton

On Mon, Nov 19, 2018 at 07:56:53PM +0800, dong wrote:
> Sorry, there is a leak indeed. The memory was leaking all the time, and
> I tried running `echo 3 > /proc/sys/vm/drop_caches`, but it didn't
> help.
> 
> But when I deleted the log files that were created by the failed
> systemd service, the leaked (cached) memory was released.  I suspect the
> leak is related to the inode objects.

What kind of filesystem is used for storing logs?

Also, I assume you use SLAB. It would be nice if you could try to
reproduce the issue with SLUB, because the latter exports information
about per memcg caches under /sys/kernel/slab/<cache-name>/cgroup. It
could shed the light on what kinds of objects are not freed after cgroup
destruction.

In case of SLAB you can try to monitor /proc/slabinfo to see which
caches are growing. Anyway, you'll probably have to turn off kmem cache
merging - see slab_nomerge boot options.
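(A sketch of that `/proc/slabinfo` monitoring approach. The helper function and temp-file paths are illustrative; reading `/proc/slabinfo` needs root, and the per-cache numbers are only meaningful when the kernel was booted with `slab_nomerge`, as noted above.)

```shell
# Hedged sketch: diff two /proc/slabinfo snapshots to find growing caches.
# Each snapshot is reduced to "name active_objs" lines, sorted for join(1).
slab_growth() {
    # $1 = before snapshot, $2 = after snapshot
    join "$1" "$2" | awk '$3 > $2 { print $1, $3 - $2 }' | sort -k2 -rn
}
# Usage (as root):
#   awk 'NR > 2 { print $1, $2 }' /proc/slabinfo | sort > /tmp/slab.before
#   sleep 60    # let the failing service restart a few times
#   awk 'NR > 2 { print $1, $2 }' /proc/slabinfo | sort > /tmp/slab.after
#   slab_growth /tmp/slab.before /tmp/slab.after | head
```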

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Re:Re: Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-21  8:46           ` dong
@ 2018-11-21  8:56             ` Vladimir Davydov
  2018-11-21  9:06               ` dong
  2018-11-21  9:10             ` Michal Hocko
  1 sibling, 1 reply; 23+ messages in thread
From: Vladimir Davydov @ 2018-11-21  8:56 UTC (permalink / raw)
  To: dong
  Cc: Michal Hocko, Johannes Weiner, bugzilla-daemon, linux-mm, Andrew Morton

On Wed, Nov 21, 2018 at 04:46:48PM +0800, dong wrote:
> Sorry, I found that when I ran `echo 3 > /proc/sys/vm/drop_caches`, the
> leaked memory was released, just very slowly.
> 
> The page cache of the opened log file is what causes the leak, because
> each `struct page` holds a `struct mem_cgroup *mem_cgroup` pointer
> which keeps a large chunk of memory pinned. Thanks, everyone, for
> helping me solve the problem.

Ah, so it doesn't seem to be kmem problem at all. The email I sent
several minutes ago isn't relevant then.

> One last question: if I allocate many small pages and never free them,
> will I exhaust memory (because every page references a `mem_cgroup`)?

Once memory usage is close to the limit, the reclaimer will kick in
automatically to free those pages and the associated dead cgroups.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re:Re: Re:Re: Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-21  8:56             ` Vladimir Davydov
@ 2018-11-21  9:06               ` dong
  0 siblings, 0 replies; 23+ messages in thread
From: dong @ 2018-11-21  9:06 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Michal Hocko, Johannes Weiner, bugzilla-daemon, linux-mm, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 919 bytes --]

I see. Thank you very much :-)








At 2018-11-21 16:56:29, "Vladimir Davydov" <vdavydov.dev@gmail.com> wrote:
>On Wed, Nov 21, 2018 at 04:46:48PM +0800, dong wrote:
>> Sorry, I found that when I ran `echo 3 > /proc/sys/vm/drop_caches`,
>> the leaked memory was released, just very slowly.
>> 
>> The page cache of the opened log file is what causes the leak, because
>> each `struct page` holds a `struct mem_cgroup *mem_cgroup` pointer
>> which keeps a large chunk of memory pinned. Thanks, everyone, for
>> helping me solve the problem.
>
>Ah, so it doesn't seem to be kmem problem at all. The email I sent
>several minutes ago isn't relevant then.
>
>> One last question: if I allocate many small pages and never free them,
>> will I exhaust memory (because every page references a `mem_cgroup`)?
>
>Once memory usage is close to the limit, the reclaimer will kick in
>automatically to free those pages and the associated dead cgroups.

[-- Attachment #2: Type: text/html, Size: 1253 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Re:Re: Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-21  8:46           ` dong
  2018-11-21  8:56             ` Vladimir Davydov
@ 2018-11-21  9:10             ` Michal Hocko
  2018-11-21  9:22               ` dong
  1 sibling, 1 reply; 23+ messages in thread
From: Michal Hocko @ 2018-11-21  9:10 UTC (permalink / raw)
  To: dong
  Cc: Vladimir Davydov, Johannes Weiner, bugzilla-daemon, linux-mm,
	Andrew Morton

On Wed 21-11-18 16:46:48, dong wrote:
> One last question: if I allocate many small pages and never free them,
> will I exhaust memory (because every page references a `mem_cgroup`)?

No, the memory will get reclaimed under memory pressure or, for
anonymous memory (malloc), when the process that allocated it terminates.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re:Re: Re:Re: Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-21  9:10             ` Michal Hocko
@ 2018-11-21  9:22               ` dong
  2018-11-21  9:36                 ` 段熊春
  0 siblings, 1 reply; 23+ messages in thread
From: dong @ 2018-11-21  9:22 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Vladimir Davydov, Johannes Weiner, bugzilla-daemon, linux-mm,
	Andrew Morton, duanxiongchun

[-- Attachment #1: Type: text/plain, Size: 472 bytes --]

Thanks for replying, Michal.


cc to duanxiongchun








At 2018-11-21 17:10:41, "Michal Hocko" <mhocko@kernel.org> wrote:
>On Wed 21-11-18 16:46:48, dong wrote:
>> One last question: if I allocate many small pages and never free them,
>> will I exhaust memory (because every page references a `mem_cgroup`)?
>
>No, the memory will get reclaimed under memory pressure or, for
>anonymous memory (malloc), when the process that allocated it terminates.
>-- 
>Michal Hocko
>SUSE Labs

[-- Attachment #2: Type: text/html, Size: 857 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-21  9:22               ` dong
@ 2018-11-21  9:36                 ` 段熊春
  2018-11-21 16:27                   ` Michal Hocko
  0 siblings, 1 reply; 23+ messages in thread
From: 段熊春 @ 2018-11-21  9:36 UTC (permalink / raw)
  To: dong
  Cc: Michal Hocko, Vladimir Davydov, Johannes Weiner, bugzilla-daemon,
	linux-mm, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 1308 bytes --]

hi all:

In the same case, I think it may be a problem.

If I create a virtual netdev device under a mem cgroup (like `ip link add ve_A type veth peer name ve_B`) and after that destroy this mem cgroup,

I find that the net_device object will be held by the kernel until I run `ip link del`. And the memory pages which contain the object won't be uncharged; the mem_cgroup object will also not be freed.

Others may think the kernel just holds sizeof(struct net_device) bytes of memory. But that's not really true; it's much bigger than they think.

It may be a problem; I am not very sure about that.

 Thanks

bytedance.net
段熊春
duanxiongchun@bytedance.com




> On Nov 21, 2018, at 5:22 PM, dong <bauers@126.com> wrote:
> 
> Thanks for replying, Michal.
> 
> cc to duanxiongchun
> 
> 
> 
> 
> 
> 
> At 2018-11-21 17:10:41, "Michal Hocko" <mhocko@kernel.org> wrote:
> >On Wed 21-11-18 16:46:48, dong wrote:
> >> One last question: if I allocate many small pages and never free them,
> >> will I exhaust memory (because every page references a `mem_cgroup`)?
> >
> >No, the memory will get reclaimed under memory pressure or, for
> >anonymous memory (malloc), when the process that allocated it terminates.
> >-- 
> >Michal Hocko
> >SUSE Labs
> 
> 
>  


[-- Attachment #2: Type: text/html, Size: 3293 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-21  9:36                 ` 段熊春
@ 2018-11-21 16:27                   ` Michal Hocko
  2018-11-22  2:19                     ` 段熊春
  2018-11-22  2:56                     ` 段熊春
  0 siblings, 2 replies; 23+ messages in thread
From: Michal Hocko @ 2018-11-21 16:27 UTC (permalink / raw)
  To: 段熊春
  Cc: dong, Vladimir Davydov, Johannes Weiner, bugzilla-daemon,
	linux-mm, Andrew Morton

On Wed 21-11-18 17:36:51, 段熊春 wrote:
> hi all:
> 
> In the same case, I think it may be a problem.
> 
> If I create a virtual netdev device under a mem cgroup (like `ip link add ve_A type veth peer name ve_B`) and after that destroy this mem cgroup,

Which object is charged to that memcg? If there is no relation to any
task context then accounting to a memcg is problematic.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-21 16:27                   ` Michal Hocko
@ 2018-11-22  2:19                     ` 段熊春
  2018-11-22  7:32                       ` Michal Hocko
  2018-11-22  2:56                     ` 段熊春
  1 sibling, 1 reply; 23+ messages in thread
From: 段熊春 @ 2018-11-22  2:19 UTC (permalink / raw)
  To: Michal Hocko
  Cc: dong, Vladimir Davydov, Johannes Weiner, bugzilla-daemon,
	linux-mm, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 1437 bytes --]

I have looked at the slab kmem_cache_alloc function, and I think the virtual netdevice object will be charged to the memcg,
because the function slab_pre_alloc_hook chooses a kmem_cache which belongs to the current task's memcg.
If the virtual netdevice object is not destroyed by another command, it will still be charged to the memcg, and the memcg will stay in memory.

The above is just an example. The general scenario is as follows:
if a user process which has its own memcg creates a semi-permanent kernel object and does not release it before exiting,
the memcg which belongs to this process will go offline but will not be released until the semi-permanent kernel object is released.

I think in those cases the kernel will hold more memory than the user thinks: not just sizeof(struct blabla), but sizeof(struct blabla) plus the memory the memcg uses.

bytedance.net
段熊春
duanxiongchun@bytedance.com




> On Nov 22, 2018, at 12:27 AM, Michal Hocko <mhocko@kernel.org> wrote:
> 
> On Wed 21-11-18 17:36:51, 段熊春 wrote:
>> hi all:
>> 
>> In the same case, I think it may be a problem.
>> 
>> If I create a virtual netdev device under a mem cgroup (like `ip link add ve_A type veth peer name ve_B`) and after that destroy this mem cgroup,
> 
> Which object is charged to that memcg? If there is no relation to any
> task context then accounting to a memcg is problematic.
> 
> -- 
> Michal Hocko
> SUSE Labs


[-- Attachment #2: Type: text/html, Size: 2922 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-21 16:27                   ` Michal Hocko
  2018-11-22  2:19                     ` 段熊春
@ 2018-11-22  2:56                     ` 段熊春
  2018-11-22  7:34                       ` Michal Hocko
  1 sibling, 1 reply; 23+ messages in thread
From: 段熊春 @ 2018-11-22  2:56 UTC (permalink / raw)
  To: Michal Hocko
  Cc: dong, Vladimir Davydov, Johannes Weiner, bugzilla-daemon,
	linux-mm, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 1531 bytes --]

We worry about that because in our system we use systemd to manage our services. One day we found some machines suddenly eating lots of memory.
We found that in some cases our service would fail to start, just record a log and then exit, but systemd would relaunch it every 2 seconds. That service's memory is limited by a memcg.

After a long time digging, we found lots of offline-but-not-released memcg objects eating lots of memory.
Why were these memcgs not released? Because the inode page cache uses some pages which are charged to those memcgs.

And we found that sometimes the inode (the log file's inode) is also charged to one memcg. The only way to release that memcg is to free the inode object (for example, by removing the log file).

No matter which allocator is used (SLAB or SLUB), the problem is always there.

After reviewing the code in SLAB, SLUB and memcg, I think the above general scenario may be a problem.

Thanks for replying
bytedance.net
段熊春
duanxiongchun@bytedance.com




> On Nov 22, 2018, at 12:27 AM, Michal Hocko <mhocko@kernel.org> wrote:
> 
> On Wed 21-11-18 17:36:51, 段熊春 wrote:
>> hi all:
>> 
>> In some cases, I think it may be a problem.
>> 
>> If I create a virtual netdev device under a mem cgroup (e.g. ip link add ve_A type veth peer name ve_B), and after that I destroy this mem cgroup.
> 
> Which object is charged to that memcg? If there is no relation to any
> task context then accounting to a memcg is problematic.
> 
> -- 
> Michal Hocko
> SUSE Labs


[-- Attachment #2: Type: text/html, Size: 3108 bytes --]


* Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-22  2:19                     ` 段熊春
@ 2018-11-22  7:32                       ` Michal Hocko
  0 siblings, 0 replies; 23+ messages in thread
From: Michal Hocko @ 2018-11-22  7:32 UTC (permalink / raw)
  To: 段熊春
  Cc: dong, Vladimir Davydov, Johannes Weiner, bugzilla-daemon,
	linux-mm, Andrew Morton

[Please do not top post]

On Thu 22-11-18 10:19:58, 段熊春 wrote:
> I have looked at the slab kmem_cache_alloc function, and I think the virtual netdevice object will be charged to a memcg,
> because slab_pre_alloc_hook will choose a kmem_cache that belongs to the current task's memcg.

Only for caches which opted in for kmem accounting SLAB_ACCOUNT or for
allocations with __GFP_ACCOUNT. Is this the case for the virtual
netdevice? I would check myself but I am not familiar with data
structures in this area.
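
Roughly, the two opt-in paths look like this (a hypothetical kernel fragment for illustration only; struct foo and the cache name are made up):

```c
/* Hypothetical sketch: the two ways an allocation opts in to kmem
 * accounting. Only allocations made through one of these paths are
 * charged to the current task's memcg. */
#include <linux/slab.h>

struct foo { int x; };

static struct kmem_cache *foo_cache;

static int example(void)
{
	/* Path 1: the whole cache opts in with SLAB_ACCOUNT. */
	foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
				      0, SLAB_ACCOUNT, NULL);
	struct foo *a = kmem_cache_alloc(foo_cache, GFP_KERNEL);

	/* Path 2: a single allocation opts in with __GFP_ACCOUNT. */
	struct foo *b = kmalloc(sizeof(*b), GFP_KERNEL | __GFP_ACCOUNT);

	kfree(b);
	kmem_cache_free(foo_cache, a);
	kmem_cache_destroy(foo_cache);
	return 0;
}
```

If the netdevice allocation path uses neither flag, the object should not be charged to a memcg at all.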

> If the virtual netdevice object is not destroyed by another command, it will still be charged to the memcg, and the memcg will stay in memory.

And that is why I've noted that charging objects which are not bound to
a user context and/or generally reclaimable under memory pressure are
not good candidates for kmem accounting.
-- 
Michal Hocko
SUSE Labs


* Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-22  2:56                     ` 段熊春
@ 2018-11-22  7:34                       ` Michal Hocko
  2018-11-22  8:21                         ` 段熊春
  2018-11-23  6:54                         ` 段熊春
  0 siblings, 2 replies; 23+ messages in thread
From: Michal Hocko @ 2018-11-22  7:34 UTC (permalink / raw)
  To: 段熊春
  Cc: dong, Vladimir Davydov, Johannes Weiner, bugzilla-daemon,
	linux-mm, Andrew Morton

On Thu 22-11-18 10:56:04, 段熊春 wrote:
> After a long dig, we found lots of offline but not-yet-released memcg objects eating a lot of memory.
> Why are these memcgs not released? Because the inode page cache uses pages that are charged to those memcgs.

As already explained these objects should be reclaimed under memory
pressure. If they are not then there is a bug. And Roman has fixed some
of those recently.

Which kernel version are you using?
-- 
Michal Hocko
SUSE Labs


* Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-22  7:34                       ` Michal Hocko
@ 2018-11-22  8:21                         ` 段熊春
  2018-11-23  6:54                         ` 段熊春
  1 sibling, 0 replies; 23+ messages in thread
From: 段熊春 @ 2018-11-22  8:21 UTC (permalink / raw)
  To: Michal Hocko
  Cc: dong, Vladimir Davydov, Johannes Weiner, bugzilla-daemon,
	linux-mm, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 855 bytes --]

4.9 and 4.14.

OK, I have checked the code; it does not use the __GFP_ACCOUNT flag.

I will double-check on the latest version.

Maybe there is something I do not know.

Thanks for replying

bytedance.net
段熊春
duanxiongchun@bytedance.com




> On Nov 22, 2018, at 3:34 PM, Michal Hocko <mhocko@kernel.org> wrote:
> 
> On Thu 22-11-18 10:56:04, 段熊春 wrote:
>> After a long dig, we found lots of offline but not-yet-released memcg objects eating a lot of memory.
>> Why are these memcgs not released? Because the inode page cache uses pages that are charged to those memcgs.
> 
> As already explained these objects should be reclaimed under memory
> pressure. If they are not then there is a bug. And Roman has fixed some
> of those recently.
> 
> Which kernel version are you using?
> -- 
> Michal Hocko
> SUSE Labs


[-- Attachment #2: Type: text/html, Size: 2392 bytes --]


* Re: [Bug 201699] New: kmemleak in memcg_create_kmem_cache
  2018-11-22  7:34                       ` Michal Hocko
  2018-11-22  8:21                         ` 段熊春
@ 2018-11-23  6:54                         ` 段熊春
  1 sibling, 0 replies; 23+ messages in thread
From: 段熊春 @ 2018-11-23  6:54 UTC (permalink / raw)
  To: Michal Hocko
  Cc: dong, Vladimir Davydov, Johannes Weiner, bugzilla-daemon,
	linux-mm, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 11072 bytes --]

I have double-checked on the newest version, 4.20-rc3.

I wrote a small test.


Test service code 

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main()
{
	int fd;

	/* Create /var/log/test; this descriptor is deliberately never
	 * closed explicitly. */
	fd = open("/var/log/test", O_RDWR|O_CREAT|O_APPEND|O_CLOEXEC, 0644);
	sleep(1);

	/* Append one page to /var/log/test.1; its page cache is charged
	 * to this service's memcg. */
	fd = open("/var/log/test.1", O_RDWR|O_CREAT|O_APPEND|O_CLOEXEC|O_SYNC, 0644);
	char log[4096] = {'a'};	/* first byte 'a', the rest zeroed */
	if (fd >= 0) {
		write(fd, log, sizeof(log));
		close(fd);
	}

	/* Exit non-zero so systemd (Restart=always) relaunches the service. */
	return 1;
}


Test.service 

[Service]
ExecStart=/usr/bin/test
Restart=always
RestartSec=100ms
MemoryLimit=1G
StartLimitInterval=0
[Install]
WantedBy=default.target

Probe code 

Get test.1 node address kretprobe  code 

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/seq_file.h>
#include <linux/proc_fs.h>
#include <linux/spinlock.h>

static struct kretprobe kprobe_ret_object = {
    .kp.symbol_name    = "d_lookup",
};

static int handler_d_lookup_pre(struct kretprobe_instance *p, struct pt_regs *regs)
{
	int *tmp;
	struct qstr * name =(struct qstr *)regs->si;
	tmp=(int *)p->data;
	*tmp=0;
	if(strcmp("test.1",name->name)==0)
		*tmp=1;
	return 0;
}

static int ret_handler_d_lookup_pre(struct kretprobe_instance *p,struct pt_regs *regs)
{
	int *tmp;
	struct dentry * tmp_dentry = (struct dentry *)regs_return_value(regs);
	tmp = (int *)p->data;
	if(*tmp == 1)
		printk(KERN_INFO "return dentry address %px,inode address %px\n",
			tmp_dentry,tmp_dentry->d_inode);
	return 0;
}
static int __init kprobe_init(void)
{
    int ret;
    kprobe_ret_object.entry_handler = handler_d_lookup_pre;
    kprobe_ret_object.handler = ret_handler_d_lookup_pre;
    kprobe_ret_object.maxactive = 0;
    kprobe_ret_object.data_size = sizeof(int);

    ret = register_kretprobe(&kprobe_ret_object);
    if (ret < 0) {
        printk(KERN_INFO "register_kprobe failed, returned %d\n", ret);
        return ret;
    }
    printk(KERN_INFO "Planted kprobe at %p\n", kprobe_ret_object.kp.addr);
    return 0;
}

static void __exit kprobe_exit(void)
{
    unregister_kretprobe(&kprobe_ret_object);
    printk(KERN_INFO "kprobe at %p unregistered\n", kprobe_ret_object.kp.addr);
}

module_init(kprobe_init)
module_exit(kprobe_exit)
MODULE_LICENSE("GPL");

Get unreleased mem_cgroup address

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/seq_file.h>
#include <linux/proc_fs.h>
#include <linux/spinlock.h>

static struct kretprobe css_alloc = {
    .kp.symbol_name    = "mem_cgroup_css_alloc",
};

static struct kprobe css_free = {
    .symbol_name    = "mem_cgroup_css_free",
};
static struct kprobe css_released = {
    .symbol_name    = "mem_cgroup_css_released",
};
static struct kprobe css_offline = {
    .symbol_name    = "mem_cgroup_css_offline",
};
static struct kprobe trycharge = {
    .symbol_name    = "page_counter_try_charge",
};
static struct kprobe charge = {
    .symbol_name    = "page_counter_charge",
};
static struct kprobe uncharge = {
    .symbol_name    = "page_counter_uncharge",
};

atomic_t cssalloc;
atomic_t cssfree;
atomic_t cssreleased;
atomic_t cssoffline;
static spinlock_t my_lock = __SPIN_LOCK_UNLOCKED();
void * css_addr=0;
void * memory_addr=0;

static int handler_trycharge(struct kprobe *p,struct pt_regs *regs)
{
    if (memory_addr == (void *)(regs->di)){
	printk(KERN_INFO"trycharge_memory %px nr %px",(void *)memory_addr,(void *)regs->si);
	spin_lock(&my_lock);
	dump_stack();
	spin_unlock(&my_lock);
    }
    return 0;
}
static int handler_charge(struct kprobe *p,struct pt_regs *regs)
{
    if (memory_addr == (void *)(regs->di)){
	printk(KERN_INFO"charge_memory %px,nr %px",(void *)memory_addr,(void *)regs->si);
	spin_lock(&my_lock);
	dump_stack();
	spin_unlock(&my_lock);
    }
    return 0;
}
static int handler_uncharge(struct kprobe *p,struct pt_regs *regs)
{
    if (memory_addr == (void *)(regs->di)){
	printk(KERN_INFO"uncharge_memory %px,nr %px",(void *)memory_addr,(void *)regs->si);
	spin_lock(&my_lock);
	dump_stack();
	spin_unlock(&my_lock);
    }
    return 0;
}

static int handler_cssalloc_pre(struct kretprobe_instance *p, struct pt_regs *regs)
{
    atomic_inc(&cssalloc);
    return 0;
}

static int ret_handler_cssalloc_pre(struct kretprobe_instance *p, struct pt_regs *regs)
{
	if (css_addr == 0) {
		css_addr = (void *)regs_return_value(regs);
		/* 192 is the hard-coded byte offset of the "memory"
		 * page_counter inside struct mem_cgroup on this build. */
		memory_addr = (void *)(regs_return_value(regs) + 192);
	}
	return 0;
}

static int handler_cssfree_pre(struct kprobe *p,struct pt_regs *regs)
{
   atomic_inc(&cssfree);
    if (css_addr == (void *)(regs->di))
	css_addr = 0;
    return 0;
}
static int handler_cssreleased_pre(struct kprobe *p,struct pt_regs *regs)
{
   atomic_inc(&cssreleased);
    return 0;
}
static int handler_cssoffline_pre(struct kprobe *p,struct pt_regs *regs)
{
   atomic_inc(&cssoffline);
    return 0;
}

static void handler_post(struct kprobe *p, struct pt_regs *regs,
                unsigned long flags)
{
}

static int handler_fault(struct kprobe *p, struct pt_regs *regs, int trapnr)
{
    return 0;
}

static int myleak_read(struct seq_file *m, void *v)
{
        seq_printf(m,"alloc %d  offline %d release %d free %d trace addr %px\n",atomic_read(&cssalloc),atomic_read(&cssoffline),
		atomic_read(&cssreleased),atomic_read(&cssfree),css_addr);
        return 0;
}

static int myleak_open(struct inode *inode, struct file *file)
{
        return single_open(file, myleak_read, NULL);
}

ssize_t myleak_write(struct file *filp,const char *buf,size_t count,loff_t *offp){
	css_addr = 0;
	return count;
}

static const struct file_operations myleak = {
        .open =myleak_open,
        .read = seq_read,
	.write = myleak_write,
        .llseek = seq_lseek,
        .release = single_release,
};

static int __init kprobe_init(void)
{
    int ret;
    css_alloc.entry_handler = handler_cssalloc_pre;
    css_alloc.handler = ret_handler_cssalloc_pre;
    css_alloc.maxactive = 0;

    css_free.pre_handler = handler_cssfree_pre;
    css_free.post_handler = handler_post;
    css_free.fault_handler = handler_fault;

    css_released.pre_handler = handler_cssreleased_pre;
    css_released.post_handler = handler_post;
    css_released.fault_handler = handler_fault;

    css_offline.pre_handler = handler_cssoffline_pre;
    css_offline.post_handler = handler_post;
    css_offline.fault_handler = handler_fault;

    trycharge.pre_handler = handler_trycharge;
    trycharge.post_handler = handler_post;
    trycharge.fault_handler = handler_fault;

    charge.pre_handler = handler_charge;
    charge.post_handler = handler_post;
    charge.fault_handler = handler_fault;

    uncharge.pre_handler = handler_uncharge;
    uncharge.post_handler = handler_post;
    uncharge.fault_handler = handler_fault;
    atomic_set(&cssalloc,0);
    atomic_set(&cssfree,0);
    atomic_set(&cssreleased,0);
    atomic_set(&cssoffline,0);

    ret = register_kretprobe(&css_alloc);
    if (ret < 0) {
        printk(KERN_INFO "register_kprobe failed, returned %d\n", ret);
        return ret;
    }
    ret = register_kprobe(&css_free);
    if (ret < 0) {
        printk(KERN_INFO "register_kprobe failed, returned %d\n", ret);
        return ret;
    }
    ret = register_kprobe(&css_released);
    if (ret < 0) {
        printk(KERN_INFO "register_kprobe failed, returned %d\n", ret);
        return ret;
    }
    ret = register_kprobe(&css_offline);
    if (ret < 0) {
        printk(KERN_INFO "register_kprobe failed, returned %d\n", ret);
        return ret;
    }
    ret = register_kprobe(&trycharge);
    if (ret < 0) {
        printk(KERN_INFO "register_kprobe failed, returned %d\n", ret);
        return ret;
    }
    ret = register_kprobe(&charge);
    if (ret < 0) {
        printk(KERN_INFO "register_kprobe failed, returned %d\n", ret);
        return ret;
    }
    ret = register_kprobe(&uncharge);
    if (ret < 0) {
        printk(KERN_INFO "register_kprobe failed, returned %d\n", ret);
        return ret;
    }
    proc_create("cgroup_leak", 0, NULL, &myleak);
    printk(KERN_INFO "Planted kprobe at %p\n", css_alloc.kp.addr);
    printk(KERN_INFO "Planted kprobe at %p\n", css_free.addr);
    printk(KERN_INFO "Planted kprobe at %p\n", css_released.addr);
    printk(KERN_INFO "Planted kprobe at %p\n", css_offline.addr);
    printk(KERN_INFO "Planted kprobe at %p\n", trycharge.addr);
    printk(KERN_INFO "Planted kprobe at %p\n", charge.addr);
    printk(KERN_INFO "Planted kprobe at %p\n", uncharge.addr);
    return 0;
}

static void __exit kprobe_exit(void)
{
    unregister_kretprobe(&css_alloc);
    unregister_kprobe(&css_free);
    unregister_kprobe(&css_released);
    unregister_kprobe(&css_offline);
    unregister_kprobe(&trycharge);
    unregister_kprobe(&charge);
    unregister_kprobe(&uncharge);
    printk(KERN_INFO "kprobe at %p unregistered\n", css_alloc.kp.addr);
    printk(KERN_INFO "kprobe at %p unregistered\n", css_free.addr);
    printk(KERN_INFO "kprobe at %p unregistered\n", css_released.addr);
    printk(KERN_INFO "kprobe at %p unregistered\n", css_offline.addr);
    printk(KERN_INFO "kprobe at %p unregistered\n", trycharge.addr);
    printk(KERN_INFO "kprobe at %p unregistered\n", charge.addr);
    printk(KERN_INFO "kprobe at %p unregistered\n", uncharge.addr);
    remove_proc_entry("cgroup_leak",NULL);
}

module_init(kprobe_init)
module_exit(kprobe_exit)
MODULE_LICENSE("GPL");




First delete /var/log/test and /var/log/test.1.

Then run the command systemctl start test; after three seconds, run systemctl stop test.

Then run a Python script that keeps /var/log/test.1 open:
import time
f = open("/var/log/test.1")
time.sleep(1000)

Then, in another console: echo 3 > /proc/sys/vm/drop_caches

After that, we find the mem_cgroup object is still not released.

If we kill the Python process and then run echo 3 > /proc/sys/vm/drop_caches again, the mem_cgroup is released.

I think this is because the inode of test.1 is held open by the Python process, so drop_caches cannot reclaim it.

I do not think this is a real bug, but programmers should care about the memory they use. :-)
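
The "open fd pins the inode" effect is easy to demonstrate from user space (a minimal, memcg-independent sketch; it only shows that the kernel keeps an inode alive while a descriptor references it):

```python
import os
import tempfile

# Create a file and keep a descriptor open to it.
fd, path = tempfile.mkstemp()
os.write(fd, b"pinned data")

os.unlink(path)                   # the directory entry is gone...
st = os.fstat(fd)                 # ...but the inode is still alive behind the fd
print(st.st_nlink, st.st_size)    # 0 links left, 11 bytes still accounted

os.close(fd)                      # only now can the kernel actually free the inode
```

In the same way, as long as the Python process above holds test.1 open, the inode (and everything charged through it) cannot be freed, regardless of drop_caches.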

Thanks for replying
bytedance.net
段熊春
duanxiongchun@bytedance.com




> On Nov 22, 2018, at 3:34 PM, Michal Hocko <mhocko@kernel.org> wrote:
> 
> On Thu 22-11-18 10:56:04, 段熊春 wrote:
>> After a long dig, we found lots of offline but not-yet-released memcg objects eating a lot of memory.
>> Why are these memcgs not released? Because the inode page cache uses pages that are charged to those memcgs.
> 
> As already explained these objects should be reclaimed under memory
> pressure. If they are not then there is a bug. And Roman has fixed some
> of those recently.
> 
> Which kernel version are you using?
> -- 
> Michal Hocko
> SUSE Labs


[-- Attachment #2: Type: text/html, Size: 24593 bytes --]


end of thread, other threads:[~2018-11-23  6:54 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <bug-201699-27@https.bugzilla.kernel.org/>
2018-11-15 21:06 ` [Bug 201699] New: kmemleak in memcg_create_kmem_cache Andrew Morton
2018-11-16  2:23   ` dong
2018-11-16  3:04     ` dong
2018-11-16  3:37       ` dong
2018-11-16 17:50   ` Vladimir Davydov
2018-11-18  0:44     ` dong
2018-11-19  8:30       ` Vladimir Davydov
2018-11-19 10:24         ` Michal Hocko
2018-11-19 11:56         ` dong
2018-11-21  8:46           ` dong
2018-11-21  8:56             ` Vladimir Davydov
2018-11-21  9:06               ` dong
2018-11-21  9:10             ` Michal Hocko
2018-11-21  9:22               ` dong
2018-11-21  9:36                 ` 段熊春
2018-11-21 16:27                   ` Michal Hocko
2018-11-22  2:19                     ` 段熊春
2018-11-22  7:32                       ` Michal Hocko
2018-11-22  2:56                     ` 段熊春
2018-11-22  7:34                       ` Michal Hocko
2018-11-22  8:21                         ` 段熊春
2018-11-23  6:54                         ` 段熊春
2018-11-21  8:52           ` Re: " Vladimir Davydov
