* [BUG] How to crash 4.9.2 x86_64: vmscan: shrink_slab
@ 2017-01-09 21:02 ` Sami Farin
  0 siblings, 0 replies; 6+ messages in thread
From: Sami Farin @ 2017-01-09 21:02 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm

# sysctl vm.vfs_cache_pressure=-100

kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6640827866535449472
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6640827866535450112
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-661702561611775889
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6640827866535442432
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6562613194205300197
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6640827866535439872
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-659655090764208789
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6564660665198832072
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6562613194351275164
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6562615996648922728
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6564660665198832072
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6562613194351264981
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-569296135781119076
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-565206492037048430
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-565212096665106188
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-569296135781119076
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-565206492037043196
kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-659660388715270673
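
I assume this is because super_cache_count() scales the raw dentry/inode
counts by vfs_cache_pressure before handing them to the shrinker, so a
negative setting wraps into the huge values above.  A minimal sketch of
that scaling, assuming the 4.9-era vfs_pressure_ratio() helper in
include/linux/dcache.h:

	/* sketch, not the exact 4.9 source */
	extern int sysctl_vfs_cache_pressure;	/* -100 after the sysctl above */

	static inline unsigned long vfs_pressure_ratio(unsigned long val)
	{
		/*
		 * val * pressure / 100: with a negative pressure the
		 * product wraps around as an unsigned long, so the
		 * shrinker is told to scan a huge number of objects
		 * (negative once printed as a signed long).
		 */
		return mult_frac(val, sysctl_vfs_cache_pressure, 100);
	}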


Alternatively,
# sysctl vm.vfs_cache_pressure=10000000
< allocate 6 GB of RAM on 16 GB system >
< start google-chrome-stable >
infinite loop in khugepaged → super_cache_scan

(this was with 4.9.1)

kernel: sysrq: SysRq : Show Regs
kernel: CPU: 2 PID: 353 Comm: khugepaged Tainted: G        W       4.9.1+ #79
kernel: Hardware name: System manufacturer System Product Name/P8Z68-V PRO GEN3, BIOS 3402 05/07/2012
kernel: task: ffffa2e8cc7d9500 task.stack: ffffabe040858000
kernel: RIP: 0010:[<ffffffffa210af9e>]  [<ffffffffa210af9e>] lock_acquire+0xee/0x180
kernel: RSP: 0018:ffffabe04085b860  EFLAGS: 00000286
kernel: RAX: ffffa2e8cc7d9500 RBX: 0000000000000286 RCX: d1055b5d00000000
kernel: RDX: 000000001113d196 RSI: 0000000003e5c7cd RDI: 0000000000000286
kernel: RBP: ffffabe04085b8b8 R08: 0000000000000000 R09: 0000000000000000
kernel: R10: 0000000032ec60fe R11: 0000000000000001 R12: 0000000000000000
kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
kernel: FS:  0000000000000000(0000) GS:ffffa2e8df100000(0000) knlGS:0000000000000000
kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
kernel: CR2: 0000371ebec2e004 CR3: 00000004033fb000 CR4: 00000000000406e0
kernel: Stack:
kernel: ffffffffa21c88ed ffffa2e800000000 0000000000000000 0000000000000286
kernel: 000000017ae9f878 ffffa2e87af7de98 ffffa2e87af7de80 ffffffffffffffff
kernel: ffffabe04085b9f8 0000000000000000 0000000000000000 ffffabe04085b8d8
kernel: Call Trace:
kernel: [<ffffffffa21c88ed>] ? __list_lru_count_one.isra.0+0x1d/0x80
kernel: [<ffffffffa294af63>] _raw_spin_lock+0x33/0x50
kernel: [<ffffffffa21c88ed>] ? __list_lru_count_one.isra.0+0x1d/0x80
kernel: [<ffffffffa21c88ed>] __list_lru_count_one.isra.0+0x1d/0x80
kernel: [<ffffffffa21c896e>] list_lru_count_one+0x1e/0x20
kernel: [<ffffffffa220f741>] super_cache_scan+0xa1/0x1a0
kernel: [<ffffffffa21aef6e>] shrink_slab.part.15+0x22e/0x4b0
kernel: [<ffffffffa21af21f>] shrink_slab+0x2f/0x40
kernel: [<ffffffffa21b2c2b>] shrink_node+0xeb/0x2e0
kernel: [<ffffffffa21b2ee7>] do_try_to_free_pages+0xc7/0x2d0
kernel: [<ffffffffa21b31be>] try_to_free_pages+0xce/0x210
kernel: [<ffffffffa21a32a8>] __alloc_pages_nodemask+0x538/0xd60
kernel: [<ffffffffa21fdc33>] khugepaged+0x3a3/0x24a0
kernel: [<ffffffffa21051a0>] ? wake_atomic_t_function+0x50/0x50
kernel: [<ffffffffa21fd890>] ? collapse_shmem.isra.8+0xb00/0xb00
kernel: [<ffffffffa20e29f0>] kthread+0xe0/0x100
kernel: [<ffffffffa20e2910>] ? kthread_park+0x60/0x60
kernel: [<ffffffffa294bb45>] ret_from_fork+0x25/0x30
kernel: Code: 04 24 48 8b 7d d0 49 83 f0 01 41 83 e0 01 e8 aa f2 ff ff 48 89 df 65 48 8b 04 25 00 d4 00 00 c7 80 0c 07 00 00 00 00 00 00 57 9d <66> 66 90 66 90 48 83 c4 30 5b 41 5c 41 5d 41 5e 41 5f 5d c3 65 

-- 
Do what you love because life is too short for anything else.
https://samifar.in/


* Re: [BUG] How to crash 4.9.2 x86_64: vmscan: shrink_slab
  2017-01-09 21:02 ` Sami Farin
@ 2017-01-10  9:22   ` Michal Hocko
  -1 siblings, 0 replies; 6+ messages in thread
From: Michal Hocko @ 2017-01-10  9:22 UTC (permalink / raw)
  To: Sami Farin; +Cc: linux-kernel, linux-mm

On Mon 09-01-17 23:02:10, Sami Farin wrote:
> # sysctl vm.vfs_cache_pressure=-100
> 
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6640827866535449472
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6640827866535450112
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-661702561611775889
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6640827866535442432
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6562613194205300197
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6640827866535439872
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-659655090764208789
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6564660665198832072
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6562613194351275164
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6562615996648922728
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6564660665198832072
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6562613194351264981
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-569296135781119076
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-565206492037048430
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-565212096665106188
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-569296135781119076
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-565206492037043196
> kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-659660388715270673
> 
> 
> Alternatively,
> # sysctl vm.vfs_cache_pressure=10000000

Both values are insane and admins do not do insane things to their
machines, do they?

I am not sure how much we want to check the input value. -100 is clearly
bogus and 

		.procname	= "vfs_cache_pressure",
		.data		= &sysctl_vfs_cache_pressure,
		.maxlen		= sizeof(sysctl_vfs_cache_pressure),
		.mode		= 0644,
		.proc_handler	= &proc_dointvec,
		.extra1		= &zero,

tries to enforce a minimum (extra1) check, except that proc_dointvec
doesn't care about it... This is news to me. Only proc_dointvec_minmax
seems to honor extra*.
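
Something like the following should enforce the documented minimum
(untested sketch): simply switch the handler to proc_dointvec_minmax,
which does honor extra1/extra2:

		.procname	= "vfs_cache_pressure",
		.data		= &sysctl_vfs_cache_pressure,
		.maxlen		= sizeof(sysctl_vfs_cache_pressure),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &zero,

That should make a write like -100 fail with EINVAL instead of being
stored.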
-- 
Michal Hocko
SUSE Labs


* Re: [BUG] How to crash 4.9.2 x86_64: vmscan: shrink_slab
  2017-01-10  9:22   ` Michal Hocko
@ 2017-01-10 10:32     ` Sami Farin
  -1 siblings, 0 replies; 6+ messages in thread
From: Sami Farin @ 2017-01-10 10:32 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm

On Tue, Jan 10, 2017 at 10:22:41 +0100, Michal Hocko wrote:
> On Mon 09-01-17 23:02:10, Sami Farin wrote:
> > # sysctl vm.vfs_cache_pressure=-100
> > 
> > kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6640827866535449472
> > kernel: vmscan: shrink_slab: super_cache_scan+0x0/0x1a0 negative objects to delete nr=-6640827866535450112
...
> > 
> > 
> > Alternatively,
> > # sysctl vm.vfs_cache_pressure=10000000
> 
> Both values are insane and admins do not do insane things to their
> machines, do they?

Not on purpose, unless they are insane :)

Docs say:
"Increasing vfs_cache_pressure significantly beyond 100 may have
negative performance impact."
Nothing about crashing.

But anyway, the problem I originally had was this:
with vm.vfs_cache_pressure=0, dentry/inode caches are reclaimed
at an alarming rate, and when I then e.g. rescan my quodlibet media
directory (only 30000 files), that takes a lot of time.  I only download
some files for a minute and the dentry/inode caches get reclaimed,
or so it seems.  Still, SReclaimable keeps increasing; when it gets to
about 6 GB, I increase vm.vfs_cache_pressure...

-- 
Do what you love because life is too short for anything else.
https://samifar.in/
