* Slab memory usage
From: Poul Petersen @ 2009-04-24 23:01 UTC
  To: xfs

	I'm running Debian Lenny with kernel 2.6.26-1-amd64 and
xfsprogs-2.9.8-1. I've been having a problem with the amount of slab
memory that XFS seems to be consuming when running an rsync backup job,
a du, or other filesystem-intensive programs. Below is an example of
the output of slabtop and /proc/meminfo. I'm running a tool that
monitors free memory, and it starts generating alerts, though I
don't blame it when the slab is running at 50% of memory!

	When the process finishes, the memory usually frees up over a period  
of several hours. However, on a similar system, even 24 hours after  
the rsync job finished, the slab never freed up. On that machine, if I  
run:

echo 2 > /proc/sys/vm/drop_caches

	Then the slab goes down to something more like 1% or 2% of system
RAM. Any ideas what is causing this behaviour, and how I might
alleviate it?
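
	(For reference, Documentation/sysctl/vm.txt describes the
drop_caches values as: 1 drops the pagecache, 2 drops dentries and
inodes, and 3 drops both. A quick way to watch how much of the slab is
actually reclaimable is

grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo

which is what the Slab/SReclaimable/SUnreclaim lines in the meminfo
dump below show.)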

Thanks,

-poul

slabtop
=======

  Active / Total Objects (% used)    : 7684622 / 7875871 (97.6%)
  Active / Total Slabs (% used)      : 720661 / 720662 (100.0%)
  Active / Total Caches (% used)     : 105 / 176 (59.7%)
  Active / Total Size (% used)       : 2683658.81K / 2702989.38K (99.3%)
  Minimum / Average / Maximum Object : 0.02K / 0.34K / 4096.00K

   OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
1933952 1933787  99%    0.44K 241744        8    966976K xfs_inode
1933918 1933787  99%    0.56K 276274        7   1105096K xfs_vnode
1361008 1359980  99%    0.20K  71632       19    286528K dentry
1311360 1309586  99%    0.12K  43712       30    174848K size-128
1030770 1030548  99%    0.25K  68718       15    274872K size-256

/proc/meminfo
=============
MemTotal:      8265368 kB
MemFree:         54160 kB
Buffers:          4252 kB
Cached:         416880 kB
SwapCached:     166676 kB
Active:        4999204 kB
Inactive:       278028 kB
SwapTotal:     1951856 kB
SwapFree:      1761736 kB
Dirty:             324 kB
Writeback:           0 kB
AnonPages:     4855084 kB
Mapped:          12632 kB
Slab:          2883536 kB
SReclaimable:  2411420 kB
SUnreclaim:     472116 kB
PageTables:     870112 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
WritebackTmp:        0 kB
CommitLimit:   6084540 kB
Committed_AS:  5172208 kB
VmallocTotal: 34359738367 kB
VmallocUsed:     90988 kB
VmallocChunk: 34359647323 kB
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
HugePages_Surp:      0
Hugepagesize:     2048 kB


* Re: Slab memory usage
From: Eric Sandeen @ 2009-04-25  1:01 UTC
  To: Poul Petersen; +Cc: xfs

Poul Petersen wrote:
> 	I'm running Debian Lenny with kernel 2.6.26-1-amd64 and
> xfsprogs-2.9.8-1. I've been having a problem with the amount of slab
> memory that XFS seems to be consuming when running an rsync backup job,
> a du, or other filesystem-intensive programs. Below is an example of
> the output of slabtop and /proc/meminfo. I'm running a tool that
> monitors free memory, and it starts generating alerts, though I
> don't blame it when the slab is running at 50% of memory!
> 
> 	When the process finishes, the memory usually frees up over a period  
> of several hours. However, on a similar system, even 24 hours after  
> the rsync job finished, the slab never freed up. On that machine, if I  
> run:
> 
> echo 2 > /proc/sys/vm/drop_caches
> 
> 	Then the slab goes down to something more like 1% or 2% of system
> RAM. Any ideas what is causing this behaviour, and how I might
> alleviate it?
> 
> Thanks,
> 
> -poul
> 
> slabtop
> =======
> 
>   Active / Total Objects (% used)    : 7684622 / 7875871 (97.6%)
>   Active / Total Slabs (% used)      : 720661 / 720662 (100.0%)
>   Active / Total Caches (% used)     : 105 / 176 (59.7%)
>   Active / Total Size (% used)       : 2683658.81K / 2702989.38K (99.3%)
>   Minimum / Average / Maximum Object : 0.02K / 0.34K / 4096.00K
> 
>    OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
> 1933952 1933787  99%    0.44K 241744        8    966976K xfs_inode
> 1933918 1933787  99%    0.56K 276274        7   1105096K xfs_vnode
> 1361008 1359980  99%    0.20K  71632       19    286528K dentry


o/~ the dentry's connected to the ... v-node, the v-node's connected to
the ... i-node .... o/~

This is really mostly the Linux VFS hanging onto the dentries, which in
turn pins the inodes and the related XFS structures.

But your memory is there for caching, most of the time; if it's not
(mostly) in use, it's wasted.  If the memory is needed for other
purposes, the VFS frees the cached dentries, which in turn frees the
related structures.  This really isn't necessarily indicative of a problem.
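
If it's mainly the monitoring tool that's complaining, one option is to
teach it to count reclaimable slab as available memory, something like
this (untested, just summing the relevant /proc/meminfo fields):

awk '/^MemFree|^Buffers|^Cached|^SReclaimable/ {sum+=$2} END {print sum " kB roughly available"}' /proc/meminfo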

There are some tunables* you could play with to change this behavior if
you like, but unless you are actually seeing performance problems, I
wouldn't be too concerned.

-Eric


*from Documentation/sysctl/vm.txt:

vfs_cache_pressure
------------------

Controls the tendency of the kernel to reclaim the memory which is used
for caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt
to reclaim dentries and inodes at a "fair" rate with respect to
pagecache and swapcache reclaim.  Decreasing vfs_cache_pressure causes
the kernel to prefer to retain dentry and inode caches.  Increasing
vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim
dentries and inodes.
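
If you do want to experiment with it, the knob lives under /proc/sys/vm
and can also be set via sysctl; something like this should work (50 is
just an example value, not a recommendation):

echo 50 > /proc/sys/vm/vfs_cache_pressure
# or, equivalently:
sysctl -w vm.vfs_cache_pressure=50
# add "vm.vfs_cache_pressure = 50" to /etc/sysctl.conf to make it persistent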


* Re: Slab memory usage
From: Michael Monnerie @ 2009-04-26 21:51 UTC
  To: xfs

On Saturday 25 April 2009 Eric Sandeen wrote:
> *from Documentation/sysctl/vm.txt:
>
> vfs_cache_pressure
> ------------------
>
> Controls the tendency of the kernel to reclaim the memory which is
> used for caching of directory and inode objects.
>
> At the default value of vfs_cache_pressure=100 the kernel will
> attempt to reclaim dentries and inodes at a "fair" rate with respect
> to pagecache and swapcache reclaim.  Decreasing vfs_cache_pressure
> causes the kernel to prefer to retain dentry and inode caches.
>  Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer
> to reclaim dentries and inodes.

So if I decrease it, let's say to 60, Linux prefers to remember
files/dirs over their content, and an increase to 150 would mean Linux
prefers to keep file contents over dirs/files?

If so, I think for a fileserver with many users accessing many
dirs/files, I'd prefer a lower value, in order to avoid re-scanning
directories from disk. Disk contents can be read fast, with all the
read-ahead caching of disks/controllers and Linux itself, but the
scattered dirs sometimes take loooong to scan. (Example: a photo
collection with 50,000 files in many dirs.) Am I right?
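
(For reference, I'd check the current setting first with

sysctl vm.vfs_cache_pressure

before changing anything; the default is 100, as quoted above.)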

mfg zmi
-- 
// Michael Monnerie, Ing.BSc    -----      http://it-management.at
// Tel: 0660 / 415 65 31                      .network.your.ideas.
// PGP Key:         "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38  500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net                  Key-ID: 1C1209B4


* Re: Slab memory usage
From: Josef 'Jeff' Sipek @ 2009-04-27  8:40 UTC
  To: Michael Monnerie; +Cc: xfs

On Sun, Apr 26, 2009 at 11:51:22PM +0200, Michael Monnerie wrote:
> On Saturday 25 April 2009 Eric Sandeen wrote:
> > *from Documentation/sysctl/vm.txt:
> >
> > vfs_cache_pressure
> > ------------------
> >
> > Controls the tendency of the kernel to reclaim the memory which is
> > used for caching of directory and inode objects.
> >
> > At the default value of vfs_cache_pressure=100 the kernel will
> > attempt to reclaim dentries and inodes at a "fair" rate with respect
> > to pagecache and swapcache reclaim.  Decreasing vfs_cache_pressure
> > causes the kernel to prefer to retain dentry and inode caches.
> >  Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer
> > to reclaim dentries and inodes.
> 
> So if I decrease it, let's say to 60, Linux prefers to remember
> files/dirs over their content, and an increase to 150 would mean Linux
> prefers to keep file contents over dirs/files?

Yep, that's right.

> If so, I think for a fileserver with many users accessing many
> dirs/files, I'd prefer a lower value, in order to avoid re-scanning
> directories from disk. Disk contents can be read fast, with all the
> read-ahead caching of disks/controllers and Linux itself, but the
> scattered dirs sometimes take loooong to scan. (Example: a photo
> collection with 50,000 files in many dirs.) Am I right?

The approximate answer is: it depends on the frequency of metadata
reads vs. data reads. Your reasoning is fine if whoever accesses the
photo collection does not frequently read the photos themselves.

The best answer is: benchmark it with the exact workload you have to
deal with.
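
For example, something along these lines would give a rough cold- vs.
warm-cache number for the metadata-heavy part (the path is made up,
adjust it to your setup):

echo 3 > /proc/sys/vm/drop_caches               # start cold: drop pagecache + dentries/inodes
time find /export/photos -type f > /dev/null    # cold-cache directory scan
time find /export/photos -type f > /dev/null    # same scan again, warm cache

and then compare that against how long your actual data reads take
under the vfs_cache_pressure settings you're considering.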

Josef 'Jeff' Sipek.

-- 
Intellectuals solve problems; geniuses prevent them
		- Albert Einstein

