linux-kernel.vger.kernel.org archive mirror
* Mem: and Swap: lines in /proc/meminfo
@ 2003-12-09  0:00 Mike Fedyk
  2003-12-11 22:02 ` Rik van Riel
  0 siblings, 1 reply; 22+ messages in thread
From: Mike Fedyk @ 2003-12-09  0:00 UTC (permalink / raw)
  To: linux-kernel

Hi guys,

I'm working on a script that reads /proc/meminfo, and it has two modes for
parsing the file.

One reads the first column to get the name and stores the value in an
array; the other just reads the Mem: and Swap: lines and parses the info
out of them.
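
Roughly like this (a trimmed sketch of the two modes):

while (<IN>)
{
	if (/^(\w+):\s*(\d+)\s+kB/i)
	{
		$mems{$1} = $2 * 1024;	# new style: one "Name: value kB" per line
	}
	elsif (/^Mem:/)
	{
		@memline = split;	# old style: "Mem: total used free ..."
	}
	elsif (/^Swap:/)
	{
		@swapline = split;	# old style: "Swap: total used free"
	}
}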

Now I need to change the order (it is using Mem: and Swap: first, and the
other more thorough method second), but I'm wondering what versions of the
kernel I'd be cutting out if I just removed the parsing of Mem: and Swap:...

All of the systems I have available are running 2.4, and instead of booting
a bunch of 2.2 (or even 2.0 or earlier) kernels, maybe some lkml readers
can recall from memory? :)

TIA,

Mike

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Mem: and Swap: lines in /proc/meminfo
  2003-12-09  0:00 Mem: and Swap: lines in /proc/meminfo Mike Fedyk
@ 2003-12-11 22:02 ` Rik van Riel
  2003-12-11 22:23   ` Mike Fedyk
  0 siblings, 1 reply; 22+ messages in thread
From: Rik van Riel @ 2003-12-11 22:02 UTC (permalink / raw)
  To: Mike Fedyk; +Cc: linux-kernel

On Mon, 8 Dec 2003, Mike Fedyk wrote:

> Now I need to change the order (it is using Mem: and Swap: first, and the
> other more thorough method second), but I'm wondering what versions of the
> kernel I'd be cutting out if I just removed the parsing of Mem: and Swap:...

IIRC 2.2 kernels already had the one-value-per-line
memory statistics, so you'd only lose 2.0 and earlier.
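
(For reference, a 2.0-era /proc/meminfo consists of just the summary block,
something like this -- layout from memory, values in bytes:

        total:    used:    free:  shared: buffers:  cached:
Mem:  64655360 62111744  2543616 19943424  1425408 33427456
Swap: 68157440  3915776 64241664

while 2.2 and 2.4 print that block too, followed by the one-value-per-line
statistics.)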

-- 
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Mem: and Swap: lines in /proc/meminfo
  2003-12-11 22:02 ` Rik van Riel
@ 2003-12-11 22:23   ` Mike Fedyk
  2003-12-11 22:42     ` Rik van Riel
  0 siblings, 1 reply; 22+ messages in thread
From: Mike Fedyk @ 2003-12-11 22:23 UTC (permalink / raw)
  To: Rik van Riel; +Cc: linux-kernel

On Thu, Dec 11, 2003 at 05:02:01PM -0500, Rik van Riel wrote:
> On Mon, 8 Dec 2003, Mike Fedyk wrote:
> 
> > Now I need to change the order (it is using Mem: and Swap: first, and the
> > other more thorough method second), but I'm wondering what versions of the
> > kernel I'd be cutting out if I just removed the parsing of Mem: and Swap:...
> 
> IIRC 2.2 kernels already had the one-value-per-line
> memory statistics, so you'd only lose 2.0 and earlier.

Ahh, great.  I'll change the ordering.  Should help clean up the code a bit
and make adding the features I want easier. :)

Another question:

Inact_dirty:     21516 kB
Inact_laundry:   65612 kB
Inact_clean:     19812 kB

These three are separate lists in rmap, and together they are equal to
"Inactive:" in the -aa vm.

Inact_target:   150080 kB

This doesn't account for any memory; it's only the size the VM is trying
to reach for the sum of the three lists above.

Do I have that right?

I'm going to graph active and inactive for lrrd, and I need to know how to
map the different values when it is run on an rmap kernel.
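
Something like this, I think (a sketch; the key names are from the rmap
/proc/meminfo above):

my $inactive;
if (exists $mems{Inactive})
{
	# -aa / mainline style
	$inactive = $mems{Inactive};
}
else
{
	# rmap style: sum the three inactive lists
	$inactive = $mems{Inact_dirty} + $mems{Inact_laundry}
		+ $mems{Inact_clean};
}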

Thanks.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Mem: and Swap: lines in /proc/meminfo
  2003-12-11 22:23   ` Mike Fedyk
@ 2003-12-11 22:42     ` Rik van Riel
  2003-12-11 23:05       ` Mike Fedyk
  0 siblings, 1 reply; 22+ messages in thread
From: Rik van Riel @ 2003-12-11 22:42 UTC (permalink / raw)
  To: Mike Fedyk; +Cc: linux-kernel

On Thu, 11 Dec 2003, Mike Fedyk wrote:

> Inact_dirty:     21516 kB
> Inact_laundry:   65612 kB
> Inact_clean:     19812 kB
> 
> These three are separate lists in rmap, and together they are equal to
> "Inactive:" in the -aa vm.

I should add an Inactive: line to -rmap that sums up all
3, to make it a bit easier on programs parsing /proc.

Note that the inactive clean pages count (more or less)
as free pages, too.

> Inact_target:   150080 kB
> 
> This doesn't account for any memory; it's only the size the VM is trying
> to reach for the sum of the three lists above.
> 
> Do I have that right?

Yes, you're completely right.

-- 
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Mem: and Swap: lines in /proc/meminfo
  2003-12-11 22:42     ` Rik van Riel
@ 2003-12-11 23:05       ` Mike Fedyk
  2003-12-12  0:41         ` shm Rob Roschewsk
                           ` (2 more replies)
  0 siblings, 3 replies; 22+ messages in thread
From: Mike Fedyk @ 2003-12-11 23:05 UTC (permalink / raw)
  To: Rik van Riel; +Cc: linux-kernel

On Thu, Dec 11, 2003 at 05:42:46PM -0500, Rik van Riel wrote:
> On Thu, 11 Dec 2003, Mike Fedyk wrote:
> 
> > Inact_dirty:     21516 kB
> > Inact_laundry:   65612 kB
> > Inact_clean:     19812 kB
> > 
> > These three are separate lists in rmap, and together they are equal to
> > "Inactive:" in the -aa vm.
> 
> I should add an Inactive: line to -rmap that sums up all
> 3, to make it a bit easier on programs parsing /proc.
> 

ISTR asking for this a while ago ;)

Yes, please do add that Inactive: line to rmap. :)

> Note that the inactive clean pages count (more or less)
> as free pages, too.
> 

But I should count it as "Inactive" right?

So if it's clean, has the page already been zeroed out, and is it ready
to be used but just needs some flags updated?  Or does it contain possibly
useful data and just isn't dirty?  So a page that is inactive but not
dirty will go directly onto that list?

What can happen to Inact_clean pages besides being freed, and used on the
free memory list?

> > Inact_target:   150080 kB
> > 
> > This doesn't account for any memory; it's only the size the VM is trying
> > to reach for the sum of the three lists above.
> > 
> > Do I have that right?
> 
> Yes, you're completely right.

Great. :)

^ permalink raw reply	[flat|nested] 22+ messages in thread

* shm
  2003-12-11 23:05       ` Mike Fedyk
@ 2003-12-12  0:41         ` Rob Roschewsk
  2003-12-12 12:00         ` Mem: and Swap: lines in /proc/meminfo Rik van Riel
                           ` (1 subsequent sibling)
  2 siblings, 0 replies; 22+ messages in thread
From: Rob Roschewsk @ 2003-12-12  0:41 UTC (permalink / raw)
  To: linux-kernel

Hi All,
    I need some pointers for using shared memory... if this isn't the
place to ask, then please suggest another.

Assume I have all of the physical RAM I need... I want to be able to use
the maximum amount of shared memory for memory-mapped files.  I'm guessing
the largest SHM segment I can expect is 4GB... correct?

What else competes for shared memory?  (ramdisks?  initrd?)

Any help would be appreciated.

Thanks,

Rob


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Mem: and Swap: lines in /proc/meminfo
  2003-12-11 23:05       ` Mike Fedyk
  2003-12-12  0:41         ` shm Rob Roschewsk
@ 2003-12-12 12:00         ` Rik van Riel
  2003-12-12 18:12           ` Mike Fedyk
  2003-12-17  1:12         ` [PATCH 2.4 Rmap] Add Inactive to /proc/meminfo was: Mem: and Swap: lines in /proc/meminfo Mike Fedyk
  2 siblings, 1 reply; 22+ messages in thread
From: Rik van Riel @ 2003-12-12 12:00 UTC (permalink / raw)
  To: Mike Fedyk; +Cc: linux-kernel

On Thu, 11 Dec 2003, Mike Fedyk wrote:

> > Note that the inactive clean pages count (more or less)
> > as free pages, too.
> 
> But I should count it as "Inactive" right?

Yeah.

> So if it's clean, has the page already been zeroed out, and is it ready
> to be used but just needs some flags updated?  Or does it contain
> possibly useful data and just isn't dirty?

The latter.

> So a page that is inactive but not dirty will go directly onto that
> list?

No, LRU ordering is preserved.  The inactive clean list is just the
last stage before the page really gets freed.

> What can happen to Inact_clean pages besides being freed, and used on
> the free memory list?

The data that's still in the page could be referenced again, in which
case the page gets moved to the inactive dirty list and from there on
to the active list.

In effect, the inactive clean list is a "soft free" list, which means
we can keep a larger number of pages almost-free, without wasting
memory.
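
For graphing purposes you could therefore count them along these lines
(just a sketch, using the names from your rmap /proc/meminfo):

	my $free_ish = $mems{MemFree} + ($mems{Inact_clean} || 0);

On kernels without the rmap lists, the Inact_clean term simply drops out.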


-- 
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Mem: and Swap: lines in /proc/meminfo
  2003-12-12 12:00         ` Mem: and Swap: lines in /proc/meminfo Rik van Riel
@ 2003-12-12 18:12           ` Mike Fedyk
  2003-12-13  3:23             ` More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo) Mike Fedyk
  0 siblings, 1 reply; 22+ messages in thread
From: Mike Fedyk @ 2003-12-12 18:12 UTC (permalink / raw)
  To: Rik van Riel; +Cc: linux-kernel

On Fri, Dec 12, 2003 at 07:00:30AM -0500, Rik van Riel wrote:
> On Thu, 11 Dec 2003, Mike Fedyk wrote:
> 
> > > Note that the inactive clean pages count (more or less)
> > > as free pages, too.
> > 
> > But I should count it as "Inactive" right?
> 
> Yeah.

OK.

> > What can happen to Inact_clean pages besides being freed, and used on
> > the free memory list?
> 
> The data that's still in the page could be referenced again, in which
> case the page gets moved to the inactive dirty list and from there on
> to the active list.
> 
> In effect, the inactive clean list is a "soft free" list, which means
> we can keep a larger number of pages almost-free, without wasting
> memory.
> 

So it doesn't have to be dirty to go in the dirty list, only referenced?
What about Inact_laundry?

^ permalink raw reply	[flat|nested] 22+ messages in thread

* More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo)
  2003-12-12 18:12           ` Mike Fedyk
@ 2003-12-13  3:23             ` Mike Fedyk
  2003-12-13 17:54               ` Rik van Riel
  0 siblings, 1 reply; 22+ messages in thread
From: Mike Fedyk @ 2003-12-13  3:23 UTC (permalink / raw)
  To: Rik van Riel; +Cc: linux-kernel

VmallocUsed is being reported in /proc/meminfo in 2.6 now.

Is VmallocUsed contained within any of the other memory reported below?

How can I get VmallocUsed from userspace in earlier kernels (2.[024])?

And the same questions with PageTables too. :)

Are Dirty: and Writeback: counted in Inactive: or are they separate?

Does Mapped: include all files mmap()ed, or only the executable ones?

MemTotal:       514880 kB
MemFree:        268440 kB
Buffers:         10736 kB
Cached:          98064 kB
SwapCached:          0 kB
Active:         161732 kB
Inactive:        54756 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:       514880 kB
LowFree:        268440 kB
SwapTotal:      627024 kB
SwapFree:       627024 kB
Dirty:              48 kB
Writeback:           0 kB
Mapped:         155292 kB
Slab:            16712 kB
Committed_AS:   288808 kB
PageTables:       1816 kB
VmallocTotal:   507896 kB
VmallocUsed:     26472 kB
VmallocChunk:   481176 kB


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo)
  2003-12-13  3:23             ` More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo) Mike Fedyk
@ 2003-12-13 17:54               ` Rik van Riel
  2003-12-14  1:44                 ` Mike Fedyk
  0 siblings, 1 reply; 22+ messages in thread
From: Rik van Riel @ 2003-12-13 17:54 UTC (permalink / raw)
  To: Mike Fedyk; +Cc: linux-kernel

On Fri, 12 Dec 2003, Mike Fedyk wrote:

> VmallocUsed is being reported in /proc/meminfo in 2.6 now.
> 
> Is VmallocUsed contained within any of the other memory reported below?

No.

> How can I get VmallocUsed from userspace in earlier kernels (2.[024])?

You can't.

> And the same questions with PageTables too. :)

Same answers ;)

Maybe I should send marcelo a patch to export the PageTables
number in /proc somewhere ?

> Are Dirty: and Writeback: counted in Inactive: or are they separate?

They're statistics unrelated to active/inactive, and they will
overlap with active/inactive.

> Does Mapped: include all files mmap()ed, or only the executable ones?

Mapped: includes all mmap()ed pages, regardless of executable
status.

-- 
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo)
  2003-12-13 17:54               ` Rik van Riel
@ 2003-12-14  1:44                 ` Mike Fedyk
  2003-12-15  0:17                   ` Rik van Riel
  0 siblings, 1 reply; 22+ messages in thread
From: Mike Fedyk @ 2003-12-14  1:44 UTC (permalink / raw)
  To: Rik van Riel; +Cc: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 2150 bytes --]

On Sat, Dec 13, 2003 at 12:54:19PM -0500, Rik van Riel wrote:
> On Fri, 12 Dec 2003, Mike Fedyk wrote:
> 
> > VmallocUsed is being reported in /proc/meminfo in 2.6 now.
> > 
> > Is VmallocUsed contained within any of the other memory reported below?
> 
> No.

OK, thanks.

> 
> > How can I get VmallocUsed from userspace in earlier kernels (2.[024])?
> 
> You can't.

OK, 2.6 only then...

> 
> > And the same questions with PageTables too. :)
> 
> Same answers ;)
> 
> Maybe I should send marcelo a patch to export the PageTables
> number in /proc somewhere ?

Yes!  Please do.  /proc/meminfo hopefully. :)

> > Are Dirty: and Writeback: counted in Inactive: or are they separate?
> 
> They're statistics unrelated to active/inactive, and they will
> overlap with active/inactive.

Do they count anonymous memory, or are they strictly dirty/writeback
pagecache?

> 
> > Does Mapped: include all files mmap()ed, or only the executable ones?
> 
> Mapped: includes all mmap()ed pages, regardless of executable
> status.

Is mmap() always pagecache backed, or can it be backed with anonymous
memory?  IE, can I subtract mapped from pagecache?

I have this excerpt from a perl script (attached) that feeds data into lrrd,
a rrd-tool based graphing system.

"apps.value" is basically everything that's not counted in /proc/meminfo
(except for free, of course), and on 2.4 and below, it ends up showing a
larger number than should be correct, since pagetables, and several other
types of memory overhead are not reported.

print "apps.value ", $mems{MemTotal}-$mems{MemFree}-$mems{Buffers}-$mems{Cached}-$mems{Slab}, "\n";
print "free.value ", $mems{MemFree}, "\n";
print "buffers.value ", $mems{Buffers}, "\n";
print "cached.value ", $mems{Cached}, "\n";
print "swap.value ", $mems{SwapTotal}-$mems{SwapFree}, "\n";
print "slab.value ", $mems{Slab}, "\n";

Check out http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=223346 for one of
the enhancements I'm making to the graphs.

I'd love to find a more accurate way to get the amount of memory used for
apps, short of reading the output of ps and doing calculations on RSS,
VIRTUAL, and SHARED...
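
(The crude fallback I know of is summing VmRSS over /proc/*/status -- a
sketch, which overcounts shared pages since every process counts them
again:

my $rss_total = 0;
foreach my $f (glob "/proc/[0-9]*/status")
{
	open (ST, $f) || next;	# the process may have exited already
	while (<ST>)
	{
		$rss_total += $1 * 1024 if /^VmRSS:\s*(\d+)\s+kB/i;
	}
	close (ST);
}

...so a kernel-provided number would be much better.)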

Thanks,

Mike

[-- Attachment #2: memory --]
[-- Type: text/plain, Size: 2935 bytes --]

#!/usr/bin/perl -w
#
# Plugin to monitor memory usage.
#
# Slab cache memory checking added by Mike Fedyk
#
# Parameters:
#
# 	config   (required)
# 	autoconf (optional - only used by lrrd-config)
#
# Magic markers (optional - only used by lrrd-config and some
# installation scripts):
#%# family=auto
#%# capabilities=autoconf


if ($ARGV[0] and $ARGV[0] eq "autoconf")
{
	if (-r "/proc/meminfo" && -r "/proc/slabinfo")
	{
		print "yes\n";
		exit 0;
	}
	else
	{
		print "no (/proc/meminfo or /proc/slabinfo) found\n";
		exit 1;
	}
}

my %mems;
&fetch_meminfo;

if ($ARGV[0] and $ARGV[0] eq "config")
{
	print "graph_args --base 1024 -l 0 --vertical-label Bytes --upper-limit ", $mems{'MemTotal'}, "\n";
	print "graph_title Memory usage\n";
	print "graph_order apps slab swap_cache cached buffers free swap\n";
	print "apps.label apps\n";
	print "apps.draw AREA\n";
	print "buffers.label buffers\n";
	print "buffers.draw STACK\n";
	print "slab.label slab_cache\n";
	print "slab.draw STACK\n";
	print "swap.label swap\n";
	print "swap.draw STACK\n";
	print "cached.label cache\n";
	print "cached.draw STACK\n";
	print "free.label unused\n";
	print "free.draw STACK\n";
	if (exists $mems{'SwapCached'})
	{
		print "swap_cache.label swap_cache\n";
		print "swap_cache.draw STACK\n";
	}
	if (exists $mems{'Committed_AS'})
	{
		print "committed.label committed\n";
		print "committed.draw LINE2\n";
		print "committed.warn ", ($mems{SwapTotal}+$mems{MemTotal})*1024, "\n";
	}
	exit 0;
}

print "apps.value ", $mems{MemTotal}-$mems{MemFree}-$mems{Buffers}-$mems{Cached}-$mems{Slab}, "\n";
print "free.value ", $mems{MemFree}, "\n";
print "buffers.value ", $mems{Buffers}, "\n";
print "cached.value ", $mems{Cached}, "\n";
print "swap.value ", $mems{SwapTotal}-$mems{SwapFree}, "\n";
print "slab.value ", $mems{Slab}, "\n";

if (exists $mems{'SwapCached'})
{
	print "swap_cache.value ", $mems{SwapCached}, "\n";
}

if (exists $mems{'Committed_AS'})
{
	print "committed.value ", $mems{'Committed_AS'}, "\n";
}

sub fetch_meminfo
{
	my (@memline, @swapline);
	open (IN, "/proc/meminfo") || die "Could not open /proc/meminfo for reading: $!";
	while (<IN>)
	{
		if (/^(\w+):\s*(\d+)\s+kb/i)
		{
			$mems{$1} = $2 * 1024;
		}
		elsif (/^Mem:\s+(.+)$/)
		{
			@memline = split;
		}
		elsif (/^Swap:\s+(.+)$/)
		{
			@swapline = split;
		}
	}
	close (IN);
	if (!$mems{Slab})
	{
		$mems{Slab} = &fetch_slabinfo;
	}
	if (!$mems{MemTotal})
	{
		# Old-format fallback (2.0 and earlier); these lines are
		# already in bytes.  Field order after split:
		#   Mem:  total used free shared buffers cached
		#   Swap: total used free
		$mems{MemTotal} = $memline[1];
		$mems{MemFree} = $memline[3];
		$mems{Buffers} = $memline[5];
		$mems{Cached} = $memline[6];
		$mems{SwapTotal} = $swapline[1];
		$mems{SwapFree} = $swapline[3];
	}
}

sub fetch_slabinfo
{
	open (IN, "/proc/slabinfo") || die "Could not open /proc/slabinfo for reading: $!";
	my $tot_slab_pages = 0;
	while (<IN>)
	{
		# Skip the "slabinfo - version: x.y" header line
		if (!/^slabinfo/)
		{
			my @slabinfo = split;
			$tot_slab_pages += $slabinfo[5];
		}
	}
	close (IN);
	# Assumes 4 kB pages (i386)
	return $tot_slab_pages * 4096;
}

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo)
  2003-12-14  1:44                 ` Mike Fedyk
@ 2003-12-15  0:17                   ` Rik van Riel
  2003-12-15 18:57                     ` Mike Fedyk
  0 siblings, 1 reply; 22+ messages in thread
From: Rik van Riel @ 2003-12-15  0:17 UTC (permalink / raw)
  To: Mike Fedyk; +Cc: linux-kernel

On Sat, 13 Dec 2003, Mike Fedyk wrote:

> > > Are Dirty: and Writeback: counted in Inactive: or are they separate?
> > 
> > They're statistics unrelated to active/inactive, and they will
> > overlap with active/inactive.
> 
> Do they count anonymous memory, or are they strictly dirty/writeback
> pagecache?

Pagecache only, I think.

> > > Does Mapped: include all files mmap()ed, or only the executable ones?
> > 
> > Mapped: includes all mmap()ed pages, regardless of executable
> > status.
> 
> Is mmap() always pagecache backed, or can it be backed with anonymous
> memory?  IE, can I subtract mapped from pagecache?

Mapped includes all mapped memory, both pagecache and
anonymous.

> I'd love to find a more accurate way to get the amount of memory used for
> apps, short of reading the output of ps and doing calculations on RSS,
> VIRTUAL, and SHARED...

That would be great, it would really help with tuning
the VM further (if that turns out to be needed for
special workloads).

-- 
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo)
  2003-12-15  0:17                   ` Rik van Riel
@ 2003-12-15 18:57                     ` Mike Fedyk
  2003-12-15 19:40                       ` edjard
  0 siblings, 1 reply; 22+ messages in thread
From: Mike Fedyk @ 2003-12-15 18:57 UTC (permalink / raw)
  To: Rik van Riel; +Cc: linux-kernel

On Sun, Dec 14, 2003 at 07:17:05PM -0500, Rik van Riel wrote:
> On Sat, 13 Dec 2003, Mike Fedyk wrote:
> 
> > > > Are Dirty: and Writeback: counted in Inactive: or are they separate?
> > > 
> > > They're statistics unrelated to active/inactive, and they will
> > > overlap with active/inactive.
> > 
> > Do they count anonymous memory, or are they strictly dirty/writeback
> > pagecache?
> 
> Pagecache only, I think.
> 

That makes sense, since dirty anonymous memory should be swapped out, not
"written back".

Though "dirty" seems ambiguous, since it could include dirty anon memory too.
But I think you are right.  On my idle system (with kde running), there's
only 40KB of "dirty" memory, so it's probably pagecache only.

Thanks.

> > > > Does Mapped: include all files mmap()ed, or only the executable ones?
> > > 
> > > Mapped: includes all mmap()ed pages, regardless of executable
> > > status.
> > 
> > Is mmap() always pagecache backed, or can it be backed with anonymous
> > memory?  IE, can I subtract mapped from pagecache?
> 
> Mapped includes all mapped memory, both pagecache and
> anonymous.
> 

Ok, then I can't subtract it from the pagecache value.  I'll have to graph
that differently (a line instead of a stack).
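
In the plugin that means something like this (sketch):

# in the "config" section:
print "mapped.label mapped\n";
print "mapped.draw LINE2\n";

# with the values:
print "mapped.value ", $mems{Mapped}, "\n";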

Thanks.

> > I'd love to find a more accurate way to get the amount of memory used for
> > apps, short of reading the output of ps and doing calculations on RSS,
> > VIRTUAL, and SHARED...
> 
> That would be great, it would really help with tuning
> the VM further (if that turns out to be needed for
> special workloads).

Any suggestions?

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Re: More questions about 2.6 /proc/meminfo was: (Mem: and  Swap: lines in /proc/meminfo)
  2003-12-15 18:57                     ` Mike Fedyk
@ 2003-12-15 19:40                       ` edjard
  2003-12-15 21:57                         ` Mike Fedyk
  0 siblings, 1 reply; 22+ messages in thread
From: edjard @ 2003-12-15 19:40 UTC (permalink / raw)
  To: Mike Fedyk; +Cc: Rik van Riel, linux-kernel

> On Sun, Dec 14, 2003 at 07:17:05PM -0500, Rik van Riel wrote:
>> On Sat, 13 Dec 2003, Mike Fedyk wrote:
>>
>> > > > Are Dirty: and Writeback: counted in Inactive: or are they
>> > > > separate?
>> > >
>> > > They're statistics unrelated to active/inactive, and they will
>> > > overlap with active/inactive.
>> >
>> > Do they count anonymous memory, or are they strictly dirty/writeback
>> > pagecache?
>>
>> Pagecache only, I think.
>>
>
> That makes sense, since dirty anonymous memory should be swapped out, not
> "written back".
>
> Though "dirty" seems ambiguous, since it could include dirty anon memory too.
> But I think you are right.  On my idle system (with kde running), there's
> only 40KB of "dirty" memory, so it's probably pagecache only.
>
> Thanks.
>
>> > > > Does Mapped: include all files mmap()ed, or only the executable
>> > > > ones?
>> > >
>> > > Mapped: includes all mmap()ed pages, regardless of executable
>> > > status.
>> >
>> > Is mmap() always pagecache backed, or can it be backed with anonymous
>> > memory?  IE, can I subtract mapped from pagecache?
>>
>> Mapped includes all mapped memory, both pagecache and
>> anonymous.
>>
>
> Ok, then I can't subtract it from the pagecache value.  I'll have to graph
> that differently (a line instead of a stack).
>
> Thanks.
>
>> > I'd love to find a more accurate way to get the amount of memory used
>> > for apps, short of reading the output of ps and doing calculations on
>> > RSS, VIRTUAL, and SHARED...
>>
>> That would be great, it would really help with tuning
>> the VM further (if that turns out to be needed for
>> special workloads).
>
> Any suggestions?

Some days ago we sent this patch for 2.6.0-test11, which gives some useful
information regarding the data you are requesting.

We are now changing this patch to provide the data you require.  Nobody has
answered so far whether this is OK to be done in the kernel.  We have not
tried to implement it as a module yet.

BR

Edjard

--- linux-2.6.0-test11/fs/proc/task_mmu.c	2003-11-26 18:43:07.000000000 -0200
+++ linux/fs/proc/task_mmu.c	2003-12-02 13:58:10.000000000 -0200
@@ -3,42 +3,83 @@
 #include <linux/seq_file.h>
 #include <asm/uaccess.h>

+/**
+* Allan Bezerra (ajsb@dcc.fua.br) &
+* Bruna Moreira (brunampm@bol.com.br) &
+* Edjard Mota (edjard@ufam.edu.br) &
+* Mauricio Lin (mauriciolin@bol.com.br) &
+* Include physical memory size info for a process in /proc/PID/status
+*/
+
+void resident_mem_size(struct mm_struct *mm, unsigned long start_address,
+		       unsigned long end_address, unsigned long *size)
+{
+	pgd_t *my_pgd;
+	pmd_t *my_pmd;
+	pte_t *my_pte;
+	unsigned long page;
+
+	for (page = start_address; page < end_address; page += PAGE_SIZE) {
+		my_pgd = pgd_offset(mm, page);
+		if (pgd_none(*my_pgd) || pgd_bad(*my_pgd)) continue;
+		my_pmd = pmd_offset(my_pgd, page);
+		if (pmd_none(*my_pmd) || pmd_bad(*my_pmd)) continue;
+		my_pte = pte_offset_map(my_pmd, page);
+		if (pte_present(*my_pte))
+			*size += PAGE_SIZE;
+		pte_unmap(my_pte);	/* drop the kmap taken by pte_offset_map */
+	}
+}
+
 char *task_mem(struct mm_struct *mm, char *buffer)
 {
 	unsigned long data = 0, stack = 0, exec = 0, lib = 0;
 	struct vm_area_struct *vma;
-
+	unsigned long phys_data = 0, phys_stack = 0, phys_exec = 0;
+	unsigned long phys_lib = 0, phys_brk = 0;
 	down_read(&mm->mmap_sem);
 	for (vma = mm->mmap; vma; vma = vma->vm_next) {
 		unsigned long len = (vma->vm_end - vma->vm_start) >> 10;
 		if (!vma->vm_file) {
 			data += len;
-			if (vma->vm_flags & VM_GROWSDOWN)
+			resident_mem_size(mm, vma->vm_start, vma->vm_end, &phys_data);
+			if (vma->vm_flags & VM_GROWSDOWN){
 				stack += len;
+				resident_mem_size(mm, vma->vm_start, vma->vm_end, &phys_stack);
+			}
 			continue;
 		}
 		if (vma->vm_flags & VM_WRITE)
 			continue;
 		if (vma->vm_flags & VM_EXEC) {
 			exec += len;
+			resident_mem_size(mm, vma->vm_start, vma->vm_end, &phys_exec);
 			if (vma->vm_flags & VM_EXECUTABLE)
 				continue;
 			lib += len;
+			resident_mem_size(mm, vma->vm_start, vma->vm_end, &phys_lib);
 		}
 	}
+	resident_mem_size(mm, mm->start_brk, mm->brk, &phys_brk);
 	buffer += sprintf(buffer,
 		"VmSize:\t%8lu kB\n"
 		"VmLck:\t%8lu kB\n"
 		"VmRSS:\t%8lu kB\n"
 		"VmData:\t%8lu kB\n"
+		"RssData:\t%8lu kB\n"
 		"VmStk:\t%8lu kB\n"
+		"RssStk:\t%8lu kB\n"
 		"VmExe:\t%8lu kB\n"
-		"VmLib:\t%8lu kB\n",
+		"RssExe:\t%8lu kB\n"
+		"VmLib:\t%8lu kB\n"
+		"RssLib:\t%8lu kB\n"
+		"VmHeap:\t%8lu KB\n"
+		"RssHeap:\t%8lu KB\n",
 		mm->total_vm << (PAGE_SHIFT-10),
 		mm->locked_vm << (PAGE_SHIFT-10),
 		mm->rss << (PAGE_SHIFT-10),
-		data - stack, stack,
-		exec - lib, lib);
+		data - stack, (phys_data - phys_stack) >> 10,
+		stack, phys_stack >> 10,
+		exec - lib, (phys_exec - phys_lib) >> 10,
+		lib,  phys_lib >> 10,
+		(mm->brk - mm->start_brk) >> 10, phys_brk >> 10);
 	up_read(&mm->mmap_sem);
 	return buffer;
 }




^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo)
  2003-12-15 19:40                       ` edjard
@ 2003-12-15 21:57                         ` Mike Fedyk
  2003-12-16  4:10                           ` Calculating total slab memory on 2.2/2.0 (was: More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo)) Mike Fedyk
  2003-12-16 20:07                           ` Re: Re: More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo) edjard
  0 siblings, 2 replies; 22+ messages in thread
From: Mike Fedyk @ 2003-12-15 21:57 UTC (permalink / raw)
  To: edjard; +Cc: Rik van Riel, linux-kernel

On Mon, Dec 15, 2003 at 05:40:10PM -0200, edjard@ufam.edu.br wrote:
> Some days ago we sent this patch for 2.6.0-test11, which gives some useful
> information regarding the data you are requesting.
> 
> We are now changing this patch to provide the data you require.  Nobody
> has answered so far whether this is OK to be done in the kernel.  We have
> not tried to implement it as a module yet.
> 

OK, interesting.  I have some questions below.

>  	buffer += sprintf(buffer,
>  		"VmSize:\t%8lu kB\n"
>  		"VmLck:\t%8lu kB\n"
>  		"VmRSS:\t%8lu kB\n"
>  		"VmData:\t%8lu kB\n"
> +		"RssData:\t%8lu kB\n"
>  		"VmStk:\t%8lu kB\n"
> +		"RssStk:\t%8lu kB\n"
>  		"VmExe:\t%8lu kB\n"
> -		"VmLib:\t%8lu kB\n",
> +		"RssExe:\t%8lu kB\n"
> +		"VmLib:\t%8lu kB\n"
> +		"RssLib:\t%8lu kB\n"
> +		"VmHeap:\t%8lu KB\n"
> +		"RssHeap:\t%8lu KB\n",
>  		mm->total_vm << (PAGE_SHIFT-10),
>  		mm->locked_vm << (PAGE_SHIFT-10),
>  		mm->rss << (PAGE_SHIFT-10),
> -		data - stack, stack,
> -		exec - lib, lib);
> +		data - stack, (phys_data - phys_stack) >> 10,
> +		stack, phys_stack >> 10,
> +		exec - lib, (phys_exec - phys_lib) >> 10,
> +		lib,  phys_lib >> 10,
> +		(mm->brk - mm->start_brk) >> 10, phys_brk >> 10);
>  	up_read(&mm->mmap_sem);
>  	return buffer;
>  }

For kernels without this patch, I'd have to take VmRSS and calculate it
against VmLib, and then think about shared memory between threads, and I'm
not sure what else.

If I want to get the memory used by all apps from /proc/$pid/status, how am
I going to calculate it with this patch, and how will it be more accurate
than what I'd have to deal with from 2.[246] as they currently are without
the patch?

Thanks,

Mike

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Calculating total slab memory on 2.2/2.0 (was: More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo))
  2003-12-15 21:57                         ` Mike Fedyk
@ 2003-12-16  4:10                           ` Mike Fedyk
  2003-12-16 20:07                           ` Re: Re: More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo) edjard
  1 sibling, 0 replies; 22+ messages in thread
From: Mike Fedyk @ 2003-12-16  4:10 UTC (permalink / raw)
  To: edjard, Rik van Riel, linux-kernel

In 2.4 with slabinfo format version 1.1, I can sum up the total pages used
for all slab caches, but the number of pages used is not in the 1.0 format
(in the 2.2 kernel).

Is there a way to get the total amount of slab cache memory used in a 2.2
kernel from userspace?

Same question for a 2.0 kernel too. :)

Thanks,

Mike

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: Re: Re: More questions about 2.6 /proc/meminfo was: (Mem: and  Swap: lines in /proc/meminfo)
  2003-12-15 21:57                         ` Mike Fedyk
  2003-12-16  4:10                           ` Calculating total slab memory on 2.2/2.0 (was: More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo)) Mike Fedyk
@ 2003-12-16 20:07                           ` edjard
  1 sibling, 0 replies; 22+ messages in thread
From: edjard @ 2003-12-16 20:07 UTC (permalink / raw)
  To: Mike Fedyk
  Cc: edjard, Rik van Riel, linux-kernel, torvalds, ajsb, mauriciolin

Hi again,

> For kernels without this patch, I'd have to take VmRSS and calculate it
> against VmLib, and then think about shared memory between threads, and
> I'm not sure what else.

We are proposing to create another entry in /proc for more accurate memory
information.  This is a variation of maps, but it includes information not
shown by maps, like the size of the stack, the amount of RSS allocated, and
so on.

> If I want to get the memory used by all apps from /proc/$pid/status, how
> am I going to calculate it with this patch, and how will it be more
> accurate than what I'd have to deal with from 2.[246] as they currently
> are without the patch?

You may now build an application like top and access /proc/PID/smaps as you
like.  Your application should take care of the format of the smaps output.
Here is just a sample of it:

08048000-08085000 r-xp  /opt/mozilla/lib/mozilla-bin
VmRss:     216 kB 		 VmData:       0 kB
VmStk:       0 kB 		 VmExe:     244 kB
VmLib:       0 kB 		 Shared:       0 kB
Private:     244 kB 		 Shareable:       0 kB
08085000-08087000 rw-p  /opt/mozilla/lib/mozilla-bin
VmRss:       8 kB 		 VmData:       0 kB
VmStk:       0 kB 		 VmExe:       0 kB
VmLib:       0 kB 		 Shared:       0 kB
Private:       8 kB 		 Shareable:       0 kB
08087000-0879e000 rwxp
VmRss:    7188 kB 		 VmData:    7260 kB
VmStk:       0 kB 		 VmExe:       0 kB
VmLib:       0 kB 		 Shared:       0 kB
Private:    7260 kB 		 Shareable:       0 kB
40000000-40018000 r-xp  /lib/ld-2.3.2.so
VmRss:      92 kB 		 VmData:       0 kB
VmStk:       0 kB 		 VmExe:       0 kB
VmLib:      96 kB 		 Shared:       0 kB
Private:      96 kB 		 Shareable:       0 kB
40018000-40019000 rw-p  /lib/ld-2.3.2.so
VmRss:       4 kB 		 VmData:       0 kB
VmStk:       0 kB 		 VmExe:       0 kB
VmLib:       0 kB 		 Shared:       0 kB
Private:       4 kB 		 Shareable:       0 kB
40019000-4001a000 rw-p
VmRss:       4 kB 		 VmData:       4 kB
VmStk:       0 kB 		 VmExe:       0 kB
VmLib:       0 kB 		 Shared:       0 kB
Private:       4 kB 		 Shareable:       0 kB
....
etc.
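
A consumer script can then pick out whatever it needs.  For instance, a
small sketch (against the sample format above, which may still change)
that sums the Private: fields of one process:

my $pid = shift || $$;
my $private = 0;
open (SMAPS, "/proc/$pid/smaps") || die "Could not open smaps: $!";
while (<SMAPS>)
{
	$private += $1 if /^Private:\s*(\d+)\s+kB/;
}
close (SMAPS);
print "private: $private kB\n";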

Here is the new patch.  Note that there are actually two patches, for the
fs/proc/base.c and fs/proc/task_mmu.c files.

BR

Edjard

--- linux-2.6.0-test11/fs/proc/base.c	2003-11-26 14:44:31.000000000 -0600
+++ linux/fs/proc/base.c	2003-12-16 16:02:28.000000000 -0600
@@ -60,6 +60,7 @@
 	PROC_TGID_MAPS,
 	PROC_TGID_MOUNTS,
 	PROC_TGID_WCHAN,
+	PROC_TGID_SMAPS,	/* created by 10LE */
 #ifdef CONFIG_SECURITY
 	PROC_TGID_ATTR,
 	PROC_TGID_ATTR_CURRENT,
@@ -83,6 +84,7 @@
 	PROC_TID_MAPS,
 	PROC_TID_MOUNTS,
 	PROC_TID_WCHAN,
+	PROC_TID_SMAPS,		/* created by 10LE */
 #ifdef CONFIG_SECURITY
 	PROC_TID_ATTR,
 	PROC_TID_ATTR_CURRENT,
@@ -117,6 +119,7 @@
 	E(PROC_TGID_ROOT,      "root",    S_IFLNK|S_IRWXUGO),
 	E(PROC_TGID_EXE,       "exe",     S_IFLNK|S_IRWXUGO),
 	E(PROC_TGID_MOUNTS,    "mounts",  S_IFREG|S_IRUGO),
+	E(PROC_TGID_SMAPS,     "smaps",   S_IFREG|S_IRUGO),	/* created by 10LE */
 #ifdef CONFIG_SECURITY
 	E(PROC_TGID_ATTR,      "attr",    S_IFDIR|S_IRUGO|S_IXUGO),
 #endif
@@ -139,6 +142,7 @@
 	E(PROC_TID_ROOT,       "root",    S_IFLNK|S_IRWXUGO),
 	E(PROC_TID_EXE,        "exe",     S_IFLNK|S_IRWXUGO),
 	E(PROC_TID_MOUNTS,     "mounts",  S_IFREG|S_IRUGO),
+	E(PROC_TID_SMAPS,      "smaps",   S_IFREG|S_IRUGO),
 #ifdef CONFIG_SECURITY
 	E(PROC_TID_ATTR,       "attr",    S_IFDIR|S_IRUGO|S_IXUGO),
 #endif
@@ -439,6 +443,27 @@
 	.release	= seq_release,
 };

+/* BEGIN - Created by 10LE */
+extern struct seq_operations proc_pid_smaps_op;
+static int smaps_open(struct inode *inode, struct file *file)
+{
+	struct task_struct *task = proc_task(inode);
+	int ret = seq_open(file, &proc_pid_smaps_op);
+	if (!ret) {
+		struct seq_file *m = file->private_data;
+		m->private = task;
+	}
+	return ret;
+}
+
+static struct file_operations proc_smaps_operations = {
+	.open		= smaps_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= seq_release,
+};
+/* END - Created by 10LE */
+
 extern struct seq_operations mounts_op;
 static int mounts_open(struct inode *inode, struct file *file)
 {
@@ -1332,6 +1357,10 @@
 		case PROC_TGID_MOUNTS:
 			inode->i_fop = &proc_mounts_operations;
 			break;
+		case PROC_TID_SMAPS:
+		case PROC_TGID_SMAPS:
+			inode->i_fop = &proc_smaps_operations;
+			break;
 #ifdef CONFIG_SECURITY
 		case PROC_TID_ATTR:
 			inode->i_nlink = 2;

/********************* HERE BEGINS THE OTHER PATCH *********************/
--- linux-2.6.0-test11/fs/proc/task_mmu.c	2003-11-26 14:43:07.000000000 -0600
+++ linux/fs/proc/task_mmu.c	2003-12-16 16:45:02.000000000 -0600
@@ -111,6 +111,96 @@
 	return 0;
 }

+/* Proposed by 10LE - Calculates the RSS for /proc/PID/smaps entry.
+ */
+void resident_mem_size(struct mm_struct *mm, unsigned long start_address,
+			unsigned long end_address, unsigned long *size) {
+	pgd_t *my_pgd;
+	pmd_t *my_pmd;
+	pte_t *my_pte;
+	unsigned long page;
+
+	for (page = start_address; page < end_address; page += PAGE_SIZE) {
+		my_pgd = pgd_offset(mm, page);
+		if (pgd_none(*my_pgd) || pgd_bad(*my_pgd)) continue;
+		my_pmd = pmd_offset(my_pgd, page);
+		if (pmd_none(*my_pmd) || pmd_bad(*my_pmd)) continue;
+		my_pte = pte_offset_map(my_pmd, page);
+		if (pte_present(*my_pte)) {
+			*size += PAGE_SIZE;
+		}
+		pte_unmap(my_pte);	/* drop the kmap taken by pte_offset_map */
+	}
+}
+
+/* Proposed by 10LE - Computes rss, shared, private, shareable, data,
+ * stack, executable and library size for each VM_AREA of a PID in the
+ * new entry /proc/PID/smaps.
+ */
+static int show_smap(struct seq_file *m, void *v)
+{
+	struct vm_area_struct *map = v;
+	struct file *file = map->vm_file;
+	int flags = map->vm_flags;
+	int len;
+	struct mm_struct *mm = map->vm_mm;
+	unsigned long rss = 0, shared = 0, private = 0, shareable = 0;
+	unsigned long data = 0, stack = 0, exec = 0, lib = 0;
+	unsigned long vma_len = (map->vm_end - map->vm_start) >> 10;
+
+	resident_mem_size(mm, map->vm_start, map->vm_end, &rss);
+
+	if (map->vm_flags & VM_MAYSHARE) {
+		shareable = vma_len;
+		if (map->vm_flags & VM_SHARED) {
+			shared = vma_len;
+		}
+	}
+	else {
+		private = vma_len;
+	}
+
+	if (!map->vm_file) {
+		data = vma_len;
+	}
+	else if (map->vm_flags & VM_GROWSDOWN) {
+		stack = vma_len;
+	}
+	else if (map->vm_flags & VM_EXEC) {
+		if (map->vm_flags & VM_EXECUTABLE)
+			exec = vma_len;
+		else
+			lib = vma_len;
+	}
+
+	seq_printf(m, "%08lx-%08lx %c%c%c%c %n",
+			map->vm_start,
+			map->vm_end,
+			flags & VM_READ ? 'r' : '-',
+			flags & VM_WRITE ? 'w' : '-',
+			flags & VM_EXEC ? 'x' : '-',
+			flags & VM_MAYSHARE ? 's' : 'p',
+			&len);
+
+	if (map->vm_file) {
+		len = sizeof(void*) * 6 - len;
+		if (len < 1)
+			len = 1;
+		seq_printf(m, "%*c", len, ' ');
+		seq_path(m, file->f_vfsmnt, file->f_dentry, " \t\n\\");
+	}
+	seq_putc(m, '\n');
+	seq_printf(m, "VmRss:%8lu kB \t\t VmData:%8lu KB\n"
+			"VmStk:%8lu kB \t\t VmExe:%8lu kB\n"
+			"VmLib:%8lu kB \t\t Shared:%8lu kB\n"
+			"Private:%8lu kB \t\t Shareable:%8lu kB\n",
+			rss >> 10, data,
+			stack, exec,
+			lib, shared,
+			private, shareable);
+	return 0;
+}
+
+
 static void *m_start(struct seq_file *m, loff_t *pos)
 {
 	struct task_struct *task = m->private;
@@ -158,3 +248,13 @@
 	.stop	= m_stop,
 	.show	= show_map
 };
+
+/* Proposed by 10LE - Differs from the above struct only in the .show
+ * function variable (or field).
+ */
+struct seq_operations proc_pid_smaps_op = {
+	.start	= m_start,
+	.next	= m_next,
+	.stop	= m_stop,
+	.show	= show_smap
+};


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 2.4 Rmap] Add Inactive to /proc/meminfo was: Mem: and Swap: lines in /proc/meminfo
  2003-12-11 23:05       ` Mike Fedyk
  2003-12-12  0:41         ` shm Rob Roschewsk
  2003-12-12 12:00         ` Mem: and Swap: lines in /proc/meminfo Rik van Riel
@ 2003-12-17  1:12         ` Mike Fedyk
  2003-12-17  3:59           ` Rik van Riel
  2 siblings, 1 reply; 22+ messages in thread
From: Mike Fedyk @ 2003-12-17  1:12 UTC (permalink / raw)
  To: Rik van Riel; +Cc: linux-kernel

On Thu, Dec 11, 2003 at 03:05:11PM -0800, Mike Fedyk wrote:
> On Thu, Dec 11, 2003 at 05:42:46PM -0500, Rik van Riel wrote:
> > On Thu, 11 Dec 2003, Mike Fedyk wrote:
> > 
> > > Inact_dirty:     21516 kB
> > > Inact_laundry:   65612 kB
> > > Inact_clean:     19812 kB
> > > 
> > > These three are separate lists in rmap, and together they are equal to
> > > "Inactive:" in the -aa vm.
> > 
> > I should add an Inactive: line to -rmap that sums up all
> > 3, to make it a bit easier on programs parsing /proc.
> > 
> 
> ISTR asking for this a while ago ;)
> 
> Yes, please do add that Inactive: line to rmap. :)

How's this patch?

--- proc_misc.c.orig	2003-12-16 17:03:45.000000000 -0800
+++ proc_misc.c	2003-12-16 17:04:28.000000000 -0800
@@ -189,6 +189,7 @@
 		"Active:       %8u kB\n"
 		"ActiveAnon:   %8u kB\n"
 		"ActiveCache:  %8u kB\n"
+		"Inactive:     %8u kB\n"
 		"Inact_dirty:  %8u kB\n"
 		"Inact_laundry:%8u kB\n"
 		"Inact_clean:  %8u kB\n"
@@ -208,6 +209,8 @@
 		K(nr_active_anon_pages()) + K(nr_active_cache_pages()),
 		K(nr_active_anon_pages()),
 		K(nr_active_cache_pages()),
+		K(nr_inactive_dirty_pages()) + K(nr_inactive_laundry_pages())
+			+ K(nr_inactive_clean_pages()),
 		K(nr_inactive_dirty_pages()),
 		K(nr_inactive_laundry_pages()),
 		K(nr_inactive_clean_pages()),

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2.4 Rmap] Add Inactive to /proc/meminfo was: Mem: and Swap: lines in /proc/meminfo
  2003-12-17  1:12         ` [PATCH 2.4 Rmap] Add Inactive to /proc/meminfo was: Mem: and Swap: lines in /proc/meminfo Mike Fedyk
@ 2003-12-17  3:59           ` Rik van Riel
  0 siblings, 0 replies; 22+ messages in thread
From: Rik van Riel @ 2003-12-17  3:59 UTC (permalink / raw)
  To: Mike Fedyk; +Cc: linux-kernel

On Tue, 16 Dec 2003, Mike Fedyk wrote:

> How's this patch?

Looks great.  Thanks.

-- 
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan


^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2003-12-17  3:59 UTC | newest]

Thread overview: 19 messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2003-12-09  0:00 Mem: and Swap: lines in /proc/meminfo Mike Fedyk
2003-12-11 22:02 ` Rik van Riel
2003-12-11 22:23   ` Mike Fedyk
2003-12-11 22:42     ` Rik van Riel
2003-12-11 23:05       ` Mike Fedyk
2003-12-12  0:41         ` shm Rob Roschewsk
2003-12-12 12:00         ` Mem: and Swap: lines in /proc/meminfo Rik van Riel
2003-12-12 18:12           ` Mike Fedyk
2003-12-13  3:23             ` More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo) Mike Fedyk
2003-12-13 17:54               ` Rik van Riel
2003-12-14  1:44                 ` Mike Fedyk
2003-12-15  0:17                   ` Rik van Riel
2003-12-15 18:57                     ` Mike Fedyk
2003-12-15 19:40                       ` edjard
2003-12-15 21:57                         ` Mike Fedyk
2003-12-16  4:10                           ` Calculating total slab memory on 2.2/2.0 (was: More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo)) Mike Fedyk
2003-12-16 20:07                           ` Re: Re: More questions about 2.6 /proc/meminfo was: (Mem: and Swap: lines in /proc/meminfo) edjard
2003-12-17  1:12         ` [PATCH 2.4 Rmap] Add Inactive to /proc/meminfo was: Mem: and Swap: lines in /proc/meminfo Mike Fedyk
2003-12-17  3:59           ` Rik van Riel
