linux-kernel.vger.kernel.org archive mirror
From: Stephen Satchell <list@satchell.net>
To: linux-kernel <linux-kernel@vger.kernel.org>
Subject: Swap performance statistics in 2.6 -- which /proc file has it?
Date: Tue, 09 Dec 2003 05:19:25 -0800	[thread overview]
Message-ID: <1070975964.5966.5.camel@ssatchell1.pyramid.net> (raw)
In-Reply-To: <3FD546D5.2000003@nishanet.com>

OK, color me stupid.  I just grepped the entire Documentation directory
for 2.6.0-test11 and couldn't find any place where the number of disk
requests for swap, or the swap transfer volume, is reported.  In 2.4 I
had a single place where all swap activity (whether to a separate
partition or to a file on a mounted file system) was recorded.

I also grepped the proc filesystem source (linux/fs/proc) for "swap" and 
"Swap" and didn't find anything that had to do with swap request accounting, 
only with swap memory allocation (which I do use, but which for me is only 
half the story).

My purpose for wanting this performance metric is to try to detect when
a server has entered a thrashing mode (lots of swaps for an extended
period of time, possibly coupled with an ever-increasing amount of swap
used as the server falls further and further behind) so that I can take
some form of corrective action before the OOM killer starts committing
processicide, perhaps incorrectly.
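(For what it's worth, 2.6 does export system-wide swap counters in
/proc/vmstat -- pswpin and pswpout, pages swapped in and out since boot --
and those cover swap files as well as swap partitions.  A minimal sketch
of the kind of thrashing check described above; the 1000-page threshold
and one-second interval are arbitrary illustrations, not tuned values:

```shell
#!/bin/sh
# Sketch: sample the pswpin/pswpout counters (pages swapped in and out
# since boot) that 2.6 exports in /proc/vmstat, and flag heavy swap
# traffic over a short interval.  Threshold and interval are arbitrary.
swapped_pages() {
    awk '$1 == "pswpin" || $1 == "pswpout" { total += $2 } END { print total + 0 }' /proc/vmstat
}

before=$(swapped_pages)
sleep 1
after=$(swapped_pages)
delta=$((after - before))

if [ "$delta" -gt 1000 ]; then
    echo "swap activity high: $delta pages in the last second"
else
    echo "swap activity normal: $delta pages in the last second"
fi
```

A real monitor would run this in a loop and also watch SwapFree in
/proc/meminfo for the "ever-increasing amount of swap used" half of the
symptom.)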

Now, I could try to identify swap partitions using /proc/swaps, then
total up the read/write request and sector counts from /proc/diskstats
for those partitions to get some measure, but that doesn't help when a
swap file is added after the build because the original swap partition
turned out to be too small.
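(A sketch of that partition-based tally, assuming the 2.6 layout where
per-device counters live in /proc/diskstats -- field 4 is reads
completed, field 8 writes completed, fields 6 and 10 sectors read and
written.  As noted, it is blind to any swap file added later:

```shell
#!/bin/sh
# Sketch: total the read/write activity reported in /proc/diskstats for
# every device listed in /proc/swaps.  Swap files have no diskstats row
# and are skipped -- which is exactly the blind spot described above.
tail -n +2 /proc/swaps | while read -r dev type size used prio; do
    [ "$type" = "partition" ] || continue   # skip swap files
    name=$(basename "$dev")
    awk -v d="$name" '$3 == d {
        printf "%s: %d requests, %d sectors\n", d, $4 + $8, $6 + $10
    }' /proc/diskstats
done
```

Note these counters include *all* I/O to the partition, so the numbers
are only a clean swap measure if the partition carries nothing else.)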

(The person who answered this when I had the blinkered subject line of 
"Re: balanced interrupts" neglected to indicate which source file he 
was quoting from when providing what he thought was an answer.)

Someone please point out the obvious oversight to this feeble old fool
of a programmer.

Stephen Satchell




Thread overview: 16+ messages
     [not found] <BF1FE1855350A0479097B3A0D2A80EE00184D619@hdsmsx402.hd.intel.com>
2003-12-08 19:29 ` balance interrupts Len Brown
2003-12-08 20:00   ` Julien Oster
2003-12-08 20:02     ` Julien Oster
2003-12-08 21:46     ` Len Brown
2003-12-09  3:51   ` Bob
2003-12-09  5:11     ` Stephen Satchell
2003-12-09 13:19     ` Stephen Satchell [this message]
2003-12-09 13:56       ` Swap performance statistics in 2.6 -- which /proc file has it? Richard B. Johnson
2003-12-09 14:46         ` Stephen Satchell
2003-12-09 15:25           ` Richard B. Johnson
2003-12-09 19:53             ` Dominik Kubla
2003-12-09 20:24               ` Richard B. Johnson
2003-12-10 10:18                 ` Dominik Kubla
2003-12-10  1:28               ` Stephen Satchell
2003-12-10 10:34                 ` Dominik Kubla
2003-12-10 13:06                   ` Answer to Swap performance statistics in 2.6 -- which /proc file has it Stephen Satchell
