From: Valdis.Kletnieks@vt.edu
To: Alexander Shishkin <alexander.shishckin@gmail.com>
Cc: linux-fsdevel@vger.kernel.org, akpm@linux-foundation.org,
viro@zeniv.linux.org.uk,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] [RFC] List per-process file descriptor consumption when hitting file-max
Date: Thu, 30 Jul 2009 08:40:36 -0400 [thread overview]
Message-ID: <28675.1248957636@turing-police.cc.vt.edu> (raw)
In-Reply-To: Your message of "Wed, 29 Jul 2009 19:17:00 +0300." <71a0d6ff0907290917u1f0c0e68p8036d53c69320392@mail.gmail.com>
On Wed, 29 Jul 2009 19:17:00 +0300, Alexander Shishkin said:
>Is there anything dramatically wrong with this one, or could someone please review this?
> +	for_each_process(p) {
> +		files = get_files_struct(p);
> +		if (!files)
> +			continue;
> +
> +		spin_lock(&files->file_lock);
> +		fdt = files_fdtable(files);
> +
> +		/* we have to actually *count* the fds */
> +		for (count = i = 0; i < fdt->max_fds; i++)
> +			count += !!fcheck_files(files, i);
> +
> +		printk(KERN_INFO "=> %s [%d]: %d\n", p->comm,
> +			p->pid, count);
1) Splatting out 'count' without a hint of what it is isn't very user friendly.
Consider something like "=> %s[%d]: open=%d\n" instead, or add a second line
to the 'VFS: file-max' printk to provide a header.
2) What context does this run in, and what locks/scheduling considerations
are there? On a large system with many processes running, this could conceivably
wrap the logmsg buffer before syslog has a chance to get scheduled and read
the stuff out.
3) This can be used by a miscreant to spam the logs - consider a program
that does open() until it hits the limit, then goes into a close()/open()
loop to repeatedly bang up against the limit. Every 2 syscalls by the
abuser could get them another 5,000+ lines in the log - an incredible
amplification factor.
Now, if you fixed it to only print out the top 10 offending processes, it would
make it a lot more useful to the sysadmin, and a lot of those considerations go
away, but it also makes the already N**2 behavior even more expensive...
At that point, it would be good to report some CPU numbers by running an abusive
program that repeatedly hits the limit, and be able to say "Even under full
stress, it only used 15% of a CPU on a 2.4GHz Core2" or similar...
Thread overview: 11+ messages
2009-06-08 11:38 [PATCH] [RFC] List per-process file descriptor consumption when hitting file-max alexander.shishckin
2009-07-29 16:17 ` Alexander Shishkin
2009-07-30 12:40 ` Valdis.Kletnieks [this message]
2009-10-11 12:17 ` Alexander Shishkin
2010-01-10 16:34 ` [RFC][PATCHv2] " Alexander Shishkin
2010-01-13 22:12 ` Andrew Morton
2010-01-11 9:38 ` [RFC][PATCHv3] " Alexander Shishkin
2010-01-11 12:40 ` Andreas Dilger
2010-01-13 22:06 ` Andi Kleen
2010-01-13 22:44 ` Al Viro
2010-01-13 22:57 ` Al Viro