From: Dave Chinner <david@fromorbit.com>
To: Daniel Walker <danielwa@cisco.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>,
	Khalid Mughal <khalidm@cisco.com>,
	xe-kernel@external.cisco.com, dave.hansen@intel.com,
	hannes@cmpxchg.org, riel@redhat.com,
	Jonathan Corbet <corbet@lwn.net>,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] kernel: fs: drop_caches: add dds drop_caches_count
Date: Mon, 15 Feb 2016 08:18:56 +1100
Message-ID: <20160214211856.GT19486@dastard>
In-Reply-To: <1455308080-27238-1-git-send-email-danielwa@cisco.com>

On Fri, Feb 12, 2016 at 12:14:39PM -0800, Daniel Walker wrote:
> From: Khalid Mughal <khalidm@cisco.com>
> 
> Currently there is no way to figure out the droppable pagecache size
> from the meminfo output. The MemFree size can shrink during normal
> system operation as memory pages get cached, which is reflected in
> the "Cached" field. Similarly, for file operations some of the
> buffer memory gets cached, which is reflected in the "Buffers"
> field. The kernel automatically reclaims all this cached and
> buffered memory when it is needed elsewhere on the system. The only
> way to manually reclaim this memory is by writing 1 to
> /proc/sys/vm/drop_caches, but this can have a performance impact:
> since it discards cached objects, it may cause high CPU and I/O
> utilization to recreate the dropped objects during heavy system
> load.
> This patch computes the droppable pagecache count using the same
> algorithm as "vm/drop_caches". It is non-destructive and does not
> drop any pages, so it has no impact on system performance. The
> computation does not include the size of the reclaimable slab.

Why, exactly, do you need this? You've described what the patch
does (which is redundant - we can read the code) and noted that the
kernel already accounts for this reclaimable memory elsewhere, where
you can already read it and infer the amount of reclaimable memory.
So why isn't that accounting sufficient?

As to the code, I think it is a horrible hack - the calculation
does not come for free. Forcing iteration over all the inodes in the
inode cache is not something we should allow users to do - what's to
stop someone just doing this 100 times in parallel and DoSing the
machine?
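
For concreteness, "the same algorithm as vm/drop_caches" presumably
means a per-superblock inode walk modeled on
fs/drop_caches.c:drop_pagecache_sb(), counting pages instead of
invalidating them. A sketch only - the function name is illustrative,
not the patch's actual code:

	/*
	 * Counting analogue of drop_pagecache_sb(): walk one
	 * superblock's inode list and sum mapping->nrpages.
	 * Intended to be called once per superblock via
	 * iterate_supers(count_pagecache_sb, &count).
	 */
	static void count_pagecache_sb(struct super_block *sb, void *arg)
	{
		unsigned long *count = arg;
		struct inode *inode;

		spin_lock(&sb->s_inode_list_lock);
		list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
			spin_lock(&inode->i_lock);
			/* Skip inodes being freed or not yet set up. */
			if (!(inode->i_state &
			      (I_FREEING | I_WILL_FREE | I_NEW)))
				*count += inode->i_mapping->nrpages;
			spin_unlock(&inode->i_lock);
		}
		spin_unlock(&sb->s_inode_list_lock);
	}

Note that sb->s_inode_list_lock is held for the entire walk.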

Or what happens when someone does 'grep "" /proc/sys/vm/*' to see
what all the VM settings are on a machine with a couple of TB of
page cache spread across a couple of hundred million cached inodes?
It a) takes a long time, b) adds sustained load to an already
contended lock (sb->s_inode_list_lock), and c) isn't configuration
information.
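
And every read pays the full cost, because a sysctl read handler
for such a counter has to redo the walk each time. Again a sketch,
with a made-up handler name, just to show where the time goes:

	/* Hypothetical read-only sysctl handler for the count. */
	static int drop_caches_count_handler(struct ctl_table *table,
			int write, void __user *buffer, size_t *lenp,
			loff_t *ppos)
	{
		struct ctl_table tmp = *table;
		unsigned long count = 0;

		if (write)
			return -EPERM;

		/* The expensive part: every inode on every superblock. */
		iterate_supers(count_pagecache_sb, &count);

		tmp.data = &count;
		tmp.maxlen = sizeof(count);
		return proc_doulongvec_minmax(&tmp, write, buffer,
					      lenp, ppos);
	}

A few processes looping on reads of that file and the machine spends
its time spinning on inode list locks.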

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 34+ messages

2016-02-12 20:14 [PATCH] kernel: fs: drop_caches: add dds drop_caches_count Daniel Walker
2016-02-14 21:18 ` Dave Chinner [this message]
2016-02-15 18:19   ` Daniel Walker
2016-02-15 23:05     ` Dave Chinner
2016-02-15 23:52       ` Daniel Walker
2016-02-16  0:45         ` Theodore Ts'o
2016-02-16  2:58           ` Nag Avadhanam (nag)
2016-02-16  5:38             ` Dave Chinner
2016-02-16  7:14               ` Nag Avadhanam
2016-02-16  8:35                 ` Dave Chinner
2016-02-16  8:43             ` Vladimir Davydov
2016-02-16 18:37               ` Nag Avadhanam
2016-02-16  5:28         ` Dave Chinner
2016-02-16  5:57           ` Nag Avadhanam
2016-02-16  8:22             ` Dave Chinner
2016-02-16 16:12           ` Rik van Riel
