From: Nag Avadhanam <nag@cisco.com>
To: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>,
	"Daniel Walker (danielwa)" <danielwa@cisco.com>,
	Dave Chinner <david@fromorbit.com>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	"Khalid Mughal (khalidm)" <khalidm@cisco.com>,
	"xe-kernel@external.cisco.com" <xe-kernel@external.cisco.com>,
	"dave.hansen@intel.com" <dave.hansen@intel.com>,
	"hannes@cmpxchg.org" <hannes@cmpxchg.org>,
	"riel@redhat.com" <riel@redhat.com>,
	Jonathan Corbet <corbet@lwn.net>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: Re: [PATCH] kernel: fs: drop_caches: add dds drop_caches_count
Date: Tue, 16 Feb 2016 10:37:06 -0800 (PST)
Message-ID: <alpine.LRH.2.00.1602160950310.14077@mcp-bld-lnx-277.cisco.com>
In-Reply-To: <20160216084346.GA8511@esperanza>

On Tue, 16 Feb 2016, Vladimir Davydov wrote:

> On Tue, Feb 16, 2016 at 02:58:04AM +0000, Nag Avadhanam (nag) wrote:
>> We have a class of platforms that are essentially swap-less embedded
>> systems that have limited memory resources (2GB and less).
>>
>> There is a need to implement early alerts (before the OOM killer kicks in)
>> based on the current memory usage so admins can take appropriate steps (do
>> not initiate provisioning operations but support existing services,
>> de-provision certain services, etc. based on the extent of memory usage in
>> the system) .
>>
>> There is also a general need to let end users know the available memory so
>> they can determine if they can enable new services (helps in planning).
>>
>> These two depend upon knowing approximate (accurate within few 10s of MB)
>> memory usage within the system. We want to alert admins before system
>> exhibits any thrashing behaviors.
>
> Have you considered using /proc/kpageflags for counting such pages? It
> should already export all information about memory pages you might need,
> e.g. which pages are mapped, which are anonymous, which are inactive,
> basically all page flags and even more. Moreover, you can even determine
> the set of pages that are really read/written by processes - see
> /sys/kernel/mm/page_idle/bitmap. On such a small machine scanning the
> whole pfn range should be pretty cheap, so you might find this API
> acceptable.

Thanks Vladimir. I came across the pagemap interface some time ago, but I
was not sure whether it was mainstream. I think it should allow a userspace
VM scan (the scans might take a bit longer). Will try it.
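
For reference, here is a rough, untested sketch of the kind of scan suggested
above: counting page-cache pages that some process has mapped, by walking
/proc/kpageflags. The bit numbers follow include/uapi/linux/kernel-page-flags.h,
but the max_pfn value is only a placeholder for a ~2GB box; a real tool would
derive the pfn range from /proc/zoneinfo or /proc/iomem.

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

/* bit numbers as in include/uapi/linux/kernel-page-flags.h */
#define KPF_LRU		 5
#define KPF_MMAP	11
#define KPF_ANON	12
#define KPF_SWAPBACKED	14

int main(void)
{
	int fd = open("/proc/kpageflags", O_RDONLY);	/* needs root */
	uint64_t flags, pfn, mapped_file = 0;
	uint64_t max_pfn = 512 * 1024;	/* placeholder: 2GB of 4KB pages */

	if (fd < 0)
		return 1;
	for (pfn = 0; pfn < max_pfn; pfn++) {
		/* each /proc/kpageflags entry is a 64-bit flags word per pfn */
		if (pread(fd, &flags, sizeof(flags), pfn * 8) != sizeof(flags))
			break;
		/* LRU page, mapped by some process, file-backed, not shmem */
		if ((flags & (1ULL << KPF_LRU)) &&
		    (flags & (1ULL << KPF_MMAP)) &&
		    !(flags & (1ULL << KPF_ANON)) &&
		    !(flags & (1ULL << KPF_SWAPBACKED)))
			mapped_file++;
	}
	close(fd);
	printf("mapped file pages: %llu (%llu kB)\n",
	       (unsigned long long)mapped_file,
	       (unsigned long long)mapped_file * 4);
	return 0;
}

Whether a per-pfn walk like this is cheap enough to run periodically on our
boxes is something we would have to measure.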

We could avoid the scans altogether.

The need, plainly put, is to inform the admins of these swap-less embedded
systems of the available memory.

If we can reliably and efficiently maintain counts of file pages
(inactive and active) mapped into the address spaces of active user-space
processes, this need can be met. "Mapped" in /proc/meminfo does not seem
to be a direct fit for this purpose (I need to understand it better).
If I knew for sure that "Mapped" does not count device and kernel pages
mapped into user space, then I could employ it for this need.

(Cached - Shmem - <mapped file/binary pages of active processes>) gives me
the reclaimable file pages. If I can determine that, I can add it to MemFree
to arrive at the available memory.
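
As an illustration only, a minimal /proc/meminfo-based estimate along those
lines is sketched below. The <mapped file/binary pages of active processes>
term is approximated here by the "Mapped:" field, which is exactly the open
question above:

#include <stdio.h>
#include <string.h>

/* return the value (in kB) of a /proc/meminfo line, or -1 if not found */
static long meminfo_kb(const char *key)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128];
	long val = -1;

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, key, strlen(key))) {
			sscanf(line + strlen(key), "%ld", &val);
			break;
		}
	}
	fclose(f);
	return val;
}

int main(void)
{
	long memfree = meminfo_kb("MemFree:");
	long cached  = meminfo_kb("Cached:");
	long shmem   = meminfo_kb("Shmem:");
	long mapped  = meminfo_kb("Mapped:");
	long reclaimable;

	if (memfree < 0 || cached < 0 || shmem < 0 || mapped < 0)
		return 1;
	reclaimable = cached - shmem - mapped;	/* reclaimable file pages, kB */
	if (reclaimable < 0)
		reclaimable = 0;
	printf("estimated available: %ld kB\n", memfree + reclaimable);
	return 0;
}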

Thanks,
nag

>
> Thanks,
> Vladimir
>
