linux-mm.kvack.org archive mirror
* [PATCH -mm v9 0/8] idle memory tracking
@ 2015-07-19 12:31 Vladimir Davydov
  2015-07-19 12:31 ` [PATCH -mm v9 1/8] memcg: add page_cgroup_ino helper Vladimir Davydov
                   ` (11 more replies)
  0 siblings, 12 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-19 12:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

Hi,

This patch set introduces a new user API for tracking user memory pages
that have not been used for a given period of time. The purpose of this
is to provide userspace with the means of tracking a workload's
working set, i.e. the set of pages that are actively used by the
workload. Knowing the working set size can be useful for partitioning
the system more efficiently, e.g. by tuning memory cgroup limits
appropriately, or for job placement within a compute cluster.

It is based on top of v4.2-rc2-mmotm-2015-07-15-16-46 and applies
without conflicts to v4.2-rc2-mmotm-2015-07-17-16-04 as well.

---- USE CASES ----

The unified cgroup hierarchy has memory.low and memory.high knobs, which
are defined as the low and high boundaries for the workload working set
size. However, the working set size of a workload may be unknown or
change over time. With this patch set, one can periodically estimate the
amount of memory unused by each cgroup and tune their memory.low and
memory.high parameters accordingly, therefore optimizing the overall
memory utilization.

Another use case is balancing workloads within a compute cluster.
Knowing how much memory is not really used by a workload unit may help
one make a better decision when considering migrating the unit to
another node within the cluster.

Also, as noted by Minchan, this would be useful for per-process reclaim
(https://lwn.net/Articles/545668/). With idle tracking, a smart userspace
memory manager could reclaim only idle pages.

---- USER API ----

The user API consists of two new proc files:

 * /proc/kpageidle.  This file implements a bitmap where each bit corresponds
   to a page, indexed by PFN. When the bit is set, the corresponding page is
   idle. A page is considered idle if it has not been accessed since it was
   marked idle. To mark a page idle one should set the bit corresponding to the
   page by writing to the file. A value written to the file is OR-ed with the
   current bitmap value. Only user memory pages can be marked idle; for other
   page types the input is silently ignored. Writing to this file beyond max
   PFN results in the ENXIO error. Only available when
   CONFIG_IDLE_PAGE_TRACKING is set.

   This file can be used to estimate the number of pages that are not
   used by a particular workload as follows:

   1. mark all pages of interest idle by setting corresponding bits in the
      /proc/kpageidle bitmap
   2. wait until the workload accesses its working set
   3. read /proc/kpageidle and count the number of bits set (see the sketch
      below)
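
   For instance, a minimal sketch of steps 1 and 3 for a single page might
   look as follows (in Python, like the script attached below; the PFN
   value is hypothetical, and the file is only accessible to root):

      import struct

      PFN = 0x1234  # hypothetical page frame number of interest

      # Each u64 of /proc/kpageidle covers 64 PFNs, so seek to the word
      # containing PFN and write it with the corresponding bit set; the
      # written value is OR-ed into the current bitmap.
      with open("/proc/kpageidle", "r+b") as f:
          f.seek(PFN // 64 * 8)
          f.write(struct.pack("Q", 1 << (PFN % 64)))

      # ... wait for the workload to access its working set ...

      # Re-read the word and test the bit: 1 means the page is still idle.
      with open("/proc/kpageidle", "rb") as f:
          f.seek(PFN // 64 * 8)
          word, = struct.unpack("Q", f.read(8))
          print("idle" if word >> (PFN % 64) & 1 else "accessed")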

 * /proc/kpagecgroup.  This file contains a 64-bit inode number of the
   memory cgroup each page is charged to, indexed by PFN. Only available when
   CONFIG_MEMCG is set.

   This file can be used to find all pages (including unmapped file
   pages) accounted to a particular cgroup. Using /proc/kpageidle, one
   can then estimate the cgroup working set size.
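
   For instance, a minimal sketch of checking which cgroup a given page is
   charged to (both the PFN and the cgroup path are hypothetical):

      import os
      import struct

      PFN = 0x1234                            # hypothetical PFN
      CG = "/sys/fs/cgroup/memory/mygroup"    # hypothetical cgroup dir

      # /proc/kpagecgroup holds one u64 inode number per page, indexed
      # by PFN, so the entry for PFN lives at byte offset PFN * 8.
      with open("/proc/kpagecgroup", "rb") as f:
          f.seek(PFN * 8)
          ino, = struct.unpack("Q", f.read(8))

      # A memory cgroup is identified by the inode number of its
      # directory on the cgroup filesystem.
      if ino == os.stat(CG).st_ino:
          print("page is charged to " + CG)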

For a complete example of using these files to estimate the number of
unused memory pages per memory cgroup, please see the script attached
below.

---- REASONING ----

The reason for introducing a new user API instead of using
/proc/PID/{clear_refs,smaps} is that the latter has two serious
drawbacks:

 - it does not count unmapped file pages
 - it affects the reclaimer logic

The new API attempts to overcome both. For more details on how this is
achieved, please see the comment to patch 6.

---- CHANGE LOG ----

Changes in v9:

 - add cond_resched to /proc/kpage* read/write loop (Andres)
 - rebase on top of v4.2-rc2-mmotm-2015-07-15-16-46

Changes in v8:

 - clear referenced/accessed bit in secondary ptes while accessing
   /proc/kpageidle; this is required to estimate wss of KVM VMs (Andres)
 - check the young flag when collapsing a huge page
 - copy idle/young flags on page migration

Changes in v7:

This iteration addresses Andres's comments to v6:

 - do not reuse page_referenced for clearing idle flag, introduce a
   separate function instead; this way we won't issue expensive tlb
   flushes on /proc/kpageidle read/write
 - propagate young/idle flags from head to tail pages on thp split
 - skip compound tail pages while reading/writing /proc/kpageidle
 - cleanup page_referenced_one

Changes in v6:

 - Split the patch introducing page_cgroup_ino helper to ease review.
 - Rebase on top of v4.1-rc7-mmotm-2015-06-09-16-55

Changes in v5:

 - Fix possible race between kpageidle_clear_pte_refs() and
   __page_set_anon_rmap() by checking that a page is on an LRU list
   under zone->lru_lock (Minchan).
 - Export idle flag via /proc/kpageflags (Minchan).
 - Rebase on top of 4.1-rc3.

Changes in v4:

This iteration primarily addresses Minchan's comments to v3:

 - Implement /proc/kpageidle as a bitmap instead of using a u64 per page,
   because there do not seem to be any future uses for the other 63 bits.
 - Do not double-increase pra->referenced in page_referenced_one() if the page
   was young and referenced recently.
 - Remove the pointless (page_count == 0) check from kpageidle_get_page().
 - Rename kpageidle_clear_refs() to kpageidle_clear_pte_refs().
 - Improve comments to kpageidle-related functions.
 - Rebase on top of 4.1-rc2.

Note it does not address Minchan's concern of possible __page_set_anon_rmap vs
page_referenced race (see https://lkml.org/lkml/2015/5/3/220) since it is still
unclear if this race can really happen (see https://lkml.org/lkml/2015/5/4/160)

Changes in v3:

 - Enable CONFIG_IDLE_PAGE_TRACKING for 32 bit. Since this feature
   requires two extra page flags and there is no space for them on 32
   bit, page ext is used (thanks to Minchan Kim).
 - Minor code cleanups and comments improved.
 - Rebase on top of 4.1-rc1.

Changes in v2:

 - The main difference from v1 is the API change. In v1 the user could
   only set the idle flag for all pages at once; to clear the Idle flag
   on pages accessed via page tables, /proc/PID/clear_refs had to be
   used.
   The main drawback of the v1 approach, as noted by Minchan, is that on
   big machines setting the idle flag for each page can result in CPU
   bursts, which would be especially frustrating if the user only wanted
   to estimate the amount of idle pages for a particular process or VMA.
   With the new API a more fine-grained approach is possible: one can
   read a process's /proc/PID/pagemap and set/check the Idle flag only
   for those pages of the process's address space that are of interest.
   Another good point about the v2 API is that it is possible to limit
   /proc/kpage* scanning rate when the user wants to estimate the total
   number of idle pages, which is unachievable with the v1 approach.
 - Make /proc/kpagecgroup return the ino of the closest online ancestor
   in case the cgroup a page is charged to is offline.
 - Fix /proc/PID/clear_refs not clearing Young page flag.
 - Rebase on top of v4.0-rc6-mmotm-2015-04-01-14-54

v8: https://lkml.org/lkml/2015/7/15/587 
v7: https://lkml.org/lkml/2015/7/11/119
v6: https://lkml.org/lkml/2015/6/12/301
v5: https://lkml.org/lkml/2015/5/12/449
v4: https://lkml.org/lkml/2015/5/7/580
v3: https://lkml.org/lkml/2015/4/28/224
v2: https://lkml.org/lkml/2015/4/7/260
v1: https://lkml.org/lkml/2015/3/18/794

---- PATCH SET STRUCTURE ----

The patch set is organized as follows:

 - patch 1 adds the page_cgroup_ino() helper for the sake of
   /proc/kpagecgroup, and patches 2-3 do related cleanup
 - patch 4 adds /proc/kpagecgroup, which reports the inode number of the
   cgroup each page is charged to
 - patch 5 introduces a new mmu notifier callback, clear_young, which is
   a lightweight version of clear_flush_young; it is used in patch 6
 - patch 6 implements the idle page tracking feature, including the
   userspace API, /proc/kpageidle
 - patch 7 exports the idle flag via /proc/kpageflags
 - patch 8 adds cond_resched to the /proc/kpage* read/write loops

---- SIMILAR WORKS ----

Originally, the patch for tracking idle memory was proposed back in 2011
by Michel Lespinasse (see http://lwn.net/Articles/459269/). The main
difference between Michel's patch and this one is that Michel
implemented a kernel-space daemon for estimating idle memory size per
cgroup, while this patch set only provides userspace with a minimal API
for doing the job, leaving the rest up to userspace. However, both share
the same idea of using Idle/Young page flags to avoid affecting the
reclaimer logic.

---- PERFORMANCE EVALUATION ----

SPECjvm2008 (https://www.spec.org/jvm2008/) was used to evaluate the
performance impact introduced by this patch set. Three runs were carried
out:

 - base: kernel without the patch
 - patched: patched kernel, the feature is not used
 - patched-active: patched kernel, with a daemon tracking idle memory at
   a 1-minute sampling period

For tracking idle memory, the idlememstat utility was used:
https://github.com/locker/idlememstat

testcase            base            patched        patched-active

compiler       537.40 ( 0.00)%   532.26 (-0.96)%   538.31 ( 0.17)%
compress       305.47 ( 0.00)%   301.08 (-1.44)%   300.71 (-1.56)%
crypto         284.32 ( 0.00)%   282.21 (-0.74)%   284.87 ( 0.19)%
derby          411.05 ( 0.00)%   413.44 ( 0.58)%   412.07 ( 0.25)%
mpegaudio      189.96 ( 0.00)%   190.87 ( 0.48)%   189.42 (-0.28)%
scimark.large   46.85 ( 0.00)%    46.41 (-0.94)%    47.83 ( 2.09)%
scimark.small  412.91 ( 0.00)%   415.41 ( 0.61)%   421.17 ( 2.00)%
serial         204.23 ( 0.00)%   213.46 ( 4.52)%   203.17 (-0.52)%
startup         36.76 ( 0.00)%    35.49 (-3.45)%    35.64 (-3.05)%
sunflow        115.34 ( 0.00)%   115.08 (-0.23)%   117.37 ( 1.76)%
xml            620.55 ( 0.00)%   619.95 (-0.10)%   620.39 (-0.03)%

composite      211.50 ( 0.00)%   211.15 (-0.17)%   211.67 ( 0.08)%

time(1) output for the idlememstat run:

17.20user 65.16system 2:15:23elapsed 1%CPU (0avgtext+0avgdata 8476maxresident)k
448inputs+40outputs (1major+36052minor)pagefaults 0swaps

---- SCRIPT FOR COUNTING IDLE PAGES PER CGROUP ----
#! /usr/bin/python
#

import os
import stat
import errno
import struct

CGROUP_MOUNT = "/sys/fs/cgroup/memory"
BUFSIZE = 8 * 1024  # must be multiple of 8


def get_hugepage_size():
    with open("/proc/meminfo", "r") as f:
        for s in f:
            k, v = s.split(":")
            if k == "Hugepagesize":
                return int(v.split()[0]) * 1024

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")
HUGEPAGE_SIZE = get_hugepage_size()


def set_idle():
    f = open("/proc/kpageidle", "wb", BUFSIZE)
    while True:
        try:
            f.write(struct.pack("Q", pow(2, 64) - 1))
        except IOError as err:
            if err.errno == errno.ENXIO:
                break
            raise
    f.close()


def count_idle():
    f_flags = open("/proc/kpageflags", "rb", BUFSIZE)
    f_cgroup = open("/proc/kpagecgroup", "rb", BUFSIZE)

    with open("/proc/kpageidle", "rb", BUFSIZE) as f:
        while f.read(BUFSIZE): pass  # update idle flag

    idlememsz = {}
    while True:
        s1, s2 = f_flags.read(8), f_cgroup.read(8)
        if not s1 or not s2:
            break

        flags, = struct.unpack('Q', s1)
        cgino, = struct.unpack('Q', s2)

        unevictable = (flags >> 18) & 1  # KPF_UNEVICTABLE
        huge = (flags >> 22) & 1         # KPF_THP
        idle = (flags >> 25) & 1         # KPF_IDLE (added by this series)

        if idle and not unevictable:
            idlememsz[cgino] = idlememsz.get(cgino, 0) + \
                (HUGEPAGE_SIZE if huge else PAGE_SIZE)

    f_flags.close()
    f_cgroup.close()
    return idlememsz


if __name__ == "__main__":
    print "Setting the idle flag for each page..."
    set_idle()

    raw_input("Wait until the workload accesses its working set, "
              "then press Enter")

    print "Counting idle pages..."
    idlememsz = count_idle()

    for dir, subdirs, files in os.walk(CGROUP_MOUNT):
        ino = os.stat(dir)[stat.ST_INO]
        print dir + ": " + str(idlememsz.get(ino, 0) / 1024) + " kB"
---- END SCRIPT ----

Comments are more than welcome.

Thanks,

Vladimir Davydov (8):
  memcg: add page_cgroup_ino helper
  hwpoison: use page_cgroup_ino for filtering by memcg
  memcg: zap try_get_mem_cgroup_from_page
  proc: add kpagecgroup file
  mmu-notifier: add clear_young callback
  proc: add kpageidle file
  proc: export idle flag via kpageflags
  proc: add cond_resched to /proc/kpage* read/write loop

 Documentation/vm/pagemap.txt           |  22 ++-
 fs/proc/page.c                         | 282 +++++++++++++++++++++++++++++++++
 fs/proc/task_mmu.c                     |   4 +-
 include/linux/memcontrol.h             |  10 +-
 include/linux/mm.h                     |  98 ++++++++++++
 include/linux/mmu_notifier.h           |  44 +++++
 include/linux/page-flags.h             |  11 ++
 include/linux/page_ext.h               |   4 +
 include/uapi/linux/kernel-page-flags.h |   1 +
 mm/Kconfig                             |  12 ++
 mm/debug.c                             |   4 +
 mm/huge_memory.c                       |  11 +-
 mm/hwpoison-inject.c                   |   5 +-
 mm/memcontrol.c                        |  71 ++++-----
 mm/memory-failure.c                    |  16 +-
 mm/migrate.c                           |   5 +
 mm/mmu_notifier.c                      |  17 ++
 mm/page_ext.c                          |   3 +
 mm/rmap.c                              |   5 +
 mm/swap.c                              |   2 +
 virt/kvm/kvm_main.c                    |  18 +++
 21 files changed, 579 insertions(+), 66 deletions(-)

-- 
2.1.4

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .

* [PATCH -mm v9 1/8] memcg: add page_cgroup_ino helper
  2015-07-19 12:31 [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
@ 2015-07-19 12:31 ` Vladimir Davydov
  2015-07-21 23:34   ` Andrew Morton
  2015-07-19 12:31 ` [PATCH -mm v9 2/8] hwpoison: use page_cgroup_ino for filtering by memcg Vladimir Davydov
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-19 12:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

This function returns the inode number of the closest online ancestor of
the memory cgroup a page is charged to. It is required for exporting to
userspace information about which cgroup each page is charged to, which
a following patch will introduce.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
---
 include/linux/memcontrol.h |  1 +
 mm/memcontrol.c            | 23 +++++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d92b80b63c5c..99b0e43cac45 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -345,6 +345,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
 }
 
 struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page);
+unsigned long page_cgroup_ino(struct page *page);
 
 static inline bool mem_cgroup_disabled(void)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1def8810880a..a91bc1ee964c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -441,6 +441,29 @@ struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page)
 	return &memcg->css;
 }
 
+/**
+ * page_cgroup_ino - return inode number of the memcg a page is charged to
+ * @page: the page
+ *
+ * Look up the closest online ancestor of the memory cgroup @page is charged to
+ * and return its inode number or 0 if @page is not charged to any cgroup. It
+ * is safe to call this function without holding a reference to @page.
+ */
+unsigned long page_cgroup_ino(struct page *page)
+{
+	struct mem_cgroup *memcg;
+	unsigned long ino = 0;
+
+	rcu_read_lock();
+	memcg = READ_ONCE(page->mem_cgroup);
+	while (memcg && !(memcg->css.flags & CSS_ONLINE))
+		memcg = parent_mem_cgroup(memcg);
+	if (memcg)
+		ino = cgroup_ino(memcg->css.cgroup);
+	rcu_read_unlock();
+	return ino;
+}
+
 static struct mem_cgroup_per_zone *
 mem_cgroup_page_zoneinfo(struct mem_cgroup *memcg, struct page *page)
 {
-- 
2.1.4


* [PATCH -mm v9 2/8] hwpoison: use page_cgroup_ino for filtering by memcg
  2015-07-19 12:31 [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
  2015-07-19 12:31 ` [PATCH -mm v9 1/8] memcg: add page_cgroup_ino helper Vladimir Davydov
@ 2015-07-19 12:31 ` Vladimir Davydov
  2015-07-21 23:34   ` Andrew Morton
  2015-07-19 12:31 ` [PATCH -mm v9 3/8] memcg: zap try_get_mem_cgroup_from_page Vladimir Davydov
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-19 12:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

Hwpoison allows filtering pages by memory cgroup inode number.
Currently, it calls try_get_mem_cgroup_from_page to obtain the cgroup
from a page and then cgroup_ino to get its inode number, but now we have
a more suitable method for that, page_cgroup_ino, so use it instead.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
---
 mm/hwpoison-inject.c |  5 +----
 mm/memory-failure.c  | 16 ++--------------
 2 files changed, 3 insertions(+), 18 deletions(-)

diff --git a/mm/hwpoison-inject.c b/mm/hwpoison-inject.c
index bf73ac17dad4..5015679014c1 100644
--- a/mm/hwpoison-inject.c
+++ b/mm/hwpoison-inject.c
@@ -45,12 +45,9 @@ static int hwpoison_inject(void *data, u64 val)
 	/*
 	 * do a racy check with elevated page count, to make sure PG_hwpoison
 	 * will only be set for the targeted owner (or on a free page).
-	 * We temporarily take page lock for try_get_mem_cgroup_from_page().
 	 * memory_failure() will redo the check reliably inside page lock.
 	 */
-	lock_page(hpage);
 	err = hwpoison_filter(hpage);
-	unlock_page(hpage);
 	if (err)
 		goto put_out;
 
@@ -126,7 +123,7 @@ static int pfn_inject_init(void)
 	if (!dentry)
 		goto fail;
 
-#ifdef CONFIG_MEMCG_SWAP
+#ifdef CONFIG_MEMCG
 	dentry = debugfs_create_u64("corrupt-filter-memcg", 0600,
 				    hwpoison_dir, &hwpoison_filter_memcg);
 	if (!dentry)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index ef33ccf37224..97005396a507 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -130,27 +130,15 @@ static int hwpoison_filter_flags(struct page *p)
  * can only guarantee that the page either belongs to the memcg tasks, or is
  * a freed page.
  */
-#ifdef	CONFIG_MEMCG_SWAP
+#ifdef CONFIG_MEMCG
 u64 hwpoison_filter_memcg;
 EXPORT_SYMBOL_GPL(hwpoison_filter_memcg);
 static int hwpoison_filter_task(struct page *p)
 {
-	struct mem_cgroup *mem;
-	struct cgroup_subsys_state *css;
-	unsigned long ino;
-
 	if (!hwpoison_filter_memcg)
 		return 0;
 
-	mem = try_get_mem_cgroup_from_page(p);
-	if (!mem)
-		return -EINVAL;
-
-	css = &mem->css;
-	ino = cgroup_ino(css->cgroup);
-	css_put(css);
-
-	if (ino != hwpoison_filter_memcg)
+	if (page_cgroup_ino(p) != hwpoison_filter_memcg)
 		return -EINVAL;
 
 	return 0;
-- 
2.1.4


* [PATCH -mm v9 3/8] memcg: zap try_get_mem_cgroup_from_page
  2015-07-19 12:31 [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
  2015-07-19 12:31 ` [PATCH -mm v9 1/8] memcg: add page_cgroup_ino helper Vladimir Davydov
  2015-07-19 12:31 ` [PATCH -mm v9 2/8] hwpoison: use page_cgroup_ino for filtering by memcg Vladimir Davydov
@ 2015-07-19 12:31 ` Vladimir Davydov
  2015-07-19 12:31 ` [PATCH -mm v9 4/8] proc: add kpagecgroup file Vladimir Davydov
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-19 12:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

It is only used in mem_cgroup_try_charge, so fold it in and zap it.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 include/linux/memcontrol.h |  9 +--------
 mm/memcontrol.c            | 48 ++++++++++++----------------------------------
 2 files changed, 13 insertions(+), 44 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 99b0e43cac45..d644aadfdd0d 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -305,11 +305,9 @@ struct lruvec *mem_cgroup_zone_lruvec(struct zone *, struct mem_cgroup *);
 struct lruvec *mem_cgroup_page_lruvec(struct page *, struct zone *);
 
 bool task_in_mem_cgroup(struct task_struct *task, struct mem_cgroup *memcg);
-
-struct mem_cgroup *try_get_mem_cgroup_from_page(struct page *page);
 struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
-
 struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg);
+
 static inline
 struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){
 	return css ? container_of(css, struct mem_cgroup, css) : NULL;
@@ -556,11 +554,6 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page,
 	return &zone->lruvec;
 }
 
-static inline struct mem_cgroup *try_get_mem_cgroup_from_page(struct page *page)
-{
-	return NULL;
-}
-
 static inline bool mm_match_cgroup(struct mm_struct *mm,
 		struct mem_cgroup *memcg)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a91bc1ee964c..b9c76a0906f9 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2094,40 +2094,6 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 	css_put_many(&memcg->css, nr_pages);
 }
 
-/*
- * try_get_mem_cgroup_from_page - look up page's memcg association
- * @page: the page
- *
- * Look up, get a css reference, and return the memcg that owns @page.
- *
- * The page must be locked to prevent racing with swap-in and page
- * cache charges.  If coming from an unlocked page table, the caller
- * must ensure the page is on the LRU or this can race with charging.
- */
-struct mem_cgroup *try_get_mem_cgroup_from_page(struct page *page)
-{
-	struct mem_cgroup *memcg;
-	unsigned short id;
-	swp_entry_t ent;
-
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-
-	memcg = page->mem_cgroup;
-	if (memcg) {
-		if (!css_tryget_online(&memcg->css))
-			memcg = NULL;
-	} else if (PageSwapCache(page)) {
-		ent.val = page_private(page);
-		id = lookup_swap_cgroup_id(ent);
-		rcu_read_lock();
-		memcg = mem_cgroup_from_id(id);
-		if (memcg && !css_tryget_online(&memcg->css))
-			memcg = NULL;
-		rcu_read_unlock();
-	}
-	return memcg;
-}
-
 static void lock_page_lru(struct page *page, int *isolated)
 {
 	struct zone *zone = page_zone(page);
@@ -5327,8 +5293,20 @@ int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
 		 * the page lock, which serializes swap cache removal, which
 		 * in turn serializes uncharging.
 		 */
+		VM_BUG_ON_PAGE(!PageLocked(page), page);
 		if (page->mem_cgroup)
 			goto out;
+
+		if (do_swap_account) {
+			swp_entry_t ent = { .val = page_private(page), };
+			unsigned short id = lookup_swap_cgroup_id(ent);
+
+			rcu_read_lock();
+			memcg = mem_cgroup_from_id(id);
+			if (memcg && !css_tryget_online(&memcg->css))
+				memcg = NULL;
+			rcu_read_unlock();
+		}
 	}
 
 	if (PageTransHuge(page)) {
@@ -5336,8 +5314,6 @@ int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
 		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
 	}
 
-	if (do_swap_account && PageSwapCache(page))
-		memcg = try_get_mem_cgroup_from_page(page);
 	if (!memcg)
 		memcg = get_mem_cgroup_from_mm(mm);
 
-- 
2.1.4


* [PATCH -mm v9 4/8] proc: add kpagecgroup file
  2015-07-19 12:31 [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
                   ` (2 preceding siblings ...)
  2015-07-19 12:31 ` [PATCH -mm v9 3/8] memcg: zap try_get_mem_cgroup_from_page Vladimir Davydov
@ 2015-07-19 12:31 ` Vladimir Davydov
  2015-07-21 23:34   ` Andrew Morton
  2015-07-19 12:31 ` [PATCH -mm v9 5/8] mmu-notifier: add clear_young callback Vladimir Davydov
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-19 12:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

/proc/kpagecgroup contains a 64-bit inode number of the memory cgroup
each page is charged to, indexed by PFN. Having this information is
useful for estimating a cgroup's working set size.

The file is present if CONFIG_PROC_PAGE_MONITOR && CONFIG_MEMCG.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 Documentation/vm/pagemap.txt |  6 ++++-
 fs/proc/page.c               | 53 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/Documentation/vm/pagemap.txt b/Documentation/vm/pagemap.txt
index 56faec0f73f7..3a37ed184258 100644
--- a/Documentation/vm/pagemap.txt
+++ b/Documentation/vm/pagemap.txt
@@ -5,7 +5,7 @@ pagemap is a new (as of 2.6.25) set of interfaces in the kernel that allow
 userspace programs to examine the page tables and related information by
 reading files in /proc.
 
-There are three components to pagemap:
+There are four components to pagemap:
 
  * /proc/pid/pagemap.  This file lets a userspace process find out which
    physical frame each virtual page is mapped to.  It contains one 64-bit
@@ -66,6 +66,10 @@ There are three components to pagemap:
     23. BALLOON
     24. ZERO_PAGE
 
+ * /proc/kpagecgroup.  This file contains a 64-bit inode number of the
+   memory cgroup each page is charged to, indexed by PFN. Only available when
+   CONFIG_MEMCG is set.
+
 Short descriptions to the page flags:
 
  0. LOCKED
diff --git a/fs/proc/page.c b/fs/proc/page.c
index 7eee2d8b97d9..70d23245dd43 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -9,6 +9,7 @@
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
 #include <linux/hugetlb.h>
+#include <linux/memcontrol.h>
 #include <linux/kernel-page-flags.h>
 #include <asm/uaccess.h>
 #include "internal.h"
@@ -225,10 +226,62 @@ static const struct file_operations proc_kpageflags_operations = {
 	.read = kpageflags_read,
 };
 
+#ifdef CONFIG_MEMCG
+static ssize_t kpagecgroup_read(struct file *file, char __user *buf,
+				size_t count, loff_t *ppos)
+{
+	u64 __user *out = (u64 __user *)buf;
+	struct page *ppage;
+	unsigned long src = *ppos;
+	unsigned long pfn;
+	ssize_t ret = 0;
+	u64 ino;
+
+	pfn = src / KPMSIZE;
+	count = min_t(unsigned long, count, (max_pfn * KPMSIZE) - src);
+	if (src & KPMMASK || count & KPMMASK)
+		return -EINVAL;
+
+	while (count > 0) {
+		if (pfn_valid(pfn))
+			ppage = pfn_to_page(pfn);
+		else
+			ppage = NULL;
+
+		if (ppage)
+			ino = page_cgroup_ino(ppage);
+		else
+			ino = 0;
+
+		if (put_user(ino, out)) {
+			ret = -EFAULT;
+			break;
+		}
+
+		pfn++;
+		out++;
+		count -= KPMSIZE;
+	}
+
+	*ppos += (char __user *)out - buf;
+	if (!ret)
+		ret = (char __user *)out - buf;
+	return ret;
+}
+
+static const struct file_operations proc_kpagecgroup_operations = {
+	.llseek = mem_lseek,
+	.read = kpagecgroup_read,
+};
+#endif /* CONFIG_MEMCG */
+
 static int __init proc_page_init(void)
 {
 	proc_create("kpagecount", S_IRUSR, NULL, &proc_kpagecount_operations);
 	proc_create("kpageflags", S_IRUSR, NULL, &proc_kpageflags_operations);
+#ifdef CONFIG_MEMCG
+	proc_create("kpagecgroup", S_IRUSR, NULL, &proc_kpagecgroup_operations);
+#endif
 	return 0;
 }
 fs_initcall(proc_page_init);
-- 
2.1.4


* [PATCH -mm v9 5/8] mmu-notifier: add clear_young callback
  2015-07-19 12:31 [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
                   ` (3 preceding siblings ...)
  2015-07-19 12:31 ` [PATCH -mm v9 4/8] proc: add kpagecgroup file Vladimir Davydov
@ 2015-07-19 12:31 ` Vladimir Davydov
  2015-07-20 18:34   ` Andres Lagar-Cavilla
  2015-07-19 12:31 ` [PATCH -mm v9 6/8] proc: add kpageidle file Vladimir Davydov
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-19 12:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

In the scope of the idle memory tracking feature, which is introduced by
the following patch, we need to clear the referenced/accessed bit not
only in primary, but also in secondary ptes. The latter is required in
order to estimate the working set size of KVM VMs. At the same time we
want to avoid flushing the TLB, because it is quite expensive and it
won't really affect the final result.

Currently, there is no function for clearing the pte young bit that would
meet our requirements, so this patch introduces one. To achieve that we
have to add a new mmu-notifier callback, clear_young, since there is no
method for testing-and-clearing a secondary pte without flushing the TLB.
The new method is not mandatory and is currently only implemented by KVM.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/linux/mmu_notifier.h | 44 ++++++++++++++++++++++++++++++++++++++++++++
 mm/mmu_notifier.c            | 17 +++++++++++++++++
 virt/kvm/kvm_main.c          | 18 ++++++++++++++++++
 3 files changed, 79 insertions(+)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 61cd67f4d788..a5b17137c683 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -66,6 +66,16 @@ struct mmu_notifier_ops {
 				 unsigned long end);
 
 	/*
+	 * clear_young is a lightweight version of clear_flush_young. Like the
+	 * latter, it is supposed to test-and-clear the young/accessed bitflag
+	 * in the secondary pte, but it may omit flushing the secondary tlb.
+	 */
+	int (*clear_young)(struct mmu_notifier *mn,
+			   struct mm_struct *mm,
+			   unsigned long start,
+			   unsigned long end);
+
+	/*
 	 * test_young is called to check the young/accessed bitflag in
 	 * the secondary pte. This is used to know if the page is
 	 * frequently used without actually clearing the flag or tearing
@@ -203,6 +213,9 @@ extern void __mmu_notifier_release(struct mm_struct *mm);
 extern int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
 					  unsigned long start,
 					  unsigned long end);
+extern int __mmu_notifier_clear_young(struct mm_struct *mm,
+				      unsigned long start,
+				      unsigned long end);
 extern int __mmu_notifier_test_young(struct mm_struct *mm,
 				     unsigned long address);
 extern void __mmu_notifier_change_pte(struct mm_struct *mm,
@@ -231,6 +244,15 @@ static inline int mmu_notifier_clear_flush_young(struct mm_struct *mm,
 	return 0;
 }
 
+static inline int mmu_notifier_clear_young(struct mm_struct *mm,
+					   unsigned long start,
+					   unsigned long end)
+{
+	if (mm_has_notifiers(mm))
+		return __mmu_notifier_clear_young(mm, start, end);
+	return 0;
+}
+
 static inline int mmu_notifier_test_young(struct mm_struct *mm,
 					  unsigned long address)
 {
@@ -311,6 +333,28 @@ static inline void mmu_notifier_mm_destroy(struct mm_struct *mm)
 	__young;							\
 })
 
+#define ptep_clear_young_notify(__vma, __address, __ptep)		\
+({									\
+	int __young;							\
+	struct vm_area_struct *___vma = __vma;				\
+	unsigned long ___address = __address;				\
+	__young = ptep_test_and_clear_young(___vma, ___address, __ptep);\
+	__young |= mmu_notifier_clear_young(___vma->vm_mm, ___address,	\
+					    ___address + PAGE_SIZE);	\
+	__young;							\
+})
+
+#define pmdp_clear_young_notify(__vma, __address, __pmdp)		\
+({									\
+	int __young;							\
+	struct vm_area_struct *___vma = __vma;				\
+	unsigned long ___address = __address;				\
+	__young = pmdp_test_and_clear_young(___vma, ___address, __pmdp);\
+	__young |= mmu_notifier_clear_young(___vma->vm_mm, ___address,	\
+					    ___address + PMD_SIZE);	\
+	__young;							\
+})
+
 #define	ptep_clear_flush_notify(__vma, __address, __ptep)		\
 ({									\
 	unsigned long ___addr = __address & PAGE_MASK;			\
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 3b9b3d0741b2..5fbdd367bbed 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -123,6 +123,23 @@ int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
 	return young;
 }
 
+int __mmu_notifier_clear_young(struct mm_struct *mm,
+			       unsigned long start,
+			       unsigned long end)
+{
+	struct mmu_notifier *mn;
+	int young = 0, id;
+
+	id = srcu_read_lock(&srcu);
+	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
+		if (mn->ops->clear_young)
+			young |= mn->ops->clear_young(mn, mm, start, end);
+	}
+	srcu_read_unlock(&srcu, id);
+
+	return young;
+}
+
 int __mmu_notifier_test_young(struct mm_struct *mm,
 			      unsigned long address)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 8b8a44453670..ff4173ce6924 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -387,6 +387,23 @@ static int kvm_mmu_notifier_clear_flush_young(struct mmu_notifier *mn,
 	return young;
 }
 
+static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
+					struct mm_struct *mm,
+					unsigned long start,
+					unsigned long end)
+{
+	struct kvm *kvm = mmu_notifier_to_kvm(mn);
+	int young, idx;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+	young = kvm_age_hva(kvm, start, end);
+	spin_unlock(&kvm->mmu_lock);
+	srcu_read_unlock(&kvm->srcu, idx);
+
+	return young;
+}
+
 static int kvm_mmu_notifier_test_young(struct mmu_notifier *mn,
 				       struct mm_struct *mm,
 				       unsigned long address)
@@ -419,6 +436,7 @@ static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {
 	.invalidate_range_start	= kvm_mmu_notifier_invalidate_range_start,
 	.invalidate_range_end	= kvm_mmu_notifier_invalidate_range_end,
 	.clear_flush_young	= kvm_mmu_notifier_clear_flush_young,
+	.clear_young		= kvm_mmu_notifier_clear_young,
 	.test_young		= kvm_mmu_notifier_test_young,
 	.change_pte		= kvm_mmu_notifier_change_pte,
 	.release		= kvm_mmu_notifier_release,
-- 
2.1.4


* [PATCH -mm v9 6/8] proc: add kpageidle file
  2015-07-19 12:31 [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
                   ` (4 preceding siblings ...)
  2015-07-19 12:31 ` [PATCH -mm v9 5/8] mmu-notifier: add clear_young callback Vladimir Davydov
@ 2015-07-19 12:31 ` Vladimir Davydov
  2015-07-21 23:34   ` Andrew Morton
  2015-07-24 14:08   ` Paul Gortmaker
  2015-07-19 12:31 ` [PATCH -mm v9 7/8] proc: export idle flag via kpageflags Vladimir Davydov
                   ` (5 subsequent siblings)
  11 siblings, 2 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-19 12:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

Knowing the portion of memory that is not used by a certain application
or memory cgroup (idle memory) can be useful for partitioning the system
efficiently, e.g. by setting memory cgroup limits appropriately.
Currently, the only means the kernel provides for estimating the amount
of idle memory is /proc/PID/{clear_refs,smaps}: the user can clear the
access bit for all pages mapped to a particular process by writing 1 to
clear_refs, wait for some time, and then count smaps:Referenced.
However, this method has two serious shortcomings:

 - it does not count unmapped file pages
 - it affects the reclaimer logic

To overcome these drawbacks, this patch introduces two new page flags,
Idle and Young, and a new proc file, /proc/kpageidle. A page's Idle flag
can only be set from userspace, by setting the bit in /proc/kpageidle at the
offset corresponding to the page, and it is cleared whenever the page is
accessed either through page tables (it is cleared in page_referenced()
in this case) or using the read(2) system call (mark_page_accessed()).
Thus by setting the Idle flag for pages of a particular workload, which
can be found e.g. by reading /proc/PID/pagemap, waiting for some time to
let the workload access its working set, and then reading the kpageidle
file, one can estimate the number of pages that are not used by the
workload.
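
For reference, a minimal sketch of the pagemap lookup mentioned above (the
helper name is ours; per Documentation/vm/pagemap.txt, bit 63 of a pagemap
entry is the present flag and bits 0-54 hold the PFN, and reading the PFN
field requires appropriate privileges):

    import struct

    def vaddr_to_pfn(pid, vaddr, page_size=4096):
        # One 64-bit pagemap entry per virtual page: bit 63 says the
        # page is present in RAM, bits 0-54 hold its PFN.
        with open("/proc/%d/pagemap" % pid, "rb") as f:
            f.seek(vaddr // page_size * 8)
            entry, = struct.unpack("Q", f.read(8))
        if not (entry >> 63) & 1:
            return None  # page not present (swapped out or unmapped)
        return entry & ((1 << 55) - 1)

The PFNs obtained this way can then be used to index the kpageidle bitmap.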

The Young page flag is used to avoid interference with the memory
reclaimer. A page's Young flag is set whenever the Access bit of a page
table entry pointing to the page is cleared by writing to kpageidle. If
page_referenced() is called on a Young page, it will add 1 to its return
value, therefore concealing the fact that the Access bit was cleared.

Note, since there is no room for extra page flags on 32 bit, this
feature uses extended page flags when compiled on 32 bit.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 Documentation/vm/pagemap.txt |  12 ++-
 fs/proc/page.c               | 218 +++++++++++++++++++++++++++++++++++++++++++
 fs/proc/task_mmu.c           |   4 +-
 include/linux/mm.h           |  98 +++++++++++++++++++
 include/linux/page-flags.h   |  11 +++
 include/linux/page_ext.h     |   4 +
 mm/Kconfig                   |  12 +++
 mm/debug.c                   |   4 +
 mm/huge_memory.c             |  11 ++-
 mm/migrate.c                 |   5 +
 mm/page_ext.c                |   3 +
 mm/rmap.c                    |   5 +
 mm/swap.c                    |   2 +
 13 files changed, 385 insertions(+), 4 deletions(-)

diff --git a/Documentation/vm/pagemap.txt b/Documentation/vm/pagemap.txt
index 3a37ed184258..34fe828c3007 100644
--- a/Documentation/vm/pagemap.txt
+++ b/Documentation/vm/pagemap.txt
@@ -5,7 +5,7 @@ pagemap is a new (as of 2.6.25) set of interfaces in the kernel that allow
 userspace programs to examine the page tables and related information by
 reading files in /proc.
 
-There are four components to pagemap:
+There are five components to pagemap:
 
  * /proc/pid/pagemap.  This file lets a userspace process find out which
    physical frame each virtual page is mapped to.  It contains one 64-bit
@@ -70,6 +70,16 @@ There are four components to pagemap:
    memory cgroup each page is charged to, indexed by PFN. Only available when
    CONFIG_MEMCG is set.
 
+ * /proc/kpageidle.  This file implements a bitmap where each bit corresponds
+   to a page, indexed by PFN. When the bit is set, the corresponding page is
+   idle. A page is considered idle if it has not been accessed since it was
+   marked idle. To mark a page idle one should set the bit corresponding to the
+   page by writing to the file. A value written to the file is OR-ed with the
+   current bitmap value. Only user memory pages can be marked idle, for other
+   page types input is silently ignored. Writing to this file beyond max PFN
+   results in the ENXIO error. Only available when CONFIG_IDLE_PAGE_TRACKING is
+   set.
+
 Short descriptions to the page flags:
 
  0. LOCKED
diff --git a/fs/proc/page.c b/fs/proc/page.c
index 70d23245dd43..273537885ab4 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -5,6 +5,8 @@
 #include <linux/ksm.h>
 #include <linux/mm.h>
 #include <linux/mmzone.h>
+#include <linux/rmap.h>
+#include <linux/mmu_notifier.h>
 #include <linux/huge_mm.h>
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
@@ -16,6 +18,7 @@
 
 #define KPMSIZE sizeof(u64)
 #define KPMMASK (KPMSIZE - 1)
+#define KPMBITS (KPMSIZE * BITS_PER_BYTE)
 
 /* /proc/kpagecount - an array exposing page counts
  *
@@ -275,6 +278,217 @@ static const struct file_operations proc_kpagecgroup_operations = {
 };
 #endif /* CONFIG_MEMCG */
 
+#ifdef CONFIG_IDLE_PAGE_TRACKING
+/*
+ * Idle page tracking only considers user memory pages, for other types of
+ * pages the idle flag is always unset and an attempt to set it is silently
+ * ignored.
+ *
+ * We treat a page as a user memory page if it is on an LRU list, because it is
+ * always safe to pass such a page to rmap_walk(), which is essential for idle
+ * page tracking. With such an indicator of user pages we can skip isolated
+ * pages, but since there are not usually many of them, it will hardly affect
+ * the overall result.
+ *
+ * This function tries to get a user memory page by pfn as described above.
+ */
+static struct page *kpageidle_get_page(unsigned long pfn)
+{
+	struct page *page;
+	struct zone *zone;
+
+	if (!pfn_valid(pfn))
+		return NULL;
+
+	page = pfn_to_page(pfn);
+	if (!page || !PageLRU(page) ||
+	    !get_page_unless_zero(page))
+		return NULL;
+
+	zone = page_zone(page);
+	spin_lock_irq(&zone->lru_lock);
+	if (unlikely(!PageLRU(page))) {
+		put_page(page);
+		page = NULL;
+	}
+	spin_unlock_irq(&zone->lru_lock);
+	return page;
+}
+
+static int kpageidle_clear_pte_refs_one(struct page *page,
+					struct vm_area_struct *vma,
+					unsigned long addr, void *arg)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	spinlock_t *ptl;
+	pmd_t *pmd;
+	pte_t *pte;
+	bool referenced = false;
+
+	if (unlikely(PageTransHuge(page))) {
+		pmd = page_check_address_pmd(page, mm, addr,
+					     PAGE_CHECK_ADDRESS_PMD_FLAG, &ptl);
+		if (pmd) {
+			referenced = pmdp_clear_young_notify(vma, addr, pmd);
+			spin_unlock(ptl);
+		}
+	} else {
+		pte = page_check_address(page, mm, addr, &ptl, 0);
+		if (pte) {
+			referenced = ptep_clear_young_notify(vma, addr, pte);
+			pte_unmap_unlock(pte, ptl);
+		}
+	}
+	if (referenced) {
+		clear_page_idle(page);
+		/*
+		 * We cleared the referenced bit in a mapping to this page. To
+		 * avoid interference with page reclaim, mark it young so that
+		 * page_referenced() will return > 0.
+		 */
+		set_page_young(page);
+	}
+	return SWAP_AGAIN;
+}
+
+static void kpageidle_clear_pte_refs(struct page *page)
+{
+	struct rmap_walk_control rwc = {
+		.rmap_one = kpageidle_clear_pte_refs_one,
+		.anon_lock = page_lock_anon_vma_read,
+	};
+	bool need_lock;
+
+	if (!page_mapped(page) ||
+	    !page_rmapping(page))
+		return;
+
+	need_lock = !PageAnon(page) || PageKsm(page);
+	if (need_lock && !trylock_page(page))
+		return;
+
+	rmap_walk(page, &rwc);
+
+	if (need_lock)
+		unlock_page(page);
+}
+
+static ssize_t kpageidle_read(struct file *file, char __user *buf,
+			      size_t count, loff_t *ppos)
+{
+	u64 __user *out = (u64 __user *)buf;
+	struct page *page;
+	unsigned long pfn, end_pfn;
+	ssize_t ret = 0;
+	u64 idle_bitmap = 0;
+	int bit;
+
+	if (*ppos & KPMMASK || count & KPMMASK)
+		return -EINVAL;
+
+	pfn = *ppos * BITS_PER_BYTE;
+	if (pfn >= max_pfn)
+		return 0;
+
+	end_pfn = pfn + count * BITS_PER_BYTE;
+	if (end_pfn > max_pfn)
+		end_pfn = ALIGN(max_pfn, KPMBITS);
+
+	for (; pfn < end_pfn; pfn++) {
+		bit = pfn % KPMBITS;
+		page = kpageidle_get_page(pfn);
+		if (page) {
+			if (page_is_idle(page)) {
+				/*
+				 * The page might have been referenced via a
+				 * pte, in which case it is not idle. Clear
+				 * refs and recheck.
+				 */
+				kpageidle_clear_pte_refs(page);
+				if (page_is_idle(page))
+					idle_bitmap |= 1ULL << bit;
+			}
+			put_page(page);
+		}
+		if (bit == KPMBITS - 1) {
+			if (put_user(idle_bitmap, out)) {
+				ret = -EFAULT;
+				break;
+			}
+			idle_bitmap = 0;
+			out++;
+		}
+	}
+
+	*ppos += (char __user *)out - buf;
+	if (!ret)
+		ret = (char __user *)out - buf;
+	return ret;
+}
+
+static ssize_t kpageidle_write(struct file *file, const char __user *buf,
+			       size_t count, loff_t *ppos)
+{
+	const u64 __user *in = (const u64 __user *)buf;
+	struct page *page;
+	unsigned long pfn, end_pfn;
+	ssize_t ret = 0;
+	u64 idle_bitmap = 0;
+	int bit;
+
+	if (*ppos & KPMMASK || count & KPMMASK)
+		return -EINVAL;
+
+	pfn = *ppos * BITS_PER_BYTE;
+	if (pfn >= max_pfn)
+		return -ENXIO;
+
+	end_pfn = pfn + count * BITS_PER_BYTE;
+	if (end_pfn > max_pfn)
+		end_pfn = ALIGN(max_pfn, KPMBITS);
+
+	for (; pfn < end_pfn; pfn++) {
+		bit = pfn % KPMBITS;
+		if (bit == 0) {
+			if (get_user(idle_bitmap, in)) {
+				ret = -EFAULT;
+				break;
+			}
+			in++;
+		}
+		if (idle_bitmap >> bit & 1) {
+			page = kpageidle_get_page(pfn);
+			if (page) {
+				kpageidle_clear_pte_refs(page);
+				set_page_idle(page);
+				put_page(page);
+			}
+		}
+	}
+
+	*ppos += (const char __user *)in - buf;
+	if (!ret)
+		ret = (const char __user *)in - buf;
+	return ret;
+}
+
+static const struct file_operations proc_kpageidle_operations = {
+	.llseek = mem_lseek,
+	.read = kpageidle_read,
+	.write = kpageidle_write,
+};
+
+#ifndef CONFIG_64BIT
+static bool need_page_idle(void)
+{
+	return true;
+}
+struct page_ext_operations page_idle_ops = {
+	.need = need_page_idle,
+};
+#endif
+#endif /* CONFIG_IDLE_PAGE_TRACKING */
+
 static int __init proc_page_init(void)
 {
 	proc_create("kpagecount", S_IRUSR, NULL, &proc_kpagecount_operations);
@@ -282,6 +496,10 @@ static int __init proc_page_init(void)
 #ifdef CONFIG_MEMCG
 	proc_create("kpagecgroup", S_IRUSR, NULL, &proc_kpagecgroup_operations);
 #endif
+#ifdef CONFIG_IDLE_PAGE_TRACKING
+	proc_create("kpageidle", S_IRUSR | S_IWUSR, NULL,
+		    &proc_kpageidle_operations);
+#endif
 	return 0;
 }
 fs_initcall(proc_page_init);
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 860bb0f30f14..7c9a17414106 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -459,7 +459,7 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
 
 	mss->resident += size;
 	/* Accumulate the size in pages that have been accessed. */
-	if (young || PageReferenced(page))
+	if (young || page_is_young(page) || PageReferenced(page))
 		mss->referenced += size;
 	mapcount = page_mapcount(page);
 	if (mapcount >= 2) {
@@ -808,6 +808,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 
 		/* Clear accessed and referenced bits. */
 		pmdp_test_and_clear_young(vma, addr, pmd);
+		test_and_clear_page_young(page);
 		ClearPageReferenced(page);
 out:
 		spin_unlock(ptl);
@@ -835,6 +836,7 @@ out:
 
 		/* Clear accessed and referenced bits. */
 		ptep_test_and_clear_young(vma, addr, pte);
+		test_and_clear_page_young(page);
 		ClearPageReferenced(page);
 	}
 	pte_unmap_unlock(pte - 1, ptl);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c3a2b37365f6..0e62be7d5138 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2202,5 +2202,103 @@ void __init setup_nr_node_ids(void);
 static inline void setup_nr_node_ids(void) {}
 #endif
 
+#ifdef CONFIG_IDLE_PAGE_TRACKING
+#ifdef CONFIG_64BIT
+static inline bool page_is_young(struct page *page)
+{
+	return PageYoung(page);
+}
+
+static inline void set_page_young(struct page *page)
+{
+	SetPageYoung(page);
+}
+
+static inline bool test_and_clear_page_young(struct page *page)
+{
+	return TestClearPageYoung(page);
+}
+
+static inline bool page_is_idle(struct page *page)
+{
+	return PageIdle(page);
+}
+
+static inline void set_page_idle(struct page *page)
+{
+	SetPageIdle(page);
+}
+
+static inline void clear_page_idle(struct page *page)
+{
+	ClearPageIdle(page);
+}
+#else /* !CONFIG_64BIT */
+/*
+ * If there is not enough space to store Idle and Young bits in page flags, use
+ * page ext flags instead.
+ */
+extern struct page_ext_operations page_idle_ops;
+
+static inline bool page_is_young(struct page *page)
+{
+	return test_bit(PAGE_EXT_YOUNG, &lookup_page_ext(page)->flags);
+}
+
+static inline void set_page_young(struct page *page)
+{
+	set_bit(PAGE_EXT_YOUNG, &lookup_page_ext(page)->flags);
+}
+
+static inline bool test_and_clear_page_young(struct page *page)
+{
+	return test_and_clear_bit(PAGE_EXT_YOUNG,
+				  &lookup_page_ext(page)->flags);
+}
+
+static inline bool page_is_idle(struct page *page)
+{
+	return test_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
+}
+
+static inline void set_page_idle(struct page *page)
+{
+	set_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
+}
+
+static inline void clear_page_idle(struct page *page)
+{
+	clear_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
+}
+#endif /* CONFIG_64BIT */
+#else /* !CONFIG_IDLE_PAGE_TRACKING */
+static inline bool page_is_young(struct page *page)
+{
+	return false;
+}
+
+static inline void set_page_young(struct page *page)
+{
+}
+
+static inline bool test_and_clear_page_young(struct page *page)
+{
+	return false;
+}
+
+static inline bool page_is_idle(struct page *page)
+{
+	return false;
+}
+
+static inline void set_page_idle(struct page *page)
+{
+}
+
+static inline void clear_page_idle(struct page *page)
+{
+}
+#endif /* CONFIG_IDLE_PAGE_TRACKING */
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_MM_H */
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 91b7f9b2b774..478f2241f284 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -109,6 +109,10 @@ enum pageflags {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	PG_compound_lock,
 #endif
+#if defined(CONFIG_IDLE_PAGE_TRACKING) && defined(CONFIG_64BIT)
+	PG_young,
+	PG_idle,
+#endif
 	__NR_PAGEFLAGS,
 
 	/* Filesystems */
@@ -363,6 +367,13 @@ PAGEFLAG_FALSE(HWPoison)
 #define __PG_HWPOISON 0
 #endif
 
+#if defined(CONFIG_IDLE_PAGE_TRACKING) && defined(CONFIG_64BIT)
+TESTPAGEFLAG(Young, young, PF_ANY)
+SETPAGEFLAG(Young, young, PF_ANY)
+TESTCLEARFLAG(Young, young, PF_ANY)
+PAGEFLAG(Idle, idle, PF_ANY)
+#endif
+
 /*
  * On an anonymous page mapped into a user virtual memory area,
  * page->mapping points to its anon_vma, not to a struct address_space;
diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index c42981cd99aa..17f118a82854 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -26,6 +26,10 @@ enum page_ext_flags {
 	PAGE_EXT_DEBUG_POISON,		/* Page is poisoned */
 	PAGE_EXT_DEBUG_GUARD,
 	PAGE_EXT_OWNER,
+#if defined(CONFIG_IDLE_PAGE_TRACKING) && !defined(CONFIG_64BIT)
+	PAGE_EXT_YOUNG,
+	PAGE_EXT_IDLE,
+#endif
 };
 
 /*
diff --git a/mm/Kconfig b/mm/Kconfig
index e79de2bd12cd..db817e2c2ec8 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -654,3 +654,15 @@ config DEFERRED_STRUCT_PAGE_INIT
 	  when kswapd starts. This has a potential performance impact on
 	  processes running early in the lifetime of the systemm until kswapd
 	  finishes the initialisation.
+
+config IDLE_PAGE_TRACKING
+	bool "Enable idle page tracking"
+	select PROC_PAGE_MONITOR
+	select PAGE_EXTENSION if !64BIT
+	help
+	  This feature allows to estimate the amount of user pages that have
+	  not been touched during a given period of time. This information can
+	  be useful to tune memory cgroup limits and/or for job placement
+	  within a compute cluster.
+
+	  See Documentation/vm/pagemap.txt for more details.
diff --git a/mm/debug.c b/mm/debug.c
index 76089ddf99ea..6c1b3ea61bfd 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -48,6 +48,10 @@ static const struct trace_print_flags pageflag_names[] = {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	{1UL << PG_compound_lock,	"compound_lock"	},
 #endif
+#if defined(CONFIG_IDLE_PAGE_TRACKING) && defined(CONFIG_64BIT)
+	{1UL << PG_young,		"young"		},
+	{1UL << PG_idle,		"idle"		},
+#endif
 };
 
 static void dump_flags(unsigned long flags,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8f9a334a6c66..5ab46adca104 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1806,6 +1806,11 @@ static void __split_huge_page_refcount(struct page *page,
 		/* clear PageTail before overwriting first_page */
 		smp_wmb();
 
+		if (page_is_young(page))
+			set_page_young(page_tail);
+		if (page_is_idle(page))
+			set_page_idle(page_tail);
+
 		/*
 		 * __split_huge_page_splitting() already set the
 		 * splitting bit in all pmd that could map this
@@ -2311,7 +2316,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		VM_BUG_ON_PAGE(PageLRU(page), page);
 
 		/* If there is no mapped pte young don't collapse the page */
-		if (pte_young(pteval) || PageReferenced(page) ||
+		if (pte_young(pteval) ||
+		    page_is_young(page) || PageReferenced(page) ||
 		    mmu_notifier_test_young(vma->vm_mm, address))
 			referenced = true;
 	}
@@ -2738,7 +2744,8 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		 */
 		if (page_count(page) != 1 + !!PageSwapCache(page))
 			goto out_unmap;
-		if (pte_young(pteval) || PageReferenced(page) ||
+		if (pte_young(pteval) ||
+		    page_is_young(page) || PageReferenced(page) ||
 		    mmu_notifier_test_young(vma->vm_mm, address))
 			referenced = true;
 	}
diff --git a/mm/migrate.c b/mm/migrate.c
index d3529d620a5b..d86cec005aa6 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -524,6 +524,11 @@ void migrate_page_copy(struct page *newpage, struct page *page)
 			__set_page_dirty_nobuffers(newpage);
  	}
 
+	if (page_is_young(page))
+		set_page_young(newpage);
+	if (page_is_idle(page))
+		set_page_idle(newpage);
+
 	/*
 	 * Copy NUMA information to the new page, to prevent over-eager
 	 * future migrations of this same page.
diff --git a/mm/page_ext.c b/mm/page_ext.c
index d86fd2f5353f..e4b3af054bf2 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -59,6 +59,9 @@ static struct page_ext_operations *page_ext_ops[] = {
 #ifdef CONFIG_PAGE_OWNER
 	&page_owner_ops,
 #endif
+#if defined(CONFIG_IDLE_PAGE_TRACKING) && !defined(CONFIG_64BIT)
+	&page_idle_ops,
+#endif
 };
 
 static unsigned long total_usage;
diff --git a/mm/rmap.c b/mm/rmap.c
index 30812e9042ae..9e411aa03176 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -900,6 +900,11 @@ static int page_referenced_one(struct page *page, struct vm_area_struct *vma,
 		pte_unmap_unlock(pte, ptl);
 	}
 
+	if (referenced)
+		clear_page_idle(page);
+	if (test_and_clear_page_young(page))
+		referenced++;
+
 	if (referenced) {
 		pra->referenced++;
 		pra->vm_flags |= vma->vm_flags;
diff --git a/mm/swap.c b/mm/swap.c
index d398860badd1..04b6ce51bcf0 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -623,6 +623,8 @@ void mark_page_accessed(struct page *page)
 	} else if (!PageReferenced(page)) {
 		SetPageReferenced(page);
 	}
+	if (page_is_idle(page))
+		clear_page_idle(page);
 }
 EXPORT_SYMBOL(mark_page_accessed);
 
-- 
2.1.4


* [PATCH -mm v9 7/8] proc: export idle flag via kpageflags
  2015-07-19 12:31 [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
                   ` (5 preceding siblings ...)
  2015-07-19 12:31 ` [PATCH -mm v9 6/8] proc: add kpageidle file Vladimir Davydov
@ 2015-07-19 12:31 ` Vladimir Davydov
  2015-07-21 23:35   ` Andrew Morton
  2015-07-19 12:31 ` [PATCH -mm v9 8/8] proc: add cond_resched to /proc/kpage* read/write loop Vladimir Davydov
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-19 12:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

As noted by Minchan, a benefit of reading the idle flag from
/proc/kpageflags is that one can easily filter out dirty and/or
unevictable pages while estimating the size of unused memory.

Note that the idle flag read from /proc/kpageflags may be stale if the
page was accessed via a PTE, because it would be too costly to iterate
over all page mappings on each /proc/kpageflags read to provide an
up-to-date value. To make sure the flag is up to date, one has to read
/proc/kpageidle first.
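
For instance, a checker might look like the following rough userspace
sketch (not part of this series; error handling omitted, KPF_* values
as defined by this series):

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

#define KPF_UNEVICTABLE	18
#define KPF_IDLE	25

/* Return 1 if @pfn is idle and evictable, 0 otherwise. */
static int pfn_is_idle(unsigned long pfn)
{
	uint64_t word, flags;
	int fd;

	/*
	 * Reading /proc/kpageidle rechecks PTE references, which
	 * brings the IDLE flag in /proc/kpageflags up to date.
	 */
	fd = open("/proc/kpageidle", O_RDONLY);
	pread(fd, &word, 8, pfn / 64 * 8);
	close(fd);

	fd = open("/proc/kpageflags", O_RDONLY);
	pread(fd, &flags, 8, pfn * 8);
	close(fd);

	return (flags >> KPF_IDLE & 1) &&
	       !(flags >> KPF_UNEVICTABLE & 1);
}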

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
---
 Documentation/vm/pagemap.txt           | 6 ++++++
 fs/proc/page.c                         | 3 +++
 include/uapi/linux/kernel-page-flags.h | 1 +
 3 files changed, 10 insertions(+)

diff --git a/Documentation/vm/pagemap.txt b/Documentation/vm/pagemap.txt
index 34fe828c3007..538735465693 100644
--- a/Documentation/vm/pagemap.txt
+++ b/Documentation/vm/pagemap.txt
@@ -65,6 +65,7 @@ There are five components to pagemap:
     22. THP
     23. BALLOON
     24. ZERO_PAGE
+    25. IDLE
 
  * /proc/kpagecgroup.  This file contains a 64-bit inode number of the
    memory cgroup each page is charged to, indexed by PFN. Only available when
@@ -125,6 +126,11 @@ Short descriptions to the page flags:
 24. ZERO_PAGE
     zero page for pfn_zero or huge_zero page
 
+25. IDLE
+    page has not been accessed since it was marked idle (see /proc/kpageidle)
+    Note that this flag may be stale in case the page was accessed via a PTE.
+    To make sure the flag is up-to-date one has to read /proc/kpageidle first.
+
     [IO related page flags]
  1. ERROR     IO error occurred
  3. UPTODATE  page has up-to-date data
diff --git a/fs/proc/page.c b/fs/proc/page.c
index 273537885ab4..13dcb823fe4e 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -150,6 +150,9 @@ u64 stable_page_flags(struct page *page)
 	if (PageBalloon(page))
 		u |= 1 << KPF_BALLOON;
 
+	if (page_is_idle(page))
+		u |= 1 << KPF_IDLE;
+
 	u |= kpf_copy_bit(k, KPF_LOCKED,	PG_locked);
 
 	u |= kpf_copy_bit(k, KPF_SLAB,		PG_slab);
diff --git a/include/uapi/linux/kernel-page-flags.h b/include/uapi/linux/kernel-page-flags.h
index a6c4962e5d46..5da5f8751ce7 100644
--- a/include/uapi/linux/kernel-page-flags.h
+++ b/include/uapi/linux/kernel-page-flags.h
@@ -33,6 +33,7 @@
 #define KPF_THP			22
 #define KPF_BALLOON		23
 #define KPF_ZERO_PAGE		24
+#define KPF_IDLE		25
 
 
 #endif /* _UAPILINUX_KERNEL_PAGE_FLAGS_H */
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH -mm v9 8/8] proc: add cond_resched to /proc/kpage* read/write loop
  2015-07-19 12:31 [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
                   ` (6 preceding siblings ...)
  2015-07-19 12:31 ` [PATCH -mm v9 7/8] proc: export idle flag via kpageflags Vladimir Davydov
@ 2015-07-19 12:31 ` Vladimir Davydov
  2015-07-19 12:37 ` [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-19 12:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

Reading or writing a /proc/kpage* file may take a long time on machines
with a lot of RAM installed, so add a reschedule point to each
read/write loop.
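
As a rough estimate: with 1 TB of RAM and 4 KB pages there are 256M
page frames, so a single full pass over /proc/kpageflags alone copies
256M * 8 bytes = 2 GB to userspace in one loop, which on a
non-preemptible kernel would run for a long time without rescheduling.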

Suggested-by: Andres Lagar-Cavilla <andreslc@google.com>
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
---
 fs/proc/page.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/fs/proc/page.c b/fs/proc/page.c
index 13dcb823fe4e..7ff7cba8617b 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -58,6 +58,8 @@ static ssize_t kpagecount_read(struct file *file, char __user *buf,
 		pfn++;
 		out++;
 		count -= KPMSIZE;
+
+		cond_resched();
 	}
 
 	*ppos += (char __user *)out - buf;
@@ -219,6 +221,8 @@ static ssize_t kpageflags_read(struct file *file, char __user *buf,
 		pfn++;
 		out++;
 		count -= KPMSIZE;
+
+		cond_resched();
 	}
 
 	*ppos += (char __user *)out - buf;
@@ -267,6 +271,8 @@ static ssize_t kpagecgroup_read(struct file *file, char __user *buf,
 		pfn++;
 		out++;
 		count -= KPMSIZE;
+
+		cond_resched();
 	}
 
 	*ppos += (char __user *)out - buf;
@@ -421,6 +427,7 @@ static ssize_t kpageidle_read(struct file *file, char __user *buf,
 			idle_bitmap = 0;
 			out++;
 		}
+		cond_resched();
 	}
 
 	*ppos += (char __user *)out - buf;
@@ -467,6 +474,7 @@ static ssize_t kpageidle_write(struct file *file, const char __user *buf,
 				put_page(page);
 			}
 		}
+		cond_resched();
 	}
 
 	*ppos += (const char __user *)in - buf;
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-19 12:31 [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
                   ` (7 preceding siblings ...)
  2015-07-19 12:31 ` [PATCH -mm v9 8/8] proc: add cond_resched to /proc/kpage* read/write loop Vladimir Davydov
@ 2015-07-19 12:37 ` Vladimir Davydov
  2015-07-21 21:39 ` Andres Lagar-Cavilla
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-19 12:37 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Sun, Jul 19, 2015 at 03:31:09PM +0300, Vladimir Davydov wrote:
> ---- PERFORMANCE EVALUATION ----
> 
> SPECjvm2008 (https://www.spec.org/jvm2008/) was used to evaluate the
> performance impact introduced by this patch set. Three runs were carried
> out:
> 
>  - base: kernel without the patch
>  - patched: patched kernel, the feature is not used
>  - patched-active: patched kernel, 1 minute-period daemon is used for
>    tracking idle memory
> 
> For tracking idle memory, idlememstat utility was used:
> https://github.com/locker/idlememstat
> 
> testcase            base            patched        patched-active
> 
> compiler       537.40 ( 0.00)%   532.26 (-0.96)%   538.31 ( 0.17)%
> compress       305.47 ( 0.00)%   301.08 (-1.44)%   300.71 (-1.56)%
> crypto         284.32 ( 0.00)%   282.21 (-0.74)%   284.87 ( 0.19)%
> derby          411.05 ( 0.00)%   413.44 ( 0.58)%   412.07 ( 0.25)%
> mpegaudio      189.96 ( 0.00)%   190.87 ( 0.48)%   189.42 (-0.28)%
> scimark.large   46.85 ( 0.00)%    46.41 (-0.94)%    47.83 ( 2.09)%
> scimark.small  412.91 ( 0.00)%   415.41 ( 0.61)%   421.17 ( 2.00)%
> serial         204.23 ( 0.00)%   213.46 ( 4.52)%   203.17 (-0.52)%
> startup         36.76 ( 0.00)%    35.49 (-3.45)%    35.64 (-3.05)%
> sunflow        115.34 ( 0.00)%   115.08 (-0.23)%   117.37 ( 1.76)%
> xml            620.55 ( 0.00)%   619.95 (-0.10)%   620.39 (-0.03)%
> 
> composite      211.50 ( 0.00)%   211.15 (-0.17)%   211.67 ( 0.08)%
> 
> time idlememstat:
> 
> 17.20user 65.16system 2:15:23elapsed 1%CPU (0avgtext+0avgdata 8476maxresident)k
> 448inputs+40outputs (1major+36052minor)pagefaults 0swaps

FWIW here are idle memory stats obtained during the SPECjvm2008 run:

 time    total     idle idle%  testcase
  1 m   179 MB     0 MB    0%
  2 m  1770 MB    48 MB    2%
  3 m  1777 MB   173 MB    9%  compiler.compiler warmup
  4 m  1750 MB   152 MB    8%  compiler.compiler warmup
  5 m  1751 MB   202 MB   11%  compiler.compiler
  6 m  1754 MB   252 MB   14%  compiler.compiler
  7 m  1754 MB   225 MB   12%  compiler.compiler
  8 m  1748 MB   126 MB    7%  compiler.compiler
  9 m  1752 MB   175 MB   10%  compiler.sunflow warmup
 10 m  1760 MB   168 MB    9%  compiler.sunflow warmup
 11 m  1759 MB   210 MB   11%  compiler.sunflow
 12 m  1762 MB   232 MB   13%  compiler.sunflow
 13 m  1761 MB   207 MB   11%  compiler.sunflow
 14 m  1775 MB   139 MB    7%  compiler.sunflow
 15 m  1775 MB   370 MB   20%  compress warmup
 16 m  1773 MB   515 MB   29%  compress warmup
 17 m  1770 MB   514 MB   29%  compress
 18 m  1761 MB   465 MB   26%  compress
 19 m  1750 MB   433 MB   24%  compress
 20 m  1772 MB   339 MB   19%  compress
 21 m  1794 MB   307 MB   17%  crypto.aes warmup
 22 m  1796 MB   325 MB   18%  crypto.aes warmup
 23 m  1798 MB   341 MB   19%  crypto.aes
 24 m  1798 MB   333 MB   18%  crypto.aes
 25 m  1797 MB   332 MB   18%  crypto.aes
 26 m  1798 MB   328 MB   18%  crypto.aes
 27 m  1798 MB   370 MB   20%  crypto.rsa warmup
 28 m  1793 MB   377 MB   21%  crypto.rsa warmup
 29 m  1786 MB   363 MB   20%  crypto.rsa
 30 m  1782 MB   360 MB   20%  crypto.rsa
 31 m  1781 MB   344 MB   19%  crypto.rsa
 32 m  1799 MB   328 MB   18%  crypto.rsa
 33 m  1799 MB   326 MB   18%  crypto.signverify warmup
 34 m  1799 MB   327 MB   18%  crypto.signverify warmup
 35 m  1799 MB   334 MB   18%  crypto.signverify
 36 m  1800 MB   339 MB   18%  crypto.signverify
 37 m  1800 MB   339 MB   18%  crypto.signverify
 38 m  1843 MB   323 MB   17%  crypto.signverify
 39 m  1903 MB   223 MB   11%
 40 m  1951 MB   225 MB   11%
 41 m  2498 MB   253 MB   10%
 42 m  2561 MB   494 MB   19%  derby warmup
 43 m  2565 MB   527 MB   20%  derby warmup
 44 m  2577 MB   574 MB   22%  derby
 45 m  2621 MB   580 MB   22%  derby
 46 m  2641 MB   536 MB   20%  derby
 47 m  2256 MB   316 MB   14%  derby
 48 m  2244 MB   427 MB   19%  mpegaudio warmup
 49 m  2225 MB   781 MB   35%  mpegaudio warmup
 50 m  2179 MB  1143 MB   52%  mpegaudio
 51 m  2067 MB  1297 MB   62%  mpegaudio
 52 m  1976 MB  1186 MB   60%  mpegaudio
 53 m  2756 MB  1118 MB   40%  mpegaudio
 54 m  3810 MB  1831 MB   48%  scimark.fft.large warmup
 55 m  3252 MB  1108 MB   34%  scimark.fft.large warmup
 56 m  2550 MB  1271 MB   49%  scimark.fft.large
 57 m  3835 MB  1643 MB   42%  scimark.fft.large
 58 m  3067 MB  1138 MB   37%  scimark.fft.large
 59 m  2072 MB  1103 MB   53%  scimark.fft.large
 60 m  2183 MB   799 MB   36%  scimark.fft.large
 61 m  2159 MB   568 MB   26%  scimark.lu.large warmup
 62 m  2333 MB   320 MB   13%  scimark.lu.large warmup
 63 m  2411 MB   447 MB   18%  scimark.lu.large warmup
 64 m  2646 MB   345 MB   13%  scimark.lu.large
 65 m  2687 MB   499 MB   18%  scimark.lu.large
 66 m  2691 MB   459 MB   17%  scimark.lu.large
 67 m  2703 MB   641 MB   23%  scimark.lu.large
 68 m  2735 MB  1077 MB   39%  scimark.lu.large
 69 m  2735 MB  2310 MB   84%  scimark.sor.large warmup
 70 m  2735 MB  1704 MB   62%  scimark.sor.large warmup
 71 m  2735 MB  2034 MB   74%  scimark.sor.large
 72 m  2735 MB  2390 MB   87%  scimark.sor.large
 73 m  2735 MB  2417 MB   88%  scimark.sor.large
 74 m  2735 MB  1366 MB   49%  scimark.sor.large
 75 m  2735 MB   985 MB   36%  scimark.sparse.large warmup
 76 m  2759 MB   925 MB   33%  scimark.sparse.large warmup
 77 m  2759 MB  1192 MB   43%  scimark.sparse.large
 78 m  2703 MB  1120 MB   41%  scimark.sparse.large
 79 m  2679 MB  1035 MB   38%  scimark.sparse.large
 80 m  2679 MB  1069 MB   39%  scimark.sparse.large
 81 m  2162 MB   863 MB   39%  scimark.sparse.large
 82 m  2109 MB   677 MB   32%  scimark.fft.small warmup
 83 m  2172 MB   637 MB   29%  scimark.fft.small warmup
 84 m  2220 MB   655 MB   29%  scimark.fft.small
 85 m  2264 MB   658 MB   29%  scimark.fft.small
 86 m  2316 MB   656 MB   28%  scimark.fft.small
 87 m  2529 MB   630 MB   24%  scimark.fft.small
 88 m  2840 MB   645 MB   22%  scimark.lu.small warmup
 89 m  2983 MB   652 MB   21%  scimark.lu.small warmup
 90 m  2983 MB   652 MB   21%  scimark.lu.small
 91 m  2983 MB   651 MB   21%  scimark.lu.small
 92 m  2984 MB   651 MB   21%  scimark.lu.small
 93 m  2984 MB   652 MB   21%  scimark.lu.small
 94 m  2984 MB  2114 MB   70%  scimark.sor.small warmup
 95 m  2984 MB  2796 MB   93%  scimark.sor.small warmup
 96 m  2984 MB  2823 MB   94%  scimark.sor.small
 97 m  2984 MB  2848 MB   95%  scimark.sor.small
 98 m  2984 MB  2817 MB   94%  scimark.sor.small
 99 m  2984 MB  1366 MB   45%  scimark.sor.small
100 m  2984 MB   664 MB   22%  scimark.sparse.small warmup
101 m  2984 MB   654 MB   21%  scimark.sparse.small warmup
102 m  2983 MB   663 MB   22%  scimark.sparse.small
103 m  2983 MB   652 MB   21%  scimark.sparse.small
104 m  2982 MB   651 MB   21%  scimark.sparse.small
105 m  2981 MB   640 MB   21%  scimark.sparse.small
106 m  2981 MB  2113 MB   70%  scimark.monte_carlo warmup
107 m  2981 MB  2831 MB   94%  scimark.monte_carlo warmup
108 m  2981 MB  2835 MB   95%  scimark.monte_carlo
109 m  2981 MB  2863 MB   96%  scimark.monte_carlo
110 m  2981 MB  2872 MB   96%  scimark.monte_carlo
111 m  2881 MB  1179 MB   40%  scimark.monte_carlo
112 m  2880 MB   777 MB   26%  serial warmup
113 m  2882 MB  1063 MB   36%  serial warmup
114 m  2880 MB  1066 MB   37%  serial
115 m  2880 MB  1064 MB   36%  serial
116 m  2882 MB  1064 MB   36%  serial
117 m  2887 MB  1042 MB   36%  serial
118 m  2886 MB  1118 MB   38%  sunflow warmup
119 m  2887 MB  1161 MB   40%  sunflow warmup
120 m  2887 MB  1166 MB   40%  sunflow
121 m  2887 MB  1170 MB   40%  sunflow
122 m  2886 MB  1172 MB   40%  sunflow
123 m  2896 MB  1159 MB   40%  sunflow
124 m  2906 MB  1132 MB   38%  xml.transform warmup
125 m  2907 MB  1136 MB   39%  xml.transform warmup
126 m  2907 MB  1137 MB   39%  xml.transform
127 m  2907 MB  1137 MB   39%  xml.transform
128 m  2907 MB  1134 MB   39%  xml.transform
129 m  2907 MB  1120 MB   38%  xml.transform
130 m  2895 MB   917 MB   31%  xml.validation warmup
131 m  2894 MB   706 MB   24%  xml.validation warmup
132 m  2903 MB   529 MB   18%  xml.validation
133 m  2907 MB   883 MB   30%  xml.validation
134 m  2894 MB  1013 MB   35%  xml.validation
135 m  2907 MB   853 MB   29%  xml.validation


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 5/8] mmu-notifier: add clear_young callback
  2015-07-19 12:31 ` [PATCH -mm v9 5/8] mmu-notifier: add clear_young callback Vladimir Davydov
@ 2015-07-20 18:34   ` Andres Lagar-Cavilla
  2015-07-21  8:51     ` Vladimir Davydov
  0 siblings, 1 reply; 57+ messages in thread
From: Andres Lagar-Cavilla @ 2015-07-20 18:34 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andrew Morton, Minchan Kim, Raghavendra K T, Johannes Weiner,
	Michal Hocko, Greg Thelen, Michel Lespinasse, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel


On Sun, Jul 19, 2015 at 5:31 AM, Vladimir Davydov <vdavydov@parallels.com>
wrote:

> In the scope of the idle memory tracking feature, which is introduced by
> the following patch, we need to clear the referenced/accessed bit not
> only in primary, but also in secondary ptes. The latter is required in
> order to estimate wss of KVM VMs. At the same time we want to avoid
> flushing tlb, because it is quite expensive and it won't really affect
> the final result.
>
> Currently, there is no function for clearing pte young bit that would
> meet our requirements, so this patch introduces one. To achieve that we
> have to add a new mmu-notifier callback, clear_young, since there is no
> method for testing-and-clearing a secondary pte w/o flushing tlb. The
> new method is not mandatory and currently only implemented by KVM.
>
> Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
> Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
> Acked-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  include/linux/mmu_notifier.h | 44 ++++++++++++++++++++++++++++++++++++++++++++
>  mm/mmu_notifier.c            | 17 +++++++++++++++++
>  virt/kvm/kvm_main.c          | 18 ++++++++++++++++++
>  3 files changed, 79 insertions(+)
>
> diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> index 61cd67f4d788..a5b17137c683 100644
> --- a/include/linux/mmu_notifier.h
> +++ b/include/linux/mmu_notifier.h
> @@ -66,6 +66,16 @@ struct mmu_notifier_ops {
>                                  unsigned long end);
>
>         /*
> +        * clear_young is a lightweight version of clear_flush_young. Like the
> +        * latter, it is supposed to test-and-clear the young/accessed bitflag
> +        * in the secondary pte, but it may omit flushing the secondary tlb.
> +        */
> +       int (*clear_young)(struct mmu_notifier *mn,
> +                          struct mm_struct *mm,
> +                          unsigned long start,
> +                          unsigned long end);
> +
> +       /*
>          * test_young is called to check the young/accessed bitflag in
>          * the secondary pte. This is used to know if the page is
>          * frequently used without actually clearing the flag or tearing
> @@ -203,6 +213,9 @@ extern void __mmu_notifier_release(struct mm_struct *mm);
>  extern int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
>                                           unsigned long start,
>                                           unsigned long end);
> +extern int __mmu_notifier_clear_young(struct mm_struct *mm,
> +                                     unsigned long start,
> +                                     unsigned long end);
>  extern int __mmu_notifier_test_young(struct mm_struct *mm,
>                                      unsigned long address);
>  extern void __mmu_notifier_change_pte(struct mm_struct *mm,
> @@ -231,6 +244,15 @@ static inline int mmu_notifier_clear_flush_young(struct mm_struct *mm,
>         return 0;
>  }
>
> +static inline int mmu_notifier_clear_young(struct mm_struct *mm,
> +                                          unsigned long start,
> +                                          unsigned long end)
> +{
> +       if (mm_has_notifiers(mm))
> +               return __mmu_notifier_clear_young(mm, start, end);
> +       return 0;
> +}
> +
>  static inline int mmu_notifier_test_young(struct mm_struct *mm,
>                                           unsigned long address)
>  {
> @@ -311,6 +333,28 @@ static inline void mmu_notifier_mm_destroy(struct mm_struct *mm)
>         __young;                                                        \
>  })
>
> +#define ptep_clear_young_notify(__vma, __address, __ptep)              \
> +({                                                                     \
> +       int __young;                                                    \
> +       struct vm_area_struct *___vma = __vma;                          \
> +       unsigned long ___address = __address;                           \
> +       __young = ptep_test_and_clear_young(___vma, ___address, __ptep);\
> +       __young |= mmu_notifier_clear_young(___vma->vm_mm, ___address,  \
> +                                           ___address + PAGE_SIZE);    \
> +       __young;                                                        \
> +})
> +
> +#define pmdp_clear_young_notify(__vma, __address, __pmdp)              \
> +({                                                                     \
> +       int __young;                                                    \
> +       struct vm_area_struct *___vma = __vma;                          \
> +       unsigned long ___address = __address;                           \
> +       __young = pmdp_test_and_clear_young(___vma, ___address, __pmdp);\
> +       __young |= mmu_notifier_clear_young(___vma->vm_mm, ___address,  \
> +                                           ___address + PMD_SIZE);     \
> +       __young;                                                        \
> +})
> +
>  #define        ptep_clear_flush_notify(__vma, __address, __ptep)      \
>  ({                                                                     \
>         unsigned long ___addr = __address & PAGE_MASK;                  \
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 3b9b3d0741b2..5fbdd367bbed 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -123,6 +123,23 @@ int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
>         return young;
>  }
>
> +int __mmu_notifier_clear_young(struct mm_struct *mm,
> +                              unsigned long start,
> +                              unsigned long end)
> +{
> +       struct mmu_notifier *mn;
> +       int young = 0, id;
> +
> +       id = srcu_read_lock(&srcu);
> +       hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> +               if (mn->ops->clear_young)
> +                       young |= mn->ops->clear_young(mn, mm, start, end);
> +       }
> +       srcu_read_unlock(&srcu, id);
> +
> +       return young;
> +}
> +
>  int __mmu_notifier_test_young(struct mm_struct *mm,
>                               unsigned long address)
>  {
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 8b8a44453670..ff4173ce6924 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -387,6 +387,23 @@ static int kvm_mmu_notifier_clear_flush_young(struct mmu_notifier *mn,
>         return young;
>  }
>
> +static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
> +                                       struct mm_struct *mm,
> +                                       unsigned long start,
> +                                       unsigned long end)
> +{
> +       struct kvm *kvm = mmu_notifier_to_kvm(mn);
> +       int young, idx;
> +
>
If you need to cut out another version please add comments as to the two
issues raised:
- This doesn't proactively flush TLBs -- not obvious if it should.
- This adversely affects performance in Pre_haswell Intel EPT.

Thanks
Andres

> +       idx = srcu_read_lock(&kvm->srcu);
> +       spin_lock(&kvm->mmu_lock);
> +       young = kvm_age_hva(kvm, start, end);
> +       spin_unlock(&kvm->mmu_lock);
> +       srcu_read_unlock(&kvm->srcu, idx);
> +
> +       return young;
> +}
> +
>  static int kvm_mmu_notifier_test_young(struct mmu_notifier *mn,
>                                        struct mm_struct *mm,
>                                        unsigned long address)
> @@ -419,6 +436,7 @@ static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {
>         .invalidate_range_start = kvm_mmu_notifier_invalidate_range_start,
>         .invalidate_range_end   = kvm_mmu_notifier_invalidate_range_end,
>         .clear_flush_young      = kvm_mmu_notifier_clear_flush_young,
> +       .clear_young            = kvm_mmu_notifier_clear_young,
>         .test_young             = kvm_mmu_notifier_test_young,
>         .change_pte             = kvm_mmu_notifier_change_pte,
>         .release                = kvm_mmu_notifier_release,
> --
> 2.1.4
>
>


-- 
Andres Lagar-Cavilla | Google Kernel Team | andreslc@google.com


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 5/8] mmu-notifier: add clear_young callback
  2015-07-20 18:34   ` Andres Lagar-Cavilla
@ 2015-07-21  8:51     ` Vladimir Davydov
  2015-07-22 16:33       ` Vladimir Davydov
  0 siblings, 1 reply; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-21  8:51 UTC (permalink / raw)
  To: Andres Lagar-Cavilla
  Cc: Andrew Morton, Minchan Kim, Raghavendra K T, Johannes Weiner,
	Michal Hocko, Greg Thelen, Michel Lespinasse, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel

On Mon, Jul 20, 2015 at 11:34:21AM -0700, Andres Lagar-Cavilla wrote:
> On Sun, Jul 19, 2015 at 5:31 AM, Vladimir Davydov <vdavydov@parallels.com>
[...]
> > +static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
> > +                                       struct mm_struct *mm,
> > +                                       unsigned long start,
> > +                                       unsigned long end)
> > +{
> > +       struct kvm *kvm = mmu_notifier_to_kvm(mn);
> > +       int young, idx;
> > +
> >
> If you need to cut out another version please add comments as to the two
> issues raised:
> - This doesn't proactively flush TLBs -- not obvious if it should.
> - This adversely affects performance in Pre_haswell Intel EPT.

Oops, I stopped reading your e-mail in reply to the previous version of
this patch as soon as I saw the Reviewed-by tag, so I missed your
request for the comment, sorry about that.

Here it goes (incremental):
---
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index ff4173ce6924..e69a5cb99571 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -397,6 +397,19 @@ static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
 
 	idx = srcu_read_lock(&kvm->srcu);
 	spin_lock(&kvm->mmu_lock);
+	/*
+	 * Even though we do not flush TLB, this will still adversely
+	 * affect performance on pre-Haswell Intel EPT, where there is
+	 * no EPT Access Bit to clear so that we have to tear down EPT
+	 * tables instead. If we find this unacceptable, we can always
+	 * add a parameter to kvm_age_hva so that it effectively doesn't
+	 * do anything on clear_young.
+	 *
+	 * Also note that currently we never issue secondary TLB flushes
+	 * from clear_young, leaving this job up to the regular system
+	 * cadence. If we find this inaccurate, we might come up with a
+	 * more sophisticated heuristic later.
+	 */
 	young = kvm_age_hva(kvm, start, end);
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, idx);


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-19 12:31 [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
                   ` (8 preceding siblings ...)
  2015-07-19 12:37 ` [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
@ 2015-07-21 21:39 ` Andres Lagar-Cavilla
  2015-07-21 23:34 ` Andrew Morton
  2015-07-29 12:36 ` Michal Hocko
  11 siblings, 0 replies; 57+ messages in thread
From: Andres Lagar-Cavilla @ 2015-07-21 21:39 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andrew Morton, Minchan Kim, Raghavendra K T, Johannes Weiner,
	Michal Hocko, Greg Thelen, Michel Lespinasse, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel


On Sun, Jul 19, 2015 at 5:31 AM, Vladimir Davydov <vdavydov@parallels.com>
wrote:

> Hi,
>
> This patch set introduces a new user API for tracking user memory pages
> that have not been used for a given period of time. The purpose of this
> is to provide the userspace with the means of tracking a workload's
> working set, i.e. the set of pages that are actively used by the
> workload. Knowing the working set size can be useful for partitioning
> the system more efficiently, e.g. by tuning memory cgroup limits
> appropriately, or for job placement within a compute cluster.
>
> It is based on top of v4.2-rc2-mmotm-2015-07-15-16-46
> It applies without conflicts to v4.2-rc2-mmotm-2015-07-17-16-04 as well
>
> ---- USE CASES ----
>
> The unified cgroup hierarchy has memory.low and memory.high knobs, which
> are defined as the low and high boundaries for the workload working set
> size. However, the working set size of a workload may be unknown or
> change in time. With this patch set, one can periodically estimate the
> amount of memory unused by each cgroup and tune their memory.low and
> memory.high parameters accordingly, therefore optimizing the overall
> memory utilization.
>
> Another use case is balancing workloads within a compute cluster.
> Knowing how much memory is not really used by a workload unit may help
> take a more optimal decision when considering migrating the unit to
> another node within the cluster.
>
> Also, as noted by Minchan, this would be useful for per-process reclaim
> (https://lwn.net/Articles/545668/). With idle tracking, we could reclaim idle
> pages only by smart user memory manager.
>
> ---- USER API ----
>
> The user API consists of two new proc files:
>
>  * /proc/kpageidle.  This file implements a bitmap where each bit corresponds
>    to a page, indexed by PFN. When the bit is set, the corresponding page is
>    idle. A page is considered idle if it has not been accessed since it was
>    marked idle. To mark a page idle one should set the bit corresponding to the
>    page by writing to the file. A value written to the file is OR-ed with the
>    current bitmap value. Only user memory pages can be marked idle, for other
>    page types input is silently ignored. Writing to this file beyond max PFN
>    results in the ENXIO error. Only available when CONFIG_IDLE_PAGE_TRACKING is
>    set.
>
>    This file can be used to estimate the amount of pages that are not
>    used by a particular workload as follows:
>
>    1. mark all pages of interest idle by setting corresponding bits in the
>       /proc/kpageidle bitmap
>    2. wait until the workload accesses its working set
>    3. read /proc/kpageidle and count the number of bits set
>
>  * /proc/kpagecgroup.  This file contains a 64-bit inode number of the
>    memory cgroup each page is charged to, indexed by PFN. Only available when
>    CONFIG_MEMCG is set.
>
>    This file can be used to find all pages (including unmapped file
>    pages) accounted to a particular cgroup. Using /proc/kpageidle, one
>    can then estimate the cgroup working set size.
>
> For an example of using these files for estimating the amount of unused
> memory pages per each memory cgroup, please see the script attached
> below.
>
> ---- REASONING ----
>
> The reason to introduce the new user API instead of using
> /proc/PID/{clear_refs,smaps} is that the latter has two serious
> drawbacks:
>
>  - it does not count unmapped file pages
>  - it affects the reclaimer logic
>
> The new API attempts to overcome them both. For more details on how it
> is achieved, please see the comment to patch 6.
>
> ---- CHANGE LOG ----
>
> Changes in v9:
>
>  - add cond_resched to /proc/kpage* read/write loop (Andres)
>  - rebase on top of v4.2-rc2-mmotm-2015-07-15-16-46
>

And thanks for the perf report.

This series
Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>


> Changes in v8:
>
>  - clear referenced/accessed bit in secondary ptes while accessing
>    /proc/kpageidle; this is required to estimate wss of KVM VMs (Andres)
>  - check the young flag when collapsing a huge page
>  - copy idle/young flags on page migration
>
> Changes in v7:
>
> This iteration addresses Andres's comments to v6:
>
>  - do not reuse page_referenced for clearing idle flag, introduce a
>    separate function instead; this way we won't issue expensive tlb
>    flushes on /proc/kpageidle read/write
>  - propagate young/idle flags from head to tail pages on thp split
>  - skip compound tail pages while reading/writing /proc/kpageidle
>  - cleanup page_referenced_one
>
> Changes in v6:
>
>  - Split the patch introducing page_cgroup_ino helper to ease review.
>  - Rebase on top of v4.1-rc7-mmotm-2015-06-09-16-55
>
> Changes in v5:
>
>  - Fix possible race between kpageidle_clear_pte_refs() and
>    __page_set_anon_rmap() by checking that a page is on an LRU list
>    under zone->lru_lock (Minchan).
>  - Export idle flag via /proc/kpageflags (Minchan).
>  - Rebase on top of 4.1-rc3.
>
> Changes in v4:
>
> This iteration primarily addresses Minchan's comments to v3:
>
>  - Implement /proc/kpageidle as a bitmap instead of using u64 per each page,
>    because there does not seem to be any future uses for the other 63 bits.
>  - Do not double-increase pra->referenced in page_referenced_one() if the page
>    was young and referenced recently.
>  - Remove the pointless (page_count == 0) check from kpageidle_get_page().
>  - Rename kpageidle_clear_refs() to kpageidle_clear_pte_refs().
>  - Improve comments to kpageidle-related functions.
>  - Rebase on top of 4.1-rc2.
>
> Note it does not address Minchan's concern of possible __page_set_anon_rmap vs
> page_referenced race (see https://lkml.org/lkml/2015/5/3/220) since it is still
> unclear if this race can really happen (see https://lkml.org/lkml/2015/5/4/160)
>
> Changes in v3:
>
>  - Enable CONFIG_IDLE_PAGE_TRACKING for 32 bit. Since this feature
>    requires two extra page flags and there is no space for them on 32
>    bit, page ext is used (thanks to Minchan Kim).
>  - Minor code cleanups and comments improved.
>  - Rebase on top of 4.1-rc1.
>
> Changes in v2:
>
>  - The main difference from v1 is the API change. In v1 the user can
>    only set the idle flag for all pages at once, and for clearing the
>    Idle flag on pages accessed via page tables /proc/PID/clear_refs
>    should be used.
>    The main drawback of the v1 approach, as noted by Minchan, is that on
>    big machines setting the idle flag for each pages can result in CPU
>    bursts, which would be especially frustrating if the user only wanted
>    to estimate the amount of idle pages for a particular process or VMA.
>    With the new API a more fine-grained approach is possible: one can
>    read a process's /proc/PID/pagemap and set/check the Idle flag only
>    for those pages of the process's address space he or she is
>    interested in.
>    Another good point about the v2 API is that it is possible to limit
>    /proc/kpage* scanning rate when the user wants to estimate the total
>    number of idle pages, which is unachievable with the v1 approach.
>  - Make /proc/kpagecgroup return the ino of the closest online ancestor
>    in case the cgroup a page is charged to is offline.
>  - Fix /proc/PID/clear_refs not clearing Young page flag.
>  - Rebase on top of v4.0-rc6-mmotm-2015-04-01-14-54
>
> v8: https://lkml.org/lkml/2015/7/15/587
> v7: https://lkml.org/lkml/2015/7/11/119
> v6: https://lkml.org/lkml/2015/6/12/301
> v5: https://lkml.org/lkml/2015/5/12/449
> v4: https://lkml.org/lkml/2015/5/7/580
> v3: https://lkml.org/lkml/2015/4/28/224
> v2: https://lkml.org/lkml/2015/4/7/260
> v1: https://lkml.org/lkml/2015/3/18/794
>
> ---- PATCH SET STRUCTURE ----
>
> The patch set is organized as follows:
>
>  - patch 1 adds page_cgroup_ino() helper for the sake of
>    /proc/kpagecgroup and patches 2-3 do related cleanup
>  - patch 4 adds /proc/kpagecgroup, which reports cgroup ino each page is
>    charged to
>  - patch 5 introduces a new mmu notifier callback, clear_young, which is
>    a lightweight version of clear_flush_young; it is used in patch 6
>  - patch 6 implements the idle page tracking feature, including the
>    userspace API, /proc/kpageidle
>  - patch 7 exports idle flag via /proc/kpageflags
>
> ---- SIMILAR WORKS ----
>
> Originally, the patch for tracking idle memory was proposed back in 2011
> by Michel Lespinasse (see http://lwn.net/Articles/459269/). The main
> difference between Michel's patch and this one is that Michel
> implemented a kernel space daemon for estimating idle memory size per
> cgroup while this patch only provides the userspace with the minimal API
> for doing the job, leaving the rest up to the userspace. However, they
> both share the same idea of Idle/Young page flags to avoid affecting the
> reclaimer logic.
>
> ---- PERFORMANCE EVALUATION ----
>
> SPECjvm2008 (https://www.spec.org/jvm2008/) was used to evaluate the
> performance impact introduced by this patch set. Three runs were carried
> out:
>
>  - base: kernel without the patch
>  - patched: patched kernel, the feature is not used
>  - patched-active: patched kernel, 1 minute-period daemon is used for
>    tracking idle memory
>
> For tracking idle memory, idlememstat utility was used:
> https://github.com/locker/idlememstat
>
> testcase            base            patched        patched-active
>
> compiler       537.40 ( 0.00)%   532.26 (-0.96)%   538.31 ( 0.17)%
> compress       305.47 ( 0.00)%   301.08 (-1.44)%   300.71 (-1.56)%
> crypto         284.32 ( 0.00)%   282.21 (-0.74)%   284.87 ( 0.19)%
> derby          411.05 ( 0.00)%   413.44 ( 0.58)%   412.07 ( 0.25)%
> mpegaudio      189.96 ( 0.00)%   190.87 ( 0.48)%   189.42 (-0.28)%
> scimark.large   46.85 ( 0.00)%    46.41 (-0.94)%    47.83 ( 2.09)%
> scimark.small  412.91 ( 0.00)%   415.41 ( 0.61)%   421.17 ( 2.00)%
> serial         204.23 ( 0.00)%   213.46 ( 4.52)%   203.17 (-0.52)%
> startup         36.76 ( 0.00)%    35.49 (-3.45)%    35.64 (-3.05)%
> sunflow        115.34 ( 0.00)%   115.08 (-0.23)%   117.37 ( 1.76)%
> xml            620.55 ( 0.00)%   619.95 (-0.10)%   620.39 (-0.03)%
>
> composite      211.50 ( 0.00)%   211.15 (-0.17)%   211.67 ( 0.08)%
>
> time idlememstat:
>
> 17.20user 65.16system 2:15:23elapsed 1%CPU (0avgtext+0avgdata 8476maxresident)k
> 448inputs+40outputs (1major+36052minor)pagefaults 0swaps
>
> ---- SCRIPT FOR COUNTING IDLE PAGES PER CGROUP ----
> #! /usr/bin/python
> #
>
> import os
> import stat
> import errno
> import struct
>
> CGROUP_MOUNT = "/sys/fs/cgroup/memory"
> BUFSIZE = 8 * 1024  # must be multiple of 8
>
>
> def get_hugepage_size():
>     with open("/proc/meminfo", "r") as f:
>         for s in f:
>             k, v = s.split(":")
>             if k == "Hugepagesize":
>                 return int(v.split()[0]) * 1024
>
> PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")
> HUGEPAGE_SIZE = get_hugepage_size()
>
>
> def set_idle():
>     f = open("/proc/kpageidle", "wb", BUFSIZE)
>     while True:
>         try:
>             f.write(struct.pack("Q", pow(2, 64) - 1))
>         except IOError as err:
>             if err.errno == errno.ENXIO:
>                 break
>             raise
>     f.close()
>
>
> def count_idle():
>     f_flags = open("/proc/kpageflags", "rb", BUFSIZE)
>     f_cgroup = open("/proc/kpagecgroup", "rb", BUFSIZE)
>
>     with open("/proc/kpageidle", "rb", BUFSIZE) as f:
>         while f.read(BUFSIZE): pass  # update idle flag
>
>     idlememsz = {}
>     while True:
>         s1, s2 = f_flags.read(8), f_cgroup.read(8)
>         if not s1 or not s2:
>             break
>
>         flags, = struct.unpack('Q', s1)
>         cgino, = struct.unpack('Q', s2)
>
>         unevictable = (flags >> 18) & 1
>         huge = (flags >> 22) & 1
>         idle = (flags >> 25) & 1
>
>         if idle and not unevictable:
>             idlememsz[cgino] = idlememsz.get(cgino, 0) + \
>                 (HUGEPAGE_SIZE if huge else PAGE_SIZE)
>
>     f_flags.close()
>     f_cgroup.close()
>     return idlememsz
>
>
> if __name__ == "__main__":
>     print "Setting the idle flag for each page..."
>     set_idle()
>
>     raw_input("Wait until the workload accesses its working set, "
>               "then press Enter")
>
>     print "Counting idle pages..."
>     idlememsz = count_idle()
>
>     for dir, subdirs, files in os.walk(CGROUP_MOUNT):
>         ino = os.stat(dir)[stat.ST_INO]
>         print dir + ": " + str(idlememsz.get(ino, 0) / 1024) + " kB"
> ---- END SCRIPT ----
>
> Comments are more than welcome.
>
> Thanks,
>
> Vladimir Davydov (8):
>   memcg: add page_cgroup_ino helper
>   hwpoison: use page_cgroup_ino for filtering by memcg
>   memcg: zap try_get_mem_cgroup_from_page
>   proc: add kpagecgroup file
>   mmu-notifier: add clear_young callback
>   proc: add kpageidle file
>   proc: export idle flag via kpageflags
>   proc: add cond_resched to /proc/kpage* read/write loop
>
>  Documentation/vm/pagemap.txt           |  22 ++-
>  fs/proc/page.c                         | 282 +++++++++++++++++++++++++++++++++
>  fs/proc/task_mmu.c                     |   4 +-
>  include/linux/memcontrol.h             |  10 +-
>  include/linux/mm.h                     |  98 ++++++++++++
>  include/linux/mmu_notifier.h           |  44 +++++
>  include/linux/page-flags.h             |  11 ++
>  include/linux/page_ext.h               |   4 +
>  include/uapi/linux/kernel-page-flags.h |   1 +
>  mm/Kconfig                             |  12 ++
>  mm/debug.c                             |   4 +
>  mm/huge_memory.c                       |  11 +-
>  mm/hwpoison-inject.c                   |   5 +-
>  mm/memcontrol.c                        |  71 ++++-----
>  mm/memory-failure.c                    |  16 +-
>  mm/migrate.c                           |   5 +
>  mm/mmu_notifier.c                      |  17 ++
>  mm/page_ext.c                          |   3 +
>  mm/rmap.c                              |   5 +
>  mm/swap.c                              |   2 +
>  virt/kvm/kvm_main.c                    |  18 +++
>  21 files changed, 579 insertions(+), 66 deletions(-)
>
> --
> 2.1.4
>
>


-- 
Andres Lagar-Cavilla | Google Kernel Team | andreslc@google.com


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-19 12:31 [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
                   ` (9 preceding siblings ...)
  2015-07-21 21:39 ` Andres Lagar-Cavilla
@ 2015-07-21 23:34 ` Andrew Morton
  2015-07-22 16:23   ` Vladimir Davydov
  2015-07-27 19:18   ` Kees Cook
  2015-07-29 12:36 ` Michal Hocko
  11 siblings, 2 replies; 57+ messages in thread
From: Andrew Morton @ 2015-07-21 23:34 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel, Kees Cook

On Sun, 19 Jul 2015 15:31:09 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:

> Hi,
> 
> This patch set introduces a new user API for tracking user memory pages
> that have not been used for a given period of time. The purpose of this
> is to provide the userspace with the means of tracking a workload's
> working set, i.e. the set of pages that are actively used by the
> workload. Knowing the working set size can be useful for partitioning
> the system more efficiently, e.g. by tuning memory cgroup limits
> appropriately, or for job placement within a compute cluster.
> 
> It is based on top of v4.2-rc2-mmotm-2015-07-15-16-46
> It applies without conflicts to v4.2-rc2-mmotm-2015-07-17-16-04 as well
> 
> ---- USE CASES ----
> 
> The unified cgroup hierarchy has memory.low and memory.high knobs, which
> are defined as the low and high boundaries for the workload working set
> size. However, the working set size of a workload may be unknown or
> change in time. With this patch set, one can periodically estimate the
> amount of memory unused by each cgroup and tune their memory.low and
> memory.high parameters accordingly, therefore optimizing the overall
> memory utilization.
> 
> Another use case is balancing workloads within a compute cluster.
> Knowing how much memory is not really used by a workload unit may help
> take a more optimal decision when considering migrating the unit to
> another node within the cluster.
> 
> Also, as noted by Minchan, this would be useful for per-process reclaim
> (https://lwn.net/Articles/545668/). With idle tracking, we could reclaim idle
> pages only by smart user memory manager.
> 
> ---- USER API ----
> 
> The user API consists of two new proc files:
> 
>  * /proc/kpageidle.  This file implements a bitmap where each bit corresponds
>    to a page, indexed by PFN.

What are the bit mappings?  If I read the first byte of /proc/kpageidle
I get PFN #0 in bit zero of that byte?  And the second byte of
/proc/kpageidle contains PFN #8 in its LSB, etc?

Maybe this is covered in the documentation file.
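
(From a quick look at kpageidle_read() in patch 6, I'd guess it is
little-endian 64-bit words, i.e. something like

	offset = pfn / 64 * 8;	/* file offset of the u64 covering pfn */
	bit    = pfn % 64;	/* bit within that little-endian u64    */

but that's exactly the sort of thing the docs should spell out.)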

> When the bit is set, the corresponding page is
>    idle. A page is considered idle if it has not been accessed since it was
>    marked idle.

Perhaps we can spell out in some detail what "accessed" means?  I see
you've hooked into mark_page_accessed(), so a read from disk is an
access.  What about a write to disk?  And what about a page being
accessed from some random device (could hook into get_user_pages()?) Is
getting written to swap an access?  When a dirty pagecache page is
written out by kswapd or direct reclaim?

This also should be in the permanent documentation.

> To mark a page idle one should set the bit corresponding to the
>    page by writing to the file. A value written to the file is OR-ed with the
>    current bitmap value. Only user memory pages can be marked idle, for other
>    page types input is silently ignored. Writing to this file beyond max PFN
>    results in the ENXIO error. Only available when CONFIG_IDLE_PAGE_TRACKING is
>    set.
> 
>    This file can be used to estimate the amount of pages that are not
>    used by a particular workload as follows:
> 
>    1. mark all pages of interest idle by setting corresponding bits in the
>       /proc/kpageidle bitmap
>    2. wait until the workload accesses its working set
>    3. read /proc/kpageidle and count the number of bits set

Security implications.  This interface could be used to learn about a
sensitive application by poking data at it and then observing its
memory access patterns.  Perhaps this is why the proc files are
root-only (which I assume is sufficient).  Some words here about the
security side of things and the reasoning behind the chosen permissions
would be good to have.

>  * /proc/kpagecgroup.  This file contains a 64-bit inode number of the
>    memory cgroup each page is charged to, indexed by PFN.

Actually "closest online ancestor".  This also should be in the
interface documentation.

> Only available when CONFIG_MEMCG is set.

CONFIG_MEMCG and CONFIG_IDLE_PAGE_TRACKING I assume?

> 
>    This file can be used to find all pages (including unmapped file
>    pages) accounted to a particular cgroup. Using /proc/kpageidle, one
>    can then estimate the cgroup working set size.
> 
> For an example of using these files for estimating the amount of unused
> memory pages per each memory cgroup, please see the script attached
> below.

Why were these put in /proc anyway?  Rather than under /sys/fs/cgroup
somewhere?  Presumably because /proc/kpageidle is useful in non-memcg
setups.

> ---- PERFORMANCE EVALUATION ----

"^___" means "end of changelog".  Perhaps that should have been
"^---\n" - unclear.

> Documentation/vm/pagemap.txt           |  22 ++-

I think we'll need quite a lot more than this to fully describe the
interface?


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 1/8] memcg: add page_cgroup_ino helper
  2015-07-19 12:31 ` [PATCH -mm v9 1/8] memcg: add page_cgroup_ino helper Vladimir Davydov
@ 2015-07-21 23:34   ` Andrew Morton
  2015-07-22  9:21     ` Vladimir Davydov
  0 siblings, 1 reply; 57+ messages in thread
From: Andrew Morton @ 2015-07-21 23:34 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Sun, 19 Jul 2015 15:31:10 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:

> This function returns the inode number of the closest online ancestor of
> the memory cgroup a page is charged to. It is required for exporting
> information about which page is charged to which cgroup to userspace,
> which will be introduced by a following patch.
> 
> ...
>

> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -441,6 +441,29 @@ struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page)
>  	return &memcg->css;
>  }
>  
> +/**
> + * page_cgroup_ino - return inode number of the memcg a page is charged to
> + * @page: the page
> + *
> + * Look up the closest online ancestor of the memory cgroup @page is charged to
> + * and return its inode number or 0 if @page is not charged to any cgroup. It
> + * is safe to call this function without holding a reference to @page.
> + */
> +unsigned long page_cgroup_ino(struct page *page)

Shouldn't it return an ino_t?

> +{
> +	struct mem_cgroup *memcg;
> +	unsigned long ino = 0;
> +
> +	rcu_read_lock();
> +	memcg = READ_ONCE(page->mem_cgroup);
> +	while (memcg && !(memcg->css.flags & CSS_ONLINE))
> +		memcg = parent_mem_cgroup(memcg);
> +	if (memcg)
> +		ino = cgroup_ino(memcg->css.cgroup);
> +	rcu_read_unlock();
> +	return ino;
> +}

The function is racy, isn't it?  There's nothing to prevent this inode
from getting torn down and potentially reallocated one nanosecond after
page_cgroup_ino() returns?  If so, it is only safely usable by things
which don't care (such as procfs interfaces) and this should be
documented in some fashion.


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 2/8] hwpoison: use page_cgroup_ino for filtering by memcg
  2015-07-19 12:31 ` [PATCH -mm v9 2/8] hwpoison: use page_cgroup_ino for filtering by memcg Vladimir Davydov
@ 2015-07-21 23:34   ` Andrew Morton
  2015-07-22  9:45     ` Vladimir Davydov
  0 siblings, 1 reply; 57+ messages in thread
From: Andrew Morton @ 2015-07-21 23:34 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Sun, 19 Jul 2015 15:31:11 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:

> Hwpoison allows to filter pages by memory cgroup ino. Currently, it
> calls try_get_mem_cgroup_from_page to obtain the cgroup from a page and
> then its ino using cgroup_ino, but now we have an apter method for that,
> page_cgroup_ino, so use it instead.

I assume "an apter" was supposed to be "a helper"?

> --- a/mm/hwpoison-inject.c
> +++ b/mm/hwpoison-inject.c
> @@ -45,12 +45,9 @@ static int hwpoison_inject(void *data, u64 val)
>  	/*
>  	 * do a racy check with elevated page count, to make sure PG_hwpoison
>  	 * will only be set for the targeted owner (or on a free page).
> -	 * We temporarily take page lock for try_get_mem_cgroup_from_page().
>  	 * memory_failure() will redo the check reliably inside page lock.
>  	 */
> -	lock_page(hpage);
>  	err = hwpoison_filter(hpage);
> -	unlock_page(hpage);
>  	if (err)
>  		goto put_out;
>  
> @@ -126,7 +123,7 @@ static int pfn_inject_init(void)
>  	if (!dentry)
>  		goto fail;
>  
> -#ifdef CONFIG_MEMCG_SWAP
> +#ifdef CONFIG_MEMCG
>  	dentry = debugfs_create_u64("corrupt-filter-memcg", 0600,
>  				    hwpoison_dir, &hwpoison_filter_memcg);
>  	if (!dentry)

Confused.  We're changing the conditions under which this debugfs file
is created.  Is this a typo or some unchangelogged thing or what?



^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 4/8] proc: add kpagecgroup file
  2015-07-19 12:31 ` [PATCH -mm v9 4/8] proc: add kpagecgroup file Vladimir Davydov
@ 2015-07-21 23:34   ` Andrew Morton
  2015-07-22 10:33     ` Vladimir Davydov
  0 siblings, 1 reply; 57+ messages in thread
From: Andrew Morton @ 2015-07-21 23:34 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Sun, 19 Jul 2015 15:31:13 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:

> /proc/kpagecgroup contains a 64-bit inode number of the memory cgroup
> each page is charged to, indexed by PFN. Having this information is
> useful for estimating a cgroup working set size.
> 
> The file is present if CONFIG_PROC_PAGE_MONITOR && CONFIG_MEMCG.
>
> ...
>
> @@ -225,10 +226,62 @@ static const struct file_operations proc_kpageflags_operations = {
>  	.read = kpageflags_read,
>  };
>  
> +#ifdef CONFIG_MEMCG
> +static ssize_t kpagecgroup_read(struct file *file, char __user *buf,
> +				size_t count, loff_t *ppos)
> +{
> +	u64 __user *out = (u64 __user *)buf;
> +	struct page *ppage;
> +	unsigned long src = *ppos;
> +	unsigned long pfn;
> +	ssize_t ret = 0;
> +	u64 ino;
> +
> +	pfn = src / KPMSIZE;
> +	count = min_t(unsigned long, count, (max_pfn * KPMSIZE) - src);
> +	if (src & KPMMASK || count & KPMMASK)
> +		return -EINVAL;

The user-facing documentation should explain that reads must be
performed in multiple-of-8 sizes.

> +	while (count > 0) {
> +		if (pfn_valid(pfn))
> +			ppage = pfn_to_page(pfn);
> +		else
> +			ppage = NULL;
> +
> +		if (ppage)
> +			ino = page_cgroup_ino(ppage);
> +		else
> +			ino = 0;
> +
> +		if (put_user(ino, out)) {
> +			ret = -EFAULT;

Here we do the usual procfs violation of read() behaviour.  read()
normally only returns an error if it read nothing.  This code will
transfer a megabyte then return -EFAULT so userspace doesn't know that
it got that megabyte.

That's easy to fix, but procfs files do this all over the place anyway :(
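
The usual shape of the fix, for what it's worth (just a sketch):

	*ppos += (char __user *)out - buf;
	if (out != (u64 __user *)buf)
		ret = (char __user *)out - buf;	/* short read, not -EFAULT */
	return ret;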

> +			break;
> +		}
> +
> +		pfn++;
> +		out++;
> +		count -= KPMSIZE;
> +	}
> +
> +	*ppos += (char __user *)out - buf;
> +	if (!ret)
> +		ret = (char __user *)out - buf;
> +	return ret;
> +}
> +


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 6/8] proc: add kpageidle file
  2015-07-19 12:31 ` [PATCH -mm v9 6/8] proc: add kpageidle file Vladimir Davydov
@ 2015-07-21 23:34   ` Andrew Morton
  2015-07-22 15:20     ` Vladimir Davydov
  2015-07-24 14:08   ` Paul Gortmaker
  1 sibling, 1 reply; 57+ messages in thread
From: Andrew Morton @ 2015-07-21 23:34 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Sun, 19 Jul 2015 15:31:15 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:

> Knowing the portion of memory that is not used by a certain application
> or memory cgroup (idle memory) can be useful for partitioning the system
> efficiently, e.g. by setting memory cgroup limits appropriately.
> Currently, the only means to estimate the amount of idle memory provided
> by the kernel is /proc/PID/{clear_refs,smaps}: the user can clear the
> access bit for all pages mapped to a particular process by writing 1 to
> clear_refs, wait for some time, and then count smaps:Referenced.
> However, this method has two serious shortcomings:
> 
>  - it does not count unmapped file pages
>  - it affects the reclaimer logic
> 
> To overcome these drawbacks, this patch introduces two new page flags,
> Idle and Young, and a new proc file, /proc/kpageidle. A page's Idle flag
> can only be set from userspace by setting bit in /proc/kpageidle at the
> offset corresponding to the page, and it is cleared whenever the page is
> accessed either through page tables (it is cleared in page_referenced()
> in this case) or using the read(2) system call (mark_page_accessed()).
> Thus by setting the Idle flag for pages of a particular workload, which
> can be found e.g. by reading /proc/PID/pagemap, waiting for some time to
> let the workload access its working set, and then reading the kpageidle
> file, one can estimate the amount of pages that are not used by the
> workload.
> 
> The Young page flag is used to avoid interference with the memory
> reclaimer. A page's Young flag is set whenever the Access bit of a page
> table entry pointing to the page is cleared by writing to kpageidle. If
> page_referenced() is called on a Young page, it will add 1 to its return
> value, therefore concealing the fact that the Access bit was cleared.
> 
> Note, since there is no room for extra page flags on 32 bit, this
> feature uses extended page flags when compiled on 32 bit.
> 
> ...
>
>
> ...
>
> +static void kpageidle_clear_pte_refs(struct page *page)
> +{
> +	struct rmap_walk_control rwc = {
> +		.rmap_one = kpageidle_clear_pte_refs_one,
> +		.anon_lock = page_lock_anon_vma_read,
> +	};

I think this can be static const, since `arg' is unused?  That would
save some cycles and stack.

> +	bool need_lock;
> +
> +	if (!page_mapped(page) ||
> +	    !page_rmapping(page))
> +		return;
> +
> +	need_lock = !PageAnon(page) || PageKsm(page);
> +	if (need_lock && !trylock_page(page))

Oh.  So the feature is a bit unreliable.

I'm not immediately seeing anything which would prevent us from using
plain old lock_page() here.  What's going on?

> +		return;
> +
> +	rmap_walk(page, &rwc);
> +
> +	if (need_lock)
> +		unlock_page(page);
> +}
> +
> +static ssize_t kpageidle_read(struct file *file, char __user *buf,
> +			      size_t count, loff_t *ppos)
> +{
> +	u64 __user *out = (u64 __user *)buf;
> +	struct page *page;
> +	unsigned long pfn, end_pfn;
> +	ssize_t ret = 0;
> +	u64 idle_bitmap = 0;
> +	int bit;
> +
> +	if (*ppos & KPMMASK || count & KPMMASK)
> +		return -EINVAL;

Interface requires 8-byte aligned offset and size.

> +	pfn = *ppos * BITS_PER_BYTE;
> +	if (pfn >= max_pfn)
> +		return 0;
> +
> +	end_pfn = pfn + count * BITS_PER_BYTE;
> +	if (end_pfn > max_pfn)
> +		end_pfn = ALIGN(max_pfn, KPMBITS);

So we lose up to 63 pages.  Presumably max_pfn is well enough aligned
for this to not matter, dunno.

> +	for (; pfn < end_pfn; pfn++) {
> +		bit = pfn % KPMBITS;
> +		page = kpageidle_get_page(pfn);
> +		if (page) {
> +			if (page_is_idle(page)) {
> +				/*
> +				 * The page might have been referenced via a
> +				 * pte, in which case it is not idle. Clear
> +				 * refs and recheck.
> +				 */
> +				kpageidle_clear_pte_refs(page);
> +				if (page_is_idle(page))
> +					idle_bitmap |= 1ULL << bit;

I don't understand what's going on here.  More details, please?

> +			}
> +			put_page(page);
> +		}
> +		if (bit == KPMBITS - 1) {
> +			if (put_user(idle_bitmap, out)) {
> +				ret = -EFAULT;
> +				break;
> +			}
> +			idle_bitmap = 0;
> +			out++;
> +		}
> +	}
> +
> +	*ppos += (char __user *)out - buf;
> +	if (!ret)
> +		ret = (char __user *)out - buf;
> +	return ret;
> +}
> +
> +static ssize_t kpageidle_write(struct file *file, const char __user *buf,
> +			       size_t count, loff_t *ppos)
> +{
> +	const u64 __user *in = (const u64 __user *)buf;
> +	struct page *page;
> +	unsigned long pfn, end_pfn;
> +	ssize_t ret = 0;
> +	u64 idle_bitmap = 0;
> +	int bit;
> +
> +	if (*ppos & KPMMASK || count & KPMMASK)
> +		return -EINVAL;
> +
> +	pfn = *ppos * BITS_PER_BYTE;
> +	if (pfn >= max_pfn)
> +		return -ENXIO;
> +
> +	end_pfn = pfn + count * BITS_PER_BYTE;
> +	if (end_pfn > max_pfn)
> +		end_pfn = ALIGN(max_pfn, KPMBITS);
> +
> +	for (; pfn < end_pfn; pfn++) {
> +		bit = pfn % KPMBITS;
> +		if (bit == 0) {
> +			if (get_user(idle_bitmap, in)) {
> +				ret = -EFAULT;
> +				break;
> +			}
> +			in++;
> +		}
> +		if (idle_bitmap >> bit & 1) {

Hate it when I have to go look up a C precedence table.  This is

		if ((idle_bitmap >> bit) & 1) {

> +			page = kpageidle_get_page(pfn);
> +			if (page) {
> +				kpageidle_clear_pte_refs(page);
> +				set_page_idle(page);
> +				put_page(page);
> +			}
> +		}
> +	}
> +
> +	*ppos += (const char __user *)in - buf;
> +	if (!ret)
> +		ret = (const char __user *)in - buf;
> +	return ret;
> +}
> +
>
> ...
>


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 7/8] proc: export idle flag via kpageflags
  2015-07-19 12:31 ` [PATCH -mm v9 7/8] proc: export idle flag via kpageflags Vladimir Davydov
@ 2015-07-21 23:35   ` Andrew Morton
  2015-07-22 16:25     ` Vladimir Davydov
  0 siblings, 1 reply; 57+ messages in thread
From: Andrew Morton @ 2015-07-21 23:35 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Sun, 19 Jul 2015 15:31:16 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:

> As noted by Minchan, a benefit of reading idle flag from
> /proc/kpageflags is that one can easily filter dirty and/or unevictable
> pages while estimating the size of unused memory.
> 
> Note that idle flag read from /proc/kpageflags may be stale in case the
> page was accessed via a PTE, because it would be too costly to iterate
> over all page mappings on each /proc/kpageflags read to provide an
> up-to-date value. To make sure the flag is up-to-date one has to read
> /proc/kpageidle first.

Is there any value in teaching the regular old page scanner to update
these flags?  If it's doing an rmap scan anyway...


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 1/8] memcg: add page_cgroup_ino helper
  2015-07-21 23:34   ` Andrew Morton
@ 2015-07-22  9:21     ` Vladimir Davydov
  0 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-22  9:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Tue, Jul 21, 2015 at 04:34:07PM -0700, Andrew Morton wrote:
> On Sun, 19 Jul 2015 15:31:10 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:
> 
> > This function returns the inode number of the closest online ancestor of
> > the memory cgroup a page is charged to. It is required for exporting
> > information about which page is charged to which cgroup to userspace,
> > which will be introduced by a following patch.
> > 
> > ...
> >
> 
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -441,6 +441,29 @@ struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page)
> >  	return &memcg->css;
> >  }
> >  
> > +/**
> > + * page_cgroup_ino - return inode number of the memcg a page is charged to
> > + * @page: the page
> > + *
> > + * Look up the closest online ancestor of the memory cgroup @page is charged to
> > + * and return its inode number or 0 if @page is not charged to any cgroup. It
> > + * is safe to call this function without holding a reference to @page.
> > + */
> > +unsigned long page_cgroup_ino(struct page *page)
> 
> Shouldn't it return an ino_t?

Yep, thanks.

> 
> > +{
> > +	struct mem_cgroup *memcg;
> > +	unsigned long ino = 0;
> > +
> > +	rcu_read_lock();
> > +	memcg = READ_ONCE(page->mem_cgroup);
> > +	while (memcg && !(memcg->css.flags & CSS_ONLINE))
> > +		memcg = parent_mem_cgroup(memcg);
> > +	if (memcg)
> > +		ino = cgroup_ino(memcg->css.cgroup);
> > +	rcu_read_unlock();
> > +	return ino;
> > +}
> 
> The function is racy, isn't it?  There's nothing to prevent this inode
> from getting torn down and potentially reallocated one nanosecond after
> page_cgroup_ino() returns?  If so, it is only safely usable by things
> which don't care (such as procfs interfaces) and this should be
> documented in some fashion.

Agree. Here goes the incremental patch:
---
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d644aadfdd0d..ad800e62cb7a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -343,7 +343,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
 }
 
 struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page);
-unsigned long page_cgroup_ino(struct page *page);
+ino_t page_cgroup_ino(struct page *page);
 
 static inline bool mem_cgroup_disabled(void)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b9c76a0906f9..bd30638c2a95 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -448,8 +448,13 @@ struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page)
  * Look up the closest online ancestor of the memory cgroup @page is charged to
  * and return its inode number or 0 if @page is not charged to any cgroup. It
  * is safe to call this function without holding a reference to @page.
+ *
+ * Note, this function is inherently racy, because there is nothing to prevent
+ * the cgroup inode from getting torn down and potentially reallocated a moment
+ * after page_cgroup_ino() returns, so it only should be used by callers that
+ * do not care (such as procfs interfaces).
  */
-unsigned long page_cgroup_ino(struct page *page)
+ino_t page_cgroup_ino(struct page *page)
 {
 	struct mem_cgroup *memcg;
 	unsigned long ino = 0;


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 2/8] hwpoison: use page_cgroup_ino for filtering by memcg
  2015-07-21 23:34   ` Andrew Morton
@ 2015-07-22  9:45     ` Vladimir Davydov
  0 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-22  9:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Tue, Jul 21, 2015 at 04:34:12PM -0700, Andrew Morton wrote:
> On Sun, 19 Jul 2015 15:31:11 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:
> 
> > Hwpoison allows to filter pages by memory cgroup ino. Currently, it
> > calls try_get_mem_cgroup_from_page to obtain the cgroup from a page and
> > then its ino using cgroup_ino, but now we have an apter method for that,
> > page_cgroup_ino, so use it instead.
> 
> I assume "an apter" was supposed to be "a helper"?

Yes, sounds better :-)

> 
> > --- a/mm/hwpoison-inject.c
> > +++ b/mm/hwpoison-inject.c
> > @@ -45,12 +45,9 @@ static int hwpoison_inject(void *data, u64 val)
> >  	/*
> >  	 * do a racy check with elevated page count, to make sure PG_hwpoison
> >  	 * will only be set for the targeted owner (or on a free page).
> > -	 * We temporarily take page lock for try_get_mem_cgroup_from_page().
> >  	 * memory_failure() will redo the check reliably inside page lock.
> >  	 */
> > -	lock_page(hpage);
> >  	err = hwpoison_filter(hpage);
> > -	unlock_page(hpage);
> >  	if (err)
> >  		goto put_out;
> >  
> > @@ -126,7 +123,7 @@ static int pfn_inject_init(void)
> >  	if (!dentry)
> >  		goto fail;
> >  
> > -#ifdef CONFIG_MEMCG_SWAP
> > +#ifdef CONFIG_MEMCG
> >  	dentry = debugfs_create_u64("corrupt-filter-memcg", 0600,
> >  				    hwpoison_dir, &hwpoison_filter_memcg);
> >  	if (!dentry)
> 
> Confused.  We're changing the conditions under which this debugfs file
> is created.  Is this a typo or some unchangelogged thing or what?

This is an unchangelogged cleanup. In fact, there had been a comment
regarding it before v6, but then it got lost. Sorry about that. The
commit message should look like this:

"""
Hwpoison allows filtering pages by memory cgroup ino. Currently, it
calls try_get_mem_cgroup_from_page to obtain the cgroup from a page and
then its ino using cgroup_ino, but now we have a helper method for that,
page_cgroup_ino, so use it instead.

This patch also loosens the hwpoison memcg filter dependency rules: it
makes the filter depend on CONFIG_MEMCG instead of CONFIG_MEMCG_SWAP,
because the hwpoison memcg filter does not require anything (nor did it
ever) from the CONFIG_MEMCG_SWAP side.
"""

Or we can simply revert this cleanup if you don't like it:
---
diff --git a/mm/hwpoison-inject.c b/mm/hwpoison-inject.c
index 5015679014c1..1cd105ee5a7b 100644
--- a/mm/hwpoison-inject.c
+++ b/mm/hwpoison-inject.c
@@ -123,7 +123,7 @@ static int pfn_inject_init(void)
 	if (!dentry)
 		goto fail;
 
-#ifdef CONFIG_MEMCG
+#ifdef CONFIG_MEMCG_SWAP
 	dentry = debugfs_create_u64("corrupt-filter-memcg", 0600,
 				    hwpoison_dir, &hwpoison_filter_memcg);
 	if (!dentry)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 97005396a507..5ea7d8c760fa 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -130,7 +130,7 @@ static int hwpoison_filter_flags(struct page *p)
  * can only guarantee that the page either belongs to the memcg tasks, or is
  * a freed page.
  */
-#ifdef CONFIG_MEMCG
+#ifdef CONFIG_MEMCG_SWAP
 u64 hwpoison_filter_memcg;
 EXPORT_SYMBOL_GPL(hwpoison_filter_memcg);
 static int hwpoison_filter_task(struct page *p)


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 4/8] proc: add kpagecgroup file
  2015-07-21 23:34   ` Andrew Morton
@ 2015-07-22 10:33     ` Vladimir Davydov
  0 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-22 10:33 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Tue, Jul 21, 2015 at 04:34:33PM -0700, Andrew Morton wrote:
> On Sun, 19 Jul 2015 15:31:13 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:
> 
> > /proc/kpagecgroup contains a 64-bit inode number of the memory cgroup
> > each page is charged to, indexed by PFN. Having this information is
> > useful for estimating a cgroup working set size.
> > 
> > The file is present if CONFIG_PROC_PAGE_MONITOR && CONFIG_MEMCG.
> >
> > ...
> >
> > @@ -225,10 +226,62 @@ static const struct file_operations proc_kpageflags_operations = {
> >  	.read = kpageflags_read,
> >  };
> >  
> > +#ifdef CONFIG_MEMCG
> > +static ssize_t kpagecgroup_read(struct file *file, char __user *buf,
> > +				size_t count, loff_t *ppos)
> > +{
> > +	u64 __user *out = (u64 __user *)buf;
> > +	struct page *ppage;
> > +	unsigned long src = *ppos;
> > +	unsigned long pfn;
> > +	ssize_t ret = 0;
> > +	u64 ino;
> > +
> > +	pfn = src / KPMSIZE;
> > +	count = min_t(unsigned long, count, (max_pfn * KPMSIZE) - src);
> > +	if (src & KPMMASK || count & KPMMASK)
> > +		return -EINVAL;
> 
> The user-facing documentation should explain that reads must be
> performed in multiple-of-8 sizes.

It does. It's at the end of Documentation/vm/pagemap.txt:

: Other notes:
: 
: Reading from any of the files will return -EINVAL if you are not starting
: the read on an 8-byte boundary (e.g., if you sought an odd number of bytes
: into the file), or if the size of the read is not a multiple of 8 bytes.
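
For reference, a conforming userspace read looks like this (illustrative
sketch; the pfn is made up and error handling is minimal):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        uint64_t ino;
        unsigned long pfn = 12345;      /* example pfn */
        int fd = open("/proc/kpagecgroup", O_RDONLY);

        if (fd < 0)
                return 1;
        /* both the offset and the size must be multiples of 8 */
        if (pread(fd, &ino, sizeof(ino), pfn * sizeof(ino)) != sizeof(ino))
                return 1;
        printf("pfn %lu -> memcg ino %llu\n", pfn, (unsigned long long)ino);
        close(fd);
        return 0;
}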

> 
> > +	while (count > 0) {
> > +		if (pfn_valid(pfn))
> > +			ppage = pfn_to_page(pfn);
> > +		else
> > +			ppage = NULL;
> > +
> > +		if (ppage)
> > +			ino = page_cgroup_ino(ppage);
> > +		else
> > +			ino = 0;
> > +
> > +		if (put_user(ino, out)) {
> > +			ret = -EFAULT;
> 
> Here we do the usual procfs violation of read() behaviour.  read()
> normally only returns an error if it read nothing.  This code will
> transfer a megabyte then return -EFAULT so userspace doesn't know that
> it got that megabyte.

Yeah, that's how it works. I did it deliberately so that
/proc/kpagecgroup works exactly like /proc/kpageflags and
/proc/kpagecount.

FWIW, the man page I have on my system already warns about this
peculiarity of read(2):

: On error, -1 is returned, and errno is set appropriately. In this
: case, it is left unspecified whether the file position (if any)
: changes.
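
If we ever decide to make it follow the usual contract, the change at
the end of kpagecgroup_read() would be on the order of (untested
sketch):

        /* report a short read if anything was copied before the fault */
        if (ret == -EFAULT && out != (u64 __user *)buf)
                ret = 0;
        *ppos += (char __user *)out - buf;
        if (!ret)
                ret = (char __user *)out - buf;
        return ret;

i.e. return the bytes actually transferred and let the next read(2)
call report the error.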

> 
> That's easy to fix, but procfs files do this all over the place anyway :(
> 
> > +			break;
> > +		}
> > +
> > +		pfn++;
> > +		out++;
> > +		count -= KPMSIZE;
> > +	}
> > +
> > +	*ppos += (char __user *)out - buf;
> > +	if (!ret)
> > +		ret = (char __user *)out - buf;
> > +	return ret;
> > +}
> > +
> 


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 6/8] proc: add kpageidle file
  2015-07-21 23:34   ` Andrew Morton
@ 2015-07-22 15:20     ` Vladimir Davydov
  0 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-22 15:20 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Tue, Jul 21, 2015 at 04:34:52PM -0700, Andrew Morton wrote:
> On Sun, 19 Jul 2015 15:31:15 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:
> 
> > Knowing the portion of memory that is not used by a certain application
> > or memory cgroup (idle memory) can be useful for partitioning the system
> > efficiently, e.g. by setting memory cgroup limits appropriately.
> > Currently, the only means to estimate the amount of idle memory provided
> > by the kernel is /proc/PID/{clear_refs,smaps}: the user can clear the
> > access bit for all pages mapped to a particular process by writing 1 to
> > clear_refs, wait for some time, and then count smaps:Referenced.
> > However, this method has two serious shortcomings:
> > 
> >  - it does not count unmapped file pages
> >  - it affects the reclaimer logic
> > 
> > To overcome these drawbacks, this patch introduces two new page flags,
> > Idle and Young, and a new proc file, /proc/kpageidle. A page's Idle flag
> > can only be set from userspace by setting bit in /proc/kpageidle at the
> > offset corresponding to the page, and it is cleared whenever the page is
> > accessed either through page tables (it is cleared in page_referenced()
> > in this case) or using the read(2) system call (mark_page_accessed()).
> > Thus by setting the Idle flag for pages of a particular workload, which
> > can be found e.g. by reading /proc/PID/pagemap, waiting for some time to
> > let the workload access its working set, and then reading the kpageidle
> > file, one can estimate the amount of pages that are not used by the
> > workload.
> > 
> > The Young page flag is used to avoid interference with the memory
> > reclaimer. A page's Young flag is set whenever the Access bit of a page
> > table entry pointing to the page is cleared by writing to kpageidle. If
> > page_referenced() is called on a Young page, it will add 1 to its return
> > value, therefore concealing the fact that the Access bit was cleared.
> > 
> > Note, since there is no room for extra page flags on 32 bit, this
> > feature uses extended page flags when compiled on 32 bit.
> > 
> > ...
> >
> >
> > ...
> >
> > +static void kpageidle_clear_pte_refs(struct page *page)
> > +{
> > +	struct rmap_walk_control rwc = {
> > +		.rmap_one = kpageidle_clear_pte_refs_one,
> > +		.anon_lock = page_lock_anon_vma_read,
> > +	};
> 
> I think this can be static const, since `arg' is unused?  That would
> save some cycles and stack.

Good catch, thanks.

> 
> > +	bool need_lock;
> > +
> > +	if (!page_mapped(page) ||
> > +	    !page_rmapping(page))
> > +		return;
> > +
> > +	need_lock = !PageAnon(page) || PageKsm(page);
> > +	if (need_lock && !trylock_page(page))
> 
> Oh.  So the feature is a bit unreliable.
> 
> I'm not immediately seeing anything which would prevent us from using
> plain old lock_page() here.  What's going on?

A page may be locked for quite a long period of time, e.g.
truncate_inode_pages_range() may wait, with the page locked, until
writeback of the page finishes. Instead of stalling the kpageidle scan,
we'd better move on to the next page. Of course, the result won't be
100% accurate. In fact, it isn't accurate anyway: we skip isolated
pages, and the scan itself is not instant, so the system usage pattern
may change while we are performing it. This new API is only supposed to
give a good estimate of the memory usage pattern, which can be used as
a hint for adjusting the system configuration to improve performance.

> 
> > +		return;
> > +
> > +	rmap_walk(page, &rwc);
> > +
> > +	if (need_lock)
> > +		unlock_page(page);
> > +}
> > +
> > +static ssize_t kpageidle_read(struct file *file, char __user *buf,
> > +			      size_t count, loff_t *ppos)
> > +{
> > +	u64 __user *out = (u64 __user *)buf;
> > +	struct page *page;
> > +	unsigned long pfn, end_pfn;
> > +	ssize_t ret = 0;
> > +	u64 idle_bitmap = 0;
> > +	int bit;
> > +
> > +	if (*ppos & KPMMASK || count & KPMMASK)
> > +		return -EINVAL;
> 
> Interface requires 8-byte aligned offset and size.
> 
> > +	pfn = *ppos * BITS_PER_BYTE;
> > +	if (pfn >= max_pfn)
> > +		return 0;
> > +
> > +	end_pfn = pfn + count * BITS_PER_BYTE;
> > +	if (end_pfn > max_pfn)
> > +		end_pfn = ALIGN(max_pfn, KPMBITS);
> 
> So we lose up to 63 pages.  Presumably max_pfn is well enough aligned
> for this to not matter, dunno.

ALIGN(x, a) resolves to ((x + a - 1) & ~(a - 1)), which is >= x, so we
shouldn't lose anything.
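
Concretely (a worked example, with KPMBITS == 64):

        /*
         * ALIGN(x, a) expands to (((x) + (a) - 1) & ~((a) - 1)).
         * E.g. with max_pfn == 1000, ALIGN(1000, 64) == 1024, so end_pfn
         * also covers pfns 1000..1023; kpageidle_get_page() returns NULL
         * for them (pfn_valid() normally fails past max_pfn), and their
         * bits are simply reported as zero.
         */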

> 
> > +	for (; pfn < end_pfn; pfn++) {
> > +		bit = pfn % KPMBITS;
> > +		page = kpageidle_get_page(pfn);
> > +		if (page) {
> > +			if (page_is_idle(page)) {
> > +				/*
> > +				 * The page might have been referenced via a
> > +				 * pte, in which case it is not idle. Clear
> > +				 * refs and recheck.
> > +				 */
> > +				kpageidle_clear_pte_refs(page);
> > +				if (page_is_idle(page))
> > +					idle_bitmap |= 1ULL << bit;
> 
> I don't understand what's going on here.  More details, please?

The output is a bitmap, stored as an array of 8-byte elements with
native byte order within each word: if the page at pfn #i is idle, we
set bit #i%64 of element #i/64 of the array. I'll reflect this in the
documentation.
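
In userspace terms, testing whether a given pfn is idle boils down to
something like this (illustrative sketch; fd refers to an open
/proc/kpageidle):

#include <stdint.h>
#include <unistd.h>

/* returns 1 if the page at pfn is idle, 0 if not, -1 on error */
static int pfn_is_idle(int fd, unsigned long pfn)
{
        uint64_t word;

        /* element #pfn/64 of the array, 8 bytes per element */
        if (pread(fd, &word, sizeof(word), pfn / 64 * 8) != sizeof(word))
                return -1;
        return (word >> (pfn % 64)) & 1;        /* bit #pfn%64 */
}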

> 
> > +			}
> > +			put_page(page);
> > +		}
> > +		if (bit == KPMBITS - 1) {
> > +			if (put_user(idle_bitmap, out)) {
> > +				ret = -EFAULT;
> > +				break;
> > +			}
> > +			idle_bitmap = 0;
> > +			out++;
> > +		}
> > +	}
> > +
> > +	*ppos += (char __user *)out - buf;
> > +	if (!ret)
> > +		ret = (char __user *)out - buf;
> > +	return ret;
> > +}
> > +
> > +static ssize_t kpageidle_write(struct file *file, const char __user *buf,
> > +			       size_t count, loff_t *ppos)
> > +{
> > +	const u64 __user *in = (const u64 __user *)buf;
> > +	struct page *page;
> > +	unsigned long pfn, end_pfn;
> > +	ssize_t ret = 0;
> > +	u64 idle_bitmap = 0;
> > +	int bit;
> > +
> > +	if (*ppos & KPMMASK || count & KPMMASK)
> > +		return -EINVAL;
> > +
> > +	pfn = *ppos * BITS_PER_BYTE;
> > +	if (pfn >= max_pfn)
> > +		return -ENXIO;
> > +
> > +	end_pfn = pfn + count * BITS_PER_BYTE;
> > +	if (end_pfn > max_pfn)
> > +		end_pfn = ALIGN(max_pfn, KPMBITS);
> > +
> > +	for (; pfn < end_pfn; pfn++) {
> > +		bit = pfn % KPMBITS;
> > +		if (bit == 0) {
> > +			if (get_user(idle_bitmap, in)) {
> > +				ret = -EFAULT;
> > +				break;
> > +			}
> > +			in++;
> > +		}
> > +		if (idle_bitmap >> bit & 1) {
> 
> Hate it when I have to go look up a C precedence table.  This is
> 
> 		if ((idle_bitmap >> bit) & 1) {

Fixed.

Here goes the incremental patch with all the fixes:
---
diff --git a/fs/proc/page.c b/fs/proc/page.c
index 7ff7cba8617b..9daa6e92450f 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -362,7 +362,11 @@ static int kpageidle_clear_pte_refs_one(struct page *page,
 
 static void kpageidle_clear_pte_refs(struct page *page)
 {
-	struct rmap_walk_control rwc = {
+	/*
+	 * Since rwc.arg is unused, rwc is effectively immutable, so we
+	 * can make it static const to save some cycles and stack.
+	 */
+	static const struct rmap_walk_control rwc = {
 		.rmap_one = kpageidle_clear_pte_refs_one,
 		.anon_lock = page_lock_anon_vma_read,
 	};
@@ -376,7 +380,7 @@ static void kpageidle_clear_pte_refs(struct page *page)
 	if (need_lock && !trylock_page(page))
 		return;
 
-	rmap_walk(page, &rwc);
+	rmap_walk(page, (struct rmap_walk_control *)&rwc);
 
 	if (need_lock)
 		unlock_page(page);
@@ -466,7 +470,7 @@ static ssize_t kpageidle_write(struct file *file, const char __user *buf,
 			}
 			in++;
 		}
-		if (idle_bitmap >> bit & 1) {
+		if ((idle_bitmap >> bit) & 1) {
 			page = kpageidle_get_page(pfn);
 			if (page) {
 				kpageidle_clear_pte_refs(page);


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-21 23:34 ` Andrew Morton
@ 2015-07-22 16:23   ` Vladimir Davydov
  2015-07-25 16:24     ` Vladimir Davydov
  2015-07-27 19:18   ` Kees Cook
  1 sibling, 1 reply; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-22 16:23 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel, Kees Cook

On Tue, Jul 21, 2015 at 04:34:02PM -0700, Andrew Morton wrote:
> On Sun, 19 Jul 2015 15:31:09 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:
> 
> > Hi,
> > 
> > This patch set introduces a new user API for tracking user memory pages
> > that have not been used for a given period of time. The purpose of this
> > is to provide the userspace with the means of tracking a workload's
> > working set, i.e. the set of pages that are actively used by the
> > workload. Knowing the working set size can be useful for partitioning
> > the system more efficiently, e.g. by tuning memory cgroup limits
> > appropriately, or for job placement within a compute cluster.
> > 
> > It is based on top of v4.2-rc2-mmotm-2015-07-15-16-46
> > It applies without conflicts to v4.2-rc2-mmotm-2015-07-17-16-04 as well
> > 
> > ---- USE CASES ----
> > 
> > The unified cgroup hierarchy has memory.low and memory.high knobs, which
> > are defined as the low and high boundaries for the workload working set
> > size. However, the working set size of a workload may be unknown or
> > change in time. With this patch set, one can periodically estimate the
> > amount of memory unused by each cgroup and tune their memory.low and
> > memory.high parameters accordingly, therefore optimizing the overall
> > memory utilization.
> > 
> > Another use case is balancing workloads within a compute cluster.
> > Knowing how much memory is not really used by a workload unit may help
> > take a more optimal decision when considering migrating the unit to
> > another node within the cluster.
> > 
> > Also, as noted by Minchan, this would be useful for per-process reclaim
> > (https://lwn.net/Articles/545668/). With idle tracking, we could reclaim idle
> > pages only by smart user memory manager.
> > 
> > ---- USER API ----
> > 
> > The user API consists of two new proc files:
> > 
> >  * /proc/kpageidle.  This file implements a bitmap where each bit corresponds
> >    to a page, indexed by PFN.
> 
> What are the bit mappings?  If I read the first byte of /proc/kpageidle
> I get PFN #0 in bit zero of that byte?  And the second byte of
> /proc/kpageidle contains PFN #8 in its LSB, etc?

The bitmap is an array of u64 elements: the page at pfn #i corresponds
to bit #i%64 of element #i/64. Byte order is native.

Will add this to docs.
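
For example, marking the 64 pages covered by element #k idle amounts to
writing an all-ones u64 at offset 8*k (illustrative sketch; fd refers
to an open /proc/kpageidle):

#include <stdint.h>
#include <unistd.h>

static int mark_word_idle(int fd, unsigned long pfn)
{
        uint64_t all_ones = ~0ULL;      /* the value is OR-ed into the bitmap */

        /* covers the 64 pages starting at pfn & ~63UL */
        return pwrite(fd, &all_ones, sizeof(all_ones),
                      pfn / 64 * 8) == sizeof(all_ones) ? 0 : -1;
}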

> 
> Maybe this is covered in the documentation file.
> 
> > When the bit is set, the corresponding page is
> >    idle. A page is considered idle if it has not been accessed since it was
> >    marked idle.
> 
> Perhaps we can spell out in some detail what "accessed" means?  I see
> you've hooked into mark_page_accessed(), so a read from disk is an
> access.  What about a write to disk?  And what about a page being
> accessed from some random device (could hook into get_user_pages()?) Is
> getting written to swap an access?  When a dirty pagecache page is
> written out by kswapd or direct reclaim?
> 
> This also should be in the permanent documentation.

OK, will add.

> 
> > To mark a page idle one should set the bit corresponding to the
> >    page by writing to the file. A value written to the file is OR-ed with the
> >    current bitmap value. Only user memory pages can be marked idle, for other
> >    page types input is silently ignored. Writing to this file beyond max PFN
> >    results in the ENXIO error. Only available when CONFIG_IDLE_PAGE_TRACKING is
> >    set.
> > 
> >    This file can be used to estimate the amount of pages that are not
> >    used by a particular workload as follows:
> > 
> >    1. mark all pages of interest idle by setting corresponding bits in the
> >       /proc/kpageidle bitmap
> >    2. wait until the workload accesses its working set
> >    3. read /proc/kpageidle and count the number of bits set
> 
> Security implications.  This interface could be used to learn about a
> sensitive application by poking data at it and then observing its
> memory access patterns.  Perhaps this is why the proc files are
> root-only (which I assume is sufficient).

That's one point. Another point is that if we allow unprivileged users
to access it, they may interfere with the system-wide daemon doing the
regular scan and estimating the system wss.

> Some words here about the security side of things and the reasoning
> behind the chosen permissions would be good to have.
> 
> >  * /proc/kpagecgroup.  This file contains a 64-bit inode number of the
> >    memory cgroup each page is charged to, indexed by PFN.
> 
> Actually "closest online ancestor".  This also should be in the
> interface documentation.

Actually, userspace knows nothing about online/offline cgroups, because
all cgroups used to be online and charge re-parenting was used to
forcibly empty a memcg on deletion. Anyway, I'll add a note.

> 
> > Only available when CONFIG_MEMCG is set.
> 
> CONFIG_MEMCG and CONFIG_IDLE_PAGE_TRACKING I assume?

No, it's present iff CONFIG_PROC_PAGE_MONITOR && CONFIG_MEMCG, because
it might be useful even w/o CONFIG_IDLE_PAGE_TRACKING, e.g. in order to
find out which memcg the pages of a particular process are accounted to.

> 
> > 
> >    This file can be used to find all pages (including unmapped file
> >    pages) accounted to a particular cgroup. Using /proc/kpageidle, one
> >    can then estimate the cgroup working set size.
> > 
> > For an example of using these files for estimating the amount of unused
> > memory pages per each memory cgroup, please see the script attached
> > below.
> 
> Why were these put in /proc anyway?  Rather than under /sys/fs/cgroup
> somewhere?  Presumably because /proc/kpageidle is useful in non-memcg
> setups.

Yes, one might use it for estimating the active wss of a single process
or of the whole system.
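
For instance, a per-process scan would translate virtual addresses to
pfns via /proc/PID/pagemap and then mark/re-check those pfns through
/proc/kpageidle (illustrative sketch, error handling elided):

#include <stdint.h>
#include <unistd.h>

/* pagemap entry: bit 63 = present, bits 0-54 = pfn */
static uint64_t vaddr_to_pfn(int pagemap_fd, unsigned long vaddr)
{
        uint64_t ent;

        if (pread(pagemap_fd, &ent, sizeof(ent),
                  vaddr / getpagesize() * sizeof(ent)) != sizeof(ent))
                return 0;
        return (ent & (1ULL << 63)) ? ent & ((1ULL << 55) - 1) : 0;
}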

> 
> > ---- PERFORMANCE EVALUATION ----
> 
> "^___" means "end of changelog".  Perhaps that should have been
> "^---\n" - unclear.

Sorry :-/

> 
> > Documentation/vm/pagemap.txt           |  22 ++-
> 
> I think we'll need quite a lot more than this to fully describe the
> interface?

Agree, the documentation sucks :-( Will try to forge something more
thorough.

Thanks,
Vladimir


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 7/8] proc: export idle flag via kpageflags
  2015-07-21 23:35   ` Andrew Morton
@ 2015-07-22 16:25     ` Vladimir Davydov
  2015-07-22 19:44       ` Andrew Morton
  0 siblings, 1 reply; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-22 16:25 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Tue, Jul 21, 2015 at 04:35:00PM -0700, Andrew Morton wrote:
> On Sun, 19 Jul 2015 15:31:16 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:
> 
> > As noted by Minchan, a benefit of reading idle flag from
> > /proc/kpageflags is that one can easily filter dirty and/or unevictable
> > pages while estimating the size of unused memory.
> > 
> > Note that idle flag read from /proc/kpageflags may be stale in case the
> > page was accessed via a PTE, because it would be too costly to iterate
> > over all page mappings on each /proc/kpageflags read to provide an
> > up-to-date value. To make sure the flag is up-to-date one has to read
> > /proc/kpageidle first.
> 
> Is there any value in teaching the regular old page scanner to update
> these flags?  If it's doing an rmap scan anyway...

I don't understand what you mean by "regular old page scanner". Could
you please elaborate?

Thanks,
Vladimir


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 5/8] mmu-notifier: add clear_young callback
  2015-07-21  8:51     ` Vladimir Davydov
@ 2015-07-22 16:33       ` Vladimir Davydov
  0 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-22 16:33 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

Hi Andrew,

Would you mind merging this incremental patch into the original one? Or
would it be better if I resubmitted the whole series with all the fixes?

On Tue, Jul 21, 2015 at 11:51:08AM +0300, Vladimir Davydov wrote:
> On Mon, Jul 20, 2015 at 11:34:21AM -0700, Andres Lagar-Cavilla wrote:
> > On Sun, Jul 19, 2015 at 5:31 AM, Vladimir Davydov <vdavydov@parallels.com>
> [...]
> > > +static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
> > > +                                       struct mm_struct *mm,
> > > +                                       unsigned long start,
> > > +                                       unsigned long end)
> > > +{
> > > +       struct kvm *kvm = mmu_notifier_to_kvm(mn);
> > > +       int young, idx;
> > > +
> > >
> > If you need to cut out another version please add comments as to the two
> > issues raised:
> > - This doesn't proactively flush TLBs -- not obvious if it should.
> > - This adversely affects performance on pre-Haswell Intel EPT.
> 
> Oops, I stopped reading your e-mail in reply to the previous version of
> this patch as soon as I saw the Reviewed-by tag, so I missed your
> request for the comment, sorry about that.
> 
> Here it goes (incremental):
> ---
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index ff4173ce6924..e69a5cb99571 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -397,6 +397,19 @@ static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
>  
>  	idx = srcu_read_lock(&kvm->srcu);
>  	spin_lock(&kvm->mmu_lock);
> +	/*
> +	 * Even though we do not flush TLB, this will still adversely
> +	 * affect performance on pre-Haswell Intel EPT, where there is
> +	 * no EPT Access Bit to clear so that we have to tear down EPT
> +	 * tables instead. If we find this unacceptable, we can always
> +	 * add a parameter to kvm_age_hva so that it effectively doesn't
> +	 * do anything on clear_young.
> +	 *
> +	 * Also note that currently we never issue secondary TLB flushes
> +	 * from clear_young, leaving this job up to the regular system
> +	 * cadence. If we find this inaccurate, we might come up with a
> +	 * more sophisticated heuristic later.
> +	 */
>  	young = kvm_age_hva(kvm, start, end);
>  	spin_unlock(&kvm->mmu_lock);
>  	srcu_read_unlock(&kvm->srcu, idx);


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 7/8] proc: export idle flag via kpageflags
  2015-07-22 16:25     ` Vladimir Davydov
@ 2015-07-22 19:44       ` Andrew Morton
  2015-07-22 20:46         ` Andres Lagar-Cavilla
  0 siblings, 1 reply; 57+ messages in thread
From: Andrew Morton @ 2015-07-22 19:44 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Wed, 22 Jul 2015 19:25:28 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:

> On Tue, Jul 21, 2015 at 04:35:00PM -0700, Andrew Morton wrote:
> > On Sun, 19 Jul 2015 15:31:16 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:
> > 
> > > As noted by Minchan, a benefit of reading idle flag from
> > > /proc/kpageflags is that one can easily filter dirty and/or unevictable
> > > pages while estimating the size of unused memory.
> > > 
> > > Note that idle flag read from /proc/kpageflags may be stale in case the
> > > page was accessed via a PTE, because it would be too costly to iterate
> > > over all page mappings on each /proc/kpageflags read to provide an
> > > up-to-date value. To make sure the flag is up-to-date one has to read
> > > /proc/kpageidle first.
> > 
> > Is there any value in teaching the regular old page scanner to update
> > these flags?  If it's doing an rmap scan anyway...
> 
> I don't understand what you mean by "regular old page scanner". Could
> you please elaborate?

Whenever kswapd or direct reclaim perform an rmap scan, take that as an
opportunity to also update PageIdle().


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 7/8] proc: export idle flag via kpageflags
  2015-07-22 19:44       ` Andrew Morton
@ 2015-07-22 20:46         ` Andres Lagar-Cavilla
  2015-07-23  7:57           ` Vladimir Davydov
  0 siblings, 1 reply; 57+ messages in thread
From: Andres Lagar-Cavilla @ 2015-07-22 20:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vladimir Davydov, Minchan Kim, Raghavendra K T, Johannes Weiner,
	Michal Hocko, Greg Thelen, Michel Lespinasse, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel


In page_referenced_one:

+       if (referenced)
+               clear_page_idle(page);

Andres

On Wed, Jul 22, 2015 at 12:44 PM, Andrew Morton <akpm@linux-foundation.org>
wrote:

> On Wed, 22 Jul 2015 19:25:28 +0300 Vladimir Davydov <
> vdavydov@parallels.com> wrote:
>
> > On Tue, Jul 21, 2015 at 04:35:00PM -0700, Andrew Morton wrote:
> > > On Sun, 19 Jul 2015 15:31:16 +0300 Vladimir Davydov <
> vdavydov@parallels.com> wrote:
> > >
> > > > As noted by Minchan, a benefit of reading idle flag from
> > > > /proc/kpageflags is that one can easily filter dirty and/or
> unevictable
> > > > pages while estimating the size of unused memory.
> > > >
> > > > Note that idle flag read from /proc/kpageflags may be stale in case
> the
> > > > page was accessed via a PTE, because it would be too costly to
> iterate
> > > > over all page mappings on each /proc/kpageflags read to provide an
> > > > up-to-date value. To make sure the flag is up-to-date one has to read
> > > > /proc/kpageidle first.
> > >
> > > Is there any value in teaching the regular old page scanner to update
> > > these flags?  If it's doing an rmap scan anyway...
> >
> > I don't understand what you mean by "regular old page scanner". Could
> > you please elaborate?
>
> Whenever kswapd or direct reclaim perform an rmap scan, take that as an
> opportunity to also update PageIdle().
>
>


-- 
Andres Lagar-Cavilla | Google Kernel Team | andreslc@google.com


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 7/8] proc: export idle flag via kpageflags
  2015-07-22 20:46         ` Andres Lagar-Cavilla
@ 2015-07-23  7:57           ` Vladimir Davydov
  0 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-23  7:57 UTC (permalink / raw)
  To: Andrew Morton, Andres Lagar-Cavilla
  Cc: Minchan Kim, Raghavendra K T, Johannes Weiner, Michal Hocko,
	Greg Thelen, Michel Lespinasse, David Rientjes, Pavel Emelyanov,
	Cyrill Gorcunov, Jonathan Corbet, linux-api, linux-doc, linux-mm,
	cgroups, linux-kernel

On Wed, Jul 22, 2015 at 01:46:21PM -0700, Andres Lagar-Cavilla wrote:
> In page_referenced_one:
> 
> +       if (referenced)
> +               clear_page_idle(page);
> 

Yep, that's it. Thanks, Andres.
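
For the record, the series hooks page_referenced_one() for exactly this
purpose; the relevant lines are roughly:

        if (referenced)
                clear_page_idle(page);
        /* conceal the cleared pte Access bit from the reclaimer */
        if (test_and_clear_page_young(page))
                referenced++;

so every rmap scan performed by kswapd or direct reclaim also keeps the
Idle flag up to date.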


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 6/8] proc: add kpageidle file
  2015-07-19 12:31 ` [PATCH -mm v9 6/8] proc: add kpageidle file Vladimir Davydov
  2015-07-21 23:34   ` Andrew Morton
@ 2015-07-24 14:08   ` Paul Gortmaker
  2015-07-24 14:17     ` Vladimir Davydov
  1 sibling, 1 reply; 57+ messages in thread
From: Paul Gortmaker @ 2015-07-24 14:08 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Michal Hocko, Greg Thelen,
	Michel Lespinasse, David Rientjes, Pavel Emelyanov,
	Cyrill Gorcunov, Jonathan Corbet, linux-api, LKML doc, linux-mm,
	cgroups, LKML, linux-next

On Sun, Jul 19, 2015 at 8:31 AM, Vladimir Davydov
<vdavydov@parallels.com> wrote:
> Knowing the portion of memory that is not used by a certain application
> or memory cgroup (idle memory) can be useful for partitioning the system
> efficiently, e.g. by setting memory cgroup limits appropriately.

The version of this commit currently in linux-next breaks cris and m68k
(and maybe others). It fails with:

fs/proc/page.c:341:4: error: implicit declaration of function
'pmdp_clear_young_notify' [-Werror=implicit-function-declaration]
fs/proc/page.c:347:4: error: implicit declaration of function
'ptep_clear_young_notify' [-Werror=implicit-function-declaration]
cc1: some warnings being treated as errors
make[3]: *** [fs/proc/page.o] Error 1
make[2]: *** [fs/proc] Error 2

http://kisskb.ellerman.id.au/kisskb/buildresult/12470364/

Bisect says:

65525488fa86cda44fb6870f29e9859c974700cd is the first bad commit
commit 65525488fa86cda44fb6870f29e9859c974700cd
Author: Vladimir Davydov <vdavydov@parallels.com>
Date:   Fri Jul 24 09:11:32 2015 +1000

    proc: add kpageidle file

Paul.
--

> Currently, the only means to estimate the amount of idle memory provided
> by the kernel is /proc/PID/{clear_refs,smaps}: the user can clear the
> access bit for all pages mapped to a particular process by writing 1 to
> clear_refs, wait for some time, and then count smaps:Referenced.
> However, this method has two serious shortcomings:
>
>  - it does not count unmapped file pages
>  - it affects the reclaimer logic
>
> To overcome these drawbacks, this patch introduces two new page flags,
> Idle and Young, and a new proc file, /proc/kpageidle. A page's Idle flag
> can only be set from userspace by setting bit in /proc/kpageidle at the
> offset corresponding to the page, and it is cleared whenever the page is
> accessed either through page tables (it is cleared in page_referenced()
> in this case) or using the read(2) system call (mark_page_accessed()).
> Thus by setting the Idle flag for pages of a particular workload, which
> can be found e.g. by reading /proc/PID/pagemap, waiting for some time to
> let the workload access its working set, and then reading the kpageidle
> file, one can estimate the amount of pages that are not used by the
> workload.
>
> The Young page flag is used to avoid interference with the memory
> reclaimer. A page's Young flag is set whenever the Access bit of a page
> table entry pointing to the page is cleared by writing to kpageidle. If
> page_referenced() is called on a Young page, it will add 1 to its return
> value, therefore concealing the fact that the Access bit was cleared.
>
> Note, since there is no room for extra page flags on 32 bit, this
> feature uses extended page flags when compiled on 32 bit.
>
> Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
> ---
>  Documentation/vm/pagemap.txt |  12 ++-
>  fs/proc/page.c               | 218 +++++++++++++++++++++++++++++++++++++++++++
>  fs/proc/task_mmu.c           |   4 +-
>  include/linux/mm.h           |  98 +++++++++++++++++++
>  include/linux/page-flags.h   |  11 +++
>  include/linux/page_ext.h     |   4 +
>  mm/Kconfig                   |  12 +++
>  mm/debug.c                   |   4 +
>  mm/huge_memory.c             |  11 ++-
>  mm/migrate.c                 |   5 +
>  mm/page_ext.c                |   3 +
>  mm/rmap.c                    |   5 +
>  mm/swap.c                    |   2 +
>  13 files changed, 385 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/vm/pagemap.txt b/Documentation/vm/pagemap.txt
> index 3a37ed184258..34fe828c3007 100644
> --- a/Documentation/vm/pagemap.txt
> +++ b/Documentation/vm/pagemap.txt
> @@ -5,7 +5,7 @@ pagemap is a new (as of 2.6.25) set of interfaces in the kernel that allow
>  userspace programs to examine the page tables and related information by
>  reading files in /proc.
>
> -There are four components to pagemap:
> +There are five components to pagemap:
>
>   * /proc/pid/pagemap.  This file lets a userspace process find out which
>     physical frame each virtual page is mapped to.  It contains one 64-bit
> @@ -70,6 +70,16 @@ There are four components to pagemap:
>     memory cgroup each page is charged to, indexed by PFN. Only available when
>     CONFIG_MEMCG is set.
>
> + * /proc/kpageidle.  This file implements a bitmap where each bit corresponds
> +   to a page, indexed by PFN. When the bit is set, the corresponding page is
> +   idle. A page is considered idle if it has not been accessed since it was
> +   marked idle. To mark a page idle one should set the bit corresponding to the
> +   page by writing to the file. A value written to the file is OR-ed with the
> +   current bitmap value. Only user memory pages can be marked idle, for other
> +   page types input is silently ignored. Writing to this file beyond max PFN
> +   results in the ENXIO error. Only available when CONFIG_IDLE_PAGE_TRACKING is
> +   set.
> +
>  Short descriptions to the page flags:
>
>   0. LOCKED
> diff --git a/fs/proc/page.c b/fs/proc/page.c
> index 70d23245dd43..273537885ab4 100644
> --- a/fs/proc/page.c
> +++ b/fs/proc/page.c
> @@ -5,6 +5,8 @@
>  #include <linux/ksm.h>
>  #include <linux/mm.h>
>  #include <linux/mmzone.h>
> +#include <linux/rmap.h>
> +#include <linux/mmu_notifier.h>
>  #include <linux/huge_mm.h>
>  #include <linux/proc_fs.h>
>  #include <linux/seq_file.h>
> @@ -16,6 +18,7 @@
>
>  #define KPMSIZE sizeof(u64)
>  #define KPMMASK (KPMSIZE - 1)
> +#define KPMBITS (KPMSIZE * BITS_PER_BYTE)
>
>  /* /proc/kpagecount - an array exposing page counts
>   *
> @@ -275,6 +278,217 @@ static const struct file_operations proc_kpagecgroup_operations = {
>  };
>  #endif /* CONFIG_MEMCG */
>
> +#ifdef CONFIG_IDLE_PAGE_TRACKING
> +/*
> + * Idle page tracking only considers user memory pages, for other types of
> + * pages the idle flag is always unset and an attempt to set it is silently
> + * ignored.
> + *
> + * We treat a page as a user memory page if it is on an LRU list, because it is
> + * always safe to pass such a page to rmap_walk(), which is essential for idle
> + * page tracking. With such an indicator of user pages we can skip isolated
> + * pages, but since there are not usually many of them, it will hardly affect
> + * the overall result.
> + *
> + * This function tries to get a user memory page by pfn as described above.
> + */
> +static struct page *kpageidle_get_page(unsigned long pfn)
> +{
> +       struct page *page;
> +       struct zone *zone;
> +
> +       if (!pfn_valid(pfn))
> +               return NULL;
> +
> +       page = pfn_to_page(pfn);
> +       if (!page || !PageLRU(page) ||
> +           !get_page_unless_zero(page))
> +               return NULL;
> +
> +       zone = page_zone(page);
> +       spin_lock_irq(&zone->lru_lock);
> +       if (unlikely(!PageLRU(page))) {
> +               put_page(page);
> +               page = NULL;
> +       }
> +       spin_unlock_irq(&zone->lru_lock);
> +       return page;
> +}
> +
> +static int kpageidle_clear_pte_refs_one(struct page *page,
> +                                       struct vm_area_struct *vma,
> +                                       unsigned long addr, void *arg)
> +{
> +       struct mm_struct *mm = vma->vm_mm;
> +       spinlock_t *ptl;
> +       pmd_t *pmd;
> +       pte_t *pte;
> +       bool referenced = false;
> +
> +       if (unlikely(PageTransHuge(page))) {
> +               pmd = page_check_address_pmd(page, mm, addr,
> +                                            PAGE_CHECK_ADDRESS_PMD_FLAG, &ptl);
> +               if (pmd) {
> +                       referenced = pmdp_clear_young_notify(vma, addr, pmd);
> +                       spin_unlock(ptl);
> +               }
> +       } else {
> +               pte = page_check_address(page, mm, addr, &ptl, 0);
> +               if (pte) {
> +                       referenced = ptep_clear_young_notify(vma, addr, pte);
> +                       pte_unmap_unlock(pte, ptl);
> +               }
> +       }
> +       if (referenced) {
> +               clear_page_idle(page);
> +               /*
> +                * We cleared the referenced bit in a mapping to this page. To
> +                * avoid interference with page reclaim, mark it young so that
> +                * page_referenced() will return > 0.
> +                */
> +               set_page_young(page);
> +       }
> +       return SWAP_AGAIN;
> +}
> +
> +static void kpageidle_clear_pte_refs(struct page *page)
> +{
> +       struct rmap_walk_control rwc = {
> +               .rmap_one = kpageidle_clear_pte_refs_one,
> +               .anon_lock = page_lock_anon_vma_read,
> +       };
> +       bool need_lock;
> +
> +       if (!page_mapped(page) ||
> +           !page_rmapping(page))
> +               return;
> +
> +       need_lock = !PageAnon(page) || PageKsm(page);
> +       if (need_lock && !trylock_page(page))
> +               return;
> +
> +       rmap_walk(page, &rwc);
> +
> +       if (need_lock)
> +               unlock_page(page);
> +}
> +
> +static ssize_t kpageidle_read(struct file *file, char __user *buf,
> +                             size_t count, loff_t *ppos)
> +{
> +       u64 __user *out = (u64 __user *)buf;
> +       struct page *page;
> +       unsigned long pfn, end_pfn;
> +       ssize_t ret = 0;
> +       u64 idle_bitmap = 0;
> +       int bit;
> +
> +       if (*ppos & KPMMASK || count & KPMMASK)
> +               return -EINVAL;
> +
> +       pfn = *ppos * BITS_PER_BYTE;
> +       if (pfn >= max_pfn)
> +               return 0;
> +
> +       end_pfn = pfn + count * BITS_PER_BYTE;
> +       if (end_pfn > max_pfn)
> +               end_pfn = ALIGN(max_pfn, KPMBITS);
> +
> +       for (; pfn < end_pfn; pfn++) {
> +               bit = pfn % KPMBITS;
> +               page = kpageidle_get_page(pfn);
> +               if (page) {
> +                       if (page_is_idle(page)) {
> +                               /*
> +                                * The page might have been referenced via a
> +                                * pte, in which case it is not idle. Clear
> +                                * refs and recheck.
> +                                */
> +                               kpageidle_clear_pte_refs(page);
> +                               if (page_is_idle(page))
> +                                       idle_bitmap |= 1ULL << bit;
> +                       }
> +                       put_page(page);
> +               }
> +               if (bit == KPMBITS - 1) {
> +                       if (put_user(idle_bitmap, out)) {
> +                               ret = -EFAULT;
> +                               break;
> +                       }
> +                       idle_bitmap = 0;
> +                       out++;
> +               }
> +       }
> +
> +       *ppos += (char __user *)out - buf;
> +       if (!ret)
> +               ret = (char __user *)out - buf;
> +       return ret;
> +}
> +
> +static ssize_t kpageidle_write(struct file *file, const char __user *buf,
> +                              size_t count, loff_t *ppos)
> +{
> +       const u64 __user *in = (const u64 __user *)buf;
> +       struct page *page;
> +       unsigned long pfn, end_pfn;
> +       ssize_t ret = 0;
> +       u64 idle_bitmap = 0;
> +       int bit;
> +
> +       if (*ppos & KPMMASK || count & KPMMASK)
> +               return -EINVAL;
> +
> +       pfn = *ppos * BITS_PER_BYTE;
> +       if (pfn >= max_pfn)
> +               return -ENXIO;
> +
> +       end_pfn = pfn + count * BITS_PER_BYTE;
> +       if (end_pfn > max_pfn)
> +               end_pfn = ALIGN(max_pfn, KPMBITS);
> +
> +       for (; pfn < end_pfn; pfn++) {
> +               bit = pfn % KPMBITS;
> +               if (bit == 0) {
> +                       if (get_user(idle_bitmap, in)) {
> +                               ret = -EFAULT;
> +                               break;
> +                       }
> +                       in++;
> +               }
> +               if (idle_bitmap >> bit & 1) {
> +                       page = kpageidle_get_page(pfn);
> +                       if (page) {
> +                               kpageidle_clear_pte_refs(page);
> +                               set_page_idle(page);
> +                               put_page(page);
> +                       }
> +               }
> +       }
> +
> +       *ppos += (const char __user *)in - buf;
> +       if (!ret)
> +               ret = (const char __user *)in - buf;
> +       return ret;
> +}
> +
> +static const struct file_operations proc_kpageidle_operations = {
> +       .llseek = mem_lseek,
> +       .read = kpageidle_read,
> +       .write = kpageidle_write,
> +};
> +
> +#ifndef CONFIG_64BIT
> +static bool need_page_idle(void)
> +{
> +       return true;
> +}
> +struct page_ext_operations page_idle_ops = {
> +       .need = need_page_idle,
> +};
> +#endif
> +#endif /* CONFIG_IDLE_PAGE_TRACKING */
> +
>  static int __init proc_page_init(void)
>  {
>         proc_create("kpagecount", S_IRUSR, NULL, &proc_kpagecount_operations);
> @@ -282,6 +496,10 @@ static int __init proc_page_init(void)
>  #ifdef CONFIG_MEMCG
>         proc_create("kpagecgroup", S_IRUSR, NULL, &proc_kpagecgroup_operations);
>  #endif
> +#ifdef CONFIG_IDLE_PAGE_TRACKING
> +       proc_create("kpageidle", S_IRUSR | S_IWUSR, NULL,
> +                   &proc_kpageidle_operations);
> +#endif
>         return 0;
>  }
>  fs_initcall(proc_page_init);
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 860bb0f30f14..7c9a17414106 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -459,7 +459,7 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
>
>         mss->resident += size;
>         /* Accumulate the size in pages that have been accessed. */
> -       if (young || PageReferenced(page))
> +       if (young || page_is_young(page) || PageReferenced(page))
>                 mss->referenced += size;
>         mapcount = page_mapcount(page);
>         if (mapcount >= 2) {
> @@ -808,6 +808,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
>
>                 /* Clear accessed and referenced bits. */
>                 pmdp_test_and_clear_young(vma, addr, pmd);
> +               test_and_clear_page_young(page);
>                 ClearPageReferenced(page);
>  out:
>                 spin_unlock(ptl);
> @@ -835,6 +836,7 @@ out:
>
>                 /* Clear accessed and referenced bits. */
>                 ptep_test_and_clear_young(vma, addr, pte);
> +               test_and_clear_page_young(page);
>                 ClearPageReferenced(page);
>         }
>         pte_unmap_unlock(pte - 1, ptl);
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c3a2b37365f6..0e62be7d5138 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2202,5 +2202,103 @@ void __init setup_nr_node_ids(void);
>  static inline void setup_nr_node_ids(void) {}
>  #endif
>
> +#ifdef CONFIG_IDLE_PAGE_TRACKING
> +#ifdef CONFIG_64BIT
> +static inline bool page_is_young(struct page *page)
> +{
> +       return PageYoung(page);
> +}
> +
> +static inline void set_page_young(struct page *page)
> +{
> +       SetPageYoung(page);
> +}
> +
> +static inline bool test_and_clear_page_young(struct page *page)
> +{
> +       return TestClearPageYoung(page);
> +}
> +
> +static inline bool page_is_idle(struct page *page)
> +{
> +       return PageIdle(page);
> +}
> +
> +static inline void set_page_idle(struct page *page)
> +{
> +       SetPageIdle(page);
> +}
> +
> +static inline void clear_page_idle(struct page *page)
> +{
> +       ClearPageIdle(page);
> +}
> +#else /* !CONFIG_64BIT */
> +/*
> + * If there is not enough space to store Idle and Young bits in page flags, use
> + * page ext flags instead.
> + */
> +extern struct page_ext_operations page_idle_ops;
> +
> +static inline bool page_is_young(struct page *page)
> +{
> +       return test_bit(PAGE_EXT_YOUNG, &lookup_page_ext(page)->flags);
> +}
> +
> +static inline void set_page_young(struct page *page)
> +{
> +       set_bit(PAGE_EXT_YOUNG, &lookup_page_ext(page)->flags);
> +}
> +
> +static inline bool test_and_clear_page_young(struct page *page)
> +{
> +       return test_and_clear_bit(PAGE_EXT_YOUNG,
> +                                 &lookup_page_ext(page)->flags);
> +}
> +
> +static inline bool page_is_idle(struct page *page)
> +{
> +       return test_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
> +}
> +
> +static inline void set_page_idle(struct page *page)
> +{
> +       set_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
> +}
> +
> +static inline void clear_page_idle(struct page *page)
> +{
> +       clear_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
> +}
> +#endif /* CONFIG_64BIT */
> +#else /* !CONFIG_IDLE_PAGE_TRACKING */
> +static inline bool page_is_young(struct page *page)
> +{
> +       return false;
> +}
> +
> +static inline void set_page_young(struct page *page)
> +{
> +}
> +
> +static inline bool test_and_clear_page_young(struct page *page)
> +{
> +       return false;
> +}
> +
> +static inline bool page_is_idle(struct page *page)
> +{
> +       return false;
> +}
> +
> +static inline void set_page_idle(struct page *page)
> +{
> +}
> +
> +static inline void clear_page_idle(struct page *page)
> +{
> +}
> +#endif /* CONFIG_IDLE_PAGE_TRACKING */
> +
>  #endif /* __KERNEL__ */
>  #endif /* _LINUX_MM_H */
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 91b7f9b2b774..478f2241f284 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -109,6 +109,10 @@ enum pageflags {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>         PG_compound_lock,
>  #endif
> +#if defined(CONFIG_IDLE_PAGE_TRACKING) && defined(CONFIG_64BIT)
> +       PG_young,
> +       PG_idle,
> +#endif
>         __NR_PAGEFLAGS,
>
>         /* Filesystems */
> @@ -363,6 +367,13 @@ PAGEFLAG_FALSE(HWPoison)
>  #define __PG_HWPOISON 0
>  #endif
>
> +#if defined(CONFIG_IDLE_PAGE_TRACKING) && defined(CONFIG_64BIT)
> +TESTPAGEFLAG(Young, young, PF_ANY)
> +SETPAGEFLAG(Young, young, PF_ANY)
> +TESTCLEARFLAG(Young, young, PF_ANY)
> +PAGEFLAG(Idle, idle, PF_ANY)
> +#endif
> +
>  /*
>   * On an anonymous page mapped into a user virtual memory area,
>   * page->mapping points to its anon_vma, not to a struct address_space;
> diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
> index c42981cd99aa..17f118a82854 100644
> --- a/include/linux/page_ext.h
> +++ b/include/linux/page_ext.h
> @@ -26,6 +26,10 @@ enum page_ext_flags {
>         PAGE_EXT_DEBUG_POISON,          /* Page is poisoned */
>         PAGE_EXT_DEBUG_GUARD,
>         PAGE_EXT_OWNER,
> +#if defined(CONFIG_IDLE_PAGE_TRACKING) && !defined(CONFIG_64BIT)
> +       PAGE_EXT_YOUNG,
> +       PAGE_EXT_IDLE,
> +#endif
>  };
>
>  /*
> diff --git a/mm/Kconfig b/mm/Kconfig
> index e79de2bd12cd..db817e2c2ec8 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -654,3 +654,15 @@ config DEFERRED_STRUCT_PAGE_INIT
>           when kswapd starts. This has a potential performance impact on
>           processes running early in the lifetime of the system until kswapd
>           finishes the initialisation.
> +
> +config IDLE_PAGE_TRACKING
> +       bool "Enable idle page tracking"
> +       select PROC_PAGE_MONITOR
> +       select PAGE_EXTENSION if !64BIT
> +       help
> +         This feature allows estimating the amount of user pages that have
> +         not been touched during a given period of time. This information can
> +         be useful to tune memory cgroup limits and/or for job placement
> +         within a compute cluster.
> +
> +         See Documentation/vm/pagemap.txt for more details.
> diff --git a/mm/debug.c b/mm/debug.c
> index 76089ddf99ea..6c1b3ea61bfd 100644
> --- a/mm/debug.c
> +++ b/mm/debug.c
> @@ -48,6 +48,10 @@ static const struct trace_print_flags pageflag_names[] = {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>         {1UL << PG_compound_lock,       "compound_lock" },
>  #endif
> +#if defined(CONFIG_IDLE_PAGE_TRACKING) && defined(CONFIG_64BIT)
> +       {1UL << PG_young,               "young"         },
> +       {1UL << PG_idle,                "idle"          },
> +#endif
>  };
>
>  static void dump_flags(unsigned long flags,
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 8f9a334a6c66..5ab46adca104 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1806,6 +1806,11 @@ static void __split_huge_page_refcount(struct page *page,
>                 /* clear PageTail before overwriting first_page */
>                 smp_wmb();
>
> +               if (page_is_young(page))
> +                       set_page_young(page_tail);
> +               if (page_is_idle(page))
> +                       set_page_idle(page_tail);
> +
>                 /*
>                  * __split_huge_page_splitting() already set the
>                  * splitting bit in all pmd that could map this
> @@ -2311,7 +2316,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>                 VM_BUG_ON_PAGE(PageLRU(page), page);
>
>                 /* If there is no mapped pte young don't collapse the page */
> -               if (pte_young(pteval) || PageReferenced(page) ||
> +               if (pte_young(pteval) ||
> +                   page_is_young(page) || PageReferenced(page) ||
>                     mmu_notifier_test_young(vma->vm_mm, address))
>                         referenced = true;
>         }
> @@ -2738,7 +2744,8 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>                  */
>                 if (page_count(page) != 1 + !!PageSwapCache(page))
>                         goto out_unmap;
> -               if (pte_young(pteval) || PageReferenced(page) ||
> +               if (pte_young(pteval) ||
> +                   page_is_young(page) || PageReferenced(page) ||
>                     mmu_notifier_test_young(vma->vm_mm, address))
>                         referenced = true;
>         }
> diff --git a/mm/migrate.c b/mm/migrate.c
> index d3529d620a5b..d86cec005aa6 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -524,6 +524,11 @@ void migrate_page_copy(struct page *newpage, struct page *page)
>                         __set_page_dirty_nobuffers(newpage);
>         }
>
> +       if (page_is_young(page))
> +               set_page_young(newpage);
> +       if (page_is_idle(page))
> +               set_page_idle(newpage);
> +
>         /*
>          * Copy NUMA information to the new page, to prevent over-eager
>          * future migrations of this same page.
> diff --git a/mm/page_ext.c b/mm/page_ext.c
> index d86fd2f5353f..e4b3af054bf2 100644
> --- a/mm/page_ext.c
> +++ b/mm/page_ext.c
> @@ -59,6 +59,9 @@ static struct page_ext_operations *page_ext_ops[] = {
>  #ifdef CONFIG_PAGE_OWNER
>         &page_owner_ops,
>  #endif
> +#if defined(CONFIG_IDLE_PAGE_TRACKING) && !defined(CONFIG_64BIT)
> +       &page_idle_ops,
> +#endif
>  };
>
>  static unsigned long total_usage;
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 30812e9042ae..9e411aa03176 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -900,6 +900,11 @@ static int page_referenced_one(struct page *page, struct vm_area_struct *vma,
>                 pte_unmap_unlock(pte, ptl);
>         }
>
> +       if (referenced)
> +               clear_page_idle(page);
> +       if (test_and_clear_page_young(page))
> +               referenced++;
> +
>         if (referenced) {
>                 pra->referenced++;
>                 pra->vm_flags |= vma->vm_flags;
> diff --git a/mm/swap.c b/mm/swap.c
> index d398860badd1..04b6ce51bcf0 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -623,6 +623,8 @@ void mark_page_accessed(struct page *page)
>         } else if (!PageReferenced(page)) {
>                 SetPageReferenced(page);
>         }
> +       if (page_is_idle(page))
> +               clear_page_idle(page);
>  }
>  EXPORT_SYMBOL(mark_page_accessed);
>
> --
> 2.1.4
>
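
For reference, the offset arithmetic above (one bit per PFN, packed into
native-endian 8-byte words, so the word covering PFN p sits at file offset
(p / 64) * 8) lets userspace test a single PFN with one aligned pread().
A minimal sketch, not part of the patch, assuming root and a kernel with
this series applied:

/* Sketch only: check one PFN's idle bit in /proc/kpageidle. */
#include <fcntl.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	uint64_t pfn, word;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pfn>\n", argv[0]);
		return 1;
	}
	pfn = strtoull(argv[1], NULL, 0);

	fd = open("/proc/kpageidle", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* The word holding PFN p lives at offset (p / 64) * 8, bit p % 64.
	 * Reads must be 8-byte sized and aligned, or the KPMMASK checks
	 * above make the kernel return -EINVAL. */
	if (pread(fd, &word, sizeof(word), (pfn / 64) * sizeof(word)) !=
	    sizeof(word)) {
		perror("pread");
		return 1;
	}
	printf("pfn %" PRIu64 " is %s\n", pfn,
	       word >> (pfn % 64) & 1 ? "idle" : "not idle");
	close(fd);
	return 0;
}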

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 6/8] proc: add kpageidle file
  2015-07-24 14:08   ` Paul Gortmaker
@ 2015-07-24 14:17     ` Vladimir Davydov
  0 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-24 14:17 UTC (permalink / raw)
  To: Paul Gortmaker
  Cc: Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Michal Hocko, Greg Thelen,
	Michel Lespinasse, David Rientjes, Pavel Emelyanov,
	Cyrill Gorcunov, Jonathan Corbet, linux-api, LKML doc, linux-mm,
	cgroups, LKML, linux-next

On Fri, Jul 24, 2015 at 10:08:25AM -0400, Paul Gortmaker wrote:

> fs/proc/page.c:341:4: error: implicit declaration of function
> 'pmdp_clear_young_notify' [-Werror=implicit-function-declaration]
> fs/proc/page.c:347:4: error: implicit declaration of function
> 'ptep_clear_young_notify' [-Werror=implicit-function-declaration]
> cc1: some warnings being treated as errors
> make[3]: *** [fs/proc/page.o] Error 1
> make[2]: *** [fs/proc] Error 2

My bad, sorry.

It's already been reported by the kbuild-test-robot, see

  [linux-next:master 3983/4215] fs/proc/page.c:332:4: error: implicit declaration of function 'pmdp_clear_young_notify'

The fix is:

From: Vladimir Davydov <vdavydov@parallels.com>
Subject: [PATCH] mmu_notifier: add missing stubs for clear_young

This is a compilation fix for !CONFIG_MMU_NOTIFIER.

Fixes: mmu-notifier-add-clear_young-callback
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index a5b17137c683..a1a210d59961 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -471,6 +471,8 @@ static inline void mmu_notifier_mm_destroy(struct mm_struct *mm)
 
 #define ptep_clear_flush_young_notify ptep_clear_flush_young
 #define pmdp_clear_flush_young_notify pmdp_clear_flush_young
+#define ptep_clear_young_notify ptep_test_and_clear_young
+#define pmdp_clear_young_notify pmdp_test_and_clear_young
 #define	ptep_clear_flush_notify ptep_clear_flush
 #define pmdp_huge_clear_flush_notify pmdp_huge_clear_flush
 #define pmdp_huge_get_and_clear_notify pmdp_huge_get_and_clear


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-22 16:23   ` Vladimir Davydov
@ 2015-07-25 16:24     ` Vladimir Davydov
  0 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-25 16:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Michal Hocko, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel, Kees Cook

On Wed, Jul 22, 2015 at 07:23:53PM +0300, Vladimir Davydov wrote:
> On Tue, Jul 21, 2015 at 04:34:02PM -0700, Andrew Morton wrote:
> > On Sun, 19 Jul 2015 15:31:09 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:

> > > Documentation/vm/pagemap.txt           |  22 ++-
> > 
> > I think we'll need quite a lot more than this to fully describe the
> > interface?
> 
> Agree, the documentation sucks :-( Will try to forge something more
> thorough.

The incremental patch is attached. Could you please merge it into
proc-add-kpageidle-file?
---
From: Vladimir Davydov <vdavydov@parallels.com>
Subject: [PATCH] Documentation: Add idle page tracking description

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>

diff --git a/Documentation/vm/00-INDEX b/Documentation/vm/00-INDEX
index 081c49777abb..6a5e2a102a45 100644
--- a/Documentation/vm/00-INDEX
+++ b/Documentation/vm/00-INDEX
@@ -14,6 +14,8 @@ hugetlbpage.txt
 	- a brief summary of hugetlbpage support in the Linux kernel.
 hwpoison.txt
 	- explains what hwpoison is
+idle_page_tracking.txt
+	- description of the idle page tracking feature.
 ksm.txt
 	- how to use the Kernel Samepage Merging feature.
 numa
diff --git a/Documentation/vm/idle_page_tracking.txt b/Documentation/vm/idle_page_tracking.txt
new file mode 100644
index 000000000000..d0f332d544c4
--- /dev/null
+++ b/Documentation/vm/idle_page_tracking.txt
@@ -0,0 +1,94 @@
+MOTIVATION
+
+The idle page tracking feature allows tracking which memory pages are being
+accessed by a workload and which are idle. This information can be useful for
+estimating the workload's working set size, which, in turn, can be taken into
+account when configuring the workload parameters, setting memory cgroup limits,
+or deciding where to place the workload within a compute cluster.
+
+USER API
+
+If CONFIG_IDLE_PAGE_TRACKING is enabled at compile time, a new read-write
+file, /proc/kpageidle, is present on the proc filesystem.
+
+The file implements a bitmap where each bit corresponds to a memory page. The
+bitmap is represented by an array of 8-byte integers, and the page at PFN #i is
+mapped to bit #i%64 of array element #i/64; the byte order is native. When a
+bit is set, the corresponding page is idle.
+
+A page is considered idle if it has not been accessed since it was marked idle
+(for more details on what "accessed" actually means see the IMPLEMENTATION
+DETAILS section). To mark a page idle one has to set the bit corresponding to
+the page by writing to the file. A value written to the file is OR-ed with the
+current bitmap value.
+
+Only accesses to user memory pages are tracked. These are pages mapped into a
+process address space, page cache and buffer pages, and swap cache pages. For
+other page types (e.g. SLAB pages) an attempt to mark a page idle is silently
+ignored, and hence such pages are never reported idle.
+
+For huge pages the idle flag is set only on the head page, so one has to read
+/proc/kpageflags in order to correctly count idle huge pages.
+
+Reading from or writing to /proc/kpageidle will return -EINVAL if you are not
+starting the read/write on an 8-byte boundary, or if the size of the read/write
+is not a multiple of 8 bytes. Writing to this file beyond max PFN will return
+-ENXIO.
+
+That said, in order to estimate the amount of pages that are not used by a
+workload, one should:
+
+ 1. Mark all the workload's pages as idle by setting corresponding bits in the
+    /proc/kpageidle bitmap. The pages can be found by reading /proc/pid/pagemap
+    if the workload is represented by a process, or by filtering out alien pages
+    using /proc/kpagecgroup in case the workload is placed in a memory cgroup.
+
+ 2. Wait until the workload accesses its working set.
+
+ 3. Read /proc/kpageidle and count the number of bits set. If one wants to
+    ignore certain types of pages, e.g. mlocked pages since they are not
+    reclaimable, he or she can filter them out using /proc/kpageflags.
+
+See Documentation/vm/pagemap.txt for more information about /proc/pid/pagemap,
+/proc/kpageflags, and /proc/kpagecgroup.
+
+IMPLEMENTATION DETAILS
+
+The kernel internally keeps track of accesses to user memory pages in order to
+reclaim unreferenced pages first on memory shortage conditions. A page is
+considered referenced if it has been recently accessed via a process address
+space (in which case one or more of the PTEs it is mapped to will have the
+Accessed bit set) or if it has been marked accessed explicitly by the kernel
+(see mark_page_accessed()). The latter happens when:
+
+ - a userspace process reads or writes a page using a system call (e.g. read(2)
+   or write(2))
+
+ - a page that is used for storing filesystem buffers is read or written,
+   because a process needs filesystem metadata stored in it (e.g. when
+   listing a directory tree)
+
+ - a page is accessed by a device driver using get_user_pages()
+
+When a dirty page is written to swap or disk as a result of memory reclaim or
+exceeding the dirty memory limit, it is not marked referenced.
+
+The idle memory tracking feature adds a new page flag, the Idle flag. This flag
+is set manually, by writing to /proc/kpageidle (see the USER API section), and
+cleared automatically whenever a page is referenced as defined above.
+
+When a page is marked idle, the Accessed bit must be cleared in all PTEs it is
+mapped to, otherwise we will not be able to detect accesses to the page coming
+from a process address space. To avoid interference with the reclaimer, which,
+as noted above, uses the Accessed bit to promote actively referenced pages, one
+more page flag is introduced, the Young flag. When the PTE Accessed bit is
+cleared as a result of setting or updating a page's Idle flag, the Young flag
+is set on the page. The reclaimer treats the Young flag as an extra PTE
+Accessed bit and therefore will consider such a page as referenced.
+
+Since the idle memory tracking feature is based on the memory reclaimer logic,
+it only works with pages that are on an LRU list; other pages are silently
+ignored. That means it will ignore a user memory page if it is isolated, but
+since there are usually not many of them, it should not affect the overall
+result noticeably. In order not to stall scanning of /proc/kpageidle, locked
+pages may be skipped too.
diff --git a/Documentation/vm/pagemap.txt b/Documentation/vm/pagemap.txt
index 538735465693..cff513e28a13 100644
--- a/Documentation/vm/pagemap.txt
+++ b/Documentation/vm/pagemap.txt
@@ -71,15 +71,8 @@ There are five components to pagemap:
    memory cgroup each page is charged to, indexed by PFN. Only available when
    CONFIG_MEMCG is set.
 
- * /proc/kpageidle.  This file implements a bitmap where each bit corresponds
-   to a page, indexed by PFN. When the bit is set, the corresponding page is
-   idle. A page is considered idle if it has not been accessed since it was
-   marked idle. To mark a page idle one should set the bit corresponding to the
-   page by writing to the file. A value written to the file is OR-ed with the
-   current bitmap value. Only user memory pages can be marked idle, for other
-   page types input is silently ignored. Writing to this file beyond max PFN
-   results in the ENXIO error. Only available when CONFIG_IDLE_PAGE_TRACKING is
-   set.
+ * /proc/kpageidle.  This file implements the API of the idle page tracking
+   feature.  See Documentation/vm/idle_page_tracking.txt for more details.
 
 Short descriptions to the page flags:
 
diff --git a/mm/Kconfig b/mm/Kconfig
index a1de09926171..90fa89175102 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -666,4 +666,4 @@ config IDLE_PAGE_TRACKING
 	  be useful to tune memory cgroup limits and/or for job placement
 	  within a compute cluster.
 
-	  See Documentation/vm/pagemap.txt for more details.
+	  See Documentation/vm/idle_page_tracking.txt for more details.
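
The three steps in the USER API section above map directly onto syscalls.
A rough sketch, not part of the patch, of a system-wide estimate (root
required; the filtering by /proc/kpagecgroup or /proc/kpageflags that the
document recommends is omitted for brevity):

/* Sketch only: mark every page idle, wait, then count surviving bits. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BATCH 4096	/* u64 words per syscall; arbitrary */

int main(void)
{
	uint64_t buf[BATCH];
	unsigned long long idle = 0;
	ssize_t n, i;
	int fd;

	fd = open("/proc/kpageidle", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Step 1: mark all pages idle. Written bits are OR-ed into the
	 * bitmap, non-user pages silently ignore the write, and the write
	 * past max_pfn fails with ENXIO, which terminates the loop. */
	memset(buf, 0xff, sizeof(buf));
	while (write(fd, buf, sizeof(buf)) > 0)
		;

	/* Step 2: let the workload touch its working set. */
	sleep(60);	/* interval is arbitrary */

	/* Step 3: count the bits that are still set. */
	if (lseek(fd, 0, SEEK_SET) < 0)
		return 1;
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		for (i = 0; i < n / 8; i++)
			idle += __builtin_popcountll(buf[i]); /* GCC builtin */

	printf("idle pages: %llu\n", idle);
	close(fd);
	return 0;
}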


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-21 23:34 ` Andrew Morton
  2015-07-22 16:23   ` Vladimir Davydov
@ 2015-07-27 19:18   ` Kees Cook
  2015-07-27 19:25     ` Andrew Morton
  1 sibling, 1 reply; 57+ messages in thread
From: Kees Cook @ 2015-07-27 19:18 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vladimir Davydov, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Michal Hocko, Greg Thelen,
	Michel Lespinasse, David Rientjes, Pavel Emelyanov,
	Cyrill Gorcunov, Jonathan Corbet, Linux API, linux-doc, Linux-MM,
	Cgroups, LKML

On Tue, Jul 21, 2015 at 4:34 PM, Andrew Morton
<akpm@linux-foundation.org> wrote:
> On Sun, 19 Jul 2015 15:31:09 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:
>> To mark a page idle one should set the bit corresponding to the
>>    page by writing to the file. A value written to the file is OR-ed with the
>>    current bitmap value. Only user memory pages can be marked idle, for other
>>    page types input is silently ignored. Writing to this file beyond max PFN
>>    results in the ENXIO error. Only available when CONFIG_IDLE_PAGE_TRACKING is
>>    set.
>>
>>    This file can be used to estimate the amount of pages that are not
>>    used by a particular workload as follows:
>>
>>    1. mark all pages of interest idle by setting corresponding bits in the
>>       /proc/kpageidle bitmap
>>    2. wait until the workload accesses its working set
>>    3. read /proc/kpageidle and count the number of bits set
>
> Security implications.  This interface could be used to learn about a
> sensitive application by poking data at it and then observing its
> memory access patterns.  Perhaps this is why the proc files are
> root-only (which I assume is sufficient).  Some words here about the
> security side of things and the reasoning behind the chosen permissions
> would be good to have.

As long as this stays true-root-only, I think it should be safe enough.

>>  * /proc/kpagecgroup.  This file contains a 64-bit inode number of the
>>    memory cgroup each page is charged to, indexed by PFN.
>
> Actually "closest online ancestor".  This also should be in the
> interface documentation.
>
>> Only available when CONFIG_MEMCG is set.
>
> CONFIG_MEMCG and CONFIG_IDLE_PAGE_TRACKING I assume?
>
>>
>>    This file can be used to find all pages (including unmapped file
>>    pages) accounted to a particular cgroup. Using /proc/kpageidle, one
>>    can then estimate the cgroup working set size.
>>
>> For an example of using these files for estimating the amount of unused
>> memory pages per each memory cgroup, please see the script attached
>> below.
>
> Why were these put in /proc anyway?  Rather than under /sys/fs/cgroup
> somewhere?  Presumably because /proc/kpageidle is useful in non-memcg
> setups.

Do we need a /proc/vm/ for holding these kinds of things? We're
collecting a lot there. Or invent some way for this to be sensible in
/sys?

-Kees

-- 
Kees Cook
Chrome OS Security


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-27 19:18   ` Kees Cook
@ 2015-07-27 19:25     ` Andrew Morton
  0 siblings, 0 replies; 57+ messages in thread
From: Andrew Morton @ 2015-07-27 19:25 UTC (permalink / raw)
  To: Kees Cook
  Cc: Vladimir Davydov, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Michal Hocko, Greg Thelen,
	Michel Lespinasse, David Rientjes, Pavel Emelyanov,
	Cyrill Gorcunov, Jonathan Corbet, Linux API, linux-doc, Linux-MM,
	Cgroups, LKML

On Mon, 27 Jul 2015 12:18:57 -0700 Kees Cook <keescook@chromium.org> wrote:

> > Why were these put in /proc anyway?  Rather than under /sys/fs/cgroup
> > somewhere?  Presumably because /proc/kpageidle is useful in non-memcg
> > setups.
> 
> Do we need a /proc/vm/ for holding these kinds of things? We're
> collecting a lot there. Or invent some way for this to be sensible in
> /sys?

/proc is the traditional place for such things (/proc/kpagecount,
/proc/kpageflags, /proc/pagetypeinfo).  But that was probably a
mistake.

/proc/sys/vm is rather a dumping ground of random tunables and
statuses, but yes, I do think that moving the kpageidle stuff into there
would be better.


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-19 12:31 [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
                   ` (10 preceding siblings ...)
  2015-07-21 23:34 ` Andrew Morton
@ 2015-07-29 12:36 ` Michal Hocko
  2015-07-29 13:59   ` Vladimir Davydov
  11 siblings, 1 reply; 57+ messages in thread
From: Michal Hocko @ 2015-07-29 12:36 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Sun 19-07-15 15:31:09, Vladimir Davydov wrote:
[...]
> ---- USER API ----
> 
> The user API consists of two new proc files:

I was thinking about this for a while. I dislike the interface.  It is
quite awkward to use - e.g. you have to read the full memory to check the
idleness of a single memcg. This might turn out to be a problem especially
on large machines. It also provides very low-level information (per-pfn
idleness) which is inherently racy. Does anybody really require this
level of detail?

I would assume that most users are interested only in a single number
which tells the idleness of the system/memcg. Well, you have mentioned
a per-process reclaim but I am quite skeptical about this.

I guess the primary reason to rely on the pfn rather than the LRU walk,
which would be more targeted (especially for memcg cases), is that we
cannot hold lru lock for the whole LRU walk and we cannot continue
walking after the lock is dropped. Maybe we can try to address that
instead? I do not think this is easy to achieve but have you considered
that as an option?

-- 
Michal Hocko
SUSE Labs
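
For reference, checking the idleness of a single memcg with the proposed
files would look roughly like the sketch below. count_idle_in_memcg() is a
hypothetical helper (the inode number comes from stat(2) on the memcg
directory), and it illustrates the point above: the full PFN range is
scanned even when only one cgroup is of interest.

/* Sketch only: one kpageidle word and 64 kpagecgroup entries cover the
 * same 64 PFNs; the partial tail at max_pfn is ignored for brevity. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static unsigned long long count_idle_in_memcg(uint64_t ino)
{
	uint64_t idle_word, cg[64];
	unsigned long long count = 0;
	int bit;
	int idle_fd = open("/proc/kpageidle", O_RDONLY);
	int cg_fd = open("/proc/kpagecgroup", O_RDONLY);

	if (idle_fd < 0 || cg_fd < 0)
		return 0;

	while (read(idle_fd, &idle_word, sizeof(idle_word)) == sizeof(idle_word) &&
	       read(cg_fd, cg, sizeof(cg)) == sizeof(cg))
		for (bit = 0; bit < 64; bit++)
			if ((idle_word >> bit & 1) && cg[bit] == ino)
				count++;

	close(idle_fd);
	close(cg_fd);
	return count;
}

int main(int argc, char **argv)
{
	if (argc != 2)
		return 1;
	printf("%llu\n", count_idle_in_memcg(strtoull(argv[1], NULL, 0)));
	return 0;
}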


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 12:36 ` Michal Hocko
@ 2015-07-29 13:59   ` Vladimir Davydov
  2015-07-29 14:12     ` Michel Lespinasse
  2015-07-29 14:26     ` Michal Hocko
  0 siblings, 2 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-29 13:59 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Wed, Jul 29, 2015 at 02:36:30PM +0200, Michal Hocko wrote:
> On Sun 19-07-15 15:31:09, Vladimir Davydov wrote:
> [...]
> > ---- USER API ----
> > 
> > The user API consists of two new proc files:
> 
> I was thinking about this for a while. I dislike the interface.  It is
> quite awkward to use - e.g. you have to read the full memory to check the
> idleness of a single memcg. This might turn out to be a problem especially
> on large machines.

Yes, with this API estimating the wss of a single memory cgroup will
cost almost as much as doing this for the whole system.

Come to think of it, does anyone really need to estimate idleness of one
particular cgroup? If we are doing this for finding an optimal memcg
limits configuration or while considering a load move within a cluster
(which I think are the primary use cases for the feature), we must do it
system-wide to see the whole picture.

> It also provides very low-level information (per-pfn idleness) which
> is inherently racy. Does anybody really require this level of detail?

Well, one might want to do it per-process, obtaining PFNs from
/proc/pid/pagemap.

> 
> I would assume that most users are interested only in a single number
> which tells the idleness of the system/memcg.

Yes, that's what I need it for - estimating containers' wss for setting
their limits accordingly.

> Well, you have mentioned a per-process reclaim but I am quite
> skeptical about this.

This is what Minchan mentioned initially. Personally, I'm not going to
use it per-process, but I wouldn't rule out this use case either.

> 
> I guess the primary reason to rely on the pfn rather than the LRU walk,
> which would be more targeted (especially for memcg cases), is that we
> cannot hold lru lock for the whole LRU walk and we cannot continue
> walking after the lock is dropped. Maybe we can try to address that
> instead? I do not think this is easy to achieve but have you considered
> that as an option?

Yes, I have, and I've come to a conclusion it's not doable, because LRU
lists can be constantly rotating at an arbitrary rate. If you have an
idea in mind how this could be done, please share.

Speaking of LRU-vs-PFN walk, iterating over PFNs has its own advantages:
 - You can distribute a walk in time to avoid CPU bursts.
 - You are free to parallelize the scanner as you wish to decrease the
   scan time.

Thanks,
Vladimir
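
To make the per-process case concrete: the PFNs to mark idle can be taken
from /proc/pid/pagemap, whose entries (per Documentation/vm/pagemap.txt)
carry the PFN in bits 0-54 and a present flag in bit 63. A sketch, not
from the patch set, with vaddr_to_pfn() as a hypothetical helper:

/* Sketch only: map one virtual page of a process to a PFN; the PFN then
 * indexes the kpageidle bitmap. Requires root. */
#include <fcntl.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

static uint64_t vaddr_to_pfn(pid_t pid, uint64_t vaddr)
{
	char path[64];
	uint64_t entry = 0;
	long psize = sysconf(_SC_PAGESIZE);
	int fd;

	snprintf(path, sizeof(path), "/proc/%d/pagemap", (int)pid);
	fd = open(path, O_RDONLY);
	if (fd < 0)
		return 0;
	/* One 8-byte entry per virtual page, indexed by vaddr / page size. */
	if (pread(fd, &entry, sizeof(entry),
		  (vaddr / psize) * sizeof(entry)) != sizeof(entry))
		entry = 0;
	close(fd);
	return entry >> 63 ? entry & ((1ULL << 55) - 1) : 0;
}

int main(int argc, char **argv)
{
	if (argc != 3)
		return 1;
	printf("pfn 0x%" PRIx64 "\n",
	       vaddr_to_pfn((pid_t)atoi(argv[1]), strtoull(argv[2], NULL, 0)));
	return 0;
}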


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 13:59   ` Vladimir Davydov
@ 2015-07-29 14:12     ` Michel Lespinasse
  2015-07-29 14:13       ` Michel Lespinasse
  2015-07-29 14:45       ` Vladimir Davydov
  2015-07-29 14:26     ` Michal Hocko
  1 sibling, 2 replies; 57+ messages in thread
From: Michel Lespinasse @ 2015-07-29 14:12 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Michal Hocko, Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel

On Wed, Jul 29, 2015 at 6:59 AM, Vladimir Davydov <vdavydov@parallels.com>
wrote:
>> I guess the primary reason to rely on the pfn rather than the LRU walk,
>> which would be more targeted (especially for memcg cases), is that we
>> cannot hold lru lock for the whole LRU walk and we cannot continue
>> walking after the lock is dropped. Maybe we can try to address that
>> instead? I do not think this is easy to achieve but have you considered
>> that as an option?
>
> Yes, I have, and I've come to a conclusion it's not doable, because LRU
> lists can be constantly rotating at an arbitrary rate. If you have an
> idea in mind how this could be done, please share.
>
> Speaking of LRU-vs-PFN walk, iterating over PFNs has its own advantages:
>  - You can distribute a walk in time to avoid CPU bursts.
>  - You are free to parallelize the scanner as you wish to decrease the
>    scan time.

There is a third way: one could go through every MM in the system and scan
their page tables. Doing things that way turns out to be generally faster
than scanning by physical address, because you don't have to go through
RMAP for every page. But, you end up needing to take the mmap_sem lock of
every MM (in turn) while scanning them, and that degrades quickly under
memory load, which is exactly when you most need this feature. So, scan by
address is still what we use here.

My only concern about the interface is that it exposes the fact that the
scan is done by address - if the interface only showed per-memcg totals, it
would make it possible to change the implementation underneath if we
somehow figure out how to work around the mmap_sem issue in the future. I
don't think that is necessarily a blocker but this is something to keep in
mind IMO.

-- 
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 14:12     ` Michel Lespinasse
@ 2015-07-29 14:13       ` Michel Lespinasse
  2015-07-29 14:45       ` Vladimir Davydov
  1 sibling, 0 replies; 57+ messages in thread
From: Michel Lespinasse @ 2015-07-29 14:13 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Michal Hocko, Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel

(resending as text, sorry for previous post which didn't make it to the ML)

On Wed, Jul 29, 2015 at 7:12 AM, Michel Lespinasse <walken@google.com> wrote:
>
> On Wed, Jul 29, 2015 at 6:59 AM, Vladimir Davydov <vdavydov@parallels.com> wrote:
> >> I guess the primary reason to rely on the pfn rather than the LRU walk,
> >> which would be more targeted (especially for memcg cases), is that we
> >> cannot hold lru lock for the whole LRU walk and we cannot continue
> >> walking after the lock is dropped. Maybe we can try to address that
> >> instead? I do not think this is easy to achieve but have you considered
> >> that as an option?
> >
> > Yes, I have, and I've come to a conclusion it's not doable, because LRU
> > lists can be constantly rotating at an arbitrary rate. If you have an
> > idea in mind how this could be done, please share.
> >
> > Speaking of LRU-vs-PFN walk, iterating over PFNs has its own advantages:
> >  - You can distribute a walk in time to avoid CPU bursts.
> >  - You are free to parallelize the scanner as you wish to decrease the
> >    scan time.
>
> There is a third way: one could go through every MM in the system and
> scan their page tables. Doing things that way turns out to be generally
> faster than scanning by physical address, because you don't have to go
> through RMAP for every page. But, you end up needing to take the mmap_sem
> lock of every MM (in turn) while scanning them, and that degrades quickly
> under memory load, which is exactly when you most need this feature. So,
> scan by address is still what we use here.
>
> My only concern about the interface is that it exposes the fact that the
> scan is done by address - if the interface only showed per-memcg totals,
> it would make it possible to change the implementation underneath if we
> somehow figure out how to work around the mmap_sem issue in the future.
> I don't think that is necessarily a blocker but this is something to keep
> in mind IMO.

--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 13:59   ` Vladimir Davydov
  2015-07-29 14:12     ` Michel Lespinasse
@ 2015-07-29 14:26     ` Michal Hocko
  2015-07-29 15:28       ` Vladimir Davydov
  1 sibling, 1 reply; 57+ messages in thread
From: Michal Hocko @ 2015-07-29 14:26 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Wed 29-07-15 16:59:07, Vladimir Davydov wrote:
> On Wed, Jul 29, 2015 at 02:36:30PM +0200, Michal Hocko wrote:
> > On Sun 19-07-15 15:31:09, Vladimir Davydov wrote:
> > [...]
> > > ---- USER API ----
> > > 
> > > The user API consists of two new proc files:
> > 
> > I was thinking about this for a while. I dislike the interface.  It is
> > quite awkward to use - e.g. you have to read the full memory to check the
> > idleness of a single memcg. This might turn out to be a problem especially
> > on large machines.
> 
> Yes, with this API estimating the wss of a single memory cgroup will
> cost almost as much as doing this for the whole system.
> 
> Come to think of it, does anyone really need to estimate idleness of one
> particular cgroup?

It is certainly interesting for setting the low limit.

> If we are doing this for finding an optimal memcg
> limits configuration or while considering a load move within a cluster
> (which I think are the primary use cases for the feature), we must do it
> system-wide to see the whole picture.
> 
> > It also provides very low-level information (per-pfn idleness) which
> > is inherently racy. Does anybody really require this level of detail?
> 
> Well, one might want to do it per-process, obtaining PFNs from
> /proc/pid/pagemap.

Sure, once the interface is exported you can do whatever ;) But my
question is whether any real use case _requires_ it.

> > I would assume that most users are interested only in a single number
> > which tells the idleness of the system/memcg.
> 
> Yes, that's what I need it for - estimating containers' wss for setting
> their limits accordingly.

So why don't we export the single per memcg and global knobs then?
This would have a few advantages. First of all, it would be much easier to
use, you wouldn't have to export memcg ids and finally the implementation
could be changed without any user visible changes (e.g. lru vs. pfn walks),
potential caching and who knows what. In other words: Michel had a
single number interface AFAIR; what was the primary reason to move away
from that API?

> > Well, you have mentioned a per-process reclaim but I am quite
> > skeptical about this.
> 
> This is what Minchan mentioned initially. Personally, I'm not going to
> use it per-process, but I wouldn't rule out this use case either.

Considering how many times we have been bitten by too broad interfaces I
would rather be conservative.

> > I guess the primary reason to rely on the pfn rather than the LRU walk,
> > which would be more targeted (especially for memcg cases), is that we
> > cannot hold lru lock for the whole LRU walk and we cannot continue
> > walking after the lock is dropped. Maybe we can try to address that
> > instead? I do not think this is easy to achieve but have you considered
> > that as an option?
> 
> Yes, I have, and I've come to a conclusion it's not doable, because LRU
> lists can be constantly rotating at an arbitrary rate. If you have an
> idea in mind how this could be done, please share.

Yes this is really tricky with the current LRU implementation. I
was playing with some ideas (do some checkpoints on the way) but
none of them was really working out on busy systems. But the LRU
implementation might change in the future. I didn't mean this as a hard
requirement; it just sounds like the current implementation restrictions
shape the user visible API, which is a good sign to think twice about it.

> Speaking of LRU-vs-PFN walk, iterating over PFNs has its own advantages:
>  - You can distribute a walk in time to avoid CPU bursts.

This would make the information even more volatile. I am not sure how
helpful it would be in the end.

>  - You are free to parallelize the scanner as you wish to decrease the
>    scan time.

This is true but you could argue similarly with per-node/lru threads if this
was implemented in the kernel and really needed. I am not sure it would
be really needed though. I would expect this would be a low priority
thing.
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 14:12     ` Michel Lespinasse
  2015-07-29 14:13       ` Michel Lespinasse
@ 2015-07-29 14:45       ` Vladimir Davydov
  2015-07-29 15:08         ` Michel Lespinasse
  2015-07-29 15:08         ` Michal Hocko
  1 sibling, 2 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-29 14:45 UTC (permalink / raw)
  To: Michel Lespinasse
  Cc: Michal Hocko, Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel

On Wed, Jul 29, 2015 at 07:12:13AM -0700, Michel Lespinasse wrote:
> On Wed, Jul 29, 2015 at 6:59 AM, Vladimir Davydov <vdavydov@parallels.com>
> wrote:
> >> I guess the primary reason to rely on the pfn rather than the LRU walk,
> >> which would be more targeted (especially for memcg cases), is that we
> >> cannot hold lru lock for the whole LRU walk and we cannot continue
> >> walking after the lock is dropped. Maybe we can try to address that
> >> instead? I do not think this is easy to achieve but have you considered
> >> that as an option?
> >
> > Yes, I have, and I've come to a conclusion it's not doable, because LRU
> > lists can be constantly rotating at an arbitrary rate. If you have an
> > idea in mind how this could be done, please share.
> >
> > Speaking of LRU-vs-PFN walk, iterating over PFNs has its own advantages:
> >  - You can distribute a walk in time to avoid CPU bursts.
> >  - You are free to parallelize the scanner as you wish to decrease the
> >    scan time.
> 
> There is a third way: one could go through every MM in the system and scan
> their page tables. Doing things that way turns out to be generally faster
> than scanning by physical address, because you don't have to go through
> RMAP for every page. But, you end up needing to take the mmap_sem lock of
> every MM (in turn) while scanning them, and that degrades quickly under
> memory load, which is exactly when you most need this feature. So, scan by
> address is still what we use here.

The page table scan approach has an inherent problem: it ignores unmapped
page cache. If a workload does a lot of read/write or map-access-unmap
operations, we won't be able to even roughly estimate its wss.

Thanks,
Vladimir


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 14:45       ` Vladimir Davydov
@ 2015-07-29 15:08         ` Michel Lespinasse
  2015-07-29 15:31           ` Vladimir Davydov
  2015-07-29 15:08         ` Michal Hocko
  1 sibling, 1 reply; 57+ messages in thread
From: Michel Lespinasse @ 2015-07-29 15:08 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Michal Hocko, Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel

On Wed, Jul 29, 2015 at 7:45 AM, Vladimir Davydov <vdavydov@parallels.com>
wrote:
> The page table scan approach has an inherent problem: it ignores unmapped
> page cache. If a workload does a lot of read/write or map-access-unmap
> operations, we won't be able to even roughly estimate its wss.

You can catch that in mark_page_accessed on those paths, though.

-- 
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 14:45       ` Vladimir Davydov
  2015-07-29 15:08         ` Michel Lespinasse
@ 2015-07-29 15:08         ` Michal Hocko
  2015-07-29 15:36           ` Vladimir Davydov
  1 sibling, 1 reply; 57+ messages in thread
From: Michal Hocko @ 2015-07-29 15:08 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Michel Lespinasse, Andrew Morton, Andres Lagar-Cavilla,
	Minchan Kim, Raghavendra K T, Johannes Weiner, Greg Thelen,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Wed 29-07-15 17:45:39, Vladimir Davydov wrote:
> On Wed, Jul 29, 2015 at 07:12:13AM -0700, Michel Lespinasse wrote:
> > On Wed, Jul 29, 2015 at 6:59 AM, Vladimir Davydov <vdavydov@parallels.com>
> > wrote:
> > >> I guess the primary reason to rely on the pfn rather than the LRU walk,
> > >> which would be more targeted (especially for memcg cases), is that we
> > >> cannot hold lru lock for the whole LRU walk and we cannot continue
> > >> walking after the lock is dropped. Maybe we can try to address that
> > >> instead? I do not think this is easy to achieve but have you considered
> > >> that as an option?
> > >
> > > Yes, I have, and I've come to a conclusion it's not doable, because LRU
> > > lists can be constantly rotating at an arbitrary rate. If you have an
> > > idea in mind how this could be done, please share.
> > >
> > > Speaking of LRU-vs-PFN walk, iterating over PFNs has its own advantages:
> > >  - You can distribute a walk in time to avoid CPU bursts.
> > >  - You are free to parallelize the scanner as you wish to decrease the
> > >    scan time.
> > 
> > There is a third way: one could go through every MM in the system and scan
> > their page tables. Doing things that way turns out to be generally faster
> > than scanning by physical address, because you don't have to go through
> > RMAP for every page. But, you end up needing to take the mmap_sem lock of
> > every MM (in turn) while scanning them, and that degrades quickly under
> > memory load, which is exactly when you most need this feature. So, scan by
> > address is still what we use here.
> 
> The page table scan approach has an inherent problem: it ignores unmapped
> page cache. If a workload does a lot of read/write or map-access-unmap
> operations, we won't be able to even roughly estimate its wss.

That page cache is trivially reclaimable if it is clean. If it needs
writeback then it is non-idle only until the next writeback. So why does
it matter for the estimation?

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 14:26     ` Michal Hocko
@ 2015-07-29 15:28       ` Vladimir Davydov
  2015-07-29 15:47         ` Michal Hocko
  2015-07-29 15:55         ` Andres Lagar-Cavilla
  0 siblings, 2 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-29 15:28 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Wed, Jul 29, 2015 at 04:26:19PM +0200, Michal Hocko wrote:
> On Wed 29-07-15 16:59:07, Vladimir Davydov wrote:
> > On Wed, Jul 29, 2015 at 02:36:30PM +0200, Michal Hocko wrote:
> > > On Sun 19-07-15 15:31:09, Vladimir Davydov wrote:
> > > [...]
> > > > ---- USER API ----
> > > > 
> > > > The user API consists of two new proc files:
> > > 
> > > I was thinking about this for a while. I dislike the interface.  It is
> > > quite awkward to use - e.g. you have to read the full memory to check the
> > > idleness of a single memcg. This might turn out to be a problem especially
> > > on large machines.
> > 
> > Yes, with this API estimating the wss of a single memory cgroup will
> > cost almost as much as doing this for the whole system.
> > 
> > Come to think of it, does anyone really need to estimate idleness of one
> > particular cgroup?
> 
> It is certainly interesting for setting the low limit.

Yes, but IMO there is no point in setting the low limit for one
particular cgroup w/o considering what's going on with the rest of the
system.

> 
> > If we are doing this for finding an optimal memcg
> > limits configuration or while considering a load move within a cluster
> > (which I think are the primary use cases for the feature), we must do it
> > system-wide to see the whole picture.
> > 
> > > It also provides very low-level information (per-pfn idleness) which
> > > is inherently racy. Does anybody really require this level of detail?
> > 
> > Well, one might want to do it per-process, obtaining PFNs from
> > /proc/pid/pagemap.
> 
> Sure, once the interface is exported you can do whatever ;) But my
> question is whether any real use case _requires_ it.

I only know/care about my use case, which is memcg configuration, but I
want to make the API as reusable as possible.

> 
> > > I would assume that most users are interested only in a single number
> > > which tells the idleness of the system/memcg.
> > 
> > Yes, that's what I need it for - estimating containers' wss for setting
> > their limits accordingly.
> 
> So why don't we export the single per memcg and global knobs then?
> This would have a few advantages. First of all, it would be much easier to
> use, you wouldn't have to export memcg ids and finally the implementation
> could be changed without any user visible changes (e.g. lru vs. pfn walks),
> potential caching and who knows what. In other words: Michel had a
> single number interface AFAIR; what was the primary reason to move away
> from that API?

Because there is too much to be taken care of in the kernel with such an
approach and chances are high that it won't satisfy everyone. What
should the scan period be equal to? Knob. How many kthreads do we want?
Knob. I want to keep history for the last N intervals (this was a part of
Michel's implementation), what should N be equal to? Knob. I want to be
able to choose between an instant scan and a scan distributed in time.
Knob. I want to see stats for anon/locked/file/dirty memory separately,
please add them to the API. You see the scale of the problem with doing
it in the kernel?

The API this patch set introduces is simple and fair. It only defines
what "idle" flag mean and gives you a way to flip it. That's it. You
wanna history? DIY. You wanna periodic scans? DIY. Etc.

> 
> > > Well, you have mentioned a per-process reclaim but I am quite
> > > skeptical about this.
> > 
> > This is what Minchan mentioned initially. Personally, I'm not going to
> > use it per-process, but I wouldn't rule out this use case either.
> 
> Considering how many times we have been bitten by too broad interfaces I
> would rather be conservative.

I consider an API "broad" when it tries to do a lot of different things.
sys_prctl is a good example of a broad API.

/proc/kpageidle is not broad, because it does just one thing (I hope it
does it good :). If we attempted to implement the scanner in the kernel
with all those tunables I mentioned above, then we would get a broad API
IMO.

> 
> > > I guess the primary reason to rely on the pfn rather than the LRU walk,
> > > which would be more targeted (especially for memcg cases), is that we
> > > cannot hold lru lock for the whole LRU walk and we cannot continue
> > > walking after the lock is dropped. Maybe we can try to address that
> > > instead? I do not think this is easy to achieve but have you considered
> > > that as an option?
> > 
> > Yes, I have, and I've come to a conclusion it's not doable, because LRU
> > lists can be constantly rotating at an arbitrary rate. If you have an
> > idea in mind how this could be done, please share.
> 
> Yes this is really tricky with the current LRU implementation. I
> was playing with some ideas (do some checkpoints on the way) but
> none of them was really working out on busy systems. But the LRU
> implementation might change in the future.

It might. Then we could come up with a new /proc or /sys file which
would do the same as /proc/kpageidle, but on a per-LRU^w whatever-it-is
basis, and give people a choice which one to use.

> I didn't mean this as a hard requirement; it just sounds like the
> current implementation restrictions shape the user visible API, which
> is a good sign to think twice about it.

Agree. That's why we are discussing it now :-)

> 
> > Speaking of LRU-vs-PFN walk, iterating over PFNs has its own advantages:
> >  - You can distribute a walk in time to avoid CPU bursts.
> 
> This would make the information even more volatile. I am not sure how
> helpful it would be in the end.

If you do it periodically, it is quite accurate.

> 
> >  - You are free to parallelize the scanner as you wish to decrease the
> >    scan time.
> 
> This is true but you could argue similarly with per-node/lru threads if this
> was implemented in the kernel and really needed. I am not sure it would
> be really needed though. I would expect this would be a low priority
> thing.

But if you needed it one day, you'd have to extend the kernel API. With
/proc/kpageidle, you just go and fix your program.

Thanks,
Vladimir


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 15:08         ` Michel Lespinasse
@ 2015-07-29 15:31           ` Vladimir Davydov
  2015-07-29 15:34             ` Michel Lespinasse
  0 siblings, 1 reply; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-29 15:31 UTC (permalink / raw)
  To: Michel Lespinasse
  Cc: Michal Hocko, Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel

On Wed, Jul 29, 2015 at 08:08:22AM -0700, Michel Lespinasse wrote:
> On Wed, Jul 29, 2015 at 7:45 AM, Vladimir Davydov <vdavydov@parallels.com>
> wrote:
> > The page table scan approach has an inherent problem - it ignores unmapped
> > page cache. If a workload does a lot of read/write or map-access-unmap
> > operations, we won't be able to even roughly estimate its wss.
> 
> You can catch that in mark_page_accessed on those paths, though.

Actually, the problem here is how to find an unmapped page cache page
*to mark it idle*, not to mark it accessed.

Thanks,
Vladimir


* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 15:31           ` Vladimir Davydov
@ 2015-07-29 15:34             ` Michel Lespinasse
  0 siblings, 0 replies; 57+ messages in thread
From: Michel Lespinasse @ 2015-07-29 15:34 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Michal Hocko, Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel

On Wed, Jul 29, 2015 at 8:31 AM, Vladimir Davydov <vdavydov@parallels.com>
wrote:

> On Wed, Jul 29, 2015 at 08:08:22AM -0700, Michel Lespinasse wrote:
> > On Wed, Jul 29, 2015 at 7:45 AM, Vladimir Davydov <
> vdavydov@parallels.com>
> > wrote:
> > > The page table scan approach has an inherent problem - it ignores unmapped
> > > page cache. If a workload does a lot of read/write or map-access-unmap
> > > operations, we won't be able to even roughly estimate its wss.
> >
> > You can catch that in mark_page_accessed on those paths, though.
>
> Actually, the problem here is how to find an unmapped page cache page
> *to mark it idle*, not to mark it accessed.
>

Ah, yes.

When I tried that I was still scanning memory by address at the end just to
compute such totals - but I did not have to do rmap at that point anymore.

It did look incredibly lame, though.

-- 
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.


* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 15:08         ` Michal Hocko
@ 2015-07-29 15:36           ` Vladimir Davydov
  2015-07-29 15:58             ` Michal Hocko
  0 siblings, 1 reply; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-29 15:36 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Michel Lespinasse, Andrew Morton, Andres Lagar-Cavilla,
	Minchan Kim, Raghavendra K T, Johannes Weiner, Greg Thelen,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Wed, Jul 29, 2015 at 05:08:55PM +0200, Michal Hocko wrote:
> On Wed 29-07-15 17:45:39, Vladimir Davydov wrote:
> > On Wed, Jul 29, 2015 at 07:12:13AM -0700, Michel Lespinasse wrote:
> > > On Wed, Jul 29, 2015 at 6:59 AM, Vladimir Davydov <vdavydov@parallels.com>
> > > wrote:
> > > >> I guess the primary reason to rely on the pfn rather than the LRU walk,
> > > >> which would be more targeted (especially for memcg cases), is that we
> > > >> cannot hold lru lock for the whole LRU walk and we cannot continue
> > > >> walking after the lock is dropped. Maybe we can try to address that
> > > >> instead? I do not think this is easy to achieve but have you considered
> > > >> that as an option?
> > > >
> > > > Yes, I have, and I've come to a conclusion it's not doable, because LRU
> > > > lists can be constantly rotating at an arbitrary rate. If you have an
> > > > idea in mind how this could be done, please share.
> > > >
> > > > Speaking of LRU-vs-PFN walk, iterating over PFNs has its own advantages:
> > > >  - You can distribute a walk in time to avoid CPU bursts.
> > > >  - You are free to parallelize the scanner as you wish to decrease the
> > > >    scan time.
> > > 
> > > There is a third way: one could go through every MM in the system and scan
> > > their page tables. Doing things that way turns out to be generally faster
> > > than scanning by physical address, because you don't have to go through
> > > RMAP for every page. But, you end up needing to take the mmap_sem lock of
> > > every MM (in turn) while scanning them, and that degrades quickly under
> > > memory load, which is exactly when you most need this feature. So, scan by
> > > address is still what we use here.
> > 
> > The page table scan approach has an inherent problem - it ignores unmapped
> > page cache. If a workload does a lot of read/write or map-access-unmap
> > operations, we won't be able to even roughly estimate its wss.
> 
> That page cache is trivially reclaimable if it is clean. If it needs
> writeback then it is non-idle only until the next writeback. So why does
> it matter for the estimation?

Because it might be a part of a workload's working set, in which case
evicting it will make the workload lag.

Thanks,
Vladimir


* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 15:28       ` Vladimir Davydov
@ 2015-07-29 15:47         ` Michal Hocko
  2015-07-29 16:29           ` Vladimir Davydov
  2015-07-29 15:55         ` Andres Lagar-Cavilla
  1 sibling, 1 reply; 57+ messages in thread
From: Michal Hocko @ 2015-07-29 15:47 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Wed 29-07-15 18:28:17, Vladimir Davydov wrote:
> On Wed, Jul 29, 2015 at 04:26:19PM +0200, Michal Hocko wrote:
> > On Wed 29-07-15 16:59:07, Vladimir Davydov wrote:
> > > On Wed, Jul 29, 2015 at 02:36:30PM +0200, Michal Hocko wrote:
> > > > On Sun 19-07-15 15:31:09, Vladimir Davydov wrote:
> > > > [...]
> > > > > ---- USER API ----
> > > > > 
> > > > > The user API consists of two new proc files:
> > > > 
> > > > I was thinking about this for a while. I dislike the interface.  It is
> > > > quite awkward to use - e.g. you have to read the full memory to check a
> > > > single memcg idleness. This might turn out being a problem especially on
> > > > large machines.
> > > 
> > > Yes, with this API estimating the wss of a single memory cgroup will
> > > cost almost as much as doing this for the whole system.
> > > 
> > > Come to think of it, does anyone really need to estimate idleness of one
> > > particular cgroup?
> > 
> > It is certainly interesting for setting the low limit.
> 
> Yes, but IMO there is no point in setting the low limit for one
> particular cgroup w/o considering what's going on with the rest of the
> system.

If you use the low limit for isolating an important load then you do not
have to care about the others that much. All you care about is to set
a reasonable protection level and let the others compete for the rest.

[...]
> > > > I would assume that most users are interested only in a single number
> > > > which tells the idleness of the system/memcg.
> > > 
> > > Yes, that's what I need it for - estimating containers' wss for setting
> > > their limits accordingly.
> > 
> > So why don't we export the single per memcg and global knobs then?
> > This would have a few advantages. First of all, it would be much easier to
> > use, you wouldn't have to export memcg ids and finally the implementation
> > could be changed without any user visible changes (e.g. lru vs. pfn walks),
> > potential caching and who knows what. In other words: Michel had a
> > single number interface AFAIR, what was the primary reason to move away
> > from that API?
> 
> Because there is too much to be taken care of in the kernel with such an
> approach and chances are high that it won't satisfy everyone. What
> should the scan period be equal to?

No, just gather the data on the read request and let the userspace
decide when/how often, etc. If we are clever enough, we can cache
the numbers and avoid the walk. Write to the file and do the
mark_idle stuff.

> Knob. How many kthreads do we want?
> Knob. I want to keep history for last N intervals (this was a part of
> Michel's implementation), what should N be equal to? Knob.

This all relates to the kernel thread implementation, which I wasn't
suggesting. I was thinking of Michel's work, which might imply that.
I was merely referring to a single-number output. Sorry about the
confusion.

> I want to be
> able to choose between an instant scan and a scan distributed in time.
> Knob. I want to see stats for anon/locked/file/dirty memory separately,

Why is this useful for the memcg limit setting or the wss estimation? I
can imagine that a further breakdown of the numbers might be interesting
from the debugging POV, but I fail to see what kind of decisions
userspace would make based on them.

[...]
> > Yes this is really tricky with the current LRU implementation. I
> > was playing with some ideas (do some checkpoints on the way) but
> > none of them was really working out on busy systems. But the LRU
> > implementation might change in the future.
> 
> It might. Then we could come up with a new /proc or /sys file which
> would do the same as /proc/kpageidle, but on per LRU^w whatever-it-is
> basis, and give people a choice which one to use.

This just leads to the proc file count explosion we are seeing
already... Proc ended up as a dumping ground for things which didn't
fit elsewhere, and I am not very happy about it, to be honest.

[...]
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 15:28       ` Vladimir Davydov
  2015-07-29 15:47         ` Michal Hocko
@ 2015-07-29 15:55         ` Andres Lagar-Cavilla
  2015-07-29 16:37           ` Vladimir Davydov
  1 sibling, 1 reply; 57+ messages in thread
From: Andres Lagar-Cavilla @ 2015-07-29 15:55 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Michal Hocko, Andrew Morton, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Greg Thelen, Michel Lespinasse, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel

On Wed, Jul 29, 2015 at 8:28 AM, Vladimir Davydov
<vdavydov@parallels.com> wrote:
> On Wed, Jul 29, 2015 at 04:26:19PM +0200, Michal Hocko wrote:
>> On Wed 29-07-15 16:59:07, Vladimir Davydov wrote:
>> > On Wed, Jul 29, 2015 at 02:36:30PM +0200, Michal Hocko wrote:
>> > > On Sun 19-07-15 15:31:09, Vladimir Davydov wrote:
>> > > [...]
>> > > > ---- USER API ----
>> > > >
>> > > > The user API consists of two new proc files:
>> > >
>> > > I was thinking about this for a while. I dislike the interface.  It is
>> > > quite awkward to use - e.g. you have to read the full memory to check a
>> > > single memcg idleness. This might turn out being a problem especially on
>> > > large machines.
>> >
>> > Yes, with this API estimating the wss of a single memory cgroup will
>> > cost almost as much as doing this for the whole system.
>> >
>> > Come to think of it, does anyone really need to estimate idleness of one
>> > particular cgroup?

You can always adorn memcg with a boolean, trivially configurable from
user-space, and have all the idle computation paths skip the code if
memcg->dont_care_about_idle.

>>
>> It is certainly interesting for setting the low limit.
>

Valuable, IMHO

> Yes, but IMO there is no point in setting the low limit for one
> particular cgroup w/o considering what's going on with the rest of the
> system.
>

Probably worth more fleshing out. Why not? Because global reclaim can
execute in any given context, so a noisy neighbor hurts all?

>>
>> > If we are doing this for finding an optimal memcg
>> > limits configuration or while considering a load move within a cluster
>> > (which I think are the primary use cases for the feature), we must do it
>> > system-wide to see the whole picture.
>> >
>> > > It also provides very low-level information (per-pfn idleness) which
>> > > is inherently racy. Does anybody really require this level of detail?
>> >

It's inherently racy for antagonistic workloads, but a lot of workloads
are very stable.

>> > Well, one might want to do it per-process, obtaining PFNs from
>> > /proc/pid/pagemap.
>>
>> Sure once the interface is exported you can do whatever ;) But my
>> question is whether any real usecase _requires_ it.
>
> I only know/care about my use case, which is memcg configuration, but I
> want to make the API as reusable as possible.
>
>>
>> > > I would assume that most users are interested only in a single number
>> > > which tells the idleness of the system/memcg.
>> >
>> > Yes, that's what I need it for - estimating containers' wss for setting
>> > their limits accordingly.
>>
>> So why don't we export the single per memcg and global knobs then?
>> This would have a few advantages. First of all, it would be much easier to
>> use, you wouldn't have to export memcg ids and finally the implementation
>> could be changed without any user visible changes (e.g. lru vs. pfn walks),
>> potential caching and who knows what. In other words: Michel had a
>> single number interface AFAIR, what was the primary reason to move away
>> from that API?
>
> Because there is too much to be taken care of in the kernel with such an
> approach and chances are high that it won't satisfy everyone. What
> should the scan period be equal to? Knob. How many kthreads do we want?
> Knob. I want to keep history for last N intervals (this was a part of
> Michel's implementation), what should N be equal to? Knob. I want to be
> able to choose between an instant scan and a scan distributed in time.
> Knob. I want to see stats for anon/locked/file/dirty memory separately,
> please add them to the API. Do you see the scale of the problem with doing
> it in the kernel?
>
> The API this patch set introduces is simple and fair. It only defines
> what "idle" flag mean and gives you a way to flip it. That's it. You
> wanna history? DIY. You wanna periodic scans? DIY. Etc.
>

FTR I'm happy that the subtle internals are built with this patchset,
and the DIY is very appealing.

Andres

>>
>> > > Well, you have mentioned a per-process reclaim but I am quite
>> > > skeptical about this.
>> >
>> > This is what Minchan mentioned initially. Personally, I'm not going to
>> > use it per-process, but I wouldn't rule out this use case either.
>>
>> Considering how many times we have been bitten by too broad interfaces I
>> would rather be conservative.
>
> I consider an API "broad" when it tries to do a lot of different things.
> sys_prctl is a good example of a broad API.
>
> /proc/kpageidle is not broad, because it does just one thing (I hope it
> does it well :). If we attempted to implement the scanner in the kernel
> with all those tunables I mentioned above, then we would get a broad API
> IMO.
>
>>
>> > > I guess the primary reason to rely on the pfn rather than the LRU walk,
>> > > which would be more targeted (especially for memcg cases), is that we
>> > > cannot hold lru lock for the whole LRU walk and we cannot continue
>> > > walking after the lock is dropped. Maybe we can try to address that
>> > > instead? I do not think this is easy to achieve but have you considered
>> > > that as an option?
>> >
>> > Yes, I have, and I've come to a conclusion it's not doable, because LRU
>> > lists can be constantly rotating at an arbitrary rate. If you have an
>> > idea in mind how this could be done, please share.
>>
>> Yes this is really tricky with the current LRU implementation. I
>> was playing with some ideas (do some checkpoints on the way) but
>> none of them was really working out on busy systems. But the LRU
>> implementation might change in the future.
>
> It might. Then we could come up with a new /proc or /sys file which
> would do the same as /proc/kpageidle, but on per LRU^w whatever-it-is
> basis, and give people a choice which one to use.
>
>> I didn't mean this as a hard requirement; it just sounds like the
>> current implementation restrictions shape the user-visible API, which
>> is a good sign to think twice about it.
>
> Agree. That's why we are discussing it now :-)
>
>>
>> > Speaking of LRU-vs-PFN walk, iterating over PFNs has its own advantages:
>> >  - You can distribute a walk in time to avoid CPU bursts.
>>
>> This would make the information even more volatile. I am not sure how
>> helpful it would be in the end.
>
> If you do it periodically, it is quite accurate.
>
>>
>> >  - You are free to parallelize the scanner as you wish to decrease the
>> >    scan time.
>>
>> This is true but you could argue similarly with per-node/lru threads if this
>> was implemented in the kernel and really needed. I am not sure it would
>> be really needed though. I would expect this would be a low priority
>> thing.
>
> But if you needed it one day, you'd have to extend the kernel API. With
> /proc/kpageidle, you just go and fix your program.
>
> Thanks,
> Vladimir



-- 
Andres Lagar-Cavilla | Google Kernel Team | andreslc@google.com


* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 15:36           ` Vladimir Davydov
@ 2015-07-29 15:58             ` Michal Hocko
  0 siblings, 0 replies; 57+ messages in thread
From: Michal Hocko @ 2015-07-29 15:58 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Michel Lespinasse, Andrew Morton, Andres Lagar-Cavilla,
	Minchan Kim, Raghavendra K T, Johannes Weiner, Greg Thelen,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Wed 29-07-15 18:36:40, Vladimir Davydov wrote:
> On Wed, Jul 29, 2015 at 05:08:55PM +0200, Michal Hocko wrote:
> > On Wed 29-07-15 17:45:39, Vladimir Davydov wrote:
[...]
> > > The page table scan approach has an inherent problem - it ignores unmapped
> > > page cache. If a workload does a lot of read/write or map-access-unmap
> > > operations, we won't be able to even roughly estimate its wss.
> > 
> > That page cache is trivially reclaimable if it is clean. If it needs
> > writeback then it is non-idle only until the next writeback. So why does
> > it matter for the estimation?
> 
> Because it might be a part of a workload's working set, in which case
> evicting it will make the workload lag.

My point was that no sane application will rely on the unmapped pagecache
being part of the working set. But you are right that you might have a
more complex load consisting of many applications, each doing buffered
IO on the same set of files, which might get evicted due to other memory
pressure in the meantime and see higher latencies. This is where a low
limit covering this memory as well might be helpful.

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 15:47         ` Michal Hocko
@ 2015-07-29 16:29           ` Vladimir Davydov
  2015-07-29 21:30             ` Andrew Morton
  2015-07-30  9:07             ` Michal Hocko
  0 siblings, 2 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-29 16:29 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Wed, Jul 29, 2015 at 05:47:18PM +0200, Michal Hocko wrote:
> On Wed 29-07-15 18:28:17, Vladimir Davydov wrote:
> > On Wed, Jul 29, 2015 at 04:26:19PM +0200, Michal Hocko wrote:
> > > On Wed 29-07-15 16:59:07, Vladimir Davydov wrote:
> > > > On Wed, Jul 29, 2015 at 02:36:30PM +0200, Michal Hocko wrote:
> > > > > On Sun 19-07-15 15:31:09, Vladimir Davydov wrote:
> > > > > [...]
> > > > > > ---- USER API ----
> > > > > > 
> > > > > > The user API consists of two new proc files:
> > > > > 
> > > > > I was thinking about this for a while. I dislike the interface.  It is
> > > > > quite awkward to use - e.g. you have to read the full memory to check a
> > > > > single memcg idleness. This might turn out being a problem especially on
> > > > > large machines.
> > > > 
> > > > Yes, with this API estimating the wss of a single memory cgroup will
> > > > cost almost as much as doing this for the whole system.
> > > > 
> > > > Come to think of it, does anyone really need to estimate idleness of one
> > > > particular cgroup?
> > > 
> > > It is certainly interesting for setting the low limit.
> > 
> > Yes, but IMO there is no point in setting the low limit for one
> > particular cgroup w/o considering what's going on with the rest of the
> > system.
> 
> If you use the low limit for isolating an important load then you do not
> have to care about the others that much. All you care about is to set
> a reasonable protection level and let the others compete for the rest.

That's a use case, you're right. Well, it's a natural limitation of this
API - you just have to perform a full PFN scan then. You can avoid
costly rmap walks for the cgroups you are not interested in by filtering
them out using /proc/kpagecgroup though.
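
As a sketch of that filtering (the helper is made up; /proc/kpagecgroup
holds one u64 memcg inode number per PFN, so this is a cheap sequential
read compared to the rmap walks kpageidle does):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>

/* Is the page at @pfn charged to the memcg with inode number @ino? */
static int pfn_in_memcg(int fd, unsigned long pfn, uint64_t ino)
{
	uint64_t v;

	if (pread(fd, &v, sizeof(v), pfn * sizeof(v)) != sizeof(v))
		return 0;
	return v == ino;
}

int main(int argc, char **argv)
{
	int fd = open("/proc/kpagecgroup", O_RDONLY);
	unsigned long pfn;
	uint64_t ino;

	if (argc < 2 || fd < 0)
		return 1;
	ino = strtoull(argv[1], NULL, 0);   /* memcg inode number */

	/* Only the matching PFNs need the costly kpageidle pass. */
	for (pfn = 0; pfn < (1UL << 20); pfn++)   /* illustrative range */
		if (pfn_in_memcg(fd, pfn, ino))
			printf("%lu\n", pfn);
	close(fd);
	return 0;
}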

> 
> [...]
> > > > > I would assume that most users are interested only in a single number
> > > > > which tells the idleness of the system/memcg.
> > > > 
> > > > Yes, that's what I need it for - estimating containers' wss for setting
> > > > their limits accordingly.
> > > 
> > > So why don't we export the single per memcg and global knobs then?
> > > This would have a few advantages. First of all, it would be much easier to
> > > use, you wouldn't have to export memcg ids and finally the implementation
> > > could be changed without any user visible changes (e.g. lru vs. pfn walks),
> > > potential caching and who knows what. In other words: Michel had a
> > > single number interface AFAIR, what was the primary reason to move away
> > > from that API?
> > 
> > Because there is too much to be taken care of in the kernel with such an
> > approach and chances are high that it won't satisfy everyone. What
> > should the scan period be equal to?
> 
> No, just gather the data on the read request and let the userspace
> decide when/how often, etc. If we are clever enough, we can cache
> the numbers and avoid the walk. Write to the file and do the
> mark_idle stuff.

Still, scan rate limiting would be an issue IMO.

> 
> > Knob. How many kthreads do we want?
> > Knob. I want to keep history for last N intervals (this was a part of
> > Michel's implementation), what should N be equal to? Knob.
> 
> This all relates to the kernel thread implementation, which I wasn't
> suggesting. I was thinking of Michel's work, which might imply that.
> I was merely referring to a single-number output. Sorry about the
> confusion.

Still, what about idle stats history? I mean having info about how many
pages were idle for N scans. It might be useful for more robust/accurate
wss estimation.

> 
> > I want to be
> > able to choose between an instant scan and a scan distributed in time.
> > Knob. I want to see stats for anon/locked/file/dirty memory separately,
> 
> Why is this useful for the memcg limit setting or the wss estimation? I
> can imagine that a further breakdown of the numbers might be interesting
> from the debugging POV, but I fail to see what kind of decisions
> userspace would make based on them.

A couple examples that pop up in my mind:

It's difficult to make wss estimation perfect. By mlocking pages, a
workload might give a hint to the system that it will be really unhappy
if they are evicted.

One might want to consider anon pages and/or dirty pages as not idle in
order to protect them and hence avoid expensive pageout/swapout.

> 
> [...]
> > > Yes this is really tricky with the current LRU implementation. I
> > > was playing with some ideas (do some checkpoints on the way) but
> > > none of them was really working out on busy systems. But the LRU
> > > implementation might change in the future.
> > 
> > It might. Then we could come up with a new /proc or /sys file which
> > would do the same as /proc/kpageidle, but on per LRU^w whatever-it-is
> > basis, and give people a choice which one to use.
> 
> This just leads to the proc file count explosion we are seeing
> already... Proc ended up as a dumping ground for things which didn't
> fit elsewhere, and I am not very happy about it, to be honest.

Moving the API to memcg is not a good idea either IMO, because the
feature can actually be useful with memcg disabled, e.g. it might help
estimate if the system is over- or underloaded.

/proc/kpageidle should probably live somewhere in /sys/kernel/mm, but I
added it where similar files are located (kpagecount, kpageflags) to
keep things consistent.

Thanks,
Vladimir


* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 15:55         ` Andres Lagar-Cavilla
@ 2015-07-29 16:37           ` Vladimir Davydov
  0 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-29 16:37 UTC (permalink / raw)
  To: Andres Lagar-Cavilla
  Cc: Michal Hocko, Andrew Morton, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Greg Thelen, Michel Lespinasse, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel

On Wed, Jul 29, 2015 at 08:55:01AM -0700, Andres Lagar-Cavilla wrote:
> On Wed, Jul 29, 2015 at 8:28 AM, Vladimir Davydov
> <vdavydov@parallels.com> wrote:
> > On Wed, Jul 29, 2015 at 04:26:19PM +0200, Michal Hocko wrote:
> >> On Wed 29-07-15 16:59:07, Vladimir Davydov wrote:
> >> > On Wed, Jul 29, 2015 at 02:36:30PM +0200, Michal Hocko wrote:
> >> > > On Sun 19-07-15 15:31:09, Vladimir Davydov wrote:
> >> > > [...]
> >> > > > ---- USER API ----
> >> > > >
> >> > > > The user API consists of two new proc files:
> >> > >
> >> > > I was thinking about this for a while. I dislike the interface.  It is
> >> > > quite awkward to use - e.g. you have to read the full memory to check a
> >> > > single memcg idleness. This might turn out being a problem especially on
> >> > > large machines.
> >> >
> >> > Yes, with this API estimating the wss of a single memory cgroup will
> >> > cost almost as much as doing this for the whole system.
> >> >
> >> > Come to think of it, does anyone really need to estimate idleness of one
> >> > particular cgroup?
> 
> You can always adorn memcg with a boolean, trivially configurable from
> user-space, and have all the idle computation paths skip the code if
> memcg->dont_care_about_idle

Or we can filter out cgroups in which we're not interested using
/proc/kpagecgroup.

> 
> >>
> >> It is certainly interesting for setting the low limit.
> >
> 
> Valuable, IMHO
> 
> > Yes, but IMO there is no point in setting the low limit for one
> > particular cgroup w/o considering what's going on with the rest of the
> > system.
> >
> 
> Probably worth more fleshing out. Why not? Because global reclaim can
> execute in any given context, so a noisy neighbor hurts all?

The low limit does not necessarily mean the cgroup will never get
pushed below it. It will, if others feel really bad.

Also, by setting the low limit too high, you can make others thrash
constantly, which will increase IO, which, in turn, might hurt the
workload you're trying to protect. Blkio cgroup might help in this case
though.

Thanks,
Vladimir


* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 16:29           ` Vladimir Davydov
@ 2015-07-29 21:30             ` Andrew Morton
  2015-07-30  9:12               ` Vladimir Davydov
  2015-07-30  9:07             ` Michal Hocko
  1 sibling, 1 reply; 57+ messages in thread
From: Andrew Morton @ 2015-07-29 21:30 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Michal Hocko, Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Greg Thelen, Michel Lespinasse, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel

On Wed, 29 Jul 2015 19:29:08 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:

> /proc/kpageidle should probably live somewhere in /sys/kernel/mm, but I
> added it where similar files are located (kpagecount, kpageflags) to
> keep things consistent.

I think these files should be moved elsewhere.  Consistency is good,
but not when we're being consistent with a bad thing.

So let's place these in /sys/kernel/mm and then start being consistent
with that?


* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 16:29           ` Vladimir Davydov
  2015-07-29 21:30             ` Andrew Morton
@ 2015-07-30  9:07             ` Michal Hocko
  2015-07-30  9:31               ` Vladimir Davydov
  1 sibling, 1 reply; 57+ messages in thread
From: Michal Hocko @ 2015-07-30  9:07 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Wed 29-07-15 19:29:08, Vladimir Davydov wrote:
> On Wed, Jul 29, 2015 at 05:47:18PM +0200, Michal Hocko wrote:
[...]
> > If you use the low limit for isolating an important load then you do not
> > have to care about the others that much. All you care about is to set
> > a reasonable protection level and let the others compete for the rest.
> 
> That's a use case, you're right. Well, it's a natural limitation of this
> API - you just have to perform a full PFN scan then. You can avoid
> costly rmap walks for the cgroups you are not interested in by filtering
> them out using /proc/kpagecgroup though.

You still have to read through the whole memory and that is inherent to
the API, and there is no way for a better implementation later on other than
a new exported file.

[...]

> > > Because there is too much to be taken care of in the kernel with such an
> > > approach and chances are high that it won't satisfy everyone. What
> > > should the scan period be equal to?
> > 
> > No, just gather the data on the read request and let the userspace
> > decide when/how often, etc. If we are clever enough, we can cache
> > the numbers and avoid the walk. Write to the file and do the
> > mark_idle stuff.
> 
> Still, scan rate limiting would be an issue IMO.

Not sure what you mean here. Scan rate would be defined by the userspace
by reading/writing to the knob. No background kernel thread is really
necessary.

> > > Knob. How many kthreads do we want?
> > > Knob. I want to keep history for last N intervals (this was a part of
> > > Michel's implementation), what should N be equal to? Knob.
> > 
> > This all relates to the kernel thread implementation, which I wasn't
> > suggesting. I was thinking of Michel's work, which might imply that.
> > I was merely referring to a single-number output. Sorry about the
> > confusion.
> 
> Still, what about idle stats history? I mean having info about how many
> pages were idle for N scans. It might be useful for more robust/accurate
> wss estimation.

Why cannot userspace remember those numbers?

> > > I want to be
> > > able to choose between an instant scan and a scan distributed in time.
> > > Knob. I want to see stats for anon/locked/file/dirty memory separately,
> > 
> > Why is this useful for the memcg limit setting or the wss estimation? I
> > can imagine that a further breakdown of the numbers might be interesting
> > from the debugging POV, but I fail to see what kind of decisions
> > userspace would make based on them.
> 
> A couple examples that pop up in my mind:
> 
> It's difficult to make wss estimation perfect. By mlocking pages, a
> workload might give a hint to the system that it will be really unhappy
> if they are evicted.
> 
> One might want to consider anon pages and/or dirty pages as not idle in
> order to protect them and hence avoid expensive pageout/swapout.

I still seem to be missing the point. How do you do that via the proposed
interface, which AFAIU doesn't influence the reclaim, when you do not have
the means to achieve the above (except for swappiness)? What am I missing?

> > [...]
> > > > Yes this is really tricky with the current LRU implementation. I
> > > > was playing with some ideas (do some checkpoints on the way) but
> > > > none of them was really working out on busy systems. But the LRU
> > > > implementation might change in the future.
> > > 
> > > It might. Then we could come up with a new /proc or /sys file which
> > > would do the same as /proc/kpageidle, but on per LRU^w whatever-it-is
> > > basis, and give people a choice which one to use.
> > 
> > This just leads to the proc file count explosion we are seeing
> > already... Proc ended up as a dumping ground for things which didn't
> > fit elsewhere, and I am not very happy about it, to be honest.
> 
> Moving the API to memcg is not a good idea either IMO, because the
> feature can actually be useful with memcg disabled, e.g. it might help
> estimate if the system is over- or underloaded.

I agree and that's why I was referring to memcg/global knobs.

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-29 21:30             ` Andrew Morton
@ 2015-07-30  9:12               ` Vladimir Davydov
  2015-07-30 13:01                 ` Vladimir Davydov
  0 siblings, 1 reply; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-30  9:12 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Hocko, Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Greg Thelen, Michel Lespinasse, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel

On Wed, Jul 29, 2015 at 02:30:15PM -0700, Andrew Morton wrote:
> On Wed, 29 Jul 2015 19:29:08 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:
> 
> > /proc/kpageidle should probably live somewhere in /sys/kernel/mm, but I
> > added it where similar files are located (kpagecount, kpageflags) to
> > keep things consistent.
> 
> I think these files should be moved elsewhere.  Consistency is good,
> but not when we're being consistent with a bad thing.
> 
> So let's place these in /sys/kernel/mm and then start being consistent
> with that?

I really don't think we should separate kpagecgroup from kpagecount and
kpageflags, because they look very similar (each of them is read-only and
contains an array of u64 values indexed by PFN). Scattering these
files between different filesystems would look ugly IMO.

However, kpageidle is somewhat different (it's read-write and contains a
bitmap), so I think it's worth moving it to /sys/kernel/mm. We then have
to move the code from fs/proc to mm/something to avoid an unnecessary
dependency on PROC_FS. Let me give it a try.

Thanks,
Vladimir


* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-30  9:07             ` Michal Hocko
@ 2015-07-30  9:31               ` Vladimir Davydov
  0 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-30  9:31 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Andres Lagar-Cavilla, Minchan Kim,
	Raghavendra K T, Johannes Weiner, Greg Thelen, Michel Lespinasse,
	David Rientjes, Pavel Emelyanov, Cyrill Gorcunov,
	Jonathan Corbet, linux-api, linux-doc, linux-mm, cgroups,
	linux-kernel

On Thu, Jul 30, 2015 at 11:07:09AM +0200, Michal Hocko wrote:
> On Wed 29-07-15 19:29:08, Vladimir Davydov wrote:
> > On Wed, Jul 29, 2015 at 05:47:18PM +0200, Michal Hocko wrote:
> [...]
> > > If you use the low limit for isolating an important load then you do not
> > > have to care about the others that much. All you care about is to set
> > > a reasonable protection level and let the others compete for the rest.
> > 
> > That's a use case, you're right. Well, it's a natural limitation of this
> > API - you just have to perform a full PFN scan then. You can avoid
> > costly rmap walks for the cgroups you are not interested in by filtering
> > them out using /proc/kpagecgroup though.
> 
> You still have to read through the whole memory and that is inherent to
> the API, and there is no way for a better implementation later on other than
> a new exported file.

I don't deny that. Nevertheless, a PFN walk is something that will always
be useful, simply because the PFN range is an invariant - it will always
exist. If one day a better page iterator appears (e.g. an LRU walk) and
the need for it is justified well enough, we can add one more file. Note,
it won't deprecate the original PFN map - they can both be used for
different use cases then. If we move kpageidle to the /sys/kernel/mm attr
group, which I'm doing now, it will be trivial to do and won't pollute
/proc.

> 
> [...]
> 
> > > > Because there is too much to be taken care of in the kernel with such an
> > > > approach and chances are high that it won't satisfy everyone. What
> > > > should the scan period be equal to?
> > > 
> > > No, just gather the data on the read request and let the userspace
> > > decide when/how often, etc. If we are clever enough, we can cache
> > > the numbers and avoid the walk. Write to the file and do the
> > > mark_idle stuff.
> > 
> > Still, scan rate limiting would be an issue IMO.
> 
> Not sure what you mean here. Scan rate would be defined by the userspace
> by reading/writing to the knob. No background kernel thread is really
> necessary.

Nevertheless, it means more logic in the kernel (rate limiter) and a
wider interface (+ rate limit value).

> 
> > > > Knob. How many kthreads do we want?
> > > > Knob. I want to keep history for last N intervals (this was a part of
> > > > Michel's implementation), what should N be equal to? Knob.
> > > 
> > > This all relates to the kernel thread implementation, which I wasn't
> > > suggesting. I was thinking of Michel's work, which might imply that.
> > > I was merely referring to a single-number output. Sorry about the
> > > confusion.
> > 
> > Still, what about idle stats history? I mean having info about how many
> > pages were idle for N scans. It might be useful for more robust/accurate
> > wss estimation.
> 
> Why cannot userspace remember those numbers?

Because they must be per-page - you have to remember for how many
periods *each particular* page has been idle. To achieve this, Michel
had to introduce a byte array referenced by PFN in his work. With the
kpageidle file, one can keep this array in userspace.
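
Roughly like this (sizes, interval, and the number of scans are invented
for the example; errors unhandled):

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

#define NR_PFNS (1UL << 20)              /* illustrative */

static uint8_t idle_age[NR_PFNS];        /* scans each page stayed idle */
static uint64_t bitmap[NR_PFNS / 64];

static void mark_all_idle(int fd)
{
	memset(bitmap, 0xff, sizeof(bitmap));
	pwrite(fd, bitmap, sizeof(bitmap), 0);
}

int main(void)
{
	int fd = open("/proc/kpageidle", O_RDWR);
	unsigned long pfn;
	int scan;

	if (fd < 0)
		return 1;

	mark_all_idle(fd);
	for (scan = 0; scan < 10; scan++) {      /* N = 10 scans */
		sleep(60);
		pread(fd, bitmap, sizeof(bitmap), 0);
		for (pfn = 0; pfn < NR_PFNS; pfn++) {
			if (bitmap[pfn / 64] & (1ULL << (pfn % 64)))
				idle_age[pfn]++;         /* still idle */
			else
				idle_age[pfn] = 0;       /* was referenced */
		}
		mark_all_idle(fd);
	}
	/* idle_age[] now holds the per-page history, all in userspace. */
	return 0;
}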

> 
> > > > I want to be
> > > > able to choose between an instant scan and a scan distributed in time.
> > > > Knob. I want to see stats for anon/locked/file/dirty memory separately,
> > > 
> > > Why is this useful for the memcg limit setting or the wss estimation? I
> > > can imagine that a further breakdown of the numbers might be interesting
> > > from the debugging POV, but I fail to see what kind of decisions
> > > userspace would make based on them.
> > 
> > A couple examples that pop up in my mind:
> > 
> > It's difficult to make wss estimation perfect. By mlocking pages, a
> > workload might give a hint to the system that it will be really unhappy
> > if they are evicted.
> > 
> > One might want to consider anon pages and/or dirty pages as not idle in
> > order to protect them and hence avoid expensive pageout/swapout.
> 
> I still seem to be missing the point. How do you do that via the proposed
> interface, which AFAIU doesn't influence the reclaim, when you do not have
> the means to achieve the above (except for swappiness)? What am I missing?

You can consider idle only those pages that are clean, and then set the
low limit appropriately for your workload. You can find out which pages
are clean by reading /proc/kpageflags. Of course, this won't stop the
reclaimer from evicting them, but it will make the reclaimer less
aggressive with respect to your workload.
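
For example, a userspace check along these lines (the bit numbers are as
documented in Documentation/vm/pagemap.txt; the helper itself is made up,
and since the Idle flag in kpageflags may be stale, kpageidle should be
read first):

#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>

#define KPF_DIRTY      4   /* bit numbers per Documentation/vm/pagemap.txt */
#define KPF_WRITEBACK  8
#define KPF_IDLE      25

/*
 * Treat a page as idle only if the Idle flag is set and the page is
 * neither dirty nor under writeback, so that dirty data is counted as
 * part of the working set and ends up protected by the low limit.
 */
static int page_clean_and_idle(int kpageflags_fd, unsigned long pfn)
{
	uint64_t flags;

	if (pread(kpageflags_fd, &flags, sizeof(flags),
		  pfn * sizeof(flags)) != sizeof(flags))
		return 0;
	return (flags & (1ULL << KPF_IDLE)) &&
	       !(flags & ((1ULL << KPF_DIRTY) | (1ULL << KPF_WRITEBACK)));
}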

Thanks,
Vladimir


* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-30  9:12               ` Vladimir Davydov
@ 2015-07-30 13:01                 ` Vladimir Davydov
  2015-07-31  9:34                   ` Vladimir Davydov
  0 siblings, 1 reply; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-30 13:01 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Hocko, Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Greg Thelen, Michel Lespinasse, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel

On Thu, Jul 30, 2015 at 12:12:12PM +0300, Vladimir Davydov wrote:
> On Wed, Jul 29, 2015 at 02:30:15PM -0700, Andrew Morton wrote:
> > On Wed, 29 Jul 2015 19:29:08 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:
> > 
> > > /proc/kpageidle should probably live somewhere in /sys/kernel/mm, but I
> > > added it where similar files are located (kpagecount, kpageflags) to
> > > keep things consistent.
> > 
> > I think these files should be moved elsewhere.  Consistency is good,
> > but not when we're being consistent with a bad thing.
> > 
> > So let's place these in /sys/kernel/mm and then start being consistent
> > with that?
> 
> I really don't think we should separate kpagecgroup from kpagecount and
> kpageflags, because they look very similar (each of them is read-only and
> contains an array of u64 values indexed by PFN). Scattering these
> files between different filesystems would look ugly IMO.
> 
> However, kpageidle is somewhat different (it's read-write and contains a
> bitmap), so I think it's worth moving it to /sys/kernel/mm. We then have
> to move the code from fs/proc to mm/something to avoid an unnecessary
> dependency on PROC_FS. Let me give it a try.

Here it goes:

From: Vladimir Davydov <vdavydov@parallels.com>
Subject: [PATCH] Move /proc/kpageidle to /sys/kernel/mm/page_idle/bitmap

Since IDLE_PAGE_TRACKING does not need to depend on PROC_FS anymore,
this patch also moves the code from fs/proc/page.c to mm/page_idle.c and
introduces a dedicated header file include/linux/page_idle.h.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>

diff --git a/Documentation/vm/idle_page_tracking.txt b/Documentation/vm/idle_page_tracking.txt
index d0f332d544c4..85dcc3bb85dc 100644
--- a/Documentation/vm/idle_page_tracking.txt
+++ b/Documentation/vm/idle_page_tracking.txt
@@ -6,10 +6,12 @@ estimating the workload's working set size, which, in turn, can be taken into
 account when configuring the workload parameters, setting memory cgroup limits,
 or deciding where to place the workload within a compute cluster.
 
+It is enabled by CONFIG_IDLE_PAGE_TRACKING=y.
+
 USER API
 
-If CONFIG_IDLE_PAGE_TRACKING was enabled on compile time, a new read-write file
-is present on the proc filesystem, /proc/kpageidle.
+The idle page tracking API is located at /sys/kernel/mm/page_idle. Currently,
+it consists of a single read-write file, /sys/kernel/mm/page_idle/bitmap.
 
 The file implements a bitmap where each bit corresponds to a memory page. The
 bitmap is represented by an array of 8-byte integers, and the page at PFN #i is
@@ -30,24 +32,25 @@ and hence such pages are never reported idle.
 For huge pages the idle flag is set only on the head page, so one has to read
 /proc/kpageflags in order to correctly count idle huge pages.
 
-Reading from or writing to /proc/kpageidle will return -EINVAL if you are not
-starting the read/write on an 8-byte boundary, or if the size of the read/write
-is not a multiple of 8 bytes. Writing to this file beyond max PFN will return
--ENXIO.
+Reading from or writing to /sys/kernel/mm/page_idle/bitmap will return
+-EINVAL if you are not starting the read/write on an 8-byte boundary, or
+if the size of the read/write is not a multiple of 8 bytes. Writing to
+this file beyond max PFN will return -ENXIO.
 
 That said, in order to estimate the amount of pages that are not used by a
 workload one should:
 
- 1. Mark all the workload's pages as idle by setting corresponding bits in the
-    /proc/kpageidle bitmap. The pages can be found by reading /proc/pid/pagemap
-    if the workload is represented by a process, or by filtering out alien pages
-    using /proc/kpagecgroup in case the workload is placed in a memory cgroup.
+ 1. Mark all the workload's pages as idle by setting corresponding bits in
+    /sys/kernel/mm/page_idle/bitmap. The pages can be found by reading
+    /proc/pid/pagemap if the workload is represented by a process, or by
+    filtering out alien pages using /proc/kpagecgroup in case the workload is
+    placed in a memory cgroup.
 
  2. Wait until the workload accesses its working set.
 
- 3. Read /proc/kpageidle and count the number of bits set. If one wants to
-    ignore certain types of pages, e.g. mlocked pages since they are not
-    reclaimable, he or she can filter them out using /proc/kpageflags.
+ 3. Read /sys/kernel/mm/page_idle/bitmap and count the number of bits set. If
+    one wants to ignore certain types of pages, e.g. mlocked pages since they
+    are not reclaimable, he or she can filter them out using /proc/kpageflags.
 
 See Documentation/vm/pagemap.txt for more information about /proc/pid/pagemap,
 /proc/kpageflags, and /proc/kpagecgroup.
@@ -74,8 +77,9 @@ When a dirty page is written to swap or disk as a result of memory reclaim or
 exceeding the dirty memory limit, it is not marked referenced.
 
 The idle memory tracking feature adds a new page flag, the Idle flag. This flag
-is set manually, by writing to /proc/kpageidle (see the USER API section), and
-cleared automatically whenever a page is referenced as defined above.
+is set manually, by writing to /sys/kernel/mm/page_idle/bitmap (see the USER API
+section), and cleared automatically whenever a page is referenced as defined
+above.
 
 When a page is marked idle, the Accessed bit must be cleared in all PTEs it is
 mapped to, otherwise we will not be able to detect accesses to the page coming
@@ -90,5 +94,5 @@ Since the idle memory tracking feature is based on the memory reclaimer logic,
 it only works with pages that are on an LRU list, other pages are silently
 ignored. That means it will ignore a user memory page if it is isolated, but
 since there are usually not many of them, it should not affect the overall
-result noticeably. In order not to stall scanning of /proc/kpageidle, locked
-pages may be skipped too.
+result noticeably. In order not to stall scanning of the idle page bitmap,
+locked pages may be skipped too.
diff --git a/Documentation/vm/pagemap.txt b/Documentation/vm/pagemap.txt
index 8ed148d17c2e..0e1e55588b59 100644
--- a/Documentation/vm/pagemap.txt
+++ b/Documentation/vm/pagemap.txt
@@ -5,7 +5,7 @@ pagemap is a new (as of 2.6.25) set of interfaces in the kernel that allow
 userspace programs to examine the page tables and related information by
 reading files in /proc.
 
-There are five components to pagemap:
+There are four components to pagemap:
 
  * /proc/pid/pagemap.  This file lets a userspace process find out which
    physical frame each virtual page is mapped to.  It contains one 64-bit
@@ -76,9 +76,6 @@ There are five components to pagemap:
    memory cgroup each page is charged to, indexed by PFN. Only available when
    CONFIG_MEMCG is set.
 
- * /proc/kpageidle.  This file comprises API of the idle page tracking feature.
-   See Documentation/vm/idle_page_tracking.txt for more details.
-
 Short descriptions to the page flags:
 
  0. LOCKED
@@ -125,9 +122,10 @@ Short descriptions to the page flags:
     zero page for pfn_zero or huge_zero page
 
 25. IDLE
-    page has not been accessed since it was marked idle (see /proc/kpageidle)
-    Note that this flag may be stale in case the page was accessed via a PTE.
-    To make sure the flag is up-to-date one has to read /proc/kpageidle first.
+    page has not been accessed since it was marked idle (see
+    Documentation/vm/idle_page_tracking.txt). Note that this flag may be
+    stale in case the page was accessed via a PTE. To make sure the flag
+    is up-to-date one has to read /sys/kernel/mm/page_idle/bitmap first.
 
     [IO related page flags]
  1. ERROR     IO error occurred
diff --git a/fs/proc/page.c b/fs/proc/page.c
index 4191ddb79b84..72c604b876e4 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -5,20 +5,18 @@
 #include <linux/ksm.h>
 #include <linux/mm.h>
 #include <linux/mmzone.h>
-#include <linux/rmap.h>
-#include <linux/mmu_notifier.h>
 #include <linux/huge_mm.h>
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
 #include <linux/hugetlb.h>
 #include <linux/memcontrol.h>
+#include <linux/page_idle.h>
 #include <linux/kernel-page-flags.h>
 #include <asm/uaccess.h>
 #include "internal.h"
 
 #define KPMSIZE sizeof(u64)
 #define KPMMASK (KPMSIZE - 1)
-#define KPMBITS (KPMSIZE * BITS_PER_BYTE)
 
 /* /proc/kpagecount - an array exposing page counts
  *
@@ -287,223 +285,6 @@ static const struct file_operations proc_kpagecgroup_operations = {
 };
 #endif /* CONFIG_MEMCG */
 
-#ifdef CONFIG_IDLE_PAGE_TRACKING
-/*
- * Idle page tracking only considers user memory pages, for other types of
- * pages the idle flag is always unset and an attempt to set it is silently
- * ignored.
- *
- * We treat a page as a user memory page if it is on an LRU list, because it is
- * always safe to pass such a page to rmap_walk(), which is essential for idle
- * page tracking. With such an indicator of user pages we can skip isolated
- * pages, but since there are not usually many of them, it will hardly affect
- * the overall result.
- *
- * This function tries to get a user memory page by pfn as described above.
- */
-static struct page *kpageidle_get_page(unsigned long pfn)
-{
-	struct page *page;
-	struct zone *zone;
-
-	if (!pfn_valid(pfn))
-		return NULL;
-
-	page = pfn_to_page(pfn);
-	if (!page || !PageLRU(page) ||
-	    !get_page_unless_zero(page))
-		return NULL;
-
-	zone = page_zone(page);
-	spin_lock_irq(&zone->lru_lock);
-	if (unlikely(!PageLRU(page))) {
-		put_page(page);
-		page = NULL;
-	}
-	spin_unlock_irq(&zone->lru_lock);
-	return page;
-}
-
-static int kpageidle_clear_pte_refs_one(struct page *page,
-					struct vm_area_struct *vma,
-					unsigned long addr, void *arg)
-{
-	struct mm_struct *mm = vma->vm_mm;
-	spinlock_t *ptl;
-	pmd_t *pmd;
-	pte_t *pte;
-	bool referenced = false;
-
-	if (unlikely(PageTransHuge(page))) {
-		pmd = page_check_address_pmd(page, mm, addr,
-					     PAGE_CHECK_ADDRESS_PMD_FLAG, &ptl);
-		if (pmd) {
-			referenced = pmdp_clear_young_notify(vma, addr, pmd);
-			spin_unlock(ptl);
-		}
-	} else {
-		pte = page_check_address(page, mm, addr, &ptl, 0);
-		if (pte) {
-			referenced = ptep_clear_young_notify(vma, addr, pte);
-			pte_unmap_unlock(pte, ptl);
-		}
-	}
-	if (referenced) {
-		clear_page_idle(page);
-		/*
-		 * We cleared the referenced bit in a mapping to this page. To
-		 * avoid interference with page reclaim, mark it young so that
-		 * page_referenced() will return > 0.
-		 */
-		set_page_young(page);
-	}
-	return SWAP_AGAIN;
-}
-
-static void kpageidle_clear_pte_refs(struct page *page)
-{
-	/*
-	 * Since rwc.arg is unused, rwc is effectively immutable, so we
-	 * can make it static const to save some cycles and stack.
-	 */
-	static const struct rmap_walk_control rwc = {
-		.rmap_one = kpageidle_clear_pte_refs_one,
-		.anon_lock = page_lock_anon_vma_read,
-	};
-	bool need_lock;
-
-	if (!page_mapped(page) ||
-	    !page_rmapping(page))
-		return;
-
-	need_lock = !PageAnon(page) || PageKsm(page);
-	if (need_lock && !trylock_page(page))
-		return;
-
-	rmap_walk(page, (struct rmap_walk_control *)&rwc);
-
-	if (need_lock)
-		unlock_page(page);
-}
-
-static ssize_t kpageidle_read(struct file *file, char __user *buf,
-			      size_t count, loff_t *ppos)
-{
-	u64 __user *out = (u64 __user *)buf;
-	struct page *page;
-	unsigned long pfn, end_pfn;
-	ssize_t ret = 0;
-	u64 idle_bitmap = 0;
-	int bit;
-
-	if (*ppos & KPMMASK || count & KPMMASK)
-		return -EINVAL;
-
-	pfn = *ppos * BITS_PER_BYTE;
-	if (pfn >= max_pfn)
-		return 0;
-
-	end_pfn = pfn + count * BITS_PER_BYTE;
-	if (end_pfn > max_pfn)
-		end_pfn = ALIGN(max_pfn, KPMBITS);
-
-	for (; pfn < end_pfn; pfn++) {
-		bit = pfn % KPMBITS;
-		page = kpageidle_get_page(pfn);
-		if (page) {
-			if (page_is_idle(page)) {
-				/*
-				 * The page might have been referenced via a
-				 * pte, in which case it is not idle. Clear
-				 * refs and recheck.
-				 */
-				kpageidle_clear_pte_refs(page);
-				if (page_is_idle(page))
-					idle_bitmap |= 1ULL << bit;
-			}
-			put_page(page);
-		}
-		if (bit == KPMBITS - 1) {
-			if (put_user(idle_bitmap, out)) {
-				ret = -EFAULT;
-				break;
-			}
-			idle_bitmap = 0;
-			out++;
-		}
-		cond_resched();
-	}
-
-	*ppos += (char __user *)out - buf;
-	if (!ret)
-		ret = (char __user *)out - buf;
-	return ret;
-}
-
-static ssize_t kpageidle_write(struct file *file, const char __user *buf,
-			       size_t count, loff_t *ppos)
-{
-	const u64 __user *in = (const u64 __user *)buf;
-	struct page *page;
-	unsigned long pfn, end_pfn;
-	ssize_t ret = 0;
-	u64 idle_bitmap = 0;
-	int bit;
-
-	if (*ppos & KPMMASK || count & KPMMASK)
-		return -EINVAL;
-
-	pfn = *ppos * BITS_PER_BYTE;
-	if (pfn >= max_pfn)
-		return -ENXIO;
-
-	end_pfn = pfn + count * BITS_PER_BYTE;
-	if (end_pfn > max_pfn)
-		end_pfn = ALIGN(max_pfn, KPMBITS);
-
-	for (; pfn < end_pfn; pfn++) {
-		bit = pfn % KPMBITS;
-		if (bit == 0) {
-			if (copy_from_user(&idle_bitmap, in, sizeof(u64))) {
-				ret = -EFAULT;
-				break;
-			}
-			in++;
-		}
-		if ((idle_bitmap >> bit) & 1) {
-			page = kpageidle_get_page(pfn);
-			if (page) {
-				kpageidle_clear_pte_refs(page);
-				set_page_idle(page);
-				put_page(page);
-			}
-		}
-		cond_resched();
-	}
-
-	*ppos += (const char __user *)in - buf;
-	if (!ret)
-		ret = (const char __user *)in - buf;
-	return ret;
-}
-
-static const struct file_operations proc_kpageidle_operations = {
-	.llseek = mem_lseek,
-	.read = kpageidle_read,
-	.write = kpageidle_write,
-};
-
-#ifndef CONFIG_64BIT
-static bool need_page_idle(void)
-{
-	return true;
-}
-struct page_ext_operations page_idle_ops = {
-	.need = need_page_idle,
-};
-#endif
-#endif /* CONFIG_IDLE_PAGE_TRACKING */
-
 static int __init proc_page_init(void)
 {
 	proc_create("kpagecount", S_IRUSR, NULL, &proc_kpagecount_operations);
@@ -511,10 +292,6 @@ static int __init proc_page_init(void)
 #ifdef CONFIG_MEMCG
 	proc_create("kpagecgroup", S_IRUSR, NULL, &proc_kpagecgroup_operations);
 #endif
-#ifdef CONFIG_IDLE_PAGE_TRACKING
-	proc_create("kpageidle", S_IRUSR | S_IWUSR, NULL,
-		    &proc_kpageidle_operations);
-#endif
 	return 0;
 }
 fs_initcall(proc_page_init);
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 7c9a17414106..bdd7e48a85f0 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -13,6 +13,7 @@
 #include <linux/swap.h>
 #include <linux/swapops.h>
 #include <linux/mmu_notifier.h>
+#include <linux/page_idle.h>
 
 #include <asm/elf.h>
 #include <asm/uaccess.h>
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5e08787c92df..363ea2cda35f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2199,103 +2199,5 @@ void __init setup_nr_node_ids(void);
 static inline void setup_nr_node_ids(void) {}
 #endif
 
-#ifdef CONFIG_IDLE_PAGE_TRACKING
-#ifdef CONFIG_64BIT
-static inline bool page_is_young(struct page *page)
-{
-	return PageYoung(page);
-}
-
-static inline void set_page_young(struct page *page)
-{
-	SetPageYoung(page);
-}
-
-static inline bool test_and_clear_page_young(struct page *page)
-{
-	return TestClearPageYoung(page);
-}
-
-static inline bool page_is_idle(struct page *page)
-{
-	return PageIdle(page);
-}
-
-static inline void set_page_idle(struct page *page)
-{
-	SetPageIdle(page);
-}
-
-static inline void clear_page_idle(struct page *page)
-{
-	ClearPageIdle(page);
-}
-#else /* !CONFIG_64BIT */
-/*
- * If there is not enough space to store Idle and Young bits in page flags, use
- * page ext flags instead.
- */
-extern struct page_ext_operations page_idle_ops;
-
-static inline bool page_is_young(struct page *page)
-{
-	return test_bit(PAGE_EXT_YOUNG, &lookup_page_ext(page)->flags);
-}
-
-static inline void set_page_young(struct page *page)
-{
-	set_bit(PAGE_EXT_YOUNG, &lookup_page_ext(page)->flags);
-}
-
-static inline bool test_and_clear_page_young(struct page *page)
-{
-	return test_and_clear_bit(PAGE_EXT_YOUNG,
-				  &lookup_page_ext(page)->flags);
-}
-
-static inline bool page_is_idle(struct page *page)
-{
-	return test_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
-}
-
-static inline void set_page_idle(struct page *page)
-{
-	set_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
-}
-
-static inline void clear_page_idle(struct page *page)
-{
-	clear_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
-}
-#endif /* CONFIG_64BIT */
-#else /* !CONFIG_IDLE_PAGE_TRACKING */
-static inline bool page_is_young(struct page *page)
-{
-	return false;
-}
-
-static inline void set_page_young(struct page *page)
-{
-}
-
-static inline bool test_and_clear_page_young(struct page *page)
-{
-	return false;
-}
-
-static inline bool page_is_idle(struct page *page)
-{
-	return false;
-}
-
-static inline void set_page_idle(struct page *page)
-{
-}
-
-static inline void clear_page_idle(struct page *page)
-{
-}
-#endif /* CONFIG_IDLE_PAGE_TRACKING */
-
 #endif /* __KERNEL__ */
 #endif /* _LINUX_MM_H */
diff --git a/include/linux/page_idle.h b/include/linux/page_idle.h
new file mode 100644
index 000000000000..bf268fa92c5b
--- /dev/null
+++ b/include/linux/page_idle.h
@@ -0,0 +1,110 @@
+#ifndef _LINUX_MM_PAGE_IDLE_H
+#define _LINUX_MM_PAGE_IDLE_H
+
+#include <linux/bitops.h>
+#include <linux/page-flags.h>
+#include <linux/page_ext.h>
+
+#ifdef CONFIG_IDLE_PAGE_TRACKING
+
+#ifdef CONFIG_64BIT
+static inline bool page_is_young(struct page *page)
+{
+	return PageYoung(page);
+}
+
+static inline void set_page_young(struct page *page)
+{
+	SetPageYoung(page);
+}
+
+static inline bool test_and_clear_page_young(struct page *page)
+{
+	return TestClearPageYoung(page);
+}
+
+static inline bool page_is_idle(struct page *page)
+{
+	return PageIdle(page);
+}
+
+static inline void set_page_idle(struct page *page)
+{
+	SetPageIdle(page);
+}
+
+static inline void clear_page_idle(struct page *page)
+{
+	ClearPageIdle(page);
+}
+#else /* !CONFIG_64BIT */
+/*
+ * If there is not enough space to store Idle and Young bits in page flags, use
+ * page ext flags instead.
+ */
+extern struct page_ext_operations page_idle_ops;
+
+static inline bool page_is_young(struct page *page)
+{
+	return test_bit(PAGE_EXT_YOUNG, &lookup_page_ext(page)->flags);
+}
+
+static inline void set_page_young(struct page *page)
+{
+	set_bit(PAGE_EXT_YOUNG, &lookup_page_ext(page)->flags);
+}
+
+static inline bool test_and_clear_page_young(struct page *page)
+{
+	return test_and_clear_bit(PAGE_EXT_YOUNG,
+				  &lookup_page_ext(page)->flags);
+}
+
+static inline bool page_is_idle(struct page *page)
+{
+	return test_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
+}
+
+static inline void set_page_idle(struct page *page)
+{
+	set_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
+}
+
+static inline void clear_page_idle(struct page *page)
+{
+	clear_bit(PAGE_EXT_IDLE, &lookup_page_ext(page)->flags);
+}
+#endif /* CONFIG_64BIT */
+
+#else /* !CONFIG_IDLE_PAGE_TRACKING */
+
+static inline bool page_is_young(struct page *page)
+{
+	return false;
+}
+
+static inline void set_page_young(struct page *page)
+{
+}
+
+static inline bool test_and_clear_page_young(struct page *page)
+{
+	return false;
+}
+
+static inline bool page_is_idle(struct page *page)
+{
+	return false;
+}
+
+static inline void set_page_idle(struct page *page)
+{
+}
+
+static inline void clear_page_idle(struct page *page)
+{
+}
+
+#endif /* CONFIG_IDLE_PAGE_TRACKING */
+
+#endif /* _LINUX_MM_PAGE_IDLE_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index 7482b60e927f..fe133a98a9ef 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -651,8 +651,7 @@ config DEFERRED_STRUCT_PAGE_INIT
 
 config IDLE_PAGE_TRACKING
 	bool "Enable idle page tracking"
-	depends on PROC_FS && MMU
-	select PROC_PAGE_MONITOR
+	depends on SYSFS && MMU
 	select PAGE_EXTENSION if !64BIT
 	help
 	  This feature allows to estimate the amount of user pages that have
diff --git a/mm/Makefile b/mm/Makefile
index b424d5e5b6ff..56f8eed73f1a 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -79,3 +79,4 @@ obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
 obj-$(CONFIG_PAGE_EXTENSION) += page_ext.o
 obj-$(CONFIG_CMA_DEBUGFS) += cma_debug.o
 obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
+obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index aa58a326d238..1ce276ac4e0c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -25,6 +25,7 @@
 #include <linux/migrate.h>
 #include <linux/hashtable.h>
 #include <linux/userfaultfd_k.h>
+#include <linux/page_idle.h>
 
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
diff --git a/mm/migrate.c b/mm/migrate.c
index d86cec005aa6..1f887f594cc6 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -37,6 +37,7 @@
 #include <linux/gfp.h>
 #include <linux/balloon_compaction.h>
 #include <linux/mmu_notifier.h>
+#include <linux/page_idle.h>
 
 #include <asm/tlbflush.h>
 
diff --git a/mm/page_ext.c b/mm/page_ext.c
index e4b3af054bf2..292ca7b8debd 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -6,6 +6,7 @@
 #include <linux/vmalloc.h>
 #include <linux/kmemleak.h>
 #include <linux/page_owner.h>
+#include <linux/page_idle.h>
 
 /*
  * struct page extension
diff --git a/mm/page_idle.c b/mm/page_idle.c
new file mode 100644
index 000000000000..d5dd79041484
--- /dev/null
+++ b/mm/page_idle.c
@@ -0,0 +1,232 @@
+#include <linux/init.h>
+#include <linux/bootmem.h>
+#include <linux/fs.h>
+#include <linux/sysfs.h>
+#include <linux/kobject.h>
+#include <linux/mm.h>
+#include <linux/mmzone.h>
+#include <linux/pagemap.h>
+#include <linux/rmap.h>
+#include <linux/mmu_notifier.h>
+#include <linux/page_ext.h>
+#include <linux/page_idle.h>
+
+#define BITMAP_CHUNK_SIZE	sizeof(u64)
+#define BITMAP_CHUNK_BITS	(BITMAP_CHUNK_SIZE * BITS_PER_BYTE)
+
+/*
+ * Idle page tracking only considers user memory pages, for other types of
+ * pages the idle flag is always unset and an attempt to set it is silently
+ * ignored.
+ *
+ * We treat a page as a user memory page if it is on an LRU list, because it is
+ * always safe to pass such a page to rmap_walk(), which is essential for idle
+ * page tracking. With such an indicator of user pages we can skip isolated
+ * pages, but since there are not usually many of them, it will hardly affect
+ * the overall result.
+ *
+ * This function tries to get a user memory page by pfn as described above.
+ */
+static struct page *page_idle_get_page(unsigned long pfn)
+{
+	struct page *page;
+	struct zone *zone;
+
+	if (!pfn_valid(pfn))
+		return NULL;
+
+	page = pfn_to_page(pfn);
+	if (!page || !PageLRU(page) ||
+	    !get_page_unless_zero(page))
+		return NULL;
+
+	zone = page_zone(page);
+	spin_lock_irq(&zone->lru_lock);
+	if (unlikely(!PageLRU(page))) {
+		put_page(page);
+		page = NULL;
+	}
+	spin_unlock_irq(&zone->lru_lock);
+	return page;
+}
+
+static int page_idle_clear_pte_refs_one(struct page *page,
+					struct vm_area_struct *vma,
+					unsigned long addr, void *arg)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	spinlock_t *ptl;
+	pmd_t *pmd;
+	pte_t *pte;
+	bool referenced = false;
+
+	if (unlikely(PageTransHuge(page))) {
+		pmd = page_check_address_pmd(page, mm, addr,
+					     PAGE_CHECK_ADDRESS_PMD_FLAG, &ptl);
+		if (pmd) {
+			referenced = pmdp_clear_young_notify(vma, addr, pmd);
+			spin_unlock(ptl);
+		}
+	} else {
+		pte = page_check_address(page, mm, addr, &ptl, 0);
+		if (pte) {
+			referenced = ptep_clear_young_notify(vma, addr, pte);
+			pte_unmap_unlock(pte, ptl);
+		}
+	}
+	if (referenced) {
+		clear_page_idle(page);
+		/*
+		 * We cleared the referenced bit in a mapping to this page. To
+		 * avoid interference with page reclaim, mark it young so that
+		 * page_referenced() will return > 0.
+		 */
+		set_page_young(page);
+	}
+	return SWAP_AGAIN;
+}
+
+static void page_idle_clear_pte_refs(struct page *page)
+{
+	/*
+	 * Since rwc.arg is unused, rwc is effectively immutable, so we
+	 * can make it static const to save some cycles and stack.
+	 */
+	static const struct rmap_walk_control rwc = {
+		.rmap_one = page_idle_clear_pte_refs_one,
+		.anon_lock = page_lock_anon_vma_read,
+	};
+	bool need_lock;
+
+	if (!page_mapped(page) ||
+	    !page_rmapping(page))
+		return;
+
+	need_lock = !PageAnon(page) || PageKsm(page);
+	if (need_lock && !trylock_page(page))
+		return;
+
+	rmap_walk(page, (struct rmap_walk_control *)&rwc);
+
+	if (need_lock)
+		unlock_page(page);
+}
+
+static ssize_t page_idle_bitmap_read(struct file *file, struct kobject *kobj,
+				     struct bin_attribute *attr, char *buf,
+				     loff_t pos, size_t count)
+{
+	u64 *out = (u64 *)buf;
+	struct page *page;
+	unsigned long pfn, end_pfn;
+	int bit;
+
+	if (pos % BITMAP_CHUNK_SIZE || count % BITMAP_CHUNK_SIZE)
+		return -EINVAL;
+
+	pfn = pos * BITS_PER_BYTE;
+	if (pfn >= max_pfn)
+		return 0;
+
+	end_pfn = pfn + count * BITS_PER_BYTE;
+	if (end_pfn > max_pfn)
+		end_pfn = ALIGN(max_pfn, BITMAP_CHUNK_BITS);
+
+	for (; pfn < end_pfn; pfn++) {
+		bit = pfn % BITMAP_CHUNK_BITS;
+		if (!bit)
+			*out = 0ULL;
+		page = page_idle_get_page(pfn);
+		if (page) {
+			if (page_is_idle(page)) {
+				/*
+				 * The page might have been referenced via a
+				 * pte, in which case it is not idle. Clear
+				 * refs and recheck.
+				 */
+				page_idle_clear_pte_refs(page);
+				if (page_is_idle(page))
+					*out |= 1ULL << bit;
+			}
+			put_page(page);
+		}
+		if (bit == BITMAP_CHUNK_BITS - 1)
+			out++;
+		cond_resched();
+	}
+	return (char *)out - buf;
+}
+
+static ssize_t page_idle_bitmap_write(struct file *file, struct kobject *kobj,
+				      struct bin_attribute *attr, char *buf,
+				      loff_t pos, size_t count)
+{
+	const u64 *in = (u64 *)buf;
+	struct page *page;
+	unsigned long pfn, end_pfn;
+	int bit;
+
+	if (pos % BITMAP_CHUNK_SIZE || count % BITMAP_CHUNK_SIZE)
+		return -EINVAL;
+
+	pfn = pos * BITS_PER_BYTE;
+	if (pfn >= max_pfn)
+		return -ENXIO;
+
+	end_pfn = pfn + count * BITS_PER_BYTE;
+	if (end_pfn > max_pfn)
+		end_pfn = ALIGN(max_pfn, BITMAP_CHUNK_BITS);
+
+	for (; pfn < end_pfn; pfn++) {
+		bit = pfn % BITMAP_CHUNK_BITS;
+		if ((*in >> bit) & 1) {
+			page = page_idle_get_page(pfn);
+			if (page) {
+				page_idle_clear_pte_refs(page);
+				set_page_idle(page);
+				put_page(page);
+			}
+		}
+		if (bit == BITMAP_CHUNK_BITS - 1)
+			in++;
+		cond_resched();
+	}
+	return (char *)in - buf;
+}
+
+static struct bin_attribute page_idle_bitmap_attr =
+		__BIN_ATTR(bitmap, S_IRUSR | S_IWUSR,
+			   page_idle_bitmap_read, page_idle_bitmap_write, 0);
+
+static struct bin_attribute *page_idle_bin_attrs[] = {
+	&page_idle_bitmap_attr,
+	NULL,
+};
+
+static struct attribute_group page_idle_attr_group = {
+	.bin_attrs = page_idle_bin_attrs,
+	.name = "page_idle",
+};
+
+#ifndef CONFIG_64BIT
+static bool need_page_idle(void)
+{
+	return true;
+}
+struct page_ext_operations page_idle_ops = {
+	.need = need_page_idle,
+};
+#endif
+
+static int __init page_idle_init(void)
+{
+	int err;
+
+	err = sysfs_create_group(mm_kobj, &page_idle_attr_group);
+	if (err) {
+		pr_err("page_idle: register sysfs failed\n");
+		return err;
+	}
+	return 0;
+}
+subsys_initcall(page_idle_init);
diff --git a/mm/rmap.c b/mm/rmap.c
index b6db6a676f6f..dcaad464aab0 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -59,6 +59,7 @@
 #include <linux/migrate.h>
 #include <linux/hugetlb.h>
 #include <linux/backing-dev.h>
+#include <linux/page_idle.h>
 
 #include <asm/tlbflush.h>
 
diff --git a/mm/swap.c b/mm/swap.c
index db43c9b4891d..4a6aec976ab1 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -32,6 +32,7 @@
 #include <linux/gfp.h>
 #include <linux/uio.h>
 #include <linux/hugetlb.h>
+#include <linux/page_idle.h>
 
 #include "internal.h"
 

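For illustration only, here is a minimal userspace sketch of the interface
added above; it is not part of the patch. It assumes the
/sys/kernel/mm/page_idle/bitmap path and the chunk layout implemented by
page_idle_bitmap_read/write: bit i of the 64-bit chunk at byte offset
8 * k corresponds to pfn 64 * k + i, and both pos and count must be
multiples of 8 bytes. Root is needed given the S_IRUSR | S_IWUSR mode.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* demo only: mark pfns 0..63 idle, wait, then re-read */
	uint64_t chunk = ~0ULL;
	int fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);

	if (fd < 0)
		return 1;
	/* a set bit asks the kernel to mark that pfn idle;
	 * zero bits and non-LRU pfns are silently ignored */
	if (pwrite(fd, &chunk, sizeof(chunk), 0) != sizeof(chunk))
		return 1;
	sleep(60);	/* let the workload touch its working set */
	if (pread(fd, &chunk, sizeof(chunk), 0) != sizeof(chunk))
		return 1;
	/* bits still set belong to pages that were not referenced */
	printf("idle pages among pfns 0-63: %d\n",
	       __builtin_popcountll(chunk));
	close(fd);
	return 0;
}

Note that a write can only set idle bits; zero bits in the written chunk
are ignored, so the idle flag is cleared only when the kernel observes an
access, as page_idle_clear_pte_refs_one does above.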


* Re: [PATCH -mm v9 0/8] idle memory tracking
  2015-07-30 13:01                 ` Vladimir Davydov
@ 2015-07-31  9:34                   ` Vladimir Davydov
  0 siblings, 0 replies; 57+ messages in thread
From: Vladimir Davydov @ 2015-07-31  9:34 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Hocko, Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T,
	Johannes Weiner, Greg Thelen, Michel Lespinasse, David Rientjes,
	Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-api,
	linux-doc, linux-mm, cgroups, linux-kernel

On Thu, Jul 30, 2015 at 04:01:22PM +0300, Vladimir Davydov wrote:
> On Thu, Jul 30, 2015 at 12:12:12PM +0300, Vladimir Davydov wrote:
> > On Wed, Jul 29, 2015 at 02:30:15PM -0700, Andrew Morton wrote:
> > > On Wed, 29 Jul 2015 19:29:08 +0300 Vladimir Davydov <vdavydov@parallels.com> wrote:
> > > 
> > > > /proc/kpageidle should probably live somewhere in /sys/kernel/mm, but I
> > > > added it where similar files are located (kpagecount, kpageflags) to
> > > > keep things consistent.
> > > 
> > > I think these files should be moved elsewhere.  Consistency is good,
> > > but not when we're being consistent with a bad thing.
> > > 
> > > So let's place these in /sys/kernel/mm and then start being consistent
> > > with that?
> > 
> > I really don't think we should separate kpagecgroup from kpagecount and
> > kpageflags, because they look very similar (each of them is read-only and
> > contains an array of u64 values indexed by PFN). Scattering these
> > files across different filesystems would look ugly IMO.
> > 
> > However, kpageidle is somewhat different (it's read-write and contains a
> > bitmap), so I think it's worth moving it to /sys/kernel/mm. That means
> > moving the code from fs/proc to somewhere under mm/ so that the otherwise
> > unnecessary dependency on PROC_FS can be dropped. Let me give it a try.
> 
> Here it goes:
> 
> From: Vladimir Davydov <vdavydov@parallels.com>
> Subject: [PATCH] Move /proc/kpageidle to /sys/kernel/mm/page_idle/bitmap

Since it is rather difficult to merge this into proc-add-kpageidle, should
I resend the whole series with all fixes included, provided you find
this patch OK, of course?

Thanks,
Vladimir
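
To make the proc/sysfs split in the quoted discussion concrete, below is
a hedged sketch that combines the two interfaces: /proc/kpagecgroup, a
read-only array with one u64 cgroup inode per pfn, and the page_idle
bitmap with one bit per pfn packed into u64 chunks. The helper name
mark_cgroup_idle and the minimal error handling are illustrative
assumptions, not code from the series.

#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* Mark idle every page charged to the memory cgroup whose directory
 * inode number is 'ino', scanning pfns 0..max_pfn-1. */
int mark_cgroup_idle(uint64_t ino, unsigned long max_pfn)
{
	int kpc = open("/proc/kpagecgroup", O_RDONLY);
	int idle = open("/sys/kernel/mm/page_idle/bitmap", O_WRONLY);
	uint64_t chunk = 0;
	unsigned long pfn;

	if (kpc < 0 || idle < 0)
		return -1;
	for (pfn = 0; pfn < max_pfn; pfn++) {
		uint64_t page_ino;

		/* kpagecgroup holds one u64 cgroup inode per pfn */
		if (pread(kpc, &page_ino, 8, pfn * 8) != 8)
			break;
		if (page_ino == ino)
			chunk |= 1ULL << (pfn % 64);
		/* flush each complete 64-pfn chunk to the bitmap */
		if (pfn % 64 == 63) {
			if (chunk &&
			    pwrite(idle, &chunk, 8, pfn / 64 * 8) != 8)
				break;
			chunk = 0;
		}
	}
	/* a trailing partial chunk is omitted for brevity */
	close(kpc);
	close(idle);
	return 0;
}

int main(int argc, char **argv)
{
	if (argc != 3)
		return 2;
	return mark_cgroup_idle(strtoull(argv[1], NULL, 0),
				strtoul(argv[2], NULL, 0)) ? 1 : 0;
}

The cgroup inode can be taken from stat(2) on the memcg directory;
rescanning later and reading the bitmap back instead of writing it gives
the number of pages of that cgroup that stayed idle.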



Thread overview: 57+ messages
2015-07-19 12:31 [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
2015-07-19 12:31 ` [PATCH -mm v9 1/8] memcg: add page_cgroup_ino helper Vladimir Davydov
2015-07-21 23:34   ` Andrew Morton
2015-07-22  9:21     ` Vladimir Davydov
2015-07-19 12:31 ` [PATCH -mm v9 2/8] hwpoison: use page_cgroup_ino for filtering by memcg Vladimir Davydov
2015-07-21 23:34   ` Andrew Morton
2015-07-22  9:45     ` Vladimir Davydov
2015-07-19 12:31 ` [PATCH -mm v9 3/8] memcg: zap try_get_mem_cgroup_from_page Vladimir Davydov
2015-07-19 12:31 ` [PATCH -mm v9 4/8] proc: add kpagecgroup file Vladimir Davydov
2015-07-21 23:34   ` Andrew Morton
2015-07-22 10:33     ` Vladimir Davydov
2015-07-19 12:31 ` [PATCH -mm v9 5/8] mmu-notifier: add clear_young callback Vladimir Davydov
2015-07-20 18:34   ` Andres Lagar-Cavilla
2015-07-21  8:51     ` Vladimir Davydov
2015-07-22 16:33       ` Vladimir Davydov
2015-07-19 12:31 ` [PATCH -mm v9 6/8] proc: add kpageidle file Vladimir Davydov
2015-07-21 23:34   ` Andrew Morton
2015-07-22 15:20     ` Vladimir Davydov
2015-07-24 14:08   ` Paul Gortmaker
2015-07-24 14:17     ` Vladimir Davydov
2015-07-19 12:31 ` [PATCH -mm v9 7/8] proc: export idle flag via kpageflags Vladimir Davydov
2015-07-21 23:35   ` Andrew Morton
2015-07-22 16:25     ` Vladimir Davydov
2015-07-22 19:44       ` Andrew Morton
2015-07-22 20:46         ` Andres Lagar-Cavilla
2015-07-23  7:57           ` Vladimir Davydov
2015-07-19 12:31 ` [PATCH -mm v9 8/8] proc: add cond_resched to /proc/kpage* read/write loop Vladimir Davydov
2015-07-19 12:37 ` [PATCH -mm v9 0/8] idle memory tracking Vladimir Davydov
2015-07-21 21:39 ` Andres Lagar-Cavilla
2015-07-21 23:34 ` Andrew Morton
2015-07-22 16:23   ` Vladimir Davydov
2015-07-25 16:24     ` Vladimir Davydov
2015-07-27 19:18   ` Kees Cook
2015-07-27 19:25     ` Andrew Morton
2015-07-29 12:36 ` Michal Hocko
2015-07-29 13:59   ` Vladimir Davydov
2015-07-29 14:12     ` Michel Lespinasse
2015-07-29 14:13       ` Michel Lespinasse
2015-07-29 14:45       ` Vladimir Davydov
2015-07-29 15:08         ` Michel Lespinasse
2015-07-29 15:31           ` Vladimir Davydov
2015-07-29 15:34             ` Michel Lespinasse
2015-07-29 15:08         ` Michal Hocko
2015-07-29 15:36           ` Vladimir Davydov
2015-07-29 15:58             ` Michal Hocko
2015-07-29 14:26     ` Michal Hocko
2015-07-29 15:28       ` Vladimir Davydov
2015-07-29 15:47         ` Michal Hocko
2015-07-29 16:29           ` Vladimir Davydov
2015-07-29 21:30             ` Andrew Morton
2015-07-30  9:12               ` Vladimir Davydov
2015-07-30 13:01                 ` Vladimir Davydov
2015-07-31  9:34                   ` Vladimir Davydov
2015-07-30  9:07             ` Michal Hocko
2015-07-30  9:31               ` Vladimir Davydov
2015-07-29 15:55         ` Andres Lagar-Cavilla
2015-07-29 16:37           ` Vladimir Davydov
