linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v6 0/3] cachestat: a new syscall for page cache state of files
@ 2023-01-17 19:59 Nhat Pham
  2023-01-17 19:59 ` [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check Nhat Pham
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Nhat Pham @ 2023-01-17 19:59 UTC (permalink / raw)
  To: akpm
  Cc: hannes, linux-mm, linux-kernel, bfoster, willy, linux-api, kernel-team

Changelog:
v6:
  * Add a missing fdput() (suggested by Brian Foster) (patch 2)
  * Replace cstat_size with cstat_version (suggested by Brian Foster)
    (patch 2)
  * Add conditional resched to the xas walk. (suggested by Hillf Danton) 
    (patch 2)
v5:
  * Separate first patch into its own series.
    (suggested by Andrew Morton)
  * Expose filemap_cachestat() to non-syscall usage
    (patch 2) (suggested by Brian Foster).
  * Fix some build errors from last version.
    (patch 2)
  * Explain eviction and recent eviction in the draft man page and
    documentation (suggested by Andrew Morton).
    (patch 2)
v4:
  * Refactor cachestat and move it to mm/filemap.c (patch 3)
    (suggested by Brian Foster)
  * Remove redundant checks (!folio, access_ok)
    (patch 3) (suggested by Matthew Wilcox and Al Viro)
  * Fix a bug in handling multipages folio.
    (patch 3) (suggested by Matthew Wilcox)
  * Add a selftest for shmem files, which can be used to test huge
    pages (patch 4) (suggested by Johannes Weiner)
v3:
  * Fix some minor formatting issues and build errors.
  * Add the new syscall entry to missing architecture syscall tables.
    (patch 3).
  * Add flags argument for the syscall. (patch 3).
  * Clean up the recency refactoring (patch 2) (suggested by Yu Zhao)
  * Add the new Kconfig (CONFIG_CACHESTAT) to disable the syscall.
    (patch 3) (suggested by Josh Triplett)
v2:
  * len == 0 means query to EOF. len < 0 is invalid.
    (patch 3) (suggested by Brian Foster)
  * Make cachestat extensible by adding the `cstat_size` argument in the
    syscall (patch 3)

There is currently no good way to query the page cache state of large
file sets and directory trees. There is mincore(), but it scales poorly:
the kernel writes out a lot of bitmap data that userspace has to
aggregate, when the user really doesn not care about per-page information
in that case. The user also needs to mmap and unmap each file as it goes
along, which can be quite slow as well.

This series of patches introduces a new system call, cachestat, that
summarizes the page cache statistics (number of cached pages, dirty
pages, pages marked for writeback, evicted pages etc.) of a file, in a
specified range of bytes. It also include a selftest suite that tests some
typical usage

This interface is inspired by past discussion and concerns with fincore,
which has a similar design (and as a result, issues) as mincore.
Relevant links:

https://lkml.indiana.edu/hypermail/linux/kernel/1302.1/04207.html
https://lkml.indiana.edu/hypermail/linux/kernel/1302.1/04209.html

For comparison with mincore, I ran both syscalls on a 2TB sparse file:

Using mincore:
real    0m37.510s
user    0m2.934s
sys     0m34.558s

Using cachestat:
real    0m0.009s
user    0m0.000s
sys     0m0.009s

This series should be applied on top of:

workingset: fix confusion around eviction vs refault container
https://lkml.org/lkml/2023/1/4/1066

This series consist of 3 patches:

Nhat Pham (3):
  workingset: refactor LRU refault to expose refault recency check
  cachestat: implement cachestat syscall
  selftests: Add selftests for cachestat

 MAINTAINERS                                   |   7 +
 arch/alpha/kernel/syscalls/syscall.tbl        |   1 +
 arch/arm/tools/syscall.tbl                    |   1 +
 arch/ia64/kernel/syscalls/syscall.tbl         |   1 +
 arch/m68k/kernel/syscalls/syscall.tbl         |   1 +
 arch/microblaze/kernel/syscalls/syscall.tbl   |   1 +
 arch/parisc/kernel/syscalls/syscall.tbl       |   1 +
 arch/powerpc/kernel/syscalls/syscall.tbl      |   1 +
 arch/s390/kernel/syscalls/syscall.tbl         |   1 +
 arch/sh/kernel/syscalls/syscall.tbl           |   1 +
 arch/sparc/kernel/syscalls/syscall.tbl        |   1 +
 arch/x86/entry/syscalls/syscall_32.tbl        |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl        |   1 +
 arch/xtensa/kernel/syscalls/syscall.tbl       |   1 +
 include/linux/fs.h                            |   3 +
 include/linux/swap.h                          |   1 +
 include/linux/syscalls.h                      |   4 +
 include/uapi/asm-generic/unistd.h             |   5 +-
 include/uapi/linux/mman.h                     |   9 +
 init/Kconfig                                  |  10 +
 kernel/sys_ni.c                               |   1 +
 mm/filemap.c                                  | 154 +++++++++++
 mm/workingset.c                               | 129 ++++++---
 tools/testing/selftests/Makefile              |   1 +
 tools/testing/selftests/cachestat/.gitignore  |   2 +
 tools/testing/selftests/cachestat/Makefile    |   8 +
 .../selftests/cachestat/test_cachestat.c      | 260 ++++++++++++++++++
 27 files changed, 568 insertions(+), 39 deletions(-)
 create mode 100644 tools/testing/selftests/cachestat/.gitignore
 create mode 100644 tools/testing/selftests/cachestat/Makefile
 create mode 100644 tools/testing/selftests/cachestat/test_cachestat.c


base-commit: 1440f576022887004f719883acb094e7e0dd4944
prerequisite-patch-id: 171a43d333e1b267ce14188a5beaea2f313787fb
-- 
2.30.2

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check
  2023-01-17 19:59 [PATCH v6 0/3] cachestat: a new syscall for page cache state of files Nhat Pham
@ 2023-01-17 19:59 ` Nhat Pham
  2023-01-20 14:34   ` Brian Foster
  2023-01-17 19:59 ` [PATCH v6 2/3] cachestat: implement cachestat syscall Nhat Pham
  2023-01-17 19:59 ` [PATCH v6 3/3] selftests: Add selftests for cachestat Nhat Pham
  2 siblings, 1 reply; 12+ messages in thread
From: Nhat Pham @ 2023-01-17 19:59 UTC (permalink / raw)
  To: akpm
  Cc: hannes, linux-mm, linux-kernel, bfoster, willy, linux-api, kernel-team

In preparation for computing recently evicted pages in cachestat,
refactor workingset_refault and lru_gen_refault to expose a helper
function that would test if an evicted page is recently evicted.

Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 include/linux/swap.h |   1 +
 mm/workingset.c      | 129 ++++++++++++++++++++++++++++++-------------
 2 files changed, 92 insertions(+), 38 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index a18cf4b7c724..dae6f6f955eb 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -361,6 +361,7 @@ static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
 }
 
 /* linux/mm/workingset.c */
+bool workingset_test_recent(void *shadow, bool file, bool *workingset);
 void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
 void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg);
 void workingset_refault(struct folio *folio, void *shadow);
diff --git a/mm/workingset.c b/mm/workingset.c
index 79585d55c45d..006482c4e0bd 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -244,6 +244,33 @@ static void *lru_gen_eviction(struct folio *folio)
 	return pack_shadow(mem_cgroup_id(memcg), pgdat, token, refs);
 }
 
+/*
+ * Test if the folio is recently evicted.
+ *
+ * As a side effect, also populates the references with
+ * values unpacked from the shadow of the evicted folio.
+ */
+static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
+{
+	struct mem_cgroup *eviction_memcg;
+	struct lruvec *lruvec;
+	struct lru_gen_struct *lrugen;
+	unsigned long min_seq;
+
+	int memcgid;
+	struct pglist_data *pgdat;
+	unsigned long token;
+
+	unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
+	eviction_memcg = mem_cgroup_from_id(memcgid);
+
+	lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
+	lrugen = &lruvec->lrugen;
+
+	min_seq = READ_ONCE(lrugen->min_seq[file]);
+	return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));
+}
+
 static void lru_gen_refault(struct folio *folio, void *shadow)
 {
 	int hist, tier, refs;
@@ -306,6 +333,11 @@ static void *lru_gen_eviction(struct folio *folio)
 	return NULL;
 }
 
+static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
+{
+	return true;
+}
+
 static void lru_gen_refault(struct folio *folio, void *shadow)
 {
 }
@@ -373,40 +405,31 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
 				folio_test_workingset(folio));
 }
 
-/**
- * workingset_refault - Evaluate the refault of a previously evicted folio.
- * @folio: The freshly allocated replacement folio.
- * @shadow: Shadow entry of the evicted folio.
+/*
+ * Test if the folio is recently evicted by checking if
+ * refault distance of shadow exceeds workingset size.
  *
- * Calculates and evaluates the refault distance of the previously
- * evicted folio in the context of the node and the memcg whose memory
- * pressure caused the eviction.
+ * As a side effect, populate workingset with the value
+ * unpacked from shadow.
  */
-void workingset_refault(struct folio *folio, void *shadow)
+bool workingset_test_recent(void *shadow, bool file, bool *workingset)
 {
-	bool file = folio_is_file_lru(folio);
 	struct mem_cgroup *eviction_memcg;
 	struct lruvec *eviction_lruvec;
 	unsigned long refault_distance;
 	unsigned long workingset_size;
-	struct pglist_data *pgdat;
-	struct mem_cgroup *memcg;
-	unsigned long eviction;
-	struct lruvec *lruvec;
 	unsigned long refault;
-	bool workingset;
+
 	int memcgid;
-	long nr;
+	struct pglist_data *pgdat;
+	unsigned long eviction;
 
-	if (lru_gen_enabled()) {
-		lru_gen_refault(folio, shadow);
-		return;
-	}
+	if (lru_gen_enabled())
+		return lru_gen_test_recent(shadow, file, workingset);
 
-	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset);
+	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
 	eviction <<= bucket_order;
 
-	rcu_read_lock();
 	/*
 	 * Look up the memcg associated with the stored ID. It might
 	 * have been deleted since the folio's eviction.
@@ -425,7 +448,8 @@ void workingset_refault(struct folio *folio, void *shadow)
 	 */
 	eviction_memcg = mem_cgroup_from_id(memcgid);
 	if (!mem_cgroup_disabled() && !eviction_memcg)
-		goto out;
+		return false;
+
 	eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
 	refault = atomic_long_read(&eviction_lruvec->nonresident_age);
 
@@ -447,21 +471,6 @@ void workingset_refault(struct folio *folio, void *shadow)
 	 */
 	refault_distance = (refault - eviction) & EVICTION_MASK;
 
-	/*
-	 * The activation decision for this folio is made at the level
-	 * where the eviction occurred, as that is where the LRU order
-	 * during folio reclaim is being determined.
-	 *
-	 * However, the cgroup that will own the folio is the one that
-	 * is actually experiencing the refault event.
-	 */
-	nr = folio_nr_pages(folio);
-	memcg = folio_memcg(folio);
-	pgdat = folio_pgdat(folio);
-	lruvec = mem_cgroup_lruvec(memcg, pgdat);
-
-	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
-
 	mem_cgroup_flush_stats_delayed();
 	/*
 	 * Compare the distance to the existing workingset size. We
@@ -483,8 +492,51 @@ void workingset_refault(struct folio *folio, void *shadow)
 						     NR_INACTIVE_ANON);
 		}
 	}
-	if (refault_distance > workingset_size)
+
+	return refault_distance <= workingset_size;
+}
+
+/**
+ * workingset_refault - Evaluate the refault of a previously evicted folio.
+ * @folio: The freshly allocated replacement folio.
+ * @shadow: Shadow entry of the evicted folio.
+ *
+ * Calculates and evaluates the refault distance of the previously
+ * evicted folio in the context of the node and the memcg whose memory
+ * pressure caused the eviction.
+ */
+void workingset_refault(struct folio *folio, void *shadow)
+{
+	bool file = folio_is_file_lru(folio);
+	struct pglist_data *pgdat;
+	struct mem_cgroup *memcg;
+	struct lruvec *lruvec;
+	bool workingset;
+	long nr;
+
+	if (lru_gen_enabled()) {
+		lru_gen_refault(folio, shadow);
+		return;
+	}
+
+	rcu_read_lock();
+
+	nr = folio_nr_pages(folio);
+	memcg = folio_memcg(folio);
+	pgdat = folio_pgdat(folio);
+	lruvec = mem_cgroup_lruvec(memcg, pgdat);
+
+	if (!workingset_test_recent(shadow, file, &workingset)) {
+		/*
+		 * The activation decision for this folio is made at the level
+		 * where the eviction occurred, as that is where the LRU order
+		 * during folio reclaim is being determined.
+		 *
+		 * However, the cgroup that will own the folio is the one that
+		 * is actually experiencing the refault event.
+		 */
 		goto out;
+	}
 
 	folio_set_active(folio);
 	workingset_age_nonresident(lruvec, nr);
@@ -498,6 +550,7 @@ void workingset_refault(struct folio *folio, void *shadow)
 		mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file, nr);
 	}
 out:
+	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
 	rcu_read_unlock();
 }
 
-- 
2.30.2

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v6 2/3] cachestat: implement cachestat syscall
  2023-01-17 19:59 [PATCH v6 0/3] cachestat: a new syscall for page cache state of files Nhat Pham
  2023-01-17 19:59 ` [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check Nhat Pham
@ 2023-01-17 19:59 ` Nhat Pham
  2023-01-20 14:36   ` Brian Foster
  2023-01-17 19:59 ` [PATCH v6 3/3] selftests: Add selftests for cachestat Nhat Pham
  2 siblings, 1 reply; 12+ messages in thread
From: Nhat Pham @ 2023-01-17 19:59 UTC (permalink / raw)
  To: akpm
  Cc: hannes, linux-mm, linux-kernel, bfoster, willy, linux-api, kernel-team

There is currently no good way to query the page cache state of large
file sets and directory trees. There is mincore(), but it scales poorly:
the kernel writes out a lot of bitmap data that userspace has to
aggregate, when the user really doesn not care about per-page
information in that case. The user also needs to mmap and unmap each
file as it goes along, which can be quite slow as well.

This patch implements a new syscall that queries cache state of a file
and summarizes the number of cached pages, number of dirty pages, number
of pages marked for writeback, number of (recently) evicted pages, etc.
in a given range.

NAME
    cachestat - query the page cache statistics of a file.

SYNOPSIS
    #include <sys/mman.h>

    struct cachestat {
        __u64 nr_cache;
        __u64 nr_dirty;
        __u64 nr_writeback;
        __u64 nr_evicted;
        __u64 nr_recently_evicted;
    };

    int cachestat(unsigned int fd, off_t off, size_t len,
          unsigned int cstat_version, struct cachestat *cstat,
          unsigned int flags);

DESCRIPTION
    cachestat() queries the number of cached pages, number of dirty
    pages, number of pages marked for writeback, number of evicted
    pages, number of recently evicted pages, in the bytes range given by
    `off` and `len`.

    An evicted page is a page that is previously in the page cache but
    has been evicted since. A page is recently evicted if its last
    eviction was recent enough that its reentry to the cache would
    indicate that it is actively being used by the system, and that
    there is memory pressure on the system.

    These values are returned in a cachestat struct, whose address is
    given by the `cstat` argument.

    The `off` and `len` arguments must be non-negative integers. If
    `len` > 0, the queried range is [`off`, `off` + `len`]. If `len` ==
    0, we will query in the range from `off` to the end of the file.

    `cstat_version` is an unsigned integer indicating the specific
    version of the cachestat struct. It must be at least 1, and does
    not exceed the latest version number (which is currently 1). For
    now, user should just pass 1.

    The `flags` argument is unused for now, but is included for future
    extensibility. User should pass 0 (i.e no flag specified).

RETURN VALUE
    On success, cachestat returns 0. On error, -1 is returned, and errno
    is set to indicate the error.

ERRORS
    EFAULT cstat points to an invalid address.

    EINVAL invalid `cstat_version` or `flags`

    EBADF  invalid file descriptor.

Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 arch/alpha/kernel/syscalls/syscall.tbl      |   1 +
 arch/arm/tools/syscall.tbl                  |   1 +
 arch/ia64/kernel/syscalls/syscall.tbl       |   1 +
 arch/m68k/kernel/syscalls/syscall.tbl       |   1 +
 arch/microblaze/kernel/syscalls/syscall.tbl |   1 +
 arch/parisc/kernel/syscalls/syscall.tbl     |   1 +
 arch/powerpc/kernel/syscalls/syscall.tbl    |   1 +
 arch/s390/kernel/syscalls/syscall.tbl       |   1 +
 arch/sh/kernel/syscalls/syscall.tbl         |   1 +
 arch/sparc/kernel/syscalls/syscall.tbl      |   1 +
 arch/x86/entry/syscalls/syscall_32.tbl      |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl      |   1 +
 arch/xtensa/kernel/syscalls/syscall.tbl     |   1 +
 include/linux/fs.h                          |   3 +
 include/linux/syscalls.h                    |   4 +
 include/uapi/asm-generic/unistd.h           |   5 +-
 include/uapi/linux/mman.h                   |   9 ++
 init/Kconfig                                |  10 ++
 kernel/sys_ni.c                             |   1 +
 mm/filemap.c                                | 154 ++++++++++++++++++++
 20 files changed, 198 insertions(+), 1 deletion(-)

diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl
index 8ebacf37a8cf..1f13995d00d7 100644
--- a/arch/alpha/kernel/syscalls/syscall.tbl
+++ b/arch/alpha/kernel/syscalls/syscall.tbl
@@ -490,3 +490,4 @@
 558	common	process_mrelease		sys_process_mrelease
 559	common  futex_waitv                     sys_futex_waitv
 560	common	set_mempolicy_home_node		sys_ni_syscall
+561	common	cachestat			sys_cachestat
diff --git a/arch/arm/tools/syscall.tbl b/arch/arm/tools/syscall.tbl
index ac964612d8b0..8ebed8a13874 100644
--- a/arch/arm/tools/syscall.tbl
+++ b/arch/arm/tools/syscall.tbl
@@ -464,3 +464,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common	futex_waitv			sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	cachestat			sys_cachestat
diff --git a/arch/ia64/kernel/syscalls/syscall.tbl b/arch/ia64/kernel/syscalls/syscall.tbl
index 72c929d9902b..f8c74ffeeefb 100644
--- a/arch/ia64/kernel/syscalls/syscall.tbl
+++ b/arch/ia64/kernel/syscalls/syscall.tbl
@@ -371,3 +371,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	cachestat			sys_cachestat
diff --git a/arch/m68k/kernel/syscalls/syscall.tbl b/arch/m68k/kernel/syscalls/syscall.tbl
index b1f3940bc298..4f504783371f 100644
--- a/arch/m68k/kernel/syscalls/syscall.tbl
+++ b/arch/m68k/kernel/syscalls/syscall.tbl
@@ -450,3 +450,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	cachestat			sys_cachestat
diff --git a/arch/microblaze/kernel/syscalls/syscall.tbl b/arch/microblaze/kernel/syscalls/syscall.tbl
index 820145e47350..858d22bf275c 100644
--- a/arch/microblaze/kernel/syscalls/syscall.tbl
+++ b/arch/microblaze/kernel/syscalls/syscall.tbl
@@ -456,3 +456,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	cachestat			sys_cachestat
diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl
index 8a99c998da9b..7c84a72306d1 100644
--- a/arch/parisc/kernel/syscalls/syscall.tbl
+++ b/arch/parisc/kernel/syscalls/syscall.tbl
@@ -448,3 +448,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common	futex_waitv			sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	cachestat			sys_cachestat
diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl
index 2bca64f96164..937460f0a8ec 100644
--- a/arch/powerpc/kernel/syscalls/syscall.tbl
+++ b/arch/powerpc/kernel/syscalls/syscall.tbl
@@ -530,3 +530,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450 	nospu	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	cachestat			sys_cachestat
diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl
index 799147658dee..7df0329d46cb 100644
--- a/arch/s390/kernel/syscalls/syscall.tbl
+++ b/arch/s390/kernel/syscalls/syscall.tbl
@@ -453,3 +453,4 @@
 448  common	process_mrelease	sys_process_mrelease		sys_process_mrelease
 449  common	futex_waitv		sys_futex_waitv			sys_futex_waitv
 450  common	set_mempolicy_home_node	sys_set_mempolicy_home_node	sys_set_mempolicy_home_node
+451  common	cachestat		sys_cachestat			sys_cachestat
diff --git a/arch/sh/kernel/syscalls/syscall.tbl b/arch/sh/kernel/syscalls/syscall.tbl
index 2de85c977f54..97377e8c5025 100644
--- a/arch/sh/kernel/syscalls/syscall.tbl
+++ b/arch/sh/kernel/syscalls/syscall.tbl
@@ -453,3 +453,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	cachestat			sys_cachestat
diff --git a/arch/sparc/kernel/syscalls/syscall.tbl b/arch/sparc/kernel/syscalls/syscall.tbl
index 4398cc6fb68d..faa835f3c54a 100644
--- a/arch/sparc/kernel/syscalls/syscall.tbl
+++ b/arch/sparc/kernel/syscalls/syscall.tbl
@@ -496,3 +496,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	cachestat			sys_cachestat
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 320480a8db4f..bc0a3c941b35 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -455,3 +455,4 @@
 448	i386	process_mrelease	sys_process_mrelease
 449	i386	futex_waitv		sys_futex_waitv
 450	i386	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	i386	cachestat		sys_cachestat
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index c84d12608cd2..227538b0ce80 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -372,6 +372,7 @@
 448	common	process_mrelease	sys_process_mrelease
 449	common	futex_waitv		sys_futex_waitv
 450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	common	cachestat		sys_cachestat
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/arch/xtensa/kernel/syscalls/syscall.tbl b/arch/xtensa/kernel/syscalls/syscall.tbl
index 52c94ab5c205..2b69c3c035b6 100644
--- a/arch/xtensa/kernel/syscalls/syscall.tbl
+++ b/arch/xtensa/kernel/syscalls/syscall.tbl
@@ -421,3 +421,4 @@
 448	common	process_mrelease		sys_process_mrelease
 449	common  futex_waitv                     sys_futex_waitv
 450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
+451	common	cachestat			sys_cachestat
diff --git a/include/linux/fs.h b/include/linux/fs.h
index e654435f1651..83300f1491e7 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -75,6 +75,7 @@ struct fs_context;
 struct fs_parameter_spec;
 struct fileattr;
 struct iomap_ops;
+struct cachestat;
 
 extern void __init inode_init(void);
 extern void __init inode_init_early(void);
@@ -830,6 +831,8 @@ void filemap_invalidate_lock_two(struct address_space *mapping1,
 				 struct address_space *mapping2);
 void filemap_invalidate_unlock_two(struct address_space *mapping1,
 				   struct address_space *mapping2);
+void filemap_cachestat(struct address_space *mapping, pgoff_t first_index,
+		pgoff_t last_index, struct cachestat *cs);
 
 
 /*
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index a34b0f9a9972..d3fe6ba8eb38 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -72,6 +72,7 @@ struct open_how;
 struct mount_attr;
 struct landlock_ruleset_attr;
 enum landlock_rule_type;
+struct cachestat;
 
 #include <linux/types.h>
 #include <linux/aio_abi.h>
@@ -1056,6 +1057,9 @@ asmlinkage long sys_memfd_secret(unsigned int flags);
 asmlinkage long sys_set_mempolicy_home_node(unsigned long start, unsigned long len,
 					    unsigned long home_node,
 					    unsigned long flags);
+asmlinkage long sys_cachestat(unsigned int fd, off_t off, size_t len,
+		unsigned int cstat_version, struct cachestat __user *cstat,
+		unsigned int flags);
 
 /*
  * Architecture-specific system calls
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 45fa180cc56a..cd639fae9086 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -886,8 +886,11 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
 #define __NR_set_mempolicy_home_node 450
 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
 
+#define __NR_cachestat 451
+__SYSCALL(__NR_cachestat, sys_cachestat)
+
 #undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452
 
 /*
  * 32 bit systems traditionally used different
diff --git a/include/uapi/linux/mman.h b/include/uapi/linux/mman.h
index f55bc680b5b0..fe03ed0b7587 100644
--- a/include/uapi/linux/mman.h
+++ b/include/uapi/linux/mman.h
@@ -4,6 +4,7 @@
 
 #include <asm/mman.h>
 #include <asm-generic/hugetlb_encode.h>
+#include <linux/types.h>
 
 #define MREMAP_MAYMOVE		1
 #define MREMAP_FIXED		2
@@ -41,4 +42,12 @@
 #define MAP_HUGE_2GB	HUGETLB_FLAG_ENCODE_2GB
 #define MAP_HUGE_16GB	HUGETLB_FLAG_ENCODE_16GB
 
+struct cachestat {
+	__u64 nr_cache;
+	__u64 nr_dirty;
+	__u64 nr_writeback;
+	__u64 nr_evicted;
+	__u64 nr_recently_evicted;
+};
+
 #endif /* _UAPI_LINUX_MMAN_H */
diff --git a/init/Kconfig b/init/Kconfig
index 694f7c160c9c..da96ac29af1d 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1798,6 +1798,16 @@ config RSEQ
 
 	  If unsure, say Y.
 
+config CACHESTAT_SYSCALL
+	bool "Enable cachestat() system call" if EXPERT
+	default y
+	help
+	  Enable the cachestat system call, which queries the page cache
+	  statistics of a file (number of cached pages, dirty pages,
+	  pages marked for writeback, (recently) evicted pages).
+
+	  If unsure say Y here.
+
 config DEBUG_RSEQ
 	default n
 	bool "Enabled debugging of rseq() system call" if EXPERT
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 860b2dcf3ac4..04bfb1e4d377 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -299,6 +299,7 @@ COND_SYSCALL(set_mempolicy);
 COND_SYSCALL(migrate_pages);
 COND_SYSCALL(move_pages);
 COND_SYSCALL(set_mempolicy_home_node);
+COND_SYSCALL(cachestat);
 
 COND_SYSCALL(perf_event_open);
 COND_SYSCALL(accept4);
diff --git a/mm/filemap.c b/mm/filemap.c
index 08341616ae7a..0305eaf5b3f5 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -22,6 +22,7 @@
 #include <linux/mm.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
+#include <linux/syscalls.h>
 #include <linux/mman.h>
 #include <linux/pagemap.h>
 #include <linux/file.h>
@@ -55,6 +56,13 @@
 #include <linux/buffer_head.h> /* for try_to_free_buffers */
 
 #include <asm/mman.h>
+#include <uapi/linux/mman.h>
+
+#include "swap.h"
+
+#ifdef CONFIG_CACHESTAT_SYSCALL
+#define LATEST_CACHESTAT_VERSION	1
+#endif
 
 /*
  * Shared mappings implemented 30.11.1994. It's not fully working yet,
@@ -3949,3 +3957,149 @@ bool filemap_release_folio(struct folio *folio, gfp_t gfp)
 	return try_to_free_buffers(folio);
 }
 EXPORT_SYMBOL(filemap_release_folio);
+
+/**
+ * filemap_cachestat() - compute the page cache statistics of a mapping
+ * @mapping:	The mapping to compute the statistics for.
+ * @first_index:	The starting page cache index.
+ * @last_index:	The final page index (inclusive).
+ * @cs:	the cachestat struct to write the result to.
+ *
+ * This will query the page cache statistics of a mapping in the
+ * page range of [first_index, last_index] (inclusive). The statistics
+ * queried include: number of dirty pages, number of pages marked for
+ * writeback, and the number of (recently) evicted pages.
+ */
+void filemap_cachestat(struct address_space *mapping, pgoff_t first_index,
+		pgoff_t last_index, struct cachestat *cs)
+{
+	XA_STATE(xas, &mapping->i_pages, first_index);
+	struct folio *folio;
+
+	rcu_read_lock();
+	xas_for_each(&xas, folio, last_index) {
+		unsigned long nr_pages;
+		pgoff_t folio_first_index, folio_last_index;
+
+		if (xas_retry(&xas, folio))
+			continue;
+
+		nr_pages = folio_nr_pages(folio);
+		folio_first_index = folio_pgoff(folio);
+		folio_last_index = folio_first_index + nr_pages - 1;
+
+		/* Folios might straddle the range boundaries, only count covered subpages */
+		if (folio_first_index < first_index)
+			nr_pages -= first_index - folio_first_index;
+
+		if (folio_last_index > last_index)
+			nr_pages -= folio_last_index - last_index;
+
+		if (xa_is_value(folio)) {
+			/* page is evicted */
+			void *shadow = (void *)folio;
+			bool workingset; /* not used */
+
+			cs->nr_evicted += nr_pages;
+
+#ifdef CONFIG_SWAP /* implies CONFIG_MMU */
+			if (shmem_mapping(mapping)) {
+				/* shmem file - in swap cache */
+				swp_entry_t swp = radix_to_swp_entry(folio);
+
+				shadow = get_shadow_from_swap_cache(swp);
+			}
+#endif
+			if (workingset_test_recent(shadow, true, &workingset))
+				cs->nr_recently_evicted += nr_pages;
+
+			goto resched;
+		}
+
+		/* page is in cache */
+		cs->nr_cache += nr_pages;
+
+		if (folio_test_dirty(folio))
+			cs->nr_dirty += nr_pages;
+
+		if (folio_test_writeback(folio))
+			cs->nr_writeback += nr_pages;
+
+resched:
+		if (need_resched()) {
+			xas_pause(&xas);
+			cond_resched_rcu();
+		}
+	}
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL(filemap_cachestat);
+
+#ifdef CONFIG_CACHESTAT_SYSCALL
+/*
+ * The cachestat(5) system call.
+ *
+ * cachestat() returns the page cache statistics of a file in the
+ * bytes range specified by `off` and `len`: number of cached pages,
+ * number of dirty pages, number of pages marked for writeback,
+ * number of evicted pages, and number of recently evicted pages.
+ *
+ * An evicted page is a page that is previously in the page cache
+ * but has been evicted since. A page is recently evicted if its last
+ * eviction was recent enough that its reentry to the cache would
+ * indicate that it is actively being used by the system, and that
+ * there is memory pressure on the system.
+ *
+ * `off` and `len` must be non-negative integers. If `len` > 0,
+ * the queried range is [`off`, `off` + `len`]. If `len` == 0,
+ * we will query in the range from `off` to the end of the file.
+ *
+ * `cstat_version` is an unsigned integer indicating the specific version
+ * of the cachestat struct. It must be at least 1, and does not exceed the
+ * latest version number (which is currently 1). For now, user should
+ * just pass 1.
+ *
+ * The `flags` argument is unused for now, but is included for future
+ * extensibility. User should pass 0 (i.e no flag specified).
+ *
+ * Because the status of a page can change after cachestat() checks it
+ * but before it returns to the application, the returned values may
+ * contain stale information.
+ *
+ * return values:
+ *  zero    - success
+ *  -EFAULT - cstat points to an illegal address
+ *  -EINVAL - invalid arguments
+ *  -EBADF	- invalid file descriptor
+ */
+SYSCALL_DEFINE6(cachestat, unsigned int, fd, off_t, off, size_t, len,
+		unsigned int, cstat_version, struct cachestat __user *, cstat,
+		unsigned int, flags)
+{
+	struct fd f = fdget(fd);
+	struct address_space *mapping;
+	struct cachestat cs;
+	pgoff_t first_index = off >> PAGE_SHIFT;
+	pgoff_t last_index =
+		len == 0 ? ULONG_MAX : (off + len - 1) >> PAGE_SHIFT;
+
+	if (!f.file)
+		return -EBADF;
+
+	if (off < 0 || flags != 0 || cstat_version < 1 ||
+			cstat_version > LATEST_CACHESTAT_VERSION) {
+		fdput(f);
+		return -EINVAL;
+	}
+
+	memset(&cs, 0, sizeof(struct cachestat));
+	mapping = f.file->f_mapping;
+	filemap_cachestat(mapping, first_index, last_index, &cs);
+	fdput(f);
+
+	if (copy_to_user(cstat, &cs, sizeof(struct cachestat)))
+		return -EFAULT;
+
+	return 0;
+}
+#endif /* CONFIG_CACHESTAT_SYSCALL */
-- 
2.30.2

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v6 3/3] selftests: Add selftests for cachestat
  2023-01-17 19:59 [PATCH v6 0/3] cachestat: a new syscall for page cache state of files Nhat Pham
  2023-01-17 19:59 ` [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check Nhat Pham
  2023-01-17 19:59 ` [PATCH v6 2/3] cachestat: implement cachestat syscall Nhat Pham
@ 2023-01-17 19:59 ` Nhat Pham
  2 siblings, 0 replies; 12+ messages in thread
From: Nhat Pham @ 2023-01-17 19:59 UTC (permalink / raw)
  To: akpm
  Cc: hannes, linux-mm, linux-kernel, bfoster, willy, linux-api, kernel-team

Test cachestat on a newly created file, /dev/ files, and /proc/ files.
Also test on a shmem file (which can also be tested with huge pages
since tmpfs supports huge pages).

Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 MAINTAINERS                                   |   7 +
 tools/testing/selftests/Makefile              |   1 +
 tools/testing/selftests/cachestat/.gitignore  |   2 +
 tools/testing/selftests/cachestat/Makefile    |   8 +
 .../selftests/cachestat/test_cachestat.c      | 260 ++++++++++++++++++
 5 files changed, 278 insertions(+)
 create mode 100644 tools/testing/selftests/cachestat/.gitignore
 create mode 100644 tools/testing/selftests/cachestat/Makefile
 create mode 100644 tools/testing/selftests/cachestat/test_cachestat.c

diff --git a/MAINTAINERS b/MAINTAINERS
index a198da986146..792a866353ec 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4552,6 +4552,13 @@ S:	Supported
 F:	Documentation/filesystems/caching/cachefiles.rst
 F:	fs/cachefiles/
 
+CACHESTAT: PAGE CACHE STATS FOR A FILE
+M:	Nhat Pham <nphamcs@gmail.com>
+M:	Johannes Weiner <hannes@cmpxchg.org>
+L:	linux-mm@kvack.org
+S:	Maintained
+F:	tools/testing/selftests/cachestat/test_cachestat.c
+
 CADENCE MIPI-CSI2 BRIDGES
 M:	Maxime Ripard <mripard@kernel.org>
 L:	linux-media@vger.kernel.org
diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
index 0464b2c6c1e4..3cad0b38c5c2 100644
--- a/tools/testing/selftests/Makefile
+++ b/tools/testing/selftests/Makefile
@@ -4,6 +4,7 @@ TARGETS += amd-pstate
 TARGETS += arm64
 TARGETS += bpf
 TARGETS += breakpoints
+TARGETS += cachestat
 TARGETS += capabilities
 TARGETS += cgroup
 TARGETS += clone3
diff --git a/tools/testing/selftests/cachestat/.gitignore b/tools/testing/selftests/cachestat/.gitignore
new file mode 100644
index 000000000000..d6c30b43a4bb
--- /dev/null
+++ b/tools/testing/selftests/cachestat/.gitignore
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+test_cachestat
diff --git a/tools/testing/selftests/cachestat/Makefile b/tools/testing/selftests/cachestat/Makefile
new file mode 100644
index 000000000000..fca73aaa7d14
--- /dev/null
+++ b/tools/testing/selftests/cachestat/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+TEST_GEN_PROGS := test_cachestat
+
+CFLAGS += $(KHDR_INCLUDES)
+CFLAGS += -Wall
+CFLAGS += -lrt
+
+include ../lib.mk
diff --git a/tools/testing/selftests/cachestat/test_cachestat.c b/tools/testing/selftests/cachestat/test_cachestat.c
new file mode 100644
index 000000000000..dc2894028eee
--- /dev/null
+++ b/tools/testing/selftests/cachestat/test_cachestat.c
@@ -0,0 +1,260 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <linux/kernel.h>
+#include <linux/mman.h>
+#include <sys/mman.h>
+#include <sys/shm.h>
+#include <sys/syscall.h>
+#include <unistd.h>
+#include <string.h>
+#include <fcntl.h>
+#include <errno.h>
+
+#include "../kselftest.h"
+
+static const char * const dev_files[] = {
+	"/dev/zero", "/dev/null", "/dev/urandom",
+	"/proc/version", "/proc"
+};
+static const int cachestat_nr = 451;
+static const int cstat_version = 1; /* first version */
+
+void print_cachestat(struct cachestat *cs)
+{
+	ksft_print_msg(
+	"Using cachestat: Cached: %lu, Dirty: %lu, Writeback: %lu, Evicted: %lu, Recently Evicted: %lu\n",
+	cs->nr_cache, cs->nr_dirty, cs->nr_writeback,
+	cs->nr_evicted, cs->nr_recently_evicted);
+}
+
+bool write_exactly(int fd, size_t filesize)
+{
+	char data[filesize];
+	bool ret = true;
+	int random_fd = open("/dev/urandom", O_RDONLY);
+
+	if (random_fd < 0) {
+		ksft_print_msg("Unable to access urandom.\n");
+		ret = false;
+		goto out;
+	} else {
+		int remained = filesize;
+		char *cursor = data;
+
+		while (remained) {
+			ssize_t read_len = read(random_fd, cursor, remained);
+
+			if (read_len <= 0) {
+				ksft_print_msg("Unable to read from urandom.\n");
+				ret = false;
+				goto close_random_fd;
+			}
+
+			remained -= read_len;
+			cursor += read_len;
+		}
+
+		/* write random data to fd */
+		remained = filesize;
+		cursor = data;
+		while (remained) {
+			ssize_t write_len = write(fd, cursor, remained);
+
+			if (write_len <= 0) {
+				ksft_print_msg("Unable write random data to file.\n");
+				ret = false;
+				goto close_random_fd;
+			}
+
+			remained -= write_len;
+			cursor += write_len;
+		}
+	}
+
+close_random_fd:
+	close(random_fd);
+out:
+	return ret;
+}
+
+/*
+ * Open/create the file at filename, (optionally) write random data to it
+ * (exactly num_pages), then test the cachestat syscall on this file.
+ *
+ * If test_fsync == true, fsync the file, then check the number of dirty
+ * pages.
+ */
+bool test_cachestat(const char *filename, bool write_random, bool create,
+		bool test_fsync, unsigned long num_pages, int open_flags,
+		mode_t open_mode)
+{
+	size_t PS = sysconf(_SC_PAGESIZE);
+	int filesize = num_pages * PS;
+	bool ret = true;
+	long syscall_ret;
+	struct cachestat cs;
+
+	int fd = open(filename, open_flags, open_mode);
+
+	if (fd == -1) {
+		ksft_print_msg("Unable to create/open file.\n");
+		goto out;
+	} else {
+		ksft_print_msg("Create/open %s\n", filename);
+	}
+
+	if (write_random) {
+		if (!write_exactly(fd, filesize)) {
+			ksft_print_msg("Unable to access urandom.\n");
+			ret = false;
+			goto out1;
+		}
+	}
+
+	syscall_ret = syscall(cachestat_nr, fd, 0, filesize,
+		cstat_version, &cs, 0);
+
+	ksft_print_msg("Cachestat call returned %ld\n", syscall_ret);
+
+	if (syscall_ret) {
+		ksft_print_msg("Cachestat returned non-zero.\n");
+		ret = false;
+		goto out1;
+
+	} else {
+		print_cachestat(&cs);
+
+		if (write_random) {
+			if (cs.nr_cache + cs.nr_evicted != num_pages) {
+				ksft_print_msg(
+					"Total number of cached and evicted pages is off.\n");
+				ret = false;
+			}
+		}
+	}
+
+	if (test_fsync) {
+		if (fsync(fd)) {
+			ksft_print_msg("fsync fails.\n");
+			ret = false;
+		} else {
+			syscall_ret = syscall(cachestat_nr, fd, 0, filesize,
+				cstat_version, &cs, 0);
+
+			ksft_print_msg("Cachestat call (after fsync) returned %ld\n",
+				syscall_ret);
+
+			if (!syscall_ret) {
+				print_cachestat(&cs);
+
+				if (cs.nr_dirty) {
+					ret = false;
+					ksft_print_msg(
+						"Number of dirty should be zero after fsync.\n");
+				}
+			} else {
+				ksft_print_msg("Cachestat (after fsync) returned non-zero.\n");
+				ret = false;
+				goto out1;
+			}
+		}
+	}
+
+out1:
+	close(fd);
+
+	if (create)
+		remove(filename);
+out:
+	return ret;
+}
+
+bool test_cachestat_shmem(void)
+{
+	size_t PS = sysconf(_SC_PAGESIZE);
+	size_t filesize = PS * 512 * 2; /* 2 2MB huge pages */
+	int syscall_ret;
+	off_t off = PS;
+	size_t compute_len = PS * 512;
+	char *filename = "tmpshmcstat";
+	struct cachestat cs;
+	bool ret = true;
+	unsigned long num_pages = compute_len / PS;
+	int fd = shm_open(filename, O_CREAT | O_RDWR, 0600);
+
+	if (fd < 0) {
+		ksft_print_msg("Unable to create shmem file.\n");
+		ret = false;
+		goto out;
+	}
+
+	if (ftruncate(fd, filesize)) {
+		ksft_print_msg("Unable to trucate shmem file.\n");
+		ret = false;
+		goto close_fd;
+	}
+
+	if (!write_exactly(fd, filesize)) {
+		ksft_print_msg("Unable to write to shmem file.\n");
+		ret = false;
+		goto close_fd;
+	}
+
+	syscall_ret = syscall(cachestat_nr, fd, off, compute_len,
+		cstat_version, &cs, 0);
+
+	if (syscall_ret) {
+		ksft_print_msg("Cachestat returned non-zero.\n");
+		ret = false;
+		goto close_fd;
+	} else {
+		print_cachestat(&cs);
+		if (cs.nr_cache + cs.nr_evicted != num_pages) {
+			ksft_print_msg(
+				"Total number of cached and evicted pages is off.\n");
+			ret = false;
+		}
+	}
+
+close_fd:
+	shm_unlink(filename);
+out:
+	return ret;
+}
+
+int main(void)
+{
+	int ret = 0;
+
+	for (int i = 0; i < 5; i++) {
+		const char *dev_filename = dev_files[i];
+
+		if (test_cachestat(dev_filename, false, false, false,
+			4, O_RDONLY, 0400))
+			ksft_test_result_pass("cachestat works with %s\n", dev_filename);
+		else {
+			ksft_test_result_fail("cachestat fails with %s\n", dev_filename);
+			ret = 1;
+		}
+	}
+
+	if (test_cachestat("tmpfilecachestat", true, true,
+		true, 4, O_CREAT | O_RDWR, 0400 | 0600))
+		ksft_test_result_pass("cachestat works with a normal file\n");
+	else {
+		ksft_test_result_fail("cachestat fails with normal file\n");
+		ret = 1;
+	}
+
+	if (test_cachestat_shmem())
+		ksft_test_result_pass("cachestat works with a shmem file\n");
+	else {
+		ksft_test_result_fail("cachestat fails with a shmem file\n");
+		ret = 1;
+	}
+
+	return ret;
+}
-- 
2.30.2

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check
  2023-01-17 19:59 ` [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check Nhat Pham
@ 2023-01-20 14:34   ` Brian Foster
  2023-01-20 16:27     ` Johannes Weiner
  2023-01-20 17:29     ` Nhat Pham
  0 siblings, 2 replies; 12+ messages in thread
From: Brian Foster @ 2023-01-20 14:34 UTC (permalink / raw)
  To: Nhat Pham
  Cc: akpm, hannes, linux-mm, linux-kernel, willy, linux-api, kernel-team

On Tue, Jan 17, 2023 at 11:59:57AM -0800, Nhat Pham wrote:
> In preparation for computing recently evicted pages in cachestat,
> refactor workingset_refault and lru_gen_refault to expose a helper
> function that would test if an evicted page is recently evicted.
> 
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---

Hi Nhat,

I'm not terribly familiar with the workingset management code, but a few
thoughts now that I've stared at it a bit...

>  include/linux/swap.h |   1 +
>  mm/workingset.c      | 129 ++++++++++++++++++++++++++++++-------------
>  2 files changed, 92 insertions(+), 38 deletions(-)
> 
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index a18cf4b7c724..dae6f6f955eb 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -361,6 +361,7 @@ static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
>  }
>  
>  /* linux/mm/workingset.c */
> +bool workingset_test_recent(void *shadow, bool file, bool *workingset);
>  void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
>  void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg);
>  void workingset_refault(struct folio *folio, void *shadow);
> diff --git a/mm/workingset.c b/mm/workingset.c
> index 79585d55c45d..006482c4e0bd 100644
> --- a/mm/workingset.c
> +++ b/mm/workingset.c
> @@ -244,6 +244,33 @@ static void *lru_gen_eviction(struct folio *folio)
>  	return pack_shadow(mem_cgroup_id(memcg), pgdat, token, refs);
>  }
>  
> +/*
> + * Test if the folio is recently evicted.
> + *
> + * As a side effect, also populates the references with
> + * values unpacked from the shadow of the evicted folio.
> + */
> +static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
> +{
> +	struct mem_cgroup *eviction_memcg;
> +	struct lruvec *lruvec;
> +	struct lru_gen_struct *lrugen;
> +	unsigned long min_seq;
> +

Extra whitespace looks a bit funny here.

> +	int memcgid;
> +	struct pglist_data *pgdat;
> +	unsigned long token;
> +
> +	unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
> +	eviction_memcg = mem_cgroup_from_id(memcgid);
> +
> +	lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> +	lrugen = &lruvec->lrugen;
> +
> +	min_seq = READ_ONCE(lrugen->min_seq[file]);
> +	return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));

I think this might be more readable without the double negative.

Also it looks like this logic is pulled from lru_gen_refault(). Any
reason the caller isn't refactored to use this helper, similar to how
workingset_refault() is modified? It seems like a potential landmine to
duplicate the logic here for cachestat purposes and somewhere else for
actual workingset management.

> +}
> +
>  static void lru_gen_refault(struct folio *folio, void *shadow)
>  {
>  	int hist, tier, refs;
> @@ -306,6 +333,11 @@ static void *lru_gen_eviction(struct folio *folio)
>  	return NULL;
>  }
>  
> +static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
> +{
> +	return true;
> +}

I guess this is a no-op for !MGLRU but given the context (i.e. special
treatment for "recent" refaults), perhaps false is a more sane default?

> +
>  static void lru_gen_refault(struct folio *folio, void *shadow)
>  {
>  }
> @@ -373,40 +405,31 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
>  				folio_test_workingset(folio));
>  }
>  
> -/**
> - * workingset_refault - Evaluate the refault of a previously evicted folio.
> - * @folio: The freshly allocated replacement folio.
> - * @shadow: Shadow entry of the evicted folio.
> +/*
> + * Test if the folio is recently evicted by checking if
> + * refault distance of shadow exceeds workingset size.
>   *
> - * Calculates and evaluates the refault distance of the previously
> - * evicted folio in the context of the node and the memcg whose memory
> - * pressure caused the eviction.
> + * As a side effect, populate workingset with the value
> + * unpacked from shadow.
>   */
> -void workingset_refault(struct folio *folio, void *shadow)
> +bool workingset_test_recent(void *shadow, bool file, bool *workingset)
>  {
> -	bool file = folio_is_file_lru(folio);
>  	struct mem_cgroup *eviction_memcg;
>  	struct lruvec *eviction_lruvec;
>  	unsigned long refault_distance;
>  	unsigned long workingset_size;
> -	struct pglist_data *pgdat;
> -	struct mem_cgroup *memcg;
> -	unsigned long eviction;
> -	struct lruvec *lruvec;
>  	unsigned long refault;
> -	bool workingset;
> +
>  	int memcgid;
> -	long nr;
> +	struct pglist_data *pgdat;
> +	unsigned long eviction;
>  
> -	if (lru_gen_enabled()) {
> -		lru_gen_refault(folio, shadow);
> -		return;
> -	}
> +	if (lru_gen_enabled())
> +		return lru_gen_test_recent(shadow, file, workingset);

Hmm.. so this function is only called by workingset_refault() when
lru_gen_enabled() == false, otherwise it calls into lru_gen_refault(),
which as noted above duplicates some of the recency logic.

I'm assuming this lru_gen_test_recent() call is so filemap_cachestat()
can just call workingset_test_recent(). That seems reasonable, but makes
me wonder...

>  
> -	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset);
> +	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
>  	eviction <<= bucket_order;
>  
> -	rcu_read_lock();
>  	/*
>  	 * Look up the memcg associated with the stored ID. It might
>  	 * have been deleted since the folio's eviction.
> @@ -425,7 +448,8 @@ void workingset_refault(struct folio *folio, void *shadow)
>  	 */
>  	eviction_memcg = mem_cgroup_from_id(memcgid);
>  	if (!mem_cgroup_disabled() && !eviction_memcg)
> -		goto out;
> +		return false;
> +
>  	eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
>  	refault = atomic_long_read(&eviction_lruvec->nonresident_age);
>  
> @@ -447,21 +471,6 @@ void workingset_refault(struct folio *folio, void *shadow)
>  	 */
>  	refault_distance = (refault - eviction) & EVICTION_MASK;
>  
> -	/*
> -	 * The activation decision for this folio is made at the level
> -	 * where the eviction occurred, as that is where the LRU order
> -	 * during folio reclaim is being determined.
> -	 *
> -	 * However, the cgroup that will own the folio is the one that
> -	 * is actually experiencing the refault event.
> -	 */
> -	nr = folio_nr_pages(folio);
> -	memcg = folio_memcg(folio);
> -	pgdat = folio_pgdat(folio);
> -	lruvec = mem_cgroup_lruvec(memcg, pgdat);
> -
> -	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
> -
>  	mem_cgroup_flush_stats_delayed();
>  	/*
>  	 * Compare the distance to the existing workingset size. We
> @@ -483,8 +492,51 @@ void workingset_refault(struct folio *folio, void *shadow)
>  						     NR_INACTIVE_ANON);
>  		}
>  	}
> -	if (refault_distance > workingset_size)
> +
> +	return refault_distance <= workingset_size;
> +}
> +
> +/**
> + * workingset_refault - Evaluate the refault of a previously evicted folio.
> + * @folio: The freshly allocated replacement folio.
> + * @shadow: Shadow entry of the evicted folio.
> + *
> + * Calculates and evaluates the refault distance of the previously
> + * evicted folio in the context of the node and the memcg whose memory
> + * pressure caused the eviction.
> + */
> +void workingset_refault(struct folio *folio, void *shadow)
> +{
> +	bool file = folio_is_file_lru(folio);
> +	struct pglist_data *pgdat;
> +	struct mem_cgroup *memcg;
> +	struct lruvec *lruvec;
> +	bool workingset;
> +	long nr;
> +
> +	if (lru_gen_enabled()) {
> +		lru_gen_refault(folio, shadow);
> +		return;
> +	}

... if perhaps this should call workingset_test_recent() a bit earlier,
since it also covers the lru_gen_*() case..? That may or may not be
cleaner. It _seems like_ it might produce a bit more consistent logic,
but just a thought and I could easily be missing details.

> +
> +	rcu_read_lock();
> +
> +	nr = folio_nr_pages(folio);
> +	memcg = folio_memcg(folio);
> +	pgdat = folio_pgdat(folio);
> +	lruvec = mem_cgroup_lruvec(memcg, pgdat);
> +
> +	if (!workingset_test_recent(shadow, file, &workingset)) {
> +		/*
> +		 * The activation decision for this folio is made at the level
> +		 * where the eviction occurred, as that is where the LRU order
> +		 * during folio reclaim is being determined.
> +		 *
> +		 * However, the cgroup that will own the folio is the one that
> +		 * is actually experiencing the refault event.
> +		 */

IIUC, this comment is explaining the difference between using the
eviction lru (based on the shadow entry) to calculate recency vs. the
lru for the current folio to process the refault. If so, perhaps it
should go right above the workingset_test_recent() call? (Then the if
braces could go away as well..).

>  		goto out;
> +	}
>  
>  	folio_set_active(folio);
>  	workingset_age_nonresident(lruvec, nr);
> @@ -498,6 +550,7 @@ void workingset_refault(struct folio *folio, void *shadow)
>  		mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file, nr);
>  	}
>  out:
> +	mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);

Why not just leave this up earlier in the function (i.e. before the
recency check) as it was originally?

Brian

>  	rcu_read_unlock();
>  }
>  
> -- 
> 2.30.2
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v6 2/3] cachestat: implement cachestat syscall
  2023-01-17 19:59 ` [PATCH v6 2/3] cachestat: implement cachestat syscall Nhat Pham
@ 2023-01-20 14:36   ` Brian Foster
  2023-01-20 18:26     ` Nhat Pham
  0 siblings, 1 reply; 12+ messages in thread
From: Brian Foster @ 2023-01-20 14:36 UTC (permalink / raw)
  To: Nhat Pham
  Cc: akpm, hannes, linux-mm, linux-kernel, willy, linux-api, kernel-team

On Tue, Jan 17, 2023 at 11:59:58AM -0800, Nhat Pham wrote:
> There is currently no good way to query the page cache state of large
> file sets and directory trees. There is mincore(), but it scales poorly:
> the kernel writes out a lot of bitmap data that userspace has to
> aggregate, when the user really doesn not care about per-page
> information in that case. The user also needs to mmap and unmap each
> file as it goes along, which can be quite slow as well.
> 
> This patch implements a new syscall that queries cache state of a file
> and summarizes the number of cached pages, number of dirty pages, number
> of pages marked for writeback, number of (recently) evicted pages, etc.
> in a given range.
> 
> NAME
>     cachestat - query the page cache statistics of a file.
> 
> SYNOPSIS
>     #include <sys/mman.h>
> 
>     struct cachestat {
>         __u64 nr_cache;
>         __u64 nr_dirty;
>         __u64 nr_writeback;
>         __u64 nr_evicted;
>         __u64 nr_recently_evicted;
>     };
> 
>     int cachestat(unsigned int fd, off_t off, size_t len,
>           unsigned int cstat_version, struct cachestat *cstat,
>           unsigned int flags);
> 
> DESCRIPTION
>     cachestat() queries the number of cached pages, number of dirty
>     pages, number of pages marked for writeback, number of evicted
>     pages, number of recently evicted pages, in the bytes range given by
>     `off` and `len`.
> 
>     An evicted page is a page that is previously in the page cache but
>     has been evicted since. A page is recently evicted if its last
>     eviction was recent enough that its reentry to the cache would
>     indicate that it is actively being used by the system, and that
>     there is memory pressure on the system.
> 
>     These values are returned in a cachestat struct, whose address is
>     given by the `cstat` argument.
> 
>     The `off` and `len` arguments must be non-negative integers. If
>     `len` > 0, the queried range is [`off`, `off` + `len`]. If `len` ==
>     0, we will query in the range from `off` to the end of the file.
> 
>     `cstat_version` is an unsigned integer indicating the specific
>     version of the cachestat struct. It must be at least 1, and does
>     not exceed the latest version number (which is currently 1). For
>     now, user should just pass 1.
> 

Still not sure about this vs. just requiring the structure size to match
(and maybe padding it), but perhaps linux-api can comment on the best
way to future proof.

I'm not the person to ack a syscall regardless, but since I've had
various previous comments on this patch and no further issues stand out
to me:

Reviewed-by: Brian Foster <bfoster@redhat.com>

>     The `flags` argument is unused for now, but is included for future
>     extensibility. User should pass 0 (i.e no flag specified).
> 
> RETURN VALUE
>     On success, cachestat returns 0. On error, -1 is returned, and errno
>     is set to indicate the error.
> 
> ERRORS
>     EFAULT cstat points to an invalid address.
> 
>     EINVAL invalid `cstat_version` or `flags`
> 
>     EBADF  invalid file descriptor.
> 
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---
>  arch/alpha/kernel/syscalls/syscall.tbl      |   1 +
>  arch/arm/tools/syscall.tbl                  |   1 +
>  arch/ia64/kernel/syscalls/syscall.tbl       |   1 +
>  arch/m68k/kernel/syscalls/syscall.tbl       |   1 +
>  arch/microblaze/kernel/syscalls/syscall.tbl |   1 +
>  arch/parisc/kernel/syscalls/syscall.tbl     |   1 +
>  arch/powerpc/kernel/syscalls/syscall.tbl    |   1 +
>  arch/s390/kernel/syscalls/syscall.tbl       |   1 +
>  arch/sh/kernel/syscalls/syscall.tbl         |   1 +
>  arch/sparc/kernel/syscalls/syscall.tbl      |   1 +
>  arch/x86/entry/syscalls/syscall_32.tbl      |   1 +
>  arch/x86/entry/syscalls/syscall_64.tbl      |   1 +
>  arch/xtensa/kernel/syscalls/syscall.tbl     |   1 +
>  include/linux/fs.h                          |   3 +
>  include/linux/syscalls.h                    |   4 +
>  include/uapi/asm-generic/unistd.h           |   5 +-
>  include/uapi/linux/mman.h                   |   9 ++
>  init/Kconfig                                |  10 ++
>  kernel/sys_ni.c                             |   1 +
>  mm/filemap.c                                | 154 ++++++++++++++++++++
>  20 files changed, 198 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl
> index 8ebacf37a8cf..1f13995d00d7 100644
> --- a/arch/alpha/kernel/syscalls/syscall.tbl
> +++ b/arch/alpha/kernel/syscalls/syscall.tbl
> @@ -490,3 +490,4 @@
>  558	common	process_mrelease		sys_process_mrelease
>  559	common  futex_waitv                     sys_futex_waitv
>  560	common	set_mempolicy_home_node		sys_ni_syscall
> +561	common	cachestat			sys_cachestat
> diff --git a/arch/arm/tools/syscall.tbl b/arch/arm/tools/syscall.tbl
> index ac964612d8b0..8ebed8a13874 100644
> --- a/arch/arm/tools/syscall.tbl
> +++ b/arch/arm/tools/syscall.tbl
> @@ -464,3 +464,4 @@
>  448	common	process_mrelease		sys_process_mrelease
>  449	common	futex_waitv			sys_futex_waitv
>  450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
> +451	common	cachestat			sys_cachestat
> diff --git a/arch/ia64/kernel/syscalls/syscall.tbl b/arch/ia64/kernel/syscalls/syscall.tbl
> index 72c929d9902b..f8c74ffeeefb 100644
> --- a/arch/ia64/kernel/syscalls/syscall.tbl
> +++ b/arch/ia64/kernel/syscalls/syscall.tbl
> @@ -371,3 +371,4 @@
>  448	common	process_mrelease		sys_process_mrelease
>  449	common  futex_waitv                     sys_futex_waitv
>  450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
> +451	common	cachestat			sys_cachestat
> diff --git a/arch/m68k/kernel/syscalls/syscall.tbl b/arch/m68k/kernel/syscalls/syscall.tbl
> index b1f3940bc298..4f504783371f 100644
> --- a/arch/m68k/kernel/syscalls/syscall.tbl
> +++ b/arch/m68k/kernel/syscalls/syscall.tbl
> @@ -450,3 +450,4 @@
>  448	common	process_mrelease		sys_process_mrelease
>  449	common  futex_waitv                     sys_futex_waitv
>  450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
> +451	common	cachestat			sys_cachestat
> diff --git a/arch/microblaze/kernel/syscalls/syscall.tbl b/arch/microblaze/kernel/syscalls/syscall.tbl
> index 820145e47350..858d22bf275c 100644
> --- a/arch/microblaze/kernel/syscalls/syscall.tbl
> +++ b/arch/microblaze/kernel/syscalls/syscall.tbl
> @@ -456,3 +456,4 @@
>  448	common	process_mrelease		sys_process_mrelease
>  449	common  futex_waitv                     sys_futex_waitv
>  450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
> +451	common	cachestat			sys_cachestat
> diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl
> index 8a99c998da9b..7c84a72306d1 100644
> --- a/arch/parisc/kernel/syscalls/syscall.tbl
> +++ b/arch/parisc/kernel/syscalls/syscall.tbl
> @@ -448,3 +448,4 @@
>  448	common	process_mrelease		sys_process_mrelease
>  449	common	futex_waitv			sys_futex_waitv
>  450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
> +451	common	cachestat			sys_cachestat
> diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl
> index 2bca64f96164..937460f0a8ec 100644
> --- a/arch/powerpc/kernel/syscalls/syscall.tbl
> +++ b/arch/powerpc/kernel/syscalls/syscall.tbl
> @@ -530,3 +530,4 @@
>  448	common	process_mrelease		sys_process_mrelease
>  449	common  futex_waitv                     sys_futex_waitv
>  450 	nospu	set_mempolicy_home_node		sys_set_mempolicy_home_node
> +451	common	cachestat			sys_cachestat
> diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl
> index 799147658dee..7df0329d46cb 100644
> --- a/arch/s390/kernel/syscalls/syscall.tbl
> +++ b/arch/s390/kernel/syscalls/syscall.tbl
> @@ -453,3 +453,4 @@
>  448  common	process_mrelease	sys_process_mrelease		sys_process_mrelease
>  449  common	futex_waitv		sys_futex_waitv			sys_futex_waitv
>  450  common	set_mempolicy_home_node	sys_set_mempolicy_home_node	sys_set_mempolicy_home_node
> +451  common	cachestat		sys_cachestat			sys_cachestat
> diff --git a/arch/sh/kernel/syscalls/syscall.tbl b/arch/sh/kernel/syscalls/syscall.tbl
> index 2de85c977f54..97377e8c5025 100644
> --- a/arch/sh/kernel/syscalls/syscall.tbl
> +++ b/arch/sh/kernel/syscalls/syscall.tbl
> @@ -453,3 +453,4 @@
>  448	common	process_mrelease		sys_process_mrelease
>  449	common  futex_waitv                     sys_futex_waitv
>  450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
> +451	common	cachestat			sys_cachestat
> diff --git a/arch/sparc/kernel/syscalls/syscall.tbl b/arch/sparc/kernel/syscalls/syscall.tbl
> index 4398cc6fb68d..faa835f3c54a 100644
> --- a/arch/sparc/kernel/syscalls/syscall.tbl
> +++ b/arch/sparc/kernel/syscalls/syscall.tbl
> @@ -496,3 +496,4 @@
>  448	common	process_mrelease		sys_process_mrelease
>  449	common  futex_waitv                     sys_futex_waitv
>  450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
> +451	common	cachestat			sys_cachestat
> diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
> index 320480a8db4f..bc0a3c941b35 100644
> --- a/arch/x86/entry/syscalls/syscall_32.tbl
> +++ b/arch/x86/entry/syscalls/syscall_32.tbl
> @@ -455,3 +455,4 @@
>  448	i386	process_mrelease	sys_process_mrelease
>  449	i386	futex_waitv		sys_futex_waitv
>  450	i386	set_mempolicy_home_node		sys_set_mempolicy_home_node
> +451	i386	cachestat		sys_cachestat
> diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
> index c84d12608cd2..227538b0ce80 100644
> --- a/arch/x86/entry/syscalls/syscall_64.tbl
> +++ b/arch/x86/entry/syscalls/syscall_64.tbl
> @@ -372,6 +372,7 @@
>  448	common	process_mrelease	sys_process_mrelease
>  449	common	futex_waitv		sys_futex_waitv
>  450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
> +451	common	cachestat		sys_cachestat
>  
>  #
>  # Due to a historical design error, certain syscalls are numbered differently
> diff --git a/arch/xtensa/kernel/syscalls/syscall.tbl b/arch/xtensa/kernel/syscalls/syscall.tbl
> index 52c94ab5c205..2b69c3c035b6 100644
> --- a/arch/xtensa/kernel/syscalls/syscall.tbl
> +++ b/arch/xtensa/kernel/syscalls/syscall.tbl
> @@ -421,3 +421,4 @@
>  448	common	process_mrelease		sys_process_mrelease
>  449	common  futex_waitv                     sys_futex_waitv
>  450	common	set_mempolicy_home_node		sys_set_mempolicy_home_node
> +451	common	cachestat			sys_cachestat
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index e654435f1651..83300f1491e7 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -75,6 +75,7 @@ struct fs_context;
>  struct fs_parameter_spec;
>  struct fileattr;
>  struct iomap_ops;
> +struct cachestat;
>  
>  extern void __init inode_init(void);
>  extern void __init inode_init_early(void);
> @@ -830,6 +831,8 @@ void filemap_invalidate_lock_two(struct address_space *mapping1,
>  				 struct address_space *mapping2);
>  void filemap_invalidate_unlock_two(struct address_space *mapping1,
>  				   struct address_space *mapping2);
> +void filemap_cachestat(struct address_space *mapping, pgoff_t first_index,
> +		pgoff_t last_index, struct cachestat *cs);
>  
>  
>  /*
> diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
> index a34b0f9a9972..d3fe6ba8eb38 100644
> --- a/include/linux/syscalls.h
> +++ b/include/linux/syscalls.h
> @@ -72,6 +72,7 @@ struct open_how;
>  struct mount_attr;
>  struct landlock_ruleset_attr;
>  enum landlock_rule_type;
> +struct cachestat;
>  
>  #include <linux/types.h>
>  #include <linux/aio_abi.h>
> @@ -1056,6 +1057,9 @@ asmlinkage long sys_memfd_secret(unsigned int flags);
>  asmlinkage long sys_set_mempolicy_home_node(unsigned long start, unsigned long len,
>  					    unsigned long home_node,
>  					    unsigned long flags);
> +asmlinkage long sys_cachestat(unsigned int fd, off_t off, size_t len,
> +		unsigned int cstat_version, struct cachestat __user *cstat,
> +		unsigned int flags);
>  
>  /*
>   * Architecture-specific system calls
> diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
> index 45fa180cc56a..cd639fae9086 100644
> --- a/include/uapi/asm-generic/unistd.h
> +++ b/include/uapi/asm-generic/unistd.h
> @@ -886,8 +886,11 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
>  #define __NR_set_mempolicy_home_node 450
>  __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
>  
> +#define __NR_cachestat 451
> +__SYSCALL(__NR_cachestat, sys_cachestat)
> +
>  #undef __NR_syscalls
> -#define __NR_syscalls 451
> +#define __NR_syscalls 452
>  
>  /*
>   * 32 bit systems traditionally used different
> diff --git a/include/uapi/linux/mman.h b/include/uapi/linux/mman.h
> index f55bc680b5b0..fe03ed0b7587 100644
> --- a/include/uapi/linux/mman.h
> +++ b/include/uapi/linux/mman.h
> @@ -4,6 +4,7 @@
>  
>  #include <asm/mman.h>
>  #include <asm-generic/hugetlb_encode.h>
> +#include <linux/types.h>
>  
>  #define MREMAP_MAYMOVE		1
>  #define MREMAP_FIXED		2
> @@ -41,4 +42,12 @@
>  #define MAP_HUGE_2GB	HUGETLB_FLAG_ENCODE_2GB
>  #define MAP_HUGE_16GB	HUGETLB_FLAG_ENCODE_16GB
>  
> +struct cachestat {
> +	__u64 nr_cache;
> +	__u64 nr_dirty;
> +	__u64 nr_writeback;
> +	__u64 nr_evicted;
> +	__u64 nr_recently_evicted;
> +};
> +
>  #endif /* _UAPI_LINUX_MMAN_H */
> diff --git a/init/Kconfig b/init/Kconfig
> index 694f7c160c9c..da96ac29af1d 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1798,6 +1798,16 @@ config RSEQ
>  
>  	  If unsure, say Y.
>  
> +config CACHESTAT_SYSCALL
> +	bool "Enable cachestat() system call" if EXPERT
> +	default y
> +	help
> +	  Enable the cachestat system call, which queries the page cache
> +	  statistics of a file (number of cached pages, dirty pages,
> +	  pages marked for writeback, (recently) evicted pages).
> +
> +	  If unsure say Y here.
> +
>  config DEBUG_RSEQ
>  	default n
>  	bool "Enabled debugging of rseq() system call" if EXPERT
> diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
> index 860b2dcf3ac4..04bfb1e4d377 100644
> --- a/kernel/sys_ni.c
> +++ b/kernel/sys_ni.c
> @@ -299,6 +299,7 @@ COND_SYSCALL(set_mempolicy);
>  COND_SYSCALL(migrate_pages);
>  COND_SYSCALL(move_pages);
>  COND_SYSCALL(set_mempolicy_home_node);
> +COND_SYSCALL(cachestat);
>  
>  COND_SYSCALL(perf_event_open);
>  COND_SYSCALL(accept4);
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 08341616ae7a..0305eaf5b3f5 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -22,6 +22,7 @@
>  #include <linux/mm.h>
>  #include <linux/swap.h>
>  #include <linux/swapops.h>
> +#include <linux/syscalls.h>
>  #include <linux/mman.h>
>  #include <linux/pagemap.h>
>  #include <linux/file.h>
> @@ -55,6 +56,13 @@
>  #include <linux/buffer_head.h> /* for try_to_free_buffers */
>  
>  #include <asm/mman.h>
> +#include <uapi/linux/mman.h>
> +
> +#include "swap.h"
> +
> +#ifdef CONFIG_CACHESTAT_SYSCALL
> +#define LATEST_CACHESTAT_VERSION	1
> +#endif
>  
>  /*
>   * Shared mappings implemented 30.11.1994. It's not fully working yet,
> @@ -3949,3 +3957,149 @@ bool filemap_release_folio(struct folio *folio, gfp_t gfp)
>  	return try_to_free_buffers(folio);
>  }
>  EXPORT_SYMBOL(filemap_release_folio);
> +
> +/**
> + * filemap_cachestat() - compute the page cache statistics of a mapping
> + * @mapping:	The mapping to compute the statistics for.
> + * @first_index:	The starting page cache index.
> + * @last_index:	The final page index (inclusive).
> + * @cs:	the cachestat struct to write the result to.
> + *
> + * This will query the page cache statistics of a mapping in the
> + * page range of [first_index, last_index] (inclusive). The statistics
> + * queried include: number of dirty pages, number of pages marked for
> + * writeback, and the number of (recently) evicted pages.
> + */
> +void filemap_cachestat(struct address_space *mapping, pgoff_t first_index,
> +		pgoff_t last_index, struct cachestat *cs)
> +{
> +	XA_STATE(xas, &mapping->i_pages, first_index);
> +	struct folio *folio;
> +
> +	rcu_read_lock();
> +	xas_for_each(&xas, folio, last_index) {
> +		unsigned long nr_pages;
> +		pgoff_t folio_first_index, folio_last_index;
> +
> +		if (xas_retry(&xas, folio))
> +			continue;
> +
> +		nr_pages = folio_nr_pages(folio);
> +		folio_first_index = folio_pgoff(folio);
> +		folio_last_index = folio_first_index + nr_pages - 1;
> +
> +		/* Folios might straddle the range boundaries, only count covered subpages */
> +		if (folio_first_index < first_index)
> +			nr_pages -= first_index - folio_first_index;
> +
> +		if (folio_last_index > last_index)
> +			nr_pages -= folio_last_index - last_index;
> +
> +		if (xa_is_value(folio)) {
> +			/* page is evicted */
> +			void *shadow = (void *)folio;
> +			bool workingset; /* not used */
> +
> +			cs->nr_evicted += nr_pages;
> +
> +#ifdef CONFIG_SWAP /* implies CONFIG_MMU */
> +			if (shmem_mapping(mapping)) {
> +				/* shmem file - in swap cache */
> +				swp_entry_t swp = radix_to_swp_entry(folio);
> +
> +				shadow = get_shadow_from_swap_cache(swp);
> +			}
> +#endif
> +			if (workingset_test_recent(shadow, true, &workingset))
> +				cs->nr_recently_evicted += nr_pages;
> +
> +			goto resched;
> +		}
> +
> +		/* page is in cache */
> +		cs->nr_cache += nr_pages;
> +
> +		if (folio_test_dirty(folio))
> +			cs->nr_dirty += nr_pages;
> +
> +		if (folio_test_writeback(folio))
> +			cs->nr_writeback += nr_pages;
> +
> +resched:
> +		if (need_resched()) {
> +			xas_pause(&xas);
> +			cond_resched_rcu();
> +		}
> +	}
> +	rcu_read_unlock();
> +}
> +EXPORT_SYMBOL(filemap_cachestat);
> +
> +#ifdef CONFIG_CACHESTAT_SYSCALL
> +/*
> + * The cachestat(5) system call.
> + *
> + * cachestat() returns the page cache statistics of a file in the
> + * bytes range specified by `off` and `len`: number of cached pages,
> + * number of dirty pages, number of pages marked for writeback,
> + * number of evicted pages, and number of recently evicted pages.
> + *
> + * An evicted page is a page that is previously in the page cache
> + * but has been evicted since. A page is recently evicted if its last
> + * eviction was recent enough that its reentry to the cache would
> + * indicate that it is actively being used by the system, and that
> + * there is memory pressure on the system.
> + *
> + * `off` and `len` must be non-negative integers. If `len` > 0,
> + * the queried range is [`off`, `off` + `len`]. If `len` == 0,
> + * we will query in the range from `off` to the end of the file.
> + *
> + * `cstat_version` is an unsigned integer indicating the specific version
> + * of the cachestat struct. It must be at least 1, and does not exceed the
> + * latest version number (which is currently 1). For now, user should
> + * just pass 1.
> + *
> + * The `flags` argument is unused for now, but is included for future
> + * extensibility. User should pass 0 (i.e no flag specified).
> + *
> + * Because the status of a page can change after cachestat() checks it
> + * but before it returns to the application, the returned values may
> + * contain stale information.
> + *
> + * return values:
> + *  zero    - success
> + *  -EFAULT - cstat points to an illegal address
> + *  -EINVAL - invalid arguments
> + *  -EBADF	- invalid file descriptor
> + */
> +SYSCALL_DEFINE6(cachestat, unsigned int, fd, off_t, off, size_t, len,
> +		unsigned int, cstat_version, struct cachestat __user *, cstat,
> +		unsigned int, flags)
> +{
> +	struct fd f = fdget(fd);
> +	struct address_space *mapping;
> +	struct cachestat cs;
> +	pgoff_t first_index = off >> PAGE_SHIFT;
> +	pgoff_t last_index =
> +		len == 0 ? ULONG_MAX : (off + len - 1) >> PAGE_SHIFT;
> +
> +	if (!f.file)
> +		return -EBADF;
> +
> +	if (off < 0 || flags != 0 || cstat_version < 1 ||
> +			cstat_version > LATEST_CACHESTAT_VERSION) {
> +		fdput(f);
> +		return -EINVAL;
> +	}
> +
> +	memset(&cs, 0, sizeof(struct cachestat));
> +	mapping = f.file->f_mapping;
> +	filemap_cachestat(mapping, first_index, last_index, &cs);
> +	fdput(f);
> +
> +	if (copy_to_user(cstat, &cs, sizeof(struct cachestat)))
> +		return -EFAULT;
> +
> +	return 0;
> +}
> +#endif /* CONFIG_CACHESTAT_SYSCALL */
> -- 
> 2.30.2
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check
  2023-01-20 14:34   ` Brian Foster
@ 2023-01-20 16:27     ` Johannes Weiner
  2023-01-20 18:53       ` Brian Foster
  2023-01-21  3:43       ` Yu Zhao
  2023-01-20 17:29     ` Nhat Pham
  1 sibling, 2 replies; 12+ messages in thread
From: Johannes Weiner @ 2023-01-20 16:27 UTC (permalink / raw)
  To: Brian Foster
  Cc: Nhat Pham, akpm, linux-mm, linux-kernel, willy, linux-api,
	kernel-team, Yu Zhao

On Fri, Jan 20, 2023 at 09:34:18AM -0500, Brian Foster wrote:
> On Tue, Jan 17, 2023 at 11:59:57AM -0800, Nhat Pham wrote:
> > +	int memcgid;
> > +	struct pglist_data *pgdat;
> > +	unsigned long token;
> > +
> > +	unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
> > +	eviction_memcg = mem_cgroup_from_id(memcgid);
> > +
> > +	lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> > +	lrugen = &lruvec->lrugen;
> > +
> > +	min_seq = READ_ONCE(lrugen->min_seq[file]);
> > +	return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));
> 
> I think this might be more readable without the double negative.
> 
> Also it looks like this logic is pulled from lru_gen_refault(). Any
> reason the caller isn't refactored to use this helper, similar to how
> workingset_refault() is modified? It seems like a potential landmine to
> duplicate the logic here for cachestat purposes and somewhere else for
> actual workingset management.

The initial version was refactored. Yu explicitly requested it be
duplicated [1] to cut down on some boiler plate.

I have to agree with Brian on this one, though. The factored version
is better for maintenance than duplicating the core logic here. Even
if it ends up a bit more boiler plate - it's harder to screw that up,
and easier to catch at compile time, than the duplicates diverging.

[1] https://lore.kernel.org/lkml/CAOUHufZKTqoD2rFwrX9-eCknBmeWqP88rZ7X7A_5KHHbGBUP=A@mail.gmail.com/

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check
  2023-01-20 14:34   ` Brian Foster
  2023-01-20 16:27     ` Johannes Weiner
@ 2023-01-20 17:29     ` Nhat Pham
  2023-01-20 18:56       ` Brian Foster
  1 sibling, 1 reply; 12+ messages in thread
From: Nhat Pham @ 2023-01-20 17:29 UTC (permalink / raw)
  To: Brian Foster
  Cc: akpm, hannes, linux-mm, linux-kernel, willy, linux-api, kernel-team

 On Fri, Jan 20, 2023 at 6:33 AM Brian Foster <bfoster@redhat.com> wrote:
>
> On Tue, Jan 17, 2023 at 11:59:57AM -0800, Nhat Pham wrote:
> > In preparation for computing recently evicted pages in cachestat,
> > refactor workingset_refault and lru_gen_refault to expose a helper
> > function that would test if an evicted page is recently evicted.
> >
> > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > ---
>
> Hi Nhat,
>
> I'm not terribly familiar with the workingset management code, but a few
> thoughts now that I've stared at it a bit...
>
> >  include/linux/swap.h |   1 +
> >  mm/workingset.c      | 129 ++++++++++++++++++++++++++++++-------------
> >  2 files changed, 92 insertions(+), 38 deletions(-)
> >
> > diff --git a/include/linux/swap.h b/include/linux/swap.h
> > index a18cf4b7c724..dae6f6f955eb 100644
> > --- a/include/linux/swap.h
> > +++ b/include/linux/swap.h
> > @@ -361,6 +361,7 @@ static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
> >  }
> >
> >  /* linux/mm/workingset.c */
> > +bool workingset_test_recent(void *shadow, bool file, bool *workingset);
> >  void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
> >  void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg);
> >  void workingset_refault(struct folio *folio, void *shadow);
> > diff --git a/mm/workingset.c b/mm/workingset.c
> > index 79585d55c45d..006482c4e0bd 100644
> > --- a/mm/workingset.c
> > +++ b/mm/workingset.c
> > @@ -244,6 +244,33 @@ static void *lru_gen_eviction(struct folio *folio)
> >       return pack_shadow(mem_cgroup_id(memcg), pgdat, token, refs);
> >  }
> >
> > +/*
> > + * Test if the folio is recently evicted.
> > + *
> > + * As a side effect, also populates the references with
> > + * values unpacked from the shadow of the evicted folio.
> > + */
> > +static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
> > +{
> > +     struct mem_cgroup *eviction_memcg;
> > +     struct lruvec *lruvec;
> > +     struct lru_gen_struct *lrugen;
> > +     unsigned long min_seq;
> > +
>
> Extra whitespace looks a bit funny here.
>
> > +     int memcgid;
> > +     struct pglist_data *pgdat;
> > +     unsigned long token;
> > +
> > +     unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
> > +     eviction_memcg = mem_cgroup_from_id(memcgid);
> > +
> > +     lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> > +     lrugen = &lruvec->lrugen;
> > +
> > +     min_seq = READ_ONCE(lrugen->min_seq[file]);
> > +     return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));
>
> I think this might be more readable without the double negative.

Hmm indeed. I was just making sure that I did not mess up Yu's
original logic here (by just wrapping it in a parentheses and
negate the whole thing), but if I understand it correctly it's just
an equality check. I'll fix it in the next version to make it cleaner.

>
> Also it looks like this logic is pulled from lru_gen_refault(). Any
> reason the caller isn't refactored to use this helper, similar to how
> workingset_refault() is modified? It seems like a potential landmine to
> duplicate the logic here for cachestat purposes and somewhere else for
> actual workingset management.

In V2, it is actually refactored analogously as well - but we had a discussion
about it here:

https://lkml.org/lkml/2022/12/5/1321

>
> > +}
> > +
> >  static void lru_gen_refault(struct folio *folio, void *shadow)
> >  {
> >       int hist, tier, refs;
> > @@ -306,6 +333,11 @@ static void *lru_gen_eviction(struct folio *folio)
> >       return NULL;
> >  }
> >
> > +static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
> > +{
> > +     return true;
> > +}
>
> I guess this is a no-op for !MGLRU but given the context (i.e. special
> treatment for "recent" refaults), perhaps false is a more sane default?

Hmm, fair point. Let me fix that in the next version.

>
> > +
> >  static void lru_gen_refault(struct folio *folio, void *shadow)
> >  {
> >  }
> > @@ -373,40 +405,31 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
> >                               folio_test_workingset(folio));
> >  }
> >
> > -/**
> > - * workingset_refault - Evaluate the refault of a previously evicted folio.
> > - * @folio: The freshly allocated replacement folio.
> > - * @shadow: Shadow entry of the evicted folio.
> > +/*
> > + * Test if the folio is recently evicted by checking if
> > + * refault distance of shadow exceeds workingset size.
> >   *
> > - * Calculates and evaluates the refault distance of the previously
> > - * evicted folio in the context of the node and the memcg whose memory
> > - * pressure caused the eviction.
> > + * As a side effect, populate workingset with the value
> > + * unpacked from shadow.
> >   */
> > -void workingset_refault(struct folio *folio, void *shadow)
> > +bool workingset_test_recent(void *shadow, bool file, bool *workingset)
> >  {
> > -     bool file = folio_is_file_lru(folio);
> >       struct mem_cgroup *eviction_memcg;
> >       struct lruvec *eviction_lruvec;
> >       unsigned long refault_distance;
> >       unsigned long workingset_size;
> > -     struct pglist_data *pgdat;
> > -     struct mem_cgroup *memcg;
> > -     unsigned long eviction;
> > -     struct lruvec *lruvec;
> >       unsigned long refault;
> > -     bool workingset;
> > +
> >       int memcgid;
> > -     long nr;
> > +     struct pglist_data *pgdat;
> > +     unsigned long eviction;
> >
> > -     if (lru_gen_enabled()) {
> > -             lru_gen_refault(folio, shadow);
> > -             return;
> > -     }
> > +     if (lru_gen_enabled())
> > +             return lru_gen_test_recent(shadow, file, workingset);
>
> Hmm.. so this function is only called by workingset_refault() when
> lru_gen_enabled() == false, otherwise it calls into lru_gen_refault(),
> which as noted above duplicates some of the recency logic.
>
> I'm assuming this lru_gen_test_recent() call is so filemap_cachestat()
> can just call workingset_test_recent(). That seems reasonable, but makes
> me wonder...

You're right. It's a bit clunky...

>
> >
> > -     unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset);
> > +     unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
> >       eviction <<= bucket_order;
> >
> > -     rcu_read_lock();
> >       /*
> >        * Look up the memcg associated with the stored ID. It might
> >        * have been deleted since the folio's eviction.
> > @@ -425,7 +448,8 @@ void workingset_refault(struct folio *folio, void *shadow)
> >        */
> >       eviction_memcg = mem_cgroup_from_id(memcgid);
> >       if (!mem_cgroup_disabled() && !eviction_memcg)
> > -             goto out;
> > +             return false;
> > +
> >       eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> >       refault = atomic_long_read(&eviction_lruvec->nonresident_age);
> >
> > @@ -447,21 +471,6 @@ void workingset_refault(struct folio *folio, void *shadow)
> >        */
> >       refault_distance = (refault - eviction) & EVICTION_MASK;
> >
> > -     /*
> > -      * The activation decision for this folio is made at the level
> > -      * where the eviction occurred, as that is where the LRU order
> > -      * during folio reclaim is being determined.
> > -      *
> > -      * However, the cgroup that will own the folio is the one that
> > -      * is actually experiencing the refault event.
> > -      */
> > -     nr = folio_nr_pages(folio);
> > -     memcg = folio_memcg(folio);
> > -     pgdat = folio_pgdat(folio);
> > -     lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > -
> > -     mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
> > -
> >       mem_cgroup_flush_stats_delayed();
> >       /*
> >        * Compare the distance to the existing workingset size. We
> > @@ -483,8 +492,51 @@ void workingset_refault(struct folio *folio, void *shadow)
> >                                                    NR_INACTIVE_ANON);
> >               }
> >       }
> > -     if (refault_distance > workingset_size)
> > +
> > +     return refault_distance <= workingset_size;
> > +}
> > +
> > +/**
> > + * workingset_refault - Evaluate the refault of a previously evicted folio.
> > + * @folio: The freshly allocated replacement folio.
> > + * @shadow: Shadow entry of the evicted folio.
> > + *
> > + * Calculates and evaluates the refault distance of the previously
> > + * evicted folio in the context of the node and the memcg whose memory
> > + * pressure caused the eviction.
> > + */
> > +void workingset_refault(struct folio *folio, void *shadow)
> > +{
> > +     bool file = folio_is_file_lru(folio);
> > +     struct pglist_data *pgdat;
> > +     struct mem_cgroup *memcg;
> > +     struct lruvec *lruvec;
> > +     bool workingset;
> > +     long nr;
> > +
> > +     if (lru_gen_enabled()) {
> > +             lru_gen_refault(folio, shadow);
> > +             return;
> > +     }
>
> ... if perhaps this should call workingset_test_recent() a bit earlier,
> since it also covers the lru_gen_*() case..? That may or may not be
> cleaner. It _seems like_ it might produce a bit more consistent logic,
> but just a thought and I could easily be missing details.

Hmm you mean before/in place of the lru_gen_refault call?
workingset_test_recent only covers lru_gen_test_recent,
not the rest of the logic of lru_gen_refault I believe.


>
> > +
> > +     rcu_read_lock();
> > +
> > +     nr = folio_nr_pages(folio);
> > +     memcg = folio_memcg(folio);
> > +     pgdat = folio_pgdat(folio);
> > +     lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > +
> > +     if (!workingset_test_recent(shadow, file, &workingset)) {
> > +             /*
> > +              * The activation decision for this folio is made at the level
> > +              * where the eviction occurred, as that is where the LRU order
> > +              * during folio reclaim is being determined.
> > +              *
> > +              * However, the cgroup that will own the folio is the one that
> > +              * is actually experiencing the refault event.
> > +              */
>
> IIUC, this comment is explaining the difference between using the
> eviction lru (based on the shadow entry) to calculate recency vs. the
> lru for the current folio to process the refault. If so, perhaps it
> should go right above the workingset_test_recent() call? (Then the if
> braces could go away as well..).

You're right! I think it should go above `nr = folio_nr_pages(folio);` call.

>
> >               goto out;
> > +     }
> >
> >       folio_set_active(folio);
> >       workingset_age_nonresident(lruvec, nr);
> > @@ -498,6 +550,7 @@ void workingset_refault(struct folio *folio, void *shadow)
> >               mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file, nr);
> >       }
> >  out:
> > +     mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
>
> Why not just leave this up earlier in the function (i.e. before the
> recency check) as it was originally?

Let me double check, but I think this is a relic from the old (and incorrect)
version of workingset code.

Originally, mod_lruvec_state uses the lruvec computed from a variable
(pgdat) that was unpacked from the shadow. So this mod_lruvec_state
has to go after the unpack_shadow call (which has been moved inside
of workingset_test_recent).

This is actually wrong - we actually want the pgdat from the folio. It
has been fixed in a separate patch:

https://lore.kernel.org/all/20230104222944.2380117-1-nphamcs@gmail.com/T/#u

But I didn't update it here. Let me stare at it a bit more to make sure,
and then fix it in the next version. It should not change the behavior,
but it should be cleaner.

>
> Brian
>
> >       rcu_read_unlock();
> >  }
> >
> > --
> > 2.30.2
> >
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v6 2/3] cachestat: implement cachestat syscall
  2023-01-20 14:36   ` Brian Foster
@ 2023-01-20 18:26     ` Nhat Pham
  0 siblings, 0 replies; 12+ messages in thread
From: Nhat Pham @ 2023-01-20 18:26 UTC (permalink / raw)
  To: Brian Foster
  Cc: akpm, hannes, linux-mm, linux-kernel, willy, linux-api, kernel-team

On Fri, Jan 20, 2023 at 6:35 AM Brian Foster <bfoster@redhat.com> wrote:
>
> On Tue, Jan 17, 2023 at 11:59:58AM -0800, Nhat Pham wrote:
> > There is currently no good way to query the page cache state of large
> > file sets and directory trees. There is mincore(), but it scales poorly:
> > the kernel writes out a lot of bitmap data that userspace has to
> > aggregate, when the user really doesn not care about per-page
> > information in that case. The user also needs to mmap and unmap each
> > file as it goes along, which can be quite slow as well.
> >
> > This patch implements a new syscall that queries cache state of a file
> > and summarizes the number of cached pages, number of dirty pages, number
> > of pages marked for writeback, number of (recently) evicted pages, etc.
> > in a given range.
> >
> > NAME
> >     cachestat - query the page cache statistics of a file.
> >
> > SYNOPSIS
> >     #include <sys/mman.h>
> >
> >     struct cachestat {
> >         __u64 nr_cache;
> >         __u64 nr_dirty;
> >         __u64 nr_writeback;
> >         __u64 nr_evicted;
> >         __u64 nr_recently_evicted;
> >     };
> >
> >     int cachestat(unsigned int fd, off_t off, size_t len,
> >           unsigned int cstat_version, struct cachestat *cstat,
> >           unsigned int flags);
> >
> > DESCRIPTION
> >     cachestat() queries the number of cached pages, number of dirty
> >     pages, number of pages marked for writeback, number of evicted
> >     pages, number of recently evicted pages, in the bytes range given by
> >     `off` and `len`.
> >
> >     An evicted page is a page that is previously in the page cache but
> >     has been evicted since. A page is recently evicted if its last
> >     eviction was recent enough that its reentry to the cache would
> >     indicate that it is actively being used by the system, and that
> >     there is memory pressure on the system.
> >
> >     These values are returned in a cachestat struct, whose address is
> >     given by the `cstat` argument.
> >
> >     The `off` and `len` arguments must be non-negative integers. If
> >     `len` > 0, the queried range is [`off`, `off` + `len`]. If `len` ==
> >     0, we will query in the range from `off` to the end of the file.
> >
> >     `cstat_version` is an unsigned integer indicating the specific
> >     version of the cachestat struct. It must be at least 1, and does
> >     not exceed the latest version number (which is currently 1). For
> >     now, user should just pass 1.
> >
>
> Still not sure about this vs. just requiring the structure size to match
> (and maybe padding it), but perhaps linux-api can comment on the best
> way to future proof.
>
> I'm not the person to ack a syscall regardless, but since I've had
> various previous comments on this patch and no further issues stand out
> to me:
>
> Reviewed-by: Brian Foster <bfoster@redhat.com>
Thanks for the review and feedback, Brian!
>
> >     The `flags` argument is unused for now, but is included for future
> >     extensibility. User should pass 0 (i.e no flag specified).
> >
> > RETURN VALUE
> >     On success, cachestat returns 0. On error, -1 is returned, and errno
> >     is set to indicate the error.
> >
> > ERRORS
> >     EFAULT cstat points to an invalid address.
> >
> >     EINVAL invalid `cstat_version` or `flags`
> >
> >     EBADF  invalid file descriptor.
> >
> > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > ---
> >  arch/alpha/kernel/syscalls/syscall.tbl      |   1 +
> >  arch/arm/tools/syscall.tbl                  |   1 +
> >  arch/ia64/kernel/syscalls/syscall.tbl       |   1 +
> >  arch/m68k/kernel/syscalls/syscall.tbl       |   1 +
> >  arch/microblaze/kernel/syscalls/syscall.tbl |   1 +
> >  arch/parisc/kernel/syscalls/syscall.tbl     |   1 +
> >  arch/powerpc/kernel/syscalls/syscall.tbl    |   1 +
> >  arch/s390/kernel/syscalls/syscall.tbl       |   1 +
> >  arch/sh/kernel/syscalls/syscall.tbl         |   1 +
> >  arch/sparc/kernel/syscalls/syscall.tbl      |   1 +
> >  arch/x86/entry/syscalls/syscall_32.tbl      |   1 +
> >  arch/x86/entry/syscalls/syscall_64.tbl      |   1 +
> >  arch/xtensa/kernel/syscalls/syscall.tbl     |   1 +
> >  include/linux/fs.h                          |   3 +
> >  include/linux/syscalls.h                    |   4 +
> >  include/uapi/asm-generic/unistd.h           |   5 +-
> >  include/uapi/linux/mman.h                   |   9 ++
> >  init/Kconfig                                |  10 ++
> >  kernel/sys_ni.c                             |   1 +
> >  mm/filemap.c                                | 154 ++++++++++++++++++++
> >  20 files changed, 198 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl
> > index 8ebacf37a8cf..1f13995d00d7 100644
> > --- a/arch/alpha/kernel/syscalls/syscall.tbl
> > +++ b/arch/alpha/kernel/syscalls/syscall.tbl
> > @@ -490,3 +490,4 @@
> >  558  common  process_mrelease                sys_process_mrelease
> >  559  common  futex_waitv                     sys_futex_waitv
> >  560  common  set_mempolicy_home_node         sys_ni_syscall
> > +561  common  cachestat                       sys_cachestat
> > diff --git a/arch/arm/tools/syscall.tbl b/arch/arm/tools/syscall.tbl
> > index ac964612d8b0..8ebed8a13874 100644
> > --- a/arch/arm/tools/syscall.tbl
> > +++ b/arch/arm/tools/syscall.tbl
> > @@ -464,3 +464,4 @@
> >  448  common  process_mrelease                sys_process_mrelease
> >  449  common  futex_waitv                     sys_futex_waitv
> >  450  common  set_mempolicy_home_node         sys_set_mempolicy_home_node
> > +451  common  cachestat                       sys_cachestat
> > diff --git a/arch/ia64/kernel/syscalls/syscall.tbl b/arch/ia64/kernel/syscalls/syscall.tbl
> > index 72c929d9902b..f8c74ffeeefb 100644
> > --- a/arch/ia64/kernel/syscalls/syscall.tbl
> > +++ b/arch/ia64/kernel/syscalls/syscall.tbl
> > @@ -371,3 +371,4 @@
> >  448  common  process_mrelease                sys_process_mrelease
> >  449  common  futex_waitv                     sys_futex_waitv
> >  450  common  set_mempolicy_home_node         sys_set_mempolicy_home_node
> > +451  common  cachestat                       sys_cachestat
> > diff --git a/arch/m68k/kernel/syscalls/syscall.tbl b/arch/m68k/kernel/syscalls/syscall.tbl
> > index b1f3940bc298..4f504783371f 100644
> > --- a/arch/m68k/kernel/syscalls/syscall.tbl
> > +++ b/arch/m68k/kernel/syscalls/syscall.tbl
> > @@ -450,3 +450,4 @@
> >  448  common  process_mrelease                sys_process_mrelease
> >  449  common  futex_waitv                     sys_futex_waitv
> >  450  common  set_mempolicy_home_node         sys_set_mempolicy_home_node
> > +451  common  cachestat                       sys_cachestat
> > diff --git a/arch/microblaze/kernel/syscalls/syscall.tbl b/arch/microblaze/kernel/syscalls/syscall.tbl
> > index 820145e47350..858d22bf275c 100644
> > --- a/arch/microblaze/kernel/syscalls/syscall.tbl
> > +++ b/arch/microblaze/kernel/syscalls/syscall.tbl
> > @@ -456,3 +456,4 @@
> >  448  common  process_mrelease                sys_process_mrelease
> >  449  common  futex_waitv                     sys_futex_waitv
> >  450  common  set_mempolicy_home_node         sys_set_mempolicy_home_node
> > +451  common  cachestat                       sys_cachestat
> > diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl
> > index 8a99c998da9b..7c84a72306d1 100644
> > --- a/arch/parisc/kernel/syscalls/syscall.tbl
> > +++ b/arch/parisc/kernel/syscalls/syscall.tbl
> > @@ -448,3 +448,4 @@
> >  448  common  process_mrelease                sys_process_mrelease
> >  449  common  futex_waitv                     sys_futex_waitv
> >  450  common  set_mempolicy_home_node         sys_set_mempolicy_home_node
> > +451  common  cachestat                       sys_cachestat
> > diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl
> > index 2bca64f96164..937460f0a8ec 100644
> > --- a/arch/powerpc/kernel/syscalls/syscall.tbl
> > +++ b/arch/powerpc/kernel/syscalls/syscall.tbl
> > @@ -530,3 +530,4 @@
> >  448  common  process_mrelease                sys_process_mrelease
> >  449  common  futex_waitv                     sys_futex_waitv
> >  450  nospu   set_mempolicy_home_node         sys_set_mempolicy_home_node
> > +451  common  cachestat                       sys_cachestat
> > diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl
> > index 799147658dee..7df0329d46cb 100644
> > --- a/arch/s390/kernel/syscalls/syscall.tbl
> > +++ b/arch/s390/kernel/syscalls/syscall.tbl
> > @@ -453,3 +453,4 @@
> >  448  common  process_mrelease        sys_process_mrelease            sys_process_mrelease
> >  449  common  futex_waitv             sys_futex_waitv                 sys_futex_waitv
> >  450  common  set_mempolicy_home_node sys_set_mempolicy_home_node     sys_set_mempolicy_home_node
> > +451  common  cachestat               sys_cachestat                   sys_cachestat
> > diff --git a/arch/sh/kernel/syscalls/syscall.tbl b/arch/sh/kernel/syscalls/syscall.tbl
> > index 2de85c977f54..97377e8c5025 100644
> > --- a/arch/sh/kernel/syscalls/syscall.tbl
> > +++ b/arch/sh/kernel/syscalls/syscall.tbl
> > @@ -453,3 +453,4 @@
> >  448  common  process_mrelease                sys_process_mrelease
> >  449  common  futex_waitv                     sys_futex_waitv
> >  450  common  set_mempolicy_home_node         sys_set_mempolicy_home_node
> > +451  common  cachestat                       sys_cachestat
> > diff --git a/arch/sparc/kernel/syscalls/syscall.tbl b/arch/sparc/kernel/syscalls/syscall.tbl
> > index 4398cc6fb68d..faa835f3c54a 100644
> > --- a/arch/sparc/kernel/syscalls/syscall.tbl
> > +++ b/arch/sparc/kernel/syscalls/syscall.tbl
> > @@ -496,3 +496,4 @@
> >  448  common  process_mrelease                sys_process_mrelease
> >  449  common  futex_waitv                     sys_futex_waitv
> >  450  common  set_mempolicy_home_node         sys_set_mempolicy_home_node
> > +451  common  cachestat                       sys_cachestat
> > diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
> > index 320480a8db4f..bc0a3c941b35 100644
> > --- a/arch/x86/entry/syscalls/syscall_32.tbl
> > +++ b/arch/x86/entry/syscalls/syscall_32.tbl
> > @@ -455,3 +455,4 @@
> >  448  i386    process_mrelease        sys_process_mrelease
> >  449  i386    futex_waitv             sys_futex_waitv
> >  450  i386    set_mempolicy_home_node         sys_set_mempolicy_home_node
> > +451  i386    cachestat               sys_cachestat
> > diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
> > index c84d12608cd2..227538b0ce80 100644
> > --- a/arch/x86/entry/syscalls/syscall_64.tbl
> > +++ b/arch/x86/entry/syscalls/syscall_64.tbl
> > @@ -372,6 +372,7 @@
> >  448  common  process_mrelease        sys_process_mrelease
> >  449  common  futex_waitv             sys_futex_waitv
> >  450  common  set_mempolicy_home_node sys_set_mempolicy_home_node
> > +451  common  cachestat               sys_cachestat
> >
> >  #
> >  # Due to a historical design error, certain syscalls are numbered differently
> > diff --git a/arch/xtensa/kernel/syscalls/syscall.tbl b/arch/xtensa/kernel/syscalls/syscall.tbl
> > index 52c94ab5c205..2b69c3c035b6 100644
> > --- a/arch/xtensa/kernel/syscalls/syscall.tbl
> > +++ b/arch/xtensa/kernel/syscalls/syscall.tbl
> > @@ -421,3 +421,4 @@
> >  448  common  process_mrelease                sys_process_mrelease
> >  449  common  futex_waitv                     sys_futex_waitv
> >  450  common  set_mempolicy_home_node         sys_set_mempolicy_home_node
> > +451  common  cachestat                       sys_cachestat
> > diff --git a/include/linux/fs.h b/include/linux/fs.h
> > index e654435f1651..83300f1491e7 100644
> > --- a/include/linux/fs.h
> > +++ b/include/linux/fs.h
> > @@ -75,6 +75,7 @@ struct fs_context;
> >  struct fs_parameter_spec;
> >  struct fileattr;
> >  struct iomap_ops;
> > +struct cachestat;
> >
> >  extern void __init inode_init(void);
> >  extern void __init inode_init_early(void);
> > @@ -830,6 +831,8 @@ void filemap_invalidate_lock_two(struct address_space *mapping1,
> >                                struct address_space *mapping2);
> >  void filemap_invalidate_unlock_two(struct address_space *mapping1,
> >                                  struct address_space *mapping2);
> > +void filemap_cachestat(struct address_space *mapping, pgoff_t first_index,
> > +             pgoff_t last_index, struct cachestat *cs);
> >
> >
> >  /*
> > diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
> > index a34b0f9a9972..d3fe6ba8eb38 100644
> > --- a/include/linux/syscalls.h
> > +++ b/include/linux/syscalls.h
> > @@ -72,6 +72,7 @@ struct open_how;
> >  struct mount_attr;
> >  struct landlock_ruleset_attr;
> >  enum landlock_rule_type;
> > +struct cachestat;
> >
> >  #include <linux/types.h>
> >  #include <linux/aio_abi.h>
> > @@ -1056,6 +1057,9 @@ asmlinkage long sys_memfd_secret(unsigned int flags);
> >  asmlinkage long sys_set_mempolicy_home_node(unsigned long start, unsigned long len,
> >                                           unsigned long home_node,
> >                                           unsigned long flags);
> > +asmlinkage long sys_cachestat(unsigned int fd, off_t off, size_t len,
> > +             unsigned int cstat_version, struct cachestat __user *cstat,
> > +             unsigned int flags);
> >
> >  /*
> >   * Architecture-specific system calls
> > diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
> > index 45fa180cc56a..cd639fae9086 100644
> > --- a/include/uapi/asm-generic/unistd.h
> > +++ b/include/uapi/asm-generic/unistd.h
> > @@ -886,8 +886,11 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
> >  #define __NR_set_mempolicy_home_node 450
> >  __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
> >
> > +#define __NR_cachestat 451
> > +__SYSCALL(__NR_cachestat, sys_cachestat)
> > +
> >  #undef __NR_syscalls
> > -#define __NR_syscalls 451
> > +#define __NR_syscalls 452
> >
> >  /*
> >   * 32 bit systems traditionally used different
> > diff --git a/include/uapi/linux/mman.h b/include/uapi/linux/mman.h
> > index f55bc680b5b0..fe03ed0b7587 100644
> > --- a/include/uapi/linux/mman.h
> > +++ b/include/uapi/linux/mman.h
> > @@ -4,6 +4,7 @@
> >
> >  #include <asm/mman.h>
> >  #include <asm-generic/hugetlb_encode.h>
> > +#include <linux/types.h>
> >
> >  #define MREMAP_MAYMOVE               1
> >  #define MREMAP_FIXED         2
> > @@ -41,4 +42,12 @@
> >  #define MAP_HUGE_2GB HUGETLB_FLAG_ENCODE_2GB
> >  #define MAP_HUGE_16GB        HUGETLB_FLAG_ENCODE_16GB
> >
> > +struct cachestat {
> > +     __u64 nr_cache;
> > +     __u64 nr_dirty;
> > +     __u64 nr_writeback;
> > +     __u64 nr_evicted;
> > +     __u64 nr_recently_evicted;
> > +};
> > +
> >  #endif /* _UAPI_LINUX_MMAN_H */
> > diff --git a/init/Kconfig b/init/Kconfig
> > index 694f7c160c9c..da96ac29af1d 100644
> > --- a/init/Kconfig
> > +++ b/init/Kconfig
> > @@ -1798,6 +1798,16 @@ config RSEQ
> >
> >         If unsure, say Y.
> >
> > +config CACHESTAT_SYSCALL
> > +     bool "Enable cachestat() system call" if EXPERT
> > +     default y
> > +     help
> > +       Enable the cachestat system call, which queries the page cache
> > +       statistics of a file (number of cached pages, dirty pages,
> > +       pages marked for writeback, (recently) evicted pages).
> > +
> > +       If unsure say Y here.
> > +
> >  config DEBUG_RSEQ
> >       default n
> >       bool "Enabled debugging of rseq() system call" if EXPERT
> > diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
> > index 860b2dcf3ac4..04bfb1e4d377 100644
> > --- a/kernel/sys_ni.c
> > +++ b/kernel/sys_ni.c
> > @@ -299,6 +299,7 @@ COND_SYSCALL(set_mempolicy);
> >  COND_SYSCALL(migrate_pages);
> >  COND_SYSCALL(move_pages);
> >  COND_SYSCALL(set_mempolicy_home_node);
> > +COND_SYSCALL(cachestat);
> >
> >  COND_SYSCALL(perf_event_open);
> >  COND_SYSCALL(accept4);
> > diff --git a/mm/filemap.c b/mm/filemap.c
> > index 08341616ae7a..0305eaf5b3f5 100644
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -22,6 +22,7 @@
> >  #include <linux/mm.h>
> >  #include <linux/swap.h>
> >  #include <linux/swapops.h>
> > +#include <linux/syscalls.h>
> >  #include <linux/mman.h>
> >  #include <linux/pagemap.h>
> >  #include <linux/file.h>
> > @@ -55,6 +56,13 @@
> >  #include <linux/buffer_head.h> /* for try_to_free_buffers */
> >
> >  #include <asm/mman.h>
> > +#include <uapi/linux/mman.h>
> > +
> > +#include "swap.h"
> > +
> > +#ifdef CONFIG_CACHESTAT_SYSCALL
> > +#define LATEST_CACHESTAT_VERSION     1
> > +#endif
> >
> >  /*
> >   * Shared mappings implemented 30.11.1994. It's not fully working yet,
> > @@ -3949,3 +3957,149 @@ bool filemap_release_folio(struct folio *folio, gfp_t gfp)
> >       return try_to_free_buffers(folio);
> >  }
> >  EXPORT_SYMBOL(filemap_release_folio);
> > +
> > +/**
> > + * filemap_cachestat() - compute the page cache statistics of a mapping
> > + * @mapping: The mapping to compute the statistics for.
> > + * @first_index:     The starting page cache index.
> > + * @last_index:      The final page index (inclusive).
> > + * @cs:      the cachestat struct to write the result to.
> > + *
> > + * This will query the page cache statistics of a mapping in the
> > + * page range of [first_index, last_index] (inclusive). The statistics
> > + * queried include: number of dirty pages, number of pages marked for
> > + * writeback, and the number of (recently) evicted pages.
> > + */
> > +void filemap_cachestat(struct address_space *mapping, pgoff_t first_index,
> > +             pgoff_t last_index, struct cachestat *cs)
> > +{
> > +     XA_STATE(xas, &mapping->i_pages, first_index);
> > +     struct folio *folio;
> > +
> > +     rcu_read_lock();
> > +     xas_for_each(&xas, folio, last_index) {
> > +             unsigned long nr_pages;
> > +             pgoff_t folio_first_index, folio_last_index;
> > +
> > +             if (xas_retry(&xas, folio))
> > +                     continue;
> > +
> > +             nr_pages = folio_nr_pages(folio);
> > +             folio_first_index = folio_pgoff(folio);
> > +             folio_last_index = folio_first_index + nr_pages - 1;
> > +
> > +             /* Folios might straddle the range boundaries, only count covered subpages */
> > +             if (folio_first_index < first_index)
> > +                     nr_pages -= first_index - folio_first_index;
> > +
> > +             if (folio_last_index > last_index)
> > +                     nr_pages -= folio_last_index - last_index;
> > +
> > +             if (xa_is_value(folio)) {
> > +                     /* page is evicted */
> > +                     void *shadow = (void *)folio;
> > +                     bool workingset; /* not used */
> > +
> > +                     cs->nr_evicted += nr_pages;
> > +
> > +#ifdef CONFIG_SWAP /* implies CONFIG_MMU */
> > +                     if (shmem_mapping(mapping)) {
> > +                             /* shmem file - in swap cache */
> > +                             swp_entry_t swp = radix_to_swp_entry(folio);
> > +
> > +                             shadow = get_shadow_from_swap_cache(swp);
> > +                     }
> > +#endif
> > +                     if (workingset_test_recent(shadow, true, &workingset))
> > +                             cs->nr_recently_evicted += nr_pages;
> > +
> > +                     goto resched;
> > +             }
> > +
> > +             /* page is in cache */
> > +             cs->nr_cache += nr_pages;
> > +
> > +             if (folio_test_dirty(folio))
> > +                     cs->nr_dirty += nr_pages;
> > +
> > +             if (folio_test_writeback(folio))
> > +                     cs->nr_writeback += nr_pages;
> > +
> > +resched:
> > +             if (need_resched()) {
> > +                     xas_pause(&xas);
> > +                     cond_resched_rcu();
> > +             }
> > +     }
> > +     rcu_read_unlock();
> > +}
> > +EXPORT_SYMBOL(filemap_cachestat);
> > +
> > +#ifdef CONFIG_CACHESTAT_SYSCALL
> > +/*
> > + * The cachestat(5) system call.
> > + *
> > + * cachestat() returns the page cache statistics of a file in the
> > + * bytes range specified by `off` and `len`: number of cached pages,
> > + * number of dirty pages, number of pages marked for writeback,
> > + * number of evicted pages, and number of recently evicted pages.
> > + *
> > + * An evicted page is a page that is previously in the page cache
> > + * but has been evicted since. A page is recently evicted if its last
> > + * eviction was recent enough that its reentry to the cache would
> > + * indicate that it is actively being used by the system, and that
> > + * there is memory pressure on the system.
> > + *
> > + * `off` and `len` must be non-negative integers. If `len` > 0,
> > + * the queried range is [`off`, `off` + `len`]. If `len` == 0,
> > + * we will query in the range from `off` to the end of the file.
> > + *
> > + * `cstat_version` is an unsigned integer indicating the specific version
> > + * of the cachestat struct. It must be at least 1, and does not exceed the
> > + * latest version number (which is currently 1). For now, user should
> > + * just pass 1.
> > + *
> > + * The `flags` argument is unused for now, but is included for future
> > + * extensibility. User should pass 0 (i.e no flag specified).
> > + *
> > + * Because the status of a page can change after cachestat() checks it
> > + * but before it returns to the application, the returned values may
> > + * contain stale information.
> > + *
> > + * return values:
> > + *  zero    - success
> > + *  -EFAULT - cstat points to an illegal address
> > + *  -EINVAL - invalid arguments
> > + *  -EBADF   - invalid file descriptor
> > + */
> > +SYSCALL_DEFINE6(cachestat, unsigned int, fd, off_t, off, size_t, len,
> > +             unsigned int, cstat_version, struct cachestat __user *, cstat,
> > +             unsigned int, flags)
> > +{
> > +     struct fd f = fdget(fd);
> > +     struct address_space *mapping;
> > +     struct cachestat cs;
> > +     pgoff_t first_index = off >> PAGE_SHIFT;
> > +     pgoff_t last_index =
> > +             len == 0 ? ULONG_MAX : (off + len - 1) >> PAGE_SHIFT;
> > +
> > +     if (!f.file)
> > +             return -EBADF;
> > +
> > +     if (off < 0 || flags != 0 || cstat_version < 1 ||
> > +                     cstat_version > LATEST_CACHESTAT_VERSION) {
> > +             fdput(f);
> > +             return -EINVAL;
> > +     }
> > +
> > +     memset(&cs, 0, sizeof(struct cachestat));
> > +     mapping = f.file->f_mapping;
> > +     filemap_cachestat(mapping, first_index, last_index, &cs);
> > +     fdput(f);
> > +
> > +     if (copy_to_user(cstat, &cs, sizeof(struct cachestat)))
> > +             return -EFAULT;
> > +
> > +     return 0;
> > +}
> > +#endif /* CONFIG_CACHESTAT_SYSCALL */
> > --
> > 2.30.2
> >
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check
  2023-01-20 16:27     ` Johannes Weiner
@ 2023-01-20 18:53       ` Brian Foster
  2023-01-21  3:43       ` Yu Zhao
  1 sibling, 0 replies; 12+ messages in thread
From: Brian Foster @ 2023-01-20 18:53 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Nhat Pham, akpm, linux-mm, linux-kernel, willy, linux-api,
	kernel-team, Yu Zhao

On Fri, Jan 20, 2023 at 11:27:12AM -0500, Johannes Weiner wrote:
> On Fri, Jan 20, 2023 at 09:34:18AM -0500, Brian Foster wrote:
> > On Tue, Jan 17, 2023 at 11:59:57AM -0800, Nhat Pham wrote:
> > > +	int memcgid;
> > > +	struct pglist_data *pgdat;
> > > +	unsigned long token;
> > > +
> > > +	unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
> > > +	eviction_memcg = mem_cgroup_from_id(memcgid);
> > > +
> > > +	lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> > > +	lrugen = &lruvec->lrugen;
> > > +
> > > +	min_seq = READ_ONCE(lrugen->min_seq[file]);
> > > +	return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));
> > 
> > I think this might be more readable without the double negative.
> > 
> > Also it looks like this logic is pulled from lru_gen_refault(). Any
> > reason the caller isn't refactored to use this helper, similar to how
> > workingset_refault() is modified? It seems like a potential landmine to
> > duplicate the logic here for cachestat purposes and somewhere else for
> > actual workingset management.
> 
> The initial version was refactored. Yu explicitly requested it be
> duplicated [1] to cut down on some boiler plate.
> 

Ah, sorry for missing the previous discussion. TBH I wasn't terribly
comfortable reviewing this one until I had made enough passes at the
second patch..

> I have to agree with Brian on this one, though. The factored version
> is better for maintenance than duplicating the core logic here. Even
> if it ends up a bit more boiler plate - it's harder to screw that up,
> and easier to catch at compile time, than the duplicates diverging.
> 

It seems more elegant to me, FWIW. Glad I'm not totally off the rails at
least. ;) But I'll defer to those who know the code better and the
author, so that's just my .02. I don't want to cause this to go around
in circles..

Brian

> [1] https://lore.kernel.org/lkml/CAOUHufZKTqoD2rFwrX9-eCknBmeWqP88rZ7X7A_5KHHbGBUP=A@mail.gmail.com/
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check
  2023-01-20 17:29     ` Nhat Pham
@ 2023-01-20 18:56       ` Brian Foster
  0 siblings, 0 replies; 12+ messages in thread
From: Brian Foster @ 2023-01-20 18:56 UTC (permalink / raw)
  To: Nhat Pham
  Cc: akpm, hannes, linux-mm, linux-kernel, willy, linux-api, kernel-team

On Fri, Jan 20, 2023 at 09:29:46AM -0800, Nhat Pham wrote:
>  On Fri, Jan 20, 2023 at 6:33 AM Brian Foster <bfoster@redhat.com> wrote:
> >
> > On Tue, Jan 17, 2023 at 11:59:57AM -0800, Nhat Pham wrote:
> > > In preparation for computing recently evicted pages in cachestat,
> > > refactor workingset_refault and lru_gen_refault to expose a helper
> > > function that would test if an evicted page is recently evicted.
> > >
> > > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > > ---
> >
> > Hi Nhat,
> >
> > I'm not terribly familiar with the workingset management code, but a few
> > thoughts now that I've stared at it a bit...
> >
> > >  include/linux/swap.h |   1 +
> > >  mm/workingset.c      | 129 ++++++++++++++++++++++++++++++-------------
> > >  2 files changed, 92 insertions(+), 38 deletions(-)
> > >
> > > diff --git a/include/linux/swap.h b/include/linux/swap.h
> > > index a18cf4b7c724..dae6f6f955eb 100644
> > > --- a/include/linux/swap.h
> > > +++ b/include/linux/swap.h
> > > @@ -361,6 +361,7 @@ static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
> > >  }
> > >
> > >  /* linux/mm/workingset.c */
> > > +bool workingset_test_recent(void *shadow, bool file, bool *workingset);
> > >  void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages);
> > >  void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg);
> > >  void workingset_refault(struct folio *folio, void *shadow);
> > > diff --git a/mm/workingset.c b/mm/workingset.c
> > > index 79585d55c45d..006482c4e0bd 100644
> > > --- a/mm/workingset.c
> > > +++ b/mm/workingset.c
> > > @@ -244,6 +244,33 @@ static void *lru_gen_eviction(struct folio *folio)
> > >       return pack_shadow(mem_cgroup_id(memcg), pgdat, token, refs);
> > >  }
> > >
> > > +/*
> > > + * Test if the folio is recently evicted.
> > > + *
> > > + * As a side effect, also populates the references with
> > > + * values unpacked from the shadow of the evicted folio.
> > > + */
> > > +static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
> > > +{
> > > +     struct mem_cgroup *eviction_memcg;
> > > +     struct lruvec *lruvec;
> > > +     struct lru_gen_struct *lrugen;
> > > +     unsigned long min_seq;
> > > +
> >
> > Extra whitespace looks a bit funny here.
> >
> > > +     int memcgid;
> > > +     struct pglist_data *pgdat;
> > > +     unsigned long token;
> > > +
> > > +     unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
> > > +     eviction_memcg = mem_cgroup_from_id(memcgid);
> > > +
> > > +     lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> > > +     lrugen = &lruvec->lrugen;
> > > +
> > > +     min_seq = READ_ONCE(lrugen->min_seq[file]);
> > > +     return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));
> >
> > I think this might be more readable without the double negative.
> 
> Hmm indeed. I was just making sure that I did not mess up Yu's
> original logic here (by just wrapping it in a parentheses and
> negate the whole thing), but if I understand it correctly it's just
> an equality check. I'll fix it in the next version to make it cleaner.
> 
> >
> > Also it looks like this logic is pulled from lru_gen_refault(). Any
> > reason the caller isn't refactored to use this helper, similar to how
> > workingset_refault() is modified? It seems like a potential landmine to
> > duplicate the logic here for cachestat purposes and somewhere else for
> > actual workingset management.
> 
> In V2, it is actually refactored analogously as well - but we had a discussion
> about it here:
> 
> https://lkml.org/lkml/2022/12/5/1321
> 

Yeah, sorry.. replied to Johannes.

> >
> > > +}
> > > +
> > >  static void lru_gen_refault(struct folio *folio, void *shadow)
> > >  {
> > >       int hist, tier, refs;
> > > @@ -306,6 +333,11 @@ static void *lru_gen_eviction(struct folio *folio)
> > >       return NULL;
> > >  }
> > >
> > > +static bool lru_gen_test_recent(void *shadow, bool file, bool *workingset)
> > > +{
> > > +     return true;
> > > +}
> >
> > I guess this is a no-op for !MGLRU but given the context (i.e. special
> > treatment for "recent" refaults), perhaps false is a more sane default?
> 
> Hmm, fair point. Let me fix that in the next version.
> 
> >
> > > +
> > >  static void lru_gen_refault(struct folio *folio, void *shadow)
> > >  {
> > >  }
> > > @@ -373,40 +405,31 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
> > >                               folio_test_workingset(folio));
> > >  }
> > >
> > > -/**
> > > - * workingset_refault - Evaluate the refault of a previously evicted folio.
> > > - * @folio: The freshly allocated replacement folio.
> > > - * @shadow: Shadow entry of the evicted folio.
> > > +/*
> > > + * Test if the folio is recently evicted by checking if
> > > + * refault distance of shadow exceeds workingset size.
> > >   *
> > > - * Calculates and evaluates the refault distance of the previously
> > > - * evicted folio in the context of the node and the memcg whose memory
> > > - * pressure caused the eviction.
> > > + * As a side effect, populate workingset with the value
> > > + * unpacked from shadow.
> > >   */
> > > -void workingset_refault(struct folio *folio, void *shadow)
> > > +bool workingset_test_recent(void *shadow, bool file, bool *workingset)
> > >  {
> > > -     bool file = folio_is_file_lru(folio);
> > >       struct mem_cgroup *eviction_memcg;
> > >       struct lruvec *eviction_lruvec;
> > >       unsigned long refault_distance;
> > >       unsigned long workingset_size;
> > > -     struct pglist_data *pgdat;
> > > -     struct mem_cgroup *memcg;
> > > -     unsigned long eviction;
> > > -     struct lruvec *lruvec;
> > >       unsigned long refault;
> > > -     bool workingset;
> > > +
> > >       int memcgid;
> > > -     long nr;
> > > +     struct pglist_data *pgdat;
> > > +     unsigned long eviction;
> > >
> > > -     if (lru_gen_enabled()) {
> > > -             lru_gen_refault(folio, shadow);
> > > -             return;
> > > -     }
> > > +     if (lru_gen_enabled())
> > > +             return lru_gen_test_recent(shadow, file, workingset);
> >
> > Hmm.. so this function is only called by workingset_refault() when
> > lru_gen_enabled() == false, otherwise it calls into lru_gen_refault(),
> > which as noted above duplicates some of the recency logic.
> >
> > I'm assuming this lru_gen_test_recent() call is so filemap_cachestat()
> > can just call workingset_test_recent(). That seems reasonable, but makes
> > me wonder...
> 
> You're right. It's a bit clunky...
> 
> >
> > >
> > > -     unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset);
> > > +     unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
> > >       eviction <<= bucket_order;
> > >
> > > -     rcu_read_lock();
> > >       /*
> > >        * Look up the memcg associated with the stored ID. It might
> > >        * have been deleted since the folio's eviction.
> > > @@ -425,7 +448,8 @@ void workingset_refault(struct folio *folio, void *shadow)
> > >        */
> > >       eviction_memcg = mem_cgroup_from_id(memcgid);
> > >       if (!mem_cgroup_disabled() && !eviction_memcg)
> > > -             goto out;
> > > +             return false;
> > > +
> > >       eviction_lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> > >       refault = atomic_long_read(&eviction_lruvec->nonresident_age);
> > >
> > > @@ -447,21 +471,6 @@ void workingset_refault(struct folio *folio, void *shadow)
> > >        */
> > >       refault_distance = (refault - eviction) & EVICTION_MASK;
> > >
> > > -     /*
> > > -      * The activation decision for this folio is made at the level
> > > -      * where the eviction occurred, as that is where the LRU order
> > > -      * during folio reclaim is being determined.
> > > -      *
> > > -      * However, the cgroup that will own the folio is the one that
> > > -      * is actually experiencing the refault event.
> > > -      */
> > > -     nr = folio_nr_pages(folio);
> > > -     memcg = folio_memcg(folio);
> > > -     pgdat = folio_pgdat(folio);
> > > -     lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > > -
> > > -     mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
> > > -
> > >       mem_cgroup_flush_stats_delayed();
> > >       /*
> > >        * Compare the distance to the existing workingset size. We
> > > @@ -483,8 +492,51 @@ void workingset_refault(struct folio *folio, void *shadow)
> > >                                                    NR_INACTIVE_ANON);
> > >               }
> > >       }
> > > -     if (refault_distance > workingset_size)
> > > +
> > > +     return refault_distance <= workingset_size;
> > > +}
> > > +
> > > +/**
> > > + * workingset_refault - Evaluate the refault of a previously evicted folio.
> > > + * @folio: The freshly allocated replacement folio.
> > > + * @shadow: Shadow entry of the evicted folio.
> > > + *
> > > + * Calculates and evaluates the refault distance of the previously
> > > + * evicted folio in the context of the node and the memcg whose memory
> > > + * pressure caused the eviction.
> > > + */
> > > +void workingset_refault(struct folio *folio, void *shadow)
> > > +{
> > > +     bool file = folio_is_file_lru(folio);
> > > +     struct pglist_data *pgdat;
> > > +     struct mem_cgroup *memcg;
> > > +     struct lruvec *lruvec;
> > > +     bool workingset;
> > > +     long nr;
> > > +
> > > +     if (lru_gen_enabled()) {
> > > +             lru_gen_refault(folio, shadow);
> > > +             return;
> > > +     }
> >
> > ... if perhaps this should call workingset_test_recent() a bit earlier,
> > since it also covers the lru_gen_*() case..? That may or may not be
> > cleaner. It _seems like_ it might produce a bit more consistent logic,
> > but just a thought and I could easily be missing details.
> 
> Hmm you mean before/in place of the lru_gen_refault call?
> workingset_test_recent only covers lru_gen_test_recent,
> not the rest of the logic of lru_gen_refault I believe.
> 

When reading through the code I got the impression that if
workingset_test_recent() -> lru_gen_test_recent() returned false, then
lru_gen_refault() didn't really do anything. I.e., the token/min_seq
check fails and it skips out to the end of the function. That had me
wondering whether workingset_refault() could just skip out of either
mode if workingset_test_recent() returns false (since it calls
lru_gen_test_recent()), but that may not work. Specifically I'm not sure
about that mod_lruvec_state()) call (discussed below) for the MGLRU
case. If that call is correct as is for !MGLRU, maybe it could be made
conditional based on lru_gen_enabled()..?

I guess what I was trying to get at here is that you've created this
nice workingset_test_recent() helper to cover both modes for
filemap_cachestat(), so it would be nice if it could be used in that way
in the core workingset code as well. :)

I suppose yet another option could be to skip the lru_gen_test_recent()
check in workingset_test_recent() and instead call it from
lru_gen_refault(), but then I guess you'd have to open code both checks
in the filemap_cachestat() call, which sounds kind of ugly..

> 
> >
> > > +
> > > +     rcu_read_lock();
> > > +
> > > +     nr = folio_nr_pages(folio);
> > > +     memcg = folio_memcg(folio);
> > > +     pgdat = folio_pgdat(folio);
> > > +     lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > > +
> > > +     if (!workingset_test_recent(shadow, file, &workingset)) {
> > > +             /*
> > > +              * The activation decision for this folio is made at the level
> > > +              * where the eviction occurred, as that is where the LRU order
> > > +              * during folio reclaim is being determined.
> > > +              *
> > > +              * However, the cgroup that will own the folio is the one that
> > > +              * is actually experiencing the refault event.
> > > +              */
> >
> > IIUC, this comment is explaining the difference between using the
> > eviction lru (based on the shadow entry) to calculate recency vs. the
> > lru for the current folio to process the refault. If so, perhaps it
> > should go right above the workingset_test_recent() call? (Then the if
> > braces could go away as well..).
> 
> You're right! I think it should go above `nr = folio_nr_pages(folio);` call.
> 

Sounds good.

> >
> > >               goto out;
> > > +     }
> > >
> > >       folio_set_active(folio);
> > >       workingset_age_nonresident(lruvec, nr);
> > > @@ -498,6 +550,7 @@ void workingset_refault(struct folio *folio, void *shadow)
> > >               mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + file, nr);
> > >       }
> > >  out:
> > > +     mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
> >
> > Why not just leave this up earlier in the function (i.e. before the
> > recency check) as it was originally?
> 
> Let me double check, but I think this is a relic from the old (and incorrect)
> version of workingset code.
> 
> Originally, mod_lruvec_state uses the lruvec computed from a variable
> (pgdat) that was unpacked from the shadow. So this mod_lruvec_state
> has to go after the unpack_shadow call (which has been moved inside
> of workingset_test_recent).
> 
> This is actually wrong - we actually want the pgdat from the folio. It
> has been fixed in a separate patch:
> 
> https://lore.kernel.org/all/20230104222944.2380117-1-nphamcs@gmail.com/T/#u
> 

Yep, I had applied this series on top of that one..

> But I didn't update it here. Let me stare at it a bit more to make sure,
> and then fix it in the next version. It should not change the behavior,
> but it should be cleaner.
> 

Sounds good. FWIW it looked like the logic hadn't changed with this
series so I just assumed it was correct, just possibly moved around
unnecessarily. I'll try to make more sense of it in the next version.
Thanks.

Brian

> >
> > Brian
> >
> > >       rcu_read_unlock();
> > >  }
> > >
> > > --
> > > 2.30.2
> > >
> >
> 


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check
  2023-01-20 16:27     ` Johannes Weiner
  2023-01-20 18:53       ` Brian Foster
@ 2023-01-21  3:43       ` Yu Zhao
  1 sibling, 0 replies; 12+ messages in thread
From: Yu Zhao @ 2023-01-21  3:43 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Brian Foster, Nhat Pham, akpm, linux-mm, linux-kernel, willy,
	linux-api, kernel-team

On Fri, Jan 20, 2023 at 9:26 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Fri, Jan 20, 2023 at 09:34:18AM -0500, Brian Foster wrote:
> > On Tue, Jan 17, 2023 at 11:59:57AM -0800, Nhat Pham wrote:
> > > +   int memcgid;
> > > +   struct pglist_data *pgdat;
> > > +   unsigned long token;
> > > +
> > > +   unpack_shadow(shadow, &memcgid, &pgdat, &token, workingset);
> > > +   eviction_memcg = mem_cgroup_from_id(memcgid);
> > > +
> > > +   lruvec = mem_cgroup_lruvec(eviction_memcg, pgdat);
> > > +   lrugen = &lruvec->lrugen;
> > > +
> > > +   min_seq = READ_ONCE(lrugen->min_seq[file]);
> > > +   return !((token >> LRU_REFS_WIDTH) != (min_seq & (EVICTION_MASK >> LRU_REFS_WIDTH)));
> >
> > I think this might be more readable without the double negative.
> >
> > Also it looks like this logic is pulled from lru_gen_refault(). Any
> > reason the caller isn't refactored to use this helper, similar to how
> > workingset_refault() is modified? It seems like a potential landmine to
> > duplicate the logic here for cachestat purposes and somewhere else for
> > actual workingset management.
>
> The initial version was refactored. Yu explicitly requested it be
> duplicated [1] to cut down on some boiler plate.
>
> I have to agree with Brian on this one, though. The factored version
> is better for maintenance than duplicating the core logic here. Even
> if it ends up a bit more boiler plate - it's harder to screw that up,
> and easier to catch at compile time, than the duplicates diverging.
>
> [1] https://lore.kernel.org/lkml/CAOUHufZKTqoD2rFwrX9-eCknBmeWqP88rZ7X7A_5KHHbGBUP=A@mail.gmail.com/

No objections to either way. I'll take a look at the final version and
we are good as long as it works as intended.

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2023-01-21  3:44 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-01-17 19:59 [PATCH v6 0/3] cachestat: a new syscall for page cache state of files Nhat Pham
2023-01-17 19:59 ` [PATCH v6 1/3] workingset: refactor LRU refault to expose refault recency check Nhat Pham
2023-01-20 14:34   ` Brian Foster
2023-01-20 16:27     ` Johannes Weiner
2023-01-20 18:53       ` Brian Foster
2023-01-21  3:43       ` Yu Zhao
2023-01-20 17:29     ` Nhat Pham
2023-01-20 18:56       ` Brian Foster
2023-01-17 19:59 ` [PATCH v6 2/3] cachestat: implement cachestat syscall Nhat Pham
2023-01-20 14:36   ` Brian Foster
2023-01-20 18:26     ` Nhat Pham
2023-01-17 19:59 ` [PATCH v6 3/3] selftests: Add selftests for cachestat Nhat Pham

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).