* [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback
@ 2023-10-17 23:21 Nhat Pham
  2023-10-17 23:21 ` [PATCH v3 1/5] mm: list_lru: allow external numa node and cgroup tracking Nhat Pham
                   ` (5 more replies)
  0 siblings, 6 replies; 31+ messages in thread
From: Nhat Pham @ 2023-10-17 23:21 UTC (permalink / raw)
  To: akpm
  Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah

Changelog:
v3:
   * Add a patch to export per-cgroup zswap writeback counters
   * Add a patch to update zswap's kselftest
   * Separate the new list_lru functions into its own prep patch
   * Do not start from the top of the hierarchy when encountering a memcg
     that is not online for the global limit zswap writeback (patch 2)
     (suggested by Yosry Ahmed)
   * Do not remove the swap entry from list_lru in
     __read_swapcache_async() (patch 2) (suggested by Yosry Ahmed)
   * Remove a redundant zswap pool get (patch 2)
     (reported by Ryan Roberts)
   * Use atomic for the nr_zswap_protected (instead of lruvec's lock)
     (patch 5) (suggested by Yosry Ahmed)
   * Remove the per-cgroup zswap shrinker knob (patch 5)
     (suggested by Yosry Ahmed)
v2:
   * Fix loongarch compiler errors
   * Use pool stats instead of memcg stats when !CONFIG_MEMCG_KMEM

There are currently several issues with zswap writeback:

1. There is only a single global LRU for zswap, making it impossible to
   perform workload-specific shrinking - a memcg under memory pressure
   cannot determine which pages in the pool it owns, and often ends up
   writing pages from other memcgs. This issue has been previously
   observed in practice and mitigated by simply disabling
   memcg-initiated shrinking:

   https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u

   But this solution leaves a lot to be desired, as we still do not
   have an avenue for a memcg to free up its own memory locked up in
   the zswap pool.

2. We only shrink the zswap pool when the user-defined limit is hit.
   This means that if we set the limit too high, cold data that are
   unlikely to be used again will reside in the pool, wasting precious
   memory. It is hard to predict how much zswap space will be needed
   ahead of time, as this depends on the workload (specifically, on
   factors such as memory access patterns and compressibility of the
   memory pages).

This patch series solves these issues by separating the global zswap
LRU into per-memcg and per-NUMA LRUs, and performing workload-specific
(i.e., memcg- and NUMA-aware) zswap writeback under memory pressure. The
new shrinker does not have any parameter that must be tuned by the
user, and can be opted in or out on a per-memcg basis.

As a proof of concept, we ran the following synthetic benchmark:
build the Linux kernel in a memory-limited cgroup, and allocate some
cold data in tmpfs to see if the shrinker could write it out and
improve the overall performance. Depending on the amount of cold data
generated, we observe a 14% to 35% reduction in kernel CPU time used
in the kernel builds.

Domenico Cerasuolo (3):
  zswap: make shrinking memcg-aware
  mm: memcg: add per-memcg zswap writeback stat
  selftests: cgroup: update per-memcg zswap writeback selftest

Nhat Pham (2):
  mm: list_lru: allow external numa node and cgroup tracking
  zswap: shrinks zswap pool based on memory pressure

 Documentation/admin-guide/mm/zswap.rst      |   7 +
 include/linux/list_lru.h                    |  38 +++
 include/linux/memcontrol.h                  |   7 +
 include/linux/mmzone.h                      |  14 +
 mm/list_lru.c                               |  43 ++-
 mm/memcontrol.c                             |  15 +
 mm/mmzone.c                                 |   3 +
 mm/swap.h                                   |   3 +-
 mm/swap_state.c                             |  38 ++-
 mm/zswap.c                                  | 335 ++++++++++++++++----
 tools/testing/selftests/cgroup/test_zswap.c |  74 +++--
 11 files changed, 485 insertions(+), 92 deletions(-)

-- 
2.34.1


* [PATCH v3 1/5] mm: list_lru: allow external numa node and cgroup tracking
  2023-10-17 23:21 [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback Nhat Pham
@ 2023-10-17 23:21 ` Nhat Pham
  2023-10-18 22:26   ` Yosry Ahmed
  2023-10-17 23:21 ` [PATCH v3 2/5] zswap: make shrinking memcg-aware Nhat Pham
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 31+ messages in thread
From: Nhat Pham @ 2023-10-17 23:21 UTC (permalink / raw)
  To: akpm
  Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah

The interface of list_lru is based on the assumption that objects are
allocated on the correct node/memcg. This change introduces the
possibility to explicitly specify the NUMA node and memcg when adding
and removing objects, so that users of list_lru can track the
node/memcg of items outside of the list_lru itself - as in zswap, where
allocations can be made by kswapd for data that is charged to a
different cgroup.
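
As an illustration of the intended usage (not part of this patch; the
struct and helpers below are hypothetical), a caller that tracks the
node and memcg of its objects externally - the way zswap does later in
this series - would use the new functions roughly like this:

#include <linux/list_lru.h>
#include <linux/memcontrol.h>

/*
 * The list_head may live on a page allocated by kswapd, so the
 * node/memcg of the data it represents are tracked by the caller.
 */
struct tracked_obj {
	struct list_head lru;
	int nid;			/* node the data belongs to */
	struct mem_cgroup *memcg;	/* cgroup the data is charged to */
};

static void tracked_obj_lru_add(struct list_lru *lru, struct tracked_obj *obj)
{
	/*
	 * Pick the sublist from the caller-supplied node and memcg
	 * instead of deriving them from the list_head's own page.
	 */
	__list_lru_add(lru, &obj->lru, obj->nid, obj->memcg);
}

static void tracked_obj_lru_del(struct list_lru *lru, struct tracked_obj *obj)
{
	/* Must pass the same node/memcg used when the item was added. */
	__list_lru_del(lru, &obj->lru, obj->nid, obj->memcg);
}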

Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 include/linux/list_lru.h | 38 +++++++++++++++++++++++++++++++++++
 mm/list_lru.c            | 43 +++++++++++++++++++++++++++++++++++-----
 2 files changed, 76 insertions(+), 5 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index b35968ee9fb5..0f5f39cacbbb 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -89,6 +89,24 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
  */
 bool list_lru_add(struct list_lru *lru, struct list_head *item);
 
+/**
+ * __list_lru_add: add an element to a specific sublist.
+ * @list_lru: the lru pointer
+ * @item: the item to be added.
+ * @memcg: the cgroup of the sublist to add the item to.
+ * @nid: the node id of the sublist to add the item to.
+ *
+ * This function is similar to list_lru_add(), but it allows the caller to
+ * specify the sublist to which the item should be added. This can be useful
+ * when the list_head node is not necessarily in the same cgroup and NUMA node
+ * as the data it represents, such as zswap, where the list_head node could be
+ * from kswapd and the data from a different cgroup altogether.
+ *
+ * Return value: true if the list was updated, false otherwise
+ */
+bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+		    struct mem_cgroup *memcg);
+
 /**
  * list_lru_del: delete an element to the lru list
  * @list_lru: the lru pointer
@@ -102,6 +120,18 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item);
  */
 bool list_lru_del(struct list_lru *lru, struct list_head *item);
 
+/**
+ * __list_lru_del: delete an element from a specific sublist.
+ * @list_lru: the lru pointer
+ * @item: the item to be deleted.
+ * @memcg: the cgroup of the sublist to delete the item from.
+ * @nid: the node id of the sublist to delete the item from.
+ *
+ * Return value: true if the list was updated, false otherwise.
+ */
+bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+		    struct mem_cgroup *memcg);
+
 /**
  * list_lru_count_one: return the number of objects currently held by @lru
  * @lru: the lru pointer.
@@ -136,6 +166,14 @@ static inline unsigned long list_lru_count(struct list_lru *lru)
 void list_lru_isolate(struct list_lru_one *list, struct list_head *item);
 void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
 			   struct list_head *head);
+/*
+ * list_lru_putback: undo list_lru_isolate.
+ *
+ * Since we might have dropped the LRU lock in between, recompute list_lru_one
+ * from the node's id and memcg.
+ */
+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+		      struct mem_cgroup *memcg);
 
 typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
 		struct list_lru_one *list, spinlock_t *lock, void *cb_arg);
diff --git a/mm/list_lru.c b/mm/list_lru.c
index a05e5bef3b40..63b75163c6ad 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -119,13 +119,22 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr,
 bool list_lru_add(struct list_lru *lru, struct list_head *item)
 {
 	int nid = page_to_nid(virt_to_page(item));
+	struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+		mem_cgroup_from_slab_obj(item) : NULL;
+
+	return __list_lru_add(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_add);
+
+bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
+		    struct mem_cgroup *memcg)
+{
 	struct list_lru_node *nlru = &lru->node[nid];
-	struct mem_cgroup *memcg;
 	struct list_lru_one *l;
 
 	spin_lock(&nlru->lock);
 	if (list_empty(item)) {
-		l = list_lru_from_kmem(lru, nid, item, &memcg);
+		l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
 		list_add_tail(item, &l->list);
 		/* Set shrinker bit if the first element was added */
 		if (!l->nr_items++)
@@ -138,17 +147,27 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
 	spin_unlock(&nlru->lock);
 	return false;
 }
-EXPORT_SYMBOL_GPL(list_lru_add);
+EXPORT_SYMBOL_GPL(__list_lru_add);
 
 bool list_lru_del(struct list_lru *lru, struct list_head *item)
 {
 	int nid = page_to_nid(virt_to_page(item));
+	struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
+		mem_cgroup_from_slab_obj(item) : NULL;
+
+	return __list_lru_del(lru, item, nid, memcg);
+}
+EXPORT_SYMBOL_GPL(list_lru_del);
+
+bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
+		    struct mem_cgroup *memcg)
+{
 	struct list_lru_node *nlru = &lru->node[nid];
 	struct list_lru_one *l;
 
 	spin_lock(&nlru->lock);
 	if (!list_empty(item)) {
-		l = list_lru_from_kmem(lru, nid, item, NULL);
+		l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
 		list_del_init(item);
 		l->nr_items--;
 		nlru->nr_items--;
@@ -158,7 +177,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
 	spin_unlock(&nlru->lock);
 	return false;
 }
-EXPORT_SYMBOL_GPL(list_lru_del);
+EXPORT_SYMBOL_GPL(__list_lru_del);
 
 void list_lru_isolate(struct list_lru_one *list, struct list_head *item)
 {
@@ -175,6 +194,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
 }
 EXPORT_SYMBOL_GPL(list_lru_isolate_move);
 
+void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
+		      struct mem_cgroup *memcg)
+{
+	struct list_lru_one *list =
+		list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
+
+	if (list_empty(item)) {
+		list_add_tail(item, &list->list);
+		if (!list->nr_items++)
+			set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
+	}
+}
+EXPORT_SYMBOL_GPL(list_lru_putback);
+
 unsigned long list_lru_count_one(struct list_lru *lru,
 				 int nid, struct mem_cgroup *memcg)
 {
-- 
2.34.1


* [PATCH v3 2/5] zswap: make shrinking memcg-aware
  2023-10-17 23:21 [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback Nhat Pham
  2023-10-17 23:21 ` [PATCH v3 1/5] mm: list_lru: allow external numa node and cgroup tracking Nhat Pham
@ 2023-10-17 23:21 ` Nhat Pham
  2023-10-18 23:20   ` Yosry Ahmed
  2023-10-17 23:21 ` [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat Nhat Pham
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 31+ messages in thread
From: Nhat Pham @ 2023-10-17 23:21 UTC (permalink / raw)
  To: akpm
  Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah

From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>

Currently, we only have a single global LRU for zswap. This makes it
impossible to perform workload-specific shrinking - a memcg cannot
determine which pages in the pool it owns, and often ends up writing
pages from other memcgs. This issue has been previously observed in
practice and mitigated by simply disabling memcg-initiated shrinking:

https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u

This patch fully resolves the issue by replacing the global zswap LRU
with memcg- and NUMA-specific LRUs, and modifying the reclaim logic
(see the sketch after this list):

a) When a store attempt hits a memcg limit, it now triggers a
   synchronous reclaim attempt that, if successful, allows the new
   hotter page to be accepted by zswap.
b) If the store attempt instead hits the global zswap limit, it will
   trigger an asynchronous reclaim attempt, in which a memcg is
   selected for reclaim in a round-robin-like fashion.
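
A rough sketch of the resulting store-path decision, condensed from the
zswap_store() hunks below (locking, refcounting and error paths are
omitted, and the wrapper name is made up for illustration; shrink_wq
and shrink_worker() are pre-existing zswap machinery):

static bool zswap_store_reclaim_sketch(struct obj_cgroup *objcg,
				       struct zswap_pool *pool)
{
	struct mem_cgroup *memcg;

	/* (a) memcg limit hit: synchronous, memcg-local writeback. */
	if (objcg && !obj_cgroup_may_zswap(objcg)) {
		memcg = get_mem_cgroup_from_objcg(objcg);
		if (shrink_memcg(memcg)) {
			/* nothing could be written back for this memcg */
			mem_cgroup_put(memcg);
			return false;	/* reject the store */
		}
		mem_cgroup_put(memcg);
	}

	/*
	 * (b) global limit hit: defer to the async worker, which selects
	 * memcgs in a round-robin fashion via mem_cgroup_iter().
	 */
	if (zswap_is_full() && !zswap_can_accept()) {
		queue_work(shrink_wq, &pool->shrink_work);
		return false;
	}

	return true;	/* accept the store */
}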

Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Co-developed-by: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 include/linux/memcontrol.h |   5 ++
 mm/swap.h                  |   3 +-
 mm/swap_state.c            |  17 +++-
 mm/zswap.c                 | 179 ++++++++++++++++++++++++++-----------
 4 files changed, 147 insertions(+), 57 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 031102ac9311..3de10fabea0f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1179,6 +1179,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
 	return NULL;
 }
 
+static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
+{
+	return NULL;
+}
+
 static inline bool folio_memcg_kmem(struct folio *folio)
 {
 	return false;
diff --git a/mm/swap.h b/mm/swap.h
index 8a3c7a0ace4f..bbd6ce661a20 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -50,7 +50,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				     struct vm_area_struct *vma,
 				     unsigned long addr,
-				     bool *new_page_allocated);
+				     bool *new_page_allocated,
+				     bool fail_if_exists);
 struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 				    struct vm_fault *vmf);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b3b14bd0dd64..0356df52b06a 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -411,7 +411,7 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
 
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			struct vm_area_struct *vma, unsigned long addr,
-			bool *new_page_allocated)
+			bool *new_page_allocated, bool fail_if_exists)
 {
 	struct swap_info_struct *si;
 	struct folio *folio;
@@ -468,6 +468,15 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		if (err != -EEXIST)
 			goto fail_put_swap;
 
+		/*
+		 * This check guards against a state that can happen if a call
+		 * to __read_swap_cache_async() triggers a reclaim, and the
+		 * reclaimer (zswap's writeback as of now) then decides to
+		 * reclaim that same entry: the subsequent call to
+		 * __read_swap_cache_async() would get stuck in this loop.
+		 */
+		if (fail_if_exists && err == -EEXIST)
+			goto fail_put_swap;
 		/*
 		 * We might race against __delete_from_swap_cache(), and
 		 * stumble across a swap_map entry whose SWAP_HAS_CACHE
@@ -530,7 +539,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 {
 	bool page_was_allocated;
 	struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
-			vma, addr, &page_was_allocated);
+			vma, addr, &page_was_allocated, false);
 
 	if (page_was_allocated)
 		swap_readpage(retpage, false, plug);
@@ -649,7 +658,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 		/* Ok, do the async read-ahead now */
 		page = __read_swap_cache_async(
 			swp_entry(swp_type(entry), offset),
-			gfp_mask, vma, addr, &page_allocated);
+			gfp_mask, vma, addr, &page_allocated, false);
 		if (!page)
 			continue;
 		if (page_allocated) {
@@ -815,7 +824,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 		pte_unmap(pte);
 		pte = NULL;
 		page = __read_swap_cache_async(entry, gfp_mask, vma,
-					       addr, &page_allocated);
+					       addr, &page_allocated, false);
 		if (!page)
 			continue;
 		if (page_allocated) {
diff --git a/mm/zswap.c b/mm/zswap.c
index 083c693602b8..d2989ad11814 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -34,6 +34,7 @@
 #include <linux/writeback.h>
 #include <linux/pagemap.h>
 #include <linux/workqueue.h>
+#include <linux/list_lru.h>
 
 #include "swap.h"
 #include "internal.h"
@@ -171,8 +172,8 @@ struct zswap_pool {
 	struct work_struct shrink_work;
 	struct hlist_node node;
 	char tfm_name[CRYPTO_MAX_ALG_NAME];
-	struct list_head lru;
-	spinlock_t lru_lock;
+	struct list_lru list_lru;
+	struct mem_cgroup *next_shrink;
 };
 
 /*
@@ -288,15 +289,25 @@ static void zswap_update_total_size(void)
 	zswap_pool_total_size = total;
 }
 
+static inline struct mem_cgroup *get_mem_cgroup_from_entry(struct zswap_entry *entry)
+{
+	return entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
+}
+
+static inline int entry_to_nid(struct zswap_entry *entry)
+{
+	return page_to_nid(virt_to_page(entry));
+}
+
 /*********************************
 * zswap entry functions
 **********************************/
 static struct kmem_cache *zswap_entry_cache;
 
-static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
+static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
 {
 	struct zswap_entry *entry;
-	entry = kmem_cache_alloc(zswap_entry_cache, gfp);
+	entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
 	if (!entry)
 		return NULL;
 	entry->refcount = 1;
@@ -309,6 +320,27 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
 	kmem_cache_free(zswap_entry_cache, entry);
 }
 
+/*********************************
+* lru functions
+**********************************/
+static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+	struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
+	bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
+
+	mem_cgroup_put(memcg);
+	return added;
+}
+
+static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
+{
+	struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
+	bool removed = __list_lru_del(list_lru, &entry->lru, entry_to_nid(entry), memcg);
+
+	mem_cgroup_put(memcg);
+	return removed;
+}
+
 /*********************************
 * rbtree functions
 **********************************/
@@ -393,9 +425,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
 	if (!entry->length)
 		atomic_dec(&zswap_same_filled_pages);
 	else {
-		spin_lock(&entry->pool->lru_lock);
-		list_del(&entry->lru);
-		spin_unlock(&entry->pool->lru_lock);
+		zswap_lru_del(&entry->pool->list_lru, entry);
 		zpool_free(zswap_find_zpool(entry), entry->handle);
 		zswap_pool_put(entry->pool);
 	}
@@ -629,21 +659,16 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
 		zswap_entry_put(tree, entry);
 }
 
-static int zswap_reclaim_entry(struct zswap_pool *pool)
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+				       spinlock_t *lock, void *arg)
 {
-	struct zswap_entry *entry;
+	struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+	struct mem_cgroup *memcg;
 	struct zswap_tree *tree;
 	pgoff_t swpoffset;
-	int ret;
+	enum lru_status ret = LRU_REMOVED_RETRY;
+	int writeback_result;
 
-	/* Get an entry off the LRU */
-	spin_lock(&pool->lru_lock);
-	if (list_empty(&pool->lru)) {
-		spin_unlock(&pool->lru_lock);
-		return -EINVAL;
-	}
-	entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
-	list_del_init(&entry->lru);
 	/*
 	 * Once the lru lock is dropped, the entry might get freed. The
 	 * swpoffset is copied to the stack, and entry isn't deref'd again
@@ -651,28 +676,33 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
 	 */
 	swpoffset = swp_offset(entry->swpentry);
 	tree = zswap_trees[swp_type(entry->swpentry)];
-	spin_unlock(&pool->lru_lock);
+	list_lru_isolate(l, item);
+	spin_unlock(lock);
 
 	/* Check for invalidate() race */
 	spin_lock(&tree->lock);
 	if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
-		ret = -EAGAIN;
 		goto unlock;
 	}
 	/* Hold a reference to prevent a free during writeback */
 	zswap_entry_get(entry);
 	spin_unlock(&tree->lock);
 
-	ret = zswap_writeback_entry(entry, tree);
+	writeback_result = zswap_writeback_entry(entry, tree);
 
 	spin_lock(&tree->lock);
-	if (ret) {
-		/* Writeback failed, put entry back on LRU */
-		spin_lock(&pool->lru_lock);
-		list_move(&entry->lru, &pool->lru);
-		spin_unlock(&pool->lru_lock);
+	if (writeback_result) {
+		zswap_reject_reclaim_fail++;
+		memcg = get_mem_cgroup_from_entry(entry);
+		spin_lock(lock);
+		/* we cannot use zswap_lru_add here, because it increments node's lru count */
+		list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
+		spin_unlock(lock);
+		mem_cgroup_put(memcg);
+		ret = LRU_RETRY;
 		goto put_unlock;
 	}
+	zswap_written_back_pages++;
 
 	/*
 	 * Writeback started successfully, the page now belongs to the
@@ -686,7 +716,36 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
 	zswap_entry_put(tree, entry);
 unlock:
 	spin_unlock(&tree->lock);
-	return ret ? -EAGAIN : 0;
+	spin_lock(lock);
+	return ret;
+}
+
+static int shrink_memcg(struct mem_cgroup *memcg)
+{
+	struct zswap_pool *pool;
+	int nid, shrunk = 0;
+
+	pool = zswap_pool_current_get();
+	if (!pool)
+		return -EINVAL;
+
+	/*
+	 * Skip zombies because their LRUs are reparented and we would be
+	 * reclaiming from the parent instead of the dead memcgroup.
+	 */
+	if (memcg && !mem_cgroup_online(memcg))
+		goto out;
+
+	for_each_node_state(nid, N_NORMAL_MEMORY) {
+		unsigned long nr_to_walk = 1;
+
+		if (list_lru_walk_one(&pool->list_lru, nid, memcg, &shrink_memcg_cb,
+				      NULL, &nr_to_walk))
+			shrunk++;
+	}
+out:
+	zswap_pool_put(pool);
+	return shrunk ? 0 : -EAGAIN;
 }
 
 static void shrink_worker(struct work_struct *w)
@@ -695,10 +754,13 @@ static void shrink_worker(struct work_struct *w)
 						shrink_work);
 	int ret, failures = 0;
 
+	/* global reclaim will select cgroup in a round-robin fashion. */
 	do {
-		ret = zswap_reclaim_entry(pool);
+		pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
+
+		ret = shrink_memcg(pool->next_shrink);
+
 		if (ret) {
-			zswap_reject_reclaim_fail++;
 			if (ret != -EAGAIN)
 				break;
 			if (++failures == MAX_RECLAIM_RETRIES)
@@ -764,8 +826,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
 	 */
 	kref_init(&pool->kref);
 	INIT_LIST_HEAD(&pool->list);
-	INIT_LIST_HEAD(&pool->lru);
-	spin_lock_init(&pool->lru_lock);
+	list_lru_init_memcg(&pool->list_lru, NULL);
 	INIT_WORK(&pool->shrink_work, shrink_worker);
 
 	zswap_pool_debug("created", pool);
@@ -831,6 +892,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
 
 	cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
 	free_percpu(pool->acomp_ctx);
+	list_lru_destroy(&pool->list_lru);
+	if (pool->next_shrink)
+		mem_cgroup_put(pool->next_shrink);
 	for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
 		zpool_destroy_pool(pool->zpools[i]);
 	kfree(pool);
@@ -1076,7 +1140,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 
 	/* try to allocate swap cache page */
 	page = __read_swap_cache_async(swpentry, GFP_KERNEL, NULL, 0,
-				       &page_was_allocated);
+				       &page_was_allocated, true);
 	if (!page) {
 		ret = -ENOMEM;
 		goto fail;
@@ -1142,7 +1206,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	/* start writeback */
 	__swap_writepage(page, &wbc);
 	put_page(page);
-	zswap_written_back_pages++;
 
 	return ret;
 
@@ -1199,8 +1262,10 @@ bool zswap_store(struct folio *folio)
 	struct scatterlist input, output;
 	struct crypto_acomp_ctx *acomp_ctx;
 	struct obj_cgroup *objcg = NULL;
+	struct mem_cgroup *memcg = NULL;
 	struct zswap_pool *pool;
 	struct zpool *zpool;
+	int lru_alloc_ret;
 	unsigned int dlen = PAGE_SIZE;
 	unsigned long handle, value;
 	char *buf;
@@ -1230,15 +1295,15 @@ bool zswap_store(struct folio *folio)
 		zswap_invalidate_entry(tree, dupentry);
 	}
 	spin_unlock(&tree->lock);
-
-	/*
-	 * XXX: zswap reclaim does not work with cgroups yet. Without a
-	 * cgroup-aware entry LRU, we will push out entries system-wide based on
-	 * local cgroup limits.
-	 */
 	objcg = get_obj_cgroup_from_folio(folio);
-	if (objcg && !obj_cgroup_may_zswap(objcg))
-		goto reject;
+	if (objcg && !obj_cgroup_may_zswap(objcg)) {
+		memcg = get_mem_cgroup_from_objcg(objcg);
+		if (shrink_memcg(memcg)) {
+			mem_cgroup_put(memcg);
+			goto reject;
+		}
+		mem_cgroup_put(memcg);
+	}
 
 	/* reclaim space if needed */
 	if (zswap_is_full()) {
@@ -1254,10 +1319,15 @@ bool zswap_store(struct folio *folio)
 			zswap_pool_reached_full = false;
 	}
 
+	pool = zswap_pool_current_get();
+	if (!pool)
+		goto reject;
+
 	/* allocate entry */
-	entry = zswap_entry_cache_alloc(GFP_KERNEL);
+	entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
 	if (!entry) {
 		zswap_reject_kmemcache_fail++;
+		zswap_pool_put(pool);
 		goto reject;
 	}
 
@@ -1269,6 +1339,7 @@ bool zswap_store(struct folio *folio)
 			entry->length = 0;
 			entry->value = value;
 			atomic_inc(&zswap_same_filled_pages);
+			zswap_pool_put(pool);
 			goto insert_entry;
 		}
 		kunmap_atomic(src);
@@ -1278,9 +1349,15 @@ bool zswap_store(struct folio *folio)
 		goto freepage;
 
 	/* if entry is successfully added, it keeps the reference */
-	entry->pool = zswap_pool_current_get();
-	if (!entry->pool)
-		goto freepage;
+	entry->pool = pool;
+	if (objcg) {
+		memcg = get_mem_cgroup_from_objcg(objcg);
+		lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
+		mem_cgroup_put(memcg);
+
+		if (lru_alloc_ret)
+			goto freepage;
+	}
 
 	/* compress */
 	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
@@ -1358,9 +1435,8 @@ bool zswap_store(struct folio *folio)
 		zswap_invalidate_entry(tree, dupentry);
 	}
 	if (entry->length) {
-		spin_lock(&entry->pool->lru_lock);
-		list_add(&entry->lru, &entry->pool->lru);
-		spin_unlock(&entry->pool->lru_lock);
+		INIT_LIST_HEAD(&entry->lru);
+		zswap_lru_add(&pool->list_lru, entry);
 	}
 	spin_unlock(&tree->lock);
 
@@ -1373,8 +1449,8 @@ bool zswap_store(struct folio *folio)
 
 put_dstmem:
 	mutex_unlock(acomp_ctx->mutex);
-	zswap_pool_put(entry->pool);
 freepage:
+	zswap_pool_put(entry->pool);
 	zswap_entry_cache_free(entry);
 reject:
 	if (objcg)
@@ -1467,9 +1543,8 @@ bool zswap_load(struct folio *folio)
 		zswap_invalidate_entry(tree, entry);
 		folio_mark_dirty(folio);
 	} else if (entry->length) {
-		spin_lock(&entry->pool->lru_lock);
-		list_move(&entry->lru, &entry->pool->lru);
-		spin_unlock(&entry->pool->lru_lock);
+		zswap_lru_del(&entry->pool->list_lru, entry);
+		zswap_lru_add(&entry->pool->list_lru, entry);
 	}
 	zswap_entry_put(tree, entry);
 	spin_unlock(&tree->lock);
-- 
2.34.1


* [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat
  2023-10-17 23:21 [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback Nhat Pham
  2023-10-17 23:21 ` [PATCH v3 1/5] mm: list_lru: allow external numa node and cgroup tracking Nhat Pham
  2023-10-17 23:21 ` [PATCH v3 2/5] zswap: make shrinking memcg-aware Nhat Pham
@ 2023-10-17 23:21 ` Nhat Pham
  2023-10-17 23:35   ` Nhat Pham
  2023-10-18 23:24   ` Yosry Ahmed
  2023-10-17 23:21 ` [PATCH v3 4/5] selftests: cgroup: update per-memcg zswap writeback selftest Nhat Pham
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 31+ messages in thread
From: Nhat Pham @ 2023-10-17 23:21 UTC (permalink / raw)
  To: akpm
  Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah

From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>

Since zswap now writes back pages from memcg-specific LRUs, we need a
new stat to show the writeback count for each memcg.
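
The counter shows up as the "zswap_wb" key in the cgroup's memory.stat
file. As a rough illustration (not part of this patch; the helper name
is hypothetical), it can be read with the same kselftest cgroup helper
that the updated selftest later in this series uses:

#include "cgroup_util.h"	/* kselftest cgroup helpers */

/*
 * Return the number of zswap writebacks charged to @cgroup, or a
 * negative value on error.
 */
static long read_zswap_wb_count(const char *cgroup)
{
	return cg_read_key_long(cgroup, "memory.stat", "zswap_wb");
}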

Suggested-by: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 include/linux/memcontrol.h |  2 ++
 mm/memcontrol.c            | 15 +++++++++++++++
 mm/zswap.c                 |  3 +++
 3 files changed, 20 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3de10fabea0f..7868b1e00bf5 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -38,6 +38,7 @@ enum memcg_stat_item {
 	MEMCG_KMEM,
 	MEMCG_ZSWAP_B,
 	MEMCG_ZSWAPPED,
+	MEMCG_ZSWAP_WB,
 	MEMCG_NR_STAT,
 };
 
@@ -1884,6 +1885,7 @@ static inline void count_objcg_event(struct obj_cgroup *objcg,
 bool obj_cgroup_may_zswap(struct obj_cgroup *objcg);
 void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size);
 void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size);
+void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg);
 #else
 static inline bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1bde67b29287..a9118871e5a6 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1505,6 +1505,7 @@ static const struct memory_stat memory_stats[] = {
 #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
 	{ "zswap",			MEMCG_ZSWAP_B			},
 	{ "zswapped",			MEMCG_ZSWAPPED			},
+	{ "zswap_wb",			MEMCG_ZSWAP_WB			},
 #endif
 	{ "file_mapped",		NR_FILE_MAPPED			},
 	{ "file_dirty",			NR_FILE_DIRTY			},
@@ -1541,6 +1542,7 @@ static int memcg_page_state_unit(int item)
 	switch (item) {
 	case MEMCG_PERCPU_B:
 	case MEMCG_ZSWAP_B:
+	case MEMCG_ZSWAP_WB:
 	case NR_SLAB_RECLAIMABLE_B:
 	case NR_SLAB_UNRECLAIMABLE_B:
 	case WORKINGSET_REFAULT_ANON:
@@ -7861,6 +7863,19 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
 	rcu_read_unlock();
 }
 
+void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg)
+{
+	struct mem_cgroup *memcg;
+
+	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+		return;
+
+	rcu_read_lock();
+	memcg = obj_cgroup_memcg(objcg);
+	mod_memcg_state(memcg, MEMCG_ZSWAP_WB, 1);
+	rcu_read_unlock();
+}
+
 static u64 zswap_current_read(struct cgroup_subsys_state *css,
 			      struct cftype *cft)
 {
diff --git a/mm/zswap.c b/mm/zswap.c
index d2989ad11814..15485427e3fa 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -704,6 +704,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 	}
 	zswap_written_back_pages++;
 
+	if (entry->objcg)
+		obj_cgroup_report_zswap_wb(entry->objcg);
+
 	/*
 	 * Writeback started successfully, the page now belongs to the
 	 * swapcache. Drop the entry from zswap - unless invalidate already
-- 
2.34.1


* [PATCH v3 4/5] selftests: cgroup: update per-memcg zswap writeback selftest
  2023-10-17 23:21 [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback Nhat Pham
                   ` (2 preceding siblings ...)
  2023-10-17 23:21 ` [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat Nhat Pham
@ 2023-10-17 23:21 ` Nhat Pham
  2023-10-17 23:34   ` Nhat Pham
  2023-10-17 23:21 ` [PATCH v3 5/5] zswap: shrinks zswap pool based on memory pressure Nhat Pham
  2023-10-19 17:12 ` [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback Andrew Morton
  5 siblings, 1 reply; 31+ messages in thread
From: Nhat Pham @ 2023-10-17 23:21 UTC (permalink / raw)
  To: akpm
  Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah

From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>

The memcg-zswap self test is updated to adjust to the behavior change
implemented by commit 87730b165089 ("zswap: make shrinking memcg-aware"),
where zswap performs writeback for a specific memcg.

Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 tools/testing/selftests/cgroup/test_zswap.c | 74 ++++++++++++++-------
 1 file changed, 50 insertions(+), 24 deletions(-)

diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
index 49def87a909b..11271fabeffc 100644
--- a/tools/testing/selftests/cgroup/test_zswap.c
+++ b/tools/testing/selftests/cgroup/test_zswap.c
@@ -50,9 +50,9 @@ static int get_zswap_stored_pages(size_t *value)
 	return read_int("/sys/kernel/debug/zswap/stored_pages", value);
 }
 
-static int get_zswap_written_back_pages(size_t *value)
+static int get_cg_wb_count(const char *cg)
 {
-	return read_int("/sys/kernel/debug/zswap/written_back_pages", value);
+	return cg_read_key_long(cg, "memory.stat", "zswap_wb");
 }
 
 static int allocate_bytes(const char *cgroup, void *arg)
@@ -68,45 +68,71 @@ static int allocate_bytes(const char *cgroup, void *arg)
 	return 0;
 }
 
+static char *setup_test_group_1M(const char *root, const char *name)
+{
+	char *group_name = cg_name(root, name);
+
+	if (!group_name)
+		return NULL;
+	if (cg_create(group_name))
+		goto fail;
+	if (cg_write(group_name, "memory.max", "1M")) {
+		cg_destroy(group_name);
+		goto fail;
+	}
+	return group_name;
+fail:
+	free(group_name);
+	return NULL;
+}
+
 /*
  * When trying to store a memcg page in zswap, if the memcg hits its memory
- * limit in zswap, writeback should not be triggered.
- *
- * This was fixed with commit 0bdf0efa180a("zswap: do not shrink if cgroup may
- * not zswap"). Needs to be revised when a per memcg writeback mechanism is
- * implemented.
+ * limit in zswap, writeback should affect only the zswapped pages of that
+ * memcg.
  */
 static int test_no_invasive_cgroup_shrink(const char *root)
 {
-	size_t written_back_before, written_back_after;
 	int ret = KSFT_FAIL;
-	char *test_group;
+	size_t control_allocation_size = MB(10);
+	char *control_allocation = NULL, *wb_group = NULL, *control_group = NULL;
 
 	/* Set up */
-	test_group = cg_name(root, "no_shrink_test");
-	if (!test_group)
-		goto out;
-	if (cg_create(test_group))
+	wb_group = setup_test_group_1M(root, "per_memcg_wb_test1");
+	if (!wb_group)
+		return KSFT_FAIL;
+	if (cg_write(wb_group, "memory.zswap.max", "10K"))
 		goto out;
-	if (cg_write(test_group, "memory.max", "1M"))
+	control_group = setup_test_group_1M(root, "per_memcg_wb_test2");
+	if (!control_group)
 		goto out;
-	if (cg_write(test_group, "memory.zswap.max", "10K"))
+
+	/* Push some control_group memory into zswap */
+	if (cg_enter_current(control_group))
 		goto out;
-	if (get_zswap_written_back_pages(&written_back_before))
+	control_allocation = malloc(control_allocation_size);
+	for (int i = 0; i < control_allocation_size; i += 4095)
+		control_allocation[i] = 'a';
+	if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
 		goto out;
 
-	/* Allocate 10x memory.max to push memory into zswap */
-	if (cg_run(test_group, allocate_bytes, (void *)MB(10)))
+	/* Allocate 10x memory.max to push wb_group memory into zswap and trigger wb */
+	if (cg_run(wb_group, allocate_bytes, (void *)MB(10)))
 		goto out;
 
-	/* Verify that no writeback happened because of the memcg allocation */
-	if (get_zswap_written_back_pages(&written_back_after))
-		goto out;
-	if (written_back_after == written_back_before)
+	/* Verify that only zswapped memory from wb_group has been written back */
+	if (get_cg_wb_count(wb_group) > 0 && get_cg_wb_count(control_group) == 0)
 		ret = KSFT_PASS;
 out:
-	cg_destroy(test_group);
-	free(test_group);
+	cg_enter_current(root);
+	if (control_group) {
+		cg_destroy(control_group);
+		free(control_group);
+	}
+	cg_destroy(wb_group);
+	free(wb_group);
+	if (control_allocation)
+		free(control_allocation);
 	return ret;
 }
 
-- 
2.34.1


* [PATCH v3 5/5] zswap: shrinks zswap pool based on memory pressure
  2023-10-17 23:21 [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback Nhat Pham
                   ` (3 preceding siblings ...)
  2023-10-17 23:21 ` [PATCH v3 4/5] selftests: cgroup: update per-memcg zswap writeback selftest Nhat Pham
@ 2023-10-17 23:21 ` Nhat Pham
  2023-10-18 23:36   ` Yosry Ahmed
  2023-10-19 17:12 ` [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback Andrew Morton
  5 siblings, 1 reply; 31+ messages in thread
From: Nhat Pham @ 2023-10-17 23:21 UTC (permalink / raw)
  To: akpm
  Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah

Currently, we only shrink the zswap pool when the user-defined limit is
hit. This means that if we set the limit too high, cold data that are
unlikely to be used again will reside in the pool, wasting precious
memory. It is hard to predict how much zswap space will be needed ahead
of time, as this depends on the workload (specifically, on factors such
as memory access patterns and compressibility of the memory pages).

This patch implements a memcg- and NUMA-aware shrinker for zswap that
is initiated when there is memory pressure. The shrinker does not
have any parameter that must be tuned by the user, and can be opted in
or out on a per-memcg basis.

Furthermore, to make it more robust for many workloads and prevent
overshrinking (i.e., evicting warm pages that might be refaulted into
memory), we build in the following heuristics (sketched after this
list):

* Estimate the number of warm pages residing in zswap, and attempt to
  protect this region of the zswap LRU.
* Scale the number of freeable objects by an estimate of the memory
  saving factor. The better zswap compresses the data, the fewer pages
  we will evict to swap (as we will otherwise incur IO for relatively
  small memory saving).
* During reclaim, if the shrinker encounters a page that is also being
  brought into memory, the shrinker will cautiously terminate its
  shrinking action, as this is a sign that it is touching the warmer
  region of the zswap LRU.
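
A condensed sketch of how the first two heuristics combine in the
shrinker's count callback (distilled from zswap_shrinker_count() in the
patch below; the function name and parameters here are illustrative
only):

#include <linux/math.h>	/* mult_frac() */

/*
 * lru_size:     objects on this memcg/node zswap LRU
 * nr_protected: estimate of recently faulted/rotated (warm) pages
 * nr_backing:   compressed backing size of the pool, in pages
 * nr_stored:    number of pages stored in zswap
 */
static unsigned long zswap_freeable_sketch(unsigned long lru_size,
					   unsigned long nr_protected,
					   unsigned long nr_backing,
					   unsigned long nr_stored)
{
	unsigned long nr_freeable;

	if (!nr_stored)
		return 0;

	/* Keep the estimated-warm region of the LRU off limits. */
	nr_freeable = lru_size > nr_protected ? lru_size - nr_protected : 0;

	/*
	 * Scale by the memory saving factor: at a 4:1 compression ratio,
	 * only a quarter of the freeable objects is reported, so reclaim
	 * effort stays proportional to the memory actually saved by
	 * writing pages back.
	 */
	return mult_frac(nr_freeable, nr_backing, nr_stored);
}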

As a proof of concept, we ran the following synthetic benchmark:
build the Linux kernel in a memory-limited cgroup, and allocate some
cold data in tmpfs to see if the shrinker could write it out and
improve the overall performance. Depending on the amount of cold data
generated, we observe a 14% to 35% reduction in kernel CPU time used
in the kernel builds.

Signed-off-by: Nhat Pham <nphamcs@gmail.com>
---
 Documentation/admin-guide/mm/zswap.rst |   7 ++
 include/linux/mmzone.h                 |  14 +++
 mm/mmzone.c                            |   3 +
 mm/swap_state.c                        |  21 +++-
 mm/zswap.c                             | 161 +++++++++++++++++++++++--
 5 files changed, 196 insertions(+), 10 deletions(-)

diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
index 45b98390e938..522ae22ccb84 100644
--- a/Documentation/admin-guide/mm/zswap.rst
+++ b/Documentation/admin-guide/mm/zswap.rst
@@ -153,6 +153,13 @@ attribute, e. g.::
 
 Setting this parameter to 100 will disable the hysteresis.
 
+When there is a sizable amount of cold memory residing in the zswap pool, it
+can be advantageous to proactively write these cold pages to swap and reclaim
+the memory for other use cases. By default, the zswap shrinker is disabled.
+Users can enable it as follows:
+
+  echo Y > /sys/module/zswap/parameters/shrinker_enabled
+
 A debugfs interface is provided for various statistic about pool size, number
 of pages stored, same-value filled pages and various counters for the reasons
 pages are rejected.
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 486587fcd27f..8947a1bfbe9c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -637,6 +637,20 @@ struct lruvec {
 #ifdef CONFIG_MEMCG
 	struct pglist_data *pgdat;
 #endif
+#ifdef CONFIG_ZSWAP
+	/*
+	 * Number of pages in zswap that should be protected from the shrinker.
+	 * This number is an estimate of the following counts:
+	 *
+	 * a) Recent page faults.
+	 * b) Recent insertions to the zswap LRU. This includes new zswap stores,
+	 *    as well as recent zswap LRU rotations.
+	 *
+	 * These pages are likely to be warm, and might incur IO if they are written
+	 * to swap.
+	 */
+	atomic_long_t nr_zswap_protected;
+#endif
 };
 
 /* Isolate for asynchronous migration */
diff --git a/mm/mmzone.c b/mm/mmzone.c
index 68e1511be12d..4137f3ac42cd 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -78,6 +78,9 @@ void lruvec_init(struct lruvec *lruvec)
 
 	memset(lruvec, 0, sizeof(struct lruvec));
 	spin_lock_init(&lruvec->lru_lock);
+#ifdef CONFIG_ZSWAP
+	atomic_long_set(&lruvec->nr_zswap_protected, 0);
+#endif
 
 	for_each_lru(lru)
 		INIT_LIST_HEAD(&lruvec->lists[lru]);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 0356df52b06a..a60197b55a28 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -676,7 +676,15 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	lru_add_drain();	/* Push any new pages onto the LRU now */
 skip:
 	/* The page was likely read above, so no need for plugging here */
-	return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+	page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+#ifdef CONFIG_ZSWAP
+	if (page) {
+		struct lruvec *lruvec = folio_lruvec(page_folio(page));
+
+		atomic_long_inc(&lruvec->nr_zswap_protected);
+	}
+#endif
+	return page;
 }
 
 int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -843,8 +851,15 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 	lru_add_drain();
 skip:
 	/* The page was likely read above, so no need for plugging here */
-	return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
-				     NULL);
+	page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
+#ifdef CONFIG_ZSWAP
+	if (page) {
+		struct lruvec *lruvec = folio_lruvec(page_folio(page));
+
+		atomic_long_inc(&lruvec->nr_zswap_protected);
+	}
+#endif
+	return page;
 }
 
 /**
diff --git a/mm/zswap.c b/mm/zswap.c
index 15485427e3fa..1d1fe75a5237 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -145,6 +145,10 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
 /* Number of zpools in zswap_pool (empirically determined for scalability) */
 #define ZSWAP_NR_ZPOOLS 32
 
+/* Enable/disable memory pressure-based shrinker. */
+static bool zswap_shrinker_enabled;
+module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
+
 /*********************************
 * data structures
 **********************************/
@@ -174,6 +178,8 @@ struct zswap_pool {
 	char tfm_name[CRYPTO_MAX_ALG_NAME];
 	struct list_lru list_lru;
 	struct mem_cgroup *next_shrink;
+	struct shrinker *shrinker;
+	atomic_t nr_stored;
 };
 
 /*
@@ -272,17 +278,26 @@ static bool zswap_can_accept(void)
 			DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
 }
 
+static u64 get_zswap_pool_size(struct zswap_pool *pool)
+{
+	u64 pool_size = 0;
+	int i;
+
+	for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
+		pool_size += zpool_get_total_size(pool->zpools[i]);
+
+	return pool_size;
+}
+
 static void zswap_update_total_size(void)
 {
 	struct zswap_pool *pool;
 	u64 total = 0;
-	int i;
 
 	rcu_read_lock();
 
 	list_for_each_entry_rcu(pool, &zswap_pools, list)
-		for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
-			total += zpool_get_total_size(pool->zpools[i]);
+		total += get_zswap_pool_size(pool);
 
 	rcu_read_unlock();
 
@@ -326,8 +341,24 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
 static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
 {
 	struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
-	bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
-
+	int nid = entry_to_nid(entry);
+	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+	bool added = __list_lru_add(list_lru, &entry->lru, nid, memcg);
+	unsigned long lru_size, old, new;
+
+	if (added) {
+		lru_size = list_lru_count_one(list_lru, entry_to_nid(entry), memcg);
+		old = atomic_long_inc_return(&lruvec->nr_zswap_protected);
+
+		/*
+		 * Decay to avoid overflow and adapt to changing workloads.
+		 * This is based on LRU reclaim cost decaying heuristics.
+		 */
+		do {
+			new = old > lru_size / 4 ? old / 2 : old;
+		} while (
+			!atomic_long_try_cmpxchg(&lruvec->nr_zswap_protected, &old, new));
+	}
 	mem_cgroup_put(memcg);
 	return added;
 }
@@ -427,6 +458,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
 	else {
 		zswap_lru_del(&entry->pool->list_lru, entry);
 		zpool_free(zswap_find_zpool(entry), entry->handle);
+		atomic_dec(&entry->pool->nr_stored);
 		zswap_pool_put(entry->pool);
 	}
 	zswap_entry_cache_free(entry);
@@ -468,6 +500,93 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
 	return entry;
 }
 
+/*********************************
+* shrinker functions
+**********************************/
+static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
+				       spinlock_t *lock, void *arg);
+
+static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
+		struct shrink_control *sc)
+{
+	struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
+	unsigned long shrink_ret, nr_protected, lru_size;
+	struct zswap_pool *pool = shrinker->private_data;
+	bool encountered_page_in_swapcache = false;
+
+	nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
+	lru_size = list_lru_shrink_count(&pool->list_lru, sc);
+
+	/*
+	 * Abort if the shrinker is disabled or if we are shrinking into the
+	 * protected region.
+	 */
+	if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
+		sc->nr_scanned = 0;
+		return SHRINK_STOP;
+	}
+
+	shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
+		&encountered_page_in_swapcache);
+
+	if (encountered_page_in_swapcache)
+		return SHRINK_STOP;
+
+	return shrink_ret ? shrink_ret : SHRINK_STOP;
+}
+
+static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
+		struct shrink_control *sc)
+{
+	struct zswap_pool *pool = shrinker->private_data;
+	struct mem_cgroup *memcg = sc->memcg;
+	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
+	unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
+
+#ifdef CONFIG_MEMCG_KMEM
+	cgroup_rstat_flush(memcg->css.cgroup);
+	nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
+	nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
+#else
+	/* use pool stats instead of memcg stats */
+	nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
+	nr_stored = atomic_read(&pool->nr_stored);
+#endif
+
+	if (!zswap_shrinker_enabled || !nr_stored)
+		return 0;
+
+	nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
+	nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
+	/*
+	 * Subtract from the lru size an estimate of the number of pages
+	 * that should be protected.
+	 */
+	nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
+
+	/*
+	 * Scale the number of freeable pages by the memory saving factor.
+	 * This ensures that the better zswap compresses memory, the fewer
+	 * pages we will evict to swap (as it will otherwise incur IO for
+	 * relatively small memory saving).
+	 */
+	return mult_frac(nr_freeable, nr_backing, nr_stored);
+}
+
+static void zswap_alloc_shrinker(struct zswap_pool *pool)
+{
+	pool->shrinker =
+		shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
+	if (!pool->shrinker)
+		return;
+
+	pool->shrinker->private_data = pool;
+	pool->shrinker->scan_objects = zswap_shrinker_scan;
+	pool->shrinker->count_objects = zswap_shrinker_count;
+	pool->shrinker->batch = 0;
+	pool->shrinker->seeks = DEFAULT_SEEKS;
+}
+
 /*********************************
 * per-cpu code
 **********************************/
@@ -663,8 +782,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 				       spinlock_t *lock, void *arg)
 {
 	struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
+	bool *encountered_page_in_swapcache = (bool *)arg;
 	struct mem_cgroup *memcg;
 	struct zswap_tree *tree;
+	struct lruvec *lruvec;
 	pgoff_t swpoffset;
 	enum lru_status ret = LRU_REMOVED_RETRY;
 	int writeback_result;
@@ -698,8 +819,22 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
 		/* we cannot use zswap_lru_add here, because it increments node's lru count */
 		list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
 		spin_unlock(lock);
-		mem_cgroup_put(memcg);
 		ret = LRU_RETRY;
+
+		/*
+		 * Encountering a page already in swap cache is a sign that we are shrinking
+		 * into the warmer region. We should terminate shrinking (if we're in the dynamic
+		 * shrinker context).
+		 */
+		if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
+			ret = LRU_SKIP;
+			*encountered_page_in_swapcache = true;
+		}
+		lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry_to_nid(entry)));
+		/* Increment the protection area to account for the LRU rotation. */
+		atomic_long_inc(&lruvec->nr_zswap_protected);
+
+		mem_cgroup_put(memcg);
 		goto put_unlock;
 	}
 	zswap_written_back_pages++;
@@ -822,6 +957,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
 				       &pool->node);
 	if (ret)
 		goto error;
+
+	zswap_alloc_shrinker(pool);
+	if (!pool->shrinker)
+		goto error;
+
 	pr_debug("using %s compressor\n", pool->tfm_name);
 
 	/* being the current pool takes 1 ref; this func expects the
@@ -829,13 +969,18 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
 	 */
 	kref_init(&pool->kref);
 	INIT_LIST_HEAD(&pool->list);
-	list_lru_init_memcg(&pool->list_lru, NULL);
+	if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
+		goto lru_fail;
+	shrinker_register(pool->shrinker);
 	INIT_WORK(&pool->shrink_work, shrink_worker);
 
 	zswap_pool_debug("created", pool);
 
 	return pool;
 
+lru_fail:
+	list_lru_destroy(&pool->list_lru);
+	shrinker_free(pool->shrinker);
 error:
 	if (pool->acomp_ctx)
 		free_percpu(pool->acomp_ctx);
@@ -893,6 +1038,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
 
 	zswap_pool_debug("destroying", pool);
 
+	shrinker_free(pool->shrinker);
 	cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
 	free_percpu(pool->acomp_ctx);
 	list_lru_destroy(&pool->list_lru);
@@ -1440,6 +1586,7 @@ bool zswap_store(struct folio *folio)
 	if (entry->length) {
 		INIT_LIST_HEAD(&entry->lru);
 		zswap_lru_add(&pool->list_lru, entry);
+		atomic_inc(&pool->nr_stored);
 	}
 	spin_unlock(&tree->lock);
 
-- 
2.34.1


* Re: [PATCH v3 4/5] selftests: cgroup: update per-memcg zswap writeback selftest
  2023-10-17 23:21 ` [PATCH v3 4/5] selftests: cgroup: update per-memcg zswap writeback selftest Nhat Pham
@ 2023-10-17 23:34   ` Nhat Pham
  2023-10-17 23:44     ` Nhat Pham
  0 siblings, 1 reply; 31+ messages in thread
From: Nhat Pham @ 2023-10-17 23:34 UTC (permalink / raw)
  To: akpm
  Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah

On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
>
> From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
>
> The memcg-zswap self test is updated to adjust to the behavior change
> implemented by commit 87730b165089 ("zswap: make shrinking memcg-aware"),
> where zswap performs writeback for a specific memcg.
>
> Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>

/s/Signed-off/Acked
This is Domenico's work :) I used the wrong tag here. Should be:
Acked-by: Nhat Pham <nphamcs@gmail.com>


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat
  2023-10-17 23:21 ` [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat Nhat Pham
@ 2023-10-17 23:35   ` Nhat Pham
  2023-10-17 23:37     ` Jeff Johnson
  2023-10-18 23:24   ` Yosry Ahmed
  1 sibling, 1 reply; 31+ messages in thread
From: Nhat Pham @ 2023-10-17 23:35 UTC (permalink / raw)
  To: akpm
  Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah

On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
>
> From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
>
> Since zswap now writes back pages from memcg-specific LRUs, we now need a
> new stat to show the writeback count for each memcg.
>
> Suggested-by: Nhat Pham <nphamcs@gmail.com>
> Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>

/s/Signed-off/Acked
This is Domenico's work :) I used the wrong tag here. Should be:
Acked-by: Nhat Pham <nphamcs@gmail.com>

> ---
>  include/linux/memcontrol.h |  2 ++
>  mm/memcontrol.c            | 15 +++++++++++++++
>  mm/zswap.c                 |  3 +++
>  3 files changed, 20 insertions(+)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 3de10fabea0f..7868b1e00bf5 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -38,6 +38,7 @@ enum memcg_stat_item {
>         MEMCG_KMEM,
>         MEMCG_ZSWAP_B,
>         MEMCG_ZSWAPPED,
> +       MEMCG_ZSWAP_WB,
>         MEMCG_NR_STAT,
>  };
>
> @@ -1884,6 +1885,7 @@ static inline void count_objcg_event(struct obj_cgroup *objcg,
>  bool obj_cgroup_may_zswap(struct obj_cgroup *objcg);
>  void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size);
>  void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size);
> +void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg);
>  #else
>  static inline bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
>  {
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 1bde67b29287..a9118871e5a6 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1505,6 +1505,7 @@ static const struct memory_stat memory_stats[] = {
>  #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
>         { "zswap",                      MEMCG_ZSWAP_B                   },
>         { "zswapped",                   MEMCG_ZSWAPPED                  },
> +       { "zswap_wb",                   MEMCG_ZSWAP_WB                  },
>  #endif
>         { "file_mapped",                NR_FILE_MAPPED                  },
>         { "file_dirty",                 NR_FILE_DIRTY                   },
> @@ -1541,6 +1542,7 @@ static int memcg_page_state_unit(int item)
>         switch (item) {
>         case MEMCG_PERCPU_B:
>         case MEMCG_ZSWAP_B:
> +       case MEMCG_ZSWAP_WB:
>         case NR_SLAB_RECLAIMABLE_B:
>         case NR_SLAB_UNRECLAIMABLE_B:
>         case WORKINGSET_REFAULT_ANON:
> @@ -7861,6 +7863,19 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
>         rcu_read_unlock();
>  }
>
> +void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg)
> +{
> +       struct mem_cgroup *memcg;
> +
> +       if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> +               return;
> +
> +       rcu_read_lock();
> +       memcg = obj_cgroup_memcg(objcg);
> +       mod_memcg_state(memcg, MEMCG_ZSWAP_WB, 1);
> +       rcu_read_unlock();
> +}
> +
>  static u64 zswap_current_read(struct cgroup_subsys_state *css,
>                               struct cftype *cft)
>  {
> diff --git a/mm/zswap.c b/mm/zswap.c
> index d2989ad11814..15485427e3fa 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -704,6 +704,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
>         }
>         zswap_written_back_pages++;
>
> +       if (entry->objcg)
> +               obj_cgroup_report_zswap_wb(entry->objcg);
> +
>         /*
>          * Writeback started successfully, the page now belongs to the
>          * swapcache. Drop the entry from zswap - unless invalidate already
> --
> 2.34.1

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat
  2023-10-17 23:35   ` Nhat Pham
@ 2023-10-17 23:37     ` Jeff Johnson
  2023-10-17 23:40       ` Nhat Pham
  0 siblings, 1 reply; 31+ messages in thread
From: Jeff Johnson @ 2023-10-17 23:37 UTC (permalink / raw)
  To: Nhat Pham, akpm
  Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah

On 10/17/2023 4:35 PM, Nhat Pham wrote:
> On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
>>
>> From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
>>
>> Since zswap now writes back pages from memcg-specific LRUs, we now need a
>> new stat to show the writeback count for each memcg.
>>
>> Suggested-by: Nhat Pham <nphamcs@gmail.com>
>> Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
>> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> 
> /s/Signed-off/Acked
> This is Domenico's work :) I used the wrong tag here. Should be:
> Acked-by: Nhat Pham <nphamcs@gmail.com>

no, since you are posting the patch, you have to sign off on it.
Signed-off-by is correct


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat
  2023-10-17 23:37     ` Jeff Johnson
@ 2023-10-17 23:40       ` Nhat Pham
  0 siblings, 0 replies; 31+ messages in thread
From: Nhat Pham @ 2023-10-17 23:40 UTC (permalink / raw)
  To: Jeff Johnson
  Cc: akpm, hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah

On Tue, Oct 17, 2023 at 4:38 PM Jeff Johnson <quic_jjohnson@quicinc.com> wrote:
>
> On 10/17/2023 4:35 PM, Nhat Pham wrote:
> > On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
> >>
> >> From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> >>
> >> Since zswap now writes back pages from memcg-specific LRUs, we now need a
> >> new stat to show the writeback count for each memcg.
> >>
> >> Suggested-by: Nhat Pham <nphamcs@gmail.com>
> >> Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> >> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> >
> > /s/Signed-off/Acked
> > This is Domenico's work :) I used the wrong tag here. Should be:
> > Acked-by: Nhat Pham <nphamcs@gmail.com>
>
> no, since you are posting the patch, you have to sign off on it.
> Signed-off-by is correct
>

Ah so past Nhat was right all along. Ignore my comments then.
Thanks for letting me know, Jeff!

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 4/5] selftests: cgroup: update per-memcg zswap writeback selftest
  2023-10-17 23:34   ` Nhat Pham
@ 2023-10-17 23:44     ` Nhat Pham
  0 siblings, 0 replies; 31+ messages in thread
From: Nhat Pham @ 2023-10-17 23:44 UTC (permalink / raw)
  To: akpm
  Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah

On Tue, Oct 17, 2023 at 4:34 PM Nhat Pham <nphamcs@gmail.com> wrote:
>
> On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> >
> > The memcg-zswap self test is updated to adjust to the behavior change
> > implemented by commit 87730b165089 ("zswap: make shrinking memcg-aware"),
> > where zswap performs writeback for a specific memcg.
> >
> > Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
>
> /s/Signed-off/Acked
> This is Domenico's work :) I used the wrong tag here. Should be:
> Acked-by: Nhat Pham <nphamcs@gmail.com>

Please ignore this comment - it was pointed out to me that Signed-off is
the appropriate tag here.

>
> > ---
> >  tools/testing/selftests/cgroup/test_zswap.c | 74 ++++++++++++++-------
> >  1 file changed, 50 insertions(+), 24 deletions(-)
> >
> > diff --git a/tools/testing/selftests/cgroup/test_zswap.c b/tools/testing/selftests/cgroup/test_zswap.c
> > index 49def87a909b..11271fabeffc 100644
> > --- a/tools/testing/selftests/cgroup/test_zswap.c
> > +++ b/tools/testing/selftests/cgroup/test_zswap.c
> > @@ -50,9 +50,9 @@ static int get_zswap_stored_pages(size_t *value)
> >         return read_int("/sys/kernel/debug/zswap/stored_pages", value);
> >  }
> >
> > -static int get_zswap_written_back_pages(size_t *value)
> > +static int get_cg_wb_count(const char *cg)
> >  {
> > -       return read_int("/sys/kernel/debug/zswap/written_back_pages", value);
> > +       return cg_read_key_long(cg, "memory.stat", "zswap_wb");
> >  }
> >
> >  static int allocate_bytes(const char *cgroup, void *arg)
> > @@ -68,45 +68,71 @@ static int allocate_bytes(const char *cgroup, void *arg)
> >         return 0;
> >  }
> >
> > +static char *setup_test_group_1M(const char *root, const char *name)
> > +{
> > +       char *group_name = cg_name(root, name);
> > +
> > +       if (!group_name)
> > +               return NULL;
> > +       if (cg_create(group_name))
> > +               goto fail;
> > +       if (cg_write(group_name, "memory.max", "1M")) {
> > +               cg_destroy(group_name);
> > +               goto fail;
> > +       }
> > +       return group_name;
> > +fail:
> > +       free(group_name);
> > +       return NULL;
> > +}
> > +
> >  /*
> >   * When trying to store a memcg page in zswap, if the memcg hits its memory
> > - * limit in zswap, writeback should not be triggered.
> > - *
> > - * This was fixed with commit 0bdf0efa180a("zswap: do not shrink if cgroup may
> > - * not zswap"). Needs to be revised when a per memcg writeback mechanism is
> > - * implemented.
> > + * limit in zswap, writeback should affect only the zswapped pages of that
> > + * memcg.
> >   */
> >  static int test_no_invasive_cgroup_shrink(const char *root)
> >  {
> > -       size_t written_back_before, written_back_after;
> >         int ret = KSFT_FAIL;
> > -       char *test_group;
> > +       size_t control_allocation_size = MB(10);
> > +       char *control_allocation = NULL, *wb_group = NULL, *control_group = NULL;
> >
> >         /* Set up */
> > -       test_group = cg_name(root, "no_shrink_test");
> > -       if (!test_group)
> > -               goto out;
> > -       if (cg_create(test_group))
> > +       wb_group = setup_test_group_1M(root, "per_memcg_wb_test1");
> > +       if (!wb_group)
> > +               return KSFT_FAIL;
> > +       if (cg_write(wb_group, "memory.zswap.max", "10K"))
> >                 goto out;
> > -       if (cg_write(test_group, "memory.max", "1M"))
> > +       control_group = setup_test_group_1M(root, "per_memcg_wb_test2");
> > +       if (!control_group)
> >                 goto out;
> > -       if (cg_write(test_group, "memory.zswap.max", "10K"))
> > +
> > +       /* Push some control_group memory into zswap */
> > +       if (cg_enter_current(control_group))
> >                 goto out;
> > -       if (get_zswap_written_back_pages(&written_back_before))
> > +       control_allocation = malloc(control_allocation_size);
> > +       for (int i = 0; i < control_allocation_size; i += 4095)
> > +               control_allocation[i] = 'a';
> > +       if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
> >                 goto out;
> >
> > -       /* Allocate 10x memory.max to push memory into zswap */
> > -       if (cg_run(test_group, allocate_bytes, (void *)MB(10)))
> > +       /* Allocate 10x memory.max to push wb_group memory into zswap and trigger wb */
> > +       if (cg_run(wb_group, allocate_bytes, (void *)MB(10)))
> >                 goto out;
> >
> > -       /* Verify that no writeback happened because of the memcg allocation */
> > -       if (get_zswap_written_back_pages(&written_back_after))
> > -               goto out;
> > -       if (written_back_after == written_back_before)
> > +       /* Verify that only zswapped memory from wb_group has been written back */
> > +       if (get_cg_wb_count(wb_group) > 0 && get_cg_wb_count(control_group) == 0)
> >                 ret = KSFT_PASS;
> >  out:
> > -       cg_destroy(test_group);
> > -       free(test_group);
> > +       cg_enter_current(root);
> > +       if (control_group) {
> > +               cg_destroy(control_group);
> > +               free(control_group);
> > +       }
> > +       cg_destroy(wb_group);
> > +       free(wb_group);
> > +       if (control_allocation)
> > +               free(control_allocation);
> >         return ret;
> >  }
> >
> > --
> > 2.34.1

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 1/5] mm: list_lru: allow external numa node and cgroup tracking
  2023-10-17 23:21 ` [PATCH v3 1/5] mm: list_lru: allow external numa node and cgroup tracking Nhat Pham
@ 2023-10-18 22:26   ` Yosry Ahmed
  2023-10-18 23:09     ` Nhat Pham
  0 siblings, 1 reply; 31+ messages in thread
From: Yosry Ahmed @ 2023-10-18 22:26 UTC (permalink / raw)
  To: Nhat Pham
  Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
	mhocko, roman.gushchin, shakeelb, muchun.song, linux-mm,
	kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
	shuah

On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
>
> The list_lru interface assumes that objects are allocated on the correct
> node/memcg. This change introduces the ability to explicitly specify the
> NUMA node and memcg when adding and removing objects, so that users of
> list_lru can track the node/memcg of items outside of the list_lru - as
> in zswap, where the allocations can be made by kswapd for data that's
> charged to a different cgroup.
>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>

I prefer what Johannes suggested, making list_lru_add() and friends
take in the memcg and nid, and add list_lru_add_obj() (or similar) and
friends that assume the object is on the right node and memcg. This is
clearer and more explicit imo. I am not very familiar with list_lrus
though, so I'll leave this to folks who actually are.
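
A minimal sketch of the interface split being suggested here, using the
names proposed above (list_lru_add_obj() does not exist at this point,
and the wrapper body simply mirrors what list_lru_add() does in this
patch):

    /* explicit variant: caller supplies the node and memcg */
    bool list_lru_add(struct list_lru *lru, struct list_head *item,
                      int nid, struct mem_cgroup *memcg);

    /* convenience wrapper: derive node/memcg from the object itself */
    bool list_lru_add_obj(struct list_lru *lru, struct list_head *item)
    {
            int nid = page_to_nid(virt_to_page(item));
            struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
                    mem_cgroup_from_slab_obj(item) : NULL;

            return list_lru_add(lru, item, nid, memcg);
    }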

> ---
>  include/linux/list_lru.h | 38 +++++++++++++++++++++++++++++++++++
>  mm/list_lru.c            | 43 +++++++++++++++++++++++++++++++++++-----
>  2 files changed, 76 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> index b35968ee9fb5..0f5f39cacbbb 100644
> --- a/include/linux/list_lru.h
> +++ b/include/linux/list_lru.h
> @@ -89,6 +89,24 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
>   */
>  bool list_lru_add(struct list_lru *lru, struct list_head *item);
>
> +/**
> + * __list_lru_add: add an element to a specific sublist.
> + * @list_lru: the lru pointer
> + * @item: the item to be added.
> + * @memcg: the cgroup of the sublist to add the item to.
> + * @nid: the node id of the sublist to add the item to.
> + *
> + * This function is similar to list_lru_add(), but it allows the caller to
> + * specify the sublist to which the item should be added. This can be useful
> + * when the list_head node is not necessarily in the same cgroup and NUMA node
> + * as the data it represents, such as zswap, where the list_head node could be
> + * from kswapd and the data from a different cgroup altogether.
> + *
> + * Return value: true if the list was updated, false otherwise
> + */
> +bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> +                   struct mem_cgroup *memcg);
> +
>  /**
>   * list_lru_del: delete an element to the lru list
>   * @list_lru: the lru pointer
> @@ -102,6 +120,18 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item);
>   */
>  bool list_lru_del(struct list_lru *lru, struct list_head *item);
>
> +/**
> + * __list_lru_del: delete an element from a specific sublist.
> + * @list_lru: the lru pointer
> + * @item: the item to be deleted.
> + * @memcg: the cgroup of the sublist to delete the item from.
> + * @nid: the node id of the sublist to delete the item from.
> + *
> + * Return value: true if the list was updated, false otherwise.
> + */
> +bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> +                   struct mem_cgroup *memcg);
> +
>  /**
>   * list_lru_count_one: return the number of objects currently held by @lru
>   * @lru: the lru pointer.
> @@ -136,6 +166,14 @@ static inline unsigned long list_lru_count(struct list_lru *lru)
>  void list_lru_isolate(struct list_lru_one *list, struct list_head *item);
>  void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
>                            struct list_head *head);
> +/*
> + * list_lru_putback: undo list_lru_isolate.
> + *
> + * Since we might have dropped the LRU lock in between, recompute list_lru_one
> + * from the node's id and memcg.
> + */
> +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
> +                     struct mem_cgroup *memcg);
>
>  typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
>                 struct list_lru_one *list, spinlock_t *lock, void *cb_arg);
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index a05e5bef3b40..63b75163c6ad 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -119,13 +119,22 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr,
>  bool list_lru_add(struct list_lru *lru, struct list_head *item)
>  {
>         int nid = page_to_nid(virt_to_page(item));
> +       struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> +               mem_cgroup_from_slab_obj(item) : NULL;
> +
> +       return __list_lru_add(lru, item, nid, memcg);
> +}
> +EXPORT_SYMBOL_GPL(list_lru_add);
> +
> +bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> +                   struct mem_cgroup *memcg)
> +{
>         struct list_lru_node *nlru = &lru->node[nid];
> -       struct mem_cgroup *memcg;
>         struct list_lru_one *l;
>
>         spin_lock(&nlru->lock);
>         if (list_empty(item)) {
> -               l = list_lru_from_kmem(lru, nid, item, &memcg);
> +               l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
>                 list_add_tail(item, &l->list);
>                 /* Set shrinker bit if the first element was added */
>                 if (!l->nr_items++)
> @@ -138,17 +147,27 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
>         spin_unlock(&nlru->lock);
>         return false;
>  }
> -EXPORT_SYMBOL_GPL(list_lru_add);
> +EXPORT_SYMBOL_GPL(__list_lru_add);
>
>  bool list_lru_del(struct list_lru *lru, struct list_head *item)
>  {
>         int nid = page_to_nid(virt_to_page(item));
> +       struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> +               mem_cgroup_from_slab_obj(item) : NULL;
> +
> +       return __list_lru_del(lru, item, nid, memcg);
> +}
> +EXPORT_SYMBOL_GPL(list_lru_del);
> +
> +bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> +                   struct mem_cgroup *memcg)
> +{
>         struct list_lru_node *nlru = &lru->node[nid];
>         struct list_lru_one *l;
>
>         spin_lock(&nlru->lock);
>         if (!list_empty(item)) {
> -               l = list_lru_from_kmem(lru, nid, item, NULL);
> +               l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
>                 list_del_init(item);
>                 l->nr_items--;
>                 nlru->nr_items--;
> @@ -158,7 +177,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
>         spin_unlock(&nlru->lock);
>         return false;
>  }
> -EXPORT_SYMBOL_GPL(list_lru_del);
> +EXPORT_SYMBOL_GPL(__list_lru_del);
>
>  void list_lru_isolate(struct list_lru_one *list, struct list_head *item)
>  {
> @@ -175,6 +194,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
>  }
>  EXPORT_SYMBOL_GPL(list_lru_isolate_move);
>
> +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
> +                     struct mem_cgroup *memcg)
> +{
> +       struct list_lru_one *list =
> +               list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> +
> +       if (list_empty(item)) {
> +               list_add_tail(item, &list->list);
> +               if (!list->nr_items++)
> +                       set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
> +       }
> +}
> +EXPORT_SYMBOL_GPL(list_lru_putback);
> +
>  unsigned long list_lru_count_one(struct list_lru *lru,
>                                  int nid, struct mem_cgroup *memcg)
>  {
> --
> 2.34.1

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 1/5] mm: list_lru: allow external numa node and cgroup tracking
  2023-10-18 22:26   ` Yosry Ahmed
@ 2023-10-18 23:09     ` Nhat Pham
  0 siblings, 0 replies; 31+ messages in thread
From: Nhat Pham @ 2023-10-18 23:09 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
	mhocko, roman.gushchin, shakeelb, muchun.song, linux-mm,
	kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
	shuah

On Wed, Oct 18, 2023 at 3:27 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > The list_lru interface assumes that objects are allocated on the correct
> > node/memcg. This change introduces the ability to explicitly specify the
> > NUMA node and memcg when adding and removing objects, so that users of
> > list_lru can track the node/memcg of items outside of the list_lru - as
> > in zswap, where the allocations can be made by kswapd for data that's
> > charged to a different cgroup.
> >
> > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
>
> I prefer what Johannes suggested, making list_lru_add() and friends
> take in the memcg and nid, and add list_lru_add_obj() (or similar) and
> friends that assume the object is on the right node and memcg. This is
> clearer and more explicit imo. I am not very familiar with list_lrus
> though, so I'll leave this to folks who actually are.

Yeah the original naming is... most unfortunate, to say the least :)

I created a new function to avoid renaming list_lru_add's usage
everywhere, but if the consensus is that everyone prefers
list_lru_add() to be the one taking memcg + nid (and the original
renamed to list_lru_add_obj()), I can go around fixing all of it :)

Seems like a separate endeavour though.

>
> > ---
> >  include/linux/list_lru.h | 38 +++++++++++++++++++++++++++++++++++
> >  mm/list_lru.c            | 43 +++++++++++++++++++++++++++++++++++-----
> >  2 files changed, 76 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> > index b35968ee9fb5..0f5f39cacbbb 100644
> > --- a/include/linux/list_lru.h
> > +++ b/include/linux/list_lru.h
> > @@ -89,6 +89,24 @@ void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *paren
> >   */
> >  bool list_lru_add(struct list_lru *lru, struct list_head *item);
> >
> > +/**
> > + * __list_lru_add: add an element to a specific sublist.
> > + * @list_lru: the lru pointer
> > + * @item: the item to be added.
> > + * @memcg: the cgroup of the sublist to add the item to.
> > + * @nid: the node id of the sublist to add the item to.
> > + *
> > + * This function is similar to list_lru_add(), but it allows the caller to
> > + * specify the sublist to which the item should be added. This can be useful
> > + * when the list_head node is not necessarily in the same cgroup and NUMA node
> > + * as the data it represents, such as zswap, where the list_head node could be
> > + * from kswapd and the data from a different cgroup altogether.
> > + *
> > + * Return value: true if the list was updated, false otherwise
> > + */
> > +bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> > +                   struct mem_cgroup *memcg);
> > +
> >  /**
> >   * list_lru_del: delete an element to the lru list
> >   * @list_lru: the lru pointer
> > @@ -102,6 +120,18 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item);
> >   */
> >  bool list_lru_del(struct list_lru *lru, struct list_head *item);
> >
> > +/**
> > + * __list_lru_del: delete an element from a specific sublist.
> > + * @list_lru: the lru pointer
> > + * @item: the item to be deleted.
> > + * @memcg: the cgroup of the sublist to delete the item from.
> > + * @nid: the node id of the sublist to delete the item from.
> > + *
> > + * Return value: true if the list was updated, false otherwise.
> > + */
> > +bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> > +                   struct mem_cgroup *memcg);
> > +
> >  /**
> >   * list_lru_count_one: return the number of objects currently held by @lru
> >   * @lru: the lru pointer.
> > @@ -136,6 +166,14 @@ static inline unsigned long list_lru_count(struct list_lru *lru)
> >  void list_lru_isolate(struct list_lru_one *list, struct list_head *item);
> >  void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
> >                            struct list_head *head);
> > +/*
> > + * list_lru_putback: undo list_lru_isolate.
> > + *
> > + * Since we might have dropped the LRU lock in between, recompute list_lru_one
> > + * from the node's id and memcg.
> > + */
> > +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
> > +                     struct mem_cgroup *memcg);
> >
> >  typedef enum lru_status (*list_lru_walk_cb)(struct list_head *item,
> >                 struct list_lru_one *list, spinlock_t *lock, void *cb_arg);
> > diff --git a/mm/list_lru.c b/mm/list_lru.c
> > index a05e5bef3b40..63b75163c6ad 100644
> > --- a/mm/list_lru.c
> > +++ b/mm/list_lru.c
> > @@ -119,13 +119,22 @@ list_lru_from_kmem(struct list_lru *lru, int nid, void *ptr,
> >  bool list_lru_add(struct list_lru *lru, struct list_head *item)
> >  {
> >         int nid = page_to_nid(virt_to_page(item));
> > +       struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> > +               mem_cgroup_from_slab_obj(item) : NULL;
> > +
> > +       return __list_lru_add(lru, item, nid, memcg);
> > +}
> > +EXPORT_SYMBOL_GPL(list_lru_add);
> > +
> > +bool __list_lru_add(struct list_lru *lru, struct list_head *item, int nid,
> > +                   struct mem_cgroup *memcg)
> > +{
> >         struct list_lru_node *nlru = &lru->node[nid];
> > -       struct mem_cgroup *memcg;
> >         struct list_lru_one *l;
> >
> >         spin_lock(&nlru->lock);
> >         if (list_empty(item)) {
> > -               l = list_lru_from_kmem(lru, nid, item, &memcg);
> > +               l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> >                 list_add_tail(item, &l->list);
> >                 /* Set shrinker bit if the first element was added */
> >                 if (!l->nr_items++)
> > @@ -138,17 +147,27 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
> >         spin_unlock(&nlru->lock);
> >         return false;
> >  }
> > -EXPORT_SYMBOL_GPL(list_lru_add);
> > +EXPORT_SYMBOL_GPL(__list_lru_add);
> >
> >  bool list_lru_del(struct list_lru *lru, struct list_head *item)
> >  {
> >         int nid = page_to_nid(virt_to_page(item));
> > +       struct mem_cgroup *memcg = list_lru_memcg_aware(lru) ?
> > +               mem_cgroup_from_slab_obj(item) : NULL;
> > +
> > +       return __list_lru_del(lru, item, nid, memcg);
> > +}
> > +EXPORT_SYMBOL_GPL(list_lru_del);
> > +
> > +bool __list_lru_del(struct list_lru *lru, struct list_head *item, int nid,
> > +                   struct mem_cgroup *memcg)
> > +{
> >         struct list_lru_node *nlru = &lru->node[nid];
> >         struct list_lru_one *l;
> >
> >         spin_lock(&nlru->lock);
> >         if (!list_empty(item)) {
> > -               l = list_lru_from_kmem(lru, nid, item, NULL);
> > +               l = list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> >                 list_del_init(item);
> >                 l->nr_items--;
> >                 nlru->nr_items--;
> > @@ -158,7 +177,7 @@ bool list_lru_del(struct list_lru *lru, struct list_head *item)
> >         spin_unlock(&nlru->lock);
> >         return false;
> >  }
> > -EXPORT_SYMBOL_GPL(list_lru_del);
> > +EXPORT_SYMBOL_GPL(__list_lru_del);
> >
> >  void list_lru_isolate(struct list_lru_one *list, struct list_head *item)
> >  {
> > @@ -175,6 +194,20 @@ void list_lru_isolate_move(struct list_lru_one *list, struct list_head *item,
> >  }
> >  EXPORT_SYMBOL_GPL(list_lru_isolate_move);
> >
> > +void list_lru_putback(struct list_lru *lru, struct list_head *item, int nid,
> > +                     struct mem_cgroup *memcg)
> > +{
> > +       struct list_lru_one *list =
> > +               list_lru_from_memcg_idx(lru, nid, memcg_kmem_id(memcg));
> > +
> > +       if (list_empty(item)) {
> > +               list_add_tail(item, &list->list);
> > +               if (!list->nr_items++)
> > +                       set_shrinker_bit(memcg, nid, lru_shrinker_id(lru));
> > +       }
> > +}
> > +EXPORT_SYMBOL_GPL(list_lru_putback);
> > +
> >  unsigned long list_lru_count_one(struct list_lru *lru,
> >                                  int nid, struct mem_cgroup *memcg)
> >  {
> > --
> > 2.34.1

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 2/5] zswap: make shrinking memcg-aware
  2023-10-17 23:21 ` [PATCH v3 2/5] zswap: make shrinking memcg-aware Nhat Pham
@ 2023-10-18 23:20   ` Yosry Ahmed
  2023-10-18 23:46     ` Nhat Pham
  2023-10-19 12:29     ` Domenico Cerasuolo
  0 siblings, 2 replies; 31+ messages in thread
From: Yosry Ahmed @ 2023-10-18 23:20 UTC (permalink / raw)
  To: Nhat Pham
  Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
	mhocko, roman.gushchin, shakeelb, muchun.song, linux-mm,
	kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
	shuah

On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
>
> From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
>
> Currently, we only have a single global LRU for zswap. This makes it
> impossible to perform workload-specific shrinking - a memcg cannot
> determine which pages in the pool it owns, and often ends up writing
> pages from other memcgs. This issue has been previously observed in
> practice and mitigated by simply disabling memcg-initiated shrinking:
>
> https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
>
> This patch fully resolves the issue by replacing the global zswap LRU
> with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
>
> a) When a store attempt hits a memcg limit, it now triggers a
>    synchronous reclaim attempt that, if successful, allows the new
>    hotter page to be accepted by zswap.
> b) If the store attempt instead hits the global zswap limit, it will
>    trigger an asynchronous reclaim attempt, in which a memcg is
>    selected for reclaim in a round-robin-like fashion.

Could you explain the rationale behind the difference in behavior here
between the global limit and the memcg limit?

>
> Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> Co-developed-by: Nhat Pham <nphamcs@gmail.com>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---
>  include/linux/memcontrol.h |   5 ++
>  mm/swap.h                  |   3 +-
>  mm/swap_state.c            |  17 +++-
>  mm/zswap.c                 | 179 ++++++++++++++++++++++++++-----------
>  4 files changed, 147 insertions(+), 57 deletions(-)

This is a dense patch, I haven't absorbed all of it yet, but the first
round of comments below.

>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 031102ac9311..3de10fabea0f 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1179,6 +1179,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
>         return NULL;
>  }
>
> +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> +{
> +       return NULL;
> +}
> +
>  static inline bool folio_memcg_kmem(struct folio *folio)
>  {
>         return false;
> diff --git a/mm/swap.h b/mm/swap.h
> index 8a3c7a0ace4f..bbd6ce661a20 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -50,7 +50,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>  struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>                                      struct vm_area_struct *vma,
>                                      unsigned long addr,
> -                                    bool *new_page_allocated);
> +                                    bool *new_page_allocated,
> +                                    bool fail_if_exists);
>  struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
>                                     struct vm_fault *vmf);
>  struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index b3b14bd0dd64..0356df52b06a 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -411,7 +411,7 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
>
>  struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>                         struct vm_area_struct *vma, unsigned long addr,
> -                       bool *new_page_allocated)
> +                       bool *new_page_allocated, bool fail_if_exists)

nit: I don't feel like "fail" is the correct word here. Perhaps "skip"?

>  {
>         struct swap_info_struct *si;
>         struct folio *folio;
> @@ -468,6 +468,15 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>                 if (err != -EEXIST)
>                         goto fail_put_swap;
>
> +               /*
> +                * This check guards against a state that happens if a call
> +                * to __read_swap_cache_async triggers a reclaim, if the
> +                * reclaimer (zswap's writeback as of now) then decides to
> +                * reclaim that same entry, then the subsequent call to
> +                * __read_swap_cache_async would get stuck in this loop.

I think this comment needs to first state that it is protecting
against a recursive call in general, not necessarily in reclaim, as
__read_swap_cache_async() is not usually called in the context of
reclaim so this can be confusing. Then it can give the exact example
we have today. Perhaps something like:

Protect against a recursive call to __read_swap_cache_async() on the
same entry waiting forever here because SWAP_HAS_CACHE is set but the
folio is not in the swap cache yet. This can happen today if
mem_cgroup_swapin_charge_folio() below triggers reclaim through zswap,
which may call __read_swap_cache_async() in the writeback path.

> +                */
> +               if (fail_if_exists && err == -EEXIST)

We already made sure in the preceding condition that err is -EEXIST.
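
In other words, the check could then reduce to just (sketch):

    if (fail_if_exists)
            goto fail_put_swap;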

> +                       goto fail_put_swap;
>                 /*
>                  * We might race against __delete_from_swap_cache(), and
>                  * stumble across a swap_map entry whose SWAP_HAS_CACHE
> @@ -530,7 +539,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>  {
>         bool page_was_allocated;
>         struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
> -                       vma, addr, &page_was_allocated);
> +                       vma, addr, &page_was_allocated, false);
>
>         if (page_was_allocated)
>                 swap_readpage(retpage, false, plug);
> @@ -649,7 +658,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
>                 /* Ok, do the async read-ahead now */
>                 page = __read_swap_cache_async(
>                         swp_entry(swp_type(entry), offset),
> -                       gfp_mask, vma, addr, &page_allocated);
> +                       gfp_mask, vma, addr, &page_allocated, false);
>                 if (!page)
>                         continue;
>                 if (page_allocated) {
> @@ -815,7 +824,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
>                 pte_unmap(pte);
>                 pte = NULL;
>                 page = __read_swap_cache_async(entry, gfp_mask, vma,
> -                                              addr, &page_allocated);
> +                                              addr, &page_allocated, false);
>                 if (!page)
>                         continue;
>                 if (page_allocated) {
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 083c693602b8..d2989ad11814 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -34,6 +34,7 @@
>  #include <linux/writeback.h>
>  #include <linux/pagemap.h>
>  #include <linux/workqueue.h>
> +#include <linux/list_lru.h>
>
>  #include "swap.h"
>  #include "internal.h"
> @@ -171,8 +172,8 @@ struct zswap_pool {
>         struct work_struct shrink_work;
>         struct hlist_node node;
>         char tfm_name[CRYPTO_MAX_ALG_NAME];
> -       struct list_head lru;
> -       spinlock_t lru_lock;
> +       struct list_lru list_lru;
> +       struct mem_cgroup *next_shrink;
>  };
>
>  /*
> @@ -288,15 +289,25 @@ static void zswap_update_total_size(void)
>         zswap_pool_total_size = total;
>  }
>
> +static inline struct mem_cgroup *get_mem_cgroup_from_entry(struct zswap_entry *entry)
> +{
> +       return entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> +}
> +
> +static inline int entry_to_nid(struct zswap_entry *entry)
> +{
> +       return page_to_nid(virt_to_page(entry));
> +}
> +
>  /*********************************
>  * zswap entry functions
>  **********************************/
>  static struct kmem_cache *zswap_entry_cache;
>
> -static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> +static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
>  {
>         struct zswap_entry *entry;
> -       entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> +       entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
>         if (!entry)
>                 return NULL;
>         entry->refcount = 1;
> @@ -309,6 +320,27 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
>         kmem_cache_free(zswap_entry_cache, entry);
>  }
>
> +/*********************************
> +* lru functions
> +**********************************/
> +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> +{
> +       struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);

Could we avoid the need for get/put with an rcu_read_lock() instead?
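
Something along these lines, perhaps (just a sketch; it relies on the
objcg->memcg pointer staying stable under the RCU read lock, and on
__list_lru_add() not sleeping):

    static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
    {
            struct mem_cgroup *memcg;
            bool added;

            rcu_read_lock();
            memcg = entry->objcg ? obj_cgroup_memcg(entry->objcg) : NULL;
            added = __list_lru_add(list_lru, &entry->lru,
                                   entry_to_nid(entry), memcg);
            rcu_read_unlock();
            return added;
    }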

> +       bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
> +
> +       mem_cgroup_put(memcg);
> +       return added;
> +}
> +
> +static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> +{
> +       struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> +       bool removed = __list_lru_del(list_lru, &entry->lru, entry_to_nid(entry), memcg);
> +
> +       mem_cgroup_put(memcg);
> +       return removed;
> +}
> +
>  /*********************************
>  * rbtree functions
>  **********************************/
> @@ -393,9 +425,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
>         if (!entry->length)
>                 atomic_dec(&zswap_same_filled_pages);
>         else {
> -               spin_lock(&entry->pool->lru_lock);
> -               list_del(&entry->lru);
> -               spin_unlock(&entry->pool->lru_lock);
> +               zswap_lru_del(&entry->pool->list_lru, entry);
>                 zpool_free(zswap_find_zpool(entry), entry->handle);
>                 zswap_pool_put(entry->pool);
>         }
> @@ -629,21 +659,16 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
>                 zswap_entry_put(tree, entry);
>  }
>
> -static int zswap_reclaim_entry(struct zswap_pool *pool)
> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> +                                      spinlock_t *lock, void *arg)
>  {
> -       struct zswap_entry *entry;
> +       struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> +       struct mem_cgroup *memcg;
>         struct zswap_tree *tree;
>         pgoff_t swpoffset;
> -       int ret;
> +       enum lru_status ret = LRU_REMOVED_RETRY;
> +       int writeback_result;
>
> -       /* Get an entry off the LRU */
> -       spin_lock(&pool->lru_lock);
> -       if (list_empty(&pool->lru)) {
> -               spin_unlock(&pool->lru_lock);
> -               return -EINVAL;
> -       }
> -       entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> -       list_del_init(&entry->lru);
>         /*
>          * Once the lru lock is dropped, the entry might get freed. The
>          * swpoffset is copied to the stack, and entry isn't deref'd again
> @@ -651,28 +676,33 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
>          */
>         swpoffset = swp_offset(entry->swpentry);
>         tree = zswap_trees[swp_type(entry->swpentry)];
> -       spin_unlock(&pool->lru_lock);
> +       list_lru_isolate(l, item);
> +       spin_unlock(lock);

Perhaps a comment somewhere stating that we only return either
LRU_REMOVED_RETRY or LRU_RETRY, so it's fine to drop and reacquire the
lock.
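
Something like this, for instance (wording is only a suggestion):

    /*
     * The lock is dropped and reacquired below; that is fine because
     * this callback only returns LRU_REMOVED_RETRY or LRU_RETRY, both
     * of which allow the lock to have been dropped internally.
     */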

>
>         /* Check for invalidate() race */
>         spin_lock(&tree->lock);
>         if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> -               ret = -EAGAIN;
>                 goto unlock;
>         }

nit: braces no longer needed?

>         /* Hold a reference to prevent a free during writeback */
>         zswap_entry_get(entry);
>         spin_unlock(&tree->lock);
>
> -       ret = zswap_writeback_entry(entry, tree);
> +       writeback_result = zswap_writeback_entry(entry, tree);
>
>         spin_lock(&tree->lock);
> -       if (ret) {
> -               /* Writeback failed, put entry back on LRU */
> -               spin_lock(&pool->lru_lock);
> -               list_move(&entry->lru, &pool->lru);
> -               spin_unlock(&pool->lru_lock);
> +       if (writeback_result) {
> +               zswap_reject_reclaim_fail++;
> +               memcg = get_mem_cgroup_from_entry(entry);
> +               spin_lock(lock);
> +               /* we cannot use zswap_lru_add here, because it increments node's lru count */
> +               list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
> +               spin_unlock(lock);
> +               mem_cgroup_put(memcg);
> +               ret = LRU_RETRY;
>                 goto put_unlock;
>         }
> +       zswap_written_back_pages++;

Why is this moved here from zswap_writeback_entry()? Also why is
zswap_reject_reclaim_fail incremented here instead of inside
zswap_writeback_entry()?

>
>         /*
>          * Writeback started successfully, the page now belongs to the
> @@ -686,7 +716,36 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
>         zswap_entry_put(tree, entry);
>  unlock:
>         spin_unlock(&tree->lock);
> -       return ret ? -EAGAIN : 0;
> +       spin_lock(lock);
> +       return ret;
> +}
> +
> +static int shrink_memcg(struct mem_cgroup *memcg)
> +{
> +       struct zswap_pool *pool;
> +       int nid, shrunk = 0;
> +
> +       pool = zswap_pool_current_get();
> +       if (!pool)
> +               return -EINVAL;
> +
> +       /*
> +        * Skip zombies because their LRUs are reparented and we would be
> +        * reclaiming from the parent instead of the dead memcgroup.

nit: s/memcgroup/memcg.

> +        */
> +       if (memcg && !mem_cgroup_online(memcg))
> +               goto out;

If we move this above zswap_pool_current_get(), we can return directly
and remove the label. I noticed we will return -EAGAIN if memcg is
offline. IIUC -EAGAIN for the caller will move on to the next memcg,
but I am wondering if a different errno would be clearer here.
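
Roughly this shape, I guess (the errno for an offline memcg is the open
question; -ENOENT below is just a placeholder):

    /*
     * Skip zombies: their LRUs are reparented and we would be reclaiming
     * from the parent instead of the dead memcg.
     */
    if (memcg && !mem_cgroup_online(memcg))
            return -ENOENT;

    pool = zswap_pool_current_get();
    if (!pool)
            return -EINVAL;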

> +
> +       for_each_node_state(nid, N_NORMAL_MEMORY) {
> +               unsigned long nr_to_walk = 1;
> +
> +               if (list_lru_walk_one(&pool->list_lru, nid, memcg, &shrink_memcg_cb,
> +                                     NULL, &nr_to_walk))
> +                       shrunk++;

nit:
shrunk += list_lru_walk_one(..);

> +       }
> +out:
> +       zswap_pool_put(pool);
> +       return shrunk ? 0 : -EAGAIN;
>  }
>
>  static void shrink_worker(struct work_struct *w)
> @@ -695,10 +754,13 @@ static void shrink_worker(struct work_struct *w)
>                                                 shrink_work);
>         int ret, failures = 0;
>
> +       /* global reclaim will select cgroup in a round-robin fashion. */
>         do {
> -               ret = zswap_reclaim_entry(pool);
> +               pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);

Perhaps next_shrink_memcg is a better name here?

> +
> +               ret = shrink_memcg(pool->next_shrink);
> +
>                 if (ret) {
> -                       zswap_reject_reclaim_fail++;
>                         if (ret != -EAGAIN)
>                                 break;
>                         if (++failures == MAX_RECLAIM_RETRIES)
> @@ -764,8 +826,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
>          */
>         kref_init(&pool->kref);
>         INIT_LIST_HEAD(&pool->list);
> -       INIT_LIST_HEAD(&pool->lru);
> -       spin_lock_init(&pool->lru_lock);
> +       list_lru_init_memcg(&pool->list_lru, NULL);
>         INIT_WORK(&pool->shrink_work, shrink_worker);
>
>         zswap_pool_debug("created", pool);
> @@ -831,6 +892,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
>
>         cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
>         free_percpu(pool->acomp_ctx);
> +       list_lru_destroy(&pool->list_lru);
> +       if (pool->next_shrink)
> +               mem_cgroup_put(pool->next_shrink);
>         for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
>                 zpool_destroy_pool(pool->zpools[i]);
>         kfree(pool);
> @@ -1076,7 +1140,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
>
>         /* try to allocate swap cache page */
>         page = __read_swap_cache_async(swpentry, GFP_KERNEL, NULL, 0,
> -                                      &page_was_allocated);
> +                                      &page_was_allocated, true);
>         if (!page) {
>                 ret = -ENOMEM;
>                 goto fail;
> @@ -1142,7 +1206,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
>         /* start writeback */
>         __swap_writepage(page, &wbc);
>         put_page(page);
> -       zswap_written_back_pages++;
>
>         return ret;
>
> @@ -1199,8 +1262,10 @@ bool zswap_store(struct folio *folio)
>         struct scatterlist input, output;
>         struct crypto_acomp_ctx *acomp_ctx;
>         struct obj_cgroup *objcg = NULL;
> +       struct mem_cgroup *memcg = NULL;
>         struct zswap_pool *pool;
>         struct zpool *zpool;
> +       int lru_alloc_ret;
>         unsigned int dlen = PAGE_SIZE;
>         unsigned long handle, value;
>         char *buf;
> @@ -1230,15 +1295,15 @@ bool zswap_store(struct folio *folio)
>                 zswap_invalidate_entry(tree, dupentry);
>         }
>         spin_unlock(&tree->lock);
> -
> -       /*
> -        * XXX: zswap reclaim does not work with cgroups yet. Without a
> -        * cgroup-aware entry LRU, we will push out entries system-wide based on
> -        * local cgroup limits.
> -        */
>         objcg = get_obj_cgroup_from_folio(folio);
> -       if (objcg && !obj_cgroup_may_zswap(objcg))
> -               goto reject;
> +       if (objcg && !obj_cgroup_may_zswap(objcg)) {
> +               memcg = get_mem_cgroup_from_objcg(objcg);
> +               if (shrink_memcg(memcg)) {
> +                       mem_cgroup_put(memcg);
> +                       goto reject;
> +               }
> +               mem_cgroup_put(memcg);
> +       }
>
>         /* reclaim space if needed */
>         if (zswap_is_full()) {
> @@ -1254,10 +1319,15 @@ bool zswap_store(struct folio *folio)
>                         zswap_pool_reached_full = false;
>         }
>
> +       pool = zswap_pool_current_get();
> +       if (!pool)
> +               goto reject;
> +

Why do we need to move zswap_pool_current_get() up here?

>         /* allocate entry */
> -       entry = zswap_entry_cache_alloc(GFP_KERNEL);
> +       entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
>         if (!entry) {
>                 zswap_reject_kmemcache_fail++;
> +               zswap_pool_put(pool);
>                 goto reject;
>         }
>
> @@ -1269,6 +1339,7 @@ bool zswap_store(struct folio *folio)
>                         entry->length = 0;
>                         entry->value = value;
>                         atomic_inc(&zswap_same_filled_pages);
> +                       zswap_pool_put(pool);
>                         goto insert_entry;
>                 }
>                 kunmap_atomic(src);
> @@ -1278,9 +1349,15 @@ bool zswap_store(struct folio *folio)
>                 goto freepage;
>
>         /* if entry is successfully added, it keeps the reference */
> -       entry->pool = zswap_pool_current_get();
> -       if (!entry->pool)
> -               goto freepage;
> +       entry->pool = pool;
> +       if (objcg) {
> +               memcg = get_mem_cgroup_from_objcg(objcg);
> +               lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
> +               mem_cgroup_put(memcg);
> +
> +               if (lru_alloc_ret)
> +                       goto freepage;
> +       }
>
>         /* compress */
>         acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> @@ -1358,9 +1435,8 @@ bool zswap_store(struct folio *folio)
>                 zswap_invalidate_entry(tree, dupentry);
>         }
>         if (entry->length) {
> -               spin_lock(&entry->pool->lru_lock);
> -               list_add(&entry->lru, &entry->pool->lru);
> -               spin_unlock(&entry->pool->lru_lock);
> +               INIT_LIST_HEAD(&entry->lru);
> +               zswap_lru_add(&pool->list_lru, entry);
>         }
>         spin_unlock(&tree->lock);
>
> @@ -1373,8 +1449,8 @@ bool zswap_store(struct folio *folio)
>
>  put_dstmem:
>         mutex_unlock(acomp_ctx->mutex);
> -       zswap_pool_put(entry->pool);
>  freepage:
> +       zswap_pool_put(entry->pool);
>         zswap_entry_cache_free(entry);
>  reject:
>         if (objcg)
> @@ -1467,9 +1543,8 @@ bool zswap_load(struct folio *folio)
>                 zswap_invalidate_entry(tree, entry);
>                 folio_mark_dirty(folio);
>         } else if (entry->length) {
> -               spin_lock(&entry->pool->lru_lock);
> -               list_move(&entry->lru, &entry->pool->lru);
> -               spin_unlock(&entry->pool->lru_lock);
> +               zswap_lru_del(&entry->pool->list_lru, entry);
> +               zswap_lru_add(&entry->pool->list_lru, entry);
>         }
>         zswap_entry_put(tree, entry);
>         spin_unlock(&tree->lock);
> --
> 2.34.1

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat
  2023-10-17 23:21 ` [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat Nhat Pham
  2023-10-17 23:35   ` Nhat Pham
@ 2023-10-18 23:24   ` Yosry Ahmed
  2023-10-18 23:50     ` Nhat Pham
  1 sibling, 1 reply; 31+ messages in thread
From: Yosry Ahmed @ 2023-10-18 23:24 UTC (permalink / raw)
  To: Nhat Pham
  Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
	mhocko, roman.gushchin, shakeelb, muchun.song, linux-mm,
	kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
	shuah

On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
>
> From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
>
> Since zswap now writes back pages from memcg-specific LRUs, we now need a
> new stat to show the writeback count for each memcg.
>
> Suggested-by: Nhat Pham <nphamcs@gmail.com>
> Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---
>  include/linux/memcontrol.h |  2 ++
>  mm/memcontrol.c            | 15 +++++++++++++++
>  mm/zswap.c                 |  3 +++
>  3 files changed, 20 insertions(+)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 3de10fabea0f..7868b1e00bf5 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -38,6 +38,7 @@ enum memcg_stat_item {
>         MEMCG_KMEM,
>         MEMCG_ZSWAP_B,
>         MEMCG_ZSWAPPED,
> +       MEMCG_ZSWAP_WB,
>         MEMCG_NR_STAT,
>  };
>
> @@ -1884,6 +1885,7 @@ static inline void count_objcg_event(struct obj_cgroup *objcg,
>  bool obj_cgroup_may_zswap(struct obj_cgroup *objcg);
>  void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size);
>  void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size);
> +void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg);
>  #else
>  static inline bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
>  {
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 1bde67b29287..a9118871e5a6 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1505,6 +1505,7 @@ static const struct memory_stat memory_stats[] = {
>  #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
>         { "zswap",                      MEMCG_ZSWAP_B                   },
>         { "zswapped",                   MEMCG_ZSWAPPED                  },
> +       { "zswap_wb",                   MEMCG_ZSWAP_WB                  },

zswap_writeback would be more consistent with file_writeback below.

Taking a step back, this is not really a "state". We increment it by 1
every time and never decrement it. Sounds awfully similar to events :)

You can also use count_objcg_event() directly and avoid the need for
obj_cgroup_report_zswap_wb() below.
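
For instance (sketch only; ZSWPWB is a made-up event name that would
have to be added to enum vm_event_item):

    /* in shrink_memcg_cb(), replacing obj_cgroup_report_zswap_wb(): */
    if (entry->objcg)
            count_objcg_event(entry->objcg, ZSWPWB);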

>  #endif
>         { "file_mapped",                NR_FILE_MAPPED                  },
>         { "file_dirty",                 NR_FILE_DIRTY                   },
> @@ -1541,6 +1542,7 @@ static int memcg_page_state_unit(int item)
>         switch (item) {
>         case MEMCG_PERCPU_B:
>         case MEMCG_ZSWAP_B:
> +       case MEMCG_ZSWAP_WB:
>         case NR_SLAB_RECLAIMABLE_B:
>         case NR_SLAB_UNRECLAIMABLE_B:
>         case WORKINGSET_REFAULT_ANON:
> @@ -7861,6 +7863,19 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
>         rcu_read_unlock();
>  }
>
> +void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg)
> +{
> +       struct mem_cgroup *memcg;
> +
> +       if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> +               return;
> +
> +       rcu_read_lock();
> +       memcg = obj_cgroup_memcg(objcg);
> +       mod_memcg_state(memcg, MEMCG_ZSWAP_WB, 1);
> +       rcu_read_unlock();
> +}
> +
>  static u64 zswap_current_read(struct cgroup_subsys_state *css,
>                               struct cftype *cft)
>  {
> diff --git a/mm/zswap.c b/mm/zswap.c
> index d2989ad11814..15485427e3fa 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -704,6 +704,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
>         }
>         zswap_written_back_pages++;
>
> +       if (entry->objcg)
> +               obj_cgroup_report_zswap_wb(entry->objcg);
> +
>         /*
>          * Writeback started successfully, the page now belongs to the
>          * swapcache. Drop the entry from zswap - unless invalidate already
> --
> 2.34.1

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 5/5] zswap: shrinks zswap pool based on memory pressure
  2023-10-17 23:21 ` [PATCH v3 5/5] zswap: shrinks zswap pool based on memory pressure Nhat Pham
@ 2023-10-18 23:36   ` Yosry Ahmed
  2023-10-20 19:14     ` Nhat Pham
  0 siblings, 1 reply; 31+ messages in thread
From: Yosry Ahmed @ 2023-10-18 23:36 UTC (permalink / raw)
  To: Nhat Pham
  Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
	mhocko, roman.gushchin, shakeelb, muchun.song, linux-mm,
	kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
	shuah

On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
>
> Currently, we only shrink the zswap pool when the user-defined limit is
> hit. This means that if we set the limit too high, cold data that are
> unlikely to be used again will reside in the pool, wasting precious
> memory. It is hard to predict how much zswap space will be needed ahead
> of time, as this depends on the workload (specifically, on factors such
> as memory access patterns and compressibility of the memory pages).
>
> This patch implements a memcg- and NUMA-aware shrinker for zswap, that
> is initiated when there is memory pressure. The shrinker does not
> have any parameter that must be tuned by the user, and can be opted in
> or out on a per-memcg basis.
>
> Furthermore, to make it more robust for many workloads and prevent
> overshrinking (i.e evicting warm pages that might be refaulted into
> memory), we build in the following heuristics:
>
> * Estimate the number of warm pages residing in zswap, and attempt to
>   protect this region of the zswap LRU.
> * Scale the number of freeable objects by an estimate of the memory
>   saving factor. The better zswap compresses the data, the fewer pages
>   we will evict to swap (as we will otherwise incur IO for relatively
>   small memory saving).
> * During reclaim, if the shrinker encounters a page that is also being
>   brought into memory, the shrinker will cautiously terminate its
>   shrinking action, as this is a sign that it is touching the warmer
>   region of the zswap LRU.

I really hope someone with more familiarity with reclaim heuristics
makes sure this makes sense.
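
To put rough numbers on the memory-saving-factor scaling in the count
callback quoted below (the figures are purely illustrative):

	nr_stored   = 300   pages stored in zswap
	nr_backing  = 100   pages of pool memory backing them (3:1 compression)
	nr_freeable =  90   LRU entries left after subtracting the protected estimate

	mult_frac(nr_freeable, nr_backing, nr_stored) = 90 * 100 / 300 = 30

i.e. the better the compression, the fewer objects get reported to the
shrinker, since each writeback then frees only a fraction of a page of
pool memory while still costing a full page of swap IO.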

>
> As a proof of concept, we ran the following synthetic benchmark:
> build the linux kernel in a memory-limited cgroup, and allocate some
> cold data in tmpfs to see if the shrinker could write them out and
> improve the overall performance. Depending on the amount of cold data
> generated, we observe from 14% to 35% reduction in kernel CPU time used
> in the kernel builds.
>
> Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> ---
>  Documentation/admin-guide/mm/zswap.rst |   7 ++
>  include/linux/mmzone.h                 |  14 +++
>  mm/mmzone.c                            |   3 +
>  mm/swap_state.c                        |  21 +++-
>  mm/zswap.c                             | 161 +++++++++++++++++++++++--
>  5 files changed, 196 insertions(+), 10 deletions(-)
>
> diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
> index 45b98390e938..522ae22ccb84 100644
> --- a/Documentation/admin-guide/mm/zswap.rst
> +++ b/Documentation/admin-guide/mm/zswap.rst
> @@ -153,6 +153,13 @@ attribute, e. g.::
>
>  Setting this parameter to 100 will disable the hysteresis.
>
> +When there is a sizable amount of cold memory residing in the zswap pool, it
> +can be advantageous to proactively write these cold pages to swap and reclaim
> +the memory for other use cases. By default, the zswap shrinker is disabled.
> +The user can enable it as follows:
> +
> +  echo Y > /sys/module/zswap/parameters/shrinker_enabled
> +
>  A debugfs interface is provided for various statistic about pool size, number
>  of pages stored, same-value filled pages and various counters for the reasons
>  pages are rejected.
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 486587fcd27f..8947a1bfbe9c 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -637,6 +637,20 @@ struct lruvec {
>  #ifdef CONFIG_MEMCG
>         struct pglist_data *pgdat;
>  #endif
> +#ifdef CONFIG_ZSWAP
> +       /*
> +        * Number of pages in zswap that should be protected from the shrinker.
> +        * This number is an estimate of the following counts:
> +        *
> +        * a) Recent page faults.
> +        * b) Recent insertion to the zswap LRU. This includes new zswap stores,
> +        *    as well as recent zswap LRU rotations.
> +        *
> +        * These pages are likely to be warm, and might incur IO if they are written
> +        * to swap.
> +        */
> +       atomic_long_t nr_zswap_protected;
> +#endif

Instead of the ifdefs all over the code, we can define
nr_zswap_protected inside a struct and helpers that
increment/initialize nr_zswap_protected in zswap.h, and have a single
ifdef there. All the code would be oblivious to the existence of
nr_zswap_protected.

Something like:

#ifdef CONFIG_ZSWAP

struct zswap_lruvec_state {
         /* insert large comment */
        atomic_long_t nr_zswap_protected;
};

static inline void zswap_lruvec_init(..)
{
        atomic_long_set(&lruvec->nr_zswap_protected, 0);
}

static inline void zswap_lruvec_swapin(..)
{
       if (page) {
               struct lruvec *lruvec = folio_lruvec(page_folio(page));

               atomic_long_inc(&lruvec->nr_zswap_protected);
       }
}

#else

/* empty struct and functions */

#endif
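
With helpers along those lines, the readahead call sites quoted above
could drop the #ifdef blocks entirely, e.g. (sketch):

skip:
	/* The page was likely read above, so no need for plugging here */
	page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
	zswap_lruvec_swapin(page);	/* compiles to a no-op when !CONFIG_ZSWAP */
	return page;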

>  };
>
>  /* Isolate for asynchronous migration */
> diff --git a/mm/mmzone.c b/mm/mmzone.c
> index 68e1511be12d..4137f3ac42cd 100644
> --- a/mm/mmzone.c
> +++ b/mm/mmzone.c
> @@ -78,6 +78,9 @@ void lruvec_init(struct lruvec *lruvec)
>
>         memset(lruvec, 0, sizeof(struct lruvec));
>         spin_lock_init(&lruvec->lru_lock);
> +#ifdef CONFIG_ZSWAP
> +       atomic_long_set(&lruvec->nr_zswap_protected, 0);
> +#endif
>
>         for_each_lru(lru)
>                 INIT_LIST_HEAD(&lruvec->lists[lru]);
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 0356df52b06a..a60197b55a28 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -676,7 +676,15 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
>         lru_add_drain();        /* Push any new pages onto the LRU now */
>  skip:
>         /* The page was likely read above, so no need for plugging here */
> -       return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> +       page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> +#ifdef CONFIG_ZSWAP
> +       if (page) {
> +               struct lruvec *lruvec = folio_lruvec(page_folio(page));
> +
> +               atomic_long_inc(&lruvec->nr_zswap_protected);
> +       }
> +#endif
> +       return page;
>  }
>
>  int init_swap_address_space(unsigned int type, unsigned long nr_pages)
> @@ -843,8 +851,15 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
>         lru_add_drain();
>  skip:
>         /* The page was likely read above, so no need for plugging here */
> -       return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
> -                                    NULL);
> +       page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
> +#ifdef CONFIG_ZSWAP
> +       if (page) {
> +               struct lruvec *lruvec = folio_lruvec(page_folio(page));
> +
> +               atomic_long_inc(&lruvec->nr_zswap_protected);
> +       }
> +#endif
> +       return page;
>  }
>
>  /**
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 15485427e3fa..1d1fe75a5237 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -145,6 +145,10 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
>  /* Number of zpools in zswap_pool (empirically determined for scalability) */
>  #define ZSWAP_NR_ZPOOLS 32
>
> +/* Enable/disable memory pressure-based shrinker. */
> +static bool zswap_shrinker_enabled;
> +module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
> +
>  /*********************************
>  * data structures
>  **********************************/
> @@ -174,6 +178,8 @@ struct zswap_pool {
>         char tfm_name[CRYPTO_MAX_ALG_NAME];
>         struct list_lru list_lru;
>         struct mem_cgroup *next_shrink;
> +       struct shrinker *shrinker;
> +       atomic_t nr_stored;
>  };
>
>  /*
> @@ -272,17 +278,26 @@ static bool zswap_can_accept(void)
>                         DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
>  }
>
> +static u64 get_zswap_pool_size(struct zswap_pool *pool)
> +{
> +       u64 pool_size = 0;
> +       int i;
> +
> +       for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> +               pool_size += zpool_get_total_size(pool->zpools[i]);
> +
> +       return pool_size;
> +}
> +
>  static void zswap_update_total_size(void)
>  {
>         struct zswap_pool *pool;
>         u64 total = 0;
> -       int i;
>
>         rcu_read_lock();
>
>         list_for_each_entry_rcu(pool, &zswap_pools, list)
> -               for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> -                       total += zpool_get_total_size(pool->zpools[i]);
> +               total += get_zswap_pool_size(pool);
>
>         rcu_read_unlock();
>
> @@ -326,8 +341,24 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
>  static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
>  {
>         struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> -       bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
> -
> +       int nid = entry_to_nid(entry);
> +       struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
> +       bool added = __list_lru_add(list_lru, &entry->lru, nid, memcg);
> +       unsigned long lru_size, old, new;
> +
> +       if (added) {
> +               lru_size = list_lru_count_one(list_lru, entry_to_nid(entry), memcg);
> +               old = atomic_long_inc_return(&lruvec->nr_zswap_protected);
> +
> +               /*
> +                * Decay to avoid overflow and adapt to changing workloads.
> +                * This is based on LRU reclaim cost decaying heuristics.
> +                */
> +               do {
> +                       new = old > lru_size / 4 ? old / 2 : old;
> +               } while (
> +                       !atomic_long_try_cmpxchg(&lruvec->nr_zswap_protected, &old, new));
> +       }
>         mem_cgroup_put(memcg);
>         return added;
>  }
> @@ -427,6 +458,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
>         else {
>                 zswap_lru_del(&entry->pool->list_lru, entry);
>                 zpool_free(zswap_find_zpool(entry), entry->handle);
> +               atomic_dec(&entry->pool->nr_stored);
>                 zswap_pool_put(entry->pool);
>         }
>         zswap_entry_cache_free(entry);
> @@ -468,6 +500,93 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
>         return entry;
>  }
>
> +/*********************************
> +* shrinker functions
> +**********************************/
> +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> +                                      spinlock_t *lock, void *arg);
> +
> +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> +               struct shrink_control *sc)
> +{
> +       struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> +       unsigned long shrink_ret, nr_protected, lru_size;
> +       struct zswap_pool *pool = shrinker->private_data;
> +       bool encountered_page_in_swapcache = false;
> +
> +       nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
> +       lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> +
> +       /*
> +        * Abort if the shrinker is disabled or if we are shrinking into the
> +        * protected region.
> +        */
> +       if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
> +               sc->nr_scanned = 0;
> +               return SHRINK_STOP;
> +       }
> +
> +       shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> +               &encountered_page_in_swapcache);
> +
> +       if (encountered_page_in_swapcache)
> +               return SHRINK_STOP;
> +
> +       return shrink_ret ? shrink_ret : SHRINK_STOP;
> +}
> +
> +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> +               struct shrink_control *sc)
> +{
> +       struct zswap_pool *pool = shrinker->private_data;
> +       struct mem_cgroup *memcg = sc->memcg;
> +       struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> +       unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
> +
> +#ifdef CONFIG_MEMCG_KMEM
> +       cgroup_rstat_flush(memcg->css.cgroup);
> +       nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> +       nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> +#else
> +       /* use pool stats instead of memcg stats */
> +       nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> +       nr_stored = atomic_read(&pool->nr_stored);
> +#endif
> +
> +       if (!zswap_shrinker_enabled || !nr_stored)
> +               return 0;
> +
> +       nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
> +       nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> +       /*
> +        * Reduce the lru size by an estimate of the number of pages
> +        * that should be protected.
> +        */
> +       nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
> +
> +       /*
> +        * Scale the number of freeable pages by the memory saving factor.
> +        * This ensures that the better zswap compresses memory, the fewer
> +        * pages we will evict to swap (as it will otherwise incur IO for
> +        * relatively small memory saving).
> +        */
> +       return mult_frac(nr_freeable, nr_backing, nr_stored);
> +}
> +
> +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> +{
> +       pool->shrinker =
> +               shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> +       if (!pool->shrinker)
> +               return;
> +
> +       pool->shrinker->private_data = pool;
> +       pool->shrinker->scan_objects = zswap_shrinker_scan;
> +       pool->shrinker->count_objects = zswap_shrinker_count;
> +       pool->shrinker->batch = 0;
> +       pool->shrinker->seeks = DEFAULT_SEEKS;
> +}
> +
>  /*********************************
>  * per-cpu code
>  **********************************/
> @@ -663,8 +782,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
>                                        spinlock_t *lock, void *arg)
>  {
>         struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> +       bool *encountered_page_in_swapcache = (bool *)arg;
>         struct mem_cgroup *memcg;
>         struct zswap_tree *tree;
> +       struct lruvec *lruvec;
>         pgoff_t swpoffset;
>         enum lru_status ret = LRU_REMOVED_RETRY;
>         int writeback_result;
> @@ -698,8 +819,22 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
>                 /* we cannot use zswap_lru_add here, because it increments node's lru count */
>                 list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
>                 spin_unlock(lock);
> -               mem_cgroup_put(memcg);
>                 ret = LRU_RETRY;
> +
> +               /*
> +                * Encountering a page already in swap cache is a sign that we are shrinking
> +                * into the warmer region. We should terminate shrinking (if we're in the dynamic
> +                * shrinker context).
> +                */
> +               if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
> +                       ret = LRU_SKIP;
> +                       *encountered_page_in_swapcache = true;
> +               }
> +               lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry_to_nid(entry)));
> +               /* Increment the protection area to account for the LRU rotation. */
> +               atomic_long_inc(&lruvec->nr_zswap_protected);
> +
> +               mem_cgroup_put(memcg);
>                 goto put_unlock;
>         }
>         zswap_written_back_pages++;
> @@ -822,6 +957,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
>                                        &pool->node);
>         if (ret)
>                 goto error;
> +
> +       zswap_alloc_shrinker(pool);
> +       if (!pool->shrinker)
> +               goto error;
> +
>         pr_debug("using %s compressor\n", pool->tfm_name);
>
>         /* being the current pool takes 1 ref; this func expects the
> @@ -829,13 +969,18 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
>          */
>         kref_init(&pool->kref);
>         INIT_LIST_HEAD(&pool->list);
> -       list_lru_init_memcg(&pool->list_lru, NULL);
> +       if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
> +               goto lru_fail;
> +       shrinker_register(pool->shrinker);
>         INIT_WORK(&pool->shrink_work, shrink_worker);
>
>         zswap_pool_debug("created", pool);
>
>         return pool;
>
> +lru_fail:
> +       list_lru_destroy(&pool->list_lru);
> +       shrinker_free(pool->shrinker);
>  error:
>         if (pool->acomp_ctx)
>                 free_percpu(pool->acomp_ctx);
> @@ -893,6 +1038,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
>
>         zswap_pool_debug("destroying", pool);
>
> +       shrinker_free(pool->shrinker);
>         cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
>         free_percpu(pool->acomp_ctx);
>         list_lru_destroy(&pool->list_lru);
> @@ -1440,6 +1586,7 @@ bool zswap_store(struct folio *folio)
>         if (entry->length) {
>                 INIT_LIST_HEAD(&entry->lru);
>                 zswap_lru_add(&pool->list_lru, entry);
> +               atomic_inc(&pool->nr_stored);
>         }
>         spin_unlock(&tree->lock);
>
> --
> 2.34.1

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 2/5] zswap: make shrinking memcg-aware
  2023-10-18 23:20   ` Yosry Ahmed
@ 2023-10-18 23:46     ` Nhat Pham
  2023-10-18 23:48       ` Nhat Pham
  2023-10-19  1:11       ` Yosry Ahmed
  2023-10-19 12:29     ` Domenico Cerasuolo
  1 sibling, 2 replies; 31+ messages in thread
From: Nhat Pham @ 2023-10-18 23:46 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
	mhocko, roman.gushchin, shakeelb, muchun.song, linux-mm,
	kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
	shuah

On Wed, Oct 18, 2023 at 4:20 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> >
> > Currently, we only have a single global LRU for zswap. This makes it
> > impossible to perform workload-specific shrinking - an memcg cannot
> > determine which pages in the pool it owns, and often ends up writing
> > pages from other memcgs. This issue has been previously observed in
> > practice and mitigated by simply disabling memcg-initiated shrinking:
> >
> > https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
> >
> > This patch fully resolves the issue by replacing the global zswap LRU
> > with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
> >
> > a) When a store attempt hits an memcg limit, it now triggers a
> >    synchronous reclaim attempt that, if successful, allows the new
> >    hotter page to be accepted by zswap.
> > b) If the store attempt instead hits the global zswap limit, it will
> >    trigger an asynchronous reclaim attempt, in which an memcg is
> >    selected for reclaim in a round-robin-like fashion.
>
> Could you explain the rationale behind the difference in behavior here
> between the global limit and the memcg limit?

The global limit hit reclaim behavior was previously asynchronous too.
We just added the round-robin part because now the zswap LRU is
cgroup-aware :)

For the cgroup limit hit, however, we cannot make it asynchronous,
as it is a bit hairy to add a per-cgroup shrink_work. So, we just
perform the reclaim synchronously.

The question is whether it makes sense to make the global limit
reclaim synchronous too. That is a task of its own IMO.

(FWIW, this somewhat mirrors the direct reclaimer vs. kswapd
story to me, but don't quote me too hard on this).
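
Condensed from the hunks quoted further down, the two paths roughly
look like this (sketch; refcounting and error handling omitted):

	/* memcg limit hit: reclaim synchronously from zswap_store() */
	if (objcg && !obj_cgroup_may_zswap(objcg)) {
		memcg = get_mem_cgroup_from_objcg(objcg);
		if (shrink_memcg(memcg))	/* walk this memcg's LRU directly */
			goto reject;
	}

	/* global limit hit: the async worker rotates through memcgs round-robin */
	static void shrink_worker(struct work_struct *w)
	{
		struct zswap_pool *pool = container_of(w, struct zswap_pool, shrink_work);

		pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
		shrink_memcg(pool->next_shrink);	/* retry/backoff logic omitted */
	}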

>
> >
> > Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> > Co-developed-by: Nhat Pham <nphamcs@gmail.com>
> > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > ---
> >  include/linux/memcontrol.h |   5 ++
> >  mm/swap.h                  |   3 +-
> >  mm/swap_state.c            |  17 +++-
> >  mm/zswap.c                 | 179 ++++++++++++++++++++++++++-----------
> >  4 files changed, 147 insertions(+), 57 deletions(-)
>
> This is a dense patch, I haven't absorbed all of it yet, but the first
> round of comments below.

Regardless, thanks for the feedback, Yosry! Domenico definitely
knows more than me about this, but I'll respond with what I know,
and he can expand and/or fact-check me :)

>
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 031102ac9311..3de10fabea0f 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -1179,6 +1179,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> >         return NULL;
> >  }
> >
> > +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> > +{
> > +       return NULL;
> > +}
> > +
> >  static inline bool folio_memcg_kmem(struct folio *folio)
> >  {
> >         return false;
> > diff --git a/mm/swap.h b/mm/swap.h
> > index 8a3c7a0ace4f..bbd6ce661a20 100644
> > --- a/mm/swap.h
> > +++ b/mm/swap.h
> > @@ -50,7 +50,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >  struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >                                      struct vm_area_struct *vma,
> >                                      unsigned long addr,
> > -                                    bool *new_page_allocated);
> > +                                    bool *new_page_allocated,
> > +                                    bool fail_if_exists);
> >  struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> >                                     struct vm_fault *vmf);
> >  struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index b3b14bd0dd64..0356df52b06a 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -411,7 +411,7 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
> >
> >  struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >                         struct vm_area_struct *vma, unsigned long addr,
> > -                       bool *new_page_allocated)
> > +                       bool *new_page_allocated, bool fail_if_exists)
>
> nit: I don't feel like "fail" is the correct word here. Perhaps "skip"?
>
> >  {
> >         struct swap_info_struct *si;
> >         struct folio *folio;
> > @@ -468,6 +468,15 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >                 if (err != -EEXIST)
> >                         goto fail_put_swap;
> >
> > +               /*
> > +                * This check guards against a state that happens if a call
> > +                * to __read_swap_cache_async triggers a reclaim, if the
> > +                * reclaimer (zswap's writeback as of now) then decides to
> > +                * reclaim that same entry, then the subsequent call to
> > +                * __read_swap_cache_async would get stuck in this loop.
>
> I think this comment needs to first state that it is protecting
> against a recursive call in general, not necessarily in reclaim, as
> __read_swap_cache_async() is not usually called in the context of
> reclaim so this can be confusing. Then it can give the exact example
> we have today. Perhaps something like:
>
> Protect against a recursive call to __read_swap_cache_async() on the
> same entry waiting forever here because SWAP_HAS_CACHE is set but the
> folio is not the swap cache yet. This can happen today if
> mem_cgroup_swapin_charge_folio() below triggers reclaim through zswap,
> which may call __read_swap_cache_async() in the writeback path.
>
> > +                */
> > +               if (fail_if_exists && err == -EEXIST)
>
> We already made sure  in the preceding condition that err is -EEXIST.
>
> > +                       goto fail_put_swap;
> >                 /*
> >                  * We might race against __delete_from_swap_cache(), and
> >                  * stumble across a swap_map entry whose SWAP_HAS_CACHE
> > @@ -530,7 +539,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >  {
> >         bool page_was_allocated;
> >         struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
> > -                       vma, addr, &page_was_allocated);
> > +                       vma, addr, &page_was_allocated, false);
> >
> >         if (page_was_allocated)
> >                 swap_readpage(retpage, false, plug);
> > @@ -649,7 +658,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> >                 /* Ok, do the async read-ahead now */
> >                 page = __read_swap_cache_async(
> >                         swp_entry(swp_type(entry), offset),
> > -                       gfp_mask, vma, addr, &page_allocated);
> > +                       gfp_mask, vma, addr, &page_allocated, false);
> >                 if (!page)
> >                         continue;
> >                 if (page_allocated) {
> > @@ -815,7 +824,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> >                 pte_unmap(pte);
> >                 pte = NULL;
> >                 page = __read_swap_cache_async(entry, gfp_mask, vma,
> > -                                              addr, &page_allocated);
> > +                                              addr, &page_allocated, false);
> >                 if (!page)
> >                         continue;
> >                 if (page_allocated) {
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index 083c693602b8..d2989ad11814 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -34,6 +34,7 @@
> >  #include <linux/writeback.h>
> >  #include <linux/pagemap.h>
> >  #include <linux/workqueue.h>
> > +#include <linux/list_lru.h>
> >
> >  #include "swap.h"
> >  #include "internal.h"
> > @@ -171,8 +172,8 @@ struct zswap_pool {
> >         struct work_struct shrink_work;
> >         struct hlist_node node;
> >         char tfm_name[CRYPTO_MAX_ALG_NAME];
> > -       struct list_head lru;
> > -       spinlock_t lru_lock;
> > +       struct list_lru list_lru;
> > +       struct mem_cgroup *next_shrink;
> >  };
> >
> >  /*
> > @@ -288,15 +289,25 @@ static void zswap_update_total_size(void)
> >         zswap_pool_total_size = total;
> >  }
> >
> > +static inline struct mem_cgroup *get_mem_cgroup_from_entry(struct zswap_entry *entry)
> > +{
> > +       return entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> > +}
> > +
> > +static inline int entry_to_nid(struct zswap_entry *entry)
> > +{
> > +       return page_to_nid(virt_to_page(entry));
> > +}
> > +
> >  /*********************************
> >  * zswap entry functions
> >  **********************************/
> >  static struct kmem_cache *zswap_entry_cache;
> >
> > -static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> > +static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
> >  {
> >         struct zswap_entry *entry;
> > -       entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> > +       entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
> >         if (!entry)
> >                 return NULL;
> >         entry->refcount = 1;
> > @@ -309,6 +320,27 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> >         kmem_cache_free(zswap_entry_cache, entry);
> >  }
> >
> > +/*********************************
> > +* lru functions
> > +**********************************/
> > +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > +{
> > +       struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
>
> Could we avoid the need for get/put with an rcu_read_lock() instead?
>
> > +       bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
> > +
> > +       mem_cgroup_put(memcg);
> > +       return added;
> > +}
> > +
> > +static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> > +{
> > +       struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> > +       bool removed = __list_lru_del(list_lru, &entry->lru, entry_to_nid(entry), memcg);
> > +
> > +       mem_cgroup_put(memcg);
> > +       return removed;
> > +}
> > +
> >  /*********************************
> >  * rbtree functions
> >  **********************************/
> > @@ -393,9 +425,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> >         if (!entry->length)
> >                 atomic_dec(&zswap_same_filled_pages);
> >         else {
> > -               spin_lock(&entry->pool->lru_lock);
> > -               list_del(&entry->lru);
> > -               spin_unlock(&entry->pool->lru_lock);
> > +               zswap_lru_del(&entry->pool->list_lru, entry);
> >                 zpool_free(zswap_find_zpool(entry), entry->handle);
> >                 zswap_pool_put(entry->pool);
> >         }
> > @@ -629,21 +659,16 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
> >                 zswap_entry_put(tree, entry);
> >  }
> >
> > -static int zswap_reclaim_entry(struct zswap_pool *pool)
> > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > +                                      spinlock_t *lock, void *arg)
> >  {
> > -       struct zswap_entry *entry;
> > +       struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > +       struct mem_cgroup *memcg;
> >         struct zswap_tree *tree;
> >         pgoff_t swpoffset;
> > -       int ret;
> > +       enum lru_status ret = LRU_REMOVED_RETRY;
> > +       int writeback_result;
> >
> > -       /* Get an entry off the LRU */
> > -       spin_lock(&pool->lru_lock);
> > -       if (list_empty(&pool->lru)) {
> > -               spin_unlock(&pool->lru_lock);
> > -               return -EINVAL;
> > -       }
> > -       entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> > -       list_del_init(&entry->lru);
> >         /*
> >          * Once the lru lock is dropped, the entry might get freed. The
> >          * swpoffset is copied to the stack, and entry isn't deref'd again
> > @@ -651,28 +676,33 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> >          */
> >         swpoffset = swp_offset(entry->swpentry);
> >         tree = zswap_trees[swp_type(entry->swpentry)];
> > -       spin_unlock(&pool->lru_lock);
> > +       list_lru_isolate(l, item);
> > +       spin_unlock(lock);
>
> Perhaps a comment somewhere stating that we only return either
> LRU_REMOVED_RETRY or LRU_RETRY, so it's fine to drop and reacquire the
> lock.
>
> >
> >         /* Check for invalidate() race */
> >         spin_lock(&tree->lock);
> >         if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> > -               ret = -EAGAIN;
> >                 goto unlock;
> >         }
>
> nit: braces no longer needed?

Ah, for some reason checkpatch did not pick up on this.
Weird.

>
> >         /* Hold a reference to prevent a free during writeback */
> >         zswap_entry_get(entry);
> >         spin_unlock(&tree->lock);
> >
> > -       ret = zswap_writeback_entry(entry, tree);
> > +       writeback_result = zswap_writeback_entry(entry, tree);
> >
> >         spin_lock(&tree->lock);
> > -       if (ret) {
> > -               /* Writeback failed, put entry back on LRU */
> > -               spin_lock(&pool->lru_lock);
> > -               list_move(&entry->lru, &pool->lru);
> > -               spin_unlock(&pool->lru_lock);
> > +       if (writeback_result) {
> > +               zswap_reject_reclaim_fail++;
> > +               memcg = get_mem_cgroup_from_entry(entry);
> > +               spin_lock(lock);
> > +               /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > +               list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
> > +               spin_unlock(lock);
> > +               mem_cgroup_put(memcg);
> > +               ret = LRU_RETRY;
> >                 goto put_unlock;
> >         }
> > +       zswap_written_back_pages++;
>
> Why is this moved here from zswap_writeback_entry()? Also why is
> zswap_reject_reclaim_fail incremented here instead of inside
> zswap_writeback_entry()?

Domenico should know this better than me, but my understanding
is that moving it here protects zswap_written_back_pages against
concurrent modification via the tree lock.

Was writeback single-threaded in the past? This counter is non-atomic,
and doesn't seem to be protected by any locks...

There definitely can be concurrent stores now though - with
a synchronous reclaim from cgroup-limit hit and another
from the old shrink worker.

(and with the new zswap shrinker, concurrent reclaim is
the expectation!)

zswap_reject_reclaim_fail was previously incremented in
shrink_worker I think. We need it to be incremented
for the shrinker as well, so might as well move it here.
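
One hypothetical alternative (not something this series does) would be
to make the counter atomic, so it stops depending on which lock happens
to be held around the increment, e.g.:

	/* sketch only - would also require adapting the debugfs hookup */
	static atomic_long_t zswap_written_back_pages = ATOMIC_LONG_INIT(0);

	/* in shrink_memcg_cb(), after a successful zswap_writeback_entry() */
	atomic_long_inc(&zswap_written_back_pages);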

>
> >
> >         /*
> >          * Writeback started successfully, the page now belongs to the
> > @@ -686,7 +716,36 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> >         zswap_entry_put(tree, entry);
> >  unlock:
> >         spin_unlock(&tree->lock);
> > -       return ret ? -EAGAIN : 0;
> > +       spin_lock(lock);
> > +       return ret;
> > +}
> > +
> > +static int shrink_memcg(struct mem_cgroup *memcg)
> > +{
> > +       struct zswap_pool *pool;
> > +       int nid, shrunk = 0;
> > +
> > +       pool = zswap_pool_current_get();
> > +       if (!pool)
> > +               return -EINVAL;
> > +
> > +       /*
> > +        * Skip zombies because their LRUs are reparented and we would be
> > +        * reclaiming from the parent instead of the dead memcgroup.
>
> nit: s/memcgroup/memcg.
>
> > +        */
> > +       if (memcg && !mem_cgroup_online(memcg))
> > +               goto out;
>
> If we move this above zswap_pool_current_get(), we can return directly
> and remove the label. I noticed we will return -EAGAIN if memcg is
> offline. IIUC -EAGAIN for the caller will move on to the next memcg,
> but I am wondering if a different errno would be clearer here.
>
> > +
> > +       for_each_node_state(nid, N_NORMAL_MEMORY) {
> > +               unsigned long nr_to_walk = 1;
> > +
> > +               if (list_lru_walk_one(&pool->list_lru, nid, memcg, &shrink_memcg_cb,
> > +                                     NULL, &nr_to_walk))
> > +                       shrunk++;
>
> nit:
> shrunk += list_lru_walk_one(..);

yeah might be a tad cleaner.

>
> > +       }
> > +out:
> > +       zswap_pool_put(pool);
> > +       return shrunk ? 0 : -EAGAIN;
> >  }
> >
> >  static void shrink_worker(struct work_struct *w)
> > @@ -695,10 +754,13 @@ static void shrink_worker(struct work_struct *w)
> >                                                 shrink_work);
> >         int ret, failures = 0;
> >
> > +       /* global reclaim will select cgroup in a round-robin fashion. */
> >         do {
> > -               ret = zswap_reclaim_entry(pool);
> > +               pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
>
> Perhaps next_shrink_memcg is a better name here?
>
> > +
> > +               ret = shrink_memcg(pool->next_shrink);
> > +
> >                 if (ret) {
> > -                       zswap_reject_reclaim_fail++;
> >                         if (ret != -EAGAIN)
> >                                 break;
> >                         if (++failures == MAX_RECLAIM_RETRIES)
> > @@ -764,8 +826,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> >          */
> >         kref_init(&pool->kref);
> >         INIT_LIST_HEAD(&pool->list);
> > -       INIT_LIST_HEAD(&pool->lru);
> > -       spin_lock_init(&pool->lru_lock);
> > +       list_lru_init_memcg(&pool->list_lru, NULL);
> >         INIT_WORK(&pool->shrink_work, shrink_worker);
> >
> >         zswap_pool_debug("created", pool);
> > @@ -831,6 +892,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> >
> >         cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> >         free_percpu(pool->acomp_ctx);
> > +       list_lru_destroy(&pool->list_lru);
> > +       if (pool->next_shrink)
> > +               mem_cgroup_put(pool->next_shrink);
> >         for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> >                 zpool_destroy_pool(pool->zpools[i]);
> >         kfree(pool);
> > @@ -1076,7 +1140,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> >
> >         /* try to allocate swap cache page */
> >         page = __read_swap_cache_async(swpentry, GFP_KERNEL, NULL, 0,
> > -                                      &page_was_allocated);
> > +                                      &page_was_allocated, true);
> >         if (!page) {
> >                 ret = -ENOMEM;
> >                 goto fail;
> > @@ -1142,7 +1206,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> >         /* start writeback */
> >         __swap_writepage(page, &wbc);
> >         put_page(page);
> > -       zswap_written_back_pages++;
> >
> >         return ret;
> >
> > @@ -1199,8 +1262,10 @@ bool zswap_store(struct folio *folio)
> >         struct scatterlist input, output;
> >         struct crypto_acomp_ctx *acomp_ctx;
> >         struct obj_cgroup *objcg = NULL;
> > +       struct mem_cgroup *memcg = NULL;
> >         struct zswap_pool *pool;
> >         struct zpool *zpool;
> > +       int lru_alloc_ret;
> >         unsigned int dlen = PAGE_SIZE;
> >         unsigned long handle, value;
> >         char *buf;
> > @@ -1230,15 +1295,15 @@ bool zswap_store(struct folio *folio)
> >                 zswap_invalidate_entry(tree, dupentry);
> >         }
> >         spin_unlock(&tree->lock);
> > -
> > -       /*
> > -        * XXX: zswap reclaim does not work with cgroups yet. Without a
> > -        * cgroup-aware entry LRU, we will push out entries system-wide based on
> > -        * local cgroup limits.
> > -        */
> >         objcg = get_obj_cgroup_from_folio(folio);
> > -       if (objcg && !obj_cgroup_may_zswap(objcg))
> > -               goto reject;
> > +       if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > +               memcg = get_mem_cgroup_from_objcg(objcg);
> > +               if (shrink_memcg(memcg)) {
> > +                       mem_cgroup_put(memcg);
> > +                       goto reject;
> > +               }
> > +               mem_cgroup_put(memcg);
> > +       }
> >
> >         /* reclaim space if needed */
> >         if (zswap_is_full()) {
> > @@ -1254,10 +1319,15 @@ bool zswap_store(struct folio *folio)
> >                         zswap_pool_reached_full = false;
> >         }
> >
> > +       pool = zswap_pool_current_get();
> > +       if (!pool)
> > +               goto reject;
> > +
>
> Why do we need to move zswap_pool_current_get() up here?
>
> >         /* allocate entry */
> > -       entry = zswap_entry_cache_alloc(GFP_KERNEL);
> > +       entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
> >         if (!entry) {
> >                 zswap_reject_kmemcache_fail++;
> > +               zswap_pool_put(pool);
> >                 goto reject;
> >         }
> >
> > @@ -1269,6 +1339,7 @@ bool zswap_store(struct folio *folio)
> >                         entry->length = 0;
> >                         entry->value = value;
> >                         atomic_inc(&zswap_same_filled_pages);
> > +                       zswap_pool_put(pool);
> >                         goto insert_entry;
> >                 }
> >                 kunmap_atomic(src);
> > @@ -1278,9 +1349,15 @@ bool zswap_store(struct folio *folio)
> >                 goto freepage;
> >
> >         /* if entry is successfully added, it keeps the reference */
> > -       entry->pool = zswap_pool_current_get();
> > -       if (!entry->pool)
> > -               goto freepage;
> > +       entry->pool = pool;
> > +       if (objcg) {
> > +               memcg = get_mem_cgroup_from_objcg(objcg);
> > +               lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
> > +               mem_cgroup_put(memcg);
> > +
> > +               if (lru_alloc_ret)
> > +                       goto freepage;
> > +       }
> >
> >         /* compress */
> >         acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> > @@ -1358,9 +1435,8 @@ bool zswap_store(struct folio *folio)
> >                 zswap_invalidate_entry(tree, dupentry);
> >         }
> >         if (entry->length) {
> > -               spin_lock(&entry->pool->lru_lock);
> > -               list_add(&entry->lru, &entry->pool->lru);
> > -               spin_unlock(&entry->pool->lru_lock);
> > +               INIT_LIST_HEAD(&entry->lru);
> > +               zswap_lru_add(&pool->list_lru, entry);
> >         }
> >         spin_unlock(&tree->lock);
> >
> > @@ -1373,8 +1449,8 @@ bool zswap_store(struct folio *folio)
> >
> >  put_dstmem:
> >         mutex_unlock(acomp_ctx->mutex);
> > -       zswap_pool_put(entry->pool);
> >  freepage:
> > +       zswap_pool_put(entry->pool);
> >         zswap_entry_cache_free(entry);
> >  reject:
> >         if (objcg)
> > @@ -1467,9 +1543,8 @@ bool zswap_load(struct folio *folio)
> >                 zswap_invalidate_entry(tree, entry);
> >                 folio_mark_dirty(folio);
> >         } else if (entry->length) {
> > -               spin_lock(&entry->pool->lru_lock);
> > -               list_move(&entry->lru, &entry->pool->lru);
> > -               spin_unlock(&entry->pool->lru_lock);
> > +               zswap_lru_del(&entry->pool->list_lru, entry);
> > +               zswap_lru_add(&entry->pool->list_lru, entry);
> >         }
> >         zswap_entry_put(tree, entry);
> >         spin_unlock(&tree->lock);
> > --
> > 2.34.1

I don't have (strong) opinions or (educated) guesses
on the rest.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 2/5] zswap: make shrinking memcg-aware
  2023-10-18 23:46     ` Nhat Pham
@ 2023-10-18 23:48       ` Nhat Pham
  2023-10-19  1:11       ` Yosry Ahmed
  1 sibling, 0 replies; 31+ messages in thread
From: Nhat Pham @ 2023-10-18 23:48 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
	mhocko, roman.gushchin, shakeelb, muchun.song, linux-mm,
	kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
	shuah

On Wed, Oct 18, 2023 at 4:46 PM Nhat Pham <nphamcs@gmail.com> wrote:
>
> On Wed, Oct 18, 2023 at 4:20 PM Yosry Ahmed <yosryahmed@google.com> wrote:
> >
> > On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
> > >
> > > From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> > >
> > > Currently, we only have a single global LRU for zswap. This makes it
> > > impossible to perform workload-specific shrinking - an memcg cannot
> > > determine which pages in the pool it owns, and often ends up writing
> > > pages from other memcgs. This issue has been previously observed in
> > > practice and mitigated by simply disabling memcg-initiated shrinking:
> > >
> > > https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
> > >
> > > This patch fully resolves the issue by replacing the global zswap LRU
> > > with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
> > >
> > > a) When a store attempt hits an memcg limit, it now triggers a
> > >    synchronous reclaim attempt that, if successful, allows the new
> > >    hotter page to be accepted by zswap.
> > > b) If the store attempt instead hits the global zswap limit, it will
> > >    trigger an asynchronous reclaim attempt, in which an memcg is
> > >    selected for reclaim in a round-robin-like fashion.
> >
> > Could you explain the rationale behind the difference in behavior here
> > between the global limit and the memcg limit?
>
> The global limit hit reclaim behavior was previously asynchronous too.
> We just added the round-robin part because now the zswap LRU is
> cgroup-aware :)
>
> For the cgroup limit hit, however, we cannot make it asynchronous,
> as it is a bit hairy to add a per-cgroup shrink_work. So, we just
> perform the reclaim synchronously.
>
> The question is whether it makes sense to make the global limit
> reclaim synchronous too. That is a task of its own IMO.
>
> (FWIW, this somewhat mirrors the direct reclaimer vs. kswapd
> story to me, but don't quote me too hard on this).
>
> >
> > >
> > > Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> > > Co-developed-by: Nhat Pham <nphamcs@gmail.com>
> > > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > > ---
> > >  include/linux/memcontrol.h |   5 ++
> > >  mm/swap.h                  |   3 +-
> > >  mm/swap_state.c            |  17 +++-
> > >  mm/zswap.c                 | 179 ++++++++++++++++++++++++++-----------
> > >  4 files changed, 147 insertions(+), 57 deletions(-)
> >
> > This is a dense patch, I haven't absorbed all of it yet, but the first
> > round of comments below.
>
> Regardless, thanks for the feedback, Yosry! Domenico definitely
> knows more than me about this, but I'll respond with what I know,
> and he can expand and/or fact-check me :)
>
> >
> > >
> > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > index 031102ac9311..3de10fabea0f 100644
> > > --- a/include/linux/memcontrol.h
> > > +++ b/include/linux/memcontrol.h
> > > @@ -1179,6 +1179,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> > >         return NULL;
> > >  }
> > >
> > > +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> > > +{
> > > +       return NULL;
> > > +}
> > > +
> > >  static inline bool folio_memcg_kmem(struct folio *folio)
> > >  {
> > >         return false;
> > > diff --git a/mm/swap.h b/mm/swap.h
> > > index 8a3c7a0ace4f..bbd6ce661a20 100644
> > > --- a/mm/swap.h
> > > +++ b/mm/swap.h
> > > @@ -50,7 +50,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > >  struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > >                                      struct vm_area_struct *vma,
> > >                                      unsigned long addr,
> > > -                                    bool *new_page_allocated);
> > > +                                    bool *new_page_allocated,
> > > +                                    bool fail_if_exists);
> > >  struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> > >                                     struct vm_fault *vmf);
> > >  struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> > > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > > index b3b14bd0dd64..0356df52b06a 100644
> > > --- a/mm/swap_state.c
> > > +++ b/mm/swap_state.c
> > > @@ -411,7 +411,7 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
> > >
> > >  struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > >                         struct vm_area_struct *vma, unsigned long addr,
> > > -                       bool *new_page_allocated)
> > > +                       bool *new_page_allocated, bool fail_if_exists)
> >
> > nit: I don't feel like "fail" is the correct word here. Perhaps "skip"?
> >
> > >  {
> > >         struct swap_info_struct *si;
> > >         struct folio *folio;
> > > @@ -468,6 +468,15 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > >                 if (err != -EEXIST)
> > >                         goto fail_put_swap;
> > >
> > > +               /*
> > > +                * This check guards against a state that happens if a call
> > > +                * to __read_swap_cache_async triggers a reclaim, if the
> > > +                * reclaimer (zswap's writeback as of now) then decides to
> > > +                * reclaim that same entry, then the subsequent call to
> > > +                * __read_swap_cache_async would get stuck in this loop.
> >
> > I think this comment needs to first state that it is protecting
> > against a recursive call in general, not necessarily in reclaim, as
> > __read_swap_cache_async() is not usually called in the context of
> > reclaim so this can be confusing. Then it can give the exact example
> > we have today. Perhaps something like:
> >
> > Protect against a recursive call to __read_swap_cache_async() on the
> > same entry waiting forever here because SWAP_HAS_CACHE is set but the
> > folio is not the swap cache yet. This can happen today if
> > mem_cgroup_swapin_charge_folio() below triggers reclaim through zswap,
> > which may call __read_swap_cache_async() in the writeback path.
> >
> > > +                */
> > > +               if (fail_if_exists && err == -EEXIST)
> >
> > We already made sure  in the preceding condition that err is -EEXIST.
> >
> > > +                       goto fail_put_swap;
> > >                 /*
> > >                  * We might race against __delete_from_swap_cache(), and
> > >                  * stumble across a swap_map entry whose SWAP_HAS_CACHE
> > > @@ -530,7 +539,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> > >  {
> > >         bool page_was_allocated;
> > >         struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
> > > -                       vma, addr, &page_was_allocated);
> > > +                       vma, addr, &page_was_allocated, false);
> > >
> > >         if (page_was_allocated)
> > >                 swap_readpage(retpage, false, plug);
> > > @@ -649,7 +658,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> > >                 /* Ok, do the async read-ahead now */
> > >                 page = __read_swap_cache_async(
> > >                         swp_entry(swp_type(entry), offset),
> > > -                       gfp_mask, vma, addr, &page_allocated);
> > > +                       gfp_mask, vma, addr, &page_allocated, false);
> > >                 if (!page)
> > >                         continue;
> > >                 if (page_allocated) {
> > > @@ -815,7 +824,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> > >                 pte_unmap(pte);
> > >                 pte = NULL;
> > >                 page = __read_swap_cache_async(entry, gfp_mask, vma,
> > > -                                              addr, &page_allocated);
> > > +                                              addr, &page_allocated, false);
> > >                 if (!page)
> > >                         continue;
> > >                 if (page_allocated) {
> > > diff --git a/mm/zswap.c b/mm/zswap.c
> > > index 083c693602b8..d2989ad11814 100644
> > > --- a/mm/zswap.c
> > > +++ b/mm/zswap.c
> > > @@ -34,6 +34,7 @@
> > >  #include <linux/writeback.h>
> > >  #include <linux/pagemap.h>
> > >  #include <linux/workqueue.h>
> > > +#include <linux/list_lru.h>
> > >
> > >  #include "swap.h"
> > >  #include "internal.h"
> > > @@ -171,8 +172,8 @@ struct zswap_pool {
> > >         struct work_struct shrink_work;
> > >         struct hlist_node node;
> > >         char tfm_name[CRYPTO_MAX_ALG_NAME];
> > > -       struct list_head lru;
> > > -       spinlock_t lru_lock;
> > > +       struct list_lru list_lru;
> > > +       struct mem_cgroup *next_shrink;
> > >  };
> > >
> > >  /*
> > > @@ -288,15 +289,25 @@ static void zswap_update_total_size(void)
> > >         zswap_pool_total_size = total;
> > >  }
> > >
> > > +static inline struct mem_cgroup *get_mem_cgroup_from_entry(struct zswap_entry *entry)
> > > +{
> > > +       return entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> > > +}
> > > +
> > > +static inline int entry_to_nid(struct zswap_entry *entry)
> > > +{
> > > +       return page_to_nid(virt_to_page(entry));
> > > +}
> > > +
> > >  /*********************************
> > >  * zswap entry functions
> > >  **********************************/
> > >  static struct kmem_cache *zswap_entry_cache;
> > >
> > > -static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> > > +static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
> > >  {
> > >         struct zswap_entry *entry;
> > > -       entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> > > +       entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
> > >         if (!entry)
> > >                 return NULL;
> > >         entry->refcount = 1;
> > > @@ -309,6 +320,27 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> > >         kmem_cache_free(zswap_entry_cache, entry);
> > >  }
> > >
> > > +/*********************************
> > > +* lru functions
> > > +**********************************/
> > > +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > > +{
> > > +       struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> >
> > Could we avoid the need for get/put with an rcu_read_lock() instead?
> >
> > > +       bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
> > > +
> > > +       mem_cgroup_put(memcg);
> > > +       return added;
> > > +}
> > > +
> > > +static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> > > +{
> > > +       struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> > > +       bool removed = __list_lru_del(list_lru, &entry->lru, entry_to_nid(entry), memcg);
> > > +
> > > +       mem_cgroup_put(memcg);
> > > +       return removed;
> > > +}
> > > +
> > >  /*********************************
> > >  * rbtree functions
> > >  **********************************/
> > > @@ -393,9 +425,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> > >         if (!entry->length)
> > >                 atomic_dec(&zswap_same_filled_pages);
> > >         else {
> > > -               spin_lock(&entry->pool->lru_lock);
> > > -               list_del(&entry->lru);
> > > -               spin_unlock(&entry->pool->lru_lock);
> > > +               zswap_lru_del(&entry->pool->list_lru, entry);
> > >                 zpool_free(zswap_find_zpool(entry), entry->handle);
> > >                 zswap_pool_put(entry->pool);
> > >         }
> > > @@ -629,21 +659,16 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
> > >                 zswap_entry_put(tree, entry);
> > >  }
> > >
> > > -static int zswap_reclaim_entry(struct zswap_pool *pool)
> > > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > > +                                      spinlock_t *lock, void *arg)
> > >  {
> > > -       struct zswap_entry *entry;
> > > +       struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > > +       struct mem_cgroup *memcg;
> > >         struct zswap_tree *tree;
> > >         pgoff_t swpoffset;
> > > -       int ret;
> > > +       enum lru_status ret = LRU_REMOVED_RETRY;
> > > +       int writeback_result;
> > >
> > > -       /* Get an entry off the LRU */
> > > -       spin_lock(&pool->lru_lock);
> > > -       if (list_empty(&pool->lru)) {
> > > -               spin_unlock(&pool->lru_lock);
> > > -               return -EINVAL;
> > > -       }
> > > -       entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> > > -       list_del_init(&entry->lru);
> > >         /*
> > >          * Once the lru lock is dropped, the entry might get freed. The
> > >          * swpoffset is copied to the stack, and entry isn't deref'd again
> > > @@ -651,28 +676,33 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > >          */
> > >         swpoffset = swp_offset(entry->swpentry);
> > >         tree = zswap_trees[swp_type(entry->swpentry)];
> > > -       spin_unlock(&pool->lru_lock);
> > > +       list_lru_isolate(l, item);
> > > +       spin_unlock(lock);
> >
> > Perhaps a comment somewhere stating that we only return either
> > LRU_REMOVED_RETRY or LRU_RETRY, so it's fine to drop and reacquire the
> > lock.
> >
> > >
> > >         /* Check for invalidate() race */
> > >         spin_lock(&tree->lock);
> > >         if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> > > -               ret = -EAGAIN;
> > >                 goto unlock;
> > >         }
> >
> > nit: braces no longer needed?
>
> Ah, for some reason checkpatch did not pick up on this.
> Weird.
>
> >
> > >         /* Hold a reference to prevent a free during writeback */
> > >         zswap_entry_get(entry);
> > >         spin_unlock(&tree->lock);
> > >
> > > -       ret = zswap_writeback_entry(entry, tree);
> > > +       writeback_result = zswap_writeback_entry(entry, tree);
> > >
> > >         spin_lock(&tree->lock);
> > > -       if (ret) {
> > > -               /* Writeback failed, put entry back on LRU */
> > > -               spin_lock(&pool->lru_lock);
> > > -               list_move(&entry->lru, &pool->lru);
> > > -               spin_unlock(&pool->lru_lock);
> > > +       if (writeback_result) {
> > > +               zswap_reject_reclaim_fail++;
> > > +               memcg = get_mem_cgroup_from_entry(entry);
> > > +               spin_lock(lock);
> > > +               /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > > +               list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
> > > +               spin_unlock(lock);
> > > +               mem_cgroup_put(memcg);
> > > +               ret = LRU_RETRY;
> > >                 goto put_unlock;
> > >         }
> > > +       zswap_written_back_pages++;
> >
> > Why is this moved here from zswap_writeback_entry()? Also why is
> > zswap_reject_reclaim_fail incremented here instead of inside
> > zswap_writeback_entry()?
>
> Domenico should know this better than me, but my understanding
> is that moving it here protects concurrent modifications of
> zswap_written_back_pages with the tree lock.
>
> Is writeback single-threaded in the past? This counter is non-atomic,
> and doesn't seem to be protected by any locks...
>
> There definitely can be concurrent stores now though - with
/s/stores/writebacks
> a synchronous reclaim from cgroup-limit hit and another
> from the old shrink worker.
>
> (and with the new zswap shrinker, concurrent reclaim is
> the expectation!)
>
> zswap_reject_reclaim_fail was previously incremented in
> shrink_worker I think. We need it to be incremented
> for the shrinker as well, so might as well move it here.
>
> >
> > >
> > >         /*
> > >          * Writeback started successfully, the page now belongs to the
> > > @@ -686,7 +716,36 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > >         zswap_entry_put(tree, entry);
> > >  unlock:
> > >         spin_unlock(&tree->lock);
> > > -       return ret ? -EAGAIN : 0;
> > > +       spin_lock(lock);
> > > +       return ret;
> > > +}
> > > +
> > > +static int shrink_memcg(struct mem_cgroup *memcg)
> > > +{
> > > +       struct zswap_pool *pool;
> > > +       int nid, shrunk = 0;
> > > +
> > > +       pool = zswap_pool_current_get();
> > > +       if (!pool)
> > > +               return -EINVAL;
> > > +
> > > +       /*
> > > +        * Skip zombies because their LRUs are reparented and we would be
> > > +        * reclaiming from the parent instead of the dead memcgroup.
> >
> > nit: s/memcgroup/memcg.
> >
> > > +        */
> > > +       if (memcg && !mem_cgroup_online(memcg))
> > > +               goto out;
> >
> > If we move this above zswap_pool_current_get(), we can return directly
> > and remove the label. I noticed we will return -EAGAIN if memcg is
> > offline. IIUC -EAGAIN for the caller will move on to the next memcg,
> > but I am wondering if a different errno would be clearer here.
> >
> > > +
> > > +       for_each_node_state(nid, N_NORMAL_MEMORY) {
> > > +               unsigned long nr_to_walk = 1;
> > > +
> > > +               if (list_lru_walk_one(&pool->list_lru, nid, memcg, &shrink_memcg_cb,
> > > +                                     NULL, &nr_to_walk))
> > > +                       shrunk++;
> >
> > nit:
> > shrunk += list_lru_walk_one(..);
>
> yeah might be a tad cleaner.
>
> >
> > > +       }
> > > +out:
> > > +       zswap_pool_put(pool);
> > > +       return shrunk ? 0 : -EAGAIN;
> > >  }
> > >
> > >  static void shrink_worker(struct work_struct *w)
> > > @@ -695,10 +754,13 @@ static void shrink_worker(struct work_struct *w)
> > >                                                 shrink_work);
> > >         int ret, failures = 0;
> > >
> > > +       /* global reclaim will select cgroup in a round-robin fashion. */
> > >         do {
> > > -               ret = zswap_reclaim_entry(pool);
> > > +               pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> >
> > Perhaps next_shrink_memcg is a better name here?
> >
> > > +
> > > +               ret = shrink_memcg(pool->next_shrink);
> > > +
> > >                 if (ret) {
> > > -                       zswap_reject_reclaim_fail++;
> > >                         if (ret != -EAGAIN)
> > >                                 break;
> > >                         if (++failures == MAX_RECLAIM_RETRIES)
> > > @@ -764,8 +826,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > >          */
> > >         kref_init(&pool->kref);
> > >         INIT_LIST_HEAD(&pool->list);
> > > -       INIT_LIST_HEAD(&pool->lru);
> > > -       spin_lock_init(&pool->lru_lock);
> > > +       list_lru_init_memcg(&pool->list_lru, NULL);
> > >         INIT_WORK(&pool->shrink_work, shrink_worker);
> > >
> > >         zswap_pool_debug("created", pool);
> > > @@ -831,6 +892,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> > >
> > >         cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> > >         free_percpu(pool->acomp_ctx);
> > > +       list_lru_destroy(&pool->list_lru);
> > > +       if (pool->next_shrink)
> > > +               mem_cgroup_put(pool->next_shrink);
> > >         for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > >                 zpool_destroy_pool(pool->zpools[i]);
> > >         kfree(pool);
> > > @@ -1076,7 +1140,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> > >
> > >         /* try to allocate swap cache page */
> > >         page = __read_swap_cache_async(swpentry, GFP_KERNEL, NULL, 0,
> > > -                                      &page_was_allocated);
> > > +                                      &page_was_allocated, true);
> > >         if (!page) {
> > >                 ret = -ENOMEM;
> > >                 goto fail;
> > > @@ -1142,7 +1206,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> > >         /* start writeback */
> > >         __swap_writepage(page, &wbc);
> > >         put_page(page);
> > > -       zswap_written_back_pages++;
> > >
> > >         return ret;
> > >
> > > @@ -1199,8 +1262,10 @@ bool zswap_store(struct folio *folio)
> > >         struct scatterlist input, output;
> > >         struct crypto_acomp_ctx *acomp_ctx;
> > >         struct obj_cgroup *objcg = NULL;
> > > +       struct mem_cgroup *memcg = NULL;
> > >         struct zswap_pool *pool;
> > >         struct zpool *zpool;
> > > +       int lru_alloc_ret;
> > >         unsigned int dlen = PAGE_SIZE;
> > >         unsigned long handle, value;
> > >         char *buf;
> > > @@ -1230,15 +1295,15 @@ bool zswap_store(struct folio *folio)
> > >                 zswap_invalidate_entry(tree, dupentry);
> > >         }
> > >         spin_unlock(&tree->lock);
> > > -
> > > -       /*
> > > -        * XXX: zswap reclaim does not work with cgroups yet. Without a
> > > -        * cgroup-aware entry LRU, we will push out entries system-wide based on
> > > -        * local cgroup limits.
> > > -        */
> > >         objcg = get_obj_cgroup_from_folio(folio);
> > > -       if (objcg && !obj_cgroup_may_zswap(objcg))
> > > -               goto reject;
> > > +       if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > > +               memcg = get_mem_cgroup_from_objcg(objcg);
> > > +               if (shrink_memcg(memcg)) {
> > > +                       mem_cgroup_put(memcg);
> > > +                       goto reject;
> > > +               }
> > > +               mem_cgroup_put(memcg);
> > > +       }
> > >
> > >         /* reclaim space if needed */
> > >         if (zswap_is_full()) {
> > > @@ -1254,10 +1319,15 @@ bool zswap_store(struct folio *folio)
> > >                         zswap_pool_reached_full = false;
> > >         }
> > >
> > > +       pool = zswap_pool_current_get();
> > > +       if (!pool)
> > > +               goto reject;
> > > +
> >
> > Why do we need to move zswap_pool_current_get() up here?
> >
> > >         /* allocate entry */
> > > -       entry = zswap_entry_cache_alloc(GFP_KERNEL);
> > > +       entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
> > >         if (!entry) {
> > >                 zswap_reject_kmemcache_fail++;
> > > +               zswap_pool_put(pool);
> > >                 goto reject;
> > >         }
> > >
> > > @@ -1269,6 +1339,7 @@ bool zswap_store(struct folio *folio)
> > >                         entry->length = 0;
> > >                         entry->value = value;
> > >                         atomic_inc(&zswap_same_filled_pages);
> > > +                       zswap_pool_put(pool);
> > >                         goto insert_entry;
> > >                 }
> > >                 kunmap_atomic(src);
> > > @@ -1278,9 +1349,15 @@ bool zswap_store(struct folio *folio)
> > >                 goto freepage;
> > >
> > >         /* if entry is successfully added, it keeps the reference */
> > > -       entry->pool = zswap_pool_current_get();
> > > -       if (!entry->pool)
> > > -               goto freepage;
> > > +       entry->pool = pool;
> > > +       if (objcg) {
> > > +               memcg = get_mem_cgroup_from_objcg(objcg);
> > > +               lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
> > > +               mem_cgroup_put(memcg);
> > > +
> > > +               if (lru_alloc_ret)
> > > +                       goto freepage;
> > > +       }
> > >
> > >         /* compress */
> > >         acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> > > @@ -1358,9 +1435,8 @@ bool zswap_store(struct folio *folio)
> > >                 zswap_invalidate_entry(tree, dupentry);
> > >         }
> > >         if (entry->length) {
> > > -               spin_lock(&entry->pool->lru_lock);
> > > -               list_add(&entry->lru, &entry->pool->lru);
> > > -               spin_unlock(&entry->pool->lru_lock);
> > > +               INIT_LIST_HEAD(&entry->lru);
> > > +               zswap_lru_add(&pool->list_lru, entry);
> > >         }
> > >         spin_unlock(&tree->lock);
> > >
> > > @@ -1373,8 +1449,8 @@ bool zswap_store(struct folio *folio)
> > >
> > >  put_dstmem:
> > >         mutex_unlock(acomp_ctx->mutex);
> > > -       zswap_pool_put(entry->pool);
> > >  freepage:
> > > +       zswap_pool_put(entry->pool);
> > >         zswap_entry_cache_free(entry);
> > >  reject:
> > >         if (objcg)
> > > @@ -1467,9 +1543,8 @@ bool zswap_load(struct folio *folio)
> > >                 zswap_invalidate_entry(tree, entry);
> > >                 folio_mark_dirty(folio);
> > >         } else if (entry->length) {
> > > -               spin_lock(&entry->pool->lru_lock);
> > > -               list_move(&entry->lru, &entry->pool->lru);
> > > -               spin_unlock(&entry->pool->lru_lock);
> > > +               zswap_lru_del(&entry->pool->list_lru, entry);
> > > +               zswap_lru_add(&entry->pool->list_lru, entry);
> > >         }
> > >         zswap_entry_put(tree, entry);
> > >         spin_unlock(&tree->lock);
> > > --
> > > 2.34.1
>
> I don't have (strong) opinions or (educated) guesses
> on the rest.


* Re: [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat
  2023-10-18 23:24   ` Yosry Ahmed
@ 2023-10-18 23:50     ` Nhat Pham
  0 siblings, 0 replies; 31+ messages in thread
From: Nhat Pham @ 2023-10-18 23:50 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
	mhocko, roman.gushchin, shakeelb, muchun.song, linux-mm,
	kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
	shuah

On Wed, Oct 18, 2023 at 4:25 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> >
> > Since zswap now writes back pages from memcg-specific LRUs, we now need a
> > new stat to show writebacks count for each memcg.
> >
> > Suggested-by: Nhat Pham <nphamcs@gmail.com>
> > Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > ---
> >  include/linux/memcontrol.h |  2 ++
> >  mm/memcontrol.c            | 15 +++++++++++++++
> >  mm/zswap.c                 |  3 +++
> >  3 files changed, 20 insertions(+)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 3de10fabea0f..7868b1e00bf5 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -38,6 +38,7 @@ enum memcg_stat_item {
> >         MEMCG_KMEM,
> >         MEMCG_ZSWAP_B,
> >         MEMCG_ZSWAPPED,
> > +       MEMCG_ZSWAP_WB,
> >         MEMCG_NR_STAT,
> >  };
> >
> > @@ -1884,6 +1885,7 @@ static inline void count_objcg_event(struct obj_cgroup *objcg,
> >  bool obj_cgroup_may_zswap(struct obj_cgroup *objcg);
> >  void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size);
> >  void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size);
> > +void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg);
> >  #else
> >  static inline bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
> >  {
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 1bde67b29287..a9118871e5a6 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -1505,6 +1505,7 @@ static const struct memory_stat memory_stats[] = {
> >  #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
> >         { "zswap",                      MEMCG_ZSWAP_B                   },
> >         { "zswapped",                   MEMCG_ZSWAPPED                  },
> > +       { "zswap_wb",                   MEMCG_ZSWAP_WB                  },
>
> zswap_writeback would be more consistent with file_writeback below.
>
> Taking a step back, this is not really a "state". We increment it by 1
> every time and never decrement it. Sounds awfully similar to events :)

Ah yeah, this is probably closer to zswpin/zswpout counters :)
We can probably re-use that piece of logic.
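
Roughly what I had in mind - ZSWPWB is just a placeholder name here, and the
new event would presumably also need to be added to memcg_vm_event_stat[] so
that it shows up in memory.stat (sketch only):

/* include/linux/vm_event_item.h: a new event next to ZSWPIN/ZSWPOUT */
        ZSWPWB,

/* mm/zswap.c, shrink_memcg_cb(), once writeback has succeeded */
        zswap_written_back_pages++;
        count_vm_event(ZSWPWB);
        if (entry->objcg)
                count_objcg_event(entry->objcg, ZSWPWB);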

>
> You can also use count_objcg_event() directly and avoid the need for
> obj_cgroup_report_zswap_wb() below.
>
> >  #endif
> >         { "file_mapped",                NR_FILE_MAPPED                  },
> >         { "file_dirty",                 NR_FILE_DIRTY                   },
> > @@ -1541,6 +1542,7 @@ static int memcg_page_state_unit(int item)
> >         switch (item) {
> >         case MEMCG_PERCPU_B:
> >         case MEMCG_ZSWAP_B:
> > +       case MEMCG_ZSWAP_WB:
> >         case NR_SLAB_RECLAIMABLE_B:
> >         case NR_SLAB_UNRECLAIMABLE_B:
> >         case WORKINGSET_REFAULT_ANON:
> > @@ -7861,6 +7863,19 @@ void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
> >         rcu_read_unlock();
> >  }
> >
> > +void obj_cgroup_report_zswap_wb(struct obj_cgroup *objcg)
> > +{
> > +       struct mem_cgroup *memcg;
> > +
> > +       if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> > +               return;
> > +
> > +       rcu_read_lock();
> > +       memcg = obj_cgroup_memcg(objcg);
> > +       mod_memcg_state(memcg, MEMCG_ZSWAP_WB, 1);
> > +       rcu_read_unlock();
> > +}
> > +
> >  static u64 zswap_current_read(struct cgroup_subsys_state *css,
> >                               struct cftype *cft)
> >  {
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index d2989ad11814..15485427e3fa 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -704,6 +704,9 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> >         }
> >         zswap_written_back_pages++;
> >
> > +       if (entry->objcg)
> > +               obj_cgroup_report_zswap_wb(entry->objcg);
> > +
> >         /*
> >          * Writeback started successfully, the page now belongs to the
> >          * swapcache. Drop the entry from zswap - unless invalidate already
> > --
> > 2.34.1


* Re: [PATCH v3 2/5] zswap: make shrinking memcg-aware
  2023-10-18 23:46     ` Nhat Pham
  2023-10-18 23:48       ` Nhat Pham
@ 2023-10-19  1:11       ` Yosry Ahmed
  2023-10-19 12:47         ` Domenico Cerasuolo
  1 sibling, 1 reply; 31+ messages in thread
From: Yosry Ahmed @ 2023-10-19  1:11 UTC (permalink / raw)
  To: Nhat Pham
  Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
	mhocko, roman.gushchin, shakeelb, muchun.song, linux-mm,
	kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
	shuah

On Wed, Oct 18, 2023 at 4:47 PM Nhat Pham <nphamcs@gmail.com> wrote:
>
> On Wed, Oct 18, 2023 at 4:20 PM Yosry Ahmed <yosryahmed@google.com> wrote:
> >
> > On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
> > >
> > > From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> > >
> > > Currently, we only have a single global LRU for zswap. This makes it
> > > impossible to perform worload-specific shrinking - an memcg cannot
> > > determine which pages in the pool it owns, and often ends up writing
> > > pages from other memcgs. This issue has been previously observed in
> > > practice and mitigated by simply disabling memcg-initiated shrinking:
> > >
> > > https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
> > >
> > > This patch fully resolves the issue by replacing the global zswap LRU
> > > with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
> > >
> > > a) When a store attempt hits an memcg limit, it now triggers a
> > >    synchronous reclaim attempt that, if successful, allows the new
> > >    hotter page to be accepted by zswap.
> > > b) If the store attempt instead hits the global zswap limit, it will
> > >    trigger an asynchronous reclaim attempt, in which an memcg is
> > >    selected for reclaim in a round-robin-like fashion.
> >
> > Could you explain the rationale behind the difference in behavior here
> > between the global limit and the memcg limit?
>
> The global limit hit reclaim behavior was previously asynchronous too.
> We just added the round-robin part because now the zswap LRU is
> cgroup-aware :)
>
> For the cgroup limit hit, however, we cannot make it asynchronous,
> as it is a bit hairy to add a per-cgroup shrink_work. So, we just
> perform the reclaim synchronously.
>
> The question is whether it makes sense to make the global limit
> reclaim synchronous too. That is a task of its own IMO.

Let's add such context to the commit log, and perhaps an XXX comment
in the code asking whether we should consider doing the reclaim
synchronously for the global limit too.
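
Something like this, perhaps, next to the zswap_is_full() handling in
zswap_store() (just a sketch of the wording):

        /*
         * XXX: now that the zswap LRU is memcg-aware, should the global-limit
         * reclaim kicked off here be done synchronously as well, like the
         * per-memcg limit hit above?
         */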

>
> (FWIW, this somewhat mirrors the direct reclaimer v.s kswapd
> story to me, but don't quote me too hard on this).
>
[..]
>
> >
> > >         /* Hold a reference to prevent a free during writeback */
> > >         zswap_entry_get(entry);
> > >         spin_unlock(&tree->lock);
> > >
> > > -       ret = zswap_writeback_entry(entry, tree);
> > > +       writeback_result = zswap_writeback_entry(entry, tree);
> > >
> > >         spin_lock(&tree->lock);
> > > -       if (ret) {
> > > -               /* Writeback failed, put entry back on LRU */
> > > -               spin_lock(&pool->lru_lock);
> > > -               list_move(&entry->lru, &pool->lru);
> > > -               spin_unlock(&pool->lru_lock);
> > > +       if (writeback_result) {
> > > +               zswap_reject_reclaim_fail++;
> > > +               memcg = get_mem_cgroup_from_entry(entry);
> > > +               spin_lock(lock);
> > > +               /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > > +               list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
> > > +               spin_unlock(lock);
> > > +               mem_cgroup_put(memcg);
> > > +               ret = LRU_RETRY;
> > >                 goto put_unlock;
> > >         }
> > > +       zswap_written_back_pages++;
> >
> > Why is this moved here from zswap_writeback_entry()? Also why is
> > zswap_reject_reclaim_fail incremented here instead of inside
> > zswap_writeback_entry()?
>
> Domenico should know this better than me, but my understanding
> is that moving it here protects concurrent modifications of
> zswap_written_back_pages with the tree lock.
>
> Is writeback single-threaded in the past? This counter is non-atomic,
> and doesn't seem to be protected by any locks...
>
> There definitely can be concurrent stores now though - with
> a synchronous reclaim from cgroup-limit hit and another
> from the old shrink worker.
>
> (and with the new zswap shrinker, concurrent reclaim is
> the expectation!)

The comment above the stats definition states that they are left
unprotected purposefully. If we want to fix that let's do it
separately. If this patch makes it significantly worse such that it
would cause a regression, let's at least do it in a separate patch.
The diff here is too large already.

>
> zswap_reject_reclaim_fail was previously incremented in
> shrink_worker I think. We need it to be incremented
> for the shrinker as well, so might as well move it here.

Wouldn't moving it inside zswap_writeback_entry() near incrementing
zswap_written_back_pages make it easier to follow?


* Re: [PATCH v3 2/5] zswap: make shrinking memcg-aware
  2023-10-18 23:20   ` Yosry Ahmed
  2023-10-18 23:46     ` Nhat Pham
@ 2023-10-19 12:29     ` Domenico Cerasuolo
  2023-10-19 16:14       ` Yosry Ahmed
  1 sibling, 1 reply; 31+ messages in thread
From: Domenico Cerasuolo @ 2023-10-19 12:29 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Nhat Pham, akpm, hannes, sjenning, ddstreet, vitaly.wool, mhocko,
	roman.gushchin, shakeelb, muchun.song, linux-mm, kernel-team,
	linux-kernel, cgroups, linux-doc, linux-kselftest, shuah

On Thu, Oct 19, 2023 at 1:20 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> >
> > Currently, we only have a single global LRU for zswap. This makes it
> > impossible to perform worload-specific shrinking - an memcg cannot
> > determine which pages in the pool it owns, and often ends up writing
> > pages from other memcgs. This issue has been previously observed in
> > practice and mitigated by simply disabling memcg-initiated shrinking:
> >
> > https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
> >
> > This patch fully resolves the issue by replacing the global zswap LRU
> > with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
> >
> > a) When a store attempt hits an memcg limit, it now triggers a
> >    synchronous reclaim attempt that, if successful, allows the new
> >    hotter page to be accepted by zswap.
> > b) If the store attempt instead hits the global zswap limit, it will
> >    trigger an asynchronous reclaim attempt, in which an memcg is
> >    selected for reclaim in a round-robin-like fashion.
>
> Could you explain the rationale behind the difference in behavior here
> between the global limit and the memcg limit?
>
> >
> > Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> > Co-developed-by: Nhat Pham <nphamcs@gmail.com>
> > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > ---
> >  include/linux/memcontrol.h |   5 ++
> >  mm/swap.h                  |   3 +-
> >  mm/swap_state.c            |  17 +++-
> >  mm/zswap.c                 | 179 ++++++++++++++++++++++++++-----------
> >  4 files changed, 147 insertions(+), 57 deletions(-)
>
> This is a dense patch, I haven't absorbed all of it yet, but the first
> round of comments below.
>
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 031102ac9311..3de10fabea0f 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -1179,6 +1179,11 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
> >         return NULL;
> >  }
> >
> > +static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
> > +{
> > +       return NULL;
> > +}
> > +
> >  static inline bool folio_memcg_kmem(struct folio *folio)
> >  {
> >         return false;
> > diff --git a/mm/swap.h b/mm/swap.h
> > index 8a3c7a0ace4f..bbd6ce661a20 100644
> > --- a/mm/swap.h
> > +++ b/mm/swap.h
> > @@ -50,7 +50,8 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >  struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >                                      struct vm_area_struct *vma,
> >                                      unsigned long addr,
> > -                                    bool *new_page_allocated);
> > +                                    bool *new_page_allocated,
> > +                                    bool fail_if_exists);
> >  struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> >                                     struct vm_fault *vmf);
> >  struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index b3b14bd0dd64..0356df52b06a 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -411,7 +411,7 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
> >
> >  struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >                         struct vm_area_struct *vma, unsigned long addr,
> > -                       bool *new_page_allocated)
> > +                       bool *new_page_allocated, bool fail_if_exists)
>
> nit: I don't feel like "fail" is the correct word here. Perhaps "skip"?

I really don't have a preference, we can go with "skip".

>
> >  {
> >         struct swap_info_struct *si;
> >         struct folio *folio;
> > @@ -468,6 +468,15 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >                 if (err != -EEXIST)
> >                         goto fail_put_swap;
> >
> > +               /*
> > +                * This check guards against a state that happens if a call
> > +                * to __read_swap_cache_async triggers a reclaim, if the
> > +                * reclaimer (zswap's writeback as of now) then decides to
> > +                * reclaim that same entry, then the subsequent call to
> > +                * __read_swap_cache_async would get stuck in this loop.
>
> I think this comment needs to first state that it is protecting
> against a recursive call in general, not necessarily in reclaim, as
> __read_swap_cache_async() is not usually called in the context of
> reclaim so this can be confusing. Then it can give the exact example
> we have today. Perhaps something like:
>
> Protect against a recursive call to __read_swap_cache_async() on the
> same entry waiting forever here because SWAP_HAS_CACHE is set but the
> folio is not the swap cache yet. This can happen today if
> mem_cgroup_swapin_charge_folio() below triggers reclaim through zswap,
> which may call __read_swap_cache_async() in the writeback path.
>

Sold.
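
IIUC the hunk would then end up roughly like this - renaming the parameter to
skip_if_exists, and (as you note below) dropping the explicit -EEXIST check
since err is already known to be -EEXIST at this point (sketch only):

		/*
		 * Protect against a recursive call to __read_swap_cache_async()
		 * on the same entry waiting forever here because SWAP_HAS_CACHE
		 * is set but the folio is not in the swap cache yet. This can
		 * happen today if mem_cgroup_swapin_charge_folio() below
		 * triggers reclaim through zswap, which may call
		 * __read_swap_cache_async() in the writeback path.
		 */
		if (skip_if_exists)
			goto fail_put_swap;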

> > +                */
> > +               if (fail_if_exists && err == -EEXIST)
>
> We already made sure  in the preceding condition that err is -EEXIST.
>
> > +                       goto fail_put_swap;
> >                 /*
> >                  * We might race against __delete_from_swap_cache(), and
> >                  * stumble across a swap_map entry whose SWAP_HAS_CACHE
> > @@ -530,7 +539,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> >  {
> >         bool page_was_allocated;
> >         struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
> > -                       vma, addr, &page_was_allocated);
> > +                       vma, addr, &page_was_allocated, false);
> >
> >         if (page_was_allocated)
> >                 swap_readpage(retpage, false, plug);
> > @@ -649,7 +658,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> >                 /* Ok, do the async read-ahead now */
> >                 page = __read_swap_cache_async(
> >                         swp_entry(swp_type(entry), offset),
> > -                       gfp_mask, vma, addr, &page_allocated);
> > +                       gfp_mask, vma, addr, &page_allocated, false);
> >                 if (!page)
> >                         continue;
> >                 if (page_allocated) {
> > @@ -815,7 +824,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> >                 pte_unmap(pte);
> >                 pte = NULL;
> >                 page = __read_swap_cache_async(entry, gfp_mask, vma,
> > -                                              addr, &page_allocated);
> > +                                              addr, &page_allocated, false);
> >                 if (!page)
> >                         continue;
> >                 if (page_allocated) {
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index 083c693602b8..d2989ad11814 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -34,6 +34,7 @@
> >  #include <linux/writeback.h>
> >  #include <linux/pagemap.h>
> >  #include <linux/workqueue.h>
> > +#include <linux/list_lru.h>
> >
> >  #include "swap.h"
> >  #include "internal.h"
> > @@ -171,8 +172,8 @@ struct zswap_pool {
> >         struct work_struct shrink_work;
> >         struct hlist_node node;
> >         char tfm_name[CRYPTO_MAX_ALG_NAME];
> > -       struct list_head lru;
> > -       spinlock_t lru_lock;
> > +       struct list_lru list_lru;
> > +       struct mem_cgroup *next_shrink;
> >  };
> >
> >  /*
> > @@ -288,15 +289,25 @@ static void zswap_update_total_size(void)
> >         zswap_pool_total_size = total;
> >  }
> >
> > +static inline struct mem_cgroup *get_mem_cgroup_from_entry(struct zswap_entry *entry)
> > +{
> > +       return entry->objcg ? get_mem_cgroup_from_objcg(entry->objcg) : NULL;
> > +}
> > +
> > +static inline int entry_to_nid(struct zswap_entry *entry)
> > +{
> > +       return page_to_nid(virt_to_page(entry));
> > +}
> > +
> >  /*********************************
> >  * zswap entry functions
> >  **********************************/
> >  static struct kmem_cache *zswap_entry_cache;
> >
> > -static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> > +static struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp, int nid)
> >  {
> >         struct zswap_entry *entry;
> > -       entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> > +       entry = kmem_cache_alloc_node(zswap_entry_cache, gfp, nid);
> >         if (!entry)
> >                 return NULL;
> >         entry->refcount = 1;
> > @@ -309,6 +320,27 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> >         kmem_cache_free(zswap_entry_cache, entry);
> >  }
> >
> > +/*********************************
> > +* lru functions
> > +**********************************/
> > +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > +{
> > +       struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
>
> Could we avoid the need for get/put with an rcu_read_lock() instead?

I think we can, I'm not entirely sure of the consequences though. By the
look of it I'd say it's safe but I wouldn't trust my judgement on this.

>
> > +       bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
> > +
> > +       mem_cgroup_put(memcg);
> > +       return added;
> > +}
> > +
> > +static bool zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
> > +{
> > +       struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> > +       bool removed = __list_lru_del(list_lru, &entry->lru, entry_to_nid(entry), memcg);
> > +
> > +       mem_cgroup_put(memcg);
> > +       return removed;
> > +}
> > +
> >  /*********************************
> >  * rbtree functions
> >  **********************************/
> > @@ -393,9 +425,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> >         if (!entry->length)
> >                 atomic_dec(&zswap_same_filled_pages);
> >         else {
> > -               spin_lock(&entry->pool->lru_lock);
> > -               list_del(&entry->lru);
> > -               spin_unlock(&entry->pool->lru_lock);
> > +               zswap_lru_del(&entry->pool->list_lru, entry);
> >                 zpool_free(zswap_find_zpool(entry), entry->handle);
> >                 zswap_pool_put(entry->pool);
> >         }
> > @@ -629,21 +659,16 @@ static void zswap_invalidate_entry(struct zswap_tree *tree,
> >                 zswap_entry_put(tree, entry);
> >  }
> >
> > -static int zswap_reclaim_entry(struct zswap_pool *pool)
> > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > +                                      spinlock_t *lock, void *arg)
> >  {
> > -       struct zswap_entry *entry;
> > +       struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > +       struct mem_cgroup *memcg;
> >         struct zswap_tree *tree;
> >         pgoff_t swpoffset;
> > -       int ret;
> > +       enum lru_status ret = LRU_REMOVED_RETRY;
> > +       int writeback_result;
> >
> > -       /* Get an entry off the LRU */
> > -       spin_lock(&pool->lru_lock);
> > -       if (list_empty(&pool->lru)) {
> > -               spin_unlock(&pool->lru_lock);
> > -               return -EINVAL;
> > -       }
> > -       entry = list_last_entry(&pool->lru, struct zswap_entry, lru);
> > -       list_del_init(&entry->lru);
> >         /*
> >          * Once the lru lock is dropped, the entry might get freed. The
> >          * swpoffset is copied to the stack, and entry isn't deref'd again
> > @@ -651,28 +676,33 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> >          */
> >         swpoffset = swp_offset(entry->swpentry);
> >         tree = zswap_trees[swp_type(entry->swpentry)];
> > -       spin_unlock(&pool->lru_lock);
> > +       list_lru_isolate(l, item);
> > +       spin_unlock(lock);
>
> Perhaps a comment somewhere stating that we only return either
> LRU_REMOVED_RETRY or LRU_RETRY, so it's fine to drop and reacquire the
> lock.
>
> >
> >         /* Check for invalidate() race */
> >         spin_lock(&tree->lock);
> >         if (entry != zswap_rb_search(&tree->rbroot, swpoffset)) {
> > -               ret = -EAGAIN;
> >                 goto unlock;
> >         }
>
> nit: braces no longer needed?
>
> >         /* Hold a reference to prevent a free during writeback */
> >         zswap_entry_get(entry);
> >         spin_unlock(&tree->lock);
> >
> > -       ret = zswap_writeback_entry(entry, tree);
> > +       writeback_result = zswap_writeback_entry(entry, tree);
> >
> >         spin_lock(&tree->lock);
> > -       if (ret) {
> > -               /* Writeback failed, put entry back on LRU */
> > -               spin_lock(&pool->lru_lock);
> > -               list_move(&entry->lru, &pool->lru);
> > -               spin_unlock(&pool->lru_lock);
> > +       if (writeback_result) {
> > +               zswap_reject_reclaim_fail++;
> > +               memcg = get_mem_cgroup_from_entry(entry);
> > +               spin_lock(lock);
> > +               /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > +               list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
> > +               spin_unlock(lock);
> > +               mem_cgroup_put(memcg);
> > +               ret = LRU_RETRY;
> >                 goto put_unlock;
> >         }
> > +       zswap_written_back_pages++;
>
> Why is this moved here from zswap_writeback_entry()? Also why is
> zswap_reject_reclaim_fail incremented here instead of inside
> zswap_writeback_entry()?
>
> >
> >         /*
> >          * Writeback started successfully, the page now belongs to the
> > @@ -686,7 +716,36 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> >         zswap_entry_put(tree, entry);
> >  unlock:
> >         spin_unlock(&tree->lock);
> > -       return ret ? -EAGAIN : 0;
> > +       spin_lock(lock);
> > +       return ret;
> > +}
> > +
> > +static int shrink_memcg(struct mem_cgroup *memcg)
> > +{
> > +       struct zswap_pool *pool;
> > +       int nid, shrunk = 0;
> > +
> > +       pool = zswap_pool_current_get();
> > +       if (!pool)
> > +               return -EINVAL;
> > +
> > +       /*
> > +        * Skip zombies because their LRUs are reparented and we would be
> > +        * reclaiming from the parent instead of the dead memcgroup.
>
> nit: s/memcgroup/memcg.
>
> > +        */
> > +       if (memcg && !mem_cgroup_online(memcg))
> > +               goto out;
>
> If we move this above zswap_pool_current_get(), we can return directly
> and remove the label. I noticed we will return -EAGAIN if memcg is
> offline. IIUC -EAGAIN for the caller will move on to the next memcg,
> but I am wondering if a different errno would be clearer here.

True, I remember spending some time staring at error codes but couldn't find a
better one. What if we use -EINVAL for retryable errors, and use something else
for the one where there is no pool? -ENODEV?

>
> > +
> > +       for_each_node_state(nid, N_NORMAL_MEMORY) {
> > +               unsigned long nr_to_walk = 1;
> > +
> > +               if (list_lru_walk_one(&pool->list_lru, nid, memcg, &shrink_memcg_cb,
> > +                                     NULL, &nr_to_walk))
> > +                       shrunk++;
>
> nit:
> shrunk += list_lru_walk_one(..);
>
> > +       }
> > +out:
> > +       zswap_pool_put(pool);
> > +       return shrunk ? 0 : -EAGAIN;
> >  }
> >
> >  static void shrink_worker(struct work_struct *w)
> > @@ -695,10 +754,13 @@ static void shrink_worker(struct work_struct *w)
> >                                                 shrink_work);
> >         int ret, failures = 0;
> >
> > +       /* global reclaim will select cgroup in a round-robin fashion. */
> >         do {
> > -               ret = zswap_reclaim_entry(pool);
> > +               pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
>
> Perhaps next_shrink_memcg is a better name here?

Will change if you have a strong preference, I'd keep it shorter because it's
always used in conjunction with a memcg type or function.

>
> > +
> > +               ret = shrink_memcg(pool->next_shrink);
> > +
> >                 if (ret) {
> > -                       zswap_reject_reclaim_fail++;
> >                         if (ret != -EAGAIN)
> >                                 break;
> >                         if (++failures == MAX_RECLAIM_RETRIES)
> > @@ -764,8 +826,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> >          */
> >         kref_init(&pool->kref);
> >         INIT_LIST_HEAD(&pool->list);
> > -       INIT_LIST_HEAD(&pool->lru);
> > -       spin_lock_init(&pool->lru_lock);
> > +       list_lru_init_memcg(&pool->list_lru, NULL);
> >         INIT_WORK(&pool->shrink_work, shrink_worker);
> >
> >         zswap_pool_debug("created", pool);
> > @@ -831,6 +892,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> >
> >         cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> >         free_percpu(pool->acomp_ctx);
> > +       list_lru_destroy(&pool->list_lru);
> > +       if (pool->next_shrink)
> > +               mem_cgroup_put(pool->next_shrink);
> >         for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> >                 zpool_destroy_pool(pool->zpools[i]);
> >         kfree(pool);
> > @@ -1076,7 +1140,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> >
> >         /* try to allocate swap cache page */
> >         page = __read_swap_cache_async(swpentry, GFP_KERNEL, NULL, 0,
> > -                                      &page_was_allocated);
> > +                                      &page_was_allocated, true);
> >         if (!page) {
> >                 ret = -ENOMEM;
> >                 goto fail;
> > @@ -1142,7 +1206,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> >         /* start writeback */
> >         __swap_writepage(page, &wbc);
> >         put_page(page);
> > -       zswap_written_back_pages++;
> >
> >         return ret;
> >
> > @@ -1199,8 +1262,10 @@ bool zswap_store(struct folio *folio)
> >         struct scatterlist input, output;
> >         struct crypto_acomp_ctx *acomp_ctx;
> >         struct obj_cgroup *objcg = NULL;
> > +       struct mem_cgroup *memcg = NULL;
> >         struct zswap_pool *pool;
> >         struct zpool *zpool;
> > +       int lru_alloc_ret;
> >         unsigned int dlen = PAGE_SIZE;
> >         unsigned long handle, value;
> >         char *buf;
> > @@ -1230,15 +1295,15 @@ bool zswap_store(struct folio *folio)
> >                 zswap_invalidate_entry(tree, dupentry);
> >         }
> >         spin_unlock(&tree->lock);
> > -
> > -       /*
> > -        * XXX: zswap reclaim does not work with cgroups yet. Without a
> > -        * cgroup-aware entry LRU, we will push out entries system-wide based on
> > -        * local cgroup limits.
> > -        */
> >         objcg = get_obj_cgroup_from_folio(folio);
> > -       if (objcg && !obj_cgroup_may_zswap(objcg))
> > -               goto reject;
> > +       if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > +               memcg = get_mem_cgroup_from_objcg(objcg);
> > +               if (shrink_memcg(memcg)) {
> > +                       mem_cgroup_put(memcg);
> > +                       goto reject;
> > +               }
> > +               mem_cgroup_put(memcg);
> > +       }
> >
> >         /* reclaim space if needed */
> >         if (zswap_is_full()) {
> > @@ -1254,10 +1319,15 @@ bool zswap_store(struct folio *folio)
> >                         zswap_pool_reached_full = false;
> >         }
> >
> > +       pool = zswap_pool_current_get();
> > +       if (!pool)
> > +               goto reject;
> > +
>
> Why do we need to move zswap_pool_current_get() up here?

Ah, thanks. This is a leftover from a previous version where the pool was needed
to allocate the entry.


>
> >         /* allocate entry */
> > -       entry = zswap_entry_cache_alloc(GFP_KERNEL);
> > +       entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
> >         if (!entry) {
> >                 zswap_reject_kmemcache_fail++;
> > +               zswap_pool_put(pool);
> >                 goto reject;
> >         }
> >
> > @@ -1269,6 +1339,7 @@ bool zswap_store(struct folio *folio)
> >                         entry->length = 0;
> >                         entry->value = value;
> >                         atomic_inc(&zswap_same_filled_pages);
> > +                       zswap_pool_put(pool);
> >                         goto insert_entry;
> >                 }
> >                 kunmap_atomic(src);
> > @@ -1278,9 +1349,15 @@ bool zswap_store(struct folio *folio)
> >                 goto freepage;
> >
> >         /* if entry is successfully added, it keeps the reference */
> > -       entry->pool = zswap_pool_current_get();
> > -       if (!entry->pool)
> > -               goto freepage;
> > +       entry->pool = pool;
> > +       if (objcg) {
> > +               memcg = get_mem_cgroup_from_objcg(objcg);
> > +               lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
> > +               mem_cgroup_put(memcg);
> > +
> > +               if (lru_alloc_ret)
> > +                       goto freepage;
> > +       }
> >
> >         /* compress */
> >         acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> > @@ -1358,9 +1435,8 @@ bool zswap_store(struct folio *folio)
> >                 zswap_invalidate_entry(tree, dupentry);
> >         }
> >         if (entry->length) {
> > -               spin_lock(&entry->pool->lru_lock);
> > -               list_add(&entry->lru, &entry->pool->lru);
> > -               spin_unlock(&entry->pool->lru_lock);
> > +               INIT_LIST_HEAD(&entry->lru);
> > +               zswap_lru_add(&pool->list_lru, entry);
> >         }
> >         spin_unlock(&tree->lock);
> >
> > @@ -1373,8 +1449,8 @@ bool zswap_store(struct folio *folio)
> >
> >  put_dstmem:
> >         mutex_unlock(acomp_ctx->mutex);
> > -       zswap_pool_put(entry->pool);
> >  freepage:
> > +       zswap_pool_put(entry->pool);
> >         zswap_entry_cache_free(entry);
> >  reject:
> >         if (objcg)
> > @@ -1467,9 +1543,8 @@ bool zswap_load(struct folio *folio)
> >                 zswap_invalidate_entry(tree, entry);
> >                 folio_mark_dirty(folio);
> >         } else if (entry->length) {
> > -               spin_lock(&entry->pool->lru_lock);
> > -               list_move(&entry->lru, &entry->pool->lru);
> > -               spin_unlock(&entry->pool->lru_lock);
> > +               zswap_lru_del(&entry->pool->list_lru, entry);
> > +               zswap_lru_add(&entry->pool->list_lru, entry);
> >         }
> >         zswap_entry_put(tree, entry);
> >         spin_unlock(&tree->lock);
> > --
> > 2.34.1


* Re: [PATCH v3 2/5] zswap: make shrinking memcg-aware
  2023-10-19  1:11       ` Yosry Ahmed
@ 2023-10-19 12:47         ` Domenico Cerasuolo
  2023-10-19 16:28           ` Yosry Ahmed
  0 siblings, 1 reply; 31+ messages in thread
From: Domenico Cerasuolo @ 2023-10-19 12:47 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Nhat Pham, akpm, hannes, sjenning, ddstreet, vitaly.wool, mhocko,
	roman.gushchin, shakeelb, muchun.song, linux-mm, kernel-team,
	linux-kernel, cgroups, linux-doc, linux-kselftest, shuah

On Thu, Oct 19, 2023 at 3:12 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Wed, Oct 18, 2023 at 4:47 PM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > On Wed, Oct 18, 2023 at 4:20 PM Yosry Ahmed <yosryahmed@google.com> wrote:
> > >
> > > On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
> > > >
> > > > From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> > > >
> > > > Currently, we only have a single global LRU for zswap. This makes it
> > > > impossible to perform worload-specific shrinking - an memcg cannot
> > > > determine which pages in the pool it owns, and often ends up writing
> > > > pages from other memcgs. This issue has been previously observed in
> > > > practice and mitigated by simply disabling memcg-initiated shrinking:
> > > >
> > > > https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
> > > >
> > > > This patch fully resolves the issue by replacing the global zswap LRU
> > > > with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
> > > >
> > > > a) When a store attempt hits an memcg limit, it now triggers a
> > > >    synchronous reclaim attempt that, if successful, allows the new
> > > >    hotter page to be accepted by zswap.
> > > > b) If the store attempt instead hits the global zswap limit, it will
> > > >    trigger an asynchronous reclaim attempt, in which an memcg is
> > > >    selected for reclaim in a round-robin-like fashion.
> > >
> > > Could you explain the rationale behind the difference in behavior here
> > > between the global limit and the memcg limit?
> >
> > The global limit hit reclaim behavior was previously asynchronous too.
> > We just added the round-robin part because now the zswap LRU is
> > cgroup-aware :)
> >
> > For the cgroup limit hit, however, we cannot make it asynchronous,
> > as it is a bit hairy to add a per-cgroup shrink_work. So, we just
> > perform the reclaim synchronously.
> >
> > The question is whether it makes sense to make the global limit
> > reclaim synchronous too. That is a task of its own IMO.
>
> Let's add such context to the commit log, and perhaps an XXX comment
> in the code asking whether we should consider doing the reclaim
> synchronously for the global limit too.

Makes sense. I wonder whether the original reason for switching from
synchronous to asynchronous reclaim will still be valid with the shrinker in
place.

>
> >
> > (FWIW, this somewhat mirrors the direct reclaimer v.s kswapd
> > story to me, but don't quote me too hard on this).
> >
> [..]
> >
> > >
> > > >         /* Hold a reference to prevent a free during writeback */
> > > >         zswap_entry_get(entry);
> > > >         spin_unlock(&tree->lock);
> > > >
> > > > -       ret = zswap_writeback_entry(entry, tree);
> > > > +       writeback_result = zswap_writeback_entry(entry, tree);
> > > >
> > > >         spin_lock(&tree->lock);
> > > > -       if (ret) {
> > > > -               /* Writeback failed, put entry back on LRU */
> > > > -               spin_lock(&pool->lru_lock);
> > > > -               list_move(&entry->lru, &pool->lru);
> > > > -               spin_unlock(&pool->lru_lock);
> > > > +       if (writeback_result) {
> > > > +               zswap_reject_reclaim_fail++;
> > > > +               memcg = get_mem_cgroup_from_entry(entry);
> > > > +               spin_lock(lock);
> > > > +               /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > > > +               list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
> > > > +               spin_unlock(lock);
> > > > +               mem_cgroup_put(memcg);
> > > > +               ret = LRU_RETRY;
> > > >                 goto put_unlock;
> > > >         }
> > > > +       zswap_written_back_pages++;
> > >
> > > Why is this moved here from zswap_writeback_entry()? Also why is
> > > zswap_reject_reclaim_fail incremented here instead of inside
> > > zswap_writeback_entry()?
> >
> > Domenico should know this better than me, but my understanding
> > is that moving it here protects concurrent modifications of
> > zswap_written_back_pages with the tree lock.
> >
> > Is writeback single-threaded in the past? This counter is non-atomic,
> > and doesn't seem to be protected by any locks...
> >
> > There definitely can be concurrent stores now though - with
> > a synchronous reclaim from cgroup-limit hit and another
> > from the old shrink worker.
> >
> > (and with the new zswap shrinker, concurrent reclaim is
> > the expectation!)
>
> The comment above the stats definition states that they are left
> unprotected purposefully. If we want to fix that let's do it
> separately. If this patch makes it significantly worse such that it
> would cause a regression, let's at least do it in a separate patch.
> The diff here is too large already.
>
> >
> > zswap_reject_reclaim_fail was previously incremented in
> > shrink_worker I think. We need it to be incremented
> > for the shrinker as well, so might as well move it here.
>
> Wouldn't moving it inside zswap_writeback_entry() near incrementing
> zswap_written_back_pages make it easier to follow?

As Nhat said, zswap_reject_reclaim_fail++ had to be moved; I naturally moved
it here because this is where we act upon the result of the writeback. I then
noticed that zswap_written_back_pages++ was elsewhere and decided to move that
as well, so that the two are in the same place and at least under the tree
lock.

It's not meant to fix the unprotected counters; it's just a mitigation, since
we are forced to move at least one of them anyway.


* Re: [PATCH v3 2/5] zswap: make shrinking memcg-aware
  2023-10-19 12:29     ` Domenico Cerasuolo
@ 2023-10-19 16:14       ` Yosry Ahmed
  2023-10-20 19:58         ` Nhat Pham
  0 siblings, 1 reply; 31+ messages in thread
From: Yosry Ahmed @ 2023-10-19 16:14 UTC (permalink / raw)
  To: Domenico Cerasuolo
  Cc: Nhat Pham, akpm, hannes, sjenning, ddstreet, vitaly.wool, mhocko,
	roman.gushchin, shakeelb, muchun.song, linux-mm, kernel-team,
	linux-kernel, cgroups, linux-doc, linux-kselftest, shuah

[..]
> > >
> > > +/*********************************
> > > +* lru functions
> > > +**********************************/
> > > +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > > +{
> > > +       struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> >
> > Could we avoid the need for get/put with an rcu_read_lock() instead?
>
> I think we can, I'm not entirely sure of the consequences though. By the
> look of it I'd say it's safe but I wouldn't trust my judgement on this.

It just seems like we have a pattern of short-lived get/put. If RCU
gives enough protection it should be simpler. IIUC taking a reference
does not protect against offlining or reparenting, so I am not sure if
taking a reference here would provide any more protection than an
rcu_read_lock() would.
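
Something like the below - just a sketch, and it assumes that the RCU read
section is in fact enough to keep the memcg stable for the duration of
__list_lru_add(), which is exactly the part I am not sure about:

static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
{
        struct mem_cgroup *memcg;
        bool added;

        rcu_read_lock();
        /* no reference taken; obj_cgroup_memcg() wants the RCU read lock held */
        memcg = entry->objcg ? obj_cgroup_memcg(entry->objcg) : NULL;
        added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
        rcu_read_unlock();
        return added;
}

(and the same pattern for zswap_lru_del())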

>
> >
[..]
> > > @@ -686,7 +716,36 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > >         zswap_entry_put(tree, entry);
> > >  unlock:
> > >         spin_unlock(&tree->lock);
> > > -       return ret ? -EAGAIN : 0;
> > > +       spin_lock(lock);
> > > +       return ret;
> > > +}
> > > +
> > > +static int shrink_memcg(struct mem_cgroup *memcg)
> > > +{
> > > +       struct zswap_pool *pool;
> > > +       int nid, shrunk = 0;
> > > +
> > > +       pool = zswap_pool_current_get();
> > > +       if (!pool)
> > > +               return -EINVAL;
> > > +
> > > +       /*
> > > +        * Skip zombies because their LRUs are reparented and we would be
> > > +        * reclaiming from the parent instead of the dead memcgroup.
> >
> > nit: s/memcgroup/memcg.
> >
> > > +        */
> > > +       if (memcg && !mem_cgroup_online(memcg))
> > > +               goto out;
> >
> > If we move this above zswap_pool_current_get(), we can return directly
> > and remove the label. I noticed we will return -EAGAIN if memcg is
> > offline. IIUC -EAGAIN for the caller will move on to the next memcg,
> > but I am wondering if a different errno would be clearer here.
>
> True, I remember spending some time staring at error codes but couldn't find a
> better one. What if we use -EINVAL for retryable errors, and use something else
> for the one where there is no pool? -ENODEV?

Do you mean -EINVAL for non-retryable errors? Perhaps -ENOENT is more
appropriate as a return for offline memcgs?
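
i.e. shrink_memcg() could become something like the sketch below, using the
error values from the discussion above (shrink_worker() would then need to
treat -ENOENT like -EAGAIN and simply move on to the next memcg):

static int shrink_memcg(struct mem_cgroup *memcg)
{
        struct zswap_pool *pool;
        int nid, shrunk = 0;

        /*
         * Skip zombies because their LRUs are reparented and we would be
         * reclaiming from the parent instead of the dead memcg.
         */
        if (memcg && !mem_cgroup_online(memcg))
                return -ENOENT;

        pool = zswap_pool_current_get();
        if (!pool)
                return -EINVAL;

        for_each_node_state(nid, N_NORMAL_MEMORY) {
                unsigned long nr_to_walk = 1;

                shrunk += list_lru_walk_one(&pool->list_lru, nid, memcg,
                                            &shrink_memcg_cb, NULL, &nr_to_walk);
        }
        zswap_pool_put(pool);
        return shrunk ? 0 : -EAGAIN;
}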

>
> >
[..]
> > >  static void shrink_worker(struct work_struct *w)
> > > @@ -695,10 +754,13 @@ static void shrink_worker(struct work_struct *w)
> > >                                                 shrink_work);
> > >         int ret, failures = 0;
> > >
> > > +       /* global reclaim will select cgroup in a round-robin fashion. */
> > >         do {
> > > -               ret = zswap_reclaim_entry(pool);
> > > +               pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> >
> > Perhaps next_shrink_memcg is a better name here?
>
> Will change if you have a strong preference, I'd keep it shorter because it's
> always used in conjunction with a memcg type or function.

I'd rather have the more explicit name unless it causes some annoying
line breaks or so.

>
> >
> > > +
> > > +               ret = shrink_memcg(pool->next_shrink);
> > > +
> > >                 if (ret) {
> > > -                       zswap_reject_reclaim_fail++;
> > >                         if (ret != -EAGAIN)
> > >                                 break;
> > >                         if (++failures == MAX_RECLAIM_RETRIES)
> > > @@ -764,8 +826,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > >          */
> > >         kref_init(&pool->kref);
> > >         INIT_LIST_HEAD(&pool->list);
> > > -       INIT_LIST_HEAD(&pool->lru);
> > > -       spin_lock_init(&pool->lru_lock);
> > > +       list_lru_init_memcg(&pool->list_lru, NULL);
> > >         INIT_WORK(&pool->shrink_work, shrink_worker);
> > >
> > >         zswap_pool_debug("created", pool);
> > > @@ -831,6 +892,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> > >
> > >         cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> > >         free_percpu(pool->acomp_ctx);
> > > +       list_lru_destroy(&pool->list_lru);
> > > +       if (pool->next_shrink)
> > > +               mem_cgroup_put(pool->next_shrink);
> > >         for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > >                 zpool_destroy_pool(pool->zpools[i]);
> > >         kfree(pool);
> > > @@ -1076,7 +1140,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> > >
> > >         /* try to allocate swap cache page */
> > >         page = __read_swap_cache_async(swpentry, GFP_KERNEL, NULL, 0,
> > > -                                      &page_was_allocated);
> > > +                                      &page_was_allocated, true);
> > >         if (!page) {
> > >                 ret = -ENOMEM;
> > >                 goto fail;
> > > @@ -1142,7 +1206,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> > >         /* start writeback */
> > >         __swap_writepage(page, &wbc);
> > >         put_page(page);
> > > -       zswap_written_back_pages++;
> > >
> > >         return ret;
> > >
> > > @@ -1199,8 +1262,10 @@ bool zswap_store(struct folio *folio)
> > >         struct scatterlist input, output;
> > >         struct crypto_acomp_ctx *acomp_ctx;
> > >         struct obj_cgroup *objcg = NULL;
> > > +       struct mem_cgroup *memcg = NULL;
> > >         struct zswap_pool *pool;
> > >         struct zpool *zpool;
> > > +       int lru_alloc_ret;
> > >         unsigned int dlen = PAGE_SIZE;
> > >         unsigned long handle, value;
> > >         char *buf;
> > > @@ -1230,15 +1295,15 @@ bool zswap_store(struct folio *folio)
> > >                 zswap_invalidate_entry(tree, dupentry);
> > >         }
> > >         spin_unlock(&tree->lock);
> > > -
> > > -       /*
> > > -        * XXX: zswap reclaim does not work with cgroups yet. Without a
> > > -        * cgroup-aware entry LRU, we will push out entries system-wide based on
> > > -        * local cgroup limits.
> > > -        */
> > >         objcg = get_obj_cgroup_from_folio(folio);
> > > -       if (objcg && !obj_cgroup_may_zswap(objcg))
> > > -               goto reject;
> > > +       if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > > +               memcg = get_mem_cgroup_from_objcg(objcg);
> > > +               if (shrink_memcg(memcg)) {
> > > +                       mem_cgroup_put(memcg);
> > > +                       goto reject;
> > > +               }
> > > +               mem_cgroup_put(memcg);
> > > +       }
> > >
> > >         /* reclaim space if needed */
> > >         if (zswap_is_full()) {
> > > @@ -1254,10 +1319,15 @@ bool zswap_store(struct folio *folio)
> > >                         zswap_pool_reached_full = false;
> > >         }
> > >
> > > +       pool = zswap_pool_current_get();
> > > +       if (!pool)
> > > +               goto reject;
> > > +
> >
> > Why do we need to move zswap_pool_current_get() up here?
>
> Ah, thanks. This is a leftover from a previous version where the pool was needed
> to allocate the entry.
>
>
> >
> > >         /* allocate entry */
> > > -       entry = zswap_entry_cache_alloc(GFP_KERNEL);
> > > +       entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
> > >         if (!entry) {
> > >                 zswap_reject_kmemcache_fail++;
> > > +               zswap_pool_put(pool);
> > >                 goto reject;
> > >         }
> > >
> > > @@ -1269,6 +1339,7 @@ bool zswap_store(struct folio *folio)
> > >                         entry->length = 0;
> > >                         entry->value = value;
> > >                         atomic_inc(&zswap_same_filled_pages);
> > > +                       zswap_pool_put(pool);
> > >                         goto insert_entry;
> > >                 }
> > >                 kunmap_atomic(src);
> > > @@ -1278,9 +1349,15 @@ bool zswap_store(struct folio *folio)
> > >                 goto freepage;
> > >
> > >         /* if entry is successfully added, it keeps the reference */
> > > -       entry->pool = zswap_pool_current_get();
> > > -       if (!entry->pool)
> > > -               goto freepage;
> > > +       entry->pool = pool;
> > > +       if (objcg) {
> > > +               memcg = get_mem_cgroup_from_objcg(objcg);
> > > +               lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
> > > +               mem_cgroup_put(memcg);
> > > +
> > > +               if (lru_alloc_ret)
> > > +                       goto freepage;
> > > +       }
> > >
> > >         /* compress */
> > >         acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> > > @@ -1358,9 +1435,8 @@ bool zswap_store(struct folio *folio)
> > >                 zswap_invalidate_entry(tree, dupentry);
> > >         }
> > >         if (entry->length) {
> > > -               spin_lock(&entry->pool->lru_lock);
> > > -               list_add(&entry->lru, &entry->pool->lru);
> > > -               spin_unlock(&entry->pool->lru_lock);
> > > +               INIT_LIST_HEAD(&entry->lru);
> > > +               zswap_lru_add(&pool->list_lru, entry);
> > >         }
> > >         spin_unlock(&tree->lock);
> > >
> > > @@ -1373,8 +1449,8 @@ bool zswap_store(struct folio *folio)
> > >
> > >  put_dstmem:
> > >         mutex_unlock(acomp_ctx->mutex);
> > > -       zswap_pool_put(entry->pool);
> > >  freepage:
> > > +       zswap_pool_put(entry->pool);
> > >         zswap_entry_cache_free(entry);
> > >  reject:
> > >         if (objcg)
> > > @@ -1467,9 +1543,8 @@ bool zswap_load(struct folio *folio)
> > >                 zswap_invalidate_entry(tree, entry);
> > >                 folio_mark_dirty(folio);
> > >         } else if (entry->length) {
> > > -               spin_lock(&entry->pool->lru_lock);
> > > -               list_move(&entry->lru, &entry->pool->lru);
> > > -               spin_unlock(&entry->pool->lru_lock);
> > > +               zswap_lru_del(&entry->pool->list_lru, entry);
> > > +               zswap_lru_add(&entry->pool->list_lru, entry);
> > >         }
> > >         zswap_entry_put(tree, entry);
> > >         spin_unlock(&tree->lock);
> > > --
> > > 2.34.1

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 2/5] zswap: make shrinking memcg-aware
  2023-10-19 12:47         ` Domenico Cerasuolo
@ 2023-10-19 16:28           ` Yosry Ahmed
  0 siblings, 0 replies; 31+ messages in thread
From: Yosry Ahmed @ 2023-10-19 16:28 UTC (permalink / raw)
  To: Domenico Cerasuolo
  Cc: Nhat Pham, akpm, hannes, sjenning, ddstreet, vitaly.wool, mhocko,
	roman.gushchin, shakeelb, muchun.song, linux-mm, kernel-team,
	linux-kernel, cgroups, linux-doc, linux-kselftest, shuah

On Thu, Oct 19, 2023 at 5:47 AM Domenico Cerasuolo
<cerasuolodomenico@gmail.com> wrote:
>
> On Thu, Oct 19, 2023 at 3:12 AM Yosry Ahmed <yosryahmed@google.com> wrote:
> >
> > On Wed, Oct 18, 2023 at 4:47 PM Nhat Pham <nphamcs@gmail.com> wrote:
> > >
> > > On Wed, Oct 18, 2023 at 4:20 PM Yosry Ahmed <yosryahmed@google.com> wrote:
> > > >
> > > > On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
> > > > >
> > > > > From: Domenico Cerasuolo <cerasuolodomenico@gmail.com>
> > > > >
> > > > > Currently, we only have a single global LRU for zswap. This makes it
> > > > > impossible to perform workload-specific shrinking - a memcg cannot
> > > > > determine which pages in the pool it owns, and often ends up writing
> > > > > pages from other memcgs. This issue has been previously observed in
> > > > > practice and mitigated by simply disabling memcg-initiated shrinking:
> > > > >
> > > > > https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
> > > > >
> > > > > This patch fully resolves the issue by replacing the global zswap LRU
> > > > > with memcg- and NUMA-specific LRUs, and modify the reclaim logic:
> > > > >
> > > > > a) When a store attempt hits an memcg limit, it now triggers a
> > > > >    synchronous reclaim attempt that, if successful, allows the new
> > > > >    hotter page to be accepted by zswap.
> > > > > b) If the store attempt instead hits the global zswap limit, it will
> > > > >    trigger an asynchronous reclaim attempt, in which an memcg is
> > > > >    selected for reclaim in a round-robin-like fashion.
> > > >
> > > > Could you explain the rationale behind the difference in behavior here
> > > > between the global limit and the memcg limit?
> > >
> > > The global limit hit reclaim behavior was previously asynchronous too.
> > > We just added the round-robin part because now the zswap LRU is
> > > cgroup-aware :)
> > >
> > > For the cgroup limit hit, however, we cannot make it asynchronous,
> > > as it is a bit hairy to add a per-cgroup shrink_work. So, we just
> > > perform the reclaim synchronously.
> > >
> > > The question is whether it makes sense to make the global limit
> > > reclaim synchronous too. That is a task of its own IMO.
> >
> > Let's add such context to the commit log, and perhaps an XXX comment
> > in the code asking whether we should consider doing the reclaim
> > synchronously for the global limit too.
>
> Makes sense, I wonder if the original reason for switching from a synchronous
> to asynchronous reclaim will still be valid with the shrinker in place.
>

Seems like it was done as part of the hysteresis that stops accepting
pages into zswap once it is full, until the pool shrinks back below a
certain threshold; see commit 45190f01dd40 ("mm/zswap.c: add allocation
hysteresis if pool limit is hit"). I guess the point is that zswap will
stop accepting pages once the limit is hit anyway, so there is no need
to shrink it synchronously, since we can't store again soon anyway.
More useful context for the commit log.
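
For reference, roughly what that hysteresis looks like in zswap_store()
(paraphrased from the code around that commit, not a verbatim quote):
once the limit is hit, stores keep being rejected until the pool falls
back below the acceptance threshold, so a synchronous shrink on the
global limit would not let the current store in anyway.

	/* reclaim space if needed */
	if (zswap_is_full()) {
		zswap_pool_limit_hit++;
		zswap_pool_reached_full = true;
		goto shrink;	/* reject and queue the async shrink_work */
	}

	if (zswap_pool_reached_full) {
		if (!zswap_can_accept())
			goto shrink;	/* keep rejecting until below the accept threshold */
		else
			zswap_pool_reached_full = false;
	}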

> >
> > >
> > > (FWIW, this somewhat mirrors the direct reclaimer v.s kswapd
> > > story to me, but don't quote me too hard on this).
> > >
> > [..]
> > >
> > > >
> > > > >         /* Hold a reference to prevent a free during writeback */
> > > > >         zswap_entry_get(entry);
> > > > >         spin_unlock(&tree->lock);
> > > > >
> > > > > -       ret = zswap_writeback_entry(entry, tree);
> > > > > +       writeback_result = zswap_writeback_entry(entry, tree);
> > > > >
> > > > >         spin_lock(&tree->lock);
> > > > > -       if (ret) {
> > > > > -               /* Writeback failed, put entry back on LRU */
> > > > > -               spin_lock(&pool->lru_lock);
> > > > > -               list_move(&entry->lru, &pool->lru);
> > > > > -               spin_unlock(&pool->lru_lock);
> > > > > +       if (writeback_result) {
> > > > > +               zswap_reject_reclaim_fail++;
> > > > > +               memcg = get_mem_cgroup_from_entry(entry);
> > > > > +               spin_lock(lock);
> > > > > +               /* we cannot use zswap_lru_add here, because it increments node's lru count */
> > > > > +               list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
> > > > > +               spin_unlock(lock);
> > > > > +               mem_cgroup_put(memcg);
> > > > > +               ret = LRU_RETRY;
> > > > >                 goto put_unlock;
> > > > >         }
> > > > > +       zswap_written_back_pages++;
> > > >
> > > > Why is this moved here from zswap_writeback_entry()? Also why is
> > > > zswap_reject_reclaim_fail incremented here instead of inside
> > > > zswap_writeback_entry()?
> > >
> > > Domenico should know this better than me, but my understanding
> > > is that moving it here protects concurrent modifications of
> > > zswap_written_back_pages with the tree lock.
> > >
> > > Is writeback single-threaded in the past? This counter is non-atomic,
> > > and doesn't seem to be protected by any locks...
> > >
> > > There definitely can be concurrent stores now though - with
> > > a synchronous reclaim from cgroup-limit hit and another
> > > from the old shrink worker.
> > >
> > > (and with the new zswap shrinker, concurrent reclaim is
> > > the expectation!)
> >
> > The comment above the stats definition stats that they are left
> > unprotected purposefully. If we want to fix that let's do it
> > separately. If this patch makes it significantly worse such that it
> > would cause a regression, let's at least do it in a separate patch.
> > The diff here is too large already.
> >
> > >
> > > zswap_reject_reclaim_fail was previously incremented in
> > > shrink_worker I think. We need it to be incremented
> > > for the shrinker as well, so might as well move it here.
> >
> > Wouldn't moving it inside zswap_writeback_entry() near incrementing
> > zswap_written_back_pages make it easier to follow?
>
> As Nhat said, zswap_reject_reclaim_fail++ had to be moved, I naturally moved it
> here because it's where we act upon the result of the writeback. I then noticed
> that zswap_written_back_pages++ was elsewhere and decided to move that as well
> so that they're in the same place and at least they're under the tree lock.
>
> It's not meant to fix the unprotected counters, it's just a mitigation since we
> are forced to move at least one of them.

I see. Just for my own understanding: it would be correct to update
them in zswap_writeback_entry(), but we choose to do it here because we
already hold the tree lock, so we get the synchronization for free? I
think that could be beneficial, as we may see increased concurrent
writeback with this series. Probably something to call out in the
commit log as well.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback
  2023-10-17 23:21 [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback Nhat Pham
                   ` (4 preceding siblings ...)
  2023-10-17 23:21 ` [PATCH v3 5/5] zswap: shrinks zswap pool based on memory pressure Nhat Pham
@ 2023-10-19 17:12 ` Andrew Morton
  2023-10-19 17:33   ` Yosry Ahmed
  5 siblings, 1 reply; 31+ messages in thread
From: Andrew Morton @ 2023-10-19 17:12 UTC (permalink / raw)
  To: Nhat Pham
  Cc: hannes, cerasuolodomenico, yosryahmed, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah, Hugh Dickins

On Tue, 17 Oct 2023 16:21:47 -0700 Nhat Pham <nphamcs@gmail.com> wrote:

> Subject: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback

We're at -rc6 and I'd prefer to drop this series from mm.git, have
another go during the next cycle.

However Hugh's v2 series "mempolicy: cleanups leading to NUMA mpol
without vma" has syntactic dependencies on this series and will need
rework, so I'd like to make that decision soon.

Do we feel that this series can be made into a mergeable state within
the next few days?

Thanks.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback
  2023-10-19 17:12 ` [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback Andrew Morton
@ 2023-10-19 17:33   ` Yosry Ahmed
  2023-10-19 18:31     ` Nhat Pham
  0 siblings, 1 reply; 31+ messages in thread
From: Yosry Ahmed @ 2023-10-19 17:33 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Nhat Pham, hannes, cerasuolodomenico, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah, Hugh Dickins

On Thu, Oct 19, 2023 at 10:12 AM Andrew Morton
<akpm@linux-foundation.org> wrote:
>
> On Tue, 17 Oct 2023 16:21:47 -0700 Nhat Pham <nphamcs@gmail.com> wrote:
>
> > Subject: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback
>
> We're at -rc6 and I'd prefer to drop this series from mm.git, have
> another go during the next cycle.
>
> However Hugh's v2 series "mempolicy: cleanups leading to NUMA mpol
> without vma" has syntactic dependencies on this series and will need
> rework, so I'd like to make that decision soon.
>
> Do we feel that this series can be made into a mergeable state within
> the next few days?

There are parts of the code that I would feel more comfortable with if
someone else took a look at them (which I mentioned in the individual
patches). So unless that happens in the next few days, I wouldn't say
so.

>
> Thanks.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback
  2023-10-19 17:33   ` Yosry Ahmed
@ 2023-10-19 18:31     ` Nhat Pham
  2023-10-19 18:36       ` Andrew Morton
  0 siblings, 1 reply; 31+ messages in thread
From: Nhat Pham @ 2023-10-19 18:31 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, hannes, cerasuolodomenico, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah, Hugh Dickins

On Thu, Oct 19, 2023 at 10:34 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Thu, Oct 19, 2023 at 10:12 AM Andrew Morton
> <akpm@linux-foundation.org> wrote:
> >
> > On Tue, 17 Oct 2023 16:21:47 -0700 Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > > Subject: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback
> >
> > We're at -rc6 and I'd prefer to drop this series from mm.git, have
> > another go during the next cycle.
> >
> > However Hugh's v2 series "mempolicy: cleanups leading to NUMA mpol
> > without vma" has syntactic dependencies on this series and will need
> > rework, so I'd like to make that decision soon.
> >
> > Do we feel that this series can be made into a mergeable state within
> > the next few days?
>
> There are parts of the code that I would feel more comfortable if
> someone took a look at (which I mentioned in individual patches). So
> unless this happens in the next few days I wouldn't say so.
>

I'm not super familiar with the other series. How big is the dependency?
Looks like it's just a small part of the swapcache code, right?

If this is the case, I feel like the best course of action is to rebase
the mempolicy patch series on top of mm-unstable, and resolve
this merge conflict. I will then send out v4 of the zswap shrinker,
rebased on top of the mempolicy patch series.

If this is not the case, one thing we can do is:

a) Fix bugs (there's one reported by the kernel test robot, it seems)
b) Fix user-visible details (the writeback counter, for example)

and just merge the series for now. FWIW, this is an optional
feature and is disabled by default, so performance optimizations
and aesthetic changes (the list_lru_add() renaming, etc.) can wait.

We can push out v4 by the end of today or early tomorrow
if all goes well. Then everyone can review and comment on it.

How does everyone feel about this strategy?

> >
> > Thanks.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback
  2023-10-19 18:31     ` Nhat Pham
@ 2023-10-19 18:36       ` Andrew Morton
  2023-10-19 19:23         ` Hugh Dickins
  0 siblings, 1 reply; 31+ messages in thread
From: Andrew Morton @ 2023-10-19 18:36 UTC (permalink / raw)
  To: Nhat Pham
  Cc: Yosry Ahmed, hannes, cerasuolodomenico, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah, Hugh Dickins

On Thu, 19 Oct 2023 11:31:17 -0700 Nhat Pham <nphamcs@gmail.com> wrote:

> > There are parts of the code that I would feel more comfortable if
> > someone took a look at (which I mentioned in individual patches). So
> > unless this happens in the next few days I wouldn't say so.
> >
> 
> I'm not super familiar with the other series. How big is the dependency?
> Looks like it's just a small part in the swapcache code right?
> 
> If this is the case, I feel like the best course of action is to rebase
> the mempolicy patch series on top of mm-unstable, and resolve
> this merge conflict.

OK, thanks.

Hugh, do you have time to look at rebasing on the mm-stable which I
pushed out 15 minutes ago?

> I will then send out v4 of the zswap shrinker,
> rebased on top of the mempolicy patch series.
> 
> If this is not the case, one thing we can do is:
> 
> a) Fix bugs (there's one kernel test robot it seems)
> b) Fix user-visible details (writeback counter for e.g)
> 
> and just merge the series for now. FWIW, this is an optional
> feature and disabled by default. So performance optimization
> and aesthetics change (list_lru_add() renaming etc.) can wait.
> 
> We can push out v4 by the end of today and early tomorrow
> if all goes well. Then everyone can review and comment on it.
> 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback
  2023-10-19 18:36       ` Andrew Morton
@ 2023-10-19 19:23         ` Hugh Dickins
  0 siblings, 0 replies; 31+ messages in thread
From: Hugh Dickins @ 2023-10-19 19:23 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Nhat Pham, Yosry Ahmed, hannes, cerasuolodomenico, sjenning,
	ddstreet, vitaly.wool, mhocko, roman.gushchin, shakeelb,
	muchun.song, linux-mm, kernel-team, linux-kernel, cgroups,
	linux-doc, linux-kselftest, shuah, Hugh Dickins

On Thu, 19 Oct 2023, Andrew Morton wrote:
> On Thu, 19 Oct 2023 11:31:17 -0700 Nhat Pham <nphamcs@gmail.com> wrote:
> 
> > > There are parts of the code that I would feel more comfortable if
> > > someone took a look at (which I mentioned in individual patches). So
> > > unless this happens in the next few days I wouldn't say so.
> > >
> > 
> > I'm not super familiar with the other series. How big is the dependency?
> > Looks like it's just a small part in the swapcache code right?
> > 
> > If this is the case, I feel like the best course of action is to rebase
> > the mempolicy patch series on top of mm-unstable, and resolve
> > this merge conflict.
> 
> OK, thanks.
> 
> Hugh, do you have time to look at rebasing on the mm-stable which I
> pushed out 15 minutes ago?

Okay, I'm on it - but (unless you insist otherwise) it's only a v3 of
the 10/12 "mempolicy: alloc_pages_mpol() for NUMA policy without vma"
that I'm expecting to send you - the rest should just cherry-pick in
cleanly.  I'll check that of course, but I'm afraid of losing details
(e.g. any Acks you've meanwhile added) if I resend the lot.

Hugh

> 
> > I will then send out v4 of the zswap shrinker,
> > rebased on top of the mempolicy patch series.
> > 
> > If this is not the case, one thing we can do is:
> > 
> > a) Fix bugs (there's one kernel test robot it seems)
> > b) Fix user-visible details (writeback counter for e.g)
> > 
> > and just merge the series for now. FWIW, this is an optional
> > feature and disabled by default. So performance optimization
> > and aesthetics change (list_lru_add() renaming etc.) can wait.
> > 
> > We can push out v4 by the end of today and early tomorrow
> > if all goes well. Then everyone can review and comment on it.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 5/5] zswap: shrinks zswap pool based on memory pressure
  2023-10-18 23:36   ` Yosry Ahmed
@ 2023-10-20 19:14     ` Nhat Pham
  0 siblings, 0 replies; 31+ messages in thread
From: Nhat Pham @ 2023-10-20 19:14 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: akpm, hannes, cerasuolodomenico, sjenning, ddstreet, vitaly.wool,
	mhocko, roman.gushchin, shakeelb, muchun.song, linux-mm,
	kernel-team, linux-kernel, cgroups, linux-doc, linux-kselftest,
	shuah

On Wed, Oct 18, 2023 at 4:36 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Tue, Oct 17, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
> >
> > Currently, we only shrink the zswap pool when the user-defined limit is
> > hit. This means that if we set the limit too high, cold data that are
> > unlikely to be used again will reside in the pool, wasting precious
> > memory. It is hard to predict how much zswap space will be needed ahead
> > of time, as this depends on the workload (specifically, on factors such
> > as memory access patterns and compressibility of the memory pages).
> >
> > This patch implements a memcg- and NUMA-aware shrinker for zswap, that
> > is initiated when there is memory pressure. The shrinker does not
> > have any parameter that must be tuned by the user, and can be opted in
> > or out on a per-memcg basis.
> >
> > Furthermore, to make it more robust for many workloads and prevent
> > overshrinking (i.e evicting warm pages that might be refaulted into
> > memory), we build in the following heuristics:
> >
> > * Estimate the number of warm pages residing in zswap, and attempt to
> >   protect this region of the zswap LRU.
> > * Scale the number of freeable objects by an estimate of the memory
> >   saving factor. The better zswap compresses the data, the fewer pages
> >   we will evict to swap (as we will otherwise incur IO for relatively
> >   small memory saving).
> > * During reclaim, if the shrinker encounters a page that is also being
> >   brought into memory, the shrinker will cautiously terminate its
> >   shrinking action, as this is a sign that it is touching the warmer
> >   region of the zswap LRU.
>
> I really hope someone with more familiarity with reclaim heuristics
> makes sure this makes sense.
>
> >
> > As a proof of concept, we ran the following synthetic benchmark:
> > build the linux kernel in a memory-limited cgroup, and allocate some
> > cold data in tmpfs to see if the shrinker could write them out and
> > improved the overall performance. Depending on the amount of cold data
> > generated, we observe from 14% to 35% reduction in kernel CPU time used
> > in the kernel builds.
> >
> > Signed-off-by: Nhat Pham <nphamcs@gmail.com>
> > ---
> >  Documentation/admin-guide/mm/zswap.rst |   7 ++
> >  include/linux/mmzone.h                 |  14 +++
> >  mm/mmzone.c                            |   3 +
> >  mm/swap_state.c                        |  21 +++-
> >  mm/zswap.c                             | 161 +++++++++++++++++++++++--
> >  5 files changed, 196 insertions(+), 10 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
> > index 45b98390e938..522ae22ccb84 100644
> > --- a/Documentation/admin-guide/mm/zswap.rst
> > +++ b/Documentation/admin-guide/mm/zswap.rst
> > @@ -153,6 +153,13 @@ attribute, e. g.::
> >
> >  Setting this parameter to 100 will disable the hysteresis.
> >
> > +When there is a sizable amount of cold memory residing in the zswap pool, it
> > +can be advantageous to proactively write these cold pages to swap and reclaim
> > +the memory for other use cases. By default, the zswap shrinker is disabled.
> > +User can enable it as follows:
> > +
> > +  echo Y > /sys/module/zswap/parameters/shrinker_enabled
> > +
> >  A debugfs interface is provided for various statistic about pool size, number
> >  of pages stored, same-value filled pages and various counters for the reasons
> >  pages are rejected.
> > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > index 486587fcd27f..8947a1bfbe9c 100644
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -637,6 +637,20 @@ struct lruvec {
> >  #ifdef CONFIG_MEMCG
> >         struct pglist_data *pgdat;
> >  #endif
> > +#ifdef CONFIG_ZSWAP
> > +       /*
> > +        * Number of pages in zswap that should be protected from the shrinker.
> > +        * This number is an estimate of the following counts:
> > +        *
> > +        * a) Recent page faults.
> > +        * b) Recent insertion to the zswap LRU. This includes new zswap stores,
> > +        *    as well as recent zswap LRU rotations.
> > +        *
> > +        * These pages are likely to be warm, and might incur IO if the are written
> > +        * to swap.
> > +        */
> > +       atomic_long_t nr_zswap_protected;
> > +#endif
>
> Instead of the ifdef's all over the code, we can define
> nr_zswap_protected inside a struct and helpers that
> increment/initialize nr_zswap_protected in zswap.h, and have a single
> ifdef there. All the code would be oblivious to the existence of
> nr_zswap_protected.
>
> Something like:
>
> #ifdef CONFIG_ZSWAP
>
> struct zswap_lruvec_state {
>          /* insert large comment */
>         atomic_long_t nr_zswap_protected;
> };
>
> static inline void zswap_lruvec_init(..)
> {
>         atomic_long_set(&lruvec->nr_zswap_protected, 0);
> }
>
> static inline void zswap_lruvec_swapin(..)
> {
>        if (page) {
>                struct lruvec *lruvec = folio_lruvec(page_folio(page));
>
>                atomic_long_inc(&lruvec->nr_zswap_protected);
>        }
> }
>
> #else
>
> /* empty struct and functions
>
> #endif
>
> >  };
> >
> >  /* Isolate for asynchronous migration */
> > diff --git a/mm/mmzone.c b/mm/mmzone.c
> > index 68e1511be12d..4137f3ac42cd 100644
> > --- a/mm/mmzone.c
> > +++ b/mm/mmzone.c
> > @@ -78,6 +78,9 @@ void lruvec_init(struct lruvec *lruvec)
> >
> >         memset(lruvec, 0, sizeof(struct lruvec));
> >         spin_lock_init(&lruvec->lru_lock);
> > +#ifdef CONFIG_ZSWAP
> > +       atomic_long_set(&lruvec->nr_zswap_protected, 0);
> > +#endif
> >
> >         for_each_lru(lru)
> >                 INIT_LIST_HEAD(&lruvec->lists[lru]);
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index 0356df52b06a..a60197b55a28 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -676,7 +676,15 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> >         lru_add_drain();        /* Push any new pages onto the LRU now */
> >  skip:
> >         /* The page was likely read above, so no need for plugging here */
> > -       return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > +       page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> > +#ifdef CONFIG_ZSWAP
> > +       if (page) {
> > +               struct lruvec *lruvec = folio_lruvec(page_folio(page));
> > +
> > +               atomic_long_inc(&lruvec->nr_zswap_protected);
> > +       }
> > +#endif
> > +       return page;
> >  }
> >
> >  int init_swap_address_space(unsigned int type, unsigned long nr_pages)
> > @@ -843,8 +851,15 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> >         lru_add_drain();
> >  skip:
> >         /* The page was likely read above, so no need for plugging here */
> > -       return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
> > -                                    NULL);
> > +       page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
> > +#ifdef CONFIG_ZSWAP
> > +       if (page) {
> > +               struct lruvec *lruvec = folio_lruvec(page_folio(page));
> > +
> > +               atomic_long_inc(&lruvec->nr_zswap_protected);
> > +       }
> > +#endif
> > +       return page;
> >  }
> >
> >  /**
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index 15485427e3fa..1d1fe75a5237 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -145,6 +145,10 @@ module_param_named(exclusive_loads, zswap_exclusive_loads_enabled, bool, 0644);
> >  /* Number of zpools in zswap_pool (empirically determined for scalability) */
> >  #define ZSWAP_NR_ZPOOLS 32
> >
> > +/* Enable/disable memory pressure-based shrinker. */
> > +static bool zswap_shrinker_enabled;
> > +module_param_named(shrinker_enabled, zswap_shrinker_enabled, bool, 0644);
> > +
> >  /*********************************
> >  * data structures
> >  **********************************/
> > @@ -174,6 +178,8 @@ struct zswap_pool {
> >         char tfm_name[CRYPTO_MAX_ALG_NAME];
> >         struct list_lru list_lru;
> >         struct mem_cgroup *next_shrink;
> > +       struct shrinker *shrinker;
> > +       atomic_t nr_stored;
> >  };
> >
> >  /*
> > @@ -272,17 +278,26 @@ static bool zswap_can_accept(void)
> >                         DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE);
> >  }
> >
> > +static u64 get_zswap_pool_size(struct zswap_pool *pool)
> > +{
> > +       u64 pool_size = 0;
> > +       int i;
> > +
> > +       for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > +               pool_size += zpool_get_total_size(pool->zpools[i]);
> > +
> > +       return pool_size;
> > +}
> > +
> >  static void zswap_update_total_size(void)
> >  {
> >         struct zswap_pool *pool;
> >         u64 total = 0;
> > -       int i;
> >
> >         rcu_read_lock();
> >
> >         list_for_each_entry_rcu(pool, &zswap_pools, list)
> > -               for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > -                       total += zpool_get_total_size(pool->zpools[i]);
> > +               total += get_zswap_pool_size(pool);
> >
> >         rcu_read_unlock();
> >
> > @@ -326,8 +341,24 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> >  static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> >  {
> >         struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> > -       bool added = __list_lru_add(list_lru, &entry->lru, entry_to_nid(entry), memcg);
> > -
> > +       int nid = entry_to_nid(entry);
> > +       struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
> > +       bool added = __list_lru_add(list_lru, &entry->lru, nid, memcg);
> > +       unsigned long lru_size, old, new;
> > +
> > +       if (added) {
> > +               lru_size = list_lru_count_one(list_lru, entry_to_nid(entry), memcg);
> > +               old = atomic_long_inc_return(&lruvec->nr_zswap_protected);
> > +
> > +               /*
> > +                * Decay to avoid overflow and adapt to changing workloads.
> > +                * This is based on LRU reclaim cost decaying heuristics.
> > +                */
> > +               do {
> > +                       new = old > lru_size / 4 ? old / 2 : old;
> > +               } while (
> > +                       !atomic_long_try_cmpxchg(&lruvec->nr_zswap_protected, &old, new));
> > +       }
> >         mem_cgroup_put(memcg);
> >         return added;
> >  }
> > @@ -427,6 +458,7 @@ static void zswap_free_entry(struct zswap_entry *entry)
> >         else {
> >                 zswap_lru_del(&entry->pool->list_lru, entry);
> >                 zpool_free(zswap_find_zpool(entry), entry->handle);
> > +               atomic_dec(&entry->pool->nr_stored);
> >                 zswap_pool_put(entry->pool);
> >         }
> >         zswap_entry_cache_free(entry);
> > @@ -468,6 +500,93 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> >         return entry;
> >  }
> >
> > +/*********************************
> > +* shrinker functions
> > +**********************************/
> > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > +                                      spinlock_t *lock, void *arg);
> > +
> > +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> > +               struct shrink_control *sc)
> > +{
> > +       struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> > +       unsigned long shrink_ret, nr_protected, lru_size;
> > +       struct zswap_pool *pool = shrinker->private_data;
> > +       bool encountered_page_in_swapcache = false;
> > +
> > +       nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
> > +       lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> > +
> > +       /*
> > +        * Abort if the shrinker is disabled or if we are shrinking into the
> > +        * protected region.
> > +        */
> > +       if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
> > +               sc->nr_scanned = 0;
> > +               return SHRINK_STOP;
> > +       }
> > +
> > +       shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> > +               &encountered_page_in_swapcache);
> > +
> > +       if (encountered_page_in_swapcache)
> > +               return SHRINK_STOP;
> > +
> > +       return shrink_ret ? shrink_ret : SHRINK_STOP;
> > +}
> > +
> > +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> > +               struct shrink_control *sc)
> > +{
> > +       struct zswap_pool *pool = shrinker->private_data;
> > +       struct mem_cgroup *memcg = sc->memcg;
> > +       struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> > +       unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
> > +
> > +#ifdef CONFIG_MEMCG_KMEM
> > +       cgroup_rstat_flush(memcg->css.cgroup);
> > +       nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> > +       nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> > +#else
> > +       /* use pool stats instead of memcg stats */
> > +       nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> > +       nr_stored = atomic_read(&pool->nr_stored);
> > +#endif
> > +
> > +       if (!zswap_shrinker_enabled || !nr_stored)
> > +               return 0;
> > +
> > +       nr_protected = atomic_long_read(&lruvec->nr_zswap_protected);
> > +       nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> > +       /*
> > +        * Subtract the lru size by an estimate of the number of pages
> > +        * that should be protected.
> > +        */
> > +       nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
> > +
> > +       /*
> > +        * Scale the number of freeable pages by the memory saving factor.
> > +        * This ensures that the better zswap compresses memory, the fewer
> > +        * pages we will evict to swap (as it will otherwise incur IO for
> > +        * relatively small memory saving).
> > +        */
> > +       return mult_frac(nr_freeable, nr_backing, nr_stored);
> > +}
> > +
> > +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> > +{
> > +       pool->shrinker =
> > +               shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> > +       if (!pool->shrinker)
> > +               return;
> > +
> > +       pool->shrinker->private_data = pool;
> > +       pool->shrinker->scan_objects = zswap_shrinker_scan;
> > +       pool->shrinker->count_objects = zswap_shrinker_count;
> > +       pool->shrinker->batch = 0;
> > +       pool->shrinker->seeks = DEFAULT_SEEKS;
> > +}
> > +
> >  /*********************************
> >  * per-cpu code
> >  **********************************/
> > @@ -663,8 +782,10 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> >                                        spinlock_t *lock, void *arg)
> >  {
> >         struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> > +       bool *encountered_page_in_swapcache = (bool *)arg;
> >         struct mem_cgroup *memcg;
> >         struct zswap_tree *tree;
> > +       struct lruvec *lruvec;
> >         pgoff_t swpoffset;
> >         enum lru_status ret = LRU_REMOVED_RETRY;
> >         int writeback_result;
> > @@ -698,8 +819,22 @@ static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_o
> >                 /* we cannot use zswap_lru_add here, because it increments node's lru count */
> >                 list_lru_putback(&entry->pool->list_lru, item, entry_to_nid(entry), memcg);
> >                 spin_unlock(lock);
> > -               mem_cgroup_put(memcg);
> >                 ret = LRU_RETRY;
> > +
> > +               /*
> > +                * Encountering a page already in swap cache is a sign that we are shrinking
> > +                * into the warmer region. We should terminate shrinking (if we're in the dynamic
> > +                * shrinker context).
> > +                */
> > +               if (writeback_result == -EEXIST && encountered_page_in_swapcache) {
> > +                       ret = LRU_SKIP;
> > +                       *encountered_page_in_swapcache = true;
> > +               }
> > +               lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(entry_to_nid(entry)));
> > +               /* Increment the protection area to account for the LRU rotation. */
> > +               atomic_long_inc(&lruvec->nr_zswap_protected);
> > +
> > +               mem_cgroup_put(memcg);
> >                 goto put_unlock;
> >         }
> >         zswap_written_back_pages++;
> > @@ -822,6 +957,11 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> >                                        &pool->node);
> >         if (ret)
> >                 goto error;
> > +
> > +       zswap_alloc_shrinker(pool);
> > +       if (!pool->shrinker)
> > +               goto error;
> > +
> >         pr_debug("using %s compressor\n", pool->tfm_name);
> >
> >         /* being the current pool takes 1 ref; this func expects the
> > @@ -829,13 +969,18 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> >          */
> >         kref_init(&pool->kref);
> >         INIT_LIST_HEAD(&pool->list);
> > -       list_lru_init_memcg(&pool->list_lru, NULL);
> > +       if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
> > +               goto lru_fail;
> > +       shrinker_register(pool->shrinker);
> >         INIT_WORK(&pool->shrink_work, shrink_worker);
> >
> >         zswap_pool_debug("created", pool);
> >
> >         return pool;
> >
> > +lru_fail:
> > +       list_lru_destroy(&pool->list_lru);
> > +       shrinker_free(pool->shrinker);
> >  error:
> >         if (pool->acomp_ctx)
> >                 free_percpu(pool->acomp_ctx);
> > @@ -893,6 +1038,7 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> >
> >         zswap_pool_debug("destroying", pool);
> >
> > +       shrinker_free(pool->shrinker);
> >         cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> >         free_percpu(pool->acomp_ctx);
> >         list_lru_destroy(&pool->list_lru);
> > @@ -1440,6 +1586,7 @@ bool zswap_store(struct folio *folio)
> >         if (entry->length) {
> >                 INIT_LIST_HEAD(&entry->lru);
> >                 zswap_lru_add(&pool->list_lru, entry);
> > +               atomic_inc(&pool->nr_stored);
> >         }
> >         spin_unlock(&tree->lock);
> >
> > --
> > 2.34.1

I like this. And FWIW, if we have more state to store (e.g. if the
shrinker heuristics need to change), we just need to update things
in zswap.h.

Will be present in v4 of the patch series. Thanks for the suggestion,
Yosry!
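
To make that concrete, a minimal sketch of what such wrappers in
zswap.h could look like (names, and the zswap_lruvec_state field
embedded in struct lruvec, are illustrative and pending v4; not the
final code):

#ifdef CONFIG_ZSWAP
struct zswap_lruvec_state {
	/*
	 * Estimate of recently faulted-in or rotated zswap pages, i.e. the
	 * warm region the shrinker should leave alone.
	 */
	atomic_long_t nr_zswap_protected;
};

static inline void zswap_lruvec_state_init(struct lruvec *lruvec)
{
	atomic_long_set(&lruvec->zswap_lruvec_state.nr_zswap_protected, 0);
}

static inline void zswap_lruvec_swapin(struct page *page)
{
	struct lruvec *lruvec;

	if (page) {
		lruvec = folio_lruvec(page_folio(page));
		atomic_long_inc(&lruvec->zswap_lruvec_state.nr_zswap_protected);
	}
}
#else
struct zswap_lruvec_state {};
static inline void zswap_lruvec_state_init(struct lruvec *lruvec) {}
static inline void zswap_lruvec_swapin(struct page *page) {}
#endif

With that, lruvec_init() and the swapin paths would just call
zswap_lruvec_state_init() and zswap_lruvec_swapin() with no #ifdef at
the call sites.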

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v3 2/5] zswap: make shrinking memcg-aware
  2023-10-19 16:14       ` Yosry Ahmed
@ 2023-10-20 19:58         ` Nhat Pham
  0 siblings, 0 replies; 31+ messages in thread
From: Nhat Pham @ 2023-10-20 19:58 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Domenico Cerasuolo, akpm, hannes, sjenning, ddstreet,
	vitaly.wool, mhocko, roman.gushchin, shakeelb, muchun.song,
	linux-mm, kernel-team, linux-kernel, cgroups, linux-doc,
	linux-kselftest, shuah

On Thu, Oct 19, 2023 at 9:15 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> [..]
> > > >
> > > > +/*********************************
> > > > +* lru functions
> > > > +**********************************/
> > > > +static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
> > > > +{
> > > > +       struct mem_cgroup *memcg = get_mem_cgroup_from_entry(entry);
> > >
> > > Could we avoid the need for get/put with an rcu_read_lock() instead?
> >
> > I think we can, I'm not entirely sure of the consequences though. By the
> > look of it I'd say it's safe but I wouldn't trust my judgement on this.
>
> It just seems like we have a pattern of short-lived get/put. If RCU
> gives enough protection it should be simpler. IIUC taking a reference
> does not protect against offlining or reparenting, so I am not sure if
> taking a reference here would provide any more protection than
>

I'd keep it for now. Sounds like an optimization to me, which could
always be done as a follow-up :)

Unless, of course, somebody has a really strong opinion regarding
this.
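
For the record, a rough sketch of the RCU variant being discussed
(purely illustrative: mem_cgroup_from_entry() stands in for a
hypothetical non-refcounting lookup, and whether rcu_read_lock() alone
gives enough protection is exactly the open question):

static bool zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
{
	int nid = entry_to_nid(entry);
	struct mem_cgroup *memcg;
	bool added;

	rcu_read_lock();
	/* hypothetical helper: look up the entry's memcg without a reference */
	memcg = mem_cgroup_from_entry(entry);
	added = __list_lru_add(list_lru, &entry->lru, nid, memcg);
	rcu_read_unlock();

	return added;
}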

> >
> > >
> [..]
> > > > @@ -686,7 +716,36 @@ static int zswap_reclaim_entry(struct zswap_pool *pool)
> > > >         zswap_entry_put(tree, entry);
> > > >  unlock:
> > > >         spin_unlock(&tree->lock);
> > > > -       return ret ? -EAGAIN : 0;
> > > > +       spin_lock(lock);
> > > > +       return ret;
> > > > +}
> > > > +
> > > > +static int shrink_memcg(struct mem_cgroup *memcg)
> > > > +{
> > > > +       struct zswap_pool *pool;
> > > > +       int nid, shrunk = 0;
> > > > +
> > > > +       pool = zswap_pool_current_get();
> > > > +       if (!pool)
> > > > +               return -EINVAL;
> > > > +
> > > > +       /*
> > > > +        * Skip zombies because their LRUs are reparented and we would be
> > > > +        * reclaiming from the parent instead of the dead memcgroup.
> > >
> > > nit: s/memcgroup/memcg.
> > >
> > > > +        */
> > > > +       if (memcg && !mem_cgroup_online(memcg))
> > > > +               goto out;
> > >
> > > If we move this above zswap_pool_current_get(), we can return directly
> > > and remove the label. I noticed we will return -EAGAIN if memcg is
> > > offline. IIUC -EAGAIN for the caller will move on to the next memcg,
> > > but I am wondering if a different errno would be clearer here.
> >
> > True, I remember spending some time staring at error codes but couldn't find a
> > better one. What if we use -EINVAL for retryable errors, and use something else
> > for the one where there is no pool? -ENODEV?
>
> Do you mean -EINVAL for non-retryable errors? Perhaps -ENOENT is more
> appropriate as a return for offline memcgs?
>
> >
> > >
> [..]
> > > >  static void shrink_worker(struct work_struct *w)
> > > > @@ -695,10 +754,13 @@ static void shrink_worker(struct work_struct *w)
> > > >                                                 shrink_work);
> > > >         int ret, failures = 0;
> > > >
> > > > +       /* global reclaim will select cgroup in a round-robin fashion. */
> > > >         do {
> > > > -               ret = zswap_reclaim_entry(pool);
> > > > +               pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> > >
> > > Perhaps next_shrink_memcg is a better name here?
> >
> > Will change if you have a strong preference, I'd keep it shorter because it's
> > always used in conjunction with a memcg type or function.
>
> I'd rather have the more explicit name unless it causes some annoying
> line breaks or so.
>
> >
> > >
> > > > +
> > > > +               ret = shrink_memcg(pool->next_shrink);
> > > > +
> > > >                 if (ret) {
> > > > -                       zswap_reject_reclaim_fail++;
> > > >                         if (ret != -EAGAIN)
> > > >                                 break;
> > > >                         if (++failures == MAX_RECLAIM_RETRIES)
> > > > @@ -764,8 +826,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> > > >          */
> > > >         kref_init(&pool->kref);
> > > >         INIT_LIST_HEAD(&pool->list);
> > > > -       INIT_LIST_HEAD(&pool->lru);
> > > > -       spin_lock_init(&pool->lru_lock);
> > > > +       list_lru_init_memcg(&pool->list_lru, NULL);
> > > >         INIT_WORK(&pool->shrink_work, shrink_worker);
> > > >
> > > >         zswap_pool_debug("created", pool);
> > > > @@ -831,6 +892,9 @@ static void zswap_pool_destroy(struct zswap_pool *pool)
> > > >
> > > >         cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
> > > >         free_percpu(pool->acomp_ctx);
> > > > +       list_lru_destroy(&pool->list_lru);
> > > > +       if (pool->next_shrink)
> > > > +               mem_cgroup_put(pool->next_shrink);
> > > >         for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
> > > >                 zpool_destroy_pool(pool->zpools[i]);
> > > >         kfree(pool);
> > > > @@ -1076,7 +1140,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> > > >
> > > >         /* try to allocate swap cache page */
> > > >         page = __read_swap_cache_async(swpentry, GFP_KERNEL, NULL, 0,
> > > > -                                      &page_was_allocated);
> > > > +                                      &page_was_allocated, true);
> > > >         if (!page) {
> > > >                 ret = -ENOMEM;
> > > >                 goto fail;
> > > > @@ -1142,7 +1206,6 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
> > > >         /* start writeback */
> > > >         __swap_writepage(page, &wbc);
> > > >         put_page(page);
> > > > -       zswap_written_back_pages++;
> > > >
> > > >         return ret;
> > > >
> > > > @@ -1199,8 +1262,10 @@ bool zswap_store(struct folio *folio)
> > > >         struct scatterlist input, output;
> > > >         struct crypto_acomp_ctx *acomp_ctx;
> > > >         struct obj_cgroup *objcg = NULL;
> > > > +       struct mem_cgroup *memcg = NULL;
> > > >         struct zswap_pool *pool;
> > > >         struct zpool *zpool;
> > > > +       int lru_alloc_ret;
> > > >         unsigned int dlen = PAGE_SIZE;
> > > >         unsigned long handle, value;
> > > >         char *buf;
> > > > @@ -1230,15 +1295,15 @@ bool zswap_store(struct folio *folio)
> > > >                 zswap_invalidate_entry(tree, dupentry);
> > > >         }
> > > >         spin_unlock(&tree->lock);
> > > > -
> > > > -       /*
> > > > -        * XXX: zswap reclaim does not work with cgroups yet. Without a
> > > > -        * cgroup-aware entry LRU, we will push out entries system-wide based on
> > > > -        * local cgroup limits.
> > > > -        */
> > > >         objcg = get_obj_cgroup_from_folio(folio);
> > > > -       if (objcg && !obj_cgroup_may_zswap(objcg))
> > > > -               goto reject;
> > > > +       if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > > > +               memcg = get_mem_cgroup_from_objcg(objcg);
> > > > +               if (shrink_memcg(memcg)) {
> > > > +                       mem_cgroup_put(memcg);
> > > > +                       goto reject;
> > > > +               }
> > > > +               mem_cgroup_put(memcg);
> > > > +       }
> > > >
> > > >         /* reclaim space if needed */
> > > >         if (zswap_is_full()) {
> > > > @@ -1254,10 +1319,15 @@ bool zswap_store(struct folio *folio)
> > > >                         zswap_pool_reached_full = false;
> > > >         }
> > > >
> > > > +       pool = zswap_pool_current_get();
> > > > +       if (!pool)
> > > > +               goto reject;
> > > > +
> > >
> > > Why do we need to move zswap_pool_current_get() up here?
> >
> > Ah, thanks. This is a leftover from a previous version where the pool was needed
> > to allocate the entry.
> >
> >
> > >
> > > >         /* allocate entry */
> > > > -       entry = zswap_entry_cache_alloc(GFP_KERNEL);
> > > > +       entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
> > > >         if (!entry) {
> > > >                 zswap_reject_kmemcache_fail++;
> > > > +               zswap_pool_put(pool);
> > > >                 goto reject;
> > > >         }
> > > >
> > > > @@ -1269,6 +1339,7 @@ bool zswap_store(struct folio *folio)
> > > >                         entry->length = 0;
> > > >                         entry->value = value;
> > > >                         atomic_inc(&zswap_same_filled_pages);
> > > > +                       zswap_pool_put(pool);
> > > >                         goto insert_entry;
> > > >                 }
> > > >                 kunmap_atomic(src);
> > > > @@ -1278,9 +1349,15 @@ bool zswap_store(struct folio *folio)
> > > >                 goto freepage;
> > > >
> > > >         /* if entry is successfully added, it keeps the reference */
> > > > -       entry->pool = zswap_pool_current_get();
> > > > -       if (!entry->pool)
> > > > -               goto freepage;
> > > > +       entry->pool = pool;
> > > > +       if (objcg) {
> > > > +               memcg = get_mem_cgroup_from_objcg(objcg);
> > > > +               lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru, GFP_KERNEL);
> > > > +               mem_cgroup_put(memcg);
> > > > +
> > > > +               if (lru_alloc_ret)
> > > > +                       goto freepage;
> > > > +       }
> > > >
> > > >         /* compress */
> > > >         acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
> > > > @@ -1358,9 +1435,8 @@ bool zswap_store(struct folio *folio)
> > > >                 zswap_invalidate_entry(tree, dupentry);
> > > >         }
> > > >         if (entry->length) {
> > > > -               spin_lock(&entry->pool->lru_lock);
> > > > -               list_add(&entry->lru, &entry->pool->lru);
> > > > -               spin_unlock(&entry->pool->lru_lock);
> > > > +               INIT_LIST_HEAD(&entry->lru);
> > > > +               zswap_lru_add(&pool->list_lru, entry);
> > > >         }
> > > >         spin_unlock(&tree->lock);
> > > >
> > > > @@ -1373,8 +1449,8 @@ bool zswap_store(struct folio *folio)
> > > >
> > > >  put_dstmem:
> > > >         mutex_unlock(acomp_ctx->mutex);
> > > > -       zswap_pool_put(entry->pool);
> > > >  freepage:
> > > > +       zswap_pool_put(entry->pool);
> > > >         zswap_entry_cache_free(entry);
> > > >  reject:
> > > >         if (objcg)
> > > > @@ -1467,9 +1543,8 @@ bool zswap_load(struct folio *folio)
> > > >                 zswap_invalidate_entry(tree, entry);
> > > >                 folio_mark_dirty(folio);
> > > >         } else if (entry->length) {
> > > > -               spin_lock(&entry->pool->lru_lock);
> > > > -               list_move(&entry->lru, &entry->pool->lru);
> > > > -               spin_unlock(&entry->pool->lru_lock);
> > > > +               zswap_lru_del(&entry->pool->list_lru, entry);
> > > > +               zswap_lru_add(&entry->pool->list_lru, entry);
> > > >         }
> > > >         zswap_entry_put(tree, entry);
> > > >         spin_unlock(&tree->lock);
> > > > --
> > > > 2.34.1

^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2023-10-20 19:58 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-17 23:21 [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback Nhat Pham
2023-10-17 23:21 ` [PATCH v3 1/5] mm: list_lru: allow external numa node and cgroup tracking Nhat Pham
2023-10-18 22:26   ` Yosry Ahmed
2023-10-18 23:09     ` Nhat Pham
2023-10-17 23:21 ` [PATCH v3 2/5] zswap: make shrinking memcg-aware Nhat Pham
2023-10-18 23:20   ` Yosry Ahmed
2023-10-18 23:46     ` Nhat Pham
2023-10-18 23:48       ` Nhat Pham
2023-10-19  1:11       ` Yosry Ahmed
2023-10-19 12:47         ` Domenico Cerasuolo
2023-10-19 16:28           ` Yosry Ahmed
2023-10-19 12:29     ` Domenico Cerasuolo
2023-10-19 16:14       ` Yosry Ahmed
2023-10-20 19:58         ` Nhat Pham
2023-10-17 23:21 ` [PATCH v3 3/5] mm: memcg: add per-memcg zswap writeback stat Nhat Pham
2023-10-17 23:35   ` Nhat Pham
2023-10-17 23:37     ` Jeff Johnson
2023-10-17 23:40       ` Nhat Pham
2023-10-18 23:24   ` Yosry Ahmed
2023-10-18 23:50     ` Nhat Pham
2023-10-17 23:21 ` [PATCH v3 4/5] selftests: cgroup: update per-memcg zswap writeback selftest Nhat Pham
2023-10-17 23:34   ` Nhat Pham
2023-10-17 23:44     ` Nhat Pham
2023-10-17 23:21 ` [PATCH v3 5/5] zswap: shrinks zswap pool based on memory pressure Nhat Pham
2023-10-18 23:36   ` Yosry Ahmed
2023-10-20 19:14     ` Nhat Pham
2023-10-19 17:12 ` [PATCH v3 0/5] workload-specific and memory pressure-driven zswap writeback Andrew Morton
2023-10-19 17:33   ` Yosry Ahmed
2023-10-19 18:31     ` Nhat Pham
2023-10-19 18:36       ` Andrew Morton
2023-10-19 19:23         ` Hugh Dickins
