* [PATCH 0/5] zswap: cgroup accounting & control
From: Johannes Weiner @ 2022-04-27 16:00 UTC
  To: Andrew Morton
  Cc: Michal Hocko, Roman Gushchin, Shakeel Butt, Seth Jennings,
	Dan Streetman, linux-mm, cgroups, linux-kernel, kernel-team

Zswap backing memory is currently not tracked (or limited) on a
per-cgroup basis. As a result, workloads can escape their memory
containment and cause resource priority inversions on a shared host:
a low-priority group can fill the global zswap pool and force a
high-priority group out to disk swap.

Zswap also doesn't benefit all workloads equally. Some even suffer
when their memory contents compress poorly and are better off going
to disk swap directly. On a host with mixed workloads, it's currently
not possible to enable zswap for one workload but not for another.

This series implements missing cgroup awareness and control for zswap
to address both issues.

More details on the interface and implementation can be found in patch 5.

Patches 1-3 clean up related and adjacent options in Kconfig. They are
not dependencies of this series, just things I noticed during
development.

Based on v5.18-rc4-mmots-2022-04-26-19-34-5-g5e1fdb02de7a.

 Documentation/admin-guide/cgroup-v2.rst |  21 ++
 drivers/block/zram/Kconfig              |   3 +-
 fs/proc/meminfo.c                       |   7 +
 include/linux/memcontrol.h              |  54 +++
 include/linux/swap.h                    |   5 +
 include/linux/vm_event_item.h           |   4 +
 init/Kconfig                            | 123 -------
 mm/Kconfig                              | 523 +++++++++++++++++++-----------
 mm/memcontrol.c                         | 196 ++++++++++-
 mm/vmstat.c                             |   4 +
 mm/zswap.c                              |  50 ++-
 11 files changed, 648 insertions(+), 342 deletions(-)



* [PATCH 1/5] mm: Kconfig: move swap and slab config options to the MM section
From: Johannes Weiner @ 2022-04-27 16:00 UTC
  To: Andrew Morton
  Cc: Michal Hocko, Roman Gushchin, Shakeel Butt, Seth Jennings,
	Dan Streetman, linux-mm, cgroups, linux-kernel, kernel-team

These are currently under General Setup. MM seems like a better fit.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 init/Kconfig | 123 ---------------------------------------------------
 mm/Kconfig   | 123 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 123 insertions(+), 123 deletions(-)

diff --git a/init/Kconfig b/init/Kconfig
index 4489416f1e5c..468fe27cec0b 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -375,23 +375,6 @@ config DEFAULT_HOSTNAME
 	  but you may wish to use a different default here to make a minimal
 	  system more usable with less configuration.
 
-#
-# For some reason microblaze and nios2 hard code SWAP=n.  Hopefully we can
-# add proper SWAP support to them, in which case this can be remove.
-#
-config ARCH_NO_SWAP
-	bool
-
-config SWAP
-	bool "Support for paging of anonymous memory (swap)"
-	depends on MMU && BLOCK && !ARCH_NO_SWAP
-	default y
-	help
-	  This option allows you to choose whether you want to have support
-	  for so called swap devices or swap files in your kernel that are
-	  used to provide more virtual memory than the actual RAM present
-	  in your computer.  If unsure say Y.
-
 config SYSVIPC
 	bool "System V IPC"
 	help
@@ -1909,112 +1892,6 @@ config COMPAT_BRK
 
 	  On non-ancient distros (post-2000 ones) N is usually a safe choice.
 
-choice
-	prompt "Choose SLAB allocator"
-	default SLUB
-	help
-	   This option allows to select a slab allocator.
-
-config SLAB
-	bool "SLAB"
-	depends on !PREEMPT_RT
-	select HAVE_HARDENED_USERCOPY_ALLOCATOR
-	help
-	  The regular slab allocator that is established and known to work
-	  well in all environments. It organizes cache hot objects in
-	  per cpu and per node queues.
-
-config SLUB
-	bool "SLUB (Unqueued Allocator)"
-	select HAVE_HARDENED_USERCOPY_ALLOCATOR
-	help
-	   SLUB is a slab allocator that minimizes cache line usage
-	   instead of managing queues of cached objects (SLAB approach).
-	   Per cpu caching is realized using slabs of objects instead
-	   of queues of objects. SLUB can use memory efficiently
-	   and has enhanced diagnostics. SLUB is the default choice for
-	   a slab allocator.
-
-config SLOB
-	depends on EXPERT
-	bool "SLOB (Simple Allocator)"
-	depends on !PREEMPT_RT
-	help
-	   SLOB replaces the stock allocator with a drastically simpler
-	   allocator. SLOB is generally more space efficient but
-	   does not perform as well on large systems.
-
-endchoice
-
-config SLAB_MERGE_DEFAULT
-	bool "Allow slab caches to be merged"
-	default y
-	depends on SLAB || SLUB
-	help
-	  For reduced kernel memory fragmentation, slab caches can be
-	  merged when they share the same size and other characteristics.
-	  This carries a risk of kernel heap overflows being able to
-	  overwrite objects from merged caches (and more easily control
-	  cache layout), which makes such heap attacks easier to exploit
-	  by attackers. By keeping caches unmerged, these kinds of exploits
-	  can usually only damage objects in the same cache. To disable
-	  merging at runtime, "slab_nomerge" can be passed on the kernel
-	  command line.
-
-config SLAB_FREELIST_RANDOM
-	bool "Randomize slab freelist"
-	depends on SLAB || SLUB
-	help
-	  Randomizes the freelist order used on creating new pages. This
-	  security feature reduces the predictability of the kernel slab
-	  allocator against heap overflows.
-
-config SLAB_FREELIST_HARDENED
-	bool "Harden slab freelist metadata"
-	depends on SLAB || SLUB
-	help
-	  Many kernel heap attacks try to target slab cache metadata and
-	  other infrastructure. This options makes minor performance
-	  sacrifices to harden the kernel slab allocator against common
-	  freelist exploit methods. Some slab implementations have more
-	  sanity-checking than others. This option is most effective with
-	  CONFIG_SLUB.
-
-config SHUFFLE_PAGE_ALLOCATOR
-	bool "Page allocator randomization"
-	default SLAB_FREELIST_RANDOM && ACPI_NUMA
-	help
-	  Randomization of the page allocator improves the average
-	  utilization of a direct-mapped memory-side-cache. See section
-	  5.2.27 Heterogeneous Memory Attribute Table (HMAT) in the ACPI
-	  6.2a specification for an example of how a platform advertises
-	  the presence of a memory-side-cache. There are also incidental
-	  security benefits as it reduces the predictability of page
-	  allocations to compliment SLAB_FREELIST_RANDOM, but the
-	  default granularity of shuffling on the "MAX_ORDER - 1" i.e,
-	  10th order of pages is selected based on cache utilization
-	  benefits on x86.
-
-	  While the randomization improves cache utilization it may
-	  negatively impact workloads on platforms without a cache. For
-	  this reason, by default, the randomization is enabled only
-	  after runtime detection of a direct-mapped memory-side-cache.
-	  Otherwise, the randomization may be force enabled with the
-	  'page_alloc.shuffle' kernel command line parameter.
-
-	  Say Y if unsure.
-
-config SLUB_CPU_PARTIAL
-	default y
-	depends on SLUB && SMP
-	bool "SLUB per cpu partial cache"
-	help
-	  Per cpu partial caches accelerate objects allocation and freeing
-	  that is local to a processor at the price of more indeterminism
-	  in the latency of the free. On overflow these caches will be cleared
-	  which requires the taking of locks that may cause latency spikes.
-	  Typically one would choose no for a realtime system.
-
 config MMAP_ALLOW_UNINITIALIZED
 	bool "Allow mmapped anonymous memory to be uninitialized"
 	depends on EXPERT && !MMU
diff --git a/mm/Kconfig b/mm/Kconfig
index c2141dd639e3..675a6be43739 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -2,6 +2,129 @@
 
 menu "Memory Management options"
 
+#
+# For some reason microblaze and nios2 hard code SWAP=n.  Hopefully we can
+# add proper SWAP support to them, in which case this can be remove.
+#
+config ARCH_NO_SWAP
+	bool
+
+config SWAP
+	bool "Support for paging of anonymous memory (swap)"
+	depends on MMU && BLOCK && !ARCH_NO_SWAP
+	default y
+	help
+	  This option allows you to choose whether you want to have support
+	  for so called swap devices or swap files in your kernel that are
+	  used to provide more virtual memory than the actual RAM present
+	  in your computer.  If unsure say Y.
+
+choice
+	prompt "Choose SLAB allocator"
+	default SLUB
+	help
+	   This option allows to select a slab allocator.
+
+config SLAB
+	bool "SLAB"
+	depends on !PREEMPT_RT
+	select HAVE_HARDENED_USERCOPY_ALLOCATOR
+	help
+	  The regular slab allocator that is established and known to work
+	  well in all environments. It organizes cache hot objects in
+	  per cpu and per node queues.
+
+config SLUB
+	bool "SLUB (Unqueued Allocator)"
+	select HAVE_HARDENED_USERCOPY_ALLOCATOR
+	help
+	   SLUB is a slab allocator that minimizes cache line usage
+	   instead of managing queues of cached objects (SLAB approach).
+	   Per cpu caching is realized using slabs of objects instead
+	   of queues of objects. SLUB can use memory efficiently
+	   and has enhanced diagnostics. SLUB is the default choice for
+	   a slab allocator.
+
+config SLOB
+	depends on EXPERT
+	bool "SLOB (Simple Allocator)"
+	depends on !PREEMPT_RT
+	help
+	   SLOB replaces the stock allocator with a drastically simpler
+	   allocator. SLOB is generally more space efficient but
+	   does not perform as well on large systems.
+
+endchoice
+
+config SLAB_MERGE_DEFAULT
+	bool "Allow slab caches to be merged"
+	default y
+	depends on SLAB || SLUB
+	help
+	  For reduced kernel memory fragmentation, slab caches can be
+	  merged when they share the same size and other characteristics.
+	  This carries a risk of kernel heap overflows being able to
+	  overwrite objects from merged caches (and more easily control
+	  cache layout), which makes such heap attacks easier to exploit
+	  by attackers. By keeping caches unmerged, these kinds of exploits
+	  can usually only damage objects in the same cache. To disable
+	  merging at runtime, "slab_nomerge" can be passed on the kernel
+	  command line.
+
+config SLAB_FREELIST_RANDOM
+	bool "Randomize slab freelist"
+	depends on SLAB || SLUB
+	help
+	  Randomizes the freelist order used on creating new pages. This
+	  security feature reduces the predictability of the kernel slab
+	  allocator against heap overflows.
+
+config SLAB_FREELIST_HARDENED
+	bool "Harden slab freelist metadata"
+	depends on SLAB || SLUB
+	help
+	  Many kernel heap attacks try to target slab cache metadata and
+	  other infrastructure. This options makes minor performance
+	  sacrifices to harden the kernel slab allocator against common
+	  freelist exploit methods. Some slab implementations have more
+	  sanity-checking than others. This option is most effective with
+	  CONFIG_SLUB.
+
+config SHUFFLE_PAGE_ALLOCATOR
+	bool "Page allocator randomization"
+	default SLAB_FREELIST_RANDOM && ACPI_NUMA
+	help
+	  Randomization of the page allocator improves the average
+	  utilization of a direct-mapped memory-side-cache. See section
+	  5.2.27 Heterogeneous Memory Attribute Table (HMAT) in the ACPI
+	  6.2a specification for an example of how a platform advertises
+	  the presence of a memory-side-cache. There are also incidental
+	  security benefits as it reduces the predictability of page
+	  allocations to compliment SLAB_FREELIST_RANDOM, but the
+	  default granularity of shuffling on the "MAX_ORDER - 1" i.e,
+	  10th order of pages is selected based on cache utilization
+	  benefits on x86.
+
+	  While the randomization improves cache utilization it may
+	  negatively impact workloads on platforms without a cache. For
+	  this reason, by default, the randomization is enabled only
+	  after runtime detection of a direct-mapped memory-side-cache.
+	  Otherwise, the randomization may be force enabled with the
+	  'page_alloc.shuffle' kernel command line parameter.
+
+	  Say Y if unsure.
+
+config SLUB_CPU_PARTIAL
+	default y
+	depends on SLUB && SMP
+	bool "SLUB per cpu partial cache"
+	help
+	  Per cpu partial caches accelerate objects allocation and freeing
+	  that is local to a processor at the price of more indeterminism
+	  in the latency of the free. On overflow these caches will be cleared
+	  which requires the taking of locks that may cause latency spikes.
+	  Typically one would choose no for a realtime system.
+
 config SELECT_MEMORY_MODEL
 	def_bool y
 	depends on ARCH_SELECT_MEMORY_MODEL
-- 
2.35.3


* [PATCH 2/5] mm: Kconfig: group swap, slab, hotplug and thp options into submenus
From: Johannes Weiner @ 2022-04-27 16:00 UTC
  To: Andrew Morton
  Cc: Michal Hocko, Roman Gushchin, Shakeel Butt, Seth Jennings,
	Dan Streetman, linux-mm, cgroups, linux-kernel, kernel-team

There are several clusters of related config options spread throughout
the mostly flat MM submenu. Group them together and put specialization
options into further submenus to make the MM submenu a bit more
organized and easier to navigate.
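
The grouping uses Kconfig's menuconfig symbols and if/endif blocks. A
minimal sketch of the pattern, using FOO and FOO_TUNING as placeholder
symbols rather than options touched by this patch:

# A menuconfig symbol is both an option and its own submenu; the
# if/endif block nests the specialization options inside it.
menuconfig FOO
	bool "Foo support"
	help
	  Top-level switch that also opens a submenu.

if FOO

config FOO_TUNING
	bool "Advanced foo tuning"
	help
	  Only visible when FOO is enabled.

endif # FOO

The patch applies this shape to MEMORY_HOTPLUG and
TRANSPARENT_HUGEPAGE, turns SWAP into a menuconfig, and wraps the
slab options in a plain menu/endmenu block.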

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/Kconfig | 429 +++++++++++++++++++++++++++--------------------------
 1 file changed, 221 insertions(+), 208 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index 675a6be43739..2c5935a28edf 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -9,7 +9,7 @@ menu "Memory Management options"
 config ARCH_NO_SWAP
 	bool
 
-config SWAP
+menuconfig SWAP
 	bool "Support for paging of anonymous memory (swap)"
 	depends on MMU && BLOCK && !ARCH_NO_SWAP
 	default y
@@ -19,6 +19,191 @@ config SWAP
 	  used to provide more virtual memory than the actual RAM present
 	  in your computer.  If unsure say Y.
 
+config ZSWAP
+	bool "Compressed cache for swap pages (EXPERIMENTAL)"
+	depends on SWAP && CRYPTO=y
+	select FRONTSWAP
+	select ZPOOL
+	help
+	  A lightweight compressed cache for swap pages.  It takes
+	  pages that are in the process of being swapped out and attempts to
+	  compress them into a dynamically allocated RAM-based memory pool.
+	  This can result in a significant I/O reduction on swap device and,
+	  in the case where decompressing from RAM is faster that swap device
+	  reads, can also improve workload performance.
+
+	  This is marked experimental because it is a new feature (as of
+	  v3.11) that interacts heavily with memory reclaim.  While these
+	  interactions don't cause any known issues on simple memory setups,
+	  they have not be fully explored on the large set of potential
+	  configurations and workloads that exist.
+
+choice
+	prompt "Compressed cache for swap pages default compressor"
+	depends on ZSWAP
+	default ZSWAP_COMPRESSOR_DEFAULT_LZO
+	help
+	  Selects the default compression algorithm for the compressed cache
+	  for swap pages.
+
+	  For an overview what kind of performance can be expected from
+	  a particular compression algorithm please refer to the benchmarks
+	  available at the following LWN page:
+	  https://lwn.net/Articles/751795/
+
+	  If in doubt, select 'LZO'.
+
+	  The selection made here can be overridden by using the kernel
+	  command line 'zswap.compressor=' option.
+
+config ZSWAP_COMPRESSOR_DEFAULT_DEFLATE
+	bool "Deflate"
+	select CRYPTO_DEFLATE
+	help
+	  Use the Deflate algorithm as the default compression algorithm.
+
+config ZSWAP_COMPRESSOR_DEFAULT_LZO
+	bool "LZO"
+	select CRYPTO_LZO
+	help
+	  Use the LZO algorithm as the default compression algorithm.
+
+config ZSWAP_COMPRESSOR_DEFAULT_842
+	bool "842"
+	select CRYPTO_842
+	help
+	  Use the 842 algorithm as the default compression algorithm.
+
+config ZSWAP_COMPRESSOR_DEFAULT_LZ4
+	bool "LZ4"
+	select CRYPTO_LZ4
+	help
+	  Use the LZ4 algorithm as the default compression algorithm.
+
+config ZSWAP_COMPRESSOR_DEFAULT_LZ4HC
+	bool "LZ4HC"
+	select CRYPTO_LZ4HC
+	help
+	  Use the LZ4HC algorithm as the default compression algorithm.
+
+config ZSWAP_COMPRESSOR_DEFAULT_ZSTD
+	bool "zstd"
+	select CRYPTO_ZSTD
+	help
+	  Use the zstd algorithm as the default compression algorithm.
+endchoice
+
+config ZSWAP_COMPRESSOR_DEFAULT
+       string
+       depends on ZSWAP
+       default "deflate" if ZSWAP_COMPRESSOR_DEFAULT_DEFLATE
+       default "lzo" if ZSWAP_COMPRESSOR_DEFAULT_LZO
+       default "842" if ZSWAP_COMPRESSOR_DEFAULT_842
+       default "lz4" if ZSWAP_COMPRESSOR_DEFAULT_LZ4
+       default "lz4hc" if ZSWAP_COMPRESSOR_DEFAULT_LZ4HC
+       default "zstd" if ZSWAP_COMPRESSOR_DEFAULT_ZSTD
+       default ""
+
+choice
+	prompt "Compressed cache for swap pages default allocator"
+	depends on ZSWAP
+	default ZSWAP_ZPOOL_DEFAULT_ZBUD
+	help
+	  Selects the default allocator for the compressed cache for
+	  swap pages.
+	  The default is 'zbud' for compatibility, however please do
+	  read the description of each of the allocators below before
+	  making a right choice.
+
+	  The selection made here can be overridden by using the kernel
+	  command line 'zswap.zpool=' option.
+
+config ZSWAP_ZPOOL_DEFAULT_ZBUD
+	bool "zbud"
+	select ZBUD
+	help
+	  Use the zbud allocator as the default allocator.
+
+config ZSWAP_ZPOOL_DEFAULT_Z3FOLD
+	bool "z3fold"
+	select Z3FOLD
+	help
+	  Use the z3fold allocator as the default allocator.
+
+config ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
+	bool "zsmalloc"
+	select ZSMALLOC
+	help
+	  Use the zsmalloc allocator as the default allocator.
+endchoice
+
+config ZSWAP_ZPOOL_DEFAULT
+       string
+       depends on ZSWAP
+       default "zbud" if ZSWAP_ZPOOL_DEFAULT_ZBUD
+       default "z3fold" if ZSWAP_ZPOOL_DEFAULT_Z3FOLD
+       default "zsmalloc" if ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
+       default ""
+
+config ZSWAP_DEFAULT_ON
+	bool "Enable the compressed cache for swap pages by default"
+	depends on ZSWAP
+	help
+	  If selected, the compressed cache for swap pages will be enabled
+	  at boot, otherwise it will be disabled.
+
+	  The selection made here can be overridden by using the kernel
+	  command line 'zswap.enabled=' option.
+
+config ZPOOL
+	tristate "Common API for compressed memory storage"
+	depends on ZSWAP
+	help
+	  Compressed memory storage API.  This allows using either zbud or
+	  zsmalloc.
+
+config ZBUD
+	tristate "Low (Up to 2x) density storage for compressed pages"
+	depends on ZPOOL
+	help
+	  A special purpose allocator for storing compressed pages.
+	  It is designed to store up to two compressed pages per physical
+	  page.  While this design limits storage density, it has simple and
+	  deterministic reclaim properties that make it preferable to a higher
+	  density approach when reclaim will be used.
+
+config Z3FOLD
+	tristate "Up to 3x density storage for compressed pages"
+	depends on ZPOOL
+	help
+	  A special purpose allocator for storing compressed pages.
+	  It is designed to store up to three compressed pages per physical
+	  page. It is a ZBUD derivative so the simplicity and determinism are
+	  still there.
+
+config ZSMALLOC
+	tristate "Memory allocator for compressed pages"
+	depends on MMU
+	help
+	  zsmalloc is a slab-based memory allocator designed to store
+	  compressed RAM pages.  zsmalloc uses virtual memory mapping
+	  in order to reduce fragmentation.  However, this results in a
+	  non-standard allocator interface where a handle, not a pointer, is
+	  returned by an alloc().  This handle must be mapped in order to
+	  access the allocated space.
+
+config ZSMALLOC_STAT
+	bool "Export zsmalloc statistics"
+	depends on ZSMALLOC
+	select DEBUG_FS
+	help
+	  This option enables code in the zsmalloc to collect various
+	  statistics about what's happening in zsmalloc and exports that
+	  information to userspace via debugfs.
+	  If unsure, say N.
+
+menu "SLAB allocator options"
+
 choice
 	prompt "Choose SLAB allocator"
 	default SLUB
@@ -90,6 +275,19 @@ config SLAB_FREELIST_HARDENED
 	  sanity-checking than others. This option is most effective with
 	  CONFIG_SLUB.
 
+config SLUB_CPU_PARTIAL
+	default y
+	depends on SLUB && SMP
+	bool "SLUB per cpu partial cache"
+	help
+	  Per cpu partial caches accelerate objects allocation and freeing
+	  that is local to a processor at the price of more indeterminism
+	  in the latency of the free. On overflow these caches will be cleared
+	  which requires the taking of locks that may cause latency spikes.
+	  Typically one would choose no for a realtime system.
+
+endmenu # SLAB allocator options
+
 config SHUFFLE_PAGE_ALLOCATOR
 	bool "Page allocator randomization"
 	default SLAB_FREELIST_RANDOM && ACPI_NUMA
@@ -114,17 +312,6 @@ config SHUFFLE_PAGE_ALLOCATOR
 
 	  Say Y if unsure.
 
-config SLUB_CPU_PARTIAL
-	default y
-	depends on SLUB && SMP
-	bool "SLUB per cpu partial cache"
-	help
-	  Per cpu partial caches accelerate objects allocation and freeing
-	  that is local to a processor at the price of more indeterminism
-	  in the latency of the free. On overflow these caches will be cleared
-	  which requires the taking of locks that may cause latency spikes.
-	  Typically one would choose no for a realtime system.
-
 config SELECT_MEMORY_MODEL
 	def_bool y
 	depends on ARCH_SELECT_MEMORY_MODEL
@@ -250,14 +437,16 @@ config ARCH_ENABLE_MEMORY_HOTPLUG
 	bool
 
 # eventually, we can have this option just 'select SPARSEMEM'
-config MEMORY_HOTPLUG
-	bool "Allow for memory hot-add"
+menuconfig MEMORY_HOTPLUG
+	bool "Memory hotplug"
 	select MEMORY_ISOLATION
 	depends on SPARSEMEM
 	depends on ARCH_ENABLE_MEMORY_HOTPLUG
 	depends on 64BIT
 	select NUMA_KEEP_MEMINFO if NUMA
 
+if MEMORY_HOTPLUG
+
 config MEMORY_HOTPLUG_DEFAULT_ONLINE
 	bool "Online the newly added memory blocks by default"
 	depends on MEMORY_HOTPLUG
@@ -287,6 +476,8 @@ config MHP_MEMMAP_ON_MEMORY
 	depends on MEMORY_HOTPLUG && SPARSEMEM_VMEMMAP
 	depends on ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE
 
+endif # MEMORY_HOTPLUG
+
 # Heavily threaded applications may benefit from splitting the mm-wide
 # page_table_lock, so that faults on different parts of the user address
 # space can be handled with less contention: split it at this NR_CPUS.
@@ -501,7 +692,7 @@ config NOMMU_INITIAL_TRIM_EXCESS
 
 	  See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
 
-config TRANSPARENT_HUGEPAGE
+menuconfig TRANSPARENT_HUGEPAGE
 	bool "Transparent Hugepage Support"
 	depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE && !PREEMPT_RT
 	select COMPACTION
@@ -516,6 +707,8 @@ config TRANSPARENT_HUGEPAGE
 
 	  If memory constrained on embedded, you may want to say N.
 
+if TRANSPARENT_HUGEPAGE
+
 choice
 	prompt "Transparent Hugepage Support sysfs defaults"
 	depends on TRANSPARENT_HUGEPAGE
@@ -556,6 +749,19 @@ config THP_SWAP
 
 	  For selection by architectures with reasonable THP sizes.
 
+config READ_ONLY_THP_FOR_FS
+	bool "Read-only THP for filesystems (EXPERIMENTAL)"
+	depends on TRANSPARENT_HUGEPAGE && SHMEM
+
+	help
+	  Allow khugepaged to put read-only file-backed pages in THP.
+
+	  This is marked experimental because it is a new feature. Write
+	  support of file THPs will be developed in the next few release
+	  cycles.
+
+endif # TRANSPARENT_HUGEPAGE
+
 #
 # UP and nommu archs use km based percpu allocator
 #
@@ -640,188 +846,6 @@ config MEM_SOFT_DIRTY
 
 	  See Documentation/admin-guide/mm/soft-dirty.rst for more details.
 
-config ZSWAP
-	bool "Compressed cache for swap pages (EXPERIMENTAL)"
-	depends on SWAP && CRYPTO=y
-	select FRONTSWAP
-	select ZPOOL
-	help
-	  A lightweight compressed cache for swap pages.  It takes
-	  pages that are in the process of being swapped out and attempts to
-	  compress them into a dynamically allocated RAM-based memory pool.
-	  This can result in a significant I/O reduction on swap device and,
-	  in the case where decompressing from RAM is faster that swap device
-	  reads, can also improve workload performance.
-
-	  This is marked experimental because it is a new feature (as of
-	  v3.11) that interacts heavily with memory reclaim.  While these
-	  interactions don't cause any known issues on simple memory setups,
-	  they have not be fully explored on the large set of potential
-	  configurations and workloads that exist.
-
-choice
-	prompt "Compressed cache for swap pages default compressor"
-	depends on ZSWAP
-	default ZSWAP_COMPRESSOR_DEFAULT_LZO
-	help
-	  Selects the default compression algorithm for the compressed cache
-	  for swap pages.
-
-	  For an overview what kind of performance can be expected from
-	  a particular compression algorithm please refer to the benchmarks
-	  available at the following LWN page:
-	  https://lwn.net/Articles/751795/
-
-	  If in doubt, select 'LZO'.
-
-	  The selection made here can be overridden by using the kernel
-	  command line 'zswap.compressor=' option.
-
-config ZSWAP_COMPRESSOR_DEFAULT_DEFLATE
-	bool "Deflate"
-	select CRYPTO_DEFLATE
-	help
-	  Use the Deflate algorithm as the default compression algorithm.
-
-config ZSWAP_COMPRESSOR_DEFAULT_LZO
-	bool "LZO"
-	select CRYPTO_LZO
-	help
-	  Use the LZO algorithm as the default compression algorithm.
-
-config ZSWAP_COMPRESSOR_DEFAULT_842
-	bool "842"
-	select CRYPTO_842
-	help
-	  Use the 842 algorithm as the default compression algorithm.
-
-config ZSWAP_COMPRESSOR_DEFAULT_LZ4
-	bool "LZ4"
-	select CRYPTO_LZ4
-	help
-	  Use the LZ4 algorithm as the default compression algorithm.
-
-config ZSWAP_COMPRESSOR_DEFAULT_LZ4HC
-	bool "LZ4HC"
-	select CRYPTO_LZ4HC
-	help
-	  Use the LZ4HC algorithm as the default compression algorithm.
-
-config ZSWAP_COMPRESSOR_DEFAULT_ZSTD
-	bool "zstd"
-	select CRYPTO_ZSTD
-	help
-	  Use the zstd algorithm as the default compression algorithm.
-endchoice
-
-config ZSWAP_COMPRESSOR_DEFAULT
-       string
-       depends on ZSWAP
-       default "deflate" if ZSWAP_COMPRESSOR_DEFAULT_DEFLATE
-       default "lzo" if ZSWAP_COMPRESSOR_DEFAULT_LZO
-       default "842" if ZSWAP_COMPRESSOR_DEFAULT_842
-       default "lz4" if ZSWAP_COMPRESSOR_DEFAULT_LZ4
-       default "lz4hc" if ZSWAP_COMPRESSOR_DEFAULT_LZ4HC
-       default "zstd" if ZSWAP_COMPRESSOR_DEFAULT_ZSTD
-       default ""
-
-choice
-	prompt "Compressed cache for swap pages default allocator"
-	depends on ZSWAP
-	default ZSWAP_ZPOOL_DEFAULT_ZBUD
-	help
-	  Selects the default allocator for the compressed cache for
-	  swap pages.
-	  The default is 'zbud' for compatibility, however please do
-	  read the description of each of the allocators below before
-	  making a right choice.
-
-	  The selection made here can be overridden by using the kernel
-	  command line 'zswap.zpool=' option.
-
-config ZSWAP_ZPOOL_DEFAULT_ZBUD
-	bool "zbud"
-	select ZBUD
-	help
-	  Use the zbud allocator as the default allocator.
-
-config ZSWAP_ZPOOL_DEFAULT_Z3FOLD
-	bool "z3fold"
-	select Z3FOLD
-	help
-	  Use the z3fold allocator as the default allocator.
-
-config ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
-	bool "zsmalloc"
-	select ZSMALLOC
-	help
-	  Use the zsmalloc allocator as the default allocator.
-endchoice
-
-config ZSWAP_ZPOOL_DEFAULT
-       string
-       depends on ZSWAP
-       default "zbud" if ZSWAP_ZPOOL_DEFAULT_ZBUD
-       default "z3fold" if ZSWAP_ZPOOL_DEFAULT_Z3FOLD
-       default "zsmalloc" if ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
-       default ""
-
-config ZSWAP_DEFAULT_ON
-	bool "Enable the compressed cache for swap pages by default"
-	depends on ZSWAP
-	help
-	  If selected, the compressed cache for swap pages will be enabled
-	  at boot, otherwise it will be disabled.
-
-	  The selection made here can be overridden by using the kernel
-	  command line 'zswap.enabled=' option.
-
-config ZPOOL
-	tristate "Common API for compressed memory storage"
-	help
-	  Compressed memory storage API.  This allows using either zbud or
-	  zsmalloc.
-
-config ZBUD
-	tristate "Low (Up to 2x) density storage for compressed pages"
-	depends on ZPOOL
-	help
-	  A special purpose allocator for storing compressed pages.
-	  It is designed to store up to two compressed pages per physical
-	  page.  While this design limits storage density, it has simple and
-	  deterministic reclaim properties that make it preferable to a higher
-	  density approach when reclaim will be used.
-
-config Z3FOLD
-	tristate "Up to 3x density storage for compressed pages"
-	depends on ZPOOL
-	help
-	  A special purpose allocator for storing compressed pages.
-	  It is designed to store up to three compressed pages per physical
-	  page. It is a ZBUD derivative so the simplicity and determinism are
-	  still there.
-
-config ZSMALLOC
-	tristate "Memory allocator for compressed pages"
-	depends on MMU
-	help
-	  zsmalloc is a slab-based memory allocator designed to store
-	  compressed RAM pages.  zsmalloc uses virtual memory mapping
-	  in order to reduce fragmentation.  However, this results in a
-	  non-standard allocator interface where a handle, not a pointer, is
-	  returned by an alloc().  This handle must be mapped in order to
-	  access the allocated space.
-
-config ZSMALLOC_STAT
-	bool "Export zsmalloc statistics"
-	depends on ZSMALLOC
-	select DEBUG_FS
-	help
-	  This option enables code in the zsmalloc to collect various
-	  statistics about what's happening in zsmalloc and exports that
-	  information to userspace via debugfs.
-	  If unsure, say N.
-
 config GENERIC_EARLY_IOREMAP
 	bool
 
@@ -978,17 +1002,6 @@ comment "GUP_TEST needs to have DEBUG_FS enabled"
 config GUP_GET_PTE_LOW_HIGH
 	bool
 
-config READ_ONLY_THP_FOR_FS
-	bool "Read-only THP for filesystems (EXPERIMENTAL)"
-	depends on TRANSPARENT_HUGEPAGE && SHMEM
-
-	help
-	  Allow khugepaged to put read-only file-backed pages in THP.
-
-	  This is marked experimental because it is a new feature. Write
-	  support of file THPs will be developed in the next few release
-	  cycles.
-
 config ARCH_HAS_PTE_SPECIAL
 	bool
 
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH 2/5] mm: Kconfig: group swap, slab, hotplug and thp options into submenus
@ 2022-04-27 16:00   ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Hocko, Roman Gushchin, Shakeel Butt, Seth Jennings,
	Dan Streetman, linux-mm-Bw31MaZKKs3YtjvyW6yDsg,
	cgroups-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA, kernel-team-b10kYP2dOMg

There are several clusters of related config options spread throughout
the mostly flat MM submenu. Group them together and put specialization
options into further subdirectories to make the MM submenu a bit more
organized and easier to navigate.

Signed-off-by: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>
---
 mm/Kconfig | 429 +++++++++++++++++++++++++++--------------------------
 1 file changed, 221 insertions(+), 208 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index 675a6be43739..2c5935a28edf 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -9,7 +9,7 @@ menu "Memory Management options"
 config ARCH_NO_SWAP
 	bool
 
-config SWAP
+menuconfig SWAP
 	bool "Support for paging of anonymous memory (swap)"
 	depends on MMU && BLOCK && !ARCH_NO_SWAP
 	default y
@@ -19,6 +19,191 @@ config SWAP
 	  used to provide more virtual memory than the actual RAM present
 	  in your computer.  If unsure say Y.
 
+config ZSWAP
+	bool "Compressed cache for swap pages (EXPERIMENTAL)"
+	depends on SWAP && CRYPTO=y
+	select FRONTSWAP
+	select ZPOOL
+	help
+	  A lightweight compressed cache for swap pages.  It takes
+	  pages that are in the process of being swapped out and attempts to
+	  compress them into a dynamically allocated RAM-based memory pool.
+	  This can result in a significant I/O reduction on swap device and,
+	  in the case where decompressing from RAM is faster that swap device
+	  reads, can also improve workload performance.
+
+	  This is marked experimental because it is a new feature (as of
+	  v3.11) that interacts heavily with memory reclaim.  While these
+	  interactions don't cause any known issues on simple memory setups,
+	  they have not be fully explored on the large set of potential
+	  configurations and workloads that exist.
+
+choice
+	prompt "Compressed cache for swap pages default compressor"
+	depends on ZSWAP
+	default ZSWAP_COMPRESSOR_DEFAULT_LZO
+	help
+	  Selects the default compression algorithm for the compressed cache
+	  for swap pages.
+
+	  For an overview what kind of performance can be expected from
+	  a particular compression algorithm please refer to the benchmarks
+	  available at the following LWN page:
+	  https://lwn.net/Articles/751795/
+
+	  If in doubt, select 'LZO'.
+
+	  The selection made here can be overridden by using the kernel
+	  command line 'zswap.compressor=' option.
+
+config ZSWAP_COMPRESSOR_DEFAULT_DEFLATE
+	bool "Deflate"
+	select CRYPTO_DEFLATE
+	help
+	  Use the Deflate algorithm as the default compression algorithm.
+
+config ZSWAP_COMPRESSOR_DEFAULT_LZO
+	bool "LZO"
+	select CRYPTO_LZO
+	help
+	  Use the LZO algorithm as the default compression algorithm.
+
+config ZSWAP_COMPRESSOR_DEFAULT_842
+	bool "842"
+	select CRYPTO_842
+	help
+	  Use the 842 algorithm as the default compression algorithm.
+
+config ZSWAP_COMPRESSOR_DEFAULT_LZ4
+	bool "LZ4"
+	select CRYPTO_LZ4
+	help
+	  Use the LZ4 algorithm as the default compression algorithm.
+
+config ZSWAP_COMPRESSOR_DEFAULT_LZ4HC
+	bool "LZ4HC"
+	select CRYPTO_LZ4HC
+	help
+	  Use the LZ4HC algorithm as the default compression algorithm.
+
+config ZSWAP_COMPRESSOR_DEFAULT_ZSTD
+	bool "zstd"
+	select CRYPTO_ZSTD
+	help
+	  Use the zstd algorithm as the default compression algorithm.
+endchoice
+
+config ZSWAP_COMPRESSOR_DEFAULT
+       string
+       depends on ZSWAP
+       default "deflate" if ZSWAP_COMPRESSOR_DEFAULT_DEFLATE
+       default "lzo" if ZSWAP_COMPRESSOR_DEFAULT_LZO
+       default "842" if ZSWAP_COMPRESSOR_DEFAULT_842
+       default "lz4" if ZSWAP_COMPRESSOR_DEFAULT_LZ4
+       default "lz4hc" if ZSWAP_COMPRESSOR_DEFAULT_LZ4HC
+       default "zstd" if ZSWAP_COMPRESSOR_DEFAULT_ZSTD
+       default ""
+
+choice
+	prompt "Compressed cache for swap pages default allocator"
+	depends on ZSWAP
+	default ZSWAP_ZPOOL_DEFAULT_ZBUD
+	help
+	  Selects the default allocator for the compressed cache for
+	  swap pages.
+	  The default is 'zbud' for compatibility, however please do
+	  read the description of each of the allocators below before
+	  making a right choice.
+
+	  The selection made here can be overridden by using the kernel
+	  command line 'zswap.zpool=' option.
+
+config ZSWAP_ZPOOL_DEFAULT_ZBUD
+	bool "zbud"
+	select ZBUD
+	help
+	  Use the zbud allocator as the default allocator.
+
+config ZSWAP_ZPOOL_DEFAULT_Z3FOLD
+	bool "z3fold"
+	select Z3FOLD
+	help
+	  Use the z3fold allocator as the default allocator.
+
+config ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
+	bool "zsmalloc"
+	select ZSMALLOC
+	help
+	  Use the zsmalloc allocator as the default allocator.
+endchoice
+
+config ZSWAP_ZPOOL_DEFAULT
+       string
+       depends on ZSWAP
+       default "zbud" if ZSWAP_ZPOOL_DEFAULT_ZBUD
+       default "z3fold" if ZSWAP_ZPOOL_DEFAULT_Z3FOLD
+       default "zsmalloc" if ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
+       default ""
+
+config ZSWAP_DEFAULT_ON
+	bool "Enable the compressed cache for swap pages by default"
+	depends on ZSWAP
+	help
+	  If selected, the compressed cache for swap pages will be enabled
+	  at boot, otherwise it will be disabled.
+
+	  The selection made here can be overridden by using the kernel
+	  command line 'zswap.enabled=' option.
+
+config ZPOOL
+	tristate "Common API for compressed memory storage"
+	depends on ZSWAP
+	help
+	  Compressed memory storage API.  This allows using either zbud or
+	  zsmalloc.
+
+config ZBUD
+	tristate "Low (Up to 2x) density storage for compressed pages"
+	depends on ZPOOL
+	help
+	  A special purpose allocator for storing compressed pages.
+	  It is designed to store up to two compressed pages per physical
+	  page.  While this design limits storage density, it has simple and
+	  deterministic reclaim properties that make it preferable to a higher
+	  density approach when reclaim will be used.
+
+config Z3FOLD
+	tristate "Up to 3x density storage for compressed pages"
+	depends on ZPOOL
+	help
+	  A special purpose allocator for storing compressed pages.
+	  It is designed to store up to three compressed pages per physical
+	  page. It is a ZBUD derivative so the simplicity and determinism are
+	  still there.
+
+config ZSMALLOC
+	tristate "Memory allocator for compressed pages"
+	depends on MMU
+	help
+	  zsmalloc is a slab-based memory allocator designed to store
+	  compressed RAM pages.  zsmalloc uses virtual memory mapping
+	  in order to reduce fragmentation.  However, this results in a
+	  non-standard allocator interface where a handle, not a pointer, is
+	  returned by an alloc().  This handle must be mapped in order to
+	  access the allocated space.
+
+config ZSMALLOC_STAT
+	bool "Export zsmalloc statistics"
+	depends on ZSMALLOC
+	select DEBUG_FS
+	help
+	  This option enables code in the zsmalloc to collect various
+	  statistics about what's happening in zsmalloc and exports that
+	  information to userspace via debugfs.
+	  If unsure, say N.
+
+menu "SLAB allocator options"
+
 choice
 	prompt "Choose SLAB allocator"
 	default SLUB
@@ -90,6 +275,19 @@ config SLAB_FREELIST_HARDENED
 	  sanity-checking than others. This option is most effective with
 	  CONFIG_SLUB.
 
+config SLUB_CPU_PARTIAL
+	default y
+	depends on SLUB && SMP
+	bool "SLUB per cpu partial cache"
+	help
+	  Per cpu partial caches accelerate objects allocation and freeing
+	  that is local to a processor at the price of more indeterminism
+	  in the latency of the free. On overflow these caches will be cleared
+	  which requires the taking of locks that may cause latency spikes.
+	  Typically one would choose no for a realtime system.
+
+endmenu # SLAB allocator options
+
 config SHUFFLE_PAGE_ALLOCATOR
 	bool "Page allocator randomization"
 	default SLAB_FREELIST_RANDOM && ACPI_NUMA
@@ -114,17 +312,6 @@ config SHUFFLE_PAGE_ALLOCATOR
 
 	  Say Y if unsure.
 
-config SLUB_CPU_PARTIAL
-	default y
-	depends on SLUB && SMP
-	bool "SLUB per cpu partial cache"
-	help
-	  Per cpu partial caches accelerate objects allocation and freeing
-	  that is local to a processor at the price of more indeterminism
-	  in the latency of the free. On overflow these caches will be cleared
-	  which requires the taking of locks that may cause latency spikes.
-	  Typically one would choose no for a realtime system.
-
 config SELECT_MEMORY_MODEL
 	def_bool y
 	depends on ARCH_SELECT_MEMORY_MODEL
@@ -250,14 +437,16 @@ config ARCH_ENABLE_MEMORY_HOTPLUG
 	bool
 
 # eventually, we can have this option just 'select SPARSEMEM'
-config MEMORY_HOTPLUG
-	bool "Allow for memory hot-add"
+menuconfig MEMORY_HOTPLUG
+	bool "Memory hotplug"
 	select MEMORY_ISOLATION
 	depends on SPARSEMEM
 	depends on ARCH_ENABLE_MEMORY_HOTPLUG
 	depends on 64BIT
 	select NUMA_KEEP_MEMINFO if NUMA
 
+if MEMORY_HOTPLUG
+
 config MEMORY_HOTPLUG_DEFAULT_ONLINE
 	bool "Online the newly added memory blocks by default"
 	depends on MEMORY_HOTPLUG
@@ -287,6 +476,8 @@ config MHP_MEMMAP_ON_MEMORY
 	depends on MEMORY_HOTPLUG && SPARSEMEM_VMEMMAP
 	depends on ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE
 
+endif # MEMORY_HOTPLUG
+
 # Heavily threaded applications may benefit from splitting the mm-wide
 # page_table_lock, so that faults on different parts of the user address
 # space can be handled with less contention: split it at this NR_CPUS.
@@ -501,7 +692,7 @@ config NOMMU_INITIAL_TRIM_EXCESS
 
 	  See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
 
-config TRANSPARENT_HUGEPAGE
+menuconfig TRANSPARENT_HUGEPAGE
 	bool "Transparent Hugepage Support"
 	depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE && !PREEMPT_RT
 	select COMPACTION
@@ -516,6 +707,8 @@ config TRANSPARENT_HUGEPAGE
 
 	  If memory constrained on embedded, you may want to say N.
 
+if TRANSPARENT_HUGEPAGE
+
 choice
 	prompt "Transparent Hugepage Support sysfs defaults"
 	depends on TRANSPARENT_HUGEPAGE
@@ -556,6 +749,19 @@ config THP_SWAP
 
 	  For selection by architectures with reasonable THP sizes.
 
+config READ_ONLY_THP_FOR_FS
+	bool "Read-only THP for filesystems (EXPERIMENTAL)"
+	depends on TRANSPARENT_HUGEPAGE && SHMEM
+
+	help
+	  Allow khugepaged to put read-only file-backed pages in THP.
+
+	  This is marked experimental because it is a new feature. Write
+	  support of file THPs will be developed in the next few release
+	  cycles.
+
+endif # TRANSPARENT_HUGEPAGE
+
 #
 # UP and nommu archs use km based percpu allocator
 #
@@ -640,188 +846,6 @@ config MEM_SOFT_DIRTY
 
 	  See Documentation/admin-guide/mm/soft-dirty.rst for more details.
 
-config ZSWAP
-	bool "Compressed cache for swap pages (EXPERIMENTAL)"
-	depends on SWAP && CRYPTO=y
-	select FRONTSWAP
-	select ZPOOL
-	help
-	  A lightweight compressed cache for swap pages.  It takes
-	  pages that are in the process of being swapped out and attempts to
-	  compress them into a dynamically allocated RAM-based memory pool.
-	  This can result in a significant I/O reduction on swap device and,
-	  in the case where decompressing from RAM is faster that swap device
-	  reads, can also improve workload performance.
-
-	  This is marked experimental because it is a new feature (as of
-	  v3.11) that interacts heavily with memory reclaim.  While these
-	  interactions don't cause any known issues on simple memory setups,
-	  they have not be fully explored on the large set of potential
-	  configurations and workloads that exist.
-
-choice
-	prompt "Compressed cache for swap pages default compressor"
-	depends on ZSWAP
-	default ZSWAP_COMPRESSOR_DEFAULT_LZO
-	help
-	  Selects the default compression algorithm for the compressed cache
-	  for swap pages.
-
-	  For an overview what kind of performance can be expected from
-	  a particular compression algorithm please refer to the benchmarks
-	  available at the following LWN page:
-	  https://lwn.net/Articles/751795/
-
-	  If in doubt, select 'LZO'.
-
-	  The selection made here can be overridden by using the kernel
-	  command line 'zswap.compressor=' option.
-
-config ZSWAP_COMPRESSOR_DEFAULT_DEFLATE
-	bool "Deflate"
-	select CRYPTO_DEFLATE
-	help
-	  Use the Deflate algorithm as the default compression algorithm.
-
-config ZSWAP_COMPRESSOR_DEFAULT_LZO
-	bool "LZO"
-	select CRYPTO_LZO
-	help
-	  Use the LZO algorithm as the default compression algorithm.
-
-config ZSWAP_COMPRESSOR_DEFAULT_842
-	bool "842"
-	select CRYPTO_842
-	help
-	  Use the 842 algorithm as the default compression algorithm.
-
-config ZSWAP_COMPRESSOR_DEFAULT_LZ4
-	bool "LZ4"
-	select CRYPTO_LZ4
-	help
-	  Use the LZ4 algorithm as the default compression algorithm.
-
-config ZSWAP_COMPRESSOR_DEFAULT_LZ4HC
-	bool "LZ4HC"
-	select CRYPTO_LZ4HC
-	help
-	  Use the LZ4HC algorithm as the default compression algorithm.
-
-config ZSWAP_COMPRESSOR_DEFAULT_ZSTD
-	bool "zstd"
-	select CRYPTO_ZSTD
-	help
-	  Use the zstd algorithm as the default compression algorithm.
-endchoice
-
-config ZSWAP_COMPRESSOR_DEFAULT
-       string
-       depends on ZSWAP
-       default "deflate" if ZSWAP_COMPRESSOR_DEFAULT_DEFLATE
-       default "lzo" if ZSWAP_COMPRESSOR_DEFAULT_LZO
-       default "842" if ZSWAP_COMPRESSOR_DEFAULT_842
-       default "lz4" if ZSWAP_COMPRESSOR_DEFAULT_LZ4
-       default "lz4hc" if ZSWAP_COMPRESSOR_DEFAULT_LZ4HC
-       default "zstd" if ZSWAP_COMPRESSOR_DEFAULT_ZSTD
-       default ""
-
-choice
-	prompt "Compressed cache for swap pages default allocator"
-	depends on ZSWAP
-	default ZSWAP_ZPOOL_DEFAULT_ZBUD
-	help
-	  Selects the default allocator for the compressed cache for
-	  swap pages.
-	  The default is 'zbud' for compatibility, however please do
-	  read the description of each of the allocators below before
-	  making the right choice.
-
-	  The selection made here can be overridden by using the kernel
-	  command line 'zswap.zpool=' option.
-
-config ZSWAP_ZPOOL_DEFAULT_ZBUD
-	bool "zbud"
-	select ZBUD
-	help
-	  Use the zbud allocator as the default allocator.
-
-config ZSWAP_ZPOOL_DEFAULT_Z3FOLD
-	bool "z3fold"
-	select Z3FOLD
-	help
-	  Use the z3fold allocator as the default allocator.
-
-config ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
-	bool "zsmalloc"
-	select ZSMALLOC
-	help
-	  Use the zsmalloc allocator as the default allocator.
-endchoice
-
-config ZSWAP_ZPOOL_DEFAULT
-       string
-       depends on ZSWAP
-       default "zbud" if ZSWAP_ZPOOL_DEFAULT_ZBUD
-       default "z3fold" if ZSWAP_ZPOOL_DEFAULT_Z3FOLD
-       default "zsmalloc" if ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
-       default ""
-
-config ZSWAP_DEFAULT_ON
-	bool "Enable the compressed cache for swap pages by default"
-	depends on ZSWAP
-	help
-	  If selected, the compressed cache for swap pages will be enabled
-	  at boot, otherwise it will be disabled.
-
-	  The selection made here can be overridden by using the kernel
-	  command line 'zswap.enabled=' option.
-
-config ZPOOL
-	tristate "Common API for compressed memory storage"
-	help
-	  Compressed memory storage API.  This allows using either zbud or
-	  zsmalloc.
-
-config ZBUD
-	tristate "Low (Up to 2x) density storage for compressed pages"
-	depends on ZPOOL
-	help
-	  A special purpose allocator for storing compressed pages.
-	  It is designed to store up to two compressed pages per physical
-	  page.  While this design limits storage density, it has simple and
-	  deterministic reclaim properties that make it preferable to a higher
-	  density approach when reclaim will be used.
-
-config Z3FOLD
-	tristate "Up to 3x density storage for compressed pages"
-	depends on ZPOOL
-	help
-	  A special purpose allocator for storing compressed pages.
-	  It is designed to store up to three compressed pages per physical
-	  page. It is a ZBUD derivative so the simplicity and determinism are
-	  still there.
-
-config ZSMALLOC
-	tristate "Memory allocator for compressed pages"
-	depends on MMU
-	help
-	  zsmalloc is a slab-based memory allocator designed to store
-	  compressed RAM pages.  zsmalloc uses virtual memory mapping
-	  in order to reduce fragmentation.  However, this results in a
-	  non-standard allocator interface where a handle, not a pointer, is
-	  returned by an alloc().  This handle must be mapped in order to
-	  access the allocated space.
-
-config ZSMALLOC_STAT
-	bool "Export zsmalloc statistics"
-	depends on ZSMALLOC
-	select DEBUG_FS
-	help
-	  This option enables code in the zsmalloc to collect various
-	  statistics about what's happening in zsmalloc and exports that
-	  information to userspace via debugfs.
-	  If unsure, say N.
-
 config GENERIC_EARLY_IOREMAP
 	bool
 
@@ -978,17 +1002,6 @@ comment "GUP_TEST needs to have DEBUG_FS enabled"
 config GUP_GET_PTE_LOW_HIGH
 	bool
 
-config READ_ONLY_THP_FOR_FS
-	bool "Read-only THP for filesystems (EXPERIMENTAL)"
-	depends on TRANSPARENT_HUGEPAGE && SHMEM
-
-	help
-	  Allow khugepaged to put read-only file-backed pages in THP.
-
-	  This is marked experimental because it is a new feature. Write
-	  support of file THPs will be developed in the next few release
-	  cycles.
-
 config ARCH_HAS_PTE_SPECIAL
 	bool
 
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH 3/5] mm: Kconfig: simplify zswap configuration
@ 2022-04-27 16:00   ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Hocko, Roman Gushchin, Shakeel Butt, Seth Jennings,
	Dan Streetman, linux-mm, cgroups, linux-kernel, kernel-team

- CONFIG_ZRAM: Zram is a user-facing feature, whereas zsmalloc is
  not. Don't make the user chase down a technical dependency like
  that; just select it automatically when zram is requested. The
  CONFIG_CRYPTO dependency is redundant due to more specific deps.

- CONFIG_ZPOOL: This is not a user-facing feature. Hide the symbol and
  have it selected automatically as needed.

- CONFIG_ZSWAP: Select CRYPTO instead of depending on it. Common pattern.

- Make the ZSWAP suboptions and their descriptions (compression,
  allocation backend) a bit more straightforward for the user.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 drivers/block/zram/Kconfig |  3 ++-
 mm/Kconfig                 | 55 +++++++++++++++++---------------------
 2 files changed, 27 insertions(+), 31 deletions(-)

diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kconfig
index 668c6bf2554d..e4163d4b936b 100644
--- a/drivers/block/zram/Kconfig
+++ b/drivers/block/zram/Kconfig
@@ -1,8 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0
 config ZRAM
 	tristate "Compressed RAM block device support"
-	depends on BLOCK && SYSFS && ZSMALLOC && CRYPTO
+	depends on BLOCK && SYSFS
 	depends on CRYPTO_LZO || CRYPTO_ZSTD || CRYPTO_LZ4 || CRYPTO_LZ4HC || CRYPTO_842
+	select ZSMALLOC
 	help
 	  Creates virtual block devices called /dev/zramX (X = 0, 1, ...).
 	  Pages written to these disks are compressed and stored in memory
diff --git a/mm/Kconfig b/mm/Kconfig
index 2c5935a28edf..c87ffd0d98b3 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -9,6 +9,9 @@ menu "Memory Management options"
 config ARCH_NO_SWAP
 	bool
 
+config ZPOOL
+	bool
+
 menuconfig SWAP
 	bool "Support for paging of anonymous memory (swap)"
 	depends on MMU && BLOCK && !ARCH_NO_SWAP
@@ -21,8 +24,9 @@ menuconfig SWAP
 
 config ZSWAP
 	bool "Compressed cache for swap pages (EXPERIMENTAL)"
-	depends on SWAP && CRYPTO=y
+	depends on SWAP
 	select FRONTSWAP
+	select CRYPTO
 	select ZPOOL
 	help
 	  A lightweight compressed cache for swap pages.  It takes
@@ -38,8 +42,18 @@ config ZSWAP
 	  they have not been fully explored on the large set of potential
 	  configurations and workloads that exist.
 
+config ZSWAP_DEFAULT_ON
+	bool "Enable the compressed cache for swap pages by default"
+	depends on ZSWAP
+	help
+	  If selected, the compressed cache for swap pages will be enabled
+	  at boot, otherwise it will be disabled.
+
+	  The selection made here can be overridden by using the kernel
+	  command line 'zswap.enabled=' option.
+
 choice
-	prompt "Compressed cache for swap pages default compressor"
+	prompt "Default compressor"
 	depends on ZSWAP
 	default ZSWAP_COMPRESSOR_DEFAULT_LZO
 	help
@@ -105,7 +119,7 @@ config ZSWAP_COMPRESSOR_DEFAULT
        default ""
 
 choice
-	prompt "Compressed cache for swap pages default allocator"
+	prompt "Default allocator"
 	depends on ZSWAP
 	default ZSWAP_ZPOOL_DEFAULT_ZBUD
 	help
@@ -145,26 +159,9 @@ config ZSWAP_ZPOOL_DEFAULT
        default "zsmalloc" if ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
        default ""
 
-config ZSWAP_DEFAULT_ON
-	bool "Enable the compressed cache for swap pages by default"
-	depends on ZSWAP
-	help
-	  If selected, the compressed cache for swap pages will be enabled
-	  at boot, otherwise it will be disabled.
-
-	  The selection made here can be overridden by using the kernel
-	  command line 'zswap.enabled=' option.
-
-config ZPOOL
-	tristate "Common API for compressed memory storage"
-	depends on ZSWAP
-	help
-	  Compressed memory storage API.  This allows using either zbud or
-	  zsmalloc.
-
 config ZBUD
-	tristate "Low (Up to 2x) density storage for compressed pages"
-	depends on ZPOOL
+	tristate "2:1 compression allocator (zbud)"
+	depends on ZSWAP
 	help
 	  A special purpose allocator for storing compressed pages.
 	  It is designed to store up to two compressed pages per physical
@@ -173,8 +170,8 @@ config ZBUD
 	  density approach when reclaim will be used.
 
 config Z3FOLD
-	tristate "Up to 3x density storage for compressed pages"
-	depends on ZPOOL
+	tristate "3:1 compression allocator (z3fold)"
+	depends on ZSWAP
 	help
 	  A special purpose allocator for storing compressed pages.
 	  It is designed to store up to three compressed pages per physical
@@ -182,15 +179,13 @@ config Z3FOLD
 	  still there.
 
 config ZSMALLOC
-	tristate "Memory allocator for compressed pages"
+	tristate
+	prompt "N:1 compression allocator (zsmalloc)" if ZSWAP
 	depends on MMU
 	help
 	  zsmalloc is a slab-based memory allocator designed to store
-	  compressed RAM pages.  zsmalloc uses virtual memory mapping
-	  in order to reduce fragmentation.  However, this results in a
-	  non-standard allocator interface where a handle, not a pointer, is
-	  returned by an alloc().  This handle must be mapped in order to
-	  access the allocated space.
+	  pages of various compression levels efficiently. It achieves
+	  the highest storage density with the least amount of fragmentation.
 
 config ZSMALLOC_STAT
 	bool "Export zsmalloc statistics"
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-27 16:00   ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Hocko, Roman Gushchin, Shakeel Butt, Seth Jennings,
	Dan Streetman, linux-mm, cgroups, linux-kernel, kernel-team

Currently it requires poking at debugfs to figure out the size and
population of the zswap cache on a host. There are no counters for
reads and writes against the cache. As a result, it's difficult to
understand zswap behavior on production systems.

Print zswap memory consumption and how many pages are zswapped out in
/proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
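
For illustration only (not part of this patch), here is a minimal
userspace sketch of how a monitoring tool could consume the new
counters. The /proc field names are the ones added by this patch;
everything else is just an assumption for the example:

#include <stdio.h>

int main(void)
{
	char line[256];
	unsigned long val;
	FILE *f;

	/* New meminfo fields: zswap pool size and zswapped-out memory */
	f = fopen("/proc/meminfo", "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "Zswap: %lu kB", &val) == 1)
			printf("zswap pool size: %lu kB\n", val);
		else if (sscanf(line, "Zswapped: %lu kB", &val) == 1)
			printf("zswapped memory: %lu kB\n", val);
	}
	fclose(f);

	/* New vmstat events: reads from and writes to the zswap cache */
	f = fopen("/proc/vmstat", "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "zswpin %lu", &val) == 1)
			printf("zswpin:  %lu\n", val);
		else if (sscanf(line, "zswpout %lu", &val) == 1)
			printf("zswpout: %lu\n", val);
	}
	fclose(f);
	return 0;
}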

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 fs/proc/meminfo.c             |  7 +++++++
 include/linux/swap.h          |  5 +++++
 include/linux/vm_event_item.h |  4 ++++
 mm/vmstat.c                   |  4 ++++
 mm/zswap.c                    | 13 ++++++-------
 5 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 6fa761c9cc78..6e89f0e2fd20 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 
 	show_val_kb(m, "SwapTotal:      ", i.totalswap);
 	show_val_kb(m, "SwapFree:       ", i.freeswap);
+#ifdef CONFIG_ZSWAP
+	seq_printf(m,  "Zswap:          %8lu kB\n",
+		   (unsigned long)(zswap_pool_total_size >> 10));
+	seq_printf(m,  "Zswapped:       %8lu kB\n",
+		   (unsigned long)atomic_read(&zswap_stored_pages) <<
+		   (PAGE_SHIFT - 10));
+#endif
 	show_val_kb(m, "Dirty:          ",
 		    global_node_page_state(NR_FILE_DIRTY));
 	show_val_kb(m, "Writeback:      ",
diff --git a/include/linux/swap.h b/include/linux/swap.h
index b82c196d8867..07074afa79a7 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -632,6 +632,11 @@ static inline int mem_cgroup_swappiness(struct mem_cgroup *mem)
 }
 #endif
 
+#ifdef CONFIG_ZSWAP
+extern u64 zswap_pool_total_size;
+extern atomic_t zswap_stored_pages;
+#endif
+
 #if defined(CONFIG_SWAP) && defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
 extern void __cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
 static inline  void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 5e80138ce624..1ce8fadb2b1c 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -132,6 +132,10 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #ifdef CONFIG_KSM
 		COW_KSM,
 #endif
+#ifdef CONFIG_ZSWAP
+		ZSWPIN,
+		ZSWPOUT,
+#endif
 #ifdef CONFIG_X86
 		DIRECT_MAP_LEVEL2_SPLIT,
 		DIRECT_MAP_LEVEL3_SPLIT,
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 4a2aa2fa88db..da7e389cf33c 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1392,6 +1392,10 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_KSM
 	"cow_ksm",
 #endif
+#ifdef CONFIG_ZSWAP
+	"zswpin",
+	"zswpout",
+#endif
 #ifdef CONFIG_X86
 	"direct_map_level2_splits",
 	"direct_map_level3_splits",
diff --git a/mm/zswap.c b/mm/zswap.c
index 2c5db4cbedea..e3c16a70f533 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -42,9 +42,9 @@
 * statistics
 **********************************/
 /* Total bytes used by the compressed storage */
-static u64 zswap_pool_total_size;
+u64 zswap_pool_total_size;
 /* The number of compressed pages currently stored in zswap */
-static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
+atomic_t zswap_stored_pages = ATOMIC_INIT(0);
 /* The number of same-value filled pages currently stored in zswap */
 static atomic_t zswap_same_filled_pages = ATOMIC_INIT(0);
 
@@ -1243,6 +1243,7 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
 	/* update stats */
 	atomic_inc(&zswap_stored_pages);
 	zswap_update_total_size();
+	count_vm_event(ZSWPOUT);
 
 	return 0;
 
@@ -1285,11 +1286,10 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
 		zswap_fill_page(dst, entry->value);
 		kunmap_atomic(dst);
 		ret = 0;
-		goto freeentry;
+		goto stats;
 	}
 
 	if (!zpool_can_sleep_mapped(entry->pool->zpool)) {
-
 		tmp = kmalloc(entry->length, GFP_ATOMIC);
 		if (!tmp) {
 			ret = -ENOMEM;
@@ -1304,10 +1304,8 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
 		src += sizeof(struct zswap_header);
 
 	if (!zpool_can_sleep_mapped(entry->pool->zpool)) {
-
 		memcpy(tmp, src, entry->length);
 		src = tmp;
-
 		zpool_unmap_handle(entry->pool->zpool, entry->handle);
 	}
 
@@ -1326,7 +1324,8 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
 		kfree(tmp);
 
 	BUG_ON(ret);
-
+stats:
+	count_vm_event(ZSWPIN);
 freeentry:
 	spin_lock(&tree->lock);
 	zswap_entry_put(tree, entry);
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [PATCH 5/5] zswap: memcg accounting
@ 2022-04-27 16:00   ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-27 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Hocko, Roman Gushchin, Shakeel Butt, Seth Jennings,
	Dan Streetman, linux-mm, cgroups, linux-kernel, kernel-team

Applications can currently escape their cgroup memory containment when
zswap is enabled. This patch adds per-cgroup tracking and limiting of
zswap backend memory to rectify this.

The existing cgroup2 memory.stat file is extended to show zswap
statistics analogous to what's in meminfo and vmstat. Furthermore, two
new control files, memory.zswap.current and memory.zswap.max, are
added to allow tuning zswap usage on a per-workload basis. This is
important since not all workloads benefit from zswap equally; some
even suffer compared to disk swap when memory contents don't compress
well. The optimal size of the zswap pool, and the threshold for
writeback, also depend on the size of the workload's warm set.

The implementation doesn't use a traditional page_counter transaction.
zswap is unconventional as a memory consumer in that we only know the
amount of memory to charge once expensive compression has occurred. If
zswap is disabled or the limit is already exceeded, we obviously don't
want to compress page upon page only to reject them all. Instead, the
limit is checked against current usage, then we compress and charge.
This allows some limit overrun, but not enough to matter in practice.
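
For illustration only (not part of this patch), a minimal sketch of how
a container manager might drive the new interface. The cgroup path and
the 512M value are assumptions for the example; the file names and the
"max" semantics are as documented in the cgroup-v2.rst hunk below:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Assumed cgroup for the example workload */
	const char *grp = "/sys/fs/cgroup/workload.slice";
	char path[256], buf[64];
	ssize_t len;
	int fd;

	/* Cap this cgroup's zswap consumption at 512M ("max" = no limit) */
	snprintf(path, sizeof(path), "%s/memory.zswap.max", grp);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return 1;
	if (write(fd, "512M", strlen("512M")) < 0)
		perror("write memory.zswap.max");
	close(fd);

	/* Read back current zswap consumption in bytes */
	snprintf(path, sizeof(path), "%s/memory.zswap.current", grp);
	fd = open(path, O_RDONLY);
	if (fd < 0)
		return 1;
	len = read(fd, buf, sizeof(buf) - 1);
	if (len > 0) {
		buf[len] = '\0';
		printf("memory.zswap.current: %s", buf);
	}
	close(fd);
	return 0;
}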

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 Documentation/admin-guide/cgroup-v2.rst |  21 +++
 include/linux/memcontrol.h              |  54 +++++++
 mm/memcontrol.c                         | 196 +++++++++++++++++++++++-
 mm/zswap.c                              |  37 ++++-
 4 files changed, 293 insertions(+), 15 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 19bcd73cad03..b4c262e99b5f 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1347,6 +1347,12 @@ PAGE_SIZE multiple when read back.
 		Amount of cached filesystem data that is swap-backed,
 		such as tmpfs, shm segments, shared anonymous mmap()s
 
+	  zswap
+		Amount of memory consumed by the zswap compression backend.
+
+	  zswapped
+		Amount of application memory swapped out to zswap.
+
 	  file_mapped
 		Amount of cached filesystem data mapped with mmap()
 
@@ -1537,6 +1543,21 @@ PAGE_SIZE multiple when read back.
 	higher than the limit for an extended period of time.  This
 	reduces the impact on the workload and memory management.
 
+  memory.zswap.current
+	A read-only single value file which exists on non-root
+	cgroups.
+
+	The total amount of memory consumed by the zswap compression
+	backend.
+
+  memory.zswap.max
+	A read-write single value file which exists on non-root
+	cgroups.  The default is "max".
+
+	Zswap usage hard limit. If a cgroup's zswap pool reaches this
+	limit, it will refuse to take any more stores before existing
+	entries fault back in or are written out to disk.
+
   memory.pressure
 	A read-only nested-keyed file.
 
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index fe580cb96683..3385ce81ecf3 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -35,6 +35,8 @@ enum memcg_stat_item {
 	MEMCG_PERCPU_B,
 	MEMCG_VMALLOC,
 	MEMCG_KMEM,
+	MEMCG_ZSWAP_B,
+	MEMCG_ZSWAPPED,
 	MEMCG_NR_STAT,
 };
 
@@ -252,6 +254,10 @@ struct mem_cgroup {
 	/* Range enforcement for interrupt charges */
 	struct work_struct high_work;
 
+#ifdef CONFIG_ZSWAP
+	unsigned long zswap_max;
+#endif
+
 	unsigned long soft_limit;
 
 	/* vmpressure notifications */
@@ -1264,6 +1270,10 @@ struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css)
 	return NULL;
 }
 
+static inline void obj_cgroup_put(struct obj_cgroup *objcg)
+{
+}
+
 static inline void mem_cgroup_put(struct mem_cgroup *memcg)
 {
 }
@@ -1680,6 +1690,7 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order);
 void __memcg_kmem_uncharge_page(struct page *page, int order);
 
 struct obj_cgroup *get_obj_cgroup_from_current(void);
+struct obj_cgroup *get_obj_cgroup_from_page(struct page *page);
 
 int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size);
 void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size);
@@ -1716,6 +1727,20 @@ static inline int memcg_kmem_id(struct mem_cgroup *memcg)
 
 struct mem_cgroup *mem_cgroup_from_obj(void *p);
 
+static inline void count_objcg_event(struct obj_cgroup *objcg,
+				     enum vm_event_item idx)
+{
+	struct mem_cgroup *memcg;
+
+	if (mem_cgroup_kmem_disabled())
+		return;
+
+	rcu_read_lock();
+	memcg = obj_cgroup_memcg(objcg);
+	count_memcg_events(memcg, idx, 1);
+	rcu_read_unlock();
+}
+
 #else
 static inline bool mem_cgroup_kmem_disabled(void)
 {
@@ -1742,6 +1767,11 @@ static inline void __memcg_kmem_uncharge_page(struct page *page, int order)
 {
 }
 
+static inline struct obj_cgroup *get_obj_cgroup_from_page(struct page *page)
+{
+	return NULL;
+}
+
 static inline bool memcg_kmem_enabled(void)
 {
 	return false;
@@ -1757,6 +1787,30 @@ static inline struct mem_cgroup *mem_cgroup_from_obj(void *p)
        return NULL;
 }
 
+static inline void count_objcg_event(struct obj_cgroup *objcg,
+				     enum vm_event_item idx)
+{
+}
+
 #endif /* CONFIG_MEMCG_KMEM */
 
+#if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
+bool obj_cgroup_may_zswap(struct obj_cgroup *objcg);
+void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size);
+void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size);
+#else
+static inline bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
+{
+	return true;
+}
+static inline void obj_cgroup_charge_zswap(struct obj_cgroup *objcg,
+					   size_t size)
+{
+}
+static inline void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg,
+					     size_t size)
+{
+}
+#endif
+
 #endif /* _LINUX_MEMCONTROL_H */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 04cea4fa362a..cbb9b43bdb80 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1398,6 +1398,10 @@ static const struct memory_stat memory_stats[] = {
 	{ "sock",			MEMCG_SOCK			},
 	{ "vmalloc",			MEMCG_VMALLOC			},
 	{ "shmem",			NR_SHMEM			},
+#ifdef CONFIG_ZSWAP
+	{ "zswap",			MEMCG_ZSWAP_B			},
+	{ "zswapped",			MEMCG_ZSWAPPED			},
+#endif
 	{ "file_mapped",		NR_FILE_MAPPED			},
 	{ "file_dirty",			NR_FILE_DIRTY			},
 	{ "file_writeback",		NR_WRITEBACK			},
@@ -1432,6 +1436,7 @@ static int memcg_page_state_unit(int item)
 {
 	switch (item) {
 	case MEMCG_PERCPU_B:
+	case MEMCG_ZSWAP_B:
 	case NR_SLAB_RECLAIMABLE_B:
 	case NR_SLAB_UNRECLAIMABLE_B:
 	case WORKINGSET_REFAULT_ANON:
@@ -1512,6 +1517,13 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
 	seq_buf_printf(&s, "%s %lu\n", vm_event_name(PGLAZYFREED),
 		       memcg_events(memcg, PGLAZYFREED));
 
+#ifdef CONFIG_ZSWAP
+	seq_buf_printf(&s, "%s %lu\n", vm_event_name(ZSWPIN),
+		       memcg_events(memcg, ZSWPIN));
+	seq_buf_printf(&s, "%s %lu\n", vm_event_name(ZSWPOUT),
+		       memcg_events(memcg, ZSWPOUT));
+#endif
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	seq_buf_printf(&s, "%s %lu\n", vm_event_name(THP_FAULT_ALLOC),
 		       memcg_events(memcg, THP_FAULT_ALLOC));
@@ -2883,6 +2895,19 @@ struct mem_cgroup *mem_cgroup_from_obj(void *p)
 	return page_memcg_check(folio_page(folio, 0));
 }
 
+static struct obj_cgroup *__get_obj_cgroup_from_memcg(struct mem_cgroup *memcg)
+{
+	struct obj_cgroup *objcg = NULL;
+
+	for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
+		objcg = rcu_dereference(memcg->objcg);
+		if (objcg && obj_cgroup_tryget(objcg))
+			break;
+		objcg = NULL;
+	}
+	return objcg;
+}
+
 __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void)
 {
 	struct obj_cgroup *objcg = NULL;
@@ -2896,15 +2921,32 @@ __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void)
 		memcg = active_memcg();
 	else
 		memcg = mem_cgroup_from_task(current);
-
-	for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
-		objcg = rcu_dereference(memcg->objcg);
-		if (objcg && obj_cgroup_tryget(objcg))
-			break;
-		objcg = NULL;
-	}
+	objcg = __get_obj_cgroup_from_memcg(memcg);
 	rcu_read_unlock();
+	return objcg;
+}
+
+struct obj_cgroup *get_obj_cgroup_from_page(struct page *page)
+{
+	struct obj_cgroup *objcg;
+
+	if (!memcg_kmem_enabled() || memcg_kmem_bypass())
+		return NULL;
 
+	if (PageMemcgKmem(page)) {
+		objcg = __folio_objcg(page_folio(page));
+		obj_cgroup_get(objcg);
+	} else {
+		struct mem_cgroup *memcg;
+
+		rcu_read_lock();
+		memcg = __folio_memcg(page_folio(page));
+		if (memcg)
+			objcg = __get_obj_cgroup_from_memcg(memcg);
+		else
+			objcg = NULL;
+		rcu_read_unlock();
+	}
 	return objcg;
 }
 
@@ -5142,6 +5184,9 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 
 	page_counter_set_high(&memcg->memory, PAGE_COUNTER_MAX);
 	memcg->soft_limit = PAGE_COUNTER_MAX;
+#ifdef CONFIG_ZSWAP
+	memcg->zswap_max = PAGE_COUNTER_MAX;
+#endif
 	page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX);
 	if (parent) {
 		memcg->swappiness = mem_cgroup_swappiness(parent);
@@ -7406,6 +7451,139 @@ static struct cftype memsw_files[] = {
 	{ },	/* terminate */
 };
 
+#ifdef CONFIG_ZSWAP
+/**
+ * obj_cgroup_may_zswap - check if this cgroup can zswap
+ * @objcg: the object cgroup
+ *
+ * Check if the hierarchical zswap limit has been reached.
+ *
+ * This doesn't check for specific headroom, and it is not atomic
+ * either. But with zswap, the size of the allocation is only known
+ * once compression has occured, and this optimistic pre-check avoids
+ * spending cycles on compression when there is already no room left
+ * or zswap is disabled altogether somewhere in the hierarchy.
+ */
+bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
+{
+	struct mem_cgroup *memcg, *original_memcg;
+	bool ret = true;
+
+	original_memcg = get_mem_cgroup_from_objcg(objcg);
+	for (memcg = original_memcg; memcg != root_mem_cgroup;
+	     memcg = parent_mem_cgroup(memcg)) {
+		unsigned long max = READ_ONCE(memcg->zswap_max);
+		unsigned long pages;
+
+		if (max == PAGE_COUNTER_MAX)
+			continue;
+		if (max == 0) {
+			ret = false;
+			break;
+		}
+
+		cgroup_rstat_flush(memcg->css.cgroup);
+		pages = memcg_page_state(memcg, MEMCG_ZSWAP_B) / PAGE_SIZE;
+		if (pages < max)
+			continue;
+		ret = false;
+		break;
+	}
+	mem_cgroup_put(original_memcg);
+	return ret;
+}
+
+/**
+ * obj_cgroup_charge_zswap - charge compression backend memory
+ * @objcg: the object cgroup
+ * @size: size of compressed object
+ *
+ * This forces the charge after obj_cgroup_may_zswap() allowed
+ * compression and storage in zswap for this cgroup to go ahead.
+ */
+void obj_cgroup_charge_zswap(struct obj_cgroup *objcg, size_t size)
+{
+	struct mem_cgroup *memcg;
+
+	VM_WARN_ON_ONCE(!(current->flags & PF_MEMALLOC));
+
+	/* PF_MEMALLOC context, charging must succeed */
+	if (obj_cgroup_charge(objcg, GFP_KERNEL, size))
+		VM_WARN_ON_ONCE(1);
+
+	rcu_read_lock();
+	memcg = obj_cgroup_memcg(objcg);
+	mod_memcg_state(memcg, MEMCG_ZSWAP_B, size);
+	mod_memcg_state(memcg, MEMCG_ZSWAPPED, 1);
+	rcu_read_unlock();
+}
+
+/**
+ * obj_cgroup_uncharge_zswap - uncharge compression backend memory
+ * @objcg: the object cgroup
+ * @size: size of compressed object
+ *
+ * Uncharges zswap memory on page in.
+ */
+void obj_cgroup_uncharge_zswap(struct obj_cgroup *objcg, size_t size)
+{
+	struct mem_cgroup *memcg;
+
+	obj_cgroup_uncharge(objcg, size);
+
+	rcu_read_lock();
+	memcg = obj_cgroup_memcg(objcg);
+	mod_memcg_state(memcg, MEMCG_ZSWAP_B, -size);
+	mod_memcg_state(memcg, MEMCG_ZSWAPPED, -1);
+	rcu_read_unlock();
+}
+
+static u64 zswap_current_read(struct cgroup_subsys_state *css,
+			      struct cftype *cft)
+{
+	cgroup_rstat_flush(css->cgroup);
+	return memcg_page_state(mem_cgroup_from_css(css), MEMCG_ZSWAP_B);
+}
+
+static int zswap_max_show(struct seq_file *m, void *v)
+{
+	return seq_puts_memcg_tunable(m,
+		READ_ONCE(mem_cgroup_from_seq(m)->zswap_max));
+}
+
+static ssize_t zswap_max_write(struct kernfs_open_file *of,
+			       char *buf, size_t nbytes, loff_t off)
+{
+	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
+	unsigned long max;
+	int err;
+
+	buf = strstrip(buf);
+	err = page_counter_memparse(buf, "max", &max);
+	if (err)
+		return err;
+
+	xchg(&memcg->zswap_max, max);
+
+	return nbytes;
+}
+
+static struct cftype zswap_files[] = {
+	{
+		.name = "zswap.current",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_u64 = zswap_current_read,
+	},
+	{
+		.name = "zswap.max",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.seq_show = zswap_max_show,
+		.write = zswap_max_write,
+	},
+	{ }	/* terminate */
+};
+#endif /* CONFIG_ZSWAP */
+
 /*
  * If mem_cgroup_swap_init() is implemented as a subsys_initcall()
  * instead of a core_initcall(), this could mean cgroup_memory_noswap still
@@ -7424,7 +7602,9 @@ static int __init mem_cgroup_swap_init(void)
 
 	WARN_ON(cgroup_add_dfl_cftypes(&memory_cgrp_subsys, swap_files));
 	WARN_ON(cgroup_add_legacy_cftypes(&memory_cgrp_subsys, memsw_files));
-
+#ifdef CONFIG_ZSWAP
+	WARN_ON(cgroup_add_dfl_cftypes(&memory_cgrp_subsys, zswap_files));
+#endif
 	return 0;
 }
 core_initcall(mem_cgroup_swap_init);
diff --git a/mm/zswap.c b/mm/zswap.c
index e3c16a70f533..104835b379ec 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -188,6 +188,7 @@ struct zswap_entry {
 		unsigned long handle;
 		unsigned long value;
 	};
+	struct obj_cgroup *objcg;
 };
 
 struct zswap_header {
@@ -359,6 +360,10 @@ static void zswap_rb_erase(struct rb_root *root, struct zswap_entry *entry)
  */
 static void zswap_free_entry(struct zswap_entry *entry)
 {
+	if (entry->objcg) {
+		obj_cgroup_uncharge_zswap(entry->objcg, entry->length);
+		obj_cgroup_put(entry->objcg);
+	}
 	if (!entry->length)
 		atomic_dec(&zswap_same_filled_pages);
 	else {
@@ -1096,6 +1101,8 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
 	struct zswap_entry *entry, *dupentry;
 	struct scatterlist input, output;
 	struct crypto_acomp_ctx *acomp_ctx;
+	struct obj_cgroup *objcg = NULL;
+	struct zswap_pool *pool;
 	int ret;
 	unsigned int hlen, dlen = PAGE_SIZE;
 	unsigned long handle, value;
@@ -1115,17 +1122,15 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
 		goto reject;
 	}
 
+	objcg = get_obj_cgroup_from_page(page);
+	if (objcg && !obj_cgroup_may_zswap(objcg))
+		goto shrink;
+
 	/* reclaim space if needed */
 	if (zswap_is_full()) {
-		struct zswap_pool *pool;
-
 		zswap_pool_limit_hit++;
 		zswap_pool_reached_full = true;
-		pool = zswap_pool_last_get();
-		if (pool)
-			queue_work(shrink_wq, &pool->shrink_work);
-		ret = -ENOMEM;
-		goto reject;
+		goto shrink;
 	}
 
 	if (zswap_pool_reached_full) {
@@ -1227,6 +1232,13 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
 	entry->length = dlen;
 
 insert_entry:
+	entry->objcg = objcg;
+	if (objcg) {
+		obj_cgroup_charge_zswap(objcg, entry->length);
+		/* Account before objcg ref is moved to tree */
+		count_objcg_event(objcg, ZSWPOUT);
+	}
+
 	/* map */
 	spin_lock(&tree->lock);
 	do {
@@ -1253,7 +1265,16 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
 freepage:
 	zswap_entry_cache_free(entry);
 reject:
+	if (objcg)
+		obj_cgroup_put(objcg);
 	return ret;
+
+shrink:
+	pool = zswap_pool_last_get();
+	if (pool)
+		queue_work(shrink_wq, &pool->shrink_work);
+	ret = -ENOMEM;
+	goto reject;
 }
 
 /*
@@ -1326,6 +1347,8 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
 	BUG_ON(ret);
 stats:
 	count_vm_event(ZSWPIN);
+	if (entry->objcg)
+		count_objcg_event(entry->objcg, ZSWPIN);
 freeentry:
 	spin_lock(&tree->lock);
 	zswap_entry_put(tree, entry);
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-27 18:36     ` Andrew Morton
  0 siblings, 0 replies; 63+ messages in thread
From: Andrew Morton @ 2022-04-27 18:36 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Michal Hocko, Roman Gushchin, Shakeel Butt, Seth Jennings,
	Dan Streetman, linux-mm, cgroups, linux-kernel, kernel-team

On Wed, 27 Apr 2022 12:00:15 -0400 Johannes Weiner <hannes@cmpxchg.org> wrote:

> Currently it requires poking at debugfs to figure out the size and
> population of the zswap cache on a host. There are no counters for
> reads and writes against the cache. As a result, it's difficult to
> understand zswap behavior on production systems.
> 
> Print zswap memory consumption and how many pages are zswapped out in
> /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.

/proc/meminfo is rather prime real estate.  Is this important enough to
be placed in there, or should it instead be in the more lowly
/proc/vmstat?

/proc/meminfo is documented in Documentation/filesystems/proc.rst ;)

That file appears to need a bit of updating for other things.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
  2022-04-27 18:36     ` Andrew Morton
  (?)
@ 2022-04-27 18:53     ` Johannes Weiner
  2022-04-27 19:50         ` Johannes Weiner
  2022-04-27 19:51       ` Johannes Weiner
  -1 siblings, 2 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-27 18:53 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Hocko, Roman Gushchin, Shakeel Butt, Seth Jennings,
	Dan Streetman, linux-mm, cgroups, linux-kernel, kernel-team

On Wed, Apr 27, 2022 at 11:36:54AM -0700, Andrew Morton wrote:
> On Wed, 27 Apr 2022 12:00:15 -0400 Johannes Weiner <hannes@cmpxchg.org> wrote:
> 
> > Currently it requires poking at debugfs to figure out the size and
> > population of the zswap cache on a host. There are no counters for
> > reads and writes against the cache. As a result, it's difficult to
> > understand zswap behavior on production systems.
> > 
> > Print zswap memory consumption and how many pages are zswapped out in
> > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> 
> /proc/meminfo is rather prime real estate.  Is this important enough to
> be placed in there, or should it instead be in the more lowly
> /proc/vmstat?

The zswap pool size is capped at 20% of available RAM, and we usually
see a utilization of tens of gigabytes. I think it's fair to say zswap
is a first-class memory consumer when enabled, and right now it's a
huge hole in /proc/meminfo coverage.
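
A rough sketch of the arithmetic behind that cap (the limit comes from
zswap's max_pool_percent module parameter, which defaults to 20; the
helper below is illustrative, not the kernel function):

/* Is the compressed pool over its allowed share of RAM? */
#include <stdbool.h>
#include <stdio.h>

static bool pool_over_limit(unsigned long long ram_bytes,
			    unsigned long long pool_bytes,
			    unsigned int max_percent)
{
	return pool_bytes > ram_bytes / 100 * max_percent;
}

int main(void)
{
	/* 256 GiB host, 20% cap -> the pool may grow to ~51 GiB */
	unsigned long long ram = 256ULL << 30;

	printf("%d\n", pool_over_limit(ram, 52ULL << 30, 20)); /* 1: over */
	printf("%d\n", pool_over_limit(ram, 48ULL << 30, 20)); /* 0: under */
	return 0;
}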

> /proc/meminfo is documented in Documentation/filesystems/proc.rst ;)
> 
> That file appears to need a bit of updating for other things.

"The following is from a 16GB PIII, which has highmem enabled."

lmao.

I'll send a general update for that, and a delta fixlet for 4/5.

Thanks!

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-27 19:50         ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-27 19:50 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Hocko, Roman Gushchin, Shakeel Butt, Seth Jennings,
	Dan Streetman, linux-mm, cgroups, linux-kernel, kernel-team

On Wed, Apr 27, 2022 at 02:53:10PM -0400, Johannes Weiner wrote:
> I'll send a general update for that [...]

From dca20a3a4ae2218f2db7d6e9abb47f6ca9004273 Mon Sep 17 00:00:00 2001
From: Johannes Weiner <hannes@cmpxchg.org>
Date: Wed, 27 Apr 2022 15:36:07 -0400
Subject: [PATCH 1/7] Documentation: filesystems: proc: update meminfo section

Add new entries. Minor corrections and cleanups.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 Documentation/filesystems/proc.rst | 155 ++++++++++++++++++-----------
 1 file changed, 99 insertions(+), 56 deletions(-)

diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 061744c436d9..736ed384750c 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -942,56 +942,71 @@ can be substantial.  In many cases there are other means to find out
 additional memory using subsystem specific interfaces, for instance
 /proc/net/sockstat for TCP memory allocations.
 
-The following is from a 16GB PIII, which has highmem enabled.
-You may not have all of these fields.
+Example output. You may not have all of these fields.
 
 ::
 
     > cat /proc/meminfo
 
-    MemTotal:     16344972 kB
-    MemFree:      13634064 kB
-    MemAvailable: 14836172 kB
-    Buffers:          3656 kB
-    Cached:        1195708 kB
-    SwapCached:          0 kB
-    Active:         891636 kB
-    Inactive:      1077224 kB
-    HighTotal:    15597528 kB
-    HighFree:     13629632 kB
-    LowTotal:       747444 kB
-    LowFree:          4432 kB
-    SwapTotal:           0 kB
-    SwapFree:            0 kB
-    Dirty:             968 kB
-    Writeback:           0 kB
-    AnonPages:      861800 kB
-    Mapped:         280372 kB
-    Shmem:             644 kB
-    KReclaimable:   168048 kB
-    Slab:           284364 kB
-    SReclaimable:   159856 kB
-    SUnreclaim:     124508 kB
-    PageTables:      24448 kB
-    NFS_Unstable:        0 kB
-    Bounce:              0 kB
-    WritebackTmp:        0 kB
-    CommitLimit:   7669796 kB
-    Committed_AS:   100056 kB
-    VmallocTotal:   112216 kB
-    VmallocUsed:       428 kB
-    VmallocChunk:   111088 kB
-    Percpu:          62080 kB
-    HardwareCorrupted:   0 kB
-    AnonHugePages:   49152 kB
-    ShmemHugePages:      0 kB
-    ShmemPmdMapped:      0 kB
+    MemTotal:       32858820 kB
+    MemFree:        21001236 kB
+    MemAvailable:   27214312 kB
+    Buffers:          581092 kB
+    Cached:          5587612 kB
+    SwapCached:            0 kB
+    Active:          3237152 kB
+    Inactive:        7586256 kB
+    Active(anon):      94064 kB
+    Inactive(anon):  4570616 kB
+    Active(file):    3143088 kB
+    Inactive(file):  3015640 kB
+    Unevictable:           0 kB
+    Mlocked:               0 kB
+    SwapTotal:             0 kB
+    SwapFree:              0 kB
+    Dirty:                12 kB
+    Writeback:             0 kB
+    AnonPages:       4654780 kB
+    Mapped:           266244 kB
+    Shmem:              9976 kB
+    KReclaimable:     517708 kB
+    Slab:             660044 kB
+    SReclaimable:     517708 kB
+    SUnreclaim:       142336 kB
+    KernelStack:       11168 kB
+    PageTables:        20540 kB
+    NFS_Unstable:          0 kB
+    Bounce:                0 kB
+    WritebackTmp:          0 kB
+    CommitLimit:    16429408 kB
+    Committed_AS:    7715148 kB
+    VmallocTotal:   34359738367 kB
+    VmallocUsed:       40444 kB
+    VmallocChunk:          0 kB
+    Percpu:            29312 kB
+    HardwareCorrupted:     0 kB
+    AnonHugePages:   4149248 kB
+    ShmemHugePages:        0 kB
+    ShmemPmdMapped:        0 kB
+    FileHugePages:         0 kB
+    FilePmdMapped:         0 kB
+    CmaTotal:              0 kB
+    CmaFree:               0 kB
+    HugePages_Total:       0
+    HugePages_Free:        0
+    HugePages_Rsvd:        0
+    HugePages_Surp:        0
+    Hugepagesize:       2048 kB
+    Hugetlb:               0 kB
+    DirectMap4k:      401152 kB
+    DirectMap2M:    10008576 kB
+    DirectMap1G:    24117248 kB
 
 MemTotal
               Total usable RAM (i.e. physical RAM minus a few reserved
               bits and the kernel binary code)
 MemFree
-              The sum of LowFree+HighFree
+              Total free RAM. On highmem systems, the sum of LowFree+HighFree
 MemAvailable
               An estimate of how much memory is available for starting new
               applications, without swapping. Calculated from MemFree,
@@ -1005,8 +1020,9 @@ Buffers
               Relatively temporary storage for raw disk blocks
               shouldn't get tremendously large (20MB or so)
 Cached
-              in-memory cache for files read from the disk (the
-              pagecache).  Doesn't include SwapCached
+              In-memory cache for files read from the disk (the
+              pagecache) as well as tmpfs & shmem.
+              Doesn't include SwapCached.
 SwapCached
               Memory that once was swapped out, is swapped back in but
               still also is in the swapfile (if memory is needed it
@@ -1018,6 +1034,11 @@ Active
 Inactive
               Memory which has been less recently used.  It is more
               eligible to be reclaimed for other purposes
+Unevictable
+              Memory that cannot be reclaimed, such as mlocked pages,
+              ramfs backing pages, secret memfd pages etc.
+Mlocked
+              Memory locked with mlock().
 HighTotal, HighFree
               Highmem is all memory above ~860MB of physical memory.
               Highmem areas are for use by userspace programs, or
@@ -1040,20 +1061,10 @@ Writeback
               Memory which is actively being written back to the disk
 AnonPages
               Non-file backed pages mapped into userspace page tables
-HardwareCorrupted
-              The amount of RAM/memory in KB, the kernel identifies as
-	      corrupted.
-AnonHugePages
-              Non-file backed huge pages mapped into userspace page tables
 Mapped
               files which have been mmaped, such as libraries
 Shmem
               Total memory used by shared memory (shmem) and tmpfs
-ShmemHugePages
-              Memory used by shared memory (shmem) and tmpfs allocated
-              with huge pages
-ShmemPmdMapped
-              Shared memory mapped into userspace with huge pages
 KReclaimable
               Kernel allocations that the kernel will attempt to reclaim
               under memory pressure. Includes SReclaimable (below), and other
@@ -1064,9 +1075,10 @@ SReclaimable
               Part of Slab, that might be reclaimed, such as caches
 SUnreclaim
               Part of Slab, that cannot be reclaimed on memory pressure
+KernelStack
+              Memory consumed by the kernel stacks of all tasks
 PageTables
-              amount of memory dedicated to the lowest level of page
-              tables.
+              Memory consumed by userspace page tables
 NFS_Unstable
               Always zero. Previous counted pages which had been written to
               the server, but has not been committed to stable storage.
@@ -1098,7 +1110,7 @@ Committed_AS
               has been allocated by processes, even if it has not been
               "used" by them as of yet. A process which malloc()'s 1G
               of memory, but only touches 300M of it will show up as
-	      using 1G. This 1G is memory which has been "committed" to
+              using 1G. This 1G is memory which has been "committed" to
               by the VM and can be used at any time by the allocating
               application. With strict overcommit enabled on the system
               (mode 2 in 'vm.overcommit_memory'), allocations which would
@@ -1107,7 +1119,7 @@ Committed_AS
               not fail due to lack of memory once that memory has been
               successfully allocated.
 VmallocTotal
-              total size of vmalloc memory area
+              total size of vmalloc virtual address space
 VmallocUsed
               amount of vmalloc area which is used
 VmallocChunk
@@ -1115,6 +1127,37 @@ VmallocChunk
 Percpu
               Memory allocated to the percpu allocator used to back percpu
               allocations. This stat excludes the cost of metadata.
+HardwareCorrupted
+              The amount of RAM/memory in KB, the kernel identifies as
+              corrupted.
+AnonHugePages
+              Non-file backed huge pages mapped into userspace page tables
+ShmemHugePages
+              Memory used by shared memory (shmem) and tmpfs allocated
+              with huge pages
+ShmemPmdMapped
+              Shared memory mapped into userspace with huge pages
+FileHugePages
+              Memory used for filesystem data (page cache) allocated
+              with huge pages
+FilePmdMapped
+              Page cache mapped into userspace with huge pages
+CmaTotal
+              Memory reserved for the Contiguous Memory Allocator (CMA)
+CmaFree
+              Free remaining memory in the CMA reserves
+HugePages_Total
+HugePages_Free
+HugePages_Rsvd
+HugePages_Surp
+Hugepagesize
+Hugetlb
+              See Documentation/admin-guide/mm/hugetlbpage.rst.
+DirectMap4k
+DirectMap2M
+DirectMap1G
+              Breakdown of page table sizes used in the kernel's
+              identity mapping of RAM
 
 vmallocinfo
 ~~~~~~~~~~~
-- 
2.35.3


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
  2022-04-27 18:53     ` Johannes Weiner
  2022-04-27 19:50         ` Johannes Weiner
@ 2022-04-27 19:51       ` Johannes Weiner
  1 sibling, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-27 19:51 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Hocko, Roman Gushchin, Shakeel Butt, Seth Jennings,
	Dan Streetman, linux-mm, cgroups, linux-kernel, kernel-team

On Wed, Apr 27, 2022 at 02:53:10PM -0400, Johannes Weiner wrote:
> [...] and a delta fixlet for 4/5.

From 35851ad3ddbf30122d755bdf8abea6dc188492a2 Mon Sep 17 00:00:00 2001
From: Johannes Weiner <hannes@cmpxchg.org>
Date: Wed, 27 Apr 2022 15:44:23 -0400
Subject: [PATCH 6/7] mm: zswap: add basic meminfo and vmstat coverage fix

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 Documentation/filesystems/proc.rst | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 736ed384750c..8b5a94cfa722 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -964,6 +964,8 @@ Example output. You may not have all of these fields.
     Mlocked:               0 kB
     SwapTotal:             0 kB
     SwapFree:              0 kB
+    Zswap:              1904 kB
+    Zswapped:           7792 kB
     Dirty:                12 kB
     Writeback:             0 kB
     AnonPages:       4654780 kB
@@ -1055,6 +1057,10 @@ SwapTotal
 SwapFree
               Memory which has been evicted from RAM, and is temporarily
               on the disk
+Zswap
+              Memory consumed by the zswap backend (compressed size)
+Zswapped
+              Amount of anonymous memory stored in zswap (original size)
 Dirty
               Memory which is waiting to get written back to the disk
 Writeback
-- 
2.35.3
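
As a quick worked example of how the two new fields relate: with the
sample output above, the effective compression ratio is
Zswapped / Zswap = 7792 / 1904, i.e. roughly 4.1; zswap is holding
those pages in about a quarter of their original size.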


^ permalink raw reply related	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-27 20:29     ` Minchan Kim
  0 siblings, 0 replies; 63+ messages in thread
From: Minchan Kim @ 2022-04-27 20:29 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

Hi Johannes,

On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> Currently it requires poking at debugfs to figure out the size and
> population of the zswap cache on a host. There are no counters for
> reads and writes against the cache. As a result, it's difficult to
> understand zswap behavior on production systems.
> 
> Print zswap memory consumption and how many pages are zswapped out in
> /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> 
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> ---
>  fs/proc/meminfo.c             |  7 +++++++
>  include/linux/swap.h          |  5 +++++
>  include/linux/vm_event_item.h |  4 ++++
>  mm/vmstat.c                   |  4 ++++
>  mm/zswap.c                    | 13 ++++++-------
>  5 files changed, 26 insertions(+), 7 deletions(-)
> 
> diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> index 6fa761c9cc78..6e89f0e2fd20 100644
> --- a/fs/proc/meminfo.c
> +++ b/fs/proc/meminfo.c
> @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
>  
>  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
>  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> +#ifdef CONFIG_ZSWAP
> +	seq_printf(m,  "Zswap:          %8lu kB\n",
> +		   (unsigned long)(zswap_pool_total_size >> 10));
> +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> +		   (PAGE_SHIFT - 10));
> +#endif

I agree it would be very handy to have the memory consumption in meminfo

https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/

If we really go this Zswap only metric instead of general term
"Compressed", I'd like to post maybe "Zram:" with same reason
in this patchset. Do you think that's better idea instead of
introducing general term like "Compressed:" or something else?

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-27 21:20       ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-27 21:20 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> Hi Johannes,
> 
> On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > Currently it requires poking at debugfs to figure out the size and
> > population of the zswap cache on a host. There are no counters for
> > reads and writes against the cache. As a result, it's difficult to
> > understand zswap behavior on production systems.
> > 
> > Print zswap memory consumption and how many pages are zswapped out in
> > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > 
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > ---
> >  fs/proc/meminfo.c             |  7 +++++++
> >  include/linux/swap.h          |  5 +++++
> >  include/linux/vm_event_item.h |  4 ++++
> >  mm/vmstat.c                   |  4 ++++
> >  mm/zswap.c                    | 13 ++++++-------
> >  5 files changed, 26 insertions(+), 7 deletions(-)
> > 
> > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > index 6fa761c9cc78..6e89f0e2fd20 100644
> > --- a/fs/proc/meminfo.c
> > +++ b/fs/proc/meminfo.c
> > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> >  
> >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> > +#ifdef CONFIG_ZSWAP
> > +	seq_printf(m,  "Zswap:          %8lu kB\n",
> > +		   (unsigned long)(zswap_pool_total_size >> 10));
> > +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> > +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > +		   (PAGE_SHIFT - 10));
> > +#endif
> 
> I agree it would be very handy to have the memory consumption in meminfo
> 
> https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> 
> If we really go this Zswap only metric instead of general term
> "Compressed", I'd like to post maybe "Zram:" with same reason
> in this patchset. Do you think that's better idea instead of
> introducing general term like "Compressed:" or something else?

I'm fine with changing it to Compressed. If somebody cares about a
more detailed breakdown, we can add Zswap, Zram subsets as needed.

From 8e9e2d6490b7082c41743fbdb9ffd2db4e3ce962 Mon Sep 17 00:00:00 2001
From: Johannes Weiner <hannes@cmpxchg.org>
Date: Wed, 27 Apr 2022 17:15:15 -0400
Subject: [PATCH] mm: zswap: add basic meminfo and vmstat coverage fix fix

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 Documentation/filesystems/proc.rst | 7 ++++---
 fs/proc/meminfo.c                  | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 8b5a94cfa722..93edcf233464 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -964,7 +964,7 @@ Example output. You may not have all of these fields.
     Mlocked:               0 kB
     SwapTotal:             0 kB
     SwapFree:              0 kB
-    Zswap:              1904 kB
+    Compressed:         1904 kB
     Zswapped:           7792 kB
     Dirty:                12 kB
     Writeback:             0 kB
@@ -1057,8 +1057,9 @@ SwapTotal
 SwapFree
               Memory which has been evicted from RAM, and is temporarily
               on the disk
-Zswap
-              Memory consumed by the zswap backend (compressed size)
+Compressed
+              Memory consumed by compression backends, such as zswap
+              (compressed size)
 Zswapped
               Amount of anonymous memory stored in zswap (original size)
 Dirty
diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 6e89f0e2fd20..554d6f230e67 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -87,7 +87,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 	show_val_kb(m, "SwapTotal:      ", i.totalswap);
 	show_val_kb(m, "SwapFree:       ", i.freeswap);
 #ifdef CONFIG_ZSWAP
-	seq_printf(m,  "Zswap:          %8lu kB\n",
+	seq_printf(m,  "Compressed:     %8lu kB\n",
 		   (unsigned long)(zswap_pool_total_size >> 10));
 	seq_printf(m,  "Zswapped:       %8lu kB\n",
 		   (unsigned long)atomic_read(&zswap_stored_pages) <<
-- 
2.35.3

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-27 21:36         ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-27 21:36 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

On Wed, Apr 27, 2022 at 05:20:31PM -0400, Johannes Weiner wrote:
> On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > Hi Johannes,
> > 
> > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > Currently it requires poking at debugfs to figure out the size and
> > > population of the zswap cache on a host. There are no counters for
> > > reads and writes against the cache. As a result, it's difficult to
> > > understand zswap behavior on production systems.
> > > 
> > > Print zswap memory consumption and how many pages are zswapped out in
> > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > 
> > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > ---
> > >  fs/proc/meminfo.c             |  7 +++++++
> > >  include/linux/swap.h          |  5 +++++
> > >  include/linux/vm_event_item.h |  4 ++++
> > >  mm/vmstat.c                   |  4 ++++
> > >  mm/zswap.c                    | 13 ++++++-------
> > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > --- a/fs/proc/meminfo.c
> > > +++ b/fs/proc/meminfo.c
> > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > >  
> > >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > +#ifdef CONFIG_ZSWAP
> > > +	seq_printf(m,  "Zswap:          %8lu kB\n",
> > > +		   (unsigned long)(zswap_pool_total_size >> 10));
> > > +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > +		   (PAGE_SHIFT - 10));
> > > +#endif
> > 
> > I agree it would be very handy to have the memory consumption in meminfo
> > 
> > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > 
> > If we really go this Zswap only metric instead of general term
> > "Compressed", I'd like to post maybe "Zram:" with same reason
> > in this patchset. Do you think that's better idea instead of
> > introducing general term like "Compressed:" or something else?
> 
> I'm fine with changing it to Compressed. If somebody cares about a
> more detailed breakdown, we can add Zswap, Zram subsets as needed.

It does raise the question what to do about cgroup, though. Should the
control files (memory.zswap.current & memory.zswap.max) apply to zram
in the future? If so, we should rename them, too.
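
For reference, a minimal userspace sketch of driving those two knobs,
assuming cgroup2 is mounted at /sys/fs/cgroup, a "job" group exists,
and the files take byte values like the other memory.* interfaces (the
path and the 512 MiB figure are made up for illustration):

#include <stdio.h>

int main(void)
{
	char buf[64];
	FILE *f;

	/* Cap the group's compressed zswap memory at 512 MiB. */
	f = fopen("/sys/fs/cgroup/job/memory.zswap.max", "w");
	if (f) {
		fprintf(f, "%llu\n", 512ULL << 20);
		fclose(f);
	}

	/* Read back the group's current zswap consumption. */
	f = fopen("/sys/fs/cgroup/job/memory.zswap.current", "r");
	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("memory.zswap.current: %s", buf);
		fclose(f);
	}
	return 0;
}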

I'm not too familiar with zram, maybe you can provide some
background. AFAIU, Google uses zram quite widely; all the more
confusing why there is no container support for it yet.

Could you shed some light?

Thanks

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
  2022-04-27 21:36         ` Johannes Weiner
@ 2022-04-27 22:12           ` Minchan Kim
  -1 siblings, 0 replies; 63+ messages in thread
From: Minchan Kim @ 2022-04-27 22:12 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

On Wed, Apr 27, 2022 at 05:36:26PM -0400, Johannes Weiner wrote:
> On Wed, Apr 27, 2022 at 05:20:31PM -0400, Johannes Weiner wrote:
> > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > Hi Johannes,
> > > 
> > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > Currently it requires poking at debugfs to figure out the size and
> > > > population of the zswap cache on a host. There are no counters for
> > > > reads and writes against the cache. As a result, it's difficult to
> > > > understand zswap behavior on production systems.
> > > > 
> > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > 
> > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > ---
> > > >  fs/proc/meminfo.c             |  7 +++++++
> > > >  include/linux/swap.h          |  5 +++++
> > > >  include/linux/vm_event_item.h |  4 ++++
> > > >  mm/vmstat.c                   |  4 ++++
> > > >  mm/zswap.c                    | 13 ++++++-------
> > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > 
> > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > --- a/fs/proc/meminfo.c
> > > > +++ b/fs/proc/meminfo.c
> > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > >  
> > > >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > +#ifdef CONFIG_ZSWAP
> > > > +	seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > +		   (unsigned long)(zswap_pool_total_size >> 10));
> > > > +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > +		   (PAGE_SHIFT - 10));
> > > > +#endif
> > > 
> > > I agree it would be very handy to have the memory consumption in meminfo
> > > 
> > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > 
> > > If we really go this Zswap only metric instead of general term
> > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > in this patchset. Do you think that's better idea instead of
> > > introducing general term like "Compressed:" or something else?
> > 
> > I'm fine with changing it to Compressed. If somebody cares about a
> > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> 
> It does raise the question what to do about cgroup, though. Should the
> control files (memory.zswap.current & memory.zswap.max) apply to zram
> in the future? If so, we should rename them, too.
> 
> I'm not too familiar with zram, maybe you can provide some
> background. AFAIU, Google uses zram quite widely; all the more
> confusing why there is no container support for it yet.

My use case with zram is Android, which doesn't use memcg.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-27 22:16         ` Minchan Kim
  0 siblings, 0 replies; 63+ messages in thread
From: Minchan Kim @ 2022-04-27 22:16 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

On Wed, Apr 27, 2022 at 05:20:29PM -0400, Johannes Weiner wrote:
> On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > Hi Johannes,
> > 
> > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > Currently it requires poking at debugfs to figure out the size and
> > > population of the zswap cache on a host. There are no counters for
> > > reads and writes against the cache. As a result, it's difficult to
> > > understand zswap behavior on production systems.
> > > 
> > > Print zswap memory consumption and how many pages are zswapped out in
> > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > 
> > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > ---
> > >  fs/proc/meminfo.c             |  7 +++++++
> > >  include/linux/swap.h          |  5 +++++
> > >  include/linux/vm_event_item.h |  4 ++++
> > >  mm/vmstat.c                   |  4 ++++
> > >  mm/zswap.c                    | 13 ++++++-------
> > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > --- a/fs/proc/meminfo.c
> > > +++ b/fs/proc/meminfo.c
> > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > >  
> > >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > +#ifdef CONFIG_ZSWAP
> > > +	seq_printf(m,  "Zswap:          %8lu kB\n",
> > > +		   (unsigned long)(zswap_pool_total_size >> 10));
> > > +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > +		   (PAGE_SHIFT - 10));
> > > +#endif
> > 
> > I agree it would be very handy to have the memory consumption in meminfo
> > 
> > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > 
> > If we really go this Zswap only metric instead of general term
> > "Compressed", I'd like to post maybe "Zram:" with same reason
> > in this patchset. Do you think that's better idea instead of
> > introducing general term like "Compressed:" or something else?
> 
> I'm fine with changing it to Compressed. If somebody cares about a
> more detailed breakdown, we can add Zswap, Zram subsets as needed.

Thanks! Please consider renaming ZSWPIN to a more general term, too.

> 
> From 8e9e2d6490b7082c41743fbdb9ffd2db4e3ce962 Mon Sep 17 00:00:00 2001
> From: Johannes Weiner <hannes@cmpxchg.org>
> Date: Wed, 27 Apr 2022 17:15:15 -0400
> Subject: [PATCH] mm: zswap: add basic meminfo and vmstat coverage fix fix
> 
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> ---
>  Documentation/filesystems/proc.rst | 7 ++++---
>  fs/proc/meminfo.c                  | 2 +-
>  2 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
> index 8b5a94cfa722..93edcf233464 100644
> --- a/Documentation/filesystems/proc.rst
> +++ b/Documentation/filesystems/proc.rst
> @@ -964,7 +964,7 @@ Example output. You may not have all of these fields.
>      Mlocked:               0 kB
>      SwapTotal:             0 kB
>      SwapFree:              0 kB
> -    Zswap:              1904 kB
> +    Compressed:         1904 kB
>      Zswapped:           7792 kB
>      Dirty:                12 kB
>      Writeback:             0 kB
> @@ -1057,8 +1057,9 @@ SwapTotal
>  SwapFree
>                Memory which has been evicted from RAM, and is temporarily
>                on the disk
> -Zswap
> -              Memory consumed by the zswap backend (compressed size)
> +Compressed
> +              Memory consumed by compression backends, such as zswap
> +              (compressed size)
>  Zswapped
>                Amount of anonymous memory stored in zswap (original size)
>  Dirty
> diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> index 6e89f0e2fd20..554d6f230e67 100644
> --- a/fs/proc/meminfo.c
> +++ b/fs/proc/meminfo.c
> @@ -87,7 +87,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
>  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
>  	show_val_kb(m, "SwapFree:       ", i.freeswap);
>  #ifdef CONFIG_ZSWAP
> -	seq_printf(m,  "Zswap:          %8lu kB\n",
> +	seq_printf(m,  "Compressed:     %8lu kB\n",
>  		   (unsigned long)(zswap_pool_total_size >> 10));
>  	seq_printf(m,  "Zswapped:       %8lu kB\n",
>  		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> -- 
> 2.35.3
> 

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
  2022-04-27 21:36         ` Johannes Weiner
@ 2022-04-27 23:36           ` Shakeel Butt
  -1 siblings, 0 replies; 63+ messages in thread
From: Shakeel Butt @ 2022-04-27 23:36 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Minchan Kim, Andrew Morton, Michal Hocko, Roman Gushchin,
	Seth Jennings, Dan Streetman, Linux MM, Cgroups, LKML,
	Kernel Team

On Wed, Apr 27, 2022 at 3:32 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Wed, Apr 27, 2022 at 05:20:31PM -0400, Johannes Weiner wrote:
> > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > Hi Johannes,
> > >
> > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > Currently it requires poking at debugfs to figure out the size and
> > > > population of the zswap cache on a host. There are no counters for
> > > > reads and writes against the cache. As a result, it's difficult to
> > > > understand zswap behavior on production systems.
> > > >
> > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > >
> > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > ---
> > > >  fs/proc/meminfo.c             |  7 +++++++
> > > >  include/linux/swap.h          |  5 +++++
> > > >  include/linux/vm_event_item.h |  4 ++++
> > > >  mm/vmstat.c                   |  4 ++++
> > > >  mm/zswap.c                    | 13 ++++++-------
> > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > >
> > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > --- a/fs/proc/meminfo.c
> > > > +++ b/fs/proc/meminfo.c
> > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > >
> > > >   show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > >   show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > +#ifdef CONFIG_ZSWAP
> > > > + seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > +            (unsigned long)(zswap_pool_total_size >> 10));
> > > > + seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > +            (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > +            (PAGE_SHIFT - 10));
> > > > +#endif
> > >
> > > I agree it would be very handy to have the memory consumption in meminfo
> > >
> > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > >
> > > If we really go this Zswap only metric instead of general term
> > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > in this patchset. Do you think that's better idea instead of
> > > introducing general term like "Compressed:" or something else?
> >
> > I'm fine with changing it to Compressed. If somebody cares about a
> > more detailed breakdown, we can add Zswap, Zram subsets as needed.
>
> It does raise the question what to do about cgroup, though. Should the
> control files (memory.zswap.current & memory.zswap.max) apply to zram
> in the future? If so, we should rename them, too.
>
> I'm not too familiar with zram, maybe you can provide some
> background. AFAIU, Google uses zram quite widely; all the more
> confusing why there is no container support for it yet.
>
> Could you shed some light?
>

I can shed light on the datacenter workloads. We use cgroup (still on
v1) and zswap. For the workloads/applications, the swap (or zswap) is
transparent in the sense that they are charged exactly the same
irrespective of how much their memory is zswapped-out. Basically the
applications see the same usage which is actually v1's
memsw.usage_in_bytes. We dynamically increase the swap size if it is
low, so we are not really worried about one job hogging the swap
space.

Regarding stats, we actually do have them internally, representing the
compressed size and number of pages in zswap. The compressed size is
actually used for OOM victim selection. The memsw or v2's swap usage
in the presence of compression-based swap does not actually tell how
much memory can potentially be released by evicting a job. For example,
if there are two jobs 'A' and 'B', both of which have 100 pages
compressed, but A's 100 pages are compressed to, let's say, 10 pages
while B's 100 pages are compressed to 70 pages, it is preferable to
kill B as that will release 70 pages. (This is a very simplified
explanation of what we actually do.)
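
A minimal sketch of that simplified heuristic, with hypothetical
structure and helper names and made-up numbers (not the actual
implementation): the victim is the job with the largest compressed
footprint, since evicting it frees the most physical memory.

  #include <stddef.h>

  /* Illustrative only: per-job zswap accounting. */
  struct job_zswap_stat {
          const char *name;
          unsigned long stored_pages;     /* original (uncompressed) pages in zswap */
          unsigned long compressed_pages; /* pool pages actually consumed */
  };

  static struct job_zswap_stat *pick_victim(struct job_zswap_stat *jobs, int n)
  {
          struct job_zswap_stat *victim = NULL;
          int i;

          for (i = 0; i < n; i++) {
                  if (!victim || jobs[i].compressed_pages > victim->compressed_pages)
                          victim = &jobs[i];
          }
          /* A: 100 -> 10, B: 100 -> 70  =>  B is chosen, freeing 70 pages */
          return victim;
  }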

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-28 14:05             ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-28 14:05 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

On Wed, Apr 27, 2022 at 03:12:17PM -0700, Minchan Kim wrote:
> On Wed, Apr 27, 2022 at 05:36:26PM -0400, Johannes Weiner wrote:
> > On Wed, Apr 27, 2022 at 05:20:31PM -0400, Johannes Weiner wrote:
> > > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > > Hi Johannes,
> > > > 
> > > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > > Currently it requires poking at debugfs to figure out the size and
> > > > > population of the zswap cache on a host. There are no counters for
> > > > > reads and writes against the cache. As a result, it's difficult to
> > > > > understand zswap behavior on production systems.
> > > > > 
> > > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > > 
> > > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > > ---
> > > > >  fs/proc/meminfo.c             |  7 +++++++
> > > > >  include/linux/swap.h          |  5 +++++
> > > > >  include/linux/vm_event_item.h |  4 ++++
> > > > >  mm/vmstat.c                   |  4 ++++
> > > > >  mm/zswap.c                    | 13 ++++++-------
> > > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > > 
> > > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > > --- a/fs/proc/meminfo.c
> > > > > +++ b/fs/proc/meminfo.c
> > > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > >  
> > > > >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > > >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > > +#ifdef CONFIG_ZSWAP
> > > > > +	seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > > +		   (unsigned long)(zswap_pool_total_size >> 10));
> > > > > +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > > +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > > +		   (PAGE_SHIFT - 10));
> > > > > +#endif
> > > > 
> > > > I agree it would be very handy to have the memory consumption in meminfo
> > > > 
> > > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > > 
> > > > If we really go this Zswap only metric instead of general term
> > > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > > in this patchset. Do you think that's better idea instead of
> > > > introducing general term like "Compressed:" or something else?
> > > 
> > > I'm fine with changing it to Compressed. If somebody cares about a
> > > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> > 
> > It does raise the question what to do about cgroup, though. Should the
> > control files (memory.zswap.current & memory.zswap.max) apply to zram
> > in the future? If so, we should rename them, too.
> > 
> > I'm not too familiar with zram, maybe you can provide some
> > background. AFAIU, Google uses zram quite widely; all the more
> > confusing why there is no container support for it yet.
> 
> My usecase with zram is Android which doesn't use memcg.

Ok.

After more thought, my take is that in the future it could make sense
to track zram pages in a cgroup's memory.current. But it should NOT be
included in the dedicated memory.zswap.* files. Zswap is an in-kernel
writeback cache, and those files allow userspace to tune writeback
thresholds depending on the composition of the workload's
workingset. This doesn't translate to zram: the wb facility that it
has is triggered by hand, based on criteria such as idle pages and
compression rate. It's not based on size. From a cgroup POV, it's a
memory consumer that should be subject to memory.max, nothing more.

This distinction applies to meminfo as well, though. While I think it
makes sense to have a combined "Compressed" counter for zram and
zswap, it's still important to understand zswap behavior on its own to
tune the system-wide writeback threshold in max_pool_percent. (And
again, while zram can also be limited, it's not a writeback threshold,
it's just a red line for returning -ENOMEM).

So I'm going to keep the Zswap and Zswapped items and retract the
delta patch for renaming it to Compressed.

But I'd ack a patch that adds a combined "Compressed" counter for zram
+ zswap if you send it, Minchan.
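
For illustration only, such a combined counter might look roughly like
this inside meminfo_proc_show(); zswap_pool_total_size is the existing
zswap byte count, while zram_pool_total_size() is a hypothetical helper
that zram would have to export (no such helper exists today):

  	u64 compressed = 0;

  #ifdef CONFIG_ZSWAP
  	compressed += zswap_pool_total_size;	/* existing zswap counter */
  #endif
  #ifdef CONFIG_ZRAM
  	compressed += zram_pool_total_size();	/* hypothetical zram helper */
  #endif
  	seq_printf(m, "Compressed:     %8lu kB\n",
  		   (unsigned long)(compressed >> 10));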

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-28 14:25           ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-28 14:25 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

On Wed, Apr 27, 2022 at 03:16:48PM -0700, Minchan Kim wrote:
> On Wed, Apr 27, 2022 at 05:20:29PM -0400, Johannes Weiner wrote:
> > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > Hi Johannes,
> > > 
> > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > Currently it requires poking at debugfs to figure out the size and
> > > > population of the zswap cache on a host. There are no counters for
> > > > reads and writes against the cache. As a result, it's difficult to
> > > > understand zswap behavior on production systems.
> > > > 
> > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > 
> > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > ---
> > > >  fs/proc/meminfo.c             |  7 +++++++
> > > >  include/linux/swap.h          |  5 +++++
> > > >  include/linux/vm_event_item.h |  4 ++++
> > > >  mm/vmstat.c                   |  4 ++++
> > > >  mm/zswap.c                    | 13 ++++++-------
> > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > 
> > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > --- a/fs/proc/meminfo.c
> > > > +++ b/fs/proc/meminfo.c
> > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > >  
> > > >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > +#ifdef CONFIG_ZSWAP
> > > > +	seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > +		   (unsigned long)(zswap_pool_total_size >> 10));
> > > > +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > +		   (PAGE_SHIFT - 10));
> > > > +#endif
> > > 
> > > I agree it would be very handy to have the memory consumption in meminfo
> > > 
> > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > 
> > > If we really go this Zswap only metric instead of general term
> > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > in this patchset. Do you think that's better idea instead of
> > > introducing general term like "Compressed:" or something else?
> > 
> > I'm fine with changing it to Compressed. If somebody cares about a
> > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> 
> Thanks! Please consider ZSWPIN to rename more general term, too.

That doesn't make sense to me.

Zram is a swap backend; its traffic is accounted in PSWPIN/OUT. Zswap
is a writeback cache on top of the swap backend. It has pages
entering, refaulting, and being written back to the swap backend
(PSWPOUT). A zswpout and a zramout are different things.

> > From 8e9e2d6490b7082c41743fbdb9ffd2db4e3ce962 Mon Sep 17 00:00:00 2001
> > From: Johannes Weiner <hannes@cmpxchg.org>
> > Date: Wed, 27 Apr 2022 17:15:15 -0400
> > Subject: [PATCH] mm: zswap: add basic meminfo and vmstat coverage fix fix
> > 
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Just for completeness,

Nacked-by: Johannes Weiner <hannes@cmpxchg.org>

> > @@ -87,7 +87,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> >  #ifdef CONFIG_ZSWAP
> > -	seq_printf(m,  "Zswap:          %8lu kB\n",
> > +	seq_printf(m,  "Compressed:     %8lu kB\n",
> >  		   (unsigned long)(zswap_pool_total_size >> 10));
> >  	seq_printf(m,  "Zswapped:       %8lu kB\n",
> >  		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > -- 
> > 2.35.3
> > 

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-28 14:36             ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-28 14:36 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Minchan Kim, Andrew Morton, Michal Hocko, Roman Gushchin,
	Seth Jennings, Dan Streetman, Linux MM, Cgroups, LKML,
	Kernel Team

On Wed, Apr 27, 2022 at 04:36:22PM -0700, Shakeel Butt wrote:
> On Wed, Apr 27, 2022 at 3:32 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > On Wed, Apr 27, 2022 at 05:20:31PM -0400, Johannes Weiner wrote:
> > > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > > Hi Johannes,
> > > >
> > > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > > Currently it requires poking at debugfs to figure out the size and
> > > > > population of the zswap cache on a host. There are no counters for
> > > > > reads and writes against the cache. As a result, it's difficult to
> > > > > understand zswap behavior on production systems.
> > > > >
> > > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > >
> > > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > > ---
> > > > >  fs/proc/meminfo.c             |  7 +++++++
> > > > >  include/linux/swap.h          |  5 +++++
> > > > >  include/linux/vm_event_item.h |  4 ++++
> > > > >  mm/vmstat.c                   |  4 ++++
> > > > >  mm/zswap.c                    | 13 ++++++-------
> > > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > >
> > > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > > --- a/fs/proc/meminfo.c
> > > > > +++ b/fs/proc/meminfo.c
> > > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > >
> > > > >   show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > > >   show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > > +#ifdef CONFIG_ZSWAP
> > > > > + seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > > +            (unsigned long)(zswap_pool_total_size >> 10));
> > > > > + seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > > +            (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > > +            (PAGE_SHIFT - 10));
> > > > > +#endif
> > > >
> > > > I agree it would be very handy to have the memory consumption in meminfo
> > > >
> > > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > >
> > > > If we really go this Zswap only metric instead of general term
> > > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > > in this patchset. Do you think that's better idea instead of
> > > > introducing general term like "Compressed:" or something else?
> > >
> > > I'm fine with changing it to Compressed. If somebody cares about a
> > > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> >
> > It does raise the question what to do about cgroup, though. Should the
> > control files (memory.zswap.current & memory.zswap.max) apply to zram
> > in the future? If so, we should rename them, too.
> >
> > I'm not too familiar with zram, maybe you can provide some
> > background. AFAIU, Google uses zram quite widely; all the more
> > confusing why there is no container support for it yet.
> >
> > Could you shed some light?
> >
> 
> I can shed light on the datacenter workloads. We use cgroup (still on
> v1) and zswap. For the workloads/applications, the swap (or zswap) is
> transparent in the sense that they are charged exactly the same
> irrespective of how much their memory is zswapped-out. Basically the
> applications see the same usage which is actually v1's
> memsw.usage_in_bytes. We dynamically increase the swap size if it is
> low, so we are not really worried about one job hogging the swap
> space.
> 
> Regarding stats we actually do have them internally representing
> compressed size and number of pages in zswap. The compressed size is
> actually used for OOM victim selection. The memsw or v2's swap usage
> in the presence of compression based swap does not actually tell how
> much memory can potentially be released by evicting a job. For example
> if there are two jobs 'A' and 'B'. Both of them have 100 pages
> compressed but A's 100 pages are compressed to let's say 10 pages
> while B's 100 pages are compressed to 70 pages. It is preferable to
> kill B as that will release 70 pages. (This is a very simplified
> explanation of what we actually do).

Ah, so zram is really only used by the mobile stuff after all.

In the DC, I guess you don't use disk swap in conjunction with zswap,
so those writeback cache controls are less interesting to you?

But it sounds like you would benefit from the zswap(ped) counters in
memory.stat at least.

Thanks, that is enlightening!

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-28 14:49               ` Shakeel Butt
  0 siblings, 0 replies; 63+ messages in thread
From: Shakeel Butt @ 2022-04-28 14:49 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Minchan Kim, Andrew Morton, Michal Hocko, Roman Gushchin,
	Seth Jennings, Dan Streetman, Linux MM, Cgroups, LKML,
	Kernel Team

On Thu, Apr 28, 2022 at 7:36 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Wed, Apr 27, 2022 at 04:36:22PM -0700, Shakeel Butt wrote:
> > On Wed, Apr 27, 2022 at 3:32 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > >
> > > On Wed, Apr 27, 2022 at 05:20:31PM -0400, Johannes Weiner wrote:
> > > > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > > > Hi Johannes,
> > > > >
> > > > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > > > Currently it requires poking at debugfs to figure out the size and
> > > > > > population of the zswap cache on a host. There are no counters for
> > > > > > reads and writes against the cache. As a result, it's difficult to
> > > > > > understand zswap behavior on production systems.
> > > > > >
> > > > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > > >
> > > > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > > > ---
> > > > > >  fs/proc/meminfo.c             |  7 +++++++
> > > > > >  include/linux/swap.h          |  5 +++++
> > > > > >  include/linux/vm_event_item.h |  4 ++++
> > > > > >  mm/vmstat.c                   |  4 ++++
> > > > > >  mm/zswap.c                    | 13 ++++++-------
> > > > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > > >
> > > > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > > > --- a/fs/proc/meminfo.c
> > > > > > +++ b/fs/proc/meminfo.c
> > > > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > > >
> > > > > >   show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > > > >   show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > + seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > > > +            (unsigned long)(zswap_pool_total_size >> 10));
> > > > > > + seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > > > +            (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > > > +            (PAGE_SHIFT - 10));
> > > > > > +#endif
> > > > >
> > > > > I agree it would be very handy to have the memory consumption in meminfo
> > > > >
> > > > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > > >
> > > > > If we really go this Zswap only metric instead of general term
> > > > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > > > in this patchset. Do you think that's better idea instead of
> > > > > introducing general term like "Compressed:" or something else?
> > > >
> > > > I'm fine with changing it to Compressed. If somebody cares about a
> > > > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> > >
> > > It does raise the question what to do about cgroup, though. Should the
> > > control files (memory.zswap.current & memory.zswap.max) apply to zram
> > > in the future? If so, we should rename them, too.
> > >
> > > I'm not too familiar with zram, maybe you can provide some
> > > background. AFAIU, Google uses zram quite widely; all the more
> > > confusing why there is no container support for it yet.
> > >
> > > Could you shed some light?
> > >
> >
> > I can shed light on the datacenter workloads. We use cgroup (still on
> > v1) and zswap. For the workloads/applications, the swap (or zswap) is
> > transparent in the sense that they are charged exactly the same
> > irrespective of how much their memory is zswapped-out. Basically the
> > applications see the same usage which is actually v1's
> > memsw.usage_in_bytes. We dynamically increase the swap size if it is
> > low, so we are not really worried about one job hogging the swap
> > space.
> >
> > Regarding stats we actually do have them internally representing
> > compressed size and number of pages in zswap. The compressed size is
> > actually used for OOM victim selection. The memsw or v2's swap usage
> > in the presence of compression based swap does not actually tell how
> > much memory can potentially be released by evicting a job. For example
> > if there are two jobs 'A' and 'B'. Both of them have 100 pages
> > compressed but A's 100 pages are compressed to let's say 10 pages
> > while B's 100 pages are compressed to 70 pages. It is preferable to
> > kill B as that will release 70 pages. (This is a very simplified
> > explanation of what we actually do).
>
> Ah, so zram is really only used by the mobile stuff after all.
>
> In the DC, I guess you don't use disk swap in conjunction with zswap,
> so those writeback cache controls are less interesting to you?

Yes, we have some modifications to zswap to make it work without any
backing real swap. Though there is a future plan to move to zram
eventually.

>
> But it sounds like you would benefit from the zswap(ped) counters in
> memory.stat at least.

Yes and I think if we need zram specific counters/stats in future,
those can be added then.

>
> Thanks, that is enlightening!

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-28 15:16                 ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-28 15:16 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Minchan Kim, Andrew Morton, Michal Hocko, Roman Gushchin,
	Seth Jennings, Dan Streetman, Linux MM, Cgroups, LKML,
	Kernel Team

On Thu, Apr 28, 2022 at 07:49:33AM -0700, Shakeel Butt wrote:
> On Thu, Apr 28, 2022 at 7:36 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > On Wed, Apr 27, 2022 at 04:36:22PM -0700, Shakeel Butt wrote:
> > > On Wed, Apr 27, 2022 at 3:32 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > > >
> > > > On Wed, Apr 27, 2022 at 05:20:31PM -0400, Johannes Weiner wrote:
> > > > > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > > > > Hi Johannes,
> > > > > >
> > > > > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > > > > Currently it requires poking at debugfs to figure out the size and
> > > > > > > population of the zswap cache on a host. There are no counters for
> > > > > > > reads and writes against the cache. As a result, it's difficult to
> > > > > > > understand zswap behavior on production systems.
> > > > > > >
> > > > > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > > > >
> > > > > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > > > > ---
> > > > > > >  fs/proc/meminfo.c             |  7 +++++++
> > > > > > >  include/linux/swap.h          |  5 +++++
> > > > > > >  include/linux/vm_event_item.h |  4 ++++
> > > > > > >  mm/vmstat.c                   |  4 ++++
> > > > > > >  mm/zswap.c                    | 13 ++++++-------
> > > > > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > > > >
> > > > > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > > > > --- a/fs/proc/meminfo.c
> > > > > > > +++ b/fs/proc/meminfo.c
> > > > > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > > > >
> > > > > > >   show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > > > > >   show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > > + seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > > > > +            (unsigned long)(zswap_pool_total_size >> 10));
> > > > > > > + seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > > > > +            (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > > > > +            (PAGE_SHIFT - 10));
> > > > > > > +#endif
> > > > > >
> > > > > > I agree it would be very handy to have the memory consumption in meminfo
> > > > > >
> > > > > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > > > >
> > > > > > If we really go this Zswap only metric instead of general term
> > > > > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > > > > in this patchset. Do you think that's better idea instead of
> > > > > > introducing general term like "Compressed:" or something else?
> > > > >
> > > > > I'm fine with changing it to Compressed. If somebody cares about a
> > > > > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> > > >
> > > > It does raise the question what to do about cgroup, though. Should the
> > > > control files (memory.zswap.current & memory.zswap.max) apply to zram
> > > > in the future? If so, we should rename them, too.
> > > >
> > > > I'm not too familiar with zram, maybe you can provide some
> > > > background. AFAIU, Google uses zram quite widely; all the more
> > > > confusing why there is no container support for it yet.
> > > >
> > > > Could you shed some light?
> > > >
> > >
> > > I can shed light on the datacenter workloads. We use cgroup (still on
> > > v1) and zswap. For the workloads/applications, the swap (or zswap) is
> > > transparent in the sense that they are charged exactly the same
> > > irrespective of how much their memory is zswapped-out. Basically the
> > > applications see the same usage which is actually v1's
> > > memsw.usage_in_bytes. We dynamically increase the swap size if it is
> > > low, so we are not really worried about one job hogging the swap
> > > space.
> > >
> > > Regarding stats we actually do have them internally representing
> > > compressed size and number of pages in zswap. The compressed size is
> > > actually used for OOM victim selection. The memsw or v2's swap usage
> > > in the presence of compression based swap does not actually tell how
> > > much memory can potentially be released by evicting a job. For example
> > > if there are two jobs 'A' and 'B'. Both of them have 100 pages
> > > compressed but A's 100 pages are compressed to let's say 10 pages
> > > while B's 100 pages are compressed to 70 pages. It is preferable to
> > > kill B as that will release 70 pages. (This is a very simplified
> > > explanation of what we actually do).
> >
> > Ah, so zram is really only used by the mobile stuff after all.
> >
> > In the DC, I guess you don't use disk swap in conjunction with zswap,
> > so those writeback cache controls are less interesting to you?
> 
> Yes, we have some modifications to zswap to make it work without any
> backing real swap.

Not sure if you can share them, but I would be interested in those
changes. We have real backing swap, but because of the way swap
entries are allocated, pages stored in zswap will consume physical
disk slots. So on top of regular swap, you need to provision disk
space for zswap as well, which is unfortunate.

What could be useful is a separate swap entry address space that maps
zswap slots and disk slots alike. This would fix the above problem. It
would have the added benefit of making swapoff much simpler and faster
too, as it doesn't need to chase down page tables to free disk slots.
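
To make that idea a little more concrete, here is a rough sketch of
what such an indirection could look like. Everything below (names,
fields, flags) is made up for illustration; nothing like it exists in
the tree today:

	/*
	 * Hypothetical "virtual" swap slot: the PTE stores a stable slot id,
	 * and this per-slot descriptor records where the data currently
	 * lives. Moving a page between zswap and disk (or never allocating
	 * a disk slot in the first place) only touches the descriptor, not
	 * the PTEs, and swapoff can walk this table instead of page tables.
	 */
	struct vswap_slot {
		union {
			struct zswap_entry *zswap;	/* compressed copy in memory */
			swp_entry_t disk;		/* slot on the backing device */
		};
		unsigned int flags;	/* e.g. VSWAP_IN_ZSWAP / VSWAP_ON_DISK */
	};

With something along those lines, zswap would only consume a physical
disk slot once an entry is actually written back.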

> > But it sounds like you would benefit from the zswap(ped) counters in
> > memory.stat at least.
> 
> Yes and I think if we need zram specific counters/stats in future,
> those can be added then.

I agree.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
  2022-04-28 14:49               ` Shakeel Butt
  (?)
  (?)
@ 2022-04-28 16:54               ` Yang Shi
  2022-05-05 19:33                   ` Shakeel Butt
  -1 siblings, 1 reply; 63+ messages in thread
From: Yang Shi @ 2022-04-28 16:54 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Johannes Weiner, Minchan Kim, Andrew Morton, Michal Hocko,
	Roman Gushchin, Seth Jennings, Dan Streetman, Linux MM, Cgroups,
	LKML, Kernel Team

On Thu, Apr 28, 2022 at 7:49 AM Shakeel Butt <shakeelb@google.com> wrote:
>
> On Thu, Apr 28, 2022 at 7:36 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > On Wed, Apr 27, 2022 at 04:36:22PM -0700, Shakeel Butt wrote:
> > > On Wed, Apr 27, 2022 at 3:32 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > > >
> > > > On Wed, Apr 27, 2022 at 05:20:31PM -0400, Johannes Weiner wrote:
> > > > > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > > > > Hi Johannes,
> > > > > >
> > > > > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > > > > Currently it requires poking at debugfs to figure out the size and
> > > > > > > population of the zswap cache on a host. There are no counters for
> > > > > > > reads and writes against the cache. As a result, it's difficult to
> > > > > > > understand zswap behavior on production systems.
> > > > > > >
> > > > > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > > > >
> > > > > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > > > > ---
> > > > > > >  fs/proc/meminfo.c             |  7 +++++++
> > > > > > >  include/linux/swap.h          |  5 +++++
> > > > > > >  include/linux/vm_event_item.h |  4 ++++
> > > > > > >  mm/vmstat.c                   |  4 ++++
> > > > > > >  mm/zswap.c                    | 13 ++++++-------
> > > > > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > > > >
> > > > > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > > > > --- a/fs/proc/meminfo.c
> > > > > > > +++ b/fs/proc/meminfo.c
> > > > > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > > > >
> > > > > > >   show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > > > > >   show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > > + seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > > > > +            (unsigned long)(zswap_pool_total_size >> 10));
> > > > > > > + seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > > > > +            (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > > > > +            (PAGE_SHIFT - 10));
> > > > > > > +#endif
> > > > > >
> > > > > > I agree it would be very handy to have the memory consumption in meminfo
> > > > > >
> > > > > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > > > >
> > > > > > If we really go this Zswap only metric instead of general term
> > > > > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > > > > in this patchset. Do you think that's better idea instead of
> > > > > > introducing general term like "Compressed:" or something else?
> > > > >
> > > > > I'm fine with changing it to Compressed. If somebody cares about a
> > > > > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> > > >
> > > > It does raise the question what to do about cgroup, though. Should the
> > > > control files (memory.zswap.current & memory.zswap.max) apply to zram
> > > > in the future? If so, we should rename them, too.
> > > >
> > > > I'm not too familiar with zram, maybe you can provide some
> > > > background. AFAIU, Google uses zram quite widely; all the more
> > > > confusing why there is no container support for it yet.
> > > >
> > > > Could you shed some light?
> > > >
> > >
> > > I can shed light on the datacenter workloads. We use cgroup (still on
> > > v1) and zswap. For the workloads/applications, the swap (or zswap) is
> > > transparent in the sense that they are charged exactly the same
> > > irrespective of how much their memory is zswapped-out. Basically the
> > > applications see the same usage which is actually v1's
> > > memsw.usage_in_bytes. We dynamically increase the swap size if it is
> > > low, so we are not really worried about one job hogging the swap
> > > space.
> > >
> > > Regarding stats we actually do have them internally representing
> > > compressed size and number of pages in zswap. The compressed size is
> > > actually used for OOM victim selection. The memsw or v2's swap usage
> > > in the presence of compression based swap does not actually tell how
> > > much memory can potentially be released by evicting a job. For example
> > > if there are two jobs 'A' and 'B'. Both of them have 100 pages
> > > compressed but A's 100 pages are compressed to let's say 10 pages
> > > while B's 100 pages are compressed to 70 pages. It is preferable to
> > > kill B as that will release 70 pages. (This is a very simplified
> > > explanation of what we actually do).
> >
> > Ah, so zram is really only used by the mobile stuff after all.
> >
> > In the DC, I guess you don't use disk swap in conjunction with zswap,
> > so those writeback cache controls are less interesting to you?
>
> Yes, we have some modifications to zswap to make it work without any
> backing real swap. Though there is a future plan to move to zram
> eventually.

Interesting; if so, why not simply use zram?

>
> >
> > But it sounds like you would benefit from the zswap(ped) counters in
> > memory.stat at least.
>
> Yes and I think if we need zram specific counters/stats in future,
> those can be added then.
>
> >
> > Thanks, that is enlightening!
>

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-28 16:59                   ` Yang Shi
  0 siblings, 0 replies; 63+ messages in thread
From: Yang Shi @ 2022-04-28 16:59 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Shakeel Butt, Minchan Kim, Andrew Morton, Michal Hocko,
	Roman Gushchin, Seth Jennings, Dan Streetman, Linux MM, Cgroups,
	LKML, Kernel Team

On Thu, Apr 28, 2022 at 8:17 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Thu, Apr 28, 2022 at 07:49:33AM -0700, Shakeel Butt wrote:
> > On Thu, Apr 28, 2022 at 7:36 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > >
> > > On Wed, Apr 27, 2022 at 04:36:22PM -0700, Shakeel Butt wrote:
> > > > On Wed, Apr 27, 2022 at 3:32 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > > > >
> > > > > On Wed, Apr 27, 2022 at 05:20:31PM -0400, Johannes Weiner wrote:
> > > > > > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > > > > > Hi Johannes,
> > > > > > >
> > > > > > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > > > > > Currently it requires poking at debugfs to figure out the size and
> > > > > > > > population of the zswap cache on a host. There are no counters for
> > > > > > > > reads and writes against the cache. As a result, it's difficult to
> > > > > > > > understand zswap behavior on production systems.
> > > > > > > >
> > > > > > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > > > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > > > > >
> > > > > > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > > > > > ---
> > > > > > > >  fs/proc/meminfo.c             |  7 +++++++
> > > > > > > >  include/linux/swap.h          |  5 +++++
> > > > > > > >  include/linux/vm_event_item.h |  4 ++++
> > > > > > > >  mm/vmstat.c                   |  4 ++++
> > > > > > > >  mm/zswap.c                    | 13 ++++++-------
> > > > > > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > > > > >
> > > > > > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > > > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > > > > > --- a/fs/proc/meminfo.c
> > > > > > > > +++ b/fs/proc/meminfo.c
> > > > > > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > > > > >
> > > > > > > >   show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > > > > > >   show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > > > + seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > > > > > +            (unsigned long)(zswap_pool_total_size >> 10));
> > > > > > > > + seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > > > > > +            (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > > > > > +            (PAGE_SHIFT - 10));
> > > > > > > > +#endif
> > > > > > >
> > > > > > > I agree it would be very handy to have the memory consumption in meminfo
> > > > > > >
> > > > > > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > > > > >
> > > > > > > If we really go this Zswap only metric instead of general term
> > > > > > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > > > > > in this patchset. Do you think that's better idea instead of
> > > > > > > introducing general term like "Compressed:" or something else?
> > > > > >
> > > > > > I'm fine with changing it to Compressed. If somebody cares about a
> > > > > > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> > > > >
> > > > > It does raise the question what to do about cgroup, though. Should the
> > > > > control files (memory.zswap.current & memory.zswap.max) apply to zram
> > > > > in the future? If so, we should rename them, too.
> > > > >
> > > > > I'm not too familiar with zram, maybe you can provide some
> > > > > background. AFAIU, Google uses zram quite widely; all the more
> > > > > confusing why there is no container support for it yet.
> > > > >
> > > > > Could you shed some light?
> > > > >
> > > >
> > > > I can shed light on the datacenter workloads. We use cgroup (still on
> > > > v1) and zswap. For the workloads/applications, the swap (or zswap) is
> > > > transparent in the sense that they are charged exactly the same
> > > > irrespective of how much their memory is zswapped-out. Basically the
> > > > applications see the same usage which is actually v1's
> > > > memsw.usage_in_bytes. We dynamically increase the swap size if it is
> > > > low, so we are not really worried about one job hogging the swap
> > > > space.
> > > >
> > > > Regarding stats we actually do have them internally representing
> > > > compressed size and number of pages in zswap. The compressed size is
> > > > actually used for OOM victim selection. The memsw or v2's swap usage
> > > > in the presence of compression based swap does not actually tell how
> > > > much memory can potentially be released by evicting a job. For example
> > > > if there are two jobs 'A' and 'B'. Both of them have 100 pages
> > > > compressed but A's 100 pages are compressed to let's say 10 pages
> > > > while B's 100 pages are compressed to 70 pages. It is preferable to
> > > > kill B as that will release 70 pages. (This is a very simplified
> > > > explanation of what we actually do).
> > >
> > > Ah, so zram is really only used by the mobile stuff after all.
> > >
> > > In the DC, I guess you don't use disk swap in conjunction with zswap,
> > > so those writeback cache controls are less interesting to you?
> >
> > Yes, we have some modifications to zswap to make it work without any
> > backing real swap.
>
> Not sure if you can share them, but I would be interested in those
> changes. We have real backing swap, but because of the way swap
> entries are allocated, pages stored in zswap will consume physical
> disk slots. So on top of regular swap, you need to provision disk
> space for zswap as well, which is unfortunate.

Yes, exactly. For our use case I noticed the swap backend is used up,
but there is no writeback from zswap to the swap backend at all. The
bright side is that this may mean the compression ratio is high for our
workload, but the disk space is actually wasted.

>
> What could be useful is a separate swap entry address space that maps
> zswap slots and disk slots alike. This would fix the above problem. It
> would have the added benefit of making swapoff much simpler and faster
> too, as it doesn't need to chase down page tables to free disk slots.

I was thinking about this too, but it doesn't seem easy: the slot on
the swap backend is allocated when the page is added to swap, but the
zswap entry is not, since zswap is just a cache and invisible to
vmscan. If we had separate entries for zswap and the swap backend, it
would be complicated to convert zswap entries into swap backend
entries, since we may have to traverse the rmap to find all the PTEs
mapped to a zswap entry in order to convert them to swap backend entries.

>
> > > But it sounds like you would benefit from the zswap(ped) counters in
> > > memory.stat at least.
> >
> > Yes and I think if we need zram specific counters/stats in future,
> > those can be added then.
>
> I agree.
>

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
  2022-04-28 14:25           ` Johannes Weiner
  (?)
@ 2022-04-28 16:59           ` Minchan Kim
  2022-04-28 17:23               ` Johannes Weiner
  -1 siblings, 1 reply; 63+ messages in thread
From: Minchan Kim @ 2022-04-28 16:59 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

On Thu, Apr 28, 2022 at 10:25:59AM -0400, Johannes Weiner wrote:
> On Wed, Apr 27, 2022 at 03:16:48PM -0700, Minchan Kim wrote:
> > On Wed, Apr 27, 2022 at 05:20:29PM -0400, Johannes Weiner wrote:
> > > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > > Hi Johannes,
> > > > 
> > > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > > Currently it requires poking at debugfs to figure out the size and
> > > > > population of the zswap cache on a host. There are no counters for
> > > > > reads and writes against the cache. As a result, it's difficult to
> > > > > understand zswap behavior on production systems.
> > > > > 
> > > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > > 
> > > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > > ---
> > > > >  fs/proc/meminfo.c             |  7 +++++++
> > > > >  include/linux/swap.h          |  5 +++++
> > > > >  include/linux/vm_event_item.h |  4 ++++
> > > > >  mm/vmstat.c                   |  4 ++++
> > > > >  mm/zswap.c                    | 13 ++++++-------
> > > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > > 
> > > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > > --- a/fs/proc/meminfo.c
> > > > > +++ b/fs/proc/meminfo.c
> > > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > >  
> > > > >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > > >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > > +#ifdef CONFIG_ZSWAP
> > > > > +	seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > > +		   (unsigned long)(zswap_pool_total_size >> 10));
> > > > > +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > > +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > > +		   (PAGE_SHIFT - 10));
> > > > > +#endif
> > > > 
> > > > I agree it would be very handy to have the memory consumption in meminfo
> > > > 
> > > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > > 
> > > > If we really go this Zswap only metric instead of general term
> > > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > > in this patchset. Do you think that's better idea instead of
> > > > introducing general term like "Compressed:" or something else?
> > > 
> > > I'm fine with changing it to Compressed. If somebody cares about a
> > > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> > 
> > Thanks! Please consider renaming ZSWPIN to a more general term, too.
> 
> That doesn't make sense to me.
> 
> Zram is a swap backend, its traffic is accounted in PSWPIN/OUT. Zswap
> is a writeback cache on top of the swap backend. It has pages
> entering, refaulting, and being written back to the swap backend
> (PSWPOUT). A zswpout and a zramout are different things.

Think about a system that has two swap devices (storage + zram).
I think it's useful to know how much of the swap IO comes from zram
and how much from storage.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-28 17:02               ` Minchan Kim
  0 siblings, 0 replies; 63+ messages in thread
From: Minchan Kim @ 2022-04-28 17:02 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

On Thu, Apr 28, 2022 at 10:05:13AM -0400, Johannes Weiner wrote:
> On Wed, Apr 27, 2022 at 03:12:17PM -0700, Minchan Kim wrote:
> > On Wed, Apr 27, 2022 at 05:36:26PM -0400, Johannes Weiner wrote:
> > > On Wed, Apr 27, 2022 at 05:20:31PM -0400, Johannes Weiner wrote:
> > > > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > > > Hi Johannes,
> > > > > 
> > > > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > > > Currently it requires poking at debugfs to figure out the size and
> > > > > > population of the zswap cache on a host. There are no counters for
> > > > > > reads and writes against the cache. As a result, it's difficult to
> > > > > > understand zswap behavior on production systems.
> > > > > > 
> > > > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > > > 
> > > > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > > > ---
> > > > > >  fs/proc/meminfo.c             |  7 +++++++
> > > > > >  include/linux/swap.h          |  5 +++++
> > > > > >  include/linux/vm_event_item.h |  4 ++++
> > > > > >  mm/vmstat.c                   |  4 ++++
> > > > > >  mm/zswap.c                    | 13 ++++++-------
> > > > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > > > 
> > > > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > > > --- a/fs/proc/meminfo.c
> > > > > > +++ b/fs/proc/meminfo.c
> > > > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > > >  
> > > > > >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > > > >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > +	seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > > > +		   (unsigned long)(zswap_pool_total_size >> 10));
> > > > > > +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > > > +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > > > +		   (PAGE_SHIFT - 10));
> > > > > > +#endif
> > > > > 
> > > > > I agree it would be very handy to have the memory consumption in meminfo
> > > > > 
> > > > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > > > 
> > > > > If we really go this Zswap only metric instead of general term
> > > > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > > > in this patchset. Do you think that's better idea instead of
> > > > > introducing general term like "Compressed:" or something else?
> > > > 
> > > > I'm fine with changing it to Compressed. If somebody cares about a
> > > > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> > > 
> > > It does raise the question what to do about cgroup, though. Should the
> > > control files (memory.zswap.current & memory.zswap.max) apply to zram
> > > in the future? If so, we should rename them, too.
> > > 
> > > I'm not too familiar with zram, maybe you can provide some
> > > background. AFAIU, Google uses zram quite widely; all the more
> > > confusing why there is no container support for it yet.
> > 
> > My usecase with zram is Android which doesn't use memcg.
> 
> Ok.
> 
> After more thought, my take is that in the future it could make sense
> to track zram pages in a cgroup's memory.current. But it should NOT be
> included in the dedicated memory.zswap.* files. Zswap is an in-kernel
> writeback cache, and those files allow userspace to tune writeback
> thresholds depending on the composition of the workload's
> workingset. This doesn't translate to zram: the wb facility that it
> has is triggered by hand, based on criteria such as idle pages and
> compression rate. It's not based on size. From a cgroup POV, it's a
> memory consumer that should be subject to memory.max, nothing more.
> 
> This distinction applies to meminfo as well, though. While I think it
> makes sense to have a combined "Compressed" counter for zram and
> zswap, it's still important to understand zswap behavior on its own to
> tune the system-wide writeback threshold in max_pool_percent. (And
> again, while zram can also be limited, it's not a writeback threshold,
> it's just a red line for returning -ENOMEM).
> 
> So I'm going to keep the Zswap and Zswapped items and retract the
> delta patch for renaming it to Compressed.
> 
> But I'd ack a patch that adds a combined "Compressed" counter for zram
> + zswap if you send it, Minchan.

If we really want to go with separate stats for zswap and zram, it would
be better to use the direct name "Zram: " instead of "Compressed".

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-28 17:23               ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-28 17:23 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

On Thu, Apr 28, 2022 at 09:59:53AM -0700, Minchan Kim wrote:
> On Thu, Apr 28, 2022 at 10:25:59AM -0400, Johannes Weiner wrote:
> > On Wed, Apr 27, 2022 at 03:16:48PM -0700, Minchan Kim wrote:
> > > On Wed, Apr 27, 2022 at 05:20:29PM -0400, Johannes Weiner wrote:
> > > > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > > > Hi Johannes,
> > > > > 
> > > > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > > > Currently it requires poking at debugfs to figure out the size and
> > > > > > population of the zswap cache on a host. There are no counters for
> > > > > > reads and writes against the cache. As a result, it's difficult to
> > > > > > understand zswap behavior on production systems.
> > > > > > 
> > > > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > > > 
> > > > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > > > ---
> > > > > >  fs/proc/meminfo.c             |  7 +++++++
> > > > > >  include/linux/swap.h          |  5 +++++
> > > > > >  include/linux/vm_event_item.h |  4 ++++
> > > > > >  mm/vmstat.c                   |  4 ++++
> > > > > >  mm/zswap.c                    | 13 ++++++-------
> > > > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > > > 
> > > > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > > > --- a/fs/proc/meminfo.c
> > > > > > +++ b/fs/proc/meminfo.c
> > > > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > > >  
> > > > > >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > > > >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > +	seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > > > +		   (unsigned long)(zswap_pool_total_size >> 10));
> > > > > > +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > > > +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > > > +		   (PAGE_SHIFT - 10));
> > > > > > +#endif
> > > > > 
> > > > > I agree it would be very handy to have the memory consumption in meminfo
> > > > > 
> > > > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > > > 
> > > > > If we really go this Zswap only metric instead of general term
> > > > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > > > in this patchset. Do you think that's better idea instead of
> > > > > introducing general term like "Compressed:" or something else?
> > > > 
> > > > I'm fine with changing it to Compressed. If somebody cares about a
> > > > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> > > 
> > > Thanks! Please consider renaming ZSWPIN to a more general term, too.
> > 
> > That doesn't make sense to me.
> > 
> > Zram is a swap backend, its traffic is accounted in PSWPIN/OUT. Zswap
> > is a writeback cache on top of the swap backend. It has pages
> > entering, refaulting, and being written back to the swap backend
> > (PSWPOUT). A zswpout and a zramout are different things.
> 
> Think about a system that has two swap devices (storage + zram).
> I think it's useful to know how much of the swap IO comes from zram
> and how much from storage.

Hm, isn't this comparable to having one swap on flash and one swap on
a rotating disk? /sys/block/*/stat should be able to tell you how
traffic is distributed, no?
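
For what it's worth, those per-device numbers are easy to pull from
userspace; a minimal example (zram0 is just an example device, and the
first and fifth fields of the block stat file are completed read and
write I/Os):

	#include <stdio.h>

	int main(void)
	{
		unsigned long long r_ios, rm, rs, rt, w_ios;
		FILE *f = fopen("/sys/block/zram0/stat", "r");

		if (!f)
			return 1;
		if (fscanf(f, "%llu %llu %llu %llu %llu",
			   &r_ios, &rm, &rs, &rt, &w_ios) == 5)
			printf("zram0: %llu reads, %llu writes\n", r_ios, w_ios);
		fclose(f);
		return 0;
	}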

What I'm more worried about is the fact that in theory you can stack
zswap on top of zram. Consider a fast compression cache on top of a
higher compression backend. Is somebody doing this now? I doubt
it. But as people look into memory tiering more and more, this doesn't
sound entirely implausible. If the stacked layers then share the same
in/out events, it would be quite confusing.

If you think PSWPIN/OUT and per-device stats aren't enough, I'm not
opposed to adding zramin/out to /proc/vmstat as well. I think we're
less worried there than with /proc/meminfo. I'd just prefer to keep
them separate from the zswap events.
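
For reference, that would be about as small an addition as the zswap
side of this patch; roughly (a sketch only, the enum placement and the
exact zram hook points are approximations rather than actual code):

	/* include/linux/vm_event_item.h, next to ZSWPIN/ZSWPOUT */
	#if IS_ENABLED(CONFIG_ZRAM)
			ZRAMIN,
			ZRAMOUT,
	#endif

	/* mm/vmstat.c, vmstat_text[] */
			"zramin",
			"zramout",

	/* drivers/block/zram/zram_drv.c, in the bio read and write paths */
			count_vm_event(ZRAMIN);		/* page read back from zram */
			count_vm_event(ZRAMOUT);	/* page written to zram */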

Does that sound reasonable?

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-28 17:27                 ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-28 17:27 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

On Thu, Apr 28, 2022 at 10:02:46AM -0700, Minchan Kim wrote:
> On Thu, Apr 28, 2022 at 10:05:13AM -0400, Johannes Weiner wrote:
> > But I'd ack a patch that adds a combined "Compressed" counter for zram
> > + zswap if you send it, Minchan.
> 
> If we really want to go with separate stats for zswap and zram, it would
> be better to use the direct name "Zram: " instead of "Compressed".

That works for me as well.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
  2022-04-28 17:23               ` Johannes Weiner
  (?)
@ 2022-04-28 17:31               ` Minchan Kim
  2022-04-28 18:34                   ` Johannes Weiner
  -1 siblings, 1 reply; 63+ messages in thread
From: Minchan Kim @ 2022-04-28 17:31 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

On Thu, Apr 28, 2022 at 01:23:21PM -0400, Johannes Weiner wrote:
> On Thu, Apr 28, 2022 at 09:59:53AM -0700, Minchan Kim wrote:
> > On Thu, Apr 28, 2022 at 10:25:59AM -0400, Johannes Weiner wrote:
> > > On Wed, Apr 27, 2022 at 03:16:48PM -0700, Minchan Kim wrote:
> > > > On Wed, Apr 27, 2022 at 05:20:29PM -0400, Johannes Weiner wrote:
> > > > > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > > > > Hi Johannes,
> > > > > > 
> > > > > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > > > > Currently it requires poking at debugfs to figure out the size and
> > > > > > > population of the zswap cache on a host. There are no counters for
> > > > > > > reads and writes against the cache. As a result, it's difficult to
> > > > > > > understand zswap behavior on production systems.
> > > > > > > 
> > > > > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > > > > 
> > > > > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > > > > ---
> > > > > > >  fs/proc/meminfo.c             |  7 +++++++
> > > > > > >  include/linux/swap.h          |  5 +++++
> > > > > > >  include/linux/vm_event_item.h |  4 ++++
> > > > > > >  mm/vmstat.c                   |  4 ++++
> > > > > > >  mm/zswap.c                    | 13 ++++++-------
> > > > > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > > > > --- a/fs/proc/meminfo.c
> > > > > > > +++ b/fs/proc/meminfo.c
> > > > > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > > > >  
> > > > > > >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > > > > >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > > +	seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > > > > +		   (unsigned long)(zswap_pool_total_size >> 10));
> > > > > > > +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > > > > +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > > > > +		   (PAGE_SHIFT - 10));
> > > > > > > +#endif
> > > > > > 
> > > > > > I agree it would be very handy to have the memory consumption in meminfo
> > > > > > 
> > > > > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > > > > 
> > > > > > If we really go this Zswap only metric instead of general term
> > > > > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > > > > in this patchset. Do you think that's better idea instead of
> > > > > > introducing general term like "Compressed:" or something else?
> > > > > 
> > > > > I'm fine with changing it to Compressed. If somebody cares about a
> > > > > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> > > > 
> > > > Thanks! Please consider ZSWPIN to rename more general term, too.
> > > 
> > > That doesn't make sense to me.
> > > 
> > > Zram is a swap backend, its traffic is accounted in PSWPIN/OUT. Zswap
> > > is a writeback cache on top of the swap backend. It has pages
> > > entering, refaulting, and being written back to the swap backend
> > > (PSWPOUT). A zswpout and a zramout are different things.
> > 
> > Think about that system has two swap devices (storage + zram).
> > I think it's useful to know how many swap IO comes from zram
> > and rest of them are storage.
> 
> Hm, isn't this comparable to having one swap on flash and one swap on
> a rotating disk? /sys/block/*/stat should be able to tell you how
> traffic is distributed, no?

That raises the same question for me. Could you also look at the zswap
stats instead of adding them to vmstat? (If zswap doesn't have the
counter, couldn't we simply add a new stat in sysfs?)

I thought the patch aimed to expose statistics through the familiar
meminfo and vmstat interfaces so they are easier to grab, and I wanted
to leverage that for zram, too.

> 
> What I'm more worried about is the fact that in theory you can stack
> zswap on top of zram. Consider a fast compression cache on top of a
> higher compression backend. Is somebody doing this now? I doubt
> it. But as people look into memory tiering more and more, this doesn't
> sound entirely implausible. If the stacked layers then share the same
> in/out events, it would be quite confusing.
> 
> If you think PSWPIN/OUT and per-device stats aren't enough, I'm not
> opposed to adding zramin/out to /proc/vmstat as well. I think we're
> less worried there than with /proc/meminfo. I'd just prefer to keep
> them separate from the zswap events.
> 
> Does that sound reasonable?
> 

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-28 18:34                   ` Johannes Weiner
  0 siblings, 0 replies; 63+ messages in thread
From: Johannes Weiner @ 2022-04-28 18:34 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

On Thu, Apr 28, 2022 at 10:31:45AM -0700, Minchan Kim wrote:
> On Thu, Apr 28, 2022 at 01:23:21PM -0400, Johannes Weiner wrote:
> > On Thu, Apr 28, 2022 at 09:59:53AM -0700, Minchan Kim wrote:
> > > On Thu, Apr 28, 2022 at 10:25:59AM -0400, Johannes Weiner wrote:
> > > > On Wed, Apr 27, 2022 at 03:16:48PM -0700, Minchan Kim wrote:
> > > > > On Wed, Apr 27, 2022 at 05:20:29PM -0400, Johannes Weiner wrote:
> > > > > > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > > > > > Hi Johannes,
> > > > > > > 
> > > > > > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > > > > > Currently it requires poking at debugfs to figure out the size and
> > > > > > > > population of the zswap cache on a host. There are no counters for
> > > > > > > > reads and writes against the cache. As a result, it's difficult to
> > > > > > > > understand zswap behavior on production systems.
> > > > > > > > 
> > > > > > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > > > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > > > > > 
> > > > > > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > > > > > ---
> > > > > > > >  fs/proc/meminfo.c             |  7 +++++++
> > > > > > > >  include/linux/swap.h          |  5 +++++
> > > > > > > >  include/linux/vm_event_item.h |  4 ++++
> > > > > > > >  mm/vmstat.c                   |  4 ++++
> > > > > > > >  mm/zswap.c                    | 13 ++++++-------
> > > > > > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > > > > > 
> > > > > > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > > > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > > > > > --- a/fs/proc/meminfo.c
> > > > > > > > +++ b/fs/proc/meminfo.c
> > > > > > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > > > > >  
> > > > > > > >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > > > > > >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > > > +	seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > > > > > +		   (unsigned long)(zswap_pool_total_size >> 10));
> > > > > > > > +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > > > > > +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > > > > > +		   (PAGE_SHIFT - 10));
> > > > > > > > +#endif
> > > > > > > 
> > > > > > > I agree it would be very handy to have the memory consumption in meminfo
> > > > > > > 
> > > > > > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > > > > > 
> > > > > > > If we really go this Zswap only metric instead of general term
> > > > > > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > > > > > in this patchset. Do you think that's better idea instead of
> > > > > > > introducing general term like "Compressed:" or something else?
> > > > > > 
> > > > > > I'm fine with changing it to Compressed. If somebody cares about a
> > > > > > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> > > > > 
> > > > > Thanks! Please consider ZSWPIN to rename more general term, too.
> > > > 
> > > > That doesn't make sense to me.
> > > > 
> > > > Zram is a swap backend, its traffic is accounted in PSWPIN/OUT. Zswap
> > > > is a writeback cache on top of the swap backend. It has pages
> > > > entering, refaulting, and being written back to the swap backend
> > > > (PSWPOUT). A zswpout and a zramout are different things.
> > > 
> > > Think about that system has two swap devices (storage + zram).
> > > I think it's useful to know how many swap IO comes from zram
> > > and rest of them are storage.
> > 
> > Hm, isn't this comparable to having one swap on flash and one swap on
> > a rotating disk? /sys/block/*/stat should be able to tell you how
> > traffic is distributed, no?
> 
> That raises the same question for me. Could you also look at the zswap
> stats instead of adding them to vmstat? (If zswap doesn't have the
> counter, couldn't we simply add a new stat in sysfs?)

My point is that for regular swap backends there is already
PSWP*. Distinguishing traffic between two swap backends is legitimate
of course, but zram is not really special compared to other backends
from that POV. It's only special in its memory consumption.

zswap *is* special, though. Even though some people use it *like* a
swap backend, it's also a cache on top of swap. zswap loads and stores
do not show up in PSWP*. And they shouldn't, because in a cache
configuration, you still need the separate PSWP* stats to understand
cache eviction behavior and cache miss ratio. memory -> zswap is
ZSWPOUT; zswap -> disk is PSWPOUT; PSWPIN is a cache miss etc.
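
To make that layering concrete, a minimal userspace sketch (an
illustration only, not code from this series) could read /proc/vmstat
and derive an approximate zswap hit ratio, assuming the zswpin/zswpout
events added by this patch show up next to the existing pswpin/pswpout
counters:

#include <stdio.h>
#include <string.h>

/*
 * Read /proc/vmstat and print the swap/zswap event counters plus a
 * rough zswap hit ratio: with zswap acting as a cache, pswpin counts
 * faults that had to go to the backing device, i.e. cache misses.
 */
int main(void)
{
	char name[64];
	unsigned long long val;
	unsigned long long zswpin = 0, zswpout = 0, pswpin = 0, pswpout = 0;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 1;
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "zswpin"))
			zswpin = val;
		else if (!strcmp(name, "zswpout"))
			zswpout = val;
		else if (!strcmp(name, "pswpin"))
			pswpin = val;
		else if (!strcmp(name, "pswpout"))
			pswpout = val;
	}
	fclose(f);

	printf("zswpout (memory -> zswap):          %llu\n", zswpout);
	printf("pswpout (writeback/direct -> disk): %llu\n", pswpout);
	printf("zswpin  (zswap cache hits):         %llu\n", zswpin);
	printf("pswpin  (disk reads, cache misses): %llu\n", pswpin);
	if (zswpin + pswpin)
		printf("approx. zswap hit ratio:            %.2f\n",
		       (double)zswpin / (double)(zswpin + pswpin));
	return 0;
}

(Reading pswpin as "miss" assumes all swap devices on the host sit
behind zswap; on a mixed setup the per-device /sys/block/*/stat numbers
mentioned earlier would still be needed.)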

> I thought the patch aimed to expose statistics through the familiar
> meminfo and vmstat interfaces so they are easier to grab, and I wanted
> to leverage that for zram, too.

Right. zram and zswap overlap in their functionality and have similar
deficits in their stats. Both should be fixed, I'm not opposing
that. But IMO we should be careful about conflating
them. Fundamentally, one is a block device, the other is an MM-native
cache layer that sits on top of block devices. Drawing false
equivalencies between them will come back to haunt us.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-04-28 19:58                     ` Minchan Kim
  0 siblings, 0 replies; 63+ messages in thread
From: Minchan Kim @ 2022-04-28 19:58 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Morton, Michal Hocko, Roman Gushchin, Shakeel Butt,
	Seth Jennings, Dan Streetman, linux-mm, cgroups, linux-kernel,
	kernel-team

On Thu, Apr 28, 2022 at 02:34:28PM -0400, Johannes Weiner wrote:
> On Thu, Apr 28, 2022 at 10:31:45AM -0700, Minchan Kim wrote:
> > On Thu, Apr 28, 2022 at 01:23:21PM -0400, Johannes Weiner wrote:
> > > On Thu, Apr 28, 2022 at 09:59:53AM -0700, Minchan Kim wrote:
> > > > On Thu, Apr 28, 2022 at 10:25:59AM -0400, Johannes Weiner wrote:
> > > > > On Wed, Apr 27, 2022 at 03:16:48PM -0700, Minchan Kim wrote:
> > > > > > On Wed, Apr 27, 2022 at 05:20:29PM -0400, Johannes Weiner wrote:
> > > > > > > On Wed, Apr 27, 2022 at 01:29:34PM -0700, Minchan Kim wrote:
> > > > > > > > Hi Johannes,
> > > > > > > > 
> > > > > > > > On Wed, Apr 27, 2022 at 12:00:15PM -0400, Johannes Weiner wrote:
> > > > > > > > > Currently it requires poking at debugfs to figure out the size and
> > > > > > > > > population of the zswap cache on a host. There are no counters for
> > > > > > > > > reads and writes against the cache. As a result, it's difficult to
> > > > > > > > > understand zswap behavior on production systems.
> > > > > > > > > 
> > > > > > > > > Print zswap memory consumption and how many pages are zswapped out in
> > > > > > > > > /proc/meminfo. Count zswapouts and zswapins in /proc/vmstat.
> > > > > > > > > 
> > > > > > > > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > > > > > > > ---
> > > > > > > > >  fs/proc/meminfo.c             |  7 +++++++
> > > > > > > > >  include/linux/swap.h          |  5 +++++
> > > > > > > > >  include/linux/vm_event_item.h |  4 ++++
> > > > > > > > >  mm/vmstat.c                   |  4 ++++
> > > > > > > > >  mm/zswap.c                    | 13 ++++++-------
> > > > > > > > >  5 files changed, 26 insertions(+), 7 deletions(-)
> > > > > > > > > 
> > > > > > > > > diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> > > > > > > > > index 6fa761c9cc78..6e89f0e2fd20 100644
> > > > > > > > > --- a/fs/proc/meminfo.c
> > > > > > > > > +++ b/fs/proc/meminfo.c
> > > > > > > > > @@ -86,6 +86,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
> > > > > > > > >  
> > > > > > > > >  	show_val_kb(m, "SwapTotal:      ", i.totalswap);
> > > > > > > > >  	show_val_kb(m, "SwapFree:       ", i.freeswap);
> > > > > > > > > +#ifdef CONFIG_ZSWAP
> > > > > > > > > +	seq_printf(m,  "Zswap:          %8lu kB\n",
> > > > > > > > > +		   (unsigned long)(zswap_pool_total_size >> 10));
> > > > > > > > > +	seq_printf(m,  "Zswapped:       %8lu kB\n",
> > > > > > > > > +		   (unsigned long)atomic_read(&zswap_stored_pages) <<
> > > > > > > > > +		   (PAGE_SHIFT - 10));
> > > > > > > > > +#endif
> > > > > > > > 
> > > > > > > > I agree it would be very handy to have the memory consumption in meminfo
> > > > > > > > 
> > > > > > > > https://lore.kernel.org/all/YYwZXrL3Fu8%2FvLZw@google.com/
> > > > > > > > 
> > > > > > > > If we really go this Zswap only metric instead of general term
> > > > > > > > "Compressed", I'd like to post maybe "Zram:" with same reason
> > > > > > > > in this patchset. Do you think that's better idea instead of
> > > > > > > > introducing general term like "Compressed:" or something else?
> > > > > > > 
> > > > > > > I'm fine with changing it to Compressed. If somebody cares about a
> > > > > > > more detailed breakdown, we can add Zswap, Zram subsets as needed.
> > > > > > 
> > > > > > Thanks! Please consider ZSWPIN to rename more general term, too.
> > > > > 
> > > > > That doesn't make sense to me.
> > > > > 
> > > > > Zram is a swap backend, its traffic is accounted in PSWPIN/OUT. Zswap
> > > > > is a writeback cache on top of the swap backend. It has pages
> > > > > entering, refaulting, and being written back to the swap backend
> > > > > (PSWPOUT). A zswpout and a zramout are different things.
> > > > 
> > > > Think about that system has two swap devices (storage + zram).
> > > > I think it's useful to know how many swap IO comes from zram
> > > > and rest of them are storage.
> > > 
> > > Hm, isn't this comparable to having one swap on flash and one swap on
> > > a rotating disk? /sys/block/*/stat should be able to tell you how
> > > traffic is distributed, no?
> > 
> > That raises the same question for me. Could you also look at the zswap
> > stats instead of adding them to vmstat? (If zswap doesn't have the
> > counter, couldn't we simply add a new stat in sysfs?)
> 
> My point is that for regular swap backends there is already
> PSWP*. Distinguishing traffic between two swap backends is legitimate
> of course, but zram is not really special compared to other backends
> from that POV. It's only special in its memory consumption.
> 
> zswap *is* special, though. Even though some people use it *like* a
> swap backend, it's also a cache on top of swap. zswap loads and stores
> do not show up in PSWP*. And they shouldn't, because in a cache
> configuration, you still need the separate PSWP* stats to understand
> cache eviction behavior and cache miss ratio. memory -> zswap is
> ZSWPOUT; zswap -> disk is PSWPOUT; PSWPIN is a cache miss etc.
> 
> > I thought the patch aimed to expose statistics through the familiar
> > meminfo and vmstat interfaces so they are easier to grab, and I wanted
> > to leverage that for zram, too.
> 
> Right. zram and zswap overlap in their functionality and have similar
> deficits in their stats. Both should be fixed, I'm not opposing
> that. But IMO we should be careful about conflating
> them. Fundamentally, one is a block device, the other is an MM-native
> cache layer that sits on top of block devices. Drawing false
> equivalencies between them will come back to haunt us.

Makes sense to me.

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-05-05 19:30                   ` Shakeel Butt
  0 siblings, 0 replies; 63+ messages in thread
From: Shakeel Butt @ 2022-05-05 19:30 UTC (permalink / raw)
  To: Johannes Weiner, Yosry Ahmed, Yuanchu Xie
  Cc: Minchan Kim, Andrew Morton, Michal Hocko, Roman Gushchin,
	Seth Jennings, Dan Streetman, Linux MM, Cgroups, LKML,
	Kernel Team

+Yosry & Yuanchu

On Thu, Apr 28, 2022 at 8:17 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
[...]
> >
> > Yes, we have some modifications to zswap to make it work without any
> > backing real swap.
>
> Not sure if you can share them, but I would be interested in those
> changes. We have real backing swap, but because of the way swap
> entries are allocated, pages stored in zswap will consume physical
> disk slots. So on top of regular swap, you need to provision disk
> space for zswap as well, which is unfortunate.
>
> What could be useful is a separate swap entry address space that maps
> zswap slots and disk slots alike. This would fix the above problem. It
> would have the added benefit of making swapoff much simpler and faster
> too, as it doesn't need to chase down page tables to free disk slots.
>

I think we can share the code. Adding Yosry & Yuanchu who are
currently maintaining that piece of code.

Though that code might not be in an upstreamable state. At a high
level, it introduces a new type of swap (SWP_GHOST) that is backed by a
truncated file, so no real disk space is needed. zswap always accepts
the page, so the kernel never tries to go to the underlying swapfile
(reality is a bit more complicated due to incompressible memory and
systems with no real disk present).

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-05-05 19:33                   ` Shakeel Butt
  0 siblings, 0 replies; 63+ messages in thread
From: Shakeel Butt @ 2022-05-05 19:33 UTC (permalink / raw)
  To: Yang Shi, Suleiman Souhlal
  Cc: Johannes Weiner, Minchan Kim, Andrew Morton, Michal Hocko,
	Roman Gushchin, Seth Jennings, Dan Streetman, Linux MM, Cgroups,
	LKML, Kernel Team

On Thu, Apr 28, 2022 at 9:54 AM Yang Shi <shy828301@gmail.com> wrote:
>
[...]
> > Yes, we have some modifications to zswap to make it work without any
> > backing real swap. Though there is a future plan to move to zram
> > eventually.
>
> Interesting, if so why not just simply use zram?
>

Historical reasons. When we started trying out zswap, I think zram
was still in staging or not stable enough (Suleiman can give a better
answer).

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-05-05 22:24                     ` Suleiman Souhlal
  0 siblings, 0 replies; 63+ messages in thread
From: Suleiman Souhlal @ 2022-05-05 22:24 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Yang Shi, Johannes Weiner, Minchan Kim, Andrew Morton,
	Michal Hocko, Roman Gushchin, Seth Jennings, Dan Streetman,
	Linux MM, Cgroups, LKML, Kernel Team

On Fri, May 6, 2022 at 4:33 AM Shakeel Butt <shakeelb@google.com> wrote:
>
> On Thu, Apr 28, 2022 at 9:54 AM Yang Shi <shy828301@gmail.com> wrote:
> >
> [...]
> > > Yes, we have some modifications to zswap to make it work without any
> > > backing real swap. Though there is a future plan to move to zram
> > > eventually.
> >
> > Interesting, if so why not just simply use zram?
> >
>
> Historical reasons. When we started trying out the zswap, I think zram
> was still in staging or not stable enough (Suleiman can give a better
> answer).

One of the reasons we chose zswap instead of zram is that zswap can
reject pages.
Also, we wanted to have per-memcg pools, which zswap made much easier to do.

-- Suleiman

^ permalink raw reply	[flat|nested] 63+ messages in thread

* Re: [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage
@ 2022-05-05 23:54                       ` Yu Zhao
  0 siblings, 0 replies; 63+ messages in thread
From: Yu Zhao @ 2022-05-05 23:54 UTC (permalink / raw)
  To: Suleiman Souhlal, Yang Shi
  Cc: Shakeel Butt, Johannes Weiner, Minchan Kim, Andrew Morton,
	Michal Hocko, Roman Gushchin, Seth Jennings, Dan Streetman,
	Linux MM, Cgroups, LKML, Kernel Team

On Thu, May 5, 2022 at 3:25 PM Suleiman Souhlal <suleiman@google.com> wrote:
>
> On Fri, May 6, 2022 at 4:33 AM Shakeel Butt <shakeelb@google.com> wrote:
> >
> > On Thu, Apr 28, 2022 at 9:54 AM Yang Shi <shy828301@gmail.com> wrote:
> > >
> > [...]
> > > > Yes, we have some modifications to zswap to make it work without any
> > > > backing real swap. Though there is a future plan to move to zram
> > > > eventually.
> > >
> > > Interesting, if so why not just simply use zram?
> > >
> >
> > Historical reasons. When we started trying out the zswap, I think zram
> > was still in staging or not stable enough (Suleiman can give a better
> > answer).
>
> One of the reasons we chose zswap instead of zram is that zswap can
> reject pages.
> Also, we wanted to have per-memcg pools, which zswap made much easier to do.

Yes, it was a design choice. zswap was cache-like (tiering) and zram
was storage-like (endpoint). Though nowadays the distinction is
blurry.

It had nothing to do with zram being in staging -- when we took zswap,
it was out of the tree.

^ permalink raw reply	[flat|nested] 63+ messages in thread

end of thread, other threads:[~2022-05-05 23:54 UTC | newest]

Thread overview: 63+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-27 16:00 [PATCH 0/5] zswap: cgroup accounting & control Johannes Weiner
2022-04-27 16:00 ` Johannes Weiner
2022-04-27 16:00 ` [PATCH 1/5] mm: Kconfig: move swap and slab config options to the MM section Johannes Weiner
2022-04-27 16:00   ` Johannes Weiner
2022-04-27 16:00 ` [PATCH 2/5] mm: Kconfig: group swap, slab, hotplug and thp options into submenus Johannes Weiner
2022-04-27 16:00   ` Johannes Weiner
2022-04-27 16:00 ` [PATCH 3/5] mm: Kconfig: simplify zswap configuration Johannes Weiner
2022-04-27 16:00   ` Johannes Weiner
2022-04-27 16:00 ` [PATCH 4/5] mm: zswap: add basic meminfo and vmstat coverage Johannes Weiner
2022-04-27 16:00   ` Johannes Weiner
2022-04-27 18:36   ` Andrew Morton
2022-04-27 18:36     ` Andrew Morton
2022-04-27 18:53     ` Johannes Weiner
2022-04-27 19:50       ` Johannes Weiner
2022-04-27 19:50         ` Johannes Weiner
2022-04-27 19:51       ` Johannes Weiner
2022-04-27 20:29   ` Minchan Kim
2022-04-27 20:29     ` Minchan Kim
2022-04-27 21:20     ` Johannes Weiner
2022-04-27 21:20       ` Johannes Weiner
2022-04-27 21:36       ` Johannes Weiner
2022-04-27 21:36         ` Johannes Weiner
2022-04-27 22:12         ` Minchan Kim
2022-04-27 22:12           ` Minchan Kim
2022-04-28 14:05           ` Johannes Weiner
2022-04-28 14:05             ` Johannes Weiner
2022-04-28 17:02             ` Minchan Kim
2022-04-28 17:02               ` Minchan Kim
2022-04-28 17:27               ` Johannes Weiner
2022-04-28 17:27                 ` Johannes Weiner
2022-04-27 23:36         ` Shakeel Butt
2022-04-27 23:36           ` Shakeel Butt
2022-04-28 14:36           ` Johannes Weiner
2022-04-28 14:36             ` Johannes Weiner
2022-04-28 14:49             ` Shakeel Butt
2022-04-28 14:49               ` Shakeel Butt
2022-04-28 15:16               ` Johannes Weiner
2022-04-28 15:16                 ` Johannes Weiner
2022-04-28 16:59                 ` Yang Shi
2022-04-28 16:59                   ` Yang Shi
2022-05-05 19:30                 ` Shakeel Butt
2022-05-05 19:30                   ` Shakeel Butt
2022-04-28 16:54               ` Yang Shi
2022-05-05 19:33                 ` Shakeel Butt
2022-05-05 19:33                   ` Shakeel Butt
2022-05-05 22:24                   ` Suleiman Souhlal
2022-05-05 22:24                     ` Suleiman Souhlal
2022-05-05 23:54                     ` Yu Zhao
2022-05-05 23:54                       ` Yu Zhao
2022-04-27 22:16       ` Minchan Kim
2022-04-27 22:16         ` Minchan Kim
2022-04-28 14:25         ` Johannes Weiner
2022-04-28 14:25           ` Johannes Weiner
2022-04-28 16:59           ` Minchan Kim
2022-04-28 17:23             ` Johannes Weiner
2022-04-28 17:23               ` Johannes Weiner
2022-04-28 17:31               ` Minchan Kim
2022-04-28 18:34                 ` Johannes Weiner
2022-04-28 18:34                   ` Johannes Weiner
2022-04-28 19:58                   ` Minchan Kim
2022-04-28 19:58                     ` Minchan Kim
2022-04-27 16:00 ` [PATCH 5/5] zswap: memcg accounting Johannes Weiner
2022-04-27 16:00   ` Johannes Weiner
