* [PATCH v4 0/3] mm: process/cgroup ksm support
@ 2023-03-10 18:28 Stefan Roesch
  2023-03-10 18:28 ` [PATCH v4 1/3] mm: add new api to enable ksm per process Stefan Roesch
                   ` (4 more replies)
  0 siblings, 5 replies; 36+ messages in thread
From: Stefan Roesch @ 2023-03-10 18:28 UTC (permalink / raw)
  To: kernel-team
  Cc: shr, linux-mm, riel, mhocko, david, linux-kselftest, linux-doc,
	akpm, hannes

So far KSM can only be enabled by calling madvise for memory regions. To
be able to use KSM for more workloads, KSM needs to have the ability to be
enabled / disabled at the process / cgroup level.

Use case 1:
The madvise call is not available in the programming language. Examples are
programs with forked workloads that use a garbage-collected language without
pointers. In such a language madvise cannot be made available.

In addition, the addresses of objects get moved around as they are garbage
collected. KSM sharing needs to be enabled "from the outside" for these types
of workloads.

Use case 2:
The same interpreter can also be used for workloads where KSM brings no
benefit or even has overhead. We'd like to be able to enable KSM on a
workload-by-workload basis.

Use case 3:
With the madvise call, sharing opportunities are only enabled for the current
process: it is a workload-local decision. A considerable number of sharing
opportunities may exist across multiple workloads or jobs. Only a higher-level
entity like a job scheduler or container can know for certain if it is running
one or more instances of a job. That job scheduler however doesn't have
the necessary internal workload knowledge to make targeted madvise calls.

Security concerns:
In previous discussions security concerns have been brought up. The problem is
that an individual workload does not have the knowledge about what else is
running on a machine. Therefore it has to be very conservative in what memory
areas can be shared or not. However, if the system is dedicated to running
multiple jobs within the same security domain, it's the job scheduler that has
the knowledge that sharing can be safely enabled and is even desirable.

Performance:
Experiments using UKSM have shown a capacity increase of around 20%.


1. New options for prctl system command
This patch series adds two new options to the prctl system call. The first
enables KSM at the process level; the second queries the setting.

The setting will be inherited by child processes.

With the above setting, KSM can be enabled for the seed process of a cgroup
and all processes in the cgroup will inherit the setting.

2. Changes to KSM processing
When KSM is enabled at the process level, the KSM code will iterate over all
the VMAs and enable KSM for the eligible VMAs.

When forking a process that has KSM enabled, the setting will be inherited by
the new child process.

In addition, when KSM is disabled for a process, KSM will be disabled for the
VMAs where KSM has been enabled.

3. Add general_profit metric
The general_profit metric of KSM is specified in the documentation, but not
calculated. This adds the general profit metric to /sys/kernel/mm/ksm.

4. Add more metrics to ksm_stat
This adds the process profit and ksm type metric to /proc/<pid>/ksm_stat.

5. Add more tests to ksm_tests
This adds an option to specify the merge type to the ksm_tests. This allows
testing both madvise and prctl KSM. It also adds a new option to query if prctl
KSM has been enabled. It adds a fork test to verify that the KSM process
setting is inherited by child processes.


Changes:
- V4:
  - removing check in prctl for MMF_VM_MERGEABLE in PR_SET_MEMORY_MERGE
    handling
  - Checking for VM_MERGEABLE and MMF_VM_MERGE_ANY to avoid changing vm_flags
    - This requires also checking that the vma is compatible. The
      compatibility check is provided by a new helper
    - processes which have set MMF_VM_MERGE_ANY only need to call the
      helper, not madvise.
  - removed unmerge_vmas function, this function is no longer necessary,
    clearing the MMF_VM_MERGE_ANY bit is sufficient

- V3:
  - folded patch 1 - 6
  - folded patch 7 - 14
  - folded patch 15 - 19
  - Expanded on the use cases in the cover letter
  - Added a section on security concerns to the cover letter

- V2:
  - Added use cases to the cover letter
  - Removed the tracing patch from the patch series and posted it as an
    individual patch
  - Refreshed repo



Stefan Roesch (3):
  mm: add new api to enable ksm per process
  mm: add new KSM process and sysfs knobs
  selftests/mm: add new selftests for KSM

 Documentation/ABI/testing/sysfs-kernel-mm-ksm |   8 +
 Documentation/admin-guide/mm/ksm.rst          |   8 +-
 fs/proc/base.c                                |   5 +
 include/linux/ksm.h                           |  19 +-
 include/linux/sched/coredump.h                |   1 +
 include/uapi/linux/prctl.h                    |   2 +
 kernel/sys.c                                  |  27 ++
 mm/ksm.c                                      | 137 +++++++---
 tools/include/uapi/linux/prctl.h              |   2 +
 tools/testing/selftests/mm/Makefile           |   2 +-
 tools/testing/selftests/mm/ksm_tests.c        | 254 +++++++++++++++---
 11 files changed, 388 insertions(+), 77 deletions(-)

-- 
2.34.1



* [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-03-10 18:28 [PATCH v4 0/3] mm: process/cgroup ksm support Stefan Roesch
@ 2023-03-10 18:28 ` Stefan Roesch
  2023-03-13 16:26   ` Johannes Weiner
  2023-04-03 10:37   ` David Hildenbrand
  2023-03-10 18:28 ` [PATCH v4 2/3] mm: add new KSM process and sysfs knobs Stefan Roesch
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 36+ messages in thread
From: Stefan Roesch @ 2023-03-10 18:28 UTC (permalink / raw)
  To: kernel-team
  Cc: shr, linux-mm, riel, mhocko, david, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya

Patch series "mm: process/cgroup ksm support", v4.

So far KSM can only be enabled by calling madvise for memory regions.  To
be able to use KSM for more workloads, KSM needs to have the ability to be
enabled / disabled at the process / cgroup level.

Use case 1:

  The madvise call is not available in the programming language.  Examples
  are programs with forked workloads that use a garbage-collected language
  without pointers.  In such a language madvise cannot be made available.

  In addition, the addresses of objects get moved around as they are
  garbage collected.  KSM sharing needs to be enabled "from the outside"
  for these types of workloads.

Use case 2:

  The same interpreter can also be used for workloads where KSM brings
  no benefit or even has overhead.  We'd like to be able to enable KSM on
  a workload-by-workload basis.

Use case 3:

  With the madvise call, sharing opportunities are only enabled for the
  current process: it is a workload-local decision.  A considerable number
  of sharing opportunities may exist across multiple workloads or jobs.
  Only a higher-level entity like a job scheduler or container can know
  for certain if it is running one or more instances of a job.  That job
  scheduler however doesn't have the necessary internal workload knowledge
  to make targeted madvise calls.

Security concerns:

  In previous discussions security concerns have been brought up.  The
  problem is that an individual workload does not have the knowledge about
  what else is running on a machine.  Therefore it has to be very
  conservative in what memory areas can be shared or not.  However, if the
  system is dedicated to running multiple jobs within the same security
  domain, it's the job scheduler that has the knowledge that sharing can be
  safely enabled and is even desirable.

Performance:

  Experiments using UKSM have shown a capacity increase of around 20%.


1. New options for prctl system command

   This patch series adds two new options to the prctl system call.
   The first enables KSM at the process level and the second queries the
   setting.

   The setting will be inherited by child processes.

   With the above setting, KSM can be enabled for the seed process of a
   cgroup and all processes in the cgroup will inherit the setting.

2. Changes to KSM processing

   When KSM is enabled at the process level, the KSM code will iterate
   over all the VMAs and enable KSM for the eligible VMAs.

   When forking a process that has KSM enabled, the setting will be
   inherited by the new child process.

   In addition, when KSM is disabled for a process, KSM will be disabled
   for the VMAs where KSM has been enabled.

3. Add general_profit metric

   The general_profit metric of KSM is specified in the documentation,
   but not calculated.  This adds the general profit metric to
   /sys/kernel/mm/ksm.

4. Add more metrics to ksm_stat

   This adds the process profit and ksm type metric to
   /proc/<pid>/ksm_stat.

5. Add more tests to ksm_tests

   This adds an option to specify the merge type to the ksm_tests.
   This allows testing both madvise and prctl KSM.  It also adds a new
   option to query if prctl KSM has been enabled.  It adds a fork test to
   verify that the KSM process setting is inherited by child processes.

An update to the prctl(2) manpage has been proposed at [1].

This patch (of 3):

This adds a new prctl API to enable and disable KSM on a per-process
basis instead of only on a per-VMA basis (with madvise).

1) Introduce new MMF_VM_MERGE_ANY flag

   This introduces the new MMF_VM_MERGE_ANY flag.  When this flag is
   set, kernel samepage merging (ksm) gets enabled for all vmas of a
   process.

2) add flag to __ksm_enter

   This change adds the flag parameter to __ksm_enter.  This makes it
   possible to distinguish whether ksm was entered via prctl or madvise.

3) add flag to __ksm_exit call

   This adds the flag parameter to the __ksm_exit() call.  This makes it
   possible to distinguish whether the call is for a prctl or madvise
   invocation.

4) invoke madvise for all vmas in scan_get_next_rmap_item

   If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
   over all the vmas and enable ksm if possible.  For the vmas that can
   be ksm enabled, this is only done once.

5) support disabling of ksm for a process

   This adds the ability to disable ksm for a process if ksm has been
   enabled for the process.

6) add new prctl option to get and set ksm for a process

   This adds two new options to the prctl system call
   - enable ksm for all vmas of a process (if the vmas support it).
   - query if ksm has been enabled for a process.

Link: https://lkml.kernel.org/r/20230227220206.436662-1-shr@devkernel.io [1]
Link: https://lkml.kernel.org/r/20230224044000.3084046-1-shr@devkernel.io
Link: https://lkml.kernel.org/r/20230224044000.3084046-2-shr@devkernel.io
Signed-off-by: Stefan Roesch <shr@devkernel.io>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Bagas Sanjaya <bagasdotme@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 include/linux/ksm.h            | 14 ++++--
 include/linux/sched/coredump.h |  1 +
 include/uapi/linux/prctl.h     |  2 +
 kernel/sys.c                   | 27 ++++++++++
 mm/ksm.c                       | 90 +++++++++++++++++++++++-----------
 5 files changed, 101 insertions(+), 33 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 7e232ba59b86..d38a05a36298 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -18,20 +18,24 @@
 #ifdef CONFIG_KSM
 int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 		unsigned long end, int advice, unsigned long *vm_flags);
-int __ksm_enter(struct mm_struct *mm);
-void __ksm_exit(struct mm_struct *mm);
+int __ksm_enter(struct mm_struct *mm, int flag);
+void __ksm_exit(struct mm_struct *mm, int flag);
 
 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
+	if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
+		return __ksm_enter(mm, MMF_VM_MERGE_ANY);
 	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
-		return __ksm_enter(mm);
+		return __ksm_enter(mm, MMF_VM_MERGEABLE);
 	return 0;
 }
 
 static inline void ksm_exit(struct mm_struct *mm)
 {
-	if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
-		__ksm_exit(mm);
+	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+		__ksm_exit(mm, MMF_VM_MERGE_ANY);
+	else if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
+		__ksm_exit(mm, MMF_VM_MERGEABLE);
 }
 
 /*
diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index 0e17ae7fbfd3..0ee96ea7a0e9 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm)
 #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
 				 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK)
 
+#define MMF_VM_MERGE_ANY	29
 #endif /* _LINUX_SCHED_COREDUMP_H */
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 1312a137f7fb..759b3f53e53f 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -290,4 +290,6 @@ struct prctl_mm_map {
 #define PR_SET_VMA		0x53564d41
 # define PR_SET_VMA_ANON_NAME		0
 
+#define PR_SET_MEMORY_MERGE		67
+#define PR_GET_MEMORY_MERGE		68
 #endif /* _LINUX_PRCTL_H */
diff --git a/kernel/sys.c b/kernel/sys.c
index 495cd87d9bf4..edc439b1cae9 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -15,6 +15,7 @@
 #include <linux/highuid.h>
 #include <linux/fs.h>
 #include <linux/kmod.h>
+#include <linux/ksm.h>
 #include <linux/perf_event.h>
 #include <linux/resource.h>
 #include <linux/kernel.h>
@@ -2661,6 +2662,32 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 	case PR_SET_VMA:
 		error = prctl_set_vma(arg2, arg3, arg4, arg5);
 		break;
+#ifdef CONFIG_KSM
+	case PR_SET_MEMORY_MERGE:
+		if (!capable(CAP_SYS_RESOURCE))
+			return -EPERM;
+
+		if (arg2) {
+			if (mmap_write_lock_killable(me->mm))
+				return -EINTR;
+
+			if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags))
+				error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY);
+			mmap_write_unlock(me->mm);
+		} else {
+			__ksm_exit(me->mm, MMF_VM_MERGE_ANY);
+		}
+		break;
+	case PR_GET_MEMORY_MERGE:
+		if (!capable(CAP_SYS_RESOURCE))
+			return -EPERM;
+
+		if (arg2 || arg3 || arg4 || arg5)
+			return -EINVAL;
+
+		error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
+		break;
+#endif
 	default:
 		error = -EINVAL;
 		break;
diff --git a/mm/ksm.c b/mm/ksm.c
index d7bd28199f6c..b8e6e734dd69 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -534,16 +534,58 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
 	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
 }
 
+static bool vma_ksm_compatible(struct vm_area_struct *vma)
+{
+	/*
+	 * Be somewhat over-protective for now!
+	 */
+	if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
+			     VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
+			     VM_HUGETLB | VM_MIXEDMAP))
+		return false;		/* just ignore the advice */
+
+	if (vma_is_dax(vma))
+		return false;
+
+#ifdef VM_SAO
+	if (vma->vm_flags & VM_SAO)
+		return false;
+#endif
+#ifdef VM_SPARC_ADI
+	if (vma->vm_flags & VM_SPARC_ADI)
+		return false;
+#endif
+
+	return true;
+}
+
+static bool vma_ksm_mergeable(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_MERGEABLE)
+		return true;
+
+	if (test_bit(MMF_VM_MERGE_ANY, &vma->vm_mm->flags) &&
+		vma_ksm_compatible(vma))
+		return true;
+
+	return false;
+}
+
 static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
 		unsigned long addr)
 {
 	struct vm_area_struct *vma;
+
 	if (ksm_test_exit(mm))
 		return NULL;
+
 	vma = vma_lookup(mm, addr);
-	if (!vma || !(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
+	if (!vma || !vma->anon_vma)
 		return NULL;
-	return vma;
+	if (vma_ksm_mergeable(vma))
+		return vma;
+
+	return NULL;
 }
 
 static void break_cow(struct ksm_rmap_item *rmap_item)
@@ -1042,7 +1084,7 @@ static int unmerge_and_remove_all_rmap_items(void)
 			goto mm_exiting;
 
 		for_each_vma(vmi, vma) {
-			if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
+			if (!vma_ksm_mergeable(vma) || !vma->anon_vma)
 				continue;
 			err = unmerge_ksm_pages(vma,
 						vma->vm_start, vma->vm_end);
@@ -1065,6 +1107,7 @@ static int unmerge_and_remove_all_rmap_items(void)
 
 			mm_slot_free(mm_slot_cache, mm_slot);
 			clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+			clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
 			mmdrop(mm);
 		} else
 			spin_unlock(&ksm_mmlist_lock);
@@ -2409,8 +2452,9 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 		goto no_vmas;
 
 	for_each_vma(vmi, vma) {
-		if (!(vma->vm_flags & VM_MERGEABLE))
+		if (!vma_ksm_mergeable(vma))
 			continue;
+
 		if (ksm_scan.address < vma->vm_start)
 			ksm_scan.address = vma->vm_start;
 		if (!vma->anon_vma)
@@ -2495,6 +2539,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 
 		mm_slot_free(mm_slot_cache, mm_slot);
 		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
 		mmap_read_unlock(mm);
 		mmdrop(mm);
 	} else {
@@ -2579,28 +2624,12 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 
 	switch (advice) {
 	case MADV_MERGEABLE:
-		/*
-		 * Be somewhat over-protective for now!
-		 */
-		if (*vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
-				 VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
-				 VM_HUGETLB | VM_MIXEDMAP))
-			return 0;		/* just ignore the advice */
-
-		if (vma_is_dax(vma))
-			return 0;
-
-#ifdef VM_SAO
-		if (*vm_flags & VM_SAO)
-			return 0;
-#endif
-#ifdef VM_SPARC_ADI
-		if (*vm_flags & VM_SPARC_ADI)
+		if (!vma_ksm_compatible(vma))
 			return 0;
-#endif
 
-		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
-			err = __ksm_enter(mm);
+		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags) &&
+		    !test_bit(MMF_VM_MERGE_ANY, &mm->flags)) {
+			err = __ksm_enter(mm, MMF_VM_MERGEABLE);
 			if (err)
 				return err;
 		}
@@ -2626,7 +2655,7 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 }
 EXPORT_SYMBOL_GPL(ksm_madvise);
 
-int __ksm_enter(struct mm_struct *mm)
+int __ksm_enter(struct mm_struct *mm, int flag)
 {
 	struct ksm_mm_slot *mm_slot;
 	struct mm_slot *slot;
@@ -2659,7 +2688,7 @@ int __ksm_enter(struct mm_struct *mm)
 		list_add_tail(&slot->mm_node, &ksm_scan.mm_slot->slot.mm_node);
 	spin_unlock(&ksm_mmlist_lock);
 
-	set_bit(MMF_VM_MERGEABLE, &mm->flags);
+	set_bit(flag, &mm->flags);
 	mmgrab(mm);
 
 	if (needs_wakeup)
@@ -2668,12 +2697,17 @@ int __ksm_enter(struct mm_struct *mm)
 	return 0;
 }
 
-void __ksm_exit(struct mm_struct *mm)
+void __ksm_exit(struct mm_struct *mm, int flag)
 {
 	struct ksm_mm_slot *mm_slot;
 	struct mm_slot *slot;
 	int easy_to_free = 0;
 
+	if (!(current->flags & PF_EXITING) &&
+	      flag == MMF_VM_MERGE_ANY &&
+	      test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
+
 	/*
 	 * This process is exiting: if it's straightforward (as is the
 	 * case when ksmd was never running), free mm_slot immediately.
@@ -2700,7 +2734,7 @@ void __ksm_exit(struct mm_struct *mm)
 
 	if (easy_to_free) {
 		mm_slot_free(mm_slot_cache, mm_slot);
-		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+		clear_bit(flag, &mm->flags);
 		mmdrop(mm);
 	} else if (mm_slot) {
 		mmap_write_lock(mm);
-- 
2.34.1



* [PATCH v4 2/3] mm: add new KSM process and sysfs knobs
  2023-03-10 18:28 [PATCH v4 0/3] mm: process/cgroup ksm support Stefan Roesch
  2023-03-10 18:28 ` [PATCH v4 1/3] mm: add new api to enable ksm per process Stefan Roesch
@ 2023-03-10 18:28 ` Stefan Roesch
  2023-04-05 17:04   ` David Hildenbrand
  2023-03-10 18:28 ` [PATCH v4 3/3] selftests/mm: add new selftests for KSM Stefan Roesch
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 36+ messages in thread
From: Stefan Roesch @ 2023-03-10 18:28 UTC (permalink / raw)
  To: kernel-team
  Cc: shr, linux-mm, riel, mhocko, david, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya

This adds the general_profit KSM sysfs knob, as well as the process profit
metric and process merge type fields in ksm_stat.

1) split off pages_volatile function

   This splits off the pages_volatile function.  The next patch will
   use this function.

2) expose general_profit metric

   The documentation mentions a general profit metric; however, this
   metric is not calculated.  In addition, the formula depends on the size
   of internal structures, which makes it more difficult for an
   administrator to make the calculation.  The metric is added for a
   better user experience.

3) document general_profit sysfs knob

4) calculate ksm process profit metric

   The ksm documentation mentions the process profit metric and how to
   calculate it.  This adds the calculation of the metric.

5) add ksm_merge_type() function

   This adds the ksm_merge_type function.  The function returns the
   merge type for the process.  For madvise it returns "madvise", for
   prctl it returns "process" and otherwise it returns "none".

6) mm: expose ksm process profit metric and merge type in ksm_stat

   This exposes the ksm process profit metric in /proc/<pid>/ksm_stat.
   The name of the value is ksm_process_profit.  The documentation
   mentions the formula for the ksm process profit metric, however it
   does not calculate it.  In addition the formula depends on the size of
   internal structures, so it makes sense to expose it.

7) document new procfs ksm knobs

Link: https://lkml.kernel.org/r/20230224044000.3084046-3-shr@devkernel.io
Signed-off-by: Stefan Roesch <shr@devkernel.io>
Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 Documentation/ABI/testing/sysfs-kernel-mm-ksm |  8 ++++
 Documentation/admin-guide/mm/ksm.rst          |  8 +++-
 fs/proc/base.c                                |  5 ++
 include/linux/ksm.h                           |  5 ++
 mm/ksm.c                                      | 47 +++++++++++++++++--
 5 files changed, 69 insertions(+), 4 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-ksm b/Documentation/ABI/testing/sysfs-kernel-mm-ksm
index d244674a9480..7768e90f7a8f 100644
--- a/Documentation/ABI/testing/sysfs-kernel-mm-ksm
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-ksm
@@ -51,3 +51,11 @@ Description:	Control merging pages across different NUMA nodes.
 
 		When it is set to 0 only pages from the same node are merged,
 		otherwise pages from all nodes can be merged together (default).
+
+What:		/sys/kernel/mm/ksm/general_profit
+Date:		January 2023
+KernelVersion:  6.1
+Contact:	Linux memory management mailing list <linux-mm@kvack.org>
+Description:	Measure how effective KSM is.
+		general_profit: how effective is KSM. The formula for the
+		calculation is in Documentation/admin-guide/mm/ksm.rst.
diff --git a/Documentation/admin-guide/mm/ksm.rst b/Documentation/admin-guide/mm/ksm.rst
index 270560fef3b2..d2929964cd0f 100644
--- a/Documentation/admin-guide/mm/ksm.rst
+++ b/Documentation/admin-guide/mm/ksm.rst
@@ -157,6 +157,8 @@ stable_node_chains_prune_millisecs
 
 The effectiveness of KSM and MADV_MERGEABLE is shown in ``/sys/kernel/mm/ksm/``:
 
+general_profit
+        how effective is KSM. The calculation is explained below.
 pages_shared
         how many shared pages are being used
 pages_sharing
@@ -214,7 +216,8 @@ several times, which are unprofitable memory consumed.
 			  ksm_rmap_items * sizeof(rmap_item).
 
    where ksm_merging_pages is shown under the directory ``/proc/<pid>/``,
-   and ksm_rmap_items is shown in ``/proc/<pid>/ksm_stat``.
+   and ksm_rmap_items is shown in ``/proc/<pid>/ksm_stat``. The process profit
+   is also shown in ``/proc/<pid>/ksm_stat`` as ksm_process_profit.
 
 From the perspective of application, a high ratio of ``ksm_rmap_items`` to
 ``ksm_merging_pages`` means a bad madvise-applied policy, so developers or
@@ -225,6 +228,9 @@ so if the ``ksm_rmap_items/ksm_merging_pages`` ratio exceeds 64 on 64-bit CPU
 or exceeds 128 on 32-bit CPU, then the app's madvise policy should be dropped,
 because the ksm profit is approximately zero or negative.
 
+The ksm_merge_type in ``/proc/<pid>/ksm_stat`` shows the merge type of the
+process. Valid values are ``none``, ``madvise`` and ``process``.
+
 Monitoring KSM events
 =====================
 
diff --git a/fs/proc/base.c b/fs/proc/base.c
index 07463ad4a70a..c74450318e05 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -96,6 +96,7 @@
 #include <linux/time_namespace.h>
 #include <linux/resctrl.h>
 #include <linux/cn_proc.h>
+#include <linux/ksm.h>
 #include <trace/events/oom.h>
 #include "internal.h"
 #include "fd.h"
@@ -3199,6 +3200,7 @@ static int proc_pid_ksm_merging_pages(struct seq_file *m, struct pid_namespace *
 
 	return 0;
 }
+
 static int proc_pid_ksm_stat(struct seq_file *m, struct pid_namespace *ns,
 				struct pid *pid, struct task_struct *task)
 {
@@ -3208,6 +3210,9 @@ static int proc_pid_ksm_stat(struct seq_file *m, struct pid_namespace *ns,
 	if (mm) {
 		seq_printf(m, "ksm_rmap_items %lu\n", mm->ksm_rmap_items);
 		seq_printf(m, "zero_pages_sharing %lu\n", mm->ksm_zero_pages_sharing);
+		seq_printf(m, "ksm_merging_pages %lu\n", mm->ksm_merging_pages);
+		seq_printf(m, "ksm_merge_type %s\n", ksm_merge_type(mm));
+		seq_printf(m, "ksm_process_profit %ld\n", ksm_process_profit(mm));
 		mmput(mm);
 	}
 
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index d38a05a36298..d5f69f18ee5a 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -55,6 +55,11 @@ struct page *ksm_might_need_to_copy(struct page *page,
 void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc);
 void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
 
+#ifdef CONFIG_PROC_FS
+long ksm_process_profit(struct mm_struct *);
+const char *ksm_merge_type(struct mm_struct *mm);
+#endif /* CONFIG_PROC_FS */
+
 #else  /* !CONFIG_KSM */
 
 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
diff --git a/mm/ksm.c b/mm/ksm.c
index b8e6e734dd69..5fa7f239dbd8 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3009,6 +3009,25 @@ static void wait_while_offlining(void)
 }
 #endif /* CONFIG_MEMORY_HOTREMOVE */
 
+#ifdef CONFIG_PROC_FS
+long ksm_process_profit(struct mm_struct *mm)
+{
+	return (long)mm->ksm_merging_pages * PAGE_SIZE -
+		mm->ksm_rmap_items * sizeof(struct ksm_rmap_item);
+}
+
+/* Return merge type name as string. */
+const char *ksm_merge_type(struct mm_struct *mm)
+{
+	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+		return "process";
+	else if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
+		return "madvise";
+	else
+		return "none";
+}
+#endif /* CONFIG_PROC_FS */
+
 #ifdef CONFIG_SYSFS
 /*
  * This all compiles without CONFIG_SYSFS, but is a waste of space.
@@ -3256,8 +3275,7 @@ static ssize_t pages_unshared_show(struct kobject *kobj,
 }
 KSM_ATTR_RO(pages_unshared);
 
-static ssize_t pages_volatile_show(struct kobject *kobj,
-				   struct kobj_attribute *attr, char *buf)
+static long pages_volatile(void)
 {
 	long ksm_pages_volatile;
 
@@ -3269,7 +3287,14 @@ static ssize_t pages_volatile_show(struct kobject *kobj,
 	 */
 	if (ksm_pages_volatile < 0)
 		ksm_pages_volatile = 0;
-	return sysfs_emit(buf, "%ld\n", ksm_pages_volatile);
+
+	return ksm_pages_volatile;
+}
+
+static ssize_t pages_volatile_show(struct kobject *kobj,
+				   struct kobj_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "%ld\n", pages_volatile());
 }
 KSM_ATTR_RO(pages_volatile);
 
@@ -3280,6 +3305,21 @@ static ssize_t zero_pages_sharing_show(struct kobject *kobj,
 }
 KSM_ATTR_RO(zero_pages_sharing);
 
+static ssize_t general_profit_show(struct kobject *kobj,
+				   struct kobj_attribute *attr, char *buf)
+{
+	long general_profit;
+	long all_rmap_items;
+
+	all_rmap_items = ksm_max_page_sharing + ksm_pages_shared +
+				ksm_pages_unshared + pages_volatile();
+	general_profit = ksm_pages_sharing * PAGE_SIZE -
+				all_rmap_items * sizeof(struct ksm_rmap_item);
+
+	return sysfs_emit(buf, "%ld\n", general_profit);
+}
+KSM_ATTR_RO(general_profit);
+
 static ssize_t stable_node_dups_show(struct kobject *kobj,
 				     struct kobj_attribute *attr, char *buf)
 {
@@ -3345,6 +3385,7 @@ static struct attribute *ksm_attrs[] = {
 	&stable_node_dups_attr.attr,
 	&stable_node_chains_prune_millisecs_attr.attr,
 	&use_zero_pages_attr.attr,
+	&general_profit_attr.attr,
 	NULL,
 };
 
-- 
2.34.1



* [PATCH v4 3/3] selftests/mm: add new selftests for KSM
  2023-03-10 18:28 [PATCH v4 0/3] mm: process/cgroup ksm support Stefan Roesch
  2023-03-10 18:28 ` [PATCH v4 1/3] mm: add new api to enable ksm per process Stefan Roesch
  2023-03-10 18:28 ` [PATCH v4 2/3] mm: add new KSM process and sysfs knobs Stefan Roesch
@ 2023-03-10 18:28 ` Stefan Roesch
  2023-03-15 20:03 ` [PATCH v4 0/3] mm: process/cgroup ksm support David Hildenbrand
  2023-03-28 23:09 ` Andrew Morton
  4 siblings, 0 replies; 36+ messages in thread
From: Stefan Roesch @ 2023-03-10 18:28 UTC (permalink / raw)
  To: kernel-team
  Cc: shr, linux-mm, riel, mhocko, david, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya

This adds three new tests to the selftests for KSM.  These tests use the
new prctl APIs to enable and disable KSM.

1) add new prctl flags to prctl header file in tools dir

   This adds the new prctl flags to the include file prctl.h in the
   tools directory.  This makes sure they are available for testing.

2) add KSM prctl merge test

   This adds the -t option to the ksm_tests program.  The -t flag
   specifies whether madvise or prctl based ksm merging is used.

3) add KSM get merge type test

   This adds the -G flag to the ksm_tests program to query the KSM
   status with prctl after KSM has been enabled with prctl.

4) add KSM fork test

   Add fork test to verify that the MMF_VM_MERGE_ANY flag is inherited
   by the child process.

5) add two functions for debugging merge outcome

   This adds two functions to report the metrics in /proc/self/ksm_stat
   and /sys/kernel/mm/ksm.

The debugging can be enabled with the following command line:
make -C tools/testing/selftests TARGETS="mm" --keep-going \
        EXTRA_CFLAGS=-DDEBUG=1

[akpm@linux-foundation.org: fix Makefile]
Link: https://lkml.kernel.org/r/20230224044000.3084046-4-shr@devkernel.io
Signed-off-by: Stefan Roesch <shr@devkernel.io>
Cc: Bagas Sanjaya <bagasdotme@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 tools/include/uapi/linux/prctl.h       |   2 +
 tools/testing/selftests/mm/Makefile    |   2 +-
 tools/testing/selftests/mm/ksm_tests.c | 254 +++++++++++++++++++++----
 3 files changed, 218 insertions(+), 40 deletions(-)

diff --git a/tools/include/uapi/linux/prctl.h b/tools/include/uapi/linux/prctl.h
index a5e06dcbba13..e4c629c1f1b0 100644
--- a/tools/include/uapi/linux/prctl.h
+++ b/tools/include/uapi/linux/prctl.h
@@ -284,4 +284,6 @@ struct prctl_mm_map {
 #define PR_SET_VMA		0x53564d41
 # define PR_SET_VMA_ANON_NAME		0
 
+#define PR_SET_MEMORY_MERGE		67
+#define PR_GET_MEMORY_MERGE		68
 #endif /* _LINUX_PRCTL_H */
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index c31d952cff68..fbf5646b1072 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -29,7 +29,7 @@ MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/p
 # LDLIBS.
 MAKEFLAGS += --no-builtin-rules
 
-CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
+CFLAGS = -Wall -I $(top_srcdir) -I $(top_srcdir)/tools/include/uapi $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
 LDLIBS = -lrt -lpthread
 TEST_GEN_FILES = cow
 TEST_GEN_FILES += compaction_test
diff --git a/tools/testing/selftests/mm/ksm_tests.c b/tools/testing/selftests/mm/ksm_tests.c
index f9eb4d67e0dd..9fb21b982dc9 100644
--- a/tools/testing/selftests/mm/ksm_tests.c
+++ b/tools/testing/selftests/mm/ksm_tests.c
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
 
 #include <sys/mman.h>
+#include <sys/prctl.h>
+#include <sys/wait.h>
 #include <stdbool.h>
 #include <time.h>
 #include <string.h>
@@ -21,6 +23,7 @@
 #define KSM_PROT_STR_DEFAULT "rw"
 #define KSM_USE_ZERO_PAGES_DEFAULT false
 #define KSM_MERGE_ACROSS_NODES_DEFAULT true
+#define KSM_MERGE_TYPE_DEFAULT 0
 #define MB (1ul << 20)
 
 struct ksm_sysfs {
@@ -33,9 +36,17 @@ struct ksm_sysfs {
 	unsigned long use_zero_pages;
 };
 
+enum ksm_merge_type {
+	KSM_MERGE_MADVISE,
+	KSM_MERGE_PRCTL,
+	KSM_MERGE_LAST = KSM_MERGE_PRCTL
+};
+
 enum ksm_test_name {
 	CHECK_KSM_MERGE,
+	CHECK_KSM_MERGE_FORK,
 	CHECK_KSM_UNMERGE,
+	CHECK_KSM_GET_MERGE_TYPE,
 	CHECK_KSM_ZERO_PAGE_MERGE,
 	CHECK_KSM_NUMA_MERGE,
 	KSM_MERGE_TIME,
@@ -82,6 +93,55 @@ static int ksm_read_sysfs(const char *file_path, unsigned long *val)
 	return 0;
 }
 
+#ifdef DEBUG
+static void ksm_print_sysfs(void)
+{
+	unsigned long max_page_sharing, pages_sharing, pages_shared;
+	unsigned long full_scans, pages_unshared, pages_volatile;
+	unsigned long stable_node_chains, stable_node_dups;
+	long general_profit;
+
+	if (ksm_read_sysfs(KSM_FP("pages_shared"), &pages_shared) ||
+	    ksm_read_sysfs(KSM_FP("pages_sharing"), &pages_sharing) ||
+	    ksm_read_sysfs(KSM_FP("max_page_sharing"), &max_page_sharing) ||
+	    ksm_read_sysfs(KSM_FP("full_scans"), &full_scans) ||
+	    ksm_read_sysfs(KSM_FP("pages_unshared"), &pages_unshared) ||
+	    ksm_read_sysfs(KSM_FP("pages_volatile"), &pages_volatile) ||
+	    ksm_read_sysfs(KSM_FP("stable_node_chains"), &stable_node_chains) ||
+	    ksm_read_sysfs(KSM_FP("stable_node_dups"), &stable_node_dups) ||
+	    ksm_read_sysfs(KSM_FP("general_profit"), (unsigned long *)&general_profit))
+		return;
+
+	printf("pages_shared      : %lu\n", pages_shared);
+	printf("pages_sharing     : %lu\n", pages_sharing);
+	printf("max_page_sharing  : %lu\n", max_page_sharing);
+	printf("full_scans        : %lu\n", full_scans);
+	printf("pages_unshared    : %lu\n", pages_unshared);
+	printf("pages_volatile    : %lu\n", pages_volatile);
+	printf("stable_node_chains: %lu\n", stable_node_chains);
+	printf("stable_node_dups  : %lu\n", stable_node_dups);
+	printf("general_profit    : %ld\n", general_profit);
+}
+
+static void ksm_print_procfs(void)
+{
+	const char *file_name = "/proc/self/ksm_stat";
+	char buffer[512];
+	FILE *f = fopen(file_name, "r");
+
+	if (!f) {
+		fprintf(stderr, "f %s\n", file_name);
+		perror("fopen");
+		return;
+	}
+
+	while (fgets(buffer, sizeof(buffer), f))
+		printf("%s", buffer);
+
+	fclose(f);
+}
+#endif
+
 static int str_to_prot(char *prot_str)
 {
 	int prot = 0;
@@ -115,7 +175,9 @@ static void print_help(void)
 	       " -D evaluate unmerging time and speed when disabling KSM.\n"
 	       "    For this test, the size of duplicated memory area (in MiB)\n"
 	       "    must be provided using -s option\n"
-	       " -C evaluate the time required to break COW of merged pages.\n\n");
+	       " -C evaluate the time required to break COW of merged pages.\n"
+	       " -G query merge mode\n"
+	       " -F evaluate that the KSM process flag is inherited\n\n");
 
 	printf(" -a: specify the access protections of pages.\n"
 	       "     <prot> must be of the form [rwx].\n"
@@ -129,6 +191,10 @@ static void print_help(void)
 	printf(" -m: change merge_across_nodes tunable\n"
 	       "     Default: %d\n", KSM_MERGE_ACROSS_NODES_DEFAULT);
 	printf(" -s: the size of duplicated memory area (in MiB)\n");
+	printf(" -t: KSM merge type\n"
+	       "     Default: 0\n"
+	       "     0: madvise merging\n"
+	       "     1: prctl merging\n");
 
 	exit(0);
 }
@@ -176,12 +242,21 @@ static int ksm_do_scan(int scan_count, struct timespec start_time, int timeout)
 	return 0;
 }
 
-static int ksm_merge_pages(void *addr, size_t size, struct timespec start_time, int timeout)
+static int ksm_merge_pages(int merge_type, void *addr, size_t size,
+			struct timespec start_time, int timeout)
 {
-	if (madvise(addr, size, MADV_MERGEABLE)) {
-		perror("madvise");
-		return 1;
+	if (merge_type == KSM_MERGE_MADVISE) {
+		if (madvise(addr, size, MADV_MERGEABLE)) {
+			perror("madvise");
+			return 1;
+		}
+	} else if (merge_type == KSM_MERGE_PRCTL) {
+		if (prctl(PR_SET_MEMORY_MERGE, 1)) {
+			perror("prctl");
+			return 1;
+		}
 	}
+
 	if (ksm_write_sysfs(KSM_FP("run"), 1))
 		return 1;
 
@@ -211,6 +286,11 @@ static bool assert_ksm_pages_count(long dupl_page_count)
 	    ksm_read_sysfs(KSM_FP("max_page_sharing"), &max_page_sharing))
 		return false;
 
+#ifdef DEBUG
+	ksm_print_sysfs();
+	ksm_print_procfs();
+#endif
+
 	/*
 	 * Since there must be at least 2 pages for merging and 1 page can be
 	 * shared with the limited number of pages (max_page_sharing), sometimes
@@ -266,7 +346,8 @@ static int ksm_restore(struct ksm_sysfs *ksm_sysfs)
 	return 0;
 }
 
-static int check_ksm_merge(int mapping, int prot, long page_count, int timeout, size_t page_size)
+static int check_ksm_merge(int merge_type, int mapping, int prot,
+			long page_count, int timeout, size_t page_size)
 {
 	void *map_ptr;
 	struct timespec start_time;
@@ -281,13 +362,16 @@ static int check_ksm_merge(int mapping, int prot, long page_count, int timeout,
 	if (!map_ptr)
 		return KSFT_FAIL;
 
-	if (ksm_merge_pages(map_ptr, page_size * page_count, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, page_size * page_count, start_time, timeout))
 		goto err_out;
 
 	/* verify that the right number of pages are merged */
 	if (assert_ksm_pages_count(page_count)) {
 		printf("OK\n");
-		munmap(map_ptr, page_size * page_count);
+		if (merge_type == KSM_MERGE_MADVISE)
+			munmap(map_ptr, page_size * page_count);
+		else if (merge_type == KSM_MERGE_PRCTL)
+			prctl(PR_SET_MEMORY_MERGE, 0);
 		return KSFT_PASS;
 	}
 
@@ -297,7 +381,73 @@ static int check_ksm_merge(int mapping, int prot, long page_count, int timeout,
 	return KSFT_FAIL;
 }
 
-static int check_ksm_unmerge(int mapping, int prot, int timeout, size_t page_size)
+/* Verify that prctl ksm flag is inherited. */
+static int check_ksm_fork(void)
+{
+	int rc = KSFT_FAIL;
+	pid_t child_pid;
+
+	if (prctl(PR_SET_MEMORY_MERGE, 1)) {
+		perror("prctl");
+		return KSFT_FAIL;
+	}
+
+	child_pid = fork();
+	if (child_pid == 0) {
+		int is_on = prctl(PR_GET_MEMORY_MERGE, 0);
+
+		if (!is_on)
+			exit(KSFT_FAIL);
+
+		exit(KSFT_PASS);
+	}
+
+	if (child_pid < 0)
+		goto out;
+
+	if (waitpid(child_pid, &rc, 0) < 0)
+		rc = KSFT_FAIL;
+
+	if (prctl(PR_SET_MEMORY_MERGE, 0)) {
+		perror("prctl");
+		rc = KSFT_FAIL;
+	}
+
+out:
+	if (rc == KSFT_PASS)
+		printf("OK\n");
+	else
+		printf("Not OK\n");
+
+	return rc;
+}
+
+static int check_ksm_get_merge_type(void)
+{
+	if (prctl(PR_SET_MEMORY_MERGE, 1)) {
+		perror("prctl set");
+		return 1;
+	}
+
+	int is_on = prctl(PR_GET_MEMORY_MERGE, 0);
+
+	if (prctl(PR_SET_MEMORY_MERGE, 0)) {
+		perror("prctl set");
+		return 1;
+	}
+
+	int is_off = prctl(PR_GET_MEMORY_MERGE, 0);
+
+	if (is_on && is_off) {
+		printf("OK\n");
+		return KSFT_PASS;
+	}
+
+	printf("Not OK\n");
+	return KSFT_FAIL;
+}
+
+static int check_ksm_unmerge(int merge_type, int mapping, int prot, int timeout, size_t page_size)
 {
 	void *map_ptr;
 	struct timespec start_time;
@@ -313,7 +463,7 @@ static int check_ksm_unmerge(int mapping, int prot, int timeout, size_t page_siz
 	if (!map_ptr)
 		return KSFT_FAIL;
 
-	if (ksm_merge_pages(map_ptr, page_size * page_count, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, page_size * page_count, start_time, timeout))
 		goto err_out;
 
 	/* change 1 byte in each of the 2 pages -- KSM must automatically unmerge them */
@@ -337,8 +487,8 @@ static int check_ksm_unmerge(int mapping, int prot, int timeout, size_t page_siz
 	return KSFT_FAIL;
 }
 
-static int check_ksm_zero_page_merge(int mapping, int prot, long page_count, int timeout,
-				     bool use_zero_pages, size_t page_size)
+static int check_ksm_zero_page_merge(int merge_type, int mapping, int prot, long page_count,
+				int timeout, bool use_zero_pages, size_t page_size)
 {
 	void *map_ptr;
 	struct timespec start_time;
@@ -356,7 +506,7 @@ static int check_ksm_zero_page_merge(int mapping, int prot, long page_count, int
 	if (!map_ptr)
 		return KSFT_FAIL;
 
-	if (ksm_merge_pages(map_ptr, page_size * page_count, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, page_size * page_count, start_time, timeout))
 		goto err_out;
 
        /*
@@ -402,8 +552,8 @@ static int get_first_mem_node(void)
 	return get_next_mem_node(numa_max_node());
 }
 
-static int check_ksm_numa_merge(int mapping, int prot, int timeout, bool merge_across_nodes,
-				size_t page_size)
+static int check_ksm_numa_merge(int merge_type, int mapping, int prot, int timeout,
+				bool merge_across_nodes, size_t page_size)
 {
 	void *numa1_map_ptr, *numa2_map_ptr;
 	struct timespec start_time;
@@ -439,8 +589,8 @@ static int check_ksm_numa_merge(int mapping, int prot, int timeout, bool merge_a
 	memset(numa2_map_ptr, '*', page_size);
 
 	/* try to merge the pages */
-	if (ksm_merge_pages(numa1_map_ptr, page_size, start_time, timeout) ||
-	    ksm_merge_pages(numa2_map_ptr, page_size, start_time, timeout))
+	if (ksm_merge_pages(merge_type, numa1_map_ptr, page_size, start_time, timeout) ||
+	    ksm_merge_pages(merge_type, numa2_map_ptr, page_size, start_time, timeout))
 		goto err_out;
 
        /*
@@ -466,7 +616,8 @@ static int check_ksm_numa_merge(int mapping, int prot, int timeout, bool merge_a
 	return KSFT_FAIL;
 }
 
-static int ksm_merge_hugepages_time(int mapping, int prot, int timeout, size_t map_size)
+static int ksm_merge_hugepages_time(int merge_type, int mapping, int prot,
+				int timeout, size_t map_size)
 {
 	void *map_ptr, *map_ptr_orig;
 	struct timespec start_time, end_time;
@@ -508,7 +659,7 @@ static int ksm_merge_hugepages_time(int mapping, int prot, int timeout, size_t m
 		perror("clock_gettime");
 		goto err_out;
 	}
-	if (ksm_merge_pages(map_ptr, map_size, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, map_size, start_time, timeout))
 		goto err_out;
 	if (clock_gettime(CLOCK_MONOTONIC_RAW, &end_time)) {
 		perror("clock_gettime");
@@ -533,7 +684,7 @@ static int ksm_merge_hugepages_time(int mapping, int prot, int timeout, size_t m
 	return KSFT_FAIL;
 }
 
-static int ksm_merge_time(int mapping, int prot, int timeout, size_t map_size)
+static int ksm_merge_time(int merge_type, int mapping, int prot, int timeout, size_t map_size)
 {
 	void *map_ptr;
 	struct timespec start_time, end_time;
@@ -549,7 +700,7 @@ static int ksm_merge_time(int mapping, int prot, int timeout, size_t map_size)
 		perror("clock_gettime");
 		goto err_out;
 	}
-	if (ksm_merge_pages(map_ptr, map_size, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, map_size, start_time, timeout))
 		goto err_out;
 	if (clock_gettime(CLOCK_MONOTONIC_RAW, &end_time)) {
 		perror("clock_gettime");
@@ -574,7 +725,7 @@ static int ksm_merge_time(int mapping, int prot, int timeout, size_t map_size)
 	return KSFT_FAIL;
 }
 
-static int ksm_unmerge_time(int mapping, int prot, int timeout, size_t map_size)
+static int ksm_unmerge_time(int merge_type, int mapping, int prot, int timeout, size_t map_size)
 {
 	void *map_ptr;
 	struct timespec start_time, end_time;
@@ -589,7 +740,7 @@ static int ksm_unmerge_time(int mapping, int prot, int timeout, size_t map_size)
 		perror("clock_gettime");
 		goto err_out;
 	}
-	if (ksm_merge_pages(map_ptr, map_size, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, map_size, start_time, timeout))
 		goto err_out;
 
 	if (clock_gettime(CLOCK_MONOTONIC_RAW, &start_time)) {
@@ -621,7 +772,7 @@ static int ksm_unmerge_time(int mapping, int prot, int timeout, size_t map_size)
 	return KSFT_FAIL;
 }
 
-static int ksm_cow_time(int mapping, int prot, int timeout, size_t page_size)
+static int ksm_cow_time(int merge_type, int mapping, int prot, int timeout, size_t page_size)
 {
 	void *map_ptr;
 	struct timespec start_time, end_time;
@@ -660,7 +811,7 @@ static int ksm_cow_time(int mapping, int prot, int timeout, size_t page_size)
 		memset(map_ptr + page_size * i, '+', i / 2 + 1);
 		memset(map_ptr + page_size * (i + 1), '+', i / 2 + 1);
 	}
-	if (ksm_merge_pages(map_ptr, page_size * page_count, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, page_size * page_count, start_time, timeout))
 		goto err_out;
 
 	if (clock_gettime(CLOCK_MONOTONIC_RAW, &start_time)) {
@@ -697,6 +848,7 @@ int main(int argc, char *argv[])
 	int ret, opt;
 	int prot = 0;
 	int ksm_scan_limit_sec = KSM_SCAN_LIMIT_SEC_DEFAULT;
+	int merge_type = KSM_MERGE_TYPE_DEFAULT;
 	long page_count = KSM_PAGE_COUNT_DEFAULT;
 	size_t page_size = sysconf(_SC_PAGESIZE);
 	struct ksm_sysfs ksm_sysfs_old;
@@ -705,7 +857,7 @@ int main(int argc, char *argv[])
 	bool merge_across_nodes = KSM_MERGE_ACROSS_NODES_DEFAULT;
 	long size_MB = 0;
 
-	while ((opt = getopt(argc, argv, "ha:p:l:z:m:s:MUZNPCHD")) != -1) {
+	while ((opt = getopt(argc, argv, "ha:p:l:z:m:s:t:FGMUZNPCHD")) != -1) {
 		switch (opt) {
 		case 'a':
 			prot = str_to_prot(optarg);
@@ -745,6 +897,20 @@ int main(int argc, char *argv[])
 				printf("Size must be greater than 0\n");
 				return KSFT_FAIL;
 			}
+		case 't':
+			{
+				int tmp = atoi(optarg);
+
+				if (tmp < 0 || tmp > KSM_MERGE_LAST) {
+					printf("Invalid merge type\n");
+					return KSFT_FAIL;
+				}
+				merge_type = atoi(optarg);
+			}
+			break;
+		case 'F':
+			test_name = CHECK_KSM_MERGE_FORK;
+			break;
 		case 'M':
 			break;
 		case 'U':
@@ -753,6 +919,9 @@ int main(int argc, char *argv[])
 		case 'Z':
 			test_name = CHECK_KSM_ZERO_PAGE_MERGE;
 			break;
+		case 'G':
+			test_name = CHECK_KSM_GET_MERGE_TYPE;
+			break;
 		case 'N':
 			test_name = CHECK_KSM_NUMA_MERGE;
 			break;
@@ -795,35 +964,42 @@ int main(int argc, char *argv[])
 
 	switch (test_name) {
 	case CHECK_KSM_MERGE:
-		ret = check_ksm_merge(MAP_PRIVATE | MAP_ANONYMOUS, prot, page_count,
+		ret = check_ksm_merge(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot, page_count,
 				      ksm_scan_limit_sec, page_size);
 		break;
+	case CHECK_KSM_MERGE_FORK:
+		ret = check_ksm_fork();
+		break;
 	case CHECK_KSM_UNMERGE:
-		ret = check_ksm_unmerge(MAP_PRIVATE | MAP_ANONYMOUS, prot, ksm_scan_limit_sec,
-					page_size);
+		ret = check_ksm_unmerge(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+					ksm_scan_limit_sec, page_size);
+		break;
+	case CHECK_KSM_GET_MERGE_TYPE:
+		ret = check_ksm_get_merge_type();
 		break;
 	case CHECK_KSM_ZERO_PAGE_MERGE:
-		ret = check_ksm_zero_page_merge(MAP_PRIVATE | MAP_ANONYMOUS, prot, page_count,
-						ksm_scan_limit_sec, use_zero_pages, page_size);
+		ret = check_ksm_zero_page_merge(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+						page_count, ksm_scan_limit_sec, use_zero_pages,
+						page_size);
 		break;
 	case CHECK_KSM_NUMA_MERGE:
-		ret = check_ksm_numa_merge(MAP_PRIVATE | MAP_ANONYMOUS, prot, ksm_scan_limit_sec,
-					   merge_across_nodes, page_size);
+		ret = check_ksm_numa_merge(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+					ksm_scan_limit_sec, merge_across_nodes, page_size);
 		break;
 	case KSM_MERGE_TIME:
 		if (size_MB == 0) {
 			printf("Option '-s' is required.\n");
 			return KSFT_FAIL;
 		}
-		ret = ksm_merge_time(MAP_PRIVATE | MAP_ANONYMOUS, prot, ksm_scan_limit_sec,
-				     size_MB);
+		ret = ksm_merge_time(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+				ksm_scan_limit_sec, size_MB);
 		break;
 	case KSM_MERGE_TIME_HUGE_PAGES:
 		if (size_MB == 0) {
 			printf("Option '-s' is required.\n");
 			return KSFT_FAIL;
 		}
-		ret = ksm_merge_hugepages_time(MAP_PRIVATE | MAP_ANONYMOUS, prot,
+		ret = ksm_merge_hugepages_time(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
 				ksm_scan_limit_sec, size_MB);
 		break;
 	case KSM_UNMERGE_TIME:
@@ -831,12 +1007,12 @@ int main(int argc, char *argv[])
 			printf("Option '-s' is required.\n");
 			return KSFT_FAIL;
 		}
-		ret = ksm_unmerge_time(MAP_PRIVATE | MAP_ANONYMOUS, prot,
+		ret = ksm_unmerge_time(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
 				       ksm_scan_limit_sec, size_MB);
 		break;
 	case KSM_COW_TIME:
-		ret = ksm_cow_time(MAP_PRIVATE | MAP_ANONYMOUS, prot, ksm_scan_limit_sec,
-				   page_size);
+		ret = ksm_cow_time(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+				ksm_scan_limit_sec, page_size);
 		break;
 	}
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-03-10 18:28 ` [PATCH v4 1/3] mm: add new api to enable ksm per process Stefan Roesch
@ 2023-03-13 16:26   ` Johannes Weiner
  2023-04-03 10:37   ` David Hildenbrand
  1 sibling, 0 replies; 36+ messages in thread
From: Johannes Weiner @ 2023-03-13 16:26 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
	linux-doc, akpm, Bagas Sanjaya

On Fri, Mar 10, 2023 at 10:28:49AM -0800, Stefan Roesch wrote:
> @@ -534,16 +534,58 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
>  	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
>  }
>  
> +static bool vma_ksm_compatible(struct vm_area_struct *vma)
> +{
> +	/*
> +	 * Be somewhat over-protective for now!
> +	 */
> +	if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
> +			     VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
> +			     VM_HUGETLB | VM_MIXEDMAP))
> +		return false;		/* just ignore the advice */
> +
> +	if (vma_is_dax(vma))
> +		return false;
> +
> +#ifdef VM_SAO
> +	if (*vm_flags & VM_SAO)
> +		return false;
> +#endif
> +#ifdef VM_SPARC_ADI
> +	if (*vm_flags & VM_SPARC_ADI)
> +		return false;
> +#endif

These two also need to check vma->vm_flags, or it won't build on those
configs.
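
For clarity, the shape of the requested fix can be shown with a stand-alone
userspace mock. The struct and the flag values below are simplified stand-ins
for illustration, not the kernel's definitions; the point is only that the
arch-specific branches must read vma->vm_flags like the rest of the helper,
not a stale *vm_flags pointer:

```c
/* Userspace mock of the corrected helper; flag values are arbitrary
 * stand-ins for the kernel's VM_* bits. */
#include <stdbool.h>

#define VM_MERGEABLE	(1UL << 0)
#define VM_SHARED	(1UL << 1)
#define VM_PFNMAP	(1UL << 2)
#define VM_SAO		(1UL << 3)	/* arch-specific in the kernel */

struct vm_area_struct {
	unsigned long vm_flags;
};

static bool vma_ksm_compatible(struct vm_area_struct *vma)
{
	if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED | VM_PFNMAP))
		return false;

#ifdef VM_SAO
	if (vma->vm_flags & VM_SAO)	/* was: *vm_flags & VM_SAO */
		return false;
#endif
	return true;
}
```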

Otherwise, the patch looks good to me.


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-10 18:28 [PATCH v4 0/3] mm: process/cgroup ksm support Stefan Roesch
                   ` (2 preceding siblings ...)
  2023-03-10 18:28 ` [PATCH v4 3/3] selftests/mm: add new selftests for KSM Stefan Roesch
@ 2023-03-15 20:03 ` David Hildenbrand
  2023-03-15 20:23   ` Mike Kravetz
  2023-03-15 21:05   ` Johannes Weiner
  2023-03-28 23:09 ` Andrew Morton
  4 siblings, 2 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-03-15 20:03 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	Mike Kravetz, Rik van Riel

On 10.03.23 19:28, Stefan Roesch wrote:
> So far KSM can only be enabled by calling madvise for memory regions. To
> be able to use KSM for more workloads, KSM needs to have the ability to be
> enabled / disabled at the process / cgroup level.
> 
> Use case 1:
> The madvise call is not available in the programming language. An example for
> this are programs with forked workloads using a garbage collected language without
> pointers. In such a language madvise cannot be made available.
> 
> In addition the addresses of objects get moved around as they are garbage
> collected. KSM sharing needs to be enabled "from the outside" for these type of
> workloads.
> 
> Use case 2:
> The same interpreter can also be used for workloads where KSM brings no
> benefit or even has overhead. We'd like to be able to enable KSM on a workload
> by workload basis.
> 
> Use case 3:
> With the madvise call, sharing opportunities are only enabled for the current
> process: it is a workload-local decision. A considerable number of sharing
> opportunities may exist across multiple workloads or jobs. Only a higher-level
> entity like a job scheduler or container can know for certain if it's running
> one or more instances of a job. That job scheduler, however, doesn't have
> the necessary internal workload knowledge to make targeted madvise calls.
> 
> Security concerns:
> In previous discussions security concerns have been brought up. The problem is
> that an individual workload does not have knowledge about what else is
> running on a machine. Therefore it has to be very conservative about which
> memory areas can be shared. However, if the system is dedicated to running
> multiple jobs within the same security domain, it's the job scheduler that has
> the knowledge that sharing can be safely enabled and is even desirable.
> 
> Performance:
> Experiments with using UKSM have shown a capacity increase of around 20%.

Stefan, can you do me a favor and investigate which pages we end up 
deduplicating -- especially if it's mostly only the zeropage and if it's 
still that significant when disabling THP?


I'm currently investigating with some engineers on playing with enabling 
KSM on some selected processes (enabling it blindly on all VMAs of that 
process via madvise() ).

One thing we noticed is that such (~50 times) 20MiB processes end up 
saving ~2MiB of memory per process. That made me suspicious, because 
it's the THP size.

What I think happens is that we have a 2 MiB area (stack?) and only 
touch a single page. We get a whole 2 MiB THP populated. Most of that 
THP is zeroes.

KSM somehow ends up splitting that THP and deduplicates all resulting 
zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer 
"waste" 2 MiB. I think the processes with KSM have less (none) THP than 
the processes with THP enabled, but I only took a look at a sample of 
the process' smaps so far.

I recall that there was a proposal to split underutilized THP and free 
up the zeropages (IIRC Rik was involved).

I also recall that Mike reported memory waste due to THP.

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-15 20:03 ` [PATCH v4 0/3] mm: process/cgroup ksm support David Hildenbrand
@ 2023-03-15 20:23   ` Mike Kravetz
  2023-03-15 21:05   ` Johannes Weiner
  1 sibling, 0 replies; 36+ messages in thread
From: Mike Kravetz @ 2023-03-15 20:23 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, hannes

On 03/15/23 21:03, David Hildenbrand wrote:
> On 10.03.23 19:28, Stefan Roesch wrote:
> 
> Stefan, can you do me a favor and investigate which pages we end up
> deduplicating -- especially if it's mostly only the zeropage and if it's
> still that significant when disabling THP?
> 
> I'm currently investigating with some engineers on playing with enabling KSM
> on some selected processes (enabling it blindly on all VMAs of that process
> via madvise() ).
> 
> One thing we noticed is that such (~50 times) 20MiB processes end up saving
> ~2MiB of memory per process. That made me suspicious, because it's the THP
> size.
> 
> What I think happens is that we have a 2 MiB area (stack?) and only touch a
> single page. We get a whole 2 MiB THP populated. Most of that THP is zeroes.
> 
> KSM somehow ends up splitting that THP and deduplicates all resulting
> zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer
> "waste" 2 MiB. I think the processes with KSM have less (none) THP than the
> processes with THP enabled, but I only took a look at a sample of the
> process' smaps so far.
> 
> I recall that there was a proposal to split underutilized THP and free up
> the zeropages (IIRC Rik was involved).
> 
> I also recall that Mike reported memory waste due to THP.

Interesting!

2MB stacks were also involved in our case.  That stack would first get a
write fault allocating a THP.  The write fault would be followed by a
mprotect(PROT_NONE) of the 4K page at the bottom of the stack to create
a guard page.  The mprotect would result in the THP being split resulting
in 510 zero filled pages.  I suppose KSM could dedup those zero pages.
-- 
Mike Kravetz


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-15 20:03 ` [PATCH v4 0/3] mm: process/cgroup ksm support David Hildenbrand
  2023-03-15 20:23   ` Mike Kravetz
@ 2023-03-15 21:05   ` Johannes Weiner
  2023-03-15 21:19     ` Johannes Weiner
  1 sibling, 1 reply; 36+ messages in thread
From: Johannes Weiner @ 2023-03-15 21:05 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Mike Kravetz

On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote:
> On 10.03.23 19:28, Stefan Roesch wrote:
> > [...]
> > 
> > Performance:
> > Experiments with using UKSM have shown a capacity increase of around 20%.
> 
> Stefan, can you do me a favor and investigate which pages we end up
> deduplicating -- especially if it's mostly only the zeropage and if it's
> still that significant when disabling THP?
> 
> 
> I'm currently investigating with some engineers on playing with enabling KSM
> on some selected processes (enabling it blindly on all VMAs of that process
> via madvise() ).
> 
> One thing we noticed is that such (~50 times) 20MiB processes end up saving
> ~2MiB of memory per process. That made me suspicious, because it's the THP
> size.
> 
> What I think happens is that we have a 2 MiB area (stack?) and only touch a
> single page. We get a whole 2 MiB THP populated. Most of that THP is zeroes.
> 
> KSM somehow ends up splitting that THP and deduplicates all resulting
> zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer
> "waste" 2 MiB. I think the processes with KSM have less (none) THP than the
> processes with THP enabled, but I only took a look at a sample of the
> process' smaps so far.

THP and KSM is indeed an interesting problem. Better TLB hits with
THPs, but reduced chance of deduplicating memory - which may or may
not result in more IO that outweighs any THP benefits.

That said, the service in the experiment referenced above has swap
turned on and is under significant memory pressure. Unused splitpages
would get swapped out. The difference from KSM was from deduplicating
pages that were in active use, not internal THP fragmentation.


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-15 21:05   ` Johannes Weiner
@ 2023-03-15 21:19     ` Johannes Weiner
  2023-03-15 21:45       ` David Hildenbrand
  0 siblings, 1 reply; 36+ messages in thread
From: Johannes Weiner @ 2023-03-15 21:19 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Mike Kravetz

On Wed, Mar 15, 2023 at 05:05:47PM -0400, Johannes Weiner wrote:
> On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote:
> > On 10.03.23 19:28, Stefan Roesch wrote:
> > > [...]
> > > 
> > > Performance:
> > > Experiments with using UKSM have shown a capacity increase of around 20%.
> > 
> > Stefan, can you do me a favor and investigate which pages we end up
> > deduplicating -- especially if it's mostly only the zeropage and if it's
> > still that significant when disabling THP?
> > 
> > 
> > I'm currently experimenting with some engineers on enabling KSM on
> > selected processes (enabling it blindly on all VMAs of that process
> > via madvise()).
> >
> > One thing we noticed is that such 20 MiB processes (~50 of them) end up
> > saving ~2 MiB of memory per process. That made me suspicious, because
> > it's the THP size.
> >
> > What I think happens is that we have a 2 MiB area (stack?) and only touch a
> > single page. We get a whole 2 MiB THP populated. Most of that THP is zeroes.
> >
> > KSM somehow ends up splitting that THP and deduplicates all resulting
> > zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer
> > "waste" 2 MiB. I think the processes with KSM have fewer (if any) THPs than
> > the processes with THP enabled, but I have only looked at a sample of the
> > processes' smaps so far.
> 
> THP and KSM is indeed an interesting problem. Better TLB hits with
> THPs, but reduced chance of deduplicating memory - which may or may
> not result in more IO that outweighs any THP benefits.
> 
> That said, the service in the experiment referenced above has swap
> turned on and is under significant memory pressure. Unused splitpages
> would get swapped out. The difference from KSM was from deduplicating
> pages that were in active use, not internal THP fragmentation.

Brainfart, my apologies. It could have been the ksm-induced splits
themselves that allowed the unused subpages to get swapped out in the
first place.

But no, I double checked that workload just now. On a weekly average,
it has about 50 anon THPs and 12 million regular anon. THP is not a
factor in the reduction results.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-15 21:19     ` Johannes Weiner
@ 2023-03-15 21:45       ` David Hildenbrand
  2023-03-15 21:47         ` David Hildenbrand
  2023-03-30 16:19         ` Stefan Roesch
  0 siblings, 2 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-03-15 21:45 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Mike Kravetz

On 15.03.23 22:19, Johannes Weiner wrote:
> On Wed, Mar 15, 2023 at 05:05:47PM -0400, Johannes Weiner wrote:
>> On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote:
>>> On 10.03.23 19:28, Stefan Roesch wrote:
>>>> So far KSM can only be enabled by calling madvise for memory regions. To
>>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>>> enabled / disabled at the process / cgroup level.
>>>>
>>>> Use case 1:
>>>> The madvise call is not available in the programming language. An example is
>>>> programs with forked workloads using a garbage-collected language without
>>>> pointers. In such a language, madvise cannot be made available.
>>>>
>>>> In addition, the addresses of objects get moved around as they are garbage
>>>> collected. KSM sharing needs to be enabled "from the outside" for these types
>>>> of workloads.
>>>>
>>>> Use case 2:
>>>> The same interpreter can also be used for workloads where KSM brings no
>>>> benefit or even has overhead. We'd like to be able to enable KSM on a
>>>> workload-by-workload basis.
>>>>
>>>> Use case 3:
>>>> With the madvise call, sharing opportunities are only enabled for the current
>>>> process: it is a workload-local decision. A considerable number of sharing
>>>> opportunities may exist across multiple workloads or jobs. Only a higher-level
>>>> entity like a job scheduler or container can know for certain if it's running
>>>> one or more instances of a job. That job scheduler, however, doesn't have
>>>> the necessary internal workload knowledge to make targeted madvise calls.
>>>>
>>>> Security concerns:
>>>> In previous discussions security concerns have been brought up. The problem is
>>>> that an individual workload does not have knowledge about what else is
>>>> running on a machine. Therefore it has to be very conservative about which
>>>> memory areas can be shared. However, if the system is dedicated to running
>>>> multiple jobs within the same security domain, it's the job scheduler that has
>>>> the knowledge that sharing can be safely enabled and is even desirable.
>>>>
>>>> Performance:
>>>> Experiments with using UKSM have shown a capacity increase of around 20%.
>>>
>>> Stefan, can you do me a favor and investigate which pages we end up
>>> deduplicating -- especially if it's mostly only the zeropage and if it's
>>> still that significant when disabling THP?
>>>
>>>
>>> I'm currently experimenting with some engineers on enabling KSM on
>>> selected processes (enabling it blindly on all VMAs of that process
>>> via madvise()).
>>>
>>> One thing we noticed is that such 20 MiB processes (~50 of them) end up
>>> saving ~2 MiB of memory per process. That made me suspicious, because
>>> it's the THP size.
>>>
>>> What I think happens is that we have a 2 MiB area (stack?) and only touch a
>>> single page. We get a whole 2 MiB THP populated. Most of that THP is zeroes.
>>>
>>> KSM somehow ends up splitting that THP and deduplicates all resulting
>>> zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer
>>> "waste" 2 MiB. I think the processes with KSM have fewer (if any) THPs than
>>> the processes with THP enabled, but I have only looked at a sample of the
>>> processes' smaps so far.
>>
>> THP and KSM is indeed an interesting problem. Better TLB hits with
>> THPs, but reduced chance of deduplicating memory - which may or may
>> not result in more IO that outweighs any THP benefits.
>>
>> That said, the service in the experiment referenced above has swap
>> turned on and is under significant memory pressure. Unused splitpages
>> would get swapped out. The difference from KSM was from deduplicating
>> pages that were in active use, not internal THP fragmentation.
> 
> Brainfart, my apologies. It could have been the ksm-induced splits
> themselves that allowed the unused subpages to get swapped out in the
> first place.

Yes, it's not easy to spot that this is implemented. I just wrote a
simple reproducer to confirm: modifying a single subpage in a bunch of
THP ranges will populate a THP where most of the THP is zeroes.

As long as you keep accessing the single subpage via the PMD, I assume
the chances of it getting swapped out are lower, because the folio will
be referenced/dirty.

KSM will come around, split the THP that is filled mostly with zeroes,
and deduplicate the resulting zero pages.

[that's where a zeropage-only KSM could be very valuable eventually I think]

> 
> But no, I double checked that workload just now. On a weekly average,
> it has about 50 anon THPs and 12 million regular anon. THP is not a
> factor in the reduction results.

You mean with KSM enabled or with KSM disabled for the process? Not sure 
if your observation reliably implies that the scenario described 
couldn't have happened, but it's late in Germany already :)

In any case, it would be nice to get a feeling for how much variety
there is in these 20% of deduplicated pages. For example, whether it's
99% the same page or just a wild collection.

Maybe "cat /sys/kernel/mm/ksm/pages_shared" would be expressive already. 
But I seem to be getting "126" in my simple example where only zeropages 
should get deduplicated, so I have to take another look at the stats 
tomorrow ...

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-15 21:45       ` David Hildenbrand
@ 2023-03-15 21:47         ` David Hildenbrand
  2023-03-30 16:19         ` Stefan Roesch
  1 sibling, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-03-15 21:47 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Mike Kravetz

On 15.03.23 22:45, David Hildenbrand wrote:
> On 15.03.23 22:19, Johannes Weiner wrote:
>> On Wed, Mar 15, 2023 at 05:05:47PM -0400, Johannes Weiner wrote:
>>> On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote:
>>>> On 10.03.23 19:28, Stefan Roesch wrote:
>>>>> So far KSM can only be enabled by calling madvise for memory regions. To
>>>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>>>> enabled / disabled at the process / cgroup level.
>>>>>
>>>>> Use case 1:
>>>>> The madvise call is not available in the programming language. An example is
>>>>> programs with forked workloads using a garbage-collected language without
>>>>> pointers. In such a language, madvise cannot be made available.
>>>>>
>>>>> In addition, the addresses of objects get moved around as they are garbage
>>>>> collected. KSM sharing needs to be enabled "from the outside" for these types
>>>>> of workloads.
>>>>>
>>>>> Use case 2:
>>>>> The same interpreter can also be used for workloads where KSM brings no
>>>>> benefit or even has overhead. We'd like to be able to enable KSM on a
>>>>> workload-by-workload basis.
>>>>>
>>>>> Use case 3:
>>>>> With the madvise call, sharing opportunities are only enabled for the current
>>>>> process: it is a workload-local decision. A considerable number of sharing
>>>>> opportunities may exist across multiple workloads or jobs. Only a higher-level
>>>>> entity like a job scheduler or container can know for certain if it's running
>>>>> one or more instances of a job. That job scheduler, however, doesn't have
>>>>> the necessary internal workload knowledge to make targeted madvise calls.
>>>>>
>>>>> Security concerns:
>>>>> In previous discussions security concerns have been brought up. The problem is
>>>>> that an individual workload does not have knowledge about what else is
>>>>> running on a machine. Therefore it has to be very conservative about which
>>>>> memory areas can be shared. However, if the system is dedicated to running
>>>>> multiple jobs within the same security domain, it's the job scheduler that has
>>>>> the knowledge that sharing can be safely enabled and is even desirable.
>>>>>
>>>>> Performance:
>>>>> Experiments with using UKSM have shown a capacity increase of around 20%.
>>>>
>>>> Stefan, can you do me a favor and investigate which pages we end up
>>>> deduplicating -- especially if it's mostly only the zeropage and if it's
>>>> still that significant when disabling THP?
>>>>
>>>>
>>>> I'm currently experimenting with some engineers on enabling KSM on
>>>> selected processes (enabling it blindly on all VMAs of that process
>>>> via madvise()).
>>>>
>>>> One thing we noticed is that such 20 MiB processes (~50 of them) end up
>>>> saving ~2 MiB of memory per process. That made me suspicious, because
>>>> it's the THP size.
>>>>
>>>> What I think happens is that we have a 2 MiB area (stack?) and only touch a
>>>> single page. We get a whole 2 MiB THP populated. Most of that THP is zeroes.
>>>>
>>>> KSM somehow ends up splitting that THP and deduplicates all resulting
>>>> zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer
>>>> "waste" 2 MiB. I think the processes with KSM have fewer (if any) THPs than
>>>> the processes with THP enabled, but I have only looked at a sample of the
>>>> processes' smaps so far.
>>>
>>> THP and KSM is indeed an interesting problem. Better TLB hits with
>>> THPs, but reduced chance of deduplicating memory - which may or may
>>> not result in more IO that outweighs any THP benefits.
>>>
>>> That said, the service in the experiment referenced above has swap
>>> turned on and is under significant memory pressure. Unused splitpages
>>> would get swapped out. The difference from KSM was from deduplicating
>>> pages that were in active use, not internal THP fragmentation.
>>
>> Brainfart, my apologies. It could have been the ksm-induced splits
>> themselves that allowed the unused subpages to get swapped out in the
>> first place.
> 
> Yes, it's not easy to spot that this is implemented. I just wrote a
> simple reproducer to confirm: modifying a single subpage in a bunch of
> THP ranges will populate a THP where most of the THP is zeroes.
>
> As long as you keep accessing the single subpage via the PMD, I assume
> the chances of it getting swapped out are lower, because the folio will
> be referenced/dirty.
>
> KSM will come around, split the THP that is filled mostly with zeroes,
> and deduplicate the resulting zero pages.
> 
> [that's where a zeropage-only KSM could be very valuable eventually I think]
> 
>>
>> But no, I double checked that workload just now. On a weekly average,
>> it has about 50 anon THPs and 12 million regular anon. THP is not a
>> factor in the reduction results.
> 
> You mean with KSM enabled or with KSM disabled for the process? Not sure
> if your observation reliably implies that the scenario described
> couldn't have happened, but it's late in Germany already :)
> 
> In any case, it would be nice to get a feeling for how much variety
> there is in these 20% of deduplicated pages. For example, whether it's
> 99% the same page or just a wild collection.
> 
> Maybe "cat /sys/kernel/mm/ksm/pages_shared" would be expressive already.
> But I seem to be getting "126" in my simple example where only zeropages
> should get deduplicated, so I have to take another look at the stats
> tomorrow ...
> 

On second thought, I guess it's because of "max_page_sharing". So one 
has to set that really high to make pages_shared more expressive.

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-10 18:28 [PATCH v4 0/3] mm: process/cgroup ksm support Stefan Roesch
                   ` (3 preceding siblings ...)
  2023-03-15 20:03 ` [PATCH v4 0/3] mm: process/cgroup ksm support David Hildenbrand
@ 2023-03-28 23:09 ` Andrew Morton
  2023-03-30  4:55   ` David Hildenbrand
  4 siblings, 1 reply; 36+ messages in thread
From: Andrew Morton @ 2023-03-28 23:09 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
	linux-doc, hannes

On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:

> So far KSM can only be enabled by calling madvise for memory regions. To
> be able to use KSM for more workloads, KSM needs to have the ability to be
> enabled / disabled at the process / cgroup level.

Review on this series has been a bit thin.  Are we OK with moving this
into mm-stable for the next merge window?


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-28 23:09 ` Andrew Morton
@ 2023-03-30  4:55   ` David Hildenbrand
  2023-03-30 14:26     ` Johannes Weiner
  2023-03-30 20:18     ` Andrew Morton
  0 siblings, 2 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-03-30  4:55 UTC (permalink / raw)
  To: Andrew Morton, Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	hannes, Hugh Dickins

On 29.03.23 01:09, Andrew Morton wrote:
> On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:
> 
>> So far KSM can only be enabled by calling madvise for memory regions. To
>> be able to use KSM for more workloads, KSM needs to have the ability to be
>> enabled / disabled at the process / cgroup level.
> 
> Review on this series has been a bit thin.  Are we OK with moving this
> into mm-stable for the next merge window?

I still want to review (traveling this week), but I also don't want to 
block this forever.

I think I didn't get a reply from Stefan to my question [1] yet (only 
some comments from Johannes). I would still be interested in the 
variance of pages we end up de-duplicating for processes.

The 20% statement in the cover letter is rather useless and possibly 
misleading if no details about the actual workload are shared.

Maybe Hugh has some comments as well (most probably he's also busy).

[1] 
https://lore.kernel.org/all/273a2f82-928f-5ad1-0988-1a886d169e83@redhat.com/

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-30  4:55   ` David Hildenbrand
@ 2023-03-30 14:26     ` Johannes Weiner
  2023-03-30 14:40       ` David Hildenbrand
  2023-03-30 20:18     ` Andrew Morton
  1 sibling, 1 reply; 36+ messages in thread
From: Johannes Weiner @ 2023-03-30 14:26 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Stefan Roesch, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins

On Thu, Mar 30, 2023 at 06:55:31AM +0200, David Hildenbrand wrote:
> On 29.03.23 01:09, Andrew Morton wrote:
> > On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:
> > 
> > > So far KSM can only be enabled by calling madvise for memory regions. To
> > > be able to use KSM for more workloads, KSM needs to have the ability to be
> > > enabled / disabled at the process / cgroup level.
> > 
> > Review on this series has been a bit thin.  Are we OK with moving this
> > into mm-stable for the next merge window?
> 
> I still want to review (traveling this week), but I also don't want to block
> this forever.
> 
> I think I didn't get a reply from Stefan to my question [1] yet (only some
> comments from Johannes). I would still be interested in the variance of
> pages we end up de-duplicating for processes.
> 
> The 20% statement in the cover letter is rather useless and possibly
> misleading if no details about the actual workload are shared.

The workload is Instagram. It forks off Django runtimes on-demand
until it saturates whatever hardware it's running on. This benefits
from merging common heap/stack state between instances. Since that
runtime is quite large, the 20% number is not surprising, and matches
our expectations of duplicative memory between instances.

Obviously we could spend months analysing which exact allocations are
identical, and then more months or years reworking the architecture to
deduplicate them by hand and in userspace. But this isn't practical,
and KSM is specifically for cases where this isn't practical.

Based on your request in the previous thread, we investigated whether
the boost was coming from the unintended side effects of KSM splitting
THPs. This wasn't the case.

If you have other theories on how the results could be bogus, we'd be
happy to investigate those as well. But you have to let us know what
you're looking for.

Beyond that, I don't think we need to prove from scratch that KSM can
be a worthwhile optimization. It's been established that it can
be. This series is about enabling it in scenarios where madvise()
isn't practical, that's it, and it's yielding the expected results.


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-30 14:26     ` Johannes Weiner
@ 2023-03-30 14:40       ` David Hildenbrand
  2023-03-30 16:41         ` Stefan Roesch
  0 siblings, 1 reply; 36+ messages in thread
From: David Hildenbrand @ 2023-03-30 14:40 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Morton, Stefan Roesch, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins

On 30.03.23 16:26, Johannes Weiner wrote:
> On Thu, Mar 30, 2023 at 06:55:31AM +0200, David Hildenbrand wrote:
>> On 29.03.23 01:09, Andrew Morton wrote:
>>> On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:
>>>
>>>> So far KSM can only be enabled by calling madvise for memory regions. To
>>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>>> enabled / disabled at the process / cgroup level.
>>>
>>> Review on this series has been a bit thin.  Are we OK with moving this
>>> into mm-stable for the next merge window?
>>
>> I still want to review (traveling this week), but I also don't want to block
>> this forever.
>>
>> I think I didn't get a reply from Stefan to my question [1] yet (only some
>> comments from Johannes). I would still be interested in the variance of
>> pages we end up de-duplicating for processes.
>>
>> The 20% statement in the cover letter is rather useless and possibly
>> misleading if no details about the actual workload are shared.
> 
> The workload is Instagram. It forks off Django runtimes on-demand
> until it saturates whatever hardware it's running on. This benefits
> from merging common heap/stack state between instances. Since that
> runtime is quite large, the 20% number is not surprising, and matches
> our expectations of duplicative memory between instances.

Thanks for this explanation. It's valuable to get at least a feeling for 
the workload because it doesn't seem to apply to other workloads at all.

> 
> Obviously we could spend months analysing which exact allocations are
> identical, and then more months or years reworking the architecture to
> deduplicate them by hand and in userspace. But this isn't practical,
> and KSM is specifically for cases where this isn't practical.
> 
> Based on your request in the previous thread, we investigated whether
> the boost was coming from the unintended side effects of KSM splitting
> THPs. This wasn't the case.
> 
> If you have other theories on how the results could be bogus, we'd be
> happy to investigate those as well. But you have to let us know what
> you're looking for.
> 

Maybe I'm bad at making such requests but

"Stefan, can you do me a favor and investigate which pages we end up
deduplicating -- especially if it's mostly only the zeropage and if it's
still that significant when disabling THP?"

"In any case, it would be nice to get a feeling for how much variety in
these 20% of deduplicated pages are. "

is pretty clear to me. And shouldn't take months.

> Beyond that, I don't think we need to prove from scratch that KSM can

I never expected a proof. I was merely trying to understand if it's 
really KSM that helps here. Also with the intention to figure out if KSM 
is really the right tool to use here or if it simply "helps by luck" as 
with the shared zeropage. That end result could have been valuable to 
your use case as well, because KSM overhead is real.

> be a worthwhile optimization. It's been established that it can
> be. This series is about enabling it in scenarios where madvise()
> isn't practical, that's it, and it's yielding the expected results.

I'm sorry to say, but you sound a bit aggressive and annoyed. I also 
have no idea why Stefan isn't replying to me but always you.

Am I asking the wrong questions? Do you want me to stop looking at KSM code?

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-15 21:45       ` David Hildenbrand
  2023-03-15 21:47         ` David Hildenbrand
@ 2023-03-30 16:19         ` Stefan Roesch
  1 sibling, 0 replies; 36+ messages in thread
From: Stefan Roesch @ 2023-03-30 16:19 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Johannes Weiner, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Mike Kravetz


David Hildenbrand <david@redhat.com> writes:

> On 15.03.23 22:19, Johannes Weiner wrote:
>> On Wed, Mar 15, 2023 at 05:05:47PM -0400, Johannes Weiner wrote:
>>> On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote:
>>>> On 10.03.23 19:28, Stefan Roesch wrote:
>>>>> So far KSM can only be enabled by calling madvise for memory regions. To
>>>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>>>> enabled / disabled at the process / cgroup level.
>>>>>
>>>>> Use case 1:
>>>>> The madvise call is not available in the programming language. An example is
>>>>> programs with forked workloads using a garbage-collected language without
>>>>> pointers. In such a language, madvise cannot be made available.
>>>>>
>>>>> In addition, the addresses of objects get moved around as they are garbage
>>>>> collected. KSM sharing needs to be enabled "from the outside" for these types
>>>>> of workloads.
>>>>>
>>>>> Use case 2:
>>>>> The same interpreter can also be used for workloads where KSM brings no
>>>>> benefit or even has overhead. We'd like to be able to enable KSM on a
>>>>> workload-by-workload basis.
>>>>>
>>>>> Use case 3:
>>>>> With the madvise call, sharing opportunities are only enabled for the current
>>>>> process: it is a workload-local decision. A considerable number of sharing
>>>>> opportunities may exist across multiple workloads or jobs. Only a higher-level
>>>>> entity like a job scheduler or container can know for certain if it's running
>>>>> one or more instances of a job. That job scheduler, however, doesn't have
>>>>> the necessary internal workload knowledge to make targeted madvise calls.
>>>>>
>>>>> Security concerns:
>>>>> In previous discussions security concerns have been brought up. The problem is
>>>>> that an individual workload does not have knowledge about what else is
>>>>> running on a machine. Therefore it has to be very conservative about which
>>>>> memory areas can be shared. However, if the system is dedicated to running
>>>>> multiple jobs within the same security domain, it's the job scheduler that has
>>>>> the knowledge that sharing can be safely enabled and is even desirable.
>>>>>
>>>>> Performance:
>>>>> Experiments with using UKSM have shown a capacity increase of around 20%.
>>>>
>>>> Stefan, can you do me a favor and investigate which pages we end up
>>>> deduplicating -- especially if it's mostly only the zeropage and if it's
>>>> still that significant when disabling THP?
>>>>
>>>>
>>>> I'm currently experimenting with some engineers on enabling KSM on
>>>> selected processes (enabling it blindly on all VMAs of that process
>>>> via madvise()).
>>>>
>>>> One thing we noticed is that such 20 MiB processes (~50 of them) end up
>>>> saving ~2 MiB of memory per process. That made me suspicious, because
>>>> it's the THP size.
>>>>
>>>> What I think happens is that we have a 2 MiB area (stack?) and only touch a
>>>> single page. We get a whole 2 MiB THP populated. Most of that THP is zeroes.
>>>>
>>>> KSM somehow ends up splitting that THP and deduplicates all resulting
>>>> zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer
>>>> "waste" 2 MiB. I think the processes with KSM have fewer (if any) THPs than
>>>> the processes with THP enabled, but I have only looked at a sample of the
>>>> processes' smaps so far.
>>>
>>> THP and KSM is indeed an interesting problem. Better TLB hits with
>>> THPs, but reduced chance of deduplicating memory - which may or may
>>> not result in more IO that outweighs any THP benefits.
>>>
>>> That said, the service in the experiment referenced above has swap
>>> turned on and is under significant memory pressure. Unused splitpages
>>> would get swapped out. The difference from KSM was from deduplicating
>>> pages that were in active use, not internal THP fragmentation.
>> Brainfart, my apologies. It could have been the ksm-induced splits
>> themselves that allowed the unused subpages to get swapped out in the
>> first place.
>
> Yes, it's not easy to spot that this is implemented. I just wrote a simple
> reproducer to confirm: modifying a single subpage in a bunch of THP ranges
> will populate a THP where most of the THP is zeroes.
>
> As long as you keep accessing the single subpage via the PMD, I assume the
> chances of it getting swapped out are lower, because the folio will be
> referenced/dirty.
>
> KSM will come around, split the THP that is filled mostly with zeroes, and
> deduplicate the resulting zero pages.
>
> [that's where a zeropage-only KSM could be very valuable eventually I think]
>

We can certainly run an experiment with THP turned off to verify whether
we observe similar savings.

>> But no, I double checked that workload just now. On a weekly average,
>> it has about 50 anon THPs and 12 million regular anon. THP is not a
>> factor in the reduction results.
>
> You mean with KSM enabled or with KSM disabled for the process? Not sure if your
> observation reliably implies that the scenario described couldn't have happened,
> but it's late in Germany already :)
>
> In any case, it would be nice to get a feeling for how much variety there is
> in these 20% of deduplicated pages. For example, whether it's 99% the same
> page or just a wild collection.
>
> Maybe "cat /sys/kernel/mm/ksm/pages_shared" would be expressive already. But I
> seem to be getting "126" in my simple example where only zeropages should get
> deduplicated, so I have to take another look at the stats tomorrow ...

/sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
Instagram workload. The workload consists of 36 processes plus a few
sidecar processes.

Also to give some idea for individual VMA's

7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM: 73160 KB)


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-30 14:40       ` David Hildenbrand
@ 2023-03-30 16:41         ` Stefan Roesch
  2023-04-03  9:48           ` David Hildenbrand
  0 siblings, 1 reply; 36+ messages in thread
From: Stefan Roesch @ 2023-03-30 16:41 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Johannes Weiner, Andrew Morton, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins


My mistake, I first replied to an older email.

David Hildenbrand <david@redhat.com> writes:

> On 30.03.23 16:26, Johannes Weiner wrote:
>> On Thu, Mar 30, 2023 at 06:55:31AM +0200, David Hildenbrand wrote:
>>> On 29.03.23 01:09, Andrew Morton wrote:
>>>> On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:
>>>>
>>>>> So far KSM can only be enabled by calling madvise for memory regions. To
>>>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>>>> enabled / disabled at the process / cgroup level.
>>>>
>>>> Review on this series has been a bit thin.  Are we OK with moving this
>>>> into mm-stable for the next merge window?
>>>
>>> I still want to review (traveling this week), but I also don't want to block
>>> this forever.
>>>
>>> I think I didn't get a reply from Stefan to my question [1] yet (only some
>>> comments from Johannes). I would still be interested in the variance of
>>> pages we end up de-duplicating for processes.
>>>
>>> The 20% statement in the cover letter is rather useless and possibly
>>> misleading if no details about the actual workload are shared.
>> The workload is Instagram. It forks off Django runtimes on-demand
>> until it saturates whatever hardware it's running on. This benefits
>> from merging common heap/stack state between instances. Since that
>> runtime is quite large, the 20% number is not surprising, and matches
>> our expectations of duplicative memory between instances.
>
> Thanks for this explanation. It's valuable to get at least a feeling for the
> workload because it doesn't seem to apply to other workloads at all.
>
>> Obviously we could spend months analysing which exact allocations are
>> identical, and then more months or years reworking the architecture to
>> deduplicate them by hand and in userspace. But this isn't practical,
>> and KSM is specifically for cases where this isn't practical.
>> Based on your request in the previous thread, we investigated whether
>> the boost was coming from the unintended side effects of KSM splitting
>> THPs. This wasn't the case.
>> If you have other theories on how the results could be bogus, we'd be
>> happy to investigate those as well. But you have to let us know what
>> you're looking for.
>>
>
> Maybe I'm bad at making such requests but
>
> "Stefan, can you do me a favor and investigate which pages we end up
> deduplicating -- especially if it's mostly only the zeropage and if it's
> still that significant when disabling THP?"
>
> "In any case, it would be nice to get a feeling for how much variety in
> these 20% of deduplicated pages are. "
>
> is pretty clear to me. And shouldn't take months.
>

/sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
Instagram workload. The workload consists of 36 processes plus a few
sidecar processes.

Each of these individual processes has around 500MB in KSM pages.

Also to give some idea for individual VMA's

7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM:
73160 KB)

>> Beyond that, I don't think we need to prove from scratch that KSM can
>
> I never expected a proof. I was merely trying to understand if it's really KSM
> that helps here. Also with the intention to figure out if KSM is really the
> right tool to use here or if it simply "helps by luck" as with the shared
> zeropage. That end result could have been valuable to your use case as well,
> because KSM overhead is real.
>
>> be a worthwhile optimization. It's been established that it can
>> be. This series is about enabling it in scenarios where madvise()
>> isn't practical, that's it, and it's yielding the expected results.
>
> I'm sorry to say, but you sound a bit aggressive and annoyed. I also have no
> idea why Stefan isn't replying to me but always you.
>
> Am I asking the wrong questions? Do you want me to stop looking at KSM code?
>

Your review is valuable; Johannes was just quicker than me.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-30  4:55   ` David Hildenbrand
  2023-03-30 14:26     ` Johannes Weiner
@ 2023-03-30 20:18     ` Andrew Morton
  1 sibling, 0 replies; 36+ messages in thread
From: Andrew Morton @ 2023-03-30 20:18 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, hannes, Hugh Dickins

On Thu, 30 Mar 2023 06:55:31 +0200 David Hildenbrand <david@redhat.com> wrote:

> On 29.03.23 01:09, Andrew Morton wrote:
> > On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:
> > 
> >> So far KSM can only be enabled by calling madvise for memory regions. To
> >> be able to use KSM for more workloads, KSM needs to have the ability to be
> >> enabled / disabled at the process / cgroup level.
> > 
> > Review on this series has been a bit thin.  Are we OK with moving this
> > into mm-stable for the next merge window?
> 
> I still want to review (traveling this week), but I also don't want to 
> block this forever.

No hurry, we're only at -rc4.  Holding this series in mm-unstable for
another 2-3 weeks isn't a problem.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-30 16:41         ` Stefan Roesch
@ 2023-04-03  9:48           ` David Hildenbrand
  2023-04-03 16:34             ` Stefan Roesch
  0 siblings, 1 reply; 36+ messages in thread
From: David Hildenbrand @ 2023-04-03  9:48 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: Johannes Weiner, Andrew Morton, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins

>>> Obviously we could spend months analysing which exact allocations are
>>> identical, and then more months or years reworking the architecture to
>>> deduplicate them by hand and in userspace. But this isn't practical,
>>> and KSM is specifically for cases where this isn't practical.
>>> Based on your request in the previous thread, we investigated whether
>>> the boost was coming from the unintended side effects of KSM splitting
>>> THPs. This wasn't the case.
>>> If you have other theories on how the results could be bogus, we'd be
>>> happy to investigate those as well. But you have to let us know what
>>> you're looking for.
>>>
>>
>> Maybe I'm bad at making such requests but
>>
>> "Stefan, can you do me a favor and investigate which pages we end up
>> deduplicating -- especially if it's mostly only the zeropage and if it's
>> still that significant when disabling THP?"
>>
>> "In any case, it would be nice to get a feeling for how much variety
>> there is in these 20% of deduplicated pages."
>>
>> is pretty clear to me. And shouldn't take months.
>>

Just to clarify: the details I requested are not meant to decide whether 
to reject the patch set (I understand that it can be beneficial to 
have); I primarily want to understand if we're really dealing with a 
workload where KSM is able to deduplicate pages that are non-trivial, to 
maybe figure out if there are other workloads that could similarly 
benefit -- or if we could optimize KSM for these specific cases or avoid 
the memory deduplication altogether.

In contrast to e.g.:

1) THP resulted in many zeropages we end up deduplicating again. The THP
    placement was unfortunate.

2) Unoptimized memory allocators that leave many identical pages mapped
    after freeing up memory (e.g., zeroed pages, pages all filled with
    poison values) instead of e.g., using MADV_DONTNEED to free up that
    memory.


> 
> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
> Instagram workload. The workload consists of 36 processes plus a few
> sidecar processes.

Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in 
that environment?

What would be interesting is pages_shared after max_page_sharing was set 
to a very high number such that pages_shared does not include 
duplicates. Then pages_shared actually expresses how many different 
pages we deduplicate. No need to run without THP in that case.

Similarly, enabling "use_zero_pages" could highlight if your workload 
ends up deduplicating a lot of zeropages. But maxing out 
max_page_sharing would be sufficient to understand what's happening.
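
Concretely, the experiment could look like this (a sketch; these are the
stock KSM sysfs knobs, and 1000000 is just an arbitrary value far above any
realistic chain length):

```shell
# Measure pages_shared with the dedup-chain limit effectively removed.
# KSM_DIR is parameterized so the steps can be dry-run against a scratch
# copy; on a live system it is /sys/kernel/mm/ksm and the writes need root.
KSM_DIR="${KSM_DIR:-/sys/kernel/mm/ksm}"

if [ -w "$KSM_DIR/max_page_sharing" ]; then
    old_limit=$(cat "$KSM_DIR/max_page_sharing")
    # The kernel rejects this write with EBUSY while pages are already
    # shared, so raise the limit before starting the workload (or after
    # "echo 2 > $KSM_DIR/run" has unmerged everything).
    echo 1000000 > "$KSM_DIR/max_page_sharing"
    # ... run the workload and let ksmd scan; pages_shared then counts
    # distinct deduplicated pages rather than being capped per chain:
    echo "distinct deduplicated pages: $(cat "$KSM_DIR/pages_shared")"
    echo "$old_limit" > "$KSM_DIR/max_page_sharing"
else
    echo "no write access to $KSM_DIR (need root, or set KSM_DIR to a scratch copy)"
fi
```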


> 
> Each of these individual processes has around 500MB in KSM pages.
> 

That's really a lot, thanks.

> Also, to give some idea for individual VMAs:
> 
> 7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM:
> 73160 KB)
> 

I'll have a look at the patches today.

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-03-10 18:28 ` [PATCH v4 1/3] mm: add new api to enable ksm per process Stefan Roesch
  2023-03-13 16:26   ` Johannes Weiner
@ 2023-04-03 10:37   ` David Hildenbrand
  2023-04-03 11:03     ` David Hildenbrand
  2023-04-03 15:50     ` Stefan Roesch
  1 sibling, 2 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-04-03 10:37 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	Bagas Sanjaya

On 10.03.23 19:28, Stefan Roesch wrote:
> Patch series "mm: process/cgroup ksm support", v3.
> 
> So far KSM can only be enabled by calling madvise for memory regions.  To
> be able to use KSM for more workloads, KSM needs to have the ability to be
> enabled / disabled at the process / cgroup level.
> 
> Use case 1:
> 
>    The madvise call is not available in the programming language.  An
>    example for this are programs with forked workloads using a garbage
>    collected language without pointers.  In such a language madvise cannot
>    be made available.
> 
>    In addition the addresses of objects get moved around as they are
>    garbage collected.  KSM sharing needs to be enabled "from the outside"
>    for these types of workloads.

I guess the interpreter could enable it (like a memory allocator could 
enable it for the whole heap). But I get that it's much easier to enable 
this per-process, and eventually only when a lot of the same processes 
are running in that particular environment.

> 
> Use case 2:
> 
>    The same interpreter can also be used for workloads where KSM brings
>    no benefit or even has overhead.  We'd like to be able to enable KSM on
>    a workload by workload basis.

Agreed. A per-process control is also helpful to identify workloads 
where KSM might be beneficial (and to which degree).

> 
> Use case 3:
> 
>    With the madvise call sharing opportunities are only enabled for the
>    current process: it is a workload-local decision.  A considerable number
>    of sharing opportunities may exist across multiple workloads or jobs.
>    Only a higher-level entity like a job scheduler or container can know
>    for certain if it is running one or more instances of a job.  That job
>    scheduler, however, doesn't have the necessary internal workload knowledge
>    to make targeted madvise calls.
> 
> Security concerns:
> 
>    In previous discussions security concerns have been brought up.  The
>    problem is that an individual workload does not have the knowledge about
>    what else is running on a machine.  Therefore it has to be very
>    conservative in what memory areas can be shared or not.  However, if the
>    system is dedicated to running multiple jobs within the same security
>    domain, its the job scheduler that has the knowledge that sharing can be
>    safely enabled and is even desirable.
> 
> Performance:
> 
>    Experiments with using UKSM have shown a capacity increase of around
>    20%.
> 

As raised, it would be great to include more details about the workload 
where this particularly helps (e.g., a lot of Django processes operating 
in the same domain).

> 
> 1. New options for prctl system command
> 
>     This patch series adds two new options to the prctl system call.
>     The first one allows to enable KSM at the process level and the second
>     one to query the setting.
> 
>     The setting will be inherited by child processes.
> 
>     With the above setting, KSM can be enabled for the seed process of a
>     cgroup and all processes in the cgroup will inherit the setting.
> 
> 2. Changes to KSM processing
> 
>     When KSM is enabled at the process level, the KSM code will iterate
>     over all the VMA's and enable KSM for the eligible VMA's.
> 
>     When forking a process that has KSM enabled, the setting will be
>     inherited by the new child process.
> 
>     In addition when KSM is disabled for a process, KSM will be disabled
>     for the VMA's where KSM has been enabled.

Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new 
prctl is enabled for a process?

> 
> 3. Add general_profit metric
> 
>     The general_profit metric of KSM is specified in the documentation,
>     but not calculated.  This adds the general profit metric to
>     /sys/kernel/debug/mm/ksm.
> 
> 4. Add more metrics to ksm_stat
> 
>     This adds the process profit and ksm type metric to
>     /proc/<pid>/ksm_stat.
> 
> 5. Add more tests to ksm_tests
> 
>     This adds an option to specify the merge type to the ksm_tests.
>     This allows to test madvise and prctl KSM.  It also adds a new option
>     to query if prctl KSM has been enabled.  It adds a fork test to verify
>     that the KSM process setting is inherited by client processes.
> 
> An update to the prctl(2) manpage has been proposed at [1].
> 
> This patch (of 3):
> 
> This adds a new prctl API to enable and disable KSM on a per-process
> basis instead of only at the VMA basis (with madvise).
> 
> 1) Introduce new MMF_VM_MERGE_ANY flag
> 
>     This introduces the new flag MMF_VM_MERGE_ANY flag.  When this flag
>     is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>     process.
> 
> 2) add flag to __ksm_enter
> 
>     This change adds the flag parameter to __ksm_enter.  This allows to
>     distinguish if ksm was called by prctl or madvise.
> 
> 3) add flag to __ksm_exit call
> 
>     This adds the flag parameter to the __ksm_exit() call.  This allows
>     to distinguish if this call is for a prctl or madvise invocation.
> 
> 4) invoke madvise for all vmas in scan_get_next_rmap_item
> 
>     If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>     over all the vmas and enable ksm if possible.  For the vmas that can be
>     ksm enabled this is only done once.
> 
> 5) support disabling of ksm for a process
> 
>     This adds the ability to disable ksm for a process if ksm has been
>     enabled for the process.
> 
> 6) add new prctl option to get and set ksm for a process
> 
>     This adds two new options to the prctl system call
>     - enable ksm for all vmas of a process (if the vmas support it).
>     - query if ksm has been enabled for a process.


Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way, 
to instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE 
infrastructure. Especially:

> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable,
>     compatible VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
    already set.

2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
>     VM_MERGEABLE?

> Then you can avoid all runtime checks for compatible VMAs and only look 
at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE will be completely 
expressive then for all VMAs. You don't need vma_ksm_mergeable() then.


Another thing to consider is interaction with arch/s390/mm/gmap.c: 
s390x/kvm does not support KSM and it has to disable it for all VMAs. We 
have to find a way to fence the prctl (for example, fail setting the 
prctl after gmap_mark_unmergeable() ran, and make 
gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully 
in some other way).


> 
> Link: https://lkml.kernel.org/r/20230227220206.436662-1-shr@devkernel.io [1]
> Link: https://lkml.kernel.org/r/20230224044000.3084046-1-shr@devkernel.io
> Link: https://lkml.kernel.org/r/20230224044000.3084046-2-shr@devkernel.io
> Signed-off-by: Stefan Roesch <shr@devkernel.io>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: Bagas Sanjaya <bagasdotme@gmail.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>   include/linux/ksm.h            | 14 ++++--
>   include/linux/sched/coredump.h |  1 +
>   include/uapi/linux/prctl.h     |  2 +
>   kernel/sys.c                   | 27 ++++++++++
>   mm/ksm.c                       | 90 +++++++++++++++++++++++-----------
>   5 files changed, 101 insertions(+), 33 deletions(-)
> 
> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
> index 7e232ba59b86..d38a05a36298 100644
> --- a/include/linux/ksm.h
> +++ b/include/linux/ksm.h
> @@ -18,20 +18,24 @@
>   #ifdef CONFIG_KSM
>   int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>   		unsigned long end, int advice, unsigned long *vm_flags);
> -int __ksm_enter(struct mm_struct *mm);
> -void __ksm_exit(struct mm_struct *mm);
> +int __ksm_enter(struct mm_struct *mm, int flag);
> +void __ksm_exit(struct mm_struct *mm, int flag);
>   
>   static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
>   {
> +	if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
> +		return __ksm_enter(mm, MMF_VM_MERGE_ANY);
>   	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
> -		return __ksm_enter(mm);
> +		return __ksm_enter(mm, MMF_VM_MERGEABLE);
>   	return 0;
>   }
>   
>   static inline void ksm_exit(struct mm_struct *mm)
>   {
> -	if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
> -		__ksm_exit(mm);
> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
> +		__ksm_exit(mm, MMF_VM_MERGE_ANY);
> +	else if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
> +		__ksm_exit(mm, MMF_VM_MERGEABLE);
>   }
>   
>   /*
> diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
> index 0e17ae7fbfd3..0ee96ea7a0e9 100644
> --- a/include/linux/sched/coredump.h
> +++ b/include/linux/sched/coredump.h
> @@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm)
>   #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
>   				 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK)
>   
> +#define MMF_VM_MERGE_ANY	29
>   #endif /* _LINUX_SCHED_COREDUMP_H */
> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
> index 1312a137f7fb..759b3f53e53f 100644
> --- a/include/uapi/linux/prctl.h
> +++ b/include/uapi/linux/prctl.h
> @@ -290,4 +290,6 @@ struct prctl_mm_map {
>   #define PR_SET_VMA		0x53564d41
>   # define PR_SET_VMA_ANON_NAME		0
>   
> +#define PR_SET_MEMORY_MERGE		67
> +#define PR_GET_MEMORY_MERGE		68
>   #endif /* _LINUX_PRCTL_H */
> diff --git a/kernel/sys.c b/kernel/sys.c
> index 495cd87d9bf4..edc439b1cae9 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -15,6 +15,7 @@
>   #include <linux/highuid.h>
>   #include <linux/fs.h>
>   #include <linux/kmod.h>
> +#include <linux/ksm.h>
>   #include <linux/perf_event.h>
>   #include <linux/resource.h>
>   #include <linux/kernel.h>
> @@ -2661,6 +2662,32 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>   	case PR_SET_VMA:
>   		error = prctl_set_vma(arg2, arg3, arg4, arg5);
>   		break;
> +#ifdef CONFIG_KSM
> +	case PR_SET_MEMORY_MERGE:
> +		if (!capable(CAP_SYS_RESOURCE))
> +			return -EPERM;
> +
> +		if (arg2) {
> +			if (mmap_write_lock_killable(me->mm))
> +				return -EINTR;
> +
> +			if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags))
> +				error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY);

Hm, I think this might be problematic if we already called __ksm_enter() 
via madvise(). Maybe we should really consider making MMF_VM_MERGE_ANY 
set MMF_VM_MERGEABLE instead. Like:

error = 0;
if (test_bit(MMF_VM_MERGEABLE, &me->mm->flags))
	error = __ksm_enter(me->mm);
if (!error)
	set_bit(MMF_VM_MERGE_ANY, &me->mm->flags);

> +			mmap_write_unlock(me->mm);
> +		} else {
> +			__ksm_exit(me->mm, MMF_VM_MERGE_ANY);

Hm, I'd prefer if we really only call __ksm_exit() when we really exit 
the process. Is there a strong requirement to optimize disabling of KSM 
or would it be sufficient to clear the MMF_VM_MERGE_ANY flag here?

Also, I wonder what happens if we have another VMA in that process that 
has it enabled ..

Last but not least, wouldn't we want to do the same thing as 
MADV_UNMERGEABLE and actually unmerge the KSM pages?


It smells like it could be simpler and more consistent to handle by 
letting PR_SET_MEMORY_MERGE piggy-back on MMF_VM_MERGEABLE/VM_MERGEABLE 
and mimic what ksm_madvise() does simply for all VMAs.

> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -534,16 +534,58 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
>   	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
>   }
>   
> +static bool vma_ksm_compatible(struct vm_area_struct *vma)
> +{
> +	/*
> +	 * Be somewhat over-protective for now!
> +	 */
> +	if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
> +			     VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
> +			     VM_HUGETLB | VM_MIXEDMAP))
> +		return false;		/* just ignore the advice */

That comment is kind-of stale and ksm_madvise() specific.

> +

The VM_MERGEABLE check is really only used for ksm_madvise() to return 
immediately. I suggest keeping it in ksm_madvise() -- "Already enabled". 
Returning "false" in that case looks wrong (it's not broken because you 
do an early check in vma_ksm_mergeable(), it's just semantically weird).



-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-03 10:37   ` David Hildenbrand
@ 2023-04-03 11:03     ` David Hildenbrand
  2023-04-04 16:32       ` Stefan Roesch
                         ` (2 more replies)
  2023-04-03 15:50     ` Stefan Roesch
  1 sibling, 3 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-04-03 11:03 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	Bagas Sanjaya, Janosch Frank, Christian Borntraeger

On 03.04.23 12:37, David Hildenbrand wrote:
> On 10.03.23 19:28, Stefan Roesch wrote:
>> Patch series "mm: process/cgroup ksm support", v3.
>>
>> So far KSM can only be enabled by calling madvise for memory regions.  To
>> be able to use KSM for more workloads, KSM needs to have the ability to be
>> enabled / disabled at the process / cgroup level.
>>
>> Use case 1:
>>
>>     The madvise call is not available in the programming language.  An
>>     example for this are programs with forked workloads using a garbage
>>     collected language without pointers.  In such a language madvise cannot
>>     be made available.
>>
>>     In addition the addresses of objects get moved around as they are
>>     garbage collected.  KSM sharing needs to be enabled "from the outside"
>>     for these types of workloads.
> 
> I guess the interpreter could enable it (like a memory allocator could
> enable it for the whole heap). But I get that it's much easier to enable
> this per-process, and eventually only when a lot of the same processes
> are running in that particular environment.
> 
>>
>> Use case 2:
>>
>>     The same interpreter can also be used for workloads where KSM brings
>>     no benefit or even has overhead.  We'd like to be able to enable KSM on
>>     a workload by workload basis.
> 
> Agreed. A per-process control is also helpful to identify workloads
> where KSM might be beneficial (and to which degree).
> 
>>
>> Use case 3:
>>
>>     With the madvise call sharing opportunities are only enabled for the
>>     current process: it is a workload-local decision.  A considerable number
>>     of sharing opportunities may exist across multiple workloads or jobs.
>>     Only a higher-level entity like a job scheduler or container can know
>>     for certain if it is running one or more instances of a job.  That job
>>     scheduler, however, doesn't have the necessary internal workload knowledge
>>     to make targeted madvise calls.
>>
>> Security concerns:
>>
>>     In previous discussions security concerns have been brought up.  The
>>     problem is that an individual workload does not have the knowledge about
>>     what else is running on a machine.  Therefore it has to be very
>>     conservative in what memory areas can be shared or not.  However, if the
>>     system is dedicated to running multiple jobs within the same security
>>     domain, it's the job scheduler that has the knowledge that sharing can be
>>     safely enabled and is even desirable.
>>
>> Performance:
>>
>>     Experiments with using UKSM have shown a capacity increase of around
>>     20%.
>>
> 
> As raised, it would be great to include more details about the workload
> where this particularly helps (e.g., a lot of Django processes operating
> in the same domain).
> 
>>
>> 1. New options for prctl system command
>>
>>      This patch series adds two new options to the prctl system call.
>>      The first one allows to enable KSM at the process level and the second
>>      one to query the setting.
>>
>>      The setting will be inherited by child processes.
>>
>>      With the above setting, KSM can be enabled for the seed process of a
>>      cgroup and all processes in the cgroup will inherit the setting.
>>
>> 2. Changes to KSM processing
>>
>>      When KSM is enabled at the process level, the KSM code will iterate
>>      over all the VMA's and enable KSM for the eligible VMA's.
>>
>>      When forking a process that has KSM enabled, the setting will be
>>      inherited by the new child process.
>>
>>      In addition when KSM is disabled for a process, KSM will be disabled
>>      for the VMA's where KSM has been enabled.
> 
> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new
> prctl is enabled for a process?
> 
>>
>> 3. Add general_profit metric
>>
>>      The general_profit metric of KSM is specified in the documentation,
>>      but not calculated.  This adds the general profit metric to
>>      /sys/kernel/debug/mm/ksm.
>>
>> 4. Add more metrics to ksm_stat
>>
>>      This adds the process profit and ksm type metric to
>>      /proc/<pid>/ksm_stat.
>>
>> 5. Add more tests to ksm_tests
>>
>>      This adds an option to specify the merge type to the ksm_tests.
>>      This allows to test madvise and prctl KSM.  It also adds a new option
>>      to query if prctl KSM has been enabled.  It adds a fork test to verify
>>      that the KSM process setting is inherited by client processes.
>>
>> An update to the prctl(2) manpage has been proposed at [1].
>>
>> This patch (of 3):
>>
>> This adds a new prctl API to enable and disable KSM on a per-process
>> basis instead of only at the VMA basis (with madvise).
>>
>> 1) Introduce new MMF_VM_MERGE_ANY flag
>>
>>      This introduces the new flag MMF_VM_MERGE_ANY flag.  When this flag
>>      is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>>      process.
>>
>> 2) add flag to __ksm_enter
>>
>>      This change adds the flag parameter to __ksm_enter.  This allows to
>>      distinguish if ksm was called by prctl or madvise.
>>
>> 3) add flag to __ksm_exit call
>>
>>      This adds the flag parameter to the __ksm_exit() call.  This allows
>>      to distinguish if this call is for a prctl or madvise invocation.
>>
>> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>>
>>      If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>>      over all the vmas and enable ksm if possible.  For the vmas that can be
>>      ksm enabled this is only done once.
>>
>> 5) support disabling of ksm for a process
>>
>>      This adds the ability to disable ksm for a process if ksm has been
>>      enabled for the process.
>>
>> 6) add new prctl option to get and set ksm for a process
>>
>>      This adds two new options to the prctl system call
>>      - enable ksm for all vmas of a process (if the vmas support it).
>>      - query if ksm has been enabled for a process.
> 
> 
> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way,
> to instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE
> infrastructure. Especially:
> 
> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable,
>      compatible VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
>      already set.
> 
> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
>      VM_MERGEABLE?
> 
> Then you can avoid all runtime checks for compatible VMAs and only look
> at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE will be completely
> expressive then for all VMAs. You don't need vma_ksm_mergeable() then.
> 
> 
> Another thing to consider is interaction with arch/s390/mm/gmap.c:
> s390x/kvm does not support KSM and it has to disable it for all VMAs. We
> have to find a way to fence the prctl (for example, fail setting the
> prctl after gmap_mark_unmergeable() ran, and make
> gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully
> in some other way).


Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" 
is doing what it's supposed to do. I don't think this currently prevents 
madvise() from re-enabling merging on that VMA.

@Christian, Janosch, am I missing something?

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-03 10:37   ` David Hildenbrand
  2023-04-03 11:03     ` David Hildenbrand
@ 2023-04-03 15:50     ` Stefan Roesch
  2023-04-03 17:02       ` David Hildenbrand
  1 sibling, 1 reply; 36+ messages in thread
From: Stefan Roesch @ 2023-04-03 15:50 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya


David Hildenbrand <david@redhat.com> writes:

> On 10.03.23 19:28, Stefan Roesch wrote:
>> Patch series "mm: process/cgroup ksm support", v3.
>> So far KSM can only be enabled by calling madvise for memory regions.  To
>> be able to use KSM for more workloads, KSM needs to have the ability to be
>> enabled / disabled at the process / cgroup level.
>> Use case 1:
>>    The madvise call is not available in the programming language.  An
>>    example for this are programs with forked workloads using a garbage
>>    collected language without pointers.  In such a language madvise cannot
>>    be made available.
>>    In addition the addresses of objects get moved around as they are
>>    garbage collected.  KSM sharing needs to be enabled "from the outside"
>>    for these types of workloads.
>
> I guess the interpreter could enable it (like a memory allocator could enable it
> for the whole heap). But I get that it's much easier to enable this per-process,
> and eventually only when a lot of the same processes are running in that
> particular environment.
>

We don't want it to get enabled for all workloads of that interpreter;
instead we want to be able to select for which workloads we enable KSM.

>> Use case 2:
>>    The same interpreter can also be used for workloads where KSM brings
>>    no benefit or even has overhead.  We'd like to be able to enable KSM on
>>    a workload by workload basis.
>
> Agreed. A per-process control is also helpful to identify workloads where KSM
> might be beneficial (and to which degree).
>
>> Use case 3:
>>    With the madvise call sharing opportunities are only enabled for the
>>    current process: it is a workload-local decision.  A considerable number
>>    of sharing opportunities may exist across multiple workloads or jobs.
>>    Only a higher-level entity like a job scheduler or container can know
>>    for certain if it is running one or more instances of a job.  That job
>>    scheduler, however, doesn't have the necessary internal workload knowledge
>>    to make targeted madvise calls.
>> Security concerns:
>>    In previous discussions security concerns have been brought up.  The
>>    problem is that an individual workload does not have the knowledge about
>>    what else is running on a machine.  Therefore it has to be very
>>    conservative in what memory areas can be shared or not.  However, if the
>>    system is dedicated to running multiple jobs within the same security
>>    domain, it's the job scheduler that has the knowledge that sharing can be
>>    safely enabled and is even desirable.
>> Performance:
>>    Experiments with using UKSM have shown a capacity increase of around
>>    20%.
>>
>
> As raised, it would be great to include more details about the workload where
> this particularly helps (e.g., a lot of Django processes operating in the same
> domain).
>

I can add that the Django processes are part of the same domain with the
next version of the patch series.

>> 1. New options for prctl system command
>>     This patch series adds two new options to the prctl system call.
>>     The first one allows to enable KSM at the process level and the second
>>     one to query the setting.
>>     The setting will be inherited by child processes.
>>     With the above setting, KSM can be enabled for the seed process of a
>>     cgroup and all processes in the cgroup will inherit the setting.
>> 2. Changes to KSM processing
>>     When KSM is enabled at the process level, the KSM code will iterate
>>     over all the VMA's and enable KSM for the eligible VMA's.
>>     When forking a process that has KSM enabled, the setting will be
>>     inherited by the new child process.
>>     In addition when KSM is disabled for a process, KSM will be disabled
>>     for the VMA's where KSM has been enabled.
>
> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new prctl is
> enabled for a process?

I decided to allow enabling KSM with prctl even when MADV_MERGEABLE has
already been used; this allows more flexibility.
>
>> 3. Add general_profit metric
>>     The general_profit metric of KSM is specified in the documentation,
>>     but not calculated.  This adds the general profit metric to
>>     /sys/kernel/debug/mm/ksm.
>> 4. Add more metrics to ksm_stat
>>     This adds the process profit and ksm type metric to
>>     /proc/<pid>/ksm_stat.
>> 5. Add more tests to ksm_tests
>>     This adds an option to specify the merge type to the ksm_tests.
>>     This allows to test madvise and prctl KSM.  It also adds a new option
>>     to query if prctl KSM has been enabled.  It adds a fork test to verify
>>     that the KSM process setting is inherited by client processes.
>> An update to the prctl(2) manpage has been proposed at [1].
>> This patch (of 3):
>> This adds a new prctl API to enable and disable KSM on a per-process
>> basis instead of only at the VMA basis (with madvise).
>> 1) Introduce new MMF_VM_MERGE_ANY flag
>>     This introduces the new flag MMF_VM_MERGE_ANY flag.  When this flag
>>     is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>>     process.
>> 2) add flag to __ksm_enter
>>     This change adds the flag parameter to __ksm_enter.  This allows
>>     distinguishing whether ksm was enabled by prctl or madvise.
>> 3) add flag to __ksm_exit call
>>     This adds the flag parameter to the __ksm_exit() call.  This allows
>>     distinguishing whether the call is for a prctl or madvise invocation.
>> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>>     If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>>     over all the vmas and enable ksm if possible.  For the vmas that can be
>>     ksm enabled this is only done once.
>> 5) support disabling of ksm for a process
>>     This adds the ability to disable ksm for a process if ksm has been
>>     enabled for the process.
>> 6) add new prctl option to get and set ksm for a process
>>     This adds two new options to the prctl system call
>>     - enable ksm for all vmas of a process (if the vmas support it).
>>     - query if ksm has been enabled for a process.
>
>
> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way,
> making it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE infrastructure?
> Especially:
>
> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable,
>    compatible VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
>    already set.
>
> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
>    VM_MERGEABLE?
>
> Then you can avoid all runtime checks for compatible VMAs and only look at the
> VM_MERGEABLE flag. In fact, the VM_MERGEABLE will be completely expressive then
> for all VMAs. You don't need vma_ksm_mergeable() then.
>

I didn't consider the above approach; I'll have a look. I can see the
benefit of not needing vma_ksm_mergeable().

> Another thing to consider is interaction with arch/s390/mm/gmap.c: s390x/kvm
> does not support KSM and it has to disable it for all VMAs. We have to find a
> way to fence the prctl (for example, fail setting the prctl after
> gmap_mark_unmergeable() ran, and make gmap_mark_unmergeable() fail if the prctl
> ran -- or handle it gracefully in some other way).
>
>

I'll have a look.

>> Link: https://lkml.kernel.org/r/20230227220206.436662-1-shr@devkernel.io [1]
>> Link: https://lkml.kernel.org/r/20230224044000.3084046-1-shr@devkernel.io
>> Link: https://lkml.kernel.org/r/20230224044000.3084046-2-shr@devkernel.io
>> Signed-off-by: Stefan Roesch <shr@devkernel.io>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Rik van Riel <riel@surriel.com>
>> Cc: Bagas Sanjaya <bagasdotme@gmail.com>
>> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>> ---
>>   include/linux/ksm.h            | 14 ++++--
>>   include/linux/sched/coredump.h |  1 +
>>   include/uapi/linux/prctl.h     |  2 +
>>   kernel/sys.c                   | 27 ++++++++++
>>   mm/ksm.c                       | 90 +++++++++++++++++++++++-----------
>>   5 files changed, 101 insertions(+), 33 deletions(-)
>> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
>> index 7e232ba59b86..d38a05a36298 100644
>> --- a/include/linux/ksm.h
>> +++ b/include/linux/ksm.h
>> @@ -18,20 +18,24 @@
>>   #ifdef CONFIG_KSM
>>   int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>>   		unsigned long end, int advice, unsigned long *vm_flags);
>> -int __ksm_enter(struct mm_struct *mm);
>> -void __ksm_exit(struct mm_struct *mm);
>> +int __ksm_enter(struct mm_struct *mm, int flag);
>> +void __ksm_exit(struct mm_struct *mm, int flag);
>>     static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
>>   {
>> +	if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
>> +		return __ksm_enter(mm, MMF_VM_MERGE_ANY);
>>   	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
>> -		return __ksm_enter(mm);
>> +		return __ksm_enter(mm, MMF_VM_MERGEABLE);
>>   	return 0;
>>   }
>>     static inline void ksm_exit(struct mm_struct *mm)
>>   {
>> -	if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
>> -		__ksm_exit(mm);
>> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
>> +		__ksm_exit(mm, MMF_VM_MERGE_ANY);
>> +	else if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
>> +		__ksm_exit(mm, MMF_VM_MERGEABLE);
>>   }
>>     /*
>> diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
>> index 0e17ae7fbfd3..0ee96ea7a0e9 100644
>> --- a/include/linux/sched/coredump.h
>> +++ b/include/linux/sched/coredump.h
>> @@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm)
>>   #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
>>   				 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK)
>>   +#define MMF_VM_MERGE_ANY	29
>>   #endif /* _LINUX_SCHED_COREDUMP_H */
>> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
>> index 1312a137f7fb..759b3f53e53f 100644
>> --- a/include/uapi/linux/prctl.h
>> +++ b/include/uapi/linux/prctl.h
>> @@ -290,4 +290,6 @@ struct prctl_mm_map {
>>   #define PR_SET_VMA		0x53564d41
>>   # define PR_SET_VMA_ANON_NAME		0
>>   +#define PR_SET_MEMORY_MERGE		67
>> +#define PR_GET_MEMORY_MERGE		68
>>   #endif /* _LINUX_PRCTL_H */
>> diff --git a/kernel/sys.c b/kernel/sys.c
>> index 495cd87d9bf4..edc439b1cae9 100644
>> --- a/kernel/sys.c
>> +++ b/kernel/sys.c
>> @@ -15,6 +15,7 @@
>>   #include <linux/highuid.h>
>>   #include <linux/fs.h>
>>   #include <linux/kmod.h>
>> +#include <linux/ksm.h>
>>   #include <linux/perf_event.h>
>>   #include <linux/resource.h>
>>   #include <linux/kernel.h>
>> @@ -2661,6 +2662,32 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>>   	case PR_SET_VMA:
>>   		error = prctl_set_vma(arg2, arg3, arg4, arg5);
>>   		break;
>> +#ifdef CONFIG_KSM
>> +	case PR_SET_MEMORY_MERGE:
>> +		if (!capable(CAP_SYS_RESOURCE))
>> +			return -EPERM;
>> +
>> +		if (arg2) {
>> +			if (mmap_write_lock_killable(me->mm))
>> +				return -EINTR;
>> +
>> +			if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags))
>> +				error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY);
>
> Hm, I think this might be problematic if we already called __ksm_enter() via
> madvise(). Maybe we should really consider making MMF_VM_MERGE_ANY set
> MMF_VM_MERGEABLE instead. Like:
>
> error = 0;
> if (test_bit(MMF_VM_MERGEABLE, &me->mm->flags))
> 	error = __ksm_enter(me->mm);
> if (!error)
> 	set_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
>

If we make that change, we would no longer be able to distinguish
whether MMF_VM_MERGEABLE or MMF_VM_MERGE_ANY has been set.

>> +			mmap_write_unlock(me->mm);
>> +		} else {
>> +			__ksm_exit(me->mm, MMF_VM_MERGE_ANY);
>
> Hm, I'd prefer if we really only call __ksm_exit() when we really exit the
> process. Is there a strong requirement to optimize disabling of KSM or would it
> be sufficient to clear the MMF_VM_MERGE_ANY flag here?
>
Then we still have the mm_slot allocated until the process gets
terminated.

> Also, I wonder what happens if we have another VMA in that process that has it
> enabled ..
>
> Last but not least, wouldn't we want to do the same thing as MADV_UNMERGEABLE
> and actually unmerge the KSM pages?
>
Do you want to call unmerge for all VMA's?

>
> It smells like it could be simpler and more consistent to handle by letting
> PR_SET_MEMORY_MERGE piggy-back on MMF_VM_MERGABLE/VM_MERGABLE and mimic what
> ksm_madvise() does simply for all VMAs.
>
>> --- a/mm/ksm.c
>> +++ b/mm/ksm.c
>> @@ -534,16 +534,58 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
>>   	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
>>   }
>>   +static bool vma_ksm_compatible(struct vm_area_struct *vma)
>> +{
>> +	/*
>> +	 * Be somewhat over-protective for now!
>> +	 */
>> +	if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
>> +			     VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
>> +			     VM_HUGETLB | VM_MIXEDMAP))
>> +		return false;		/* just ignore the advice */
>
> That comment is kind-of stale and ksm_madvise() specific.
>

I'll remove the comment.

>> +
>
> The VM_MERGEABLE check is really only used for ksm_madvise() to return
> immediately. I suggest keeping it in ksm_madvise() -- "Already enabled".
> Returning "false" in that case looks wrong (it's not broken because you do an
> early check in vma_ksm_mergeable(), it's just semantically weird).
>

I'll move the check into ksm_madvise() and remove it here.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-04-03  9:48           ` David Hildenbrand
@ 2023-04-03 16:34             ` Stefan Roesch
  2023-04-03 17:04               ` David Hildenbrand
  2023-04-06 16:59               ` Stefan Roesch
  0 siblings, 2 replies; 36+ messages in thread
From: Stefan Roesch @ 2023-04-03 16:34 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Johannes Weiner, Andrew Morton, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins


David Hildenbrand <david@redhat.com> writes:

>>>> Obviously we could spend months analysing which exact allocations are
>>>> identical, and then more months or years reworking the architecture to
>>>> deduplicate them by hand and in userspace. But this isn't practical,
>>>> and KSM is specifically for cases where this isn't practical.
>>>> Based on your request in the previous thread, we investigated whether
>>>> the boost was coming from the unintended side effects of KSM splitting
>>>> THPs. This wasn't the case.
>>>> If you have other theories on how the results could be bogus, we'd be
>>>> happy to investigate those as well. But you have to let us know what
>>>> you're looking for.
>>>>
>>>
>>> Maybe I'm bad at making such requests but
>>>
>>> "Stefan, can you do me a favor and investigate which pages we end up
>>> deduplicating -- especially if it's mostly only the zeropage and if it's
>>> still that significant when disabling THP?"
>>>
>>> "In any case, it would be nice to get a feeling for how much variety in
>>> these 20% of deduplicated pages are. "
>>>
>>> is pretty clear to me. And shouldn't take months.
>>>
>
> Just to clarify: the details I requested are not meant to decide whether to
> reject the patch set (I understand that it can be beneficial to have); I
> primarily want to understand if we're really dealing with a workload where KSM
> is able to deduplicate pages that are non-trivial, to maybe figure out if there
> are other workloads that could similarly benefit -- or if we could optimize KSM
> for these specific cases or avoid the memory deduplication altogether.
>
> In contrast to e.g.:
>
> 1) THP resulted in many zeropages we end up deduplicating again. The THP
>    placement was unfortunate.
>
> 2) Unoptimized memory allocators that leave many identical pages mapped
>    after freeing up memory (e.g., zeroed pages, pages all filled with
>    poison values) instead of e.g., using MADV_DONTNEED to free up that
>    memory.
>
>

I repeated an experiment with and without KSM. In terms of THP there is
no huge difference between the two. On a 64GB main memory machine I see
between 100 - 400MB in AnonHugePages.

>> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
>> Instagram workload. The workload consists of 36 processes plus a few
>> sidecar processes.
>
> Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in that
> environment?
>

It's set to the standard value of 256.

In the meantime I have run experiments with different settings for
pages_to_scan. With the default value of 100, we only get a relatively
small benefit from KSM. If I increase the value to, for instance, 2000 or
3000, the savings are substantial. (The workload is memory bound, not
CPU bound.)

Here are some stats for setting pages_to_scan to 3000:

full_scans: 560
general_profit: 20620539008
max_page_sharing: 256
merge_across_nodes: 1
pages_shared: 125446
pages_sharing: 5259506
pages_to_scan: 3000
pages_unshared: 1897537
pages_volatile: 12389223
run: 1
sleep_millisecs: 20
stable_node_chains: 176
stable_node_chains_prune_millisecs: 2000
stable_node_dups: 2604
use_zero_pages: 0
zero_pages_sharing: 0


> What would be interesting is pages_shared after max_page_sharing was set to a
> very high number such that pages_shared does not include duplicates. Then
> pages_shared actually expresses how many different pages we deduplicate. No need
> to run without THP in that case.
>

That's on my list for the next set of experiments.

> Similarly, enabling "use_zero_pages" could highlight if your workload ends up
> deduplicating a lot of zeropages. But maxing out max_page_sharing would be
> sufficient to understand what's happening.
>
>

I already ran experiments with use_zero_pages, but they didn't make a
difference. I'll repeat the experiment with a higher pages_to_scan
value.

>> Each of these individual processes has around 500MB in KSM pages.
>>
>
> That's really a lot, thanks.
>
>> Also to give some idea for individual VMA's
>> 7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM:
>> 73160 KB)
>>
>
> I'll have a look at the patches today.


* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-03 15:50     ` Stefan Roesch
@ 2023-04-03 17:02       ` David Hildenbrand
  0 siblings, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-04-03 17:02 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya

On 03.04.23 17:50, Stefan Roesch wrote:
>> I guess the interpreter could enable it (like a memory allocator could enable it
>> for the whole heap). But I get that it's much easier to enable this per-process,
>> and eventually only when a lot of the same processes are running in that
>> particular environment.
>>
> 
> We don't want it to get enabled for all workloads of that interpreter,
> instead we want to be able to select for which workloads we enable KSM.
> 

Right.

> 
>>> 1. New options for prctl system command
>>>      This patch series adds two new options to the prctl system call.
>>>      The first one allows to enable KSM at the process level and the second
>>>      one to query the setting.
>>>      The setting will be inherited by child processes.
>>>      With the above setting, KSM can be enabled for the seed process of a
>>>      cgroup and all processes in the cgroup will inherit the setting.
>>> 2. Changes to KSM processing
>>>      When KSM is enabled at the process level, the KSM code will iterate
>>>      over all the VMA's and enable KSM for the eligible VMA's.
>>>      When forking a process that has KSM enabled, the setting will be
>>>      inherited by the new child process.
>>>      In addition when KSM is disabled for a process, KSM will be disabled
>>>      for the VMA's where KSM has been enabled.
>>
>> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new prctl is
>> enabled for a process?
> 
> I decided to allow enabling KSM with prctl even when MADV_MERGEABLE,
> this allows more flexibility.

MADV_MERGEABLE will be a nop. But IIUC, MADV_UNMERGEABLE will end up 
calling unmerge_ksm_pages() and clear VM_MERGEABLE. But then, the next 
KSM scan will merge the pages in there again.

Not sure if that flexibility is worth having.

[...]


>>> @@ -2661,6 +2662,32 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>>>    	case PR_SET_VMA:
>>>    		error = prctl_set_vma(arg2, arg3, arg4, arg5);
>>>    		break;
>>> +#ifdef CONFIG_KSM
>>> +	case PR_SET_MEMORY_MERGE:
>>> +		if (!capable(CAP_SYS_RESOURCE))
>>> +			return -EPERM;
>>> +
>>> +		if (arg2) {
>>> +			if (mmap_write_lock_killable(me->mm))
>>> +				return -EINTR;
>>> +
>>> +			if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags))
>>> +				error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY);
>>
>> Hm, I think this might be problematic if we alread called __ksm_enter() via
>> madvise(). Maybe we should really consider making MMF_VM_MERGE_ANY set
>> MMF_VM_MERGABLE instead. Like:
>>
>> error = 0;
>> if(test_bit(MMF_VM_MERGEABLE, &me->mm->flags))
>> 	error = __ksm_enter(me->mm);
>> if (!error)
>> 	set_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
>>
> 
> If we make that change, we would no longer be able to distinguish
> whether MMF_VM_MERGEABLE or MMF_VM_MERGE_ANY has been set.

Why would you need that exactly? To clean up? See below.

> 
>>> +			mmap_write_unlock(me->mm);
>>> +		} else {
>>> +			__ksm_exit(me->mm, MMF_VM_MERGE_ANY);
>>
>> Hm, I'd prefer if we really only call __ksm_exit() when we really exit the
>> process. Is there a strong requirement to optimize disabling of KSM or would it
>> be sufficient to clear the MMF_VM_MERGE_ANY flag here?
>>
> Then we still have the mm_slot allocated until the process gets
> terminated.

Which is the same as using MADV_UNMERGEABLE, no?

> 
>> Also, I wonder what happens if we have another VMA in that process that has it
>> enabled ..
>>
>> Last but not least, wouldn't we want to do the same thing as MADV_UNMERGEABLE
>> and actually unmerge the KSM pages?
>>
> Do you want to call unmerge for all VMA's?

The question is what clearing MMF_VM_MERGE_ANY is supposed to do. If 
it's supposed to disable KSM (as MADV_UNMERGEABLE would), then I guess
you should go over all VMA's and unmerge.

Also, it depends on how you want to handle VM_MERGEABLE with
MMF_VM_MERGE_ANY. If MMF_VM_MERGE_ANY would not set VM_MERGEABLE, then
you'd only unmerge where VM_MERGEABLE is not set. Otherwise, you'd
unshare everywhere VM_MERGEABLE is set (and clear VM_MERGEABLE while
at it).

Unsharing when clearing MMF_VM_MERGE_ANY might be the right thing to do 
IMHO.


I guess the main questions regarding implementation are:

1) Do we want setting MMF_VM_MERGE_ANY to set VM_MERGEABLE on all
    candidate VMA's (go over all VMA's and set VM_MERGEABLE). Then,
    clearing MMF_VM_MERGE_ANY would simply unmerge and clear VM_MERGEABLE
    on all VMA's.

2) Do we want to make MMF_VM_MERGE_ANY imply MMF_VM_MERGEABLE. You could
    still disable KSM (__ksm_exit()) during clearing MMF_VM_MERGE_ANY
    after going over all VMA's (where you might want to unshare already
    either way).

I guess the code will end up simpler if you make MMF_VM_MERGE_ANY simply
piggy-back on MMF_VM_MERGEABLE + VM_MERGEABLE. I might be wrong, of course.

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-04-03 16:34             ` Stefan Roesch
@ 2023-04-03 17:04               ` David Hildenbrand
  2023-04-06 16:59               ` Stefan Roesch
  1 sibling, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-04-03 17:04 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: Johannes Weiner, Andrew Morton, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins

On 03.04.23 18:34, Stefan Roesch wrote:
>>
>> In contrast to e.g.:
>>
>> 1) THP resulted in many zeropages we end up deduplicating again. The THP
>>     placement was unfortunate.
>>
>> 2) Unoptimized memory allocators that leave many identical pages mapped
>>     after freeing up memory (e.g., zeroed pages, pages all filled with
>>     poison values) instead of e.g., using MADV_DONTNEED to free up that
>>     memory.
>>
>>
> 
> I repeated an experiment with and without KSM. In terms of THP there is
> no huge difference between the two. On a 64GB main memory machine I see
> between 100 - 400MB in AnonHugePages.
> 
>>> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
>>> Instagram workload. The workload consists of 36 processes plus a few
>>> sidecar processes.
>>
>> Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in that
>> environment?
>>
> 
> It's set to the standard value of 256.
> 
> In the meantime I have run experiments with different settings for
> pages_to_scan. With the default value of 100, we only get a relatively
> small benefit of KSM. If I increase the value to for instance to 2000 or
> 3000 the savings are substantial. (The workload is memory bound, not
> CPU bound).

Interesting.

> 
> Here are some stats for setting pages_to_scan to 3000:
> 
> full_scans: 560
> general_profit: 20620539008
> max_page_sharing: 256
> merge_across_nodes: 1
> pages_shared: 125446
> pages_sharing: 5259506
> pages_to_scan: 3000
> pages_unshared: 1897537
> pages_volatile: 12389223
> run: 1
> sleep_millisecs: 20
> stable_node_chains: 176
> stable_node_chains_prune_millisecs: 2000
> stable_node_dups: 2604
> use_zero_pages: 0
> zero_pages_sharing: 0
> 
> 
>> What would be interesting is pages_shared after max_page_sharing was set to a
>> very high number such that pages_shared does not include duplicates. Then
>> pages_shared actually expresses how many different pages we deduplicate. No need
>> to run without THP in that case.
>>
> 
> Thats on my list for the next set of experiments.

Splendid.

>> Similarly, enabling "use_zero_pages" could highlight if your workload ends up
>> deduplicating a lot of zeropages. But maxing out max_page_sharing would be
>> sufficient to understand what's happening.
>>
>>
> 
> I already ran experiments with use_zero_pages, but they didn't make a
> difference. I'll repeat the experiment with a higher pages_to_scan
> value.

Okay, so it's most certainly not the zeropage. Thanks for that 
information and running the experiments!

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-03 11:03     ` David Hildenbrand
@ 2023-04-04 16:32       ` Stefan Roesch
  2023-04-04 16:43       ` Stefan Roesch
  2023-04-05  6:51       ` Christian Borntraeger
  2 siblings, 0 replies; 36+ messages in thread
From: Stefan Roesch @ 2023-04-04 16:32 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya, Janosch Frank,
	Christian Borntraeger


David Hildenbrand <david@redhat.com> writes:

> On 03.04.23 12:37, David Hildenbrand wrote:
>> On 10.03.23 19:28, Stefan Roesch wrote:
>>> Patch series "mm: process/cgroup ksm support", v3.
>>>
>>> So far KSM can only be enabled by calling madvise for memory regions.  To
>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>> enabled / disabled at the process / cgroup level.
>>>
>>> Use case 1:
>>>
>>>     The madvise call is not available in the programming language.
>>>     Examples of this are programs with forked workloads using a garbage
>>>     collected language without pointers.  In such a language madvise cannot
>>>     be made available.
>>>
>>>     In addition the addresses of objects get moved around as they are
>>>     garbage collected.  KSM sharing needs to be enabled "from the outside"
>>>     for these type of workloads.
>> I guess the interpreter could enable it (like a memory allocator could
>> enable it for the whole heap). But I get that it's much easier to enable
>> this per-process, and eventually only when a lot of the same processes
>> are running in that particular environment.
>>
>>>
>>> Use case 2:
>>>
>>>     The same interpreter can also be used for workloads where KSM brings
>>>     no benefit or even has overhead.  We'd like to be able to enable KSM on
>>>     a workload by workload basis.
>> Agreed. A per-process control is also helpful to identify workloads
>> where KSM might be beneficial (and to which degree).
>>
>>>
>>> Use case 3:
>>>
>>>     With the madvise call sharing opportunities are only enabled for the
>>>     current process: it is a workload-local decision.  A considerable number
>>>     of sharing opportunities may exist across multiple workloads or jobs.
>>>     Only a higher-level entity like a job scheduler or container can know
>>>     for certain if it's running one or more instances of a job.  That job
>>>     scheduler however doesn't have the necessary internal workload knowledge
>>>     to make targeted madvise calls.
>>>
>>> Security concerns:
>>>
>>>     In previous discussions security concerns have been brought up.  The
>>>     problem is that an individual workload does not have the knowledge about
>>>     what else is running on a machine.  Therefore it has to be very
>>>     conservative in what memory areas can be shared or not.  However, if the
>>>     system is dedicated to running multiple jobs within the same security
>>>     domain, it's the job scheduler that has the knowledge that sharing can be
>>>     safely enabled and is even desirable.
>>>
>>> Performance:
>>>
>>>     Experiments with using UKSM have shown a capacity increase of around
>>>     20%.
>>>
>> As raised, it would be great to include more details about the workload
>> where this particularly helps (e.g., a lot of Django processes operating
>> in the same domain).
>>
>>>
>>> 1. New options for prctl system command
>>>
>>>      This patch series adds two new options to the prctl system call.
>>>      The first one allows to enable KSM at the process level and the second
>>>      one to query the setting.
>>>
>>>      The setting will be inherited by child processes.
>>>
>>>      With the above setting, KSM can be enabled for the seed process of a
>>>      cgroup and all processes in the cgroup will inherit the setting.
>>>
>>> 2. Changes to KSM processing
>>>
>>>      When KSM is enabled at the process level, the KSM code will iterate
>>>      over all the VMA's and enable KSM for the eligible VMA's.
>>>
>>>      When forking a process that has KSM enabled, the setting will be
>>>      inherited by the new child process.
>>>
>>>      In addition when KSM is disabled for a process, KSM will be disabled
>>>      for the VMA's where KSM has been enabled.
>> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new
>> prctl is enabled for a process?
>>
>>>
>>> 3. Add general_profit metric
>>>
>>>      The general_profit metric of KSM is specified in the documentation,
>>>      but not calculated.  This adds the general profit metric to
>>>      /sys/kernel/debug/mm/ksm.
>>>
>>> 4. Add more metrics to ksm_stat
>>>
>>>      This adds the process profit and ksm type metric to
>>>      /proc/<pid>/ksm_stat.
>>>
>>> 5. Add more tests to ksm_tests
>>>
>>>      This adds an option to specify the merge type to the ksm_tests.
>>>      This allows to test madvise and prctl KSM.  It also adds a new option
>>>      to query if prctl KSM has been enabled.  It adds a fork test to verify
>>>      that the KSM process setting is inherited by client processes.
>>>
>>> An update to the prctl(2) manpage has been proposed at [1].
>>>
>>> This patch (of 3):
>>>
>>> This adds a new prctl API to enable and disable KSM on a per-process
>>> basis instead of only at the VMA level (with madvise).
>>>
>>> 1) Introduce new MMF_VM_MERGE_ANY flag
>>>
>>>      This introduces the new flag MMF_VM_MERGE_ANY flag.  When this flag
>>>      is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>>>      process.
>>>
>>> 2) add flag to __ksm_enter
>>>
>>>      This change adds the flag parameter to __ksm_enter.  This allows
>>>      distinguishing whether ksm was enabled by prctl or madvise.
>>>
>>> 3) add flag to __ksm_exit call
>>>
>>>      This adds the flag parameter to the __ksm_exit() call.  This allows
>>>      distinguishing whether the call is for a prctl or madvise invocation.
>>>
>>> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>>>
>>>      If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>>>      over all the vmas and enable ksm if possible.  For the vmas that can be
>>>      ksm enabled this is only done once.
>>>
>>> 5) support disabling of ksm for a process
>>>
>>>      This adds the ability to disable ksm for a process if ksm has been
>>>      enabled for the process.
>>>
>>> 6) add new prctl option to get and set ksm for a process
>>>
>>>      This adds two new options to the prctl system call
>>>      - enable ksm for all vmas of a process (if the vmas support it).
>>>      - query if ksm has been enabled for a process.
>> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way,
>> making it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE
>> infrastructure? Especially:
>> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable,
>>      compatible VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
>>      already set.
>> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
>>      VM_MERGEABLE?
>> Then you can avoid all runtime checks for compatible VMAs and only look
>> at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE will be completely
>> expressive then for all VMAs. You don't need vma_ksm_mergeable() then.
>> Another thing to consider is interaction with arch/s390/mm/gmap.c:
>> s390x/kvm does not support KSM and it has to disable it for all VMAs. We
>> have to find a way to fence the prctl (for example, fail setting the
>> prctl after gmap_mark_unmergeable() ran, and make
>> gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully
>> in some other way).

gmap_mark_unmergeable() seems to have a problem today. We can execute
gmap_mark_unmergeable() and mark the vma's as unmergeable, but shortly
after that the process can run madvise on it again and make it
mergeable. Am I missing something here?

Once prctl is run, we can check for the MMF_VM_MERGE_ANY flag in
gmap_mark_unmergeable(). In case it is set, we can return an error. The
error code path looks like it can handle that case.

For the opposite case: gmap_mark_unmergeable() has already been run, we
would need some kind of flag or other means to be able to detect it.
Any recommendations?
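One way to express that two-sided fencing, purely as an illustration: the flag name MMF_VM_UNMERGEABLE and the exact hook points below are assumptions, not part of this series, and the snippet is non-runnable kernel-style pseudocode.

```c
/* Hypothetical fencing between s390 gmap and the new prctl:
 * whichever side runs first excludes the other. */

int gmap_mark_unmergeable(void)		/* arch/s390/mm/gmap.c */
{
	if (test_bit(MMF_VM_MERGE_ANY, &current->mm->flags))
		return -EINVAL;		/* prctl already enabled KSM */
	/* ... existing unmerge logic ... */
	/* Assumed new flag recording that KSM was fenced off: */
	set_bit(MMF_VM_UNMERGEABLE, &current->mm->flags);
	return 0;
}

/* And in the PR_SET_MEMORY_MERGE handler: */
	if (test_bit(MMF_VM_UNMERGEABLE, &me->mm->flags))
		return -EINVAL;		/* KVM already fenced KSM off */
```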

>
> Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" is doing
> what it's supposed to do. I don't think this effectively prevents right now
> madvise() from getting re-enabled on that VMA.
>
> @Christian, Janosch, am I missing something?


* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-03 11:03     ` David Hildenbrand
  2023-04-04 16:32       ` Stefan Roesch
@ 2023-04-04 16:43       ` Stefan Roesch
  2023-04-05  6:51       ` Christian Borntraeger
  2 siblings, 0 replies; 36+ messages in thread
From: Stefan Roesch @ 2023-04-04 16:43 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya, Janosch Frank,
	Christian Borntraeger


David Hildenbrand <david@redhat.com> writes:

> On 03.04.23 12:37, David Hildenbrand wrote:
>> On 10.03.23 19:28, Stefan Roesch wrote:
>>> Patch series "mm: process/cgroup ksm support", v3.
>>>
>>> So far KSM can only be enabled by calling madvise for memory regions.  To
>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>> enabled / disabled at the process / cgroup level.
>>>
>>> Use case 1:
>>>
>>>     The madvise call is not available in the programming language.
>>>     Examples of this are programs with forked workloads using a garbage
>>>     collected language without pointers.  In such a language madvise cannot
>>>     be made available.
>>>
>>>     In addition the addresses of objects get moved around as they are
>>>     garbage collected.  KSM sharing needs to be enabled "from the outside"
>>>     for these type of workloads.
>> I guess the interpreter could enable it (like a memory allocator could
>> enable it for the whole heap). But I get that it's much easier to enable
>> this per-process, and eventually only when a lot of the same processes
>> are running in that particular environment.
>>
>>>
>>> Use case 2:
>>>
>>>     The same interpreter can also be used for workloads where KSM brings
>>>     no benefit or even has overhead.  We'd like to be able to enable KSM on
>>>     a workload by workload basis.
>> Agreed. A per-process control is also helpful to identify workloads
>> where KSM might be beneficial (and to which degree).
>>
>>>
>>> Use case 3:
>>>
>>>     With the madvise call sharing opportunities are only enabled for the
>>>     current process: it is a workload-local decision.  A considerable number
>>>     of sharing opportunities may exist across multiple workloads or jobs.
>>>     Only a higher level entity like a job scheduler or container can know
>>>     for certain if it's running one or more instances of a job.  That job
>>>     scheduler however doesn't have the necessary internal workload knowledge
>>>     to make targeted madvise calls.
>>>
>>> Security concerns:
>>>
>>>     In previous discussions security concerns have been brought up.  The
>>>     problem is that an individual workload does not have the knowledge about
>>>     what else is running on a machine.  Therefore it has to be very
>>>     conservative in what memory areas can be shared or not.  However, if the
>>>     system is dedicated to running multiple jobs within the same security
>>>     domain, it's the job scheduler that has the knowledge that sharing can be
>>>     safely enabled and is even desirable.
>>>
>>> Performance:
>>>
>>>     Experiments with using UKSM have shown a capacity increase of around
>>>     20%.
>>>
>> As raised, it would be great to include more details about the workload
>> where this particularly helps (e.g., a lot of Django processes operating
>> in the same domain).
>>
>>>
>>> 1. New options for prctl system command
>>>
>>>      This patch series adds two new options to the prctl system call.
>>>      The first one allows to enable KSM at the process level and the second
>>>      one to query the setting.
>>>
>>>      The setting will be inherited by child processes.
>>>
>>>      With the above setting, KSM can be enabled for the seed process of a
>>>      cgroup and all processes in the cgroup will inherit the setting.
>>>
>>> 2. Changes to KSM processing
>>>
>>>      When KSM is enabled at the process level, the KSM code will iterate
>>>      over all the VMA's and enable KSM for the eligible VMA's.
>>>
>>>      When forking a process that has KSM enabled, the setting will be
>>>      inherited by the new child process.
>>>
>>>      In addition when KSM is disabled for a process, KSM will be disabled
>>>      for the VMA's where KSM has been enabled.
>> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new
>> prctl is enabled for a process?
>>
>>>
>>> 3. Add general_profit metric
>>>
>>>      The general_profit metric of KSM is specified in the documentation,
>>>      but not calculated.  This adds the general profit metric to
>>>      /sys/kernel/debug/mm/ksm.
>>>
>>> 4. Add more metrics to ksm_stat
>>>
>>>      This adds the process profit and ksm type metric to
>>>      /proc/<pid>/ksm_stat.
>>>
>>> 5. Add more tests to ksm_tests
>>>
>>>      This adds an option to specify the merge type to the ksm_tests.
>>>      This allows to test madvise and prctl KSM.  It also adds a new option
>>>      to query if prctl KSM has been enabled.  It adds a fork test to verify
>>>      that the KSM process setting is inherited by client processes.
>>>
>>> An update to the prctl(2) manpage has been proposed at [1].
>>>
>>> This patch (of 3):
>>>
>>> This adds a new prctl API to enable and disable KSM on a per-process
>>> basis instead of only at the VMA level (with madvise).
>>>
>>> 1) Introduce new MMF_VM_MERGE_ANY flag
>>>
>>>      This introduces the new flag MMF_VM_MERGE_ANY flag.  When this flag
>>>      is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>>>      process.
>>>
>>> 2) add flag to __ksm_enter
>>>
>>>      This change adds the flag parameter to __ksm_enter.  This allows to
>>>      distinguish if ksm was called by prctl or madvise.
>>>
>>> 3) add flag to __ksm_exit call
>>>
>>>      This adds the flag parameter to the __ksm_exit() call.  This allows
>>      to distinguish if this call is for a prctl or madvise invocation.
>>>
>>> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>>>
>>>      If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>>>      over all the vmas and enable ksm if possible.  For the vmas that can be
>>>      ksm enabled this is only done once.
>>>
>>> 5) support disabling of ksm for a process
>>>
>>>      This adds the ability to disable ksm for a process if ksm has been
>>>      enabled for the process.
>>>
>>> 6) add new prctl option to get and set ksm for a process
>>>
>>>      This adds two new options to the prctl system call
>>>      - enable ksm for all vmas of a process (if the vmas support it).
>>>      - query if ksm has been enabled for a process.
>> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way,
>> to instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE
>> infrastructure. Especially:
>> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable
>>      compatible VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
>>      already set.
>> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
>>      VM_MERGEABLE?
>> Then you can avoid all runtime checks for compatible VMAs and only look
>> at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE will be completely
>> expressive then for all VMAs. You don't need vma_ksm_mergeable() then.
>> Another thing to consider is interaction with arch/s390/mm/gmap.c:
>> s390x/kvm does not support KSM and it has to disable it for all VMAs. We
>> have to find a way to fence the prctl (for example, fail setting the
>> prctl after gmap_mark_unmergeable() ran, and make
>> gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully
>> in some other way).

gmap_mark_unmergeable() seems to have a problem today. We can execute
gmap_mark_unmergeable() and mark the vma's as unmergeable, but shortly
after that the process can run madvise on it again and make it
mergeable. Am I missing something here?

Once prctl is run, we can check for the MMF_VM_MERGE_ANY flag in
gmap_mark_unmergeable(). In case it is set, we can return an error. The
error code path looks like it can handle that case.

For the opposite case: gmap_mark_unmergeable() has already been run, we
would need some kind of flag or other means to be able to detect it.
Any recommendations?

>
>
> Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" is doing
> what it's supposed to do. I don't think this currently prevents
> madvise() from re-enabling VM_MERGEABLE on that VMA.
>
> @Christian, Janosch, am I missing something?

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-03 11:03     ` David Hildenbrand
  2023-04-04 16:32       ` Stefan Roesch
  2023-04-04 16:43       ` Stefan Roesch
@ 2023-04-05  6:51       ` Christian Borntraeger
  2023-04-05 16:04         ` David Hildenbrand
  2 siblings, 1 reply; 36+ messages in thread
From: Christian Borntraeger @ 2023-04-05  6:51 UTC (permalink / raw)
  To: David Hildenbrand, Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	Bagas Sanjaya, Janosch Frank

Am 03.04.23 um 13:03 schrieb David Hildenbrand:
> On 03.04.23 12:37, David Hildenbrand wrote:
>> On 10.03.23 19:28, Stefan Roesch wrote:
>>> Patch series "mm: process/cgroup ksm support", v3.
>>>
>>> So far KSM can only be enabled by calling madvise for memory regions.  To
>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>> enabled / disabled at the process / cgroup level.
>>>
>>> Use case 1:
>>>
>>>     The madvise call is not available in the programming language.  An
>>>     example for this are programs with forked workloads using a garbage
>>>     collected language without pointers.  In such a language madvise cannot
>>>     be made available.
>>>
>>>     In addition the addresses of objects get moved around as they are
>>>     garbage collected.  KSM sharing needs to be enabled "from the outside"
>>>     for these types of workloads.
>>
>> I guess the interpreter could enable it (like a memory allocator could
>> enable it for the whole heap). But I get that it's much easier to enable
>> this per-process, and eventually only when a lot of the same processes
>> are running in that particular environment.
>>
>>>
>>> Use case 2:
>>>
>>>     The same interpreter can also be used for workloads where KSM brings
>>>     no benefit or even has overhead.  We'd like to be able to enable KSM on
>>>     a workload by workload basis.
>>
>> Agreed. A per-process control is also helpful to identify workloads
>> where KSM might be beneficial (and to which degree).
>>
>>>
>>> Use case 3:
>>>
>>>     With the madvise call sharing opportunities are only enabled for the
>>>     current process: it is a workload-local decision.  A considerable number
>>>     of sharing opportunities may exist across multiple workloads or jobs.
>>>     Only a higher level entity like a job scheduler or container can know
>>>     for certain if it's running one or more instances of a job.  That job
>>>     scheduler however doesn't have the necessary internal workload knowledge
>>>     to make targeted madvise calls.
>>>
>>> Security concerns:
>>>
>>>     In previous discussions security concerns have been brought up.  The
>>>     problem is that an individual workload does not have the knowledge about
>>>     what else is running on a machine.  Therefore it has to be very
>>>     conservative in what memory areas can be shared or not.  However, if the
>>>     system is dedicated to running multiple jobs within the same security
>>>     domain, it's the job scheduler that has the knowledge that sharing can be
>>>     safely enabled and is even desirable.
>>>
>>> Performance:
>>>
>>>     Experiments with using UKSM have shown a capacity increase of around
>>>     20%.
>>>
>>
>> As raised, it would be great to include more details about the workload
>> where this particularly helps (e.g., a lot of Django processes operating
>> in the same domain).
>>
>>>
>>> 1. New options for prctl system command
>>>
>>>      This patch series adds two new options to the prctl system call.
>>>      The first one allows to enable KSM at the process level and the second
>>>      one to query the setting.
>>>
>>>      The setting will be inherited by child processes.
>>>
>>>      With the above setting, KSM can be enabled for the seed process of a
>>>      cgroup and all processes in the cgroup will inherit the setting.
>>>
>>> 2. Changes to KSM processing
>>>
>>>      When KSM is enabled at the process level, the KSM code will iterate
>>>      over all the VMA's and enable KSM for the eligible VMA's.
>>>
>>>      When forking a process that has KSM enabled, the setting will be
>>>      inherited by the new child process.
>>>
>>>      In addition when KSM is disabled for a process, KSM will be disabled
>>>      for the VMA's where KSM has been enabled.
>>
>> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new
>> prctl is enabled for a process?
>>
>>>
>>> 3. Add general_profit metric
>>>
>>>      The general_profit metric of KSM is specified in the documentation,
>>>      but not calculated.  This adds the general profit metric to
>>>      /sys/kernel/debug/mm/ksm.
>>>
>>> 4. Add more metrics to ksm_stat
>>>
>>>      This adds the process profit and ksm type metric to
>>>      /proc/<pid>/ksm_stat.
>>>
>>> 5. Add more tests to ksm_tests
>>>
>>>      This adds an option to specify the merge type to the ksm_tests.
>>>      This allows to test madvise and prctl KSM.  It also adds a new option
>>>      to query if prctl KSM has been enabled.  It adds a fork test to verify
>>>      that the KSM process setting is inherited by client processes.
>>>
>>> An update to the prctl(2) manpage has been proposed at [1].
>>>
>>> This patch (of 3):
>>>
>>> This adds a new prctl API to enable and disable KSM on a per-process
>>> basis instead of only at the VMA level (with madvise).
>>>
>>> 1) Introduce new MMF_VM_MERGE_ANY flag
>>>
>>>      This introduces the new flag MMF_VM_MERGE_ANY flag.  When this flag
>>>      is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>>>      process.
>>>
>>> 2) add flag to __ksm_enter
>>>
>>>      This change adds the flag parameter to __ksm_enter.  This allows to
>>>      distinguish if ksm was called by prctl or madvise.
>>>
>>> 3) add flag to __ksm_exit call
>>>
>>>      This adds the flag parameter to the __ksm_exit() call.  This allows
>>>      to distinguish if this call is for a prctl or madvise invocation.
>>>
>>> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>>>
>>>      If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>>>      over all the vmas and enable ksm if possible.  For the vmas that can be
>>>      ksm enabled this is only done once.
>>>
>>> 5) support disabling of ksm for a process
>>>
>>>      This adds the ability to disable ksm for a process if ksm has been
>>>      enabled for the process.
>>>
>>> 6) add new prctl option to get and set ksm for a process
>>>
>>>      This adds two new options to the prctl system call
>>>      - enable ksm for all vmas of a process (if the vmas support it).
>>>      - query if ksm has been enabled for a process.
>>
>>
>> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way,
>> to instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE
>> infrastructure. Especially:
>>
>> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable
>>      compatible VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
>>      already set.
>>
>> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
>>      VM_MERGEABLE?
>>
>> Then you can avoid all runtime checks for compatible VMAs and only look
>> at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE will be completely
>> expressive then for all VMAs. You don't need vma_ksm_mergeable() then.
>>
>>
>> Another thing to consider is interaction with arch/s390/mm/gmap.c:
>> s390x/kvm does not support KSM and it has to disable it for all VMAs. We

Normally we do support KSM on s390. This is a special case for guests using
storage keys. Those are attributes of the physical page and might differ even
if the content of the page is the same.
Newer Linux no longer uses them (unless a debug option is set during build), so we
enable the guest storage keys lazily and break KSM pages in that process.
Ideally we would keep this semantic (e.g. even after a prctl, if the
guest enables storage keys, disable KSM for this VM).

>> have to find a way to fence the prctl (for example, fail setting the
>> prctl after gmap_mark_unmergeable() ran, and make
>> gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully
>> in some other way).
> 
> 
> Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" is doing what it's supposed to do. I don't think this currently prevents madvise() from re-enabling VM_MERGEABLE on that VMA.
> 
> @Christian, Janosch, am I missing something?

Yes, if QEMU did the madvise later on instead of just at the start, it would
result in guest storage keys getting messed up on KSM merges. One could argue
that this is a bug in the hypervisor (QEMU) then, but yes, we should try
to make this more reliable in the kernel.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-05  6:51       ` Christian Borntraeger
@ 2023-04-05 16:04         ` David Hildenbrand
  0 siblings, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-04-05 16:04 UTC (permalink / raw)
  To: Christian Borntraeger, Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	Bagas Sanjaya, Janosch Frank

On 05.04.23 08:51, Christian Borntraeger wrote:
> Am 03.04.23 um 13:03 schrieb David Hildenbrand:
>> On 03.04.23 12:37, David Hildenbrand wrote:
>>> On 10.03.23 19:28, Stefan Roesch wrote:
>>>> Patch series "mm: process/cgroup ksm support", v3.
>>>>
>>>> So far KSM can only be enabled by calling madvise for memory regions.  To
>>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>>> enabled / disabled at the process / cgroup level.
>>>>
>>>> Use case 1:
>>>>
>>>>      The madvise call is not available in the programming language.  An
>>>>      example of this is programs with forked workloads using a garbage
>>>>      collected language without pointers.  In such a language madvise cannot
>>>>      be made available.
>>>>
>>>>      In addition the addresses of objects get moved around as they are
>>>>      garbage collected.  KSM sharing needs to be enabled "from the outside"
>>>>      for these types of workloads.
>>>
>>> I guess the interpreter could enable it (like a memory allocator could
>>> enable it for the whole heap). But I get that it's much easier to enable
>>> this per-process, and eventually only when a lot of the same processes
>>> are running in that particular environment.
>>>
>>>>
>>>> Use case 2:
>>>>
>>>>      The same interpreter can also be used for workloads where KSM brings
>>>>      no benefit or even has overhead.  We'd like to be able to enable KSM on
>>>>      a workload by workload basis.
>>>
>>> Agreed. A per-process control is also helpful to identify workloads
>>> where KSM might be beneficial (and to which degree).
>>>
>>>>
>>>> Use case 3:
>>>>
>>>>      With the madvise call sharing opportunities are only enabled for the
>>>>      current process: it is a workload-local decision.  A considerable number
>>>>      of sharing opportunities may exist across multiple workloads or jobs.
>>>>      Only a higher level entity like a job scheduler or container can know
>>>>      for certain if it's running one or more instances of a job.  That job
>>>>      scheduler however doesn't have the necessary internal workload knowledge
>>>>      to make targeted madvise calls.
>>>>
>>>> Security concerns:
>>>>
>>>>      In previous discussions security concerns have been brought up.  The
>>>>      problem is that an individual workload does not have the knowledge about
>>>>      what else is running on a machine.  Therefore it has to be very
>>>>      conservative in what memory areas can be shared or not.  However, if the
>>>>      system is dedicated to running multiple jobs within the same security
>>>>      domain, it's the job scheduler that has the knowledge that sharing can be
>>>>      safely enabled and is even desirable.
>>>>
>>>> Performance:
>>>>
>>>>      Experiments with using UKSM have shown a capacity increase of around
>>>>      20%.
>>>>
>>>
>>> As raised, it would be great to include more details about the workload
>>> where this particularly helps (e.g., a lot of Django processes operating
>>> in the same domain).
>>>
>>>>
>>>> 1. New options for prctl system command
>>>>
>>>>       This patch series adds two new options to the prctl system call.
>>>>       The first one allows to enable KSM at the process level and the second
>>>>       one to query the setting.
>>>>
>>>>       The setting will be inherited by child processes.
>>>>
>>>>       With the above setting, KSM can be enabled for the seed process of a
>>>>       cgroup and all processes in the cgroup will inherit the setting.
>>>>
>>>> 2. Changes to KSM processing
>>>>
>>>>       When KSM is enabled at the process level, the KSM code will iterate
>>>>       over all the VMA's and enable KSM for the eligible VMA's.
>>>>
>>>>       When forking a process that has KSM enabled, the setting will be
>>>>       inherited by the new child process.
>>>>
>>>>       In addition when KSM is disabled for a process, KSM will be disabled
>>>>       for the VMA's where KSM has been enabled.
>>>
>>> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new
>>> prctl is enabled for a process?
>>>
>>>>
>>>> 3. Add general_profit metric
>>>>
>>>>       The general_profit metric of KSM is specified in the documentation,
>>>>       but not calculated.  This adds the general profit metric to
>>>>       /sys/kernel/debug/mm/ksm.
>>>>
>>>> 4. Add more metrics to ksm_stat
>>>>
>>>>       This adds the process profit and ksm type metric to
>>>>       /proc/<pid>/ksm_stat.
>>>>
>>>> 5. Add more tests to ksm_tests
>>>>
>>>>       This adds an option to specify the merge type to the ksm_tests.
>>>>       This allows to test madvise and prctl KSM.  It also adds a new option
>>>>       to query if prctl KSM has been enabled.  It adds a fork test to verify
>>>>       that the KSM process setting is inherited by client processes.
>>>>
>>>> An update to the prctl(2) manpage has been proposed at [1].
>>>>
>>>> This patch (of 3):
>>>>
>>>> This adds a new prctl API to enable and disable KSM on a per-process
>>>> basis instead of only at the VMA level (with madvise).
>>>>
>>>> 1) Introduce new MMF_VM_MERGE_ANY flag
>>>>
>>>>       This introduces the new flag MMF_VM_MERGE_ANY flag.  When this flag
>>>>       is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>>>>       process.
>>>>
>>>> 2) add flag to __ksm_enter
>>>>
>>>>       This change adds the flag parameter to __ksm_enter.  This allows to
>>>>       distinguish if ksm was called by prctl or madvise.
>>>>
>>>> 3) add flag to __ksm_exit call
>>>>
>>>>       This adds the flag parameter to the __ksm_exit() call.  This allows
>>>>       to distinguish if this call is for a prctl or madvise invocation.
>>>>
>>>> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>>>>
>>>>       If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>>>>       over all the vmas and enable ksm if possible.  For the vmas that can be
>>>>       ksm enabled this is only done once.
>>>>
>>>> 5) support disabling of ksm for a process
>>>>
>>>>       This adds the ability to disable ksm for a process if ksm has been
>>>>       enabled for the process.
>>>>
>>>> 6) add new prctl option to get and set ksm for a process
>>>>
>>>>       This adds two new options to the prctl system call
>>>>       - enable ksm for all vmas of a process (if the vmas support it).
>>>>       - query if ksm has been enabled for a process.
>>>
>>>
>>> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way,
>>> to instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE
>>> infrastructure. Especially:
>>>
>>> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable
>>>       compatible VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
>>>       already set.
>>>
>>> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
>>>       VM_MERGEABLE?
>>>
>>> Then you can avoid all runtime checks for compatible VMAs and only look
>>> at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE will be completely
>>> expressive then for all VMAs. You don't need vma_ksm_mergeable() then.
>>>
>>>
>>> Another thing to consider is interaction with arch/s390/mm/gmap.c:
>>> s390x/kvm does not support KSM and it has to disable it for all VMAs. We
> 
> Normally we do support KSM on s390. This is a special case for guests using
> storage keys. Those are attributes of the physical page and might differ even
> if the content of the page is the same.
> Newer Linux no longer uses them (unless a debug option is set during build), so we
> enable the guest storage keys lazily and break KSM pages in that process.
> Ideally we would keep this semantic (e.g. even after a prctl, if the
> guest enables storage keys, disable KSM for this VM).

IIRC, KSM also gets disabled when switching to protected VMs. I recall 
that we really wanted to stop KSM scanning pages that are possibly 
protected. (don't remember if one could harm the system enabling it 
before/after the switch)

> 
>>> have to find a way to fence the prctl (for example, fail setting the
>>> prctl after gmap_mark_unmergeable() ran, and make
>>> gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully
>>> in some other way).
>>
>>
>> Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" is doing what it's supposed to do. I don't think this currently prevents madvise() from re-enabling VM_MERGEABLE on that VMA.
>>
>> @Christian, Janosch, am I missing something?
> 
> Yes, if QEMU did the madvise later on instead of just at the start, it would
> result in guest storage keys getting messed up on KSM merges. One could argue
> that this is a bug in the hypervisor (QEMU) then, but yes, we should try
> to make this more reliable in the kernel.

It looks like the "mm->def_flags &= ~VM_MERGEABLE" wanted to achieve 
that, but failed. At least it looks like completely unnecessary code if 
I am not wrong.

Maybe inspired by similar code in thp_split_mm(), that enforces 
VM_NOHUGEPAGE.

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 2/3] mm: add new KSM process and sysfs knobs
  2023-03-10 18:28 ` [PATCH v4 2/3] mm: add new KSM process and sysfs knobs Stefan Roesch
@ 2023-04-05 17:04   ` David Hildenbrand
  2023-04-05 21:20     ` Stefan Roesch
  0 siblings, 1 reply; 36+ messages in thread
From: David Hildenbrand @ 2023-04-05 17:04 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	Bagas Sanjaya

On 10.03.23 19:28, Stefan Roesch wrote:
> This adds the general_profit KSM sysfs knob and the process profit metric
> and process merge type knobs to ksm_stat.
> 
> 1) split off pages_volatile function
> 
>     This splits off the pages_volatile function.  The next patch will
>     use this function.
> 
> 2) expose general_profit metric
> 
>     The documentation mentions a general profit metric, however this
>     metric is not calculated.  In addition the formula depends on the size
>     of internal structures, which makes it more difficult for an
>     administrator to make the calculation.  Adding the metric for a better
>     user experience.
> 
> 3) document general_profit sysfs knob
> 
> 4) calculate ksm process profit metric
> 
>     The ksm documentation mentions the process profit metric and how to
>     calculate it.  This adds the calculation of the metric.
> 
> 5) add ksm_merge_type() function
> 
>     This adds the ksm_merge_type function.  The function returns the
>     merge type for the process.  For madvise it returns "madvise", for
>     prctl it returns "process" and otherwise it returns "none".
> 
> 6) mm: expose ksm process profit metric and merge type in ksm_stat
> 
>     This exposes the ksm process profit metric in /proc/<pid>/ksm_stat.
>     The name of the value is ksm_merge_type.  The documentation mentions
>     the formula for the ksm process profit metric, however it does not
>     calculate it.  In addition the formula depends on the size of internal
>     structures.  So it makes sense to expose it.
> 
> 7) document new procfs ksm knobs
> 

Often, when you have to start making a list of things that a patch does, 
it might make sense to split some of the items into separate patches 
such that you can avoid lists and just explain in list-free text how the 
pieces in the patch fit together.

I'd suggest splitting this patch into logical pieces. For example, 
separating the general profit calculation/exposure from the per-mm 
profit and the per-mm ksm type indication.

> Link: https://lkml.kernel.org/r/20230224044000.3084046-3-shr@devkernel.io
> Signed-off-by: Stefan Roesch <shr@devkernel.io>
> Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Rik van Riel <riel@surriel.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---


[...]

>   KSM_ATTR_RO(pages_volatile);
>   
> @@ -3280,6 +3305,21 @@ static ssize_t zero_pages_sharing_show(struct kobject *kobj,
>   }
>   KSM_ATTR_RO(zero_pages_sharing);
>   
> +static ssize_t general_profit_show(struct kobject *kobj,
> +				   struct kobj_attribute *attr, char *buf)
> +{
> +	long general_profit;
> +	long all_rmap_items;
> +
> +	all_rmap_items = ksm_max_page_sharing + ksm_pages_shared +
> +				ksm_pages_unshared + pages_volatile();

Are you sure you want to count a config knob (ksm_max_page_sharing) into 
that formula? I yet have to digest what this calculation implies, but it 
does feel odd.


Further, maybe just avoid pages_volatile(). Expanding the formula 
(excluding ksm_max_page_sharing for now):

all_rmap = ksm_pages_shared + ksm_pages_unshared + pages_volatile();

-> expand pages_volatile() (ignoring the < 0 case)

all_rmap = ksm_pages_shared + ksm_pages_unshared + ksm_rmap_items - 
ksm_pages_shared - ksm_pages_sharing - ksm_pages_unshared;

-> simplify

all_rmap = ksm_rmap_items - ksm_pages_sharing;

Or is the < 0 case relevant here?

> +	general_profit = ksm_pages_sharing * PAGE_SIZE -
> +				all_rmap_items * sizeof(struct ksm_rmap_item);
> +
> +	return sysfs_emit(buf, "%ld\n", general_profit);
> +}
> +KSM_ATTR_RO(general_profit);
> +
>   static ssize_t stable_node_dups_show(struct kobject *kobj,
>   				     struct kobj_attribute *attr, char *buf)
>   {
> @@ -3345,6 +3385,7 @@ static struct attribute *ksm_attrs[] = {
>   	&stable_node_dups_attr.attr,
>   	&stable_node_chains_prune_millisecs_attr.attr,
>   	&use_zero_pages_attr.attr,
> +	&general_profit_attr.attr,
>   	NULL,
>   };
>   

The calculations (profit) don't include when KSM places the shared 
zeropage I guess. Accounting that per MM (and eventually globally) is in 
the works. [1]


[1] 
https://lore.kernel.org/lkml/20230328153852.26c2577e4bd921c371c47a7e@linux-foundation.org/t/

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 2/3] mm: add new KSM process and sysfs knobs
  2023-04-05 17:04   ` David Hildenbrand
@ 2023-04-05 21:20     ` Stefan Roesch
  2023-04-06 13:23       ` David Hildenbrand
  0 siblings, 1 reply; 36+ messages in thread
From: Stefan Roesch @ 2023-04-05 21:20 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya


David Hildenbrand <david@redhat.com> writes:

> On 10.03.23 19:28, Stefan Roesch wrote:
>> This adds the general_profit KSM sysfs knob and the process profit metric
>> and process merge type knobs to ksm_stat.
>> 1) split off pages_volatile function
>>     This splits off the pages_volatile function.  The next patch will
>>     use this function.
>> 2) expose general_profit metric
>>     The documentation mentions a general profit metric, however this
>>     metric is not calculated.  In addition the formula depends on the size
>>     of internal structures, which makes it more difficult for an
>>     administrator to make the calculation.  Adding the metric for a better
>>     user experience.
>> 3) document general_profit sysfs knob
>> 4) calculate ksm process profit metric
>>     The ksm documentation mentions the process profit metric and how to
>>     calculate it.  This adds the calculation of the metric.
>> 5) add ksm_merge_type() function
>>     This adds the ksm_merge_type function.  The function returns the
>>     merge type for the process.  For madvise it returns "madvise", for
>>     prctl it returns "process" and otherwise it returns "none".
>> 6) mm: expose ksm process profit metric and merge type in ksm_stat
>>     This exposes the ksm process profit metric in /proc/<pid>/ksm_stat.
>>     The name of the value is ksm_merge_type.  The documentation mentions
>>     the formula for the ksm process profit metric, however it does not
>>     calculate it.  In addition the formula depends on the size of internal
>>     structures.  So it makes sense to expose it.
>> 7) document new procfs ksm knobs
>>
>
> Often, when you have to start making a list of things that a patch does, it
> might make sense to split some of the items into separate patches such that you
> can avoid lists and just explain in list-free text how the pieces in the patch
> fit together.
>
> I'd suggest splitting this patch into logical pieces. For example, separating
> the general profit calculation/exposure from the per-mm profit and the per-mm
> ksm type indication.
>

Originally these were individual patches. If I recall correctly Johannes
Weiner wanted them as one patch. I can certainly split them again.

>> Link: https://lkml.kernel.org/r/20230224044000.3084046-3-shr@devkernel.io
>> Signed-off-by: Stefan Roesch <shr@devkernel.io>
>> Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Rik van Riel <riel@surriel.com>
>> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>> ---
>
>
> [...]
>
>>   KSM_ATTR_RO(pages_volatile);
>>   @@ -3280,6 +3305,21 @@ static ssize_t zero_pages_sharing_show(struct kobject
>> *kobj,
>>   }
>>   KSM_ATTR_RO(zero_pages_sharing);
>>   +static ssize_t general_profit_show(struct kobject *kobj,
>> +				   struct kobj_attribute *attr, char *buf)
>> +{
>> +	long general_profit;
>> +	long all_rmap_items;
>> +
>> +	all_rmap_items = ksm_max_page_sharing + ksm_pages_shared +
>> +				ksm_pages_unshared + pages_volatile();
>
> Are you sure you want to count a config knob (ksm_max_page_sharing) into that
> formula? I yet have to digest what this calculation implies, but it does feel
> odd.
>

This was a mistake. I wanted ksm_pages_sharing instead of
ksm_max_page_sharing.

>
> Further, maybe just avoid pages_volatile(). Expanding the formula (excluding
> ksm_max_page_sharing for now):
>
>
> all_rmap = ksm_pages_shared + ksm_pages_unshared + pages_volatile();
>
> -> expand pages_volatile() (ignoring the < 0 case)
>
> all_rmap = ksm_pages_shared + ksm_pages_unshared + ksm_rmap_items -
> ksm_pages_shared - ksm_pages_sharing - ksm_pages_unshared;
>
> -> simplify
>
> all_rmap = ksm_rmap_items - ksm_pages_sharing;
>
I'll simplify it.

> Or is the < 0 case relevant here?
>

A negative profit is ok.

>> +	general_profit = ksm_pages_sharing * PAGE_SIZE -
>> +				all_rmap_items * sizeof(struct ksm_rmap_item);
>> +
>> +	return sysfs_emit(buf, "%ld\n", general_profit);
>> +}
>> +KSM_ATTR_RO(general_profit);
>> +
>>   static ssize_t stable_node_dups_show(struct kobject *kobj,
>>   				     struct kobj_attribute *attr, char *buf)
>>   {
>> @@ -3345,6 +3385,7 @@ static struct attribute *ksm_attrs[] = {
>>   	&stable_node_dups_attr.attr,
>>   	&stable_node_chains_prune_millisecs_attr.attr,
>>   	&use_zero_pages_attr.attr,
>> +	&general_profit_attr.attr,
>>   	NULL,
>>   };
>>
>
> The calculations (profit) don't include when KSM places the shared zeropage I
> guess. Accounting that per MM (and eventually globally) is in the works. [1]
>
>
> [1]
> https://lore.kernel.org/lkml/20230328153852.26c2577e4bd921c371c47a7e@linux-foundation.org/t/

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v4 2/3] mm: add new KSM process and sysfs knobs
  2023-04-05 21:20     ` Stefan Roesch
@ 2023-04-06 13:23       ` David Hildenbrand
  2023-04-06 14:16         ` Johannes Weiner
  0 siblings, 1 reply; 36+ messages in thread
From: David Hildenbrand @ 2023-04-06 13:23 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya

>>
>> Often, when you have to start making a list of things that a patch does, it
>> might make sense to split some of the items into separate patches such that you
>> can avoid lists and just explain in list-free text how the pieces in the patch
>> fit together.
>>
>> I'd suggest splitting this patch into logical pieces. For example, separating
>> the general profit calculation/exposure from the per-mm profit and the per-mm
>> ksm type indication.
>>
> 
> Originally these were individual patches. If I recall correctly Johannes
> Weiner wanted them as one patch. I can certainly split them again.

That's why I remember that v1 contained more patches :)

Again, just my opinion on patches that require a description in form of 
a list ...

> 
>>> Link: https://lkml.kernel.org/r/20230224044000.3084046-3-shr@devkernel.io
>>> Signed-off-by: Stefan Roesch <shr@devkernel.io>
>>> Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
>>> Cc: David Hildenbrand <david@redhat.com>
>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>> Cc: Michal Hocko <mhocko@suse.com>
>>> Cc: Rik van Riel <riel@surriel.com>
>>> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>>> ---
>>
>>
>> [...]
>>
>>>    KSM_ATTR_RO(pages_volatile);
>>>    @@ -3280,6 +3305,21 @@ static ssize_t zero_pages_sharing_show(struct kobject
>>> *kobj,
>>>    }
>>>    KSM_ATTR_RO(zero_pages_sharing);
>>>    +static ssize_t general_profit_show(struct kobject *kobj,
>>> +				   struct kobj_attribute *attr, char *buf)
>>> +{
>>> +	long general_profit;
>>> +	long all_rmap_items;
>>> +
>>> +	all_rmap_items = ksm_max_page_sharing + ksm_pages_shared +
>>> +				ksm_pages_unshared + pages_volatile();
>>
>> Are you sure you want to count a config knob (ksm_max_page_sharing) into that
>> formula? I yet have to digest what this calculation implies, but it does feel
>> odd.
>>
> 
> This was a mistake. I wanted ksm_pages_sharing instead of
> ksm_max_page_sharing.
> 
>>
>> Further, maybe just avoid pages_volatile(). Expanding the formula (excluding
>> ksm_max_page_sharing for now):
>>
>>
>> all_rmap = ksm_pages_shared + ksm_pages_unshared + pages_volatile();
>>
>> -> expand pages_volatile() (ignoring the < 0 case)
>>
>> all_rmap = ksm_pages_shared + ksm_pages_unshared + ksm_rmap_items -
>> ksm_pages_shared - ksm_pages_sharing - ksm_pages_unshared;
>>
>> -> simplify
>>
>> all_rmap = ksm_rmap_items - ksm_pages_sharing;
>>
> I'll simplify it.


Cool.

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 2/3] mm: add new KSM process and sysfs knobs
  2023-04-06 13:23       ` David Hildenbrand
@ 2023-04-06 14:16         ` Johannes Weiner
  2023-04-06 14:32           ` David Hildenbrand
  0 siblings, 1 reply; 36+ messages in thread
From: Johannes Weiner @ 2023-04-06 14:16 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Bagas Sanjaya

On Thu, Apr 06, 2023 at 03:23:11PM +0200, David Hildenbrand wrote:
> > > 
> > > Often, when you have to start making a list of things that a patch does, it
> > > might make sense to split some of the items into separate patches such that you
> > > can avoid lists and just explain in list-free text how the pieces in the patch
> > > fit together.
> > > 
> > > I'd suggest splitting this patch into logical pieces. For example, separating
> > > the general profit calculation/exposure from the per-mm profit and the per-mm
> > > ksm type indication.
> > > 
> > 
> > Originally these were individual patches. If I recall correctly Johannes
> > Weiner wanted them as one patch. I can certainly split them again.
> 
> That's why I remember that v1 contained more patches :)
> 
> Again, just my opinion on patches that require a description in form of a
> list ...

My concern was the initial splitting of patch 1. I found it easier to
review with the new prctl() being one logical change, and fully
implemented in one go. The changelog is still in the form of a list,
but it essentially enumerates diff hunks that make up the interface.

No objection to splitting out the multiple sysfs knobs, though!

The strategy I usually follow is this:

1. Split out changes based on user-visible behavior (new prctl(), new
   sysfs knob)

2. Extract changes made along the way that have stand-alone value in
   existing code (bug fixes, simplifying/documenting tricky sections,
   cleanups).

3. Split out noisy prep work such as renames and refactors that make
   the user-visible functional changes more difficult to understand.

and then order the series into sections 2, 3, and 1.


* Re: [PATCH v4 2/3] mm: add new KSM process and sysfs knobs
  2023-04-06 14:16         ` Johannes Weiner
@ 2023-04-06 14:32           ` David Hildenbrand
  0 siblings, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-04-06 14:32 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Bagas Sanjaya

On 06.04.23 16:16, Johannes Weiner wrote:
> On Thu, Apr 06, 2023 at 03:23:11PM +0200, David Hildenbrand wrote:
>>>>
>>>> Often, when you have to start making a list of things that a patch does, it
>>>> might make sense to split some of the items into separate patches such that you
>>>> can avoid lists and just explain in list-free text how the pieces in the patch
>>>> fit together.
>>>>
>>>> I'd suggest splitting this patch into logical pieces. For example, separating
>>>> the general profit calculation/exposure from the per-mm profit and the per-mm
>>>> ksm type indication.
>>>>
>>>
>>> Originally these were individual patches. If I recall correctly Johannes
>>> Weiner wanted them as one patch. I can certainly split them again.
>>
>> That's why I remember that v1 contained more patches :)
>>
>> Again, just my opinion on patches that require a description in form of a
>> list ...
> 
> My concern was the initial splitting of patch 1. I found it easier to
> review with the new prctl() being one logical change, and fully
> implemented in one go. The changelog is still in the form of a list,
> but it essentially enumerates diff hunks that make up the interface.
> 
> No objection to splitting out the multiple sysfs knobs, though!
> 
> The strategy I usually follow is this:
> 
> 1. Split out changes based on user-visible behavior (new prctl(), new
>     sysfs knob)
> 
> 2. Extract changes made along the way that have stand-alone value in
>     existing code (bug fixes, simplifying/documenting tricky sections,
>     cleanups).
> 
> 3. Split out noisy prep work such as renames and refactors that make
>     the user-visible functional changes more difficult to understand.
> 
> and then order the series into sections 2, 3, and 1.
> 

I agree. The most important part is the "logical change" part. Once it's 
down to a list of 3 items or so for a single commit, we can usually 
express it like the following (just an example, which no longer applies 
since pages_volatile() is not required) when combining items 1+2+3 from 
the list:

"
Let's expose the general KSM profit via sysfs and document that new 
toggle. [add details about that and especially why we are doing that]

As we need the number of volatile pages to calculate the general KSM 
profit, factor out existing functionality into a helper.
"

Much easier to read.

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-04-03 16:34             ` Stefan Roesch
  2023-04-03 17:04               ` David Hildenbrand
@ 2023-04-06 16:59               ` Stefan Roesch
  2023-04-06 17:10                 ` David Hildenbrand
  1 sibling, 1 reply; 36+ messages in thread
From: Stefan Roesch @ 2023-04-06 16:59 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Johannes Weiner, Andrew Morton, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins


Stefan Roesch <shr@devkernel.io> writes:

> David Hildenbrand <david@redhat.com> writes:
>
>>>>> Obviously we could spend months analysing which exact allocations are
>>>>> identical, and then more months or years reworking the architecture to
>>>>> deduplicate them by hand and in userspace. But this isn't practical,
>>>>> and KSM is specifically for cases where this isn't practical.
>>>>> Based on your request in the previous thread, we investigated whether
>>>>> the boost was coming from the unintended side effects of KSM splitting
>>>>> THPs. This wasn't the case.
>>>>> If you have other theories on how the results could be bogus, we'd be
>>>>> happy to investigate those as well. But you have to let us know what
>>>>> you're looking for.
>>>>>
>>>>
>>>> Maybe I'm bad at making such requests but
>>>>
>>>> "Stefan, can you do me a favor and investigate which pages we end up
>>>> deduplicating -- especially if it's mostly only the zeropage and if it's
>>>> still that significant when disabling THP?"
>>>>
>>>> "In any case, it would be nice to get a feeling for how much variety in
>>>> these 20% of deduplicated pages are. "
>>>>
>>>> is pretty clear to me. And shouldn't take months.
>>>>
>>
>> Just to clarify: the details I requested are not meant to decide whether to
>> reject the patch set (I understand that it can be beneficial to have); I
>> primarily want to understand if we're really dealing with a workload where KSM
>> is able to deduplicate pages that are non-trivial, to maybe figure out if there
>> are other workloads that could similarly benefit -- or if we could optimize KSM
>> for these specific cases or avoid the memory deduplication altogether.
>>
>> In contrast to e.g.:
>>
>> 1) THP resulted in many zeropages we end up deduplicating again. The THP
>>    placement was unfortunate.
>>
>> 2) Unoptimized memory allocators that leave many identical pages mapped
>>    after freeing up memory (e.g., zeroed pages, pages all filled with
>>    poison values) instead of e.g., using MADV_DONTNEED to free up that
>>    memory.
>>
>>
>
> I repeated an experiment with and without KSM. In terms of THP there is
> no huge difference between the two. On a 64GB main memory machine I see
> between 100 - 400MB in AnonHugePages.
>
>>> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
>>> Instagram workload. The workload consists of 36 processes plus a few
>>> sidecar processes.
>>
>> Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in that
>> environment?
>>
>
> It's set to the standard value of 256.
>
> In the meantime I have run experiments with different settings for
> pages_to_scan. With the default value of 100, we only get a relatively
> small benefit from KSM. If I increase the value to, for instance, 2000 or
> 3000, the savings are substantial. (The workload is memory bound, not
> CPU bound).
>
> Here are some stats for setting pages_to_scan to 3000:
>
> full_scans: 560
> general_profit: 20620539008
> max_page_sharing: 256
> merge_across_nodes: 1
> pages_shared: 125446
> pages_sharing: 5259506
> pages_to_scan: 3000
> pages_unshared: 1897537
> pages_volatile: 12389223
> run: 1
> sleep_millisecs: 20
> stable_node_chains: 176
> stable_node_chains_prune_millisecs: 2000
> stable_node_dups: 2604
> use_zero_pages: 0
> zero_pages_sharing: 0
>
>
>> What would be interesting is pages_shared after max_page_sharing was set to a
>> very high number such that pages_shared does not include duplicates. Then
>> pages_shared actually expresses how many different pages we deduplicate. No need
>> to run without THP in that case.
>>
>
> That's on my list for the next set of experiments.
>

In the new experiment I increased the max_page_sharing value to 16384.
This reduced the number of stable_node_dups considerably (it's around 3%
of the previous value). However, pages_sharing is still very high for
this workload.

full_scans: 138
general_profit: 24442268608
max_page_sharing: 16384
merge_across_nodes: 1
pages_shared: 144590
pages_sharing: 6230983
pages_to_scan: 3000
pages_unshared: 2120307
pages_volatile: 14590780
run: 1
sleep_millisecs: 20
stable_node_chains: 23
stable_node_chains_prune_millisecs: 2000
stable_node_dups: 78
use_zero_pages: 0
zero_pages_sharing: 0

>> Similarly, enabling "use_zero_pages" could highlight if your workload ends up
>> deduplciating a lot of zeropages. But maxing out max_page_sharing would be
>> sufficient to understand what's happening.
>>
>>
>
> I already run experiments with use_zero_pages, but they didn't make a
> difference. I'll repeat the experiment with a higher pages_to_scan
> value.
>
>>> Each of these individual processes has around 500MB in KSM pages.
>>>
>>
>> That's really a lot, thanks.
>>
>>> Also to give some idea for individual VMA's
>>> 7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM:
>>> 73160 KB)
>>>
>>
>> I'll have a look at the patches today.


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-04-06 16:59               ` Stefan Roesch
@ 2023-04-06 17:10                 ` David Hildenbrand
  0 siblings, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2023-04-06 17:10 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: Johannes Weiner, Andrew Morton, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins

On 06.04.23 18:59, Stefan Roesch wrote:
> 
> Stefan Roesch <shr@devkernel.io> writes:
> 
>> David Hildenbrand <david@redhat.com> writes:
>>
>>>>>> Obviously we could spend months analysing which exact allocations are
>>>>>> identical, and then more months or years reworking the architecture to
>>>>>> deduplicate them by hand and in userspace. But this isn't practical,
>>>>>> and KSM is specifically for cases where this isn't practical.
>>>>>> Based on your request in the previous thread, we investigated whether
>>>>>> the boost was coming from the unintended side effects of KSM splitting
>>>>>> THPs. This wasn't the case.
>>>>>> If you have other theories on how the results could be bogus, we'd be
>>>>>> happy to investigate those as well. But you have to let us know what
>>>>>> you're looking for.
>>>>>>
>>>>>
>>>>> Maybe I'm bad at making such requests but
>>>>>
>>>>> "Stefan, can you do me a favor and investigate which pages we end up
>>>>> deduplicating -- especially if it's mostly only the zeropage and if it's
>>>>> still that significant when disabling THP?"
>>>>>
>>>>> "In any case, it would be nice to get a feeling for how much variety in
>>>>> these 20% of deduplicated pages are. "
>>>>>
>>>>> is pretty clear to me. And shouldn't take months.
>>>>>
>>>
>>> Just to clarify: the details I requested are not meant to decide whether to
>>> reject the patch set (I understand that it can be beneficial to have); I
>>> primarily want to understand if we're really dealing with a workload where KSM
>>> is able to deduplicate pages that are non-trivial, to maybe figure out if there
>>> are other workloads that could similarly benefit -- or if we could optimize KSM
>>> for these specific cases or avoid the memory deduplication altogether.
>>>
>>> In contrast to e.g.:
>>>
>>> 1) THP resulted in many zeropages we end up deduplicating again. The THP
>>>     placement was unfortunate.
>>>
>>> 2) Unoptimized memory allocators that leave many identical pages mapped
>>>     after freeing up memory (e.g., zeroed pages, pages all filled with
>>>     poison values) instead of e.g., using MADV_DONTNEED to free up that
>>>     memory.
>>>
>>>
>>
>> I repeated an experiment with and without KSM. In terms of THP there is
>> no huge difference between the two. On a 64GB main memory machine I see
>> between 100 - 400MB in AnonHugePages.
>>
>>>> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
>>>> Instagram workload. The workload consists of 36 processes plus a few
>>>> sidecar processes.
>>>
>>> Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in that
>>> environment?
>>>
>>
>> It's set to the standard value of 256.
>>
>> In the meantime I have run experiments with different settings for
>> pages_to_scan. With the default value of 100, we only get a relatively
>> small benefit from KSM. If I increase the value to, for instance, 2000 or
>> 3000, the savings are substantial. (The workload is memory bound, not
>> CPU bound).
>>
>> Here are some stats for setting pages_to_scan to 3000:
>>
>> full_scans: 560
>> general_profit: 20620539008
>> max_page_sharing: 256
>> merge_across_nodes: 1
>> pages_shared: 125446
>> pages_sharing: 5259506
>> pages_to_scan: 3000
>> pages_unshared: 1897537
>> pages_volatile: 12389223
>> run: 1
>> sleep_millisecs: 20
>> stable_node_chains: 176
>> stable_node_chains_prune_millisecs: 2000
>> stable_node_dups: 2604
>> use_zero_pages: 0
>> zero_pages_sharing: 0
>>
>>
>>> What would be interesting is pages_shared after max_page_sharing was set to a
>>> very high number such that pages_shared does not include duplicates. Then
>>> pages_shared actually expresses how many different pages we deduplicate. No need
>>> to run without THP in that case.
>>>
>>
>> That's on my list for the next set of experiments.
>>
> 
> In the new experiment I increased the max_page_sharing value to 16384.
> This reduced the number of stable_node_dups considerably (it's around 3%
> of the previous value). However, pages_sharing is still very high for
> this workload.
> 
> full_scans: 138
> general_profit: 24442268608
> max_page_sharing: 16384
> merge_across_nodes: 1
> pages_shared: 144590
> pages_sharing: 6230983
> pages_to_scan: 3000
> pages_unshared: 2120307
> pages_volatile: 14590780
> run: 1
> sleep_millisecs: 20
> stable_node_chains: 23
> stable_node_chains_prune_millisecs: 2000
> stable_node_dups: 78
> use_zero_pages: 0
> zero_pages_sharing: 0

Interesting, thanks!

I wonder if it's really many interpreters performing (and caching?) 
essentially the same blobs (for example, for a JIT the IR and/or target 
executable code). So maybe in general, such multi-instance interpreters 
are good candidates for KSM. (I recall there were some setups where 
a server would perform and cache the translations instead.) But that's 
just pure speculation :)

-- 
Thanks,

David / dhildenb



end of thread (newest message: 2023-04-06 17:12 UTC)

Thread overview: 36+ messages
2023-03-10 18:28 [PATCH v4 0/3] mm: process/cgroup ksm support Stefan Roesch
2023-03-10 18:28 ` [PATCH v4 1/3] mm: add new api to enable ksm per process Stefan Roesch
2023-03-13 16:26   ` Johannes Weiner
2023-04-03 10:37   ` David Hildenbrand
2023-04-03 11:03     ` David Hildenbrand
2023-04-04 16:32       ` Stefan Roesch
2023-04-04 16:43       ` Stefan Roesch
2023-04-05  6:51       ` Christian Borntraeger
2023-04-05 16:04         ` David Hildenbrand
2023-04-03 15:50     ` Stefan Roesch
2023-04-03 17:02       ` David Hildenbrand
2023-03-10 18:28 ` [PATCH v4 2/3] mm: add new KSM process and sysfs knobs Stefan Roesch
2023-04-05 17:04   ` David Hildenbrand
2023-04-05 21:20     ` Stefan Roesch
2023-04-06 13:23       ` David Hildenbrand
2023-04-06 14:16         ` Johannes Weiner
2023-04-06 14:32           ` David Hildenbrand
2023-03-10 18:28 ` [PATCH v4 3/3] selftests/mm: add new selftests for KSM Stefan Roesch
2023-03-15 20:03 ` [PATCH v4 0/3] mm: process/cgroup ksm support David Hildenbrand
2023-03-15 20:23   ` Mike Kravetz
2023-03-15 21:05   ` Johannes Weiner
2023-03-15 21:19     ` Johannes Weiner
2023-03-15 21:45       ` David Hildenbrand
2023-03-15 21:47         ` David Hildenbrand
2023-03-30 16:19         ` Stefan Roesch
2023-03-28 23:09 ` Andrew Morton
2023-03-30  4:55   ` David Hildenbrand
2023-03-30 14:26     ` Johannes Weiner
2023-03-30 14:40       ` David Hildenbrand
2023-03-30 16:41         ` Stefan Roesch
2023-04-03  9:48           ` David Hildenbrand
2023-04-03 16:34             ` Stefan Roesch
2023-04-03 17:04               ` David Hildenbrand
2023-04-06 16:59               ` Stefan Roesch
2023-04-06 17:10                 ` David Hildenbrand
2023-03-30 20:18     ` Andrew Morton
