linux-mm.kvack.org archive mirror
* [PATCH v6 0/3] mm: process/cgroup ksm support
@ 2023-04-12  3:16 Stefan Roesch
  2023-04-12  3:16 ` [PATCH v6 1/3] mm: add new api to enable ksm per process Stefan Roesch
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Stefan Roesch @ 2023-04-12  3:16 UTC (permalink / raw)
  To: kernel-team
  Cc: shr, linux-mm, riel, mhocko, david, linux-kselftest, linux-doc,
	akpm, hannes, willy

So far KSM can only be enabled by calling madvise for memory regions. To
be able to use KSM for more workloads, KSM needs to have the ability to be
enabled / disabled at the process / cgroup level.

Use case 1:
The madvise call is not available in the programming language. An example of
this are programs with forked workloads using a garbage-collected language
without pointers. In such a language, madvise cannot be made available.

In addition, the addresses of objects get moved around as they are garbage
collected. KSM sharing needs to be enabled "from the outside" for these types
of workloads.

Use case 2:
The same interpreter can also be used for workloads where KSM brings no
benefit or even has overhead. We'd like to be able to enable KSM on a
workload-by-workload basis.

Use case 3:
With the madvise call, sharing opportunities are only enabled for the current
process: it is a workload-local decision. A considerable number of sharing
opportunities may exist across multiple workloads or jobs (if they are part
of the same security domain). Only a higher-level entity like a job scheduler
or container can know for certain whether it is running one or more instances
of a job. That job scheduler, however, doesn't have the necessary internal
workload knowledge to make targeted madvise calls.

Security concerns:
In previous discussions security concerns have been brought up. The problem is
that an individual workload does not have knowledge about what else is
running on a machine. Therefore it has to be very conservative about which
memory areas can be shared. However, if the system is dedicated to running
multiple jobs within the same security domain, it's the job scheduler that has
the knowledge that sharing can be safely enabled and is even desirable.

Performance:
Experiments with UKSM have shown a capacity increase of around 20%.


1. New options for prctl system command
This patch series adds two new options to the prctl system call. The first
one enables KSM at the process level and the second one queries the
setting.

The setting will be inherited by child processes.

With the above setting, KSM can be enabled for the seed process of a cgroup
and all processes in the cgroup will inherit the setting.
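
For illustration (not part of the patches, and with error handling kept
minimal), enabling and querying the setting from user space could look like
this, using the constants added to include/uapi/linux/prctl.h by this series:

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_MEMORY_MERGE
#define PR_SET_MEMORY_MERGE	67
#define PR_GET_MEMORY_MERGE	68
#endif

int main(void)
{
	/* Opt the whole process in to KSM; eligible VMAs become mergeable. */
	if (prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0))
		perror("PR_SET_MEMORY_MERGE");

	/* Returns 1 if KSM is enabled for the process, 0 otherwise. */
	printf("memory merge enabled: %d\n",
	       prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0));
	return 0;
}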

2. Changes to KSM processing
When KSM is enabled at the process level, the KSM code will iterate over all
the VMAs and enable KSM for the eligible VMAs.

When forking a process that has KSM enabled, the setting will be inherited by
the new child process.

3. Add general_profit metric
The general_profit metric of KSM is specified in the documentation, but not
calculated. This adds the general_profit metric to /sys/kernel/mm/ksm.

4. Add more metrics to ksm_stat
This adds the process profit metric to /proc/<pid>/ksm_stat.
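
For reference, the formulas implemented in mm/ksm.c by the patches below are
(the process values are the per-mm counters):

	general_profit     = ksm_pages_sharing * PAGE_SIZE -
	                     ksm_rmap_items * sizeof(struct ksm_rmap_item)

	ksm_process_profit = mm->ksm_merging_pages * PAGE_SIZE -
	                     mm->ksm_rmap_items * sizeof(struct ksm_rmap_item)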

5. Add more tests to ksm_tests
This adds an option to specify the merge type to the ksm_tests. This allows
testing madvise and prctl KSM. It also adds a new option to query if prctl KSM
has been enabled. It adds a fork test to verify that the KSM process setting is
inherited by child processes.


Changes:
- V6:
  - Fix error condition in prctl call
  - Remove ksm_merge_type function and ksm_stat output
  - Some minor changes like whitespace and removing a cast.
  
- V5:
  - When the prctl system call is invoked, mark all compatible VMAs
    as mergeable
  - Instead of checking during the scan if a VMA is mergeable, mark the VMA
    as mergeable when the VMA is created (in case the VMA is compatible)
    - Remove earlier changes, they are no longer necessary
  - Unset the flag MMF_VM_MERGE_ANY in gmap_mark_unmergeable().
  - When unsetting the MMF_VM_MERGE_ANY flag with prctl, only unset the
    flag
  - Remove pages_volatile function (with the simpler general_profit calculation,
    the function is no longer needed)
  - Use simpler formula for calculation of general_profit

- V4:
  - Removed the check in prctl for MMF_VM_MERGEABLE in the PR_SET_MEMORY_MERGE
    handling
  - Check for VM_MERGEABLE and MMF_VM_MERGE_ANY to avoid changing vm_flags
    - This also requires checking that the vma is compatible. The
      compatibility check is provided by a new helper
    - Processes which have set MMF_VM_MERGE_ANY only need to call the
      helper, not madvise.
  - Removed the unmerge_vmas function; it is no longer necessary,
    clearing the MMF_VM_MERGE_ANY bit is sufficient

- V3:
  - Folded patches 1 - 6
  - Folded patches 7 - 14
  - Folded patches 15 - 19
  - Expanded on the use cases in the cover letter
  - Added a section on security concerns to the cover letter

- V2:
  - Added use cases to the cover letter
  - Removed the tracing patch from the patch series and posted it as an
    individual patch
  - Refreshed repo


Stefan Roesch (3):
  mm: add new api to enable ksm per process
  mm: add new KSM process and sysfs knobs
  selftests/mm: add new selftests for KSM

 Documentation/ABI/testing/sysfs-kernel-mm-ksm |   8 +
 Documentation/admin-guide/mm/ksm.rst          |   5 +-
 arch/s390/mm/gmap.c                           |   1 +
 fs/proc/base.c                                |   3 +
 include/linux/ksm.h                           |  27 +-
 include/linux/sched/coredump.h                |   1 +
 include/uapi/linux/prctl.h                    |   2 +
 kernel/fork.c                                 |   1 +
 kernel/sys.c                                  |  23 ++
 mm/ksm.c                                      | 132 +++++++--
 mm/mmap.c                                     |   7 +
 tools/include/uapi/linux/prctl.h              |   2 +
 tools/testing/selftests/mm/Makefile           |   2 +-
 tools/testing/selftests/mm/ksm_tests.c        | 254 +++++++++++++++---
 14 files changed, 400 insertions(+), 68 deletions(-)

-- 
2.31.1



^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v6 1/3] mm: add new api to enable ksm per process
  2023-04-12  3:16 [PATCH v6 0/3] mm: process/cgroup ksm support Stefan Roesch
@ 2023-04-12  3:16 ` Stefan Roesch
  2023-04-12 13:20   ` Matthew Wilcox
  2023-04-12 15:40   ` David Hildenbrand
  2023-04-12  3:16 ` [PATCH v6 2/3] mm: add new KSM process and sysfs knobs Stefan Roesch
  2023-04-12  3:16 ` [PATCH v6 3/3] selftests/mm: add new selftests for KSM Stefan Roesch
  2 siblings, 2 replies; 17+ messages in thread
From: Stefan Roesch @ 2023-04-12  3:16 UTC (permalink / raw)
  To: kernel-team
  Cc: shr, linux-mm, riel, mhocko, david, linux-kselftest, linux-doc,
	akpm, hannes, willy, Bagas Sanjaya

So far KSM can only be enabled by calling madvise for memory regions.  To
be able to use KSM for more workloads, KSM needs to have the ability to be
enabled / disabled at the process / cgroup level.

1. New options for prctl system command

   This patch series adds two new options to the prctl system call.
   The first one enables KSM at the process level and the second one
   queries the setting.

   The setting will be inherited by child processes.

   With the above setting, KSM can be enabled for the seed process of a
   cgroup and all processes in the cgroup will inherit the setting.

2. Changes to KSM processing

   When KSM is enabled at the process level, the KSM code will iterate
   over all the VMAs and enable KSM for the eligible VMAs.

   When forking a process that has KSM enabled, the setting will be
   inherited by the new child process.

  1) Introduce new MMF_VM_MERGE_ANY flag

     This introduces the new MMF_VM_MERGE_ANY flag.  When this flag is
     set, kernel samepage merging (ksm) gets enabled for all VMAs of a
     process.

  2) Setting VM_MERGEABLE on VMA creation

     When a VMA is created, if the MMF_VM_MERGE_ANY flag is set, the
     VM_MERGEABLE flag will be set for this VMA.

  3) add flag to __ksm_enter

     This change adds the flag parameter to __ksm_enter.  This allows
     distinguishing whether ksm was enabled by prctl or madvise.

  4) add flag to __ksm_exit call

     This adds the flag parameter to the __ksm_exit() call.  This allows
     distinguishing whether this call is for a prctl or madvise invocation.

  5) support disabling of ksm for a process

     This adds the ability to disable ksm for a process if ksm has been
     enabled for the process.

  6) add new prctl option to get and set ksm for a process

     This adds two new options to the prctl system call:
     - enable ksm for all vmas of a process (if the vmas support it).
     - query if ksm has been enabled for a process.
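
   A minimal illustrative sketch (not part of this patch) that combines the
   two new options with the fork inheritance described above:

   #include <assert.h>
   #include <unistd.h>
   #include <sys/prctl.h>
   #include <sys/wait.h>

   #ifndef PR_SET_MEMORY_MERGE
   #define PR_SET_MEMORY_MERGE  67
   #define PR_GET_MEMORY_MERGE  68
   #endif

   int main(void)
   {
           /* Enable KSM for all eligible VMAs of this process. */
           assert(prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0) == 0);

           /* MMF_VM_MERGE_ANY is inherited across fork. */
           if (fork() == 0)
                   _exit(prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0) == 1 ? 0 : 1);

           wait(NULL);
           return 0;
   }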

3. Disabling MMF_VM_MERGE_ANY for storage keys in s390

   In the s390 architecture, when storage keys are used, the
   MMF_VM_MERGE_ANY flag will be disabled.

Signed-off-by: Stefan Roesch <shr@devkernel.io>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Bagas Sanjaya <bagasdotme@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 arch/s390/mm/gmap.c            |   1 +
 include/linux/ksm.h            |  23 +++++--
 include/linux/sched/coredump.h |   1 +
 include/uapi/linux/prctl.h     |   2 +
 kernel/fork.c                  |   1 +
 kernel/sys.c                   |  23 +++++++
 mm/ksm.c                       | 111 ++++++++++++++++++++++++++-------
 mm/mmap.c                      |   7 +++
 8 files changed, 142 insertions(+), 27 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 5a716bdcba05..9d85e5589474 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -2591,6 +2591,7 @@ int gmap_mark_unmergeable(void)
 	int ret;
 	VMA_ITERATOR(vmi, mm, 0);
 
+	clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
 	for_each_vma(vmi, vma) {
 		/* Copy vm_flags to avoid partial modifications in ksm_madvise */
 		vm_flags = vma->vm_flags;
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 7e232ba59b86..f24f9faf1561 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -18,20 +18,29 @@
 #ifdef CONFIG_KSM
 int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 		unsigned long end, int advice, unsigned long *vm_flags);
-int __ksm_enter(struct mm_struct *mm);
-void __ksm_exit(struct mm_struct *mm);
+
+int ksm_add_mm(struct mm_struct *mm);
+void ksm_add_vma(struct vm_area_struct *vma);
+void ksm_add_vmas(struct mm_struct *mm);
+
+int __ksm_enter(struct mm_struct *mm, int flag);
+void __ksm_exit(struct mm_struct *mm, int flag);
 
 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
+	if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
+		return ksm_add_mm(mm);
 	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
-		return __ksm_enter(mm);
+		return __ksm_enter(mm, MMF_VM_MERGEABLE);
 	return 0;
 }
 
 static inline void ksm_exit(struct mm_struct *mm)
 {
-	if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
-		__ksm_exit(mm);
+	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+		__ksm_exit(mm, MMF_VM_MERGE_ANY);
+	else if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
+		__ksm_exit(mm, MMF_VM_MERGEABLE);
 }
 
 /*
@@ -53,6 +62,10 @@ void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
 
 #else  /* !CONFIG_KSM */
 
+static inline void ksm_add_vma(struct vm_area_struct *vma)
+{
+}
+
 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
 	return 0;
diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index 0e17ae7fbfd3..0ee96ea7a0e9 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm)
 #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
 				 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK)
 
+#define MMF_VM_MERGE_ANY	29
 #endif /* _LINUX_SCHED_COREDUMP_H */
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 1312a137f7fb..759b3f53e53f 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -290,4 +290,6 @@ struct prctl_mm_map {
 #define PR_SET_VMA		0x53564d41
 # define PR_SET_VMA_ANON_NAME		0
 
+#define PR_SET_MEMORY_MERGE		67
+#define PR_GET_MEMORY_MERGE		68
 #endif /* _LINUX_PRCTL_H */
diff --git a/kernel/fork.c b/kernel/fork.c
index f68954d05e89..1520697cf6c7 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -686,6 +686,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		if (vma_iter_bulk_store(&vmi, tmp))
 			goto fail_nomem_vmi_store;
 
+		ksm_add_vma(tmp);
 		mm->map_count++;
 		if (!(tmp->vm_flags & VM_WIPEONFORK))
 			retval = copy_page_range(tmp, mpnt);
diff --git a/kernel/sys.c b/kernel/sys.c
index 495cd87d9bf4..9bba163d2d04 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -15,6 +15,7 @@
 #include <linux/highuid.h>
 #include <linux/fs.h>
 #include <linux/kmod.h>
+#include <linux/ksm.h>
 #include <linux/perf_event.h>
 #include <linux/resource.h>
 #include <linux/kernel.h>
@@ -2661,6 +2662,28 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
 	case PR_SET_VMA:
 		error = prctl_set_vma(arg2, arg3, arg4, arg5);
 		break;
+#ifdef CONFIG_KSM
+	case PR_SET_MEMORY_MERGE:
+		if (mmap_write_lock_killable(me->mm))
+			return -EINTR;
+
+		if (arg2) {
+			int err = ksm_add_mm(me->mm);
+
+			if (!err)
+				ksm_add_vmas(me->mm);
+		} else {
+			clear_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
+		}
+		mmap_write_unlock(me->mm);
+		break;
+	case PR_GET_MEMORY_MERGE:
+		if (arg2 || arg3 || arg4 || arg5)
+			return -EINVAL;
+
+		error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
+		break;
+#endif
 	default:
 		error = -EINVAL;
 		break;
diff --git a/mm/ksm.c b/mm/ksm.c
index d7bd28199f6c..ab95ae0f9def 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -534,10 +534,33 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
 	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
 }
 
+static bool vma_ksm_compatible(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & (VM_SHARED  | VM_MAYSHARE   | VM_PFNMAP  |
+			     VM_IO      | VM_DONTEXPAND | VM_HUGETLB |
+			     VM_MIXEDMAP))
+		return false;		/* just ignore the advice */
+
+	if (vma_is_dax(vma))
+		return false;
+
+#ifdef VM_SAO
+	if (vma->vm_flags & VM_SAO)
+		return false;
+#endif
+#ifdef VM_SPARC_ADI
+	if (vma->vm_flags & VM_SPARC_ADI)
+		return false;
+#endif
+
+	return true;
+}
+
 static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
 		unsigned long addr)
 {
 	struct vm_area_struct *vma;
+
 	if (ksm_test_exit(mm))
 		return NULL;
 	vma = vma_lookup(mm, addr);
@@ -1065,6 +1088,7 @@ static int unmerge_and_remove_all_rmap_items(void)
 
 			mm_slot_free(mm_slot_cache, mm_slot);
 			clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+			clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
 			mmdrop(mm);
 		} else
 			spin_unlock(&ksm_mmlist_lock);
@@ -2495,6 +2519,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 
 		mm_slot_free(mm_slot_cache, mm_slot);
 		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
 		mmap_read_unlock(mm);
 		mmdrop(mm);
 	} else {
@@ -2571,6 +2596,63 @@ static int ksm_scan_thread(void *nothing)
 	return 0;
 }
 
+static void __ksm_add_vma(struct vm_area_struct *vma)
+{
+	unsigned long vm_flags = vma->vm_flags;
+
+	if (vm_flags & VM_MERGEABLE)
+		return;
+
+	if (vma_ksm_compatible(vma)) {
+		vm_flags |= VM_MERGEABLE;
+		vm_flags_reset(vma, vm_flags);
+	}
+}
+
+/**
+ * ksm_add_vma - Mark vma as mergeable
+ *
+ * @vma:  Pointer to vma
+ */
+void ksm_add_vma(struct vm_area_struct *vma)
+{
+	struct mm_struct *mm = vma->vm_mm;
+
+	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+		__ksm_add_vma(vma);
+}
+
+/**
+ * ksm_add_vmas - Mark all vma's of a process as mergeable
+ *
+ * @mm:  Pointer to mm
+ */
+void ksm_add_vmas(struct mm_struct *mm)
+{
+	struct vm_area_struct *vma;
+
+	VMA_ITERATOR(vmi, mm, 0);
+	for_each_vma(vmi, vma)
+		__ksm_add_vma(vma);
+}
+
+/**
+ * ksm_add_mm - Add mm to mm ksm list
+ *
+ * @mm:  Pointer to mm
+ *
+ * Returns 0 on success, otherwise error code
+ */
+int ksm_add_mm(struct mm_struct *mm)
+{
+	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+		return -EINVAL;
+	if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
+		return -EINVAL;
+
+	return __ksm_enter(mm, MMF_VM_MERGE_ANY);
+}
+
 int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 		unsigned long end, int advice, unsigned long *vm_flags)
 {
@@ -2579,28 +2661,13 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 
 	switch (advice) {
 	case MADV_MERGEABLE:
-		/*
-		 * Be somewhat over-protective for now!
-		 */
-		if (*vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
-				 VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
-				 VM_HUGETLB | VM_MIXEDMAP))
-			return 0;		/* just ignore the advice */
-
-		if (vma_is_dax(vma))
+		if (vma->vm_flags & VM_MERGEABLE)
 			return 0;
-
-#ifdef VM_SAO
-		if (*vm_flags & VM_SAO)
+		if (!vma_ksm_compatible(vma))
 			return 0;
-#endif
-#ifdef VM_SPARC_ADI
-		if (*vm_flags & VM_SPARC_ADI)
-			return 0;
-#endif
 
 		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
-			err = __ksm_enter(mm);
+			err = __ksm_enter(mm, MMF_VM_MERGEABLE);
 			if (err)
 				return err;
 		}
@@ -2626,7 +2693,7 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 }
 EXPORT_SYMBOL_GPL(ksm_madvise);
 
-int __ksm_enter(struct mm_struct *mm)
+int __ksm_enter(struct mm_struct *mm, int flag)
 {
 	struct ksm_mm_slot *mm_slot;
 	struct mm_slot *slot;
@@ -2659,7 +2726,7 @@ int __ksm_enter(struct mm_struct *mm)
 		list_add_tail(&slot->mm_node, &ksm_scan.mm_slot->slot.mm_node);
 	spin_unlock(&ksm_mmlist_lock);
 
-	set_bit(MMF_VM_MERGEABLE, &mm->flags);
+	set_bit(flag, &mm->flags);
 	mmgrab(mm);
 
 	if (needs_wakeup)
@@ -2668,7 +2735,7 @@ int __ksm_enter(struct mm_struct *mm)
 	return 0;
 }
 
-void __ksm_exit(struct mm_struct *mm)
+void __ksm_exit(struct mm_struct *mm, int flag)
 {
 	struct ksm_mm_slot *mm_slot;
 	struct mm_slot *slot;
@@ -2700,7 +2767,7 @@ void __ksm_exit(struct mm_struct *mm)
 
 	if (easy_to_free) {
 		mm_slot_free(mm_slot_cache, mm_slot);
-		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+		clear_bit(flag, &mm->flags);
 		mmdrop(mm);
 	} else if (mm_slot) {
 		mmap_write_lock(mm);
diff --git a/mm/mmap.c b/mm/mmap.c
index 740b54be3ed4..483e182e0b9d 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -46,6 +46,7 @@
 #include <linux/pkeys.h>
 #include <linux/oom.h>
 #include <linux/sched/mm.h>
+#include <linux/ksm.h>
 
 #include <linux/uaccess.h>
 #include <asm/cacheflush.h>
@@ -2213,6 +2214,8 @@ int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	/* vma_complete stores the new vma */
 	vma_complete(&vp, vmi, vma->vm_mm);
 
+	ksm_add_vma(new);
+
 	/* Success. */
 	if (new_below)
 		vma_next(vmi);
@@ -2664,6 +2667,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	if (file && vm_flags & VM_SHARED)
 		mapping_unmap_writable(file->f_mapping);
 	file = vma->vm_file;
+	ksm_add_vma(vma);
 expanded:
 	perf_event_mmap(vma);
 
@@ -2936,6 +2940,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		goto mas_store_fail;
 
 	mm->map_count++;
+	ksm_add_vma(vma);
 out:
 	perf_event_mmap(vma);
 	mm->total_vm += len >> PAGE_SHIFT;
@@ -3180,6 +3185,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 		if (vma_link(mm, new_vma))
 			goto out_vma_link;
 		*need_rmap_locks = false;
+		ksm_add_vma(new_vma);
 	}
 	validate_mm_mt(mm);
 	return new_vma;
@@ -3356,6 +3362,7 @@ static struct vm_area_struct *__install_special_mapping(
 	vm_stat_account(mm, vma->vm_flags, len >> PAGE_SHIFT);
 
 	perf_event_mmap(vma);
+	ksm_add_vma(vma);
 
 	validate_mm_mt(mm);
 	return vma;
-- 
2.31.1



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v6 2/3] mm: add new KSM process and sysfs knobs
  2023-04-12  3:16 [PATCH v6 0/3] mm: process/cgroup ksm support Stefan Roesch
  2023-04-12  3:16 ` [PATCH v6 1/3] mm: add new api to enable ksm per process Stefan Roesch
@ 2023-04-12  3:16 ` Stefan Roesch
  2023-04-12  3:16 ` [PATCH v6 3/3] selftests/mm: add new selftests for KSM Stefan Roesch
  2 siblings, 0 replies; 17+ messages in thread
From: Stefan Roesch @ 2023-04-12  3:16 UTC (permalink / raw)
  To: kernel-team
  Cc: shr, linux-mm, riel, mhocko, david, linux-kselftest, linux-doc,
	akpm, hannes, willy, Bagas Sanjaya

This adds the general_profit KSM sysfs knob and the process profit metric
knobs to ksm_stat.

1) expose general_profit metric

   The documentation mentions a general profit metric, however this
   metric is not calculated.  In addition, the formula depends on the size
   of internal structures, which makes it more difficult for an
   administrator to make the calculation.  Add the metric for a better
   user experience.

2) document general_profit sysfs knob

3) calculate ksm process profit metric

   The ksm documentation mentions the process profit metric and how to
   calculate it.  This adds the calculation of the metric.

4) mm: expose ksm process profit metric in ksm_stat

   This exposes the ksm process profit metric in /proc/<pid>/ksm_stat.
   The documentation mentions the formula for the ksm process profit
   metric, however it does not calculate it.  In addition, the formula
   depends on the size of internal structures, so it makes sense to
   expose it.

5) document new procfs ksm knobs
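
With the series applied, the new values can be read directly, e.g. (paths as
added by this patch):

   cat /sys/kernel/mm/ksm/general_profit
   grep ksm_process_profit /proc/<pid>/ksm_stat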

Signed-off-by: Stefan Roesch <shr@devkernel.io>
Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 Documentation/ABI/testing/sysfs-kernel-mm-ksm |  8 +++++++
 Documentation/admin-guide/mm/ksm.rst          |  5 ++++-
 fs/proc/base.c                                |  3 +++
 include/linux/ksm.h                           |  4 ++++
 mm/ksm.c                                      | 21 +++++++++++++++++++
 5 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-ksm b/Documentation/ABI/testing/sysfs-kernel-mm-ksm
index d244674a9480..6041a025b65a 100644
--- a/Documentation/ABI/testing/sysfs-kernel-mm-ksm
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-ksm
@@ -51,3 +51,11 @@ Description:	Control merging pages across different NUMA nodes.
 
 		When it is set to 0 only pages from the same node are merged,
 		otherwise pages from all nodes can be merged together (default).
+
+What:		/sys/kernel/mm/ksm/general_profit
+Date:		April 2023
+KernelVersion:  6.4
+Contact:	Linux memory management mailing list <linux-mm@kvack.org>
+Description:	Measure how effective KSM is.
+		general_profit: how effective is KSM. The formula for the
+		calculation is in Documentation/admin-guide/mm/ksm.rst.
diff --git a/Documentation/admin-guide/mm/ksm.rst b/Documentation/admin-guide/mm/ksm.rst
index 270560fef3b2..bc1dd830dd49 100644
--- a/Documentation/admin-guide/mm/ksm.rst
+++ b/Documentation/admin-guide/mm/ksm.rst
@@ -157,6 +157,8 @@ stable_node_chains_prune_millisecs
 
 The effectiveness of KSM and MADV_MERGEABLE is shown in ``/sys/kernel/mm/ksm/``:
 
+general_profit
+        how effective is KSM. The calculation is explained below.
 pages_shared
         how many shared pages are being used
 pages_sharing
@@ -214,7 +216,8 @@ several times, which are unprofitable memory consumed.
 			  ksm_rmap_items * sizeof(rmap_item).
 
    where ksm_merging_pages is shown under the directory ``/proc/<pid>/``,
-   and ksm_rmap_items is shown in ``/proc/<pid>/ksm_stat``.
+   and ksm_rmap_items is shown in ``/proc/<pid>/ksm_stat``. The process profit
+   is also shown in ``/proc/<pid>/ksm_stat`` as ksm_process_profit.
 
 From the perspective of application, a high ratio of ``ksm_rmap_items`` to
 ``ksm_merging_pages`` means a bad madvise-applied policy, so developers or
diff --git a/fs/proc/base.c b/fs/proc/base.c
index 07463ad4a70a..cb42bb021995 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -96,6 +96,7 @@
 #include <linux/time_namespace.h>
 #include <linux/resctrl.h>
 #include <linux/cn_proc.h>
+#include <linux/ksm.h>
 #include <trace/events/oom.h>
 #include "internal.h"
 #include "fd.h"
@@ -3208,6 +3209,8 @@ static int proc_pid_ksm_stat(struct seq_file *m, struct pid_namespace *ns,
 	if (mm) {
 		seq_printf(m, "ksm_rmap_items %lu\n", mm->ksm_rmap_items);
 		seq_printf(m, "zero_pages_sharing %lu\n", mm->ksm_zero_pages_sharing);
+		seq_printf(m, "ksm_merging_pages %lu\n", mm->ksm_merging_pages);
+		seq_printf(m, "ksm_process_profit %ld\n", ksm_process_profit(mm));
 		mmput(mm);
 	}
 
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index f24f9faf1561..63cb491b1740 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -60,6 +60,10 @@ struct page *ksm_might_need_to_copy(struct page *page,
 void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc);
 void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
 
+#ifdef CONFIG_PROC_FS
+long ksm_process_profit(struct mm_struct *);
+#endif /* CONFIG_PROC_FS */
+
 #else  /* !CONFIG_KSM */
 
 static inline void ksm_add_vma(struct vm_area_struct *vma)
diff --git a/mm/ksm.c b/mm/ksm.c
index ab95ae0f9def..7982bac15d8c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3042,6 +3042,14 @@ static void wait_while_offlining(void)
 }
 #endif /* CONFIG_MEMORY_HOTREMOVE */
 
+#ifdef CONFIG_PROC_FS
+long ksm_process_profit(struct mm_struct *mm)
+{
+	return mm->ksm_merging_pages * PAGE_SIZE -
+		mm->ksm_rmap_items * sizeof(struct ksm_rmap_item);
+}
+#endif /* CONFIG_PROC_FS */
+
 #ifdef CONFIG_SYSFS
 /*
  * This all compiles without CONFIG_SYSFS, but is a waste of space.
@@ -3313,6 +3321,18 @@ static ssize_t zero_pages_sharing_show(struct kobject *kobj,
 }
 KSM_ATTR_RO(zero_pages_sharing);
 
+static ssize_t general_profit_show(struct kobject *kobj,
+				   struct kobj_attribute *attr, char *buf)
+{
+	long general_profit;
+
+	general_profit = ksm_pages_sharing * PAGE_SIZE -
+				ksm_rmap_items * sizeof(struct ksm_rmap_item);
+
+	return sysfs_emit(buf, "%ld\n", general_profit);
+}
+KSM_ATTR_RO(general_profit);
+
 static ssize_t stable_node_dups_show(struct kobject *kobj,
 				     struct kobj_attribute *attr, char *buf)
 {
@@ -3378,6 +3398,7 @@ static struct attribute *ksm_attrs[] = {
 	&stable_node_dups_attr.attr,
 	&stable_node_chains_prune_millisecs_attr.attr,
 	&use_zero_pages_attr.attr,
+	&general_profit_attr.attr,
 	NULL,
 };
 
-- 
2.31.1



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v6 3/3] selftests/mm: add new selftests for KSM
  2023-04-12  3:16 [PATCH v6 0/3] mm: process/cgroup ksm support Stefan Roesch
  2023-04-12  3:16 ` [PATCH v6 1/3] mm: add new api to enable ksm per process Stefan Roesch
  2023-04-12  3:16 ` [PATCH v6 2/3] mm: add new KSM process and sysfs knobs Stefan Roesch
@ 2023-04-12  3:16 ` Stefan Roesch
  2023-04-13 13:07   ` David Hildenbrand
  2 siblings, 1 reply; 17+ messages in thread
From: Stefan Roesch @ 2023-04-12  3:16 UTC (permalink / raw)
  To: kernel-team
  Cc: shr, linux-mm, riel, mhocko, david, linux-kselftest, linux-doc,
	akpm, hannes, willy, Bagas Sanjaya

This adds three new tests to the selftests for KSM.  These tests use the
new prctl APIs to enable and disable KSM.

1) add new prctl flags to prctl header file in tools dir

   This adds the new prctl flags to the include file prctl.h in the
   tools directory.  This makes sure they are available for testing.

2) add KSM prctl merge test

   This adds the -t option to the ksm_tests program.  The -t flag
   allows specifying whether madvise or prctl ksm merging should be used.

3) add KSM get merge type test

   This adds the -G flag to the ksm_tests program to query the KSM
   status with prctl after KSM has been enabled with prctl.

4) add KSM fork test

   Add fork test to verify that the MMF_VM_MERGE_ANY flag is inherited
   by the child process.

5) add two functions for debugging merge outcome

   This adds two functions to report the metrics in /proc/self/ksm_stat
   and /sys/kernel/mm/ksm.

The debugging can be enabled with the following command line:
make -C tools/testing/selftests TARGETS="mm" --keep-going \
        EXTRA_CFLAGS=-DDEBUG=1
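
The new tests can be run for example with (option letters as defined in this
patch; -t selects the merge type):

        ./ksm_tests -M -t 1   # merge test using prctl instead of madvise
        ./ksm_tests -G        # check that PR_GET_MEMORY_MERGE reports the setting
        ./ksm_tests -F        # check that the KSM setting is inherited across fork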

[akpm@linux-foundation.org: fix Makefile]
Link: https://lkml.kernel.org/r/20230224044000.3084046-4-shr@devkernel.io
Signed-off-by: Stefan Roesch <shr@devkernel.io>
Cc: Bagas Sanjaya <bagasdotme@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 tools/include/uapi/linux/prctl.h       |   2 +
 tools/testing/selftests/mm/Makefile    |   2 +-
 tools/testing/selftests/mm/ksm_tests.c | 254 +++++++++++++++++++++----
 3 files changed, 218 insertions(+), 40 deletions(-)

diff --git a/tools/include/uapi/linux/prctl.h b/tools/include/uapi/linux/prctl.h
index a5e06dcbba13..e4c629c1f1b0 100644
--- a/tools/include/uapi/linux/prctl.h
+++ b/tools/include/uapi/linux/prctl.h
@@ -284,4 +284,6 @@ struct prctl_mm_map {
 #define PR_SET_VMA		0x53564d41
 # define PR_SET_VMA_ANON_NAME		0
 
+#define PR_SET_MEMORY_MERGE		67
+#define PR_GET_MEMORY_MERGE		68
 #endif /* _LINUX_PRCTL_H */
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index c31d952cff68..fbf5646b1072 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -29,7 +29,7 @@ MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/p
 # LDLIBS.
 MAKEFLAGS += --no-builtin-rules
 
-CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
+CFLAGS = -Wall -I $(top_srcdir) -I $(top_srcdir)/tools/include/uapi $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
 LDLIBS = -lrt -lpthread
 TEST_GEN_FILES = cow
 TEST_GEN_FILES += compaction_test
diff --git a/tools/testing/selftests/mm/ksm_tests.c b/tools/testing/selftests/mm/ksm_tests.c
index f9eb4d67e0dd..9fb21b982dc9 100644
--- a/tools/testing/selftests/mm/ksm_tests.c
+++ b/tools/testing/selftests/mm/ksm_tests.c
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
 
 #include <sys/mman.h>
+#include <sys/prctl.h>
+#include <sys/wait.h>
 #include <stdbool.h>
 #include <time.h>
 #include <string.h>
@@ -21,6 +23,7 @@
 #define KSM_PROT_STR_DEFAULT "rw"
 #define KSM_USE_ZERO_PAGES_DEFAULT false
 #define KSM_MERGE_ACROSS_NODES_DEFAULT true
+#define KSM_MERGE_TYPE_DEFAULT 0
 #define MB (1ul << 20)
 
 struct ksm_sysfs {
@@ -33,9 +36,17 @@ struct ksm_sysfs {
 	unsigned long use_zero_pages;
 };
 
+enum ksm_merge_type {
+	KSM_MERGE_MADVISE,
+	KSM_MERGE_PRCTL,
+	KSM_MERGE_LAST = KSM_MERGE_PRCTL
+};
+
 enum ksm_test_name {
 	CHECK_KSM_MERGE,
+	CHECK_KSM_MERGE_FORK,
 	CHECK_KSM_UNMERGE,
+	CHECK_KSM_GET_MERGE_TYPE,
 	CHECK_KSM_ZERO_PAGE_MERGE,
 	CHECK_KSM_NUMA_MERGE,
 	KSM_MERGE_TIME,
@@ -82,6 +93,55 @@ static int ksm_read_sysfs(const char *file_path, unsigned long *val)
 	return 0;
 }
 
+#ifdef DEBUG
+static void ksm_print_sysfs(void)
+{
+	unsigned long max_page_sharing, pages_sharing, pages_shared;
+	unsigned long full_scans, pages_unshared, pages_volatile;
+	unsigned long stable_node_chains, stable_node_dups;
+	long general_profit;
+
+	if (ksm_read_sysfs(KSM_FP("pages_shared"), &pages_shared) ||
+	    ksm_read_sysfs(KSM_FP("pages_sharing"), &pages_sharing) ||
+	    ksm_read_sysfs(KSM_FP("max_page_sharing"), &max_page_sharing) ||
+	    ksm_read_sysfs(KSM_FP("full_scans"), &full_scans) ||
+	    ksm_read_sysfs(KSM_FP("pages_unshared"), &pages_unshared) ||
+	    ksm_read_sysfs(KSM_FP("pages_volatile"), &pages_volatile) ||
+	    ksm_read_sysfs(KSM_FP("stable_node_chains"), &stable_node_chains) ||
+	    ksm_read_sysfs(KSM_FP("stable_node_dups"), &stable_node_dups) ||
+	    ksm_read_sysfs(KSM_FP("general_profit"), (unsigned long *)&general_profit))
+		return;
+
+	printf("pages_shared      : %lu\n", pages_shared);
+	printf("pages_sharing     : %lu\n", pages_sharing);
+	printf("max_page_sharing  : %lu\n", max_page_sharing);
+	printf("full_scans        : %lu\n", full_scans);
+	printf("pages_unshared    : %lu\n", pages_unshared);
+	printf("pages_volatile    : %lu\n", pages_volatile);
+	printf("stable_node_chains: %lu\n", stable_node_chains);
+	printf("stable_node_dups  : %lu\n", stable_node_dups);
+	printf("general_profit    : %ld\n", general_profit);
+}
+
+static void ksm_print_procfs(void)
+{
+	const char *file_name = "/proc/self/ksm_stat";
+	char buffer[512];
+	FILE *f = fopen(file_name, "r");
+
+	if (!f) {
+		fprintf(stderr, "f %s\n", file_name);
+		perror("fopen");
+		return;
+	}
+
+	while (fgets(buffer, sizeof(buffer), f))
+		printf("%s", buffer);
+
+	fclose(f);
+}
+#endif
+
 static int str_to_prot(char *prot_str)
 {
 	int prot = 0;
@@ -115,7 +175,9 @@ static void print_help(void)
 	       " -D evaluate unmerging time and speed when disabling KSM.\n"
 	       "    For this test, the size of duplicated memory area (in MiB)\n"
 	       "    must be provided using -s option\n"
-	       " -C evaluate the time required to break COW of merged pages.\n\n");
+	       " -C evaluate the time required to break COW of merged pages.\n"
+	       " -G query merge mode\n"
+	       " -F evaluate that the KSM process flag is inherited\n\n");
 
 	printf(" -a: specify the access protections of pages.\n"
 	       "     <prot> must be of the form [rwx].\n"
@@ -129,6 +191,10 @@ static void print_help(void)
 	printf(" -m: change merge_across_nodes tunable\n"
 	       "     Default: %d\n", KSM_MERGE_ACROSS_NODES_DEFAULT);
 	printf(" -s: the size of duplicated memory area (in MiB)\n");
+	printf(" -t: KSM merge type\n"
+	       "     Default: 0\n"
+	       "     0: madvise merging\n"
+	       "     1: prctl merging\n");
 
 	exit(0);
 }
@@ -176,12 +242,21 @@ static int ksm_do_scan(int scan_count, struct timespec start_time, int timeout)
 	return 0;
 }
 
-static int ksm_merge_pages(void *addr, size_t size, struct timespec start_time, int timeout)
+static int ksm_merge_pages(int merge_type, void *addr, size_t size,
+			struct timespec start_time, int timeout)
 {
-	if (madvise(addr, size, MADV_MERGEABLE)) {
-		perror("madvise");
-		return 1;
+	if (merge_type == KSM_MERGE_MADVISE) {
+		if (madvise(addr, size, MADV_MERGEABLE)) {
+			perror("madvise");
+			return 1;
+		}
+	} else if (merge_type == KSM_MERGE_PRCTL) {
+		if (prctl(PR_SET_MEMORY_MERGE, 1)) {
+			perror("prctl");
+			return 1;
+		}
 	}
+
 	if (ksm_write_sysfs(KSM_FP("run"), 1))
 		return 1;
 
@@ -211,6 +286,11 @@ static bool assert_ksm_pages_count(long dupl_page_count)
 	    ksm_read_sysfs(KSM_FP("max_page_sharing"), &max_page_sharing))
 		return false;
 
+#ifdef DEBUG
+	ksm_print_sysfs();
+	ksm_print_procfs();
+#endif
+
 	/*
 	 * Since there must be at least 2 pages for merging and 1 page can be
 	 * shared with the limited number of pages (max_page_sharing), sometimes
@@ -266,7 +346,8 @@ static int ksm_restore(struct ksm_sysfs *ksm_sysfs)
 	return 0;
 }
 
-static int check_ksm_merge(int mapping, int prot, long page_count, int timeout, size_t page_size)
+static int check_ksm_merge(int merge_type, int mapping, int prot,
+			long page_count, int timeout, size_t page_size)
 {
 	void *map_ptr;
 	struct timespec start_time;
@@ -281,13 +362,16 @@ static int check_ksm_merge(int mapping, int prot, long page_count, int timeout,
 	if (!map_ptr)
 		return KSFT_FAIL;
 
-	if (ksm_merge_pages(map_ptr, page_size * page_count, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, page_size * page_count, start_time, timeout))
 		goto err_out;
 
 	/* verify that the right number of pages are merged */
 	if (assert_ksm_pages_count(page_count)) {
 		printf("OK\n");
-		munmap(map_ptr, page_size * page_count);
+		if (merge_type == KSM_MERGE_MADVISE)
+			munmap(map_ptr, page_size * page_count);
+		else if (merge_type == KSM_MERGE_PRCTL)
+			prctl(PR_SET_MEMORY_MERGE, 0);
 		return KSFT_PASS;
 	}
 
@@ -297,7 +381,73 @@ static int check_ksm_merge(int mapping, int prot, long page_count, int timeout,
 	return KSFT_FAIL;
 }
 
-static int check_ksm_unmerge(int mapping, int prot, int timeout, size_t page_size)
+/* Verify that prctl ksm flag is inherited. */
+static int check_ksm_fork(void)
+{
+	int rc = KSFT_FAIL;
+	pid_t child_pid;
+
+	if (prctl(PR_SET_MEMORY_MERGE, 1)) {
+		perror("prctl");
+		return KSFT_FAIL;
+	}
+
+	child_pid = fork();
+	if (child_pid == 0) {
+		int is_on = prctl(PR_GET_MEMORY_MERGE, 0);
+
+		if (!is_on)
+			exit(KSFT_FAIL);
+
+		exit(KSFT_PASS);
+	}
+
+	if (child_pid < 0)
+		goto out;
+
+	if (waitpid(child_pid, &rc, 0) < 0)
+		rc = KSFT_FAIL;
+
+	if (prctl(PR_SET_MEMORY_MERGE, 0)) {
+		perror("prctl");
+		rc = KSFT_FAIL;
+	}
+
+out:
+	if (rc == KSFT_PASS)
+		printf("OK\n");
+	else
+		printf("Not OK\n");
+
+	return rc;
+}
+
+static int check_ksm_get_merge_type(void)
+{
+	if (prctl(PR_SET_MEMORY_MERGE, 1)) {
+		perror("prctl set");
+		return 1;
+	}
+
+	int is_on = prctl(PR_GET_MEMORY_MERGE, 0);
+
+	if (prctl(PR_SET_MEMORY_MERGE, 0)) {
+		perror("prctl set");
+		return 1;
+	}
+
+	int is_off = prctl(PR_GET_MEMORY_MERGE, 0);
+
+	if (is_on && is_off) {
+		printf("OK\n");
+		return KSFT_PASS;
+	}
+
+	printf("Not OK\n");
+	return KSFT_FAIL;
+}
+
+static int check_ksm_unmerge(int merge_type, int mapping, int prot, int timeout, size_t page_size)
 {
 	void *map_ptr;
 	struct timespec start_time;
@@ -313,7 +463,7 @@ static int check_ksm_unmerge(int mapping, int prot, int timeout, size_t page_siz
 	if (!map_ptr)
 		return KSFT_FAIL;
 
-	if (ksm_merge_pages(map_ptr, page_size * page_count, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, page_size * page_count, start_time, timeout))
 		goto err_out;
 
 	/* change 1 byte in each of the 2 pages -- KSM must automatically unmerge them */
@@ -337,8 +487,8 @@ static int check_ksm_unmerge(int mapping, int prot, int timeout, size_t page_siz
 	return KSFT_FAIL;
 }
 
-static int check_ksm_zero_page_merge(int mapping, int prot, long page_count, int timeout,
-				     bool use_zero_pages, size_t page_size)
+static int check_ksm_zero_page_merge(int merge_type, int mapping, int prot, long page_count,
+				int timeout, bool use_zero_pages, size_t page_size)
 {
 	void *map_ptr;
 	struct timespec start_time;
@@ -356,7 +506,7 @@ static int check_ksm_zero_page_merge(int mapping, int prot, long page_count, int
 	if (!map_ptr)
 		return KSFT_FAIL;
 
-	if (ksm_merge_pages(map_ptr, page_size * page_count, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, page_size * page_count, start_time, timeout))
 		goto err_out;
 
        /*
@@ -402,8 +552,8 @@ static int get_first_mem_node(void)
 	return get_next_mem_node(numa_max_node());
 }
 
-static int check_ksm_numa_merge(int mapping, int prot, int timeout, bool merge_across_nodes,
-				size_t page_size)
+static int check_ksm_numa_merge(int merge_type, int mapping, int prot, int timeout,
+				bool merge_across_nodes, size_t page_size)
 {
 	void *numa1_map_ptr, *numa2_map_ptr;
 	struct timespec start_time;
@@ -439,8 +589,8 @@ static int check_ksm_numa_merge(int mapping, int prot, int timeout, bool merge_a
 	memset(numa2_map_ptr, '*', page_size);
 
 	/* try to merge the pages */
-	if (ksm_merge_pages(numa1_map_ptr, page_size, start_time, timeout) ||
-	    ksm_merge_pages(numa2_map_ptr, page_size, start_time, timeout))
+	if (ksm_merge_pages(merge_type, numa1_map_ptr, page_size, start_time, timeout) ||
+	    ksm_merge_pages(merge_type, numa2_map_ptr, page_size, start_time, timeout))
 		goto err_out;
 
        /*
@@ -466,7 +616,8 @@ static int check_ksm_numa_merge(int mapping, int prot, int timeout, bool merge_a
 	return KSFT_FAIL;
 }
 
-static int ksm_merge_hugepages_time(int mapping, int prot, int timeout, size_t map_size)
+static int ksm_merge_hugepages_time(int merge_type, int mapping, int prot,
+				int timeout, size_t map_size)
 {
 	void *map_ptr, *map_ptr_orig;
 	struct timespec start_time, end_time;
@@ -508,7 +659,7 @@ static int ksm_merge_hugepages_time(int mapping, int prot, int timeout, size_t m
 		perror("clock_gettime");
 		goto err_out;
 	}
-	if (ksm_merge_pages(map_ptr, map_size, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, map_size, start_time, timeout))
 		goto err_out;
 	if (clock_gettime(CLOCK_MONOTONIC_RAW, &end_time)) {
 		perror("clock_gettime");
@@ -533,7 +684,7 @@ static int ksm_merge_hugepages_time(int mapping, int prot, int timeout, size_t m
 	return KSFT_FAIL;
 }
 
-static int ksm_merge_time(int mapping, int prot, int timeout, size_t map_size)
+static int ksm_merge_time(int merge_type, int mapping, int prot, int timeout, size_t map_size)
 {
 	void *map_ptr;
 	struct timespec start_time, end_time;
@@ -549,7 +700,7 @@ static int ksm_merge_time(int mapping, int prot, int timeout, size_t map_size)
 		perror("clock_gettime");
 		goto err_out;
 	}
-	if (ksm_merge_pages(map_ptr, map_size, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, map_size, start_time, timeout))
 		goto err_out;
 	if (clock_gettime(CLOCK_MONOTONIC_RAW, &end_time)) {
 		perror("clock_gettime");
@@ -574,7 +725,7 @@ static int ksm_merge_time(int mapping, int prot, int timeout, size_t map_size)
 	return KSFT_FAIL;
 }
 
-static int ksm_unmerge_time(int mapping, int prot, int timeout, size_t map_size)
+static int ksm_unmerge_time(int merge_type, int mapping, int prot, int timeout, size_t map_size)
 {
 	void *map_ptr;
 	struct timespec start_time, end_time;
@@ -589,7 +740,7 @@ static int ksm_unmerge_time(int mapping, int prot, int timeout, size_t map_size)
 		perror("clock_gettime");
 		goto err_out;
 	}
-	if (ksm_merge_pages(map_ptr, map_size, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, map_size, start_time, timeout))
 		goto err_out;
 
 	if (clock_gettime(CLOCK_MONOTONIC_RAW, &start_time)) {
@@ -621,7 +772,7 @@ static int ksm_unmerge_time(int mapping, int prot, int timeout, size_t map_size)
 	return KSFT_FAIL;
 }
 
-static int ksm_cow_time(int mapping, int prot, int timeout, size_t page_size)
+static int ksm_cow_time(int merge_type, int mapping, int prot, int timeout, size_t page_size)
 {
 	void *map_ptr;
 	struct timespec start_time, end_time;
@@ -660,7 +811,7 @@ static int ksm_cow_time(int mapping, int prot, int timeout, size_t page_size)
 		memset(map_ptr + page_size * i, '+', i / 2 + 1);
 		memset(map_ptr + page_size * (i + 1), '+', i / 2 + 1);
 	}
-	if (ksm_merge_pages(map_ptr, page_size * page_count, start_time, timeout))
+	if (ksm_merge_pages(merge_type, map_ptr, page_size * page_count, start_time, timeout))
 		goto err_out;
 
 	if (clock_gettime(CLOCK_MONOTONIC_RAW, &start_time)) {
@@ -697,6 +848,7 @@ int main(int argc, char *argv[])
 	int ret, opt;
 	int prot = 0;
 	int ksm_scan_limit_sec = KSM_SCAN_LIMIT_SEC_DEFAULT;
+	int merge_type = KSM_MERGE_TYPE_DEFAULT;
 	long page_count = KSM_PAGE_COUNT_DEFAULT;
 	size_t page_size = sysconf(_SC_PAGESIZE);
 	struct ksm_sysfs ksm_sysfs_old;
@@ -705,7 +857,7 @@ int main(int argc, char *argv[])
 	bool merge_across_nodes = KSM_MERGE_ACROSS_NODES_DEFAULT;
 	long size_MB = 0;
 
-	while ((opt = getopt(argc, argv, "ha:p:l:z:m:s:MUZNPCHD")) != -1) {
+	while ((opt = getopt(argc, argv, "ha:p:l:z:m:s:t:FGMUZNPCHD")) != -1) {
 		switch (opt) {
 		case 'a':
 			prot = str_to_prot(optarg);
@@ -745,6 +897,20 @@ int main(int argc, char *argv[])
 				printf("Size must be greater than 0\n");
 				return KSFT_FAIL;
 			}
+		case 't':
+			{
+				int tmp = atoi(optarg);
+
+				if (tmp < 0 || tmp > KSM_MERGE_LAST) {
+					printf("Invalid merge type\n");
+					return KSFT_FAIL;
+				}
+				merge_type = atoi(optarg);
+			}
+			break;
+		case 'F':
+			test_name = CHECK_KSM_MERGE_FORK;
+			break;
 		case 'M':
 			break;
 		case 'U':
@@ -753,6 +919,9 @@ int main(int argc, char *argv[])
 		case 'Z':
 			test_name = CHECK_KSM_ZERO_PAGE_MERGE;
 			break;
+		case 'G':
+			test_name = CHECK_KSM_GET_MERGE_TYPE;
+			break;
 		case 'N':
 			test_name = CHECK_KSM_NUMA_MERGE;
 			break;
@@ -795,35 +964,42 @@ int main(int argc, char *argv[])
 
 	switch (test_name) {
 	case CHECK_KSM_MERGE:
-		ret = check_ksm_merge(MAP_PRIVATE | MAP_ANONYMOUS, prot, page_count,
+		ret = check_ksm_merge(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot, page_count,
 				      ksm_scan_limit_sec, page_size);
 		break;
+	case CHECK_KSM_MERGE_FORK:
+		ret = check_ksm_fork();
+		break;
 	case CHECK_KSM_UNMERGE:
-		ret = check_ksm_unmerge(MAP_PRIVATE | MAP_ANONYMOUS, prot, ksm_scan_limit_sec,
-					page_size);
+		ret = check_ksm_unmerge(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+					ksm_scan_limit_sec, page_size);
+		break;
+	case CHECK_KSM_GET_MERGE_TYPE:
+		ret = check_ksm_get_merge_type();
 		break;
 	case CHECK_KSM_ZERO_PAGE_MERGE:
-		ret = check_ksm_zero_page_merge(MAP_PRIVATE | MAP_ANONYMOUS, prot, page_count,
-						ksm_scan_limit_sec, use_zero_pages, page_size);
+		ret = check_ksm_zero_page_merge(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+						page_count, ksm_scan_limit_sec, use_zero_pages,
+						page_size);
 		break;
 	case CHECK_KSM_NUMA_MERGE:
-		ret = check_ksm_numa_merge(MAP_PRIVATE | MAP_ANONYMOUS, prot, ksm_scan_limit_sec,
-					   merge_across_nodes, page_size);
+		ret = check_ksm_numa_merge(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+					ksm_scan_limit_sec, merge_across_nodes, page_size);
 		break;
 	case KSM_MERGE_TIME:
 		if (size_MB == 0) {
 			printf("Option '-s' is required.\n");
 			return KSFT_FAIL;
 		}
-		ret = ksm_merge_time(MAP_PRIVATE | MAP_ANONYMOUS, prot, ksm_scan_limit_sec,
-				     size_MB);
+		ret = ksm_merge_time(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+				ksm_scan_limit_sec, size_MB);
 		break;
 	case KSM_MERGE_TIME_HUGE_PAGES:
 		if (size_MB == 0) {
 			printf("Option '-s' is required.\n");
 			return KSFT_FAIL;
 		}
-		ret = ksm_merge_hugepages_time(MAP_PRIVATE | MAP_ANONYMOUS, prot,
+		ret = ksm_merge_hugepages_time(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
 				ksm_scan_limit_sec, size_MB);
 		break;
 	case KSM_UNMERGE_TIME:
@@ -831,12 +1007,12 @@ int main(int argc, char *argv[])
 			printf("Option '-s' is required.\n");
 			return KSFT_FAIL;
 		}
-		ret = ksm_unmerge_time(MAP_PRIVATE | MAP_ANONYMOUS, prot,
+		ret = ksm_unmerge_time(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
 				       ksm_scan_limit_sec, size_MB);
 		break;
 	case KSM_COW_TIME:
-		ret = ksm_cow_time(MAP_PRIVATE | MAP_ANONYMOUS, prot, ksm_scan_limit_sec,
-				   page_size);
+		ret = ksm_cow_time(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+				ksm_scan_limit_sec, page_size);
 		break;
 	}
 
-- 
2.31.1



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v6 1/3] mm: add new api to enable ksm per process
  2023-04-12  3:16 ` [PATCH v6 1/3] mm: add new api to enable ksm per process Stefan Roesch
@ 2023-04-12 13:20   ` Matthew Wilcox
  2023-04-12 16:08     ` Stefan Roesch
  2023-04-12 15:40   ` David Hildenbrand
  1 sibling, 1 reply; 17+ messages in thread
From: Matthew Wilcox @ 2023-04-12 13:20 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
	linux-doc, akpm, hannes, Bagas Sanjaya

On Tue, Apr 11, 2023 at 08:16:46PM -0700, Stefan Roesch wrote:
>  	case PR_SET_VMA:
>  		error = prctl_set_vma(arg2, arg3, arg4, arg5);
>  		break;
> +#ifdef CONFIG_KSM
> +	case PR_SET_MEMORY_MERGE:
> +		if (mmap_write_lock_killable(me->mm))
> +			return -EINTR;
> +
> +		if (arg2) {
> +			int err = ksm_add_mm(me->mm);
> +
> +			if (!err)
> +				ksm_add_vmas(me->mm);

in the last version of this patch, you reported the error.  Now you
swallow the error.  I have no idea which is correct, but you've
changed the behaviour without explaining it, so I assume it's wrong.

> +		} else {
> +			clear_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
> +		}
> +		mmap_write_unlock(me->mm);
> +		break;
> +	case PR_GET_MEMORY_MERGE:
> +		if (arg2 || arg3 || arg4 || arg5)
> +			return -EINVAL;
> +
> +		error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
> +		break;

Why do we need a GET?  Just for symmetry, or is there an actual need for
it?



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v6 1/3] mm: add new api to enable ksm per process
  2023-04-12  3:16 ` [PATCH v6 1/3] mm: add new api to enable ksm per process Stefan Roesch
  2023-04-12 13:20   ` Matthew Wilcox
@ 2023-04-12 15:40   ` David Hildenbrand
  2023-04-12 16:44     ` Stefan Roesch
  1 sibling, 1 reply; 17+ messages in thread
From: David Hildenbrand @ 2023-04-12 15:40 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	willy, Bagas Sanjaya

[...]

Thanks for giving my suggestions a churn. I think we can further
improve/simplify some things. I added some comments, but might have more
regarding MMF_VM_MERGE_ANY / MMF_VM_MERGEABLE.

[I'll try reworking your patch after I send this mail to play with some
simplifications]

>   arch/s390/mm/gmap.c            |   1 +
>   include/linux/ksm.h            |  23 +++++--
>   include/linux/sched/coredump.h |   1 +
>   include/uapi/linux/prctl.h     |   2 +
>   kernel/fork.c                  |   1 +
>   kernel/sys.c                   |  23 +++++++
>   mm/ksm.c                       | 111 ++++++++++++++++++++++++++-------
>   mm/mmap.c                      |   7 +++
>   8 files changed, 142 insertions(+), 27 deletions(-)
> 
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index 5a716bdcba05..9d85e5589474 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -2591,6 +2591,7 @@ int gmap_mark_unmergeable(void)
>   	int ret;
>   	VMA_ITERATOR(vmi, mm, 0);
>   
> +	clear_bit(MMF_VM_MERGE_ANY, &mm->flags);

Okay, that should keep the existing mechanism working. (but users can 
still mess it up)

Might be worth a comment

/*
  * Make sure to disable KSM (if enabled for the whole process or
  * individual VMAs). Note that nothing currently hinders user space
  * from re-enabling it.
  */

>   	for_each_vma(vmi, vma) {
>   		/* Copy vm_flags to avoid partial modifications in ksm_madvise */
>   		vm_flags = vma->vm_flags;
> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
> index 7e232ba59b86..f24f9faf1561 100644
> --- a/include/linux/ksm.h
> +++ b/include/linux/ksm.h
> @@ -18,20 +18,29 @@
>   #ifdef CONFIG_KSM
>   int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>   		unsigned long end, int advice, unsigned long *vm_flags);
> -int __ksm_enter(struct mm_struct *mm);
> -void __ksm_exit(struct mm_struct *mm);
> +
> +int ksm_add_mm(struct mm_struct *mm);
> +void ksm_add_vma(struct vm_area_struct *vma);
> +void ksm_add_vmas(struct mm_struct *mm);
> +
> +int __ksm_enter(struct mm_struct *mm, int flag);
> +void __ksm_exit(struct mm_struct *mm, int flag);
>   
>   static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
>   {
> +	if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
> +		return ksm_add_mm(mm);

ksm_fork() runs before copying any VMAs. Copying the bit should be 
sufficient.

Would it be possible to rework to something like:

if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
	set_bit(MMF_VM_MERGE_ANY, &mm->flags)
if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
	return __ksm_enter(mm);

work? IOW, not exporting ksm_add_mm() and not passing a flag to 
__ksm_enter() -- it would simply set MMF_VM_MERGEABLE ?


I remember proposing that enabling MMF_VM_MERGE_ANY would simply enable
MMF_VM_MERGEABLE.

>   	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
> -		return __ksm_enter(mm);
> +		return __ksm_enter(mm, MMF_VM_MERGEABLE);
>   	return 0;
>   }
>   
>   static inline void ksm_exit(struct mm_struct *mm)
>   {
> -	if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
> -		__ksm_exit(mm);
> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
> +		__ksm_exit(mm, MMF_VM_MERGE_ANY);
> +	else if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
> +		__ksm_exit(mm, MMF_VM_MERGEABLE);

Can we do

if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
	__ksm_exit(mm);

And simply let __ksm_exit() clear both bits?

>   }
>   
>   /*
> @@ -53,6 +62,10 @@ void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
>   
>   #else  /* !CONFIG_KSM */
>   

[...]

>   #endif /* _LINUX_SCHED_COREDUMP_H */
> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
> index 1312a137f7fb..759b3f53e53f 100644
> --- a/include/uapi/linux/prctl.h
> +++ b/include/uapi/linux/prctl.h
> @@ -290,4 +290,6 @@ struct prctl_mm_map {
>   #define PR_SET_VMA		0x53564d41
>   # define PR_SET_VMA_ANON_NAME		0
>   
> +#define PR_SET_MEMORY_MERGE		67
> +#define PR_GET_MEMORY_MERGE		68
>   #endif /* _LINUX_PRCTL_H */
> diff --git a/kernel/fork.c b/kernel/fork.c
> index f68954d05e89..1520697cf6c7 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -686,6 +686,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
>   		if (vma_iter_bulk_store(&vmi, tmp))
>   			goto fail_nomem_vmi_store;
>   
> +		ksm_add_vma(tmp);

Is this really required? The relevant VMAs should have VM_MERGEABLE set.

>   		mm->map_count++;
>   		if (!(tmp->vm_flags & VM_WIPEONFORK))
>   			retval = copy_page_range(tmp, mpnt);
> diff --git a/kernel/sys.c b/kernel/sys.c
> index 495cd87d9bf4..9bba163d2d04 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -15,6 +15,7 @@
>   #include <linux/highuid.h>
>   #include <linux/fs.h>
>   #include <linux/kmod.h>
> +#include <linux/ksm.h>
>   #include <linux/perf_event.h>
>   #include <linux/resource.h>
>   #include <linux/kernel.h>
> @@ -2661,6 +2662,28 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>   	case PR_SET_VMA:
>   		error = prctl_set_vma(arg2, arg3, arg4, arg5);
>   		break;
> +#ifdef CONFIG_KSM
> +	case PR_SET_MEMORY_MERGE:
> +		if (mmap_write_lock_killable(me->mm))
> +			return -EINTR;
> +
> +		if (arg2) {
> +			int err = ksm_add_mm(me->mm);
> +
> +			if (!err)
> +				ksm_add_vmas(me->mm);
> +		} else {
> +			clear_bit(MMF_VM_MERGE_ANY, &me->mm->flags);

Okay, so disabling doesn't actually unshare anything.

> +		}
> +		mmap_write_unlock(me->mm);
> +		break;
> +	case PR_GET_MEMORY_MERGE:
> +		if (arg2 || arg3 || arg4 || arg5)
> +			return -EINVAL;
> +
> +		error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
> +		break;
> +#endif
>   	default:
>   		error = -EINVAL;
>   		break;
> diff --git a/mm/ksm.c b/mm/ksm.c
> index d7bd28199f6c..ab95ae0f9def 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -534,10 +534,33 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
>   	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
>   }
>   
> +static bool vma_ksm_compatible(struct vm_area_struct *vma)
> +{
> +	if (vma->vm_flags & (VM_SHARED  | VM_MAYSHARE   | VM_PFNMAP  |
> +			     VM_IO      | VM_DONTEXPAND | VM_HUGETLB |
> +			     VM_MIXEDMAP))
> +		return false;		/* just ignore the advice */
> +
> +	if (vma_is_dax(vma))
> +		return false;
> +
> +#ifdef VM_SAO
> +	if (vma->vm_flags & VM_SAO)
> +		return false;
> +#endif
> +#ifdef VM_SPARC_ADI
> +	if (vma->vm_flags & VM_SPARC_ADI)
> +		return false;
> +#endif
> +
> +	return true;
> +}
> +
>   static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
>   		unsigned long addr)
>   {
>   	struct vm_area_struct *vma;
> +

unrelated change

>   	if (ksm_test_exit(mm))
>   		return NULL;
>   	vma = vma_lookup(mm, addr);
> @@ -1065,6 +1088,7 @@ static int unmerge_and_remove_all_rmap_items(void)
>   
>   			mm_slot_free(mm_slot_cache, mm_slot);
>   			clear_bit(MMF_VM_MERGEABLE, &mm->flags);
> +			clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
>   			mmdrop(mm);
>   		} else
>   			spin_unlock(&ksm_mmlist_lock);
> @@ -2495,6 +2519,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>   
>   		mm_slot_free(mm_slot_cache, mm_slot);
>   		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
> +		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
>   		mmap_read_unlock(mm);
>   		mmdrop(mm);
>   	} else {
> @@ -2571,6 +2596,63 @@ static int ksm_scan_thread(void *nothing)
>   	return 0;
>   }
>   
> +static void __ksm_add_vma(struct vm_area_struct *vma)
> +{
> +	unsigned long vm_flags = vma->vm_flags;
> +
> +	if (vm_flags & VM_MERGEABLE)
> +		return;
> +
> +	if (vma_ksm_compatible(vma)) {
> +		vm_flags |= VM_MERGEABLE;
> +		vm_flags_reset(vma, vm_flags);
> +	}
> +}
> +
> +/**
> + * ksm_add_vma - Mark vma as mergeable

"if compatible"

> + *
> + * @vma:  Pointer to vma
> + */
> +void ksm_add_vma(struct vm_area_struct *vma)
> +{
> +	struct mm_struct *mm = vma->vm_mm;
> +
> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
> +		__ksm_add_vma(vma);
> +}
> +
> +/**
> + * ksm_add_vmas - Mark all vma's of a process as mergeable
> + *
> + * @mm:  Pointer to mm
> + */
> +void ksm_add_vmas(struct mm_struct *mm)

I'd suggest calling this

> +{
> +	struct vm_area_struct *vma;
> +
> +	VMA_ITERATOR(vmi, mm, 0);
> +	for_each_vma(vmi, vma)
> +		__ksm_add_vma(vma);
> +}
> +
> +/**
> + * ksm_add_mm - Add mm to mm ksm list
> + *
> + * @mm:  Pointer to mm
> + *
> + * Returns 0 on success, otherwise error code
> + */
> +int ksm_add_mm(struct mm_struct *mm)
> +{
> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
> +		return -EINVAL;
> +	if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
> +		return -EINVAL;
> +
> +	return __ksm_enter(mm, MMF_VM_MERGE_ANY);
> +}
> +
>   int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>   		unsigned long end, int advice, unsigned long *vm_flags)
>   {
> @@ -2579,28 +2661,13 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>   
>   	switch (advice) {
>   	case MADV_MERGEABLE:
> -		/*
> -		 * Be somewhat over-protective for now!
> -		 */
> -		if (*vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
> -				 VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
> -				 VM_HUGETLB | VM_MIXEDMAP))
> -			return 0;		/* just ignore the advice */
> -
> -		if (vma_is_dax(vma))
> +		if (vma->vm_flags & VM_MERGEABLE)
>   			return 0;
> -
> -#ifdef VM_SAO
> -		if (*vm_flags & VM_SAO)
> +		if (!vma_ksm_compatible(vma))
>   			return 0;
> -#endif
> -#ifdef VM_SPARC_ADI
> -		if (*vm_flags & VM_SPARC_ADI)
> -			return 0;
> -#endif
>   
>   		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
> -			err = __ksm_enter(mm);
> +			err = __ksm_enter(mm, MMF_VM_MERGEABLE);
>   			if (err)
>   				return err;
>   		}
> @@ -2626,7 +2693,7 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>   }
>   EXPORT_SYMBOL_GPL(ksm_madvise);
>   
> -int __ksm_enter(struct mm_struct *mm)
> +int __ksm_enter(struct mm_struct *mm, int flag)
>   {
>   	struct ksm_mm_slot *mm_slot;
>   	struct mm_slot *slot;
> @@ -2659,7 +2726,7 @@ int __ksm_enter(struct mm_struct *mm)
>   		list_add_tail(&slot->mm_node, &ksm_scan.mm_slot->slot.mm_node);
>   	spin_unlock(&ksm_mmlist_lock);
>   
> -	set_bit(MMF_VM_MERGEABLE, &mm->flags);
> +	set_bit(flag, &mm->flags);
>   	mmgrab(mm);
>   
>   	if (needs_wakeup)
> @@ -2668,7 +2735,7 @@ int __ksm_enter(struct mm_struct *mm)
>   	return 0;
>   }
>   
> -void __ksm_exit(struct mm_struct *mm)
> +void __ksm_exit(struct mm_struct *mm, int flag)
>   {
>   	struct ksm_mm_slot *mm_slot;
>   	struct mm_slot *slot;
> @@ -2700,7 +2767,7 @@ void __ksm_exit(struct mm_struct *mm)
>   
>   	if (easy_to_free) {
>   		mm_slot_free(mm_slot_cache, mm_slot);
> -		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
> +		clear_bit(flag, &mm->flags);
>   		mmdrop(mm);
>   	} else if (mm_slot) {
>   		mmap_write_lock(mm);
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 740b54be3ed4..483e182e0b9d 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -46,6 +46,7 @@
>   #include <linux/pkeys.h>
>   #include <linux/oom.h>
>   #include <linux/sched/mm.h>
> +#include <linux/ksm.h>
>   
>   #include <linux/uaccess.h>
>   #include <asm/cacheflush.h>
> @@ -2213,6 +2214,8 @@ int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
>   	/* vma_complete stores the new vma */
>   	vma_complete(&vp, vmi, vma->vm_mm);
>   
> +	ksm_add_vma(new);
> +

Splitting a VMA shouldn't modify VM_MERGEABLE, so I assume this is not 
required?

>   	/* Success. */
>   	if (new_below)
>   		vma_next(vmi);
> @@ -2664,6 +2667,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>   	if (file && vm_flags & VM_SHARED)
>   		mapping_unmap_writable(file->f_mapping);
>   	file = vma->vm_file;
> +	ksm_add_vma(vma);
>   expanded:
>   	perf_event_mmap(vma);
>   
> @@ -2936,6 +2940,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
>   		goto mas_store_fail;
>   
>   	mm->map_count++;
> +	ksm_add_vma(vma);
>   out:
>   	perf_event_mmap(vma);
>   	mm->total_vm += len >> PAGE_SHIFT;
> @@ -3180,6 +3185,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
>   		if (vma_link(mm, new_vma))
>   			goto out_vma_link;
>   		*need_rmap_locks = false;
> +		ksm_add_vma(new_vma);

Copying shouldn't modify VM_MERGEABLE, so I think this is not required?

>   	}
>   	validate_mm_mt(mm);
>   	return new_vma;
> @@ -3356,6 +3362,7 @@ static struct vm_area_struct *__install_special_mapping(
>   	vm_stat_account(mm, vma->vm_flags, len >> PAGE_SHIFT);
>   
>   	perf_event_mmap(vma);
> +	ksm_add_vma(vma);

IIUC, special mappings will never be considered a reasonable target for 
KSM (especially, because at least VM_DONTEXPAND is always set).

I think you can just drop this call.

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v6 1/3] mm: add new api to enable ksm per process
  2023-04-12 13:20   ` Matthew Wilcox
@ 2023-04-12 16:08     ` Stefan Roesch
  2023-04-12 16:29       ` Matthew Wilcox
  0 siblings, 1 reply; 17+ messages in thread
From: Stefan Roesch @ 2023-04-12 16:08 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
	linux-doc, akpm, hannes, Bagas Sanjaya


Matthew Wilcox <willy@infradead.org> writes:

> On Tue, Apr 11, 2023 at 08:16:46PM -0700, Stefan Roesch wrote:
>>  	case PR_SET_VMA:
>>  		error = prctl_set_vma(arg2, arg3, arg4, arg5);
>>  		break;
>> +#ifdef CONFIG_KSM
>> +	case PR_SET_MEMORY_MERGE:
>> +		if (mmap_write_lock_killable(me->mm))
>> +			return -EINTR;
>> +
>> +		if (arg2) {
>> +			int err = ksm_add_mm(me->mm);
>> +
>> +			if (!err)
>> +				ksm_add_vmas(me->mm);
>
> in the last version of this patch, you reported the error.  Now you
> swallow the error.  I have no idea which is correct, but you've
> changed the behaviour without explaining it, so I assume it's wrong.
>

I don't see how the error is swallowed in the arg2 case. If there is
an error, ksm_add_vmas is not executed and at the end of the function
the error is returned. Am I missing something?

>> +		} else {
>> +			clear_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
>> +		}
>> +		mmap_write_unlock(me->mm);
>> +		break;
>> +	case PR_GET_MEMORY_MERGE:
>> +		if (arg2 || arg3 || arg4 || arg5)
>> +			return -EINVAL;
>> +
>> +		error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
>> +		break;
>
> Why do we need a GET?  Just for symmetry, or is there an actual need for
> it?

There are three reasons:
- For symmetry
- The ksm sharing is inherited by child processes. This allows the test
  programs to verify that this is working.
- For child processes it might be useful to have the ability to check if
  ksm sharing has been enabled
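
As an illustration of those points, here is a minimal userspace sketch (not
part of the series; it defines the prctl constants locally in case the
installed headers predate this patch):

#include <stdio.h>
#include <sys/prctl.h>
#include <sys/wait.h>
#include <unistd.h>

/* Values from this series; define locally if <linux/prctl.h> is older. */
#ifndef PR_SET_MEMORY_MERGE
#define PR_SET_MEMORY_MERGE	67
#define PR_GET_MEMORY_MERGE	68
#endif

int main(void)
{
	pid_t pid;

	/* A job scheduler / seed process enables KSM for itself ... */
	if (prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0))
		perror("PR_SET_MEMORY_MERGE");

	pid = fork();
	if (pid == 0) {
		/* ... and every child inherits and can query the setting. */
		printf("child: merge enabled = %d\n",
		       prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0));
		_exit(0);
	}
	waitpid(pid, NULL, 0);
	return 0;
}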


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v6 1/3] mm: add new api to enable ksm per process
  2023-04-12 16:08     ` Stefan Roesch
@ 2023-04-12 16:29       ` Matthew Wilcox
  0 siblings, 0 replies; 17+ messages in thread
From: Matthew Wilcox @ 2023-04-12 16:29 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
	linux-doc, akpm, hannes, Bagas Sanjaya

On Wed, Apr 12, 2023 at 09:08:11AM -0700, Stefan Roesch wrote:
> 
> Matthew Wilcox <willy@infradead.org> writes:
> 
> > On Tue, Apr 11, 2023 at 08:16:46PM -0700, Stefan Roesch wrote:
> >>  	case PR_SET_VMA:
> >>  		error = prctl_set_vma(arg2, arg3, arg4, arg5);
> >>  		break;
> >> +#ifdef CONFIG_KSM
> >> +	case PR_SET_MEMORY_MERGE:
> >> +		if (mmap_write_lock_killable(me->mm))
> >> +			return -EINTR;
> >> +
> >> +		if (arg2) {
> >> +			int err = ksm_add_mm(me->mm);
> >> +
> >> +			if (!err)
> >> +				ksm_add_vmas(me->mm);
> >
> > in the last version of this patch, you reported the error.  Now you
> > swallow the error.  I have no idea which is correct, but you've
> > changed the behaviour without explaining it, so I assume it's wrong.
> >
> 
> I don't see how the error is swallowed in the arg2 case. If there is
> an error, ksm_add_vmas is not executed and at the end of the function
> the error is returned. Am I missing something?

You said 'int err' which declares a new variable.  If you want it
reported, just use 'error = ksm_add_mm(me->mm);'.
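
To spell that out, a small standalone sketch of the control flow (not the
actual syscall code; ksm_add_mm() is replaced by a stub that always fails):

#include <stdio.h>

static int ksm_add_mm_stub(void)
{
	return -22;			/* pretend ksm_add_mm() failed (-EINVAL) */
}

static long prctl_set_memory_merge(unsigned long arg2)
{
	long error = 0;			/* what the syscall ultimately returns */

	if (arg2) {
		int err = ksm_add_mm_stub();	/* block-local: the failure stops here */

		if (!err)
			puts("would call ksm_add_vmas()");
	}
	return error;			/* still 0 -- the caller never sees the failure */
}

int main(void)
{
	printf("prctl() returned %ld\n", prctl_set_memory_merge(1));
	return 0;
}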


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v6 1/3] mm: add new api to enable ksm per process
  2023-04-12 15:40   ` David Hildenbrand
@ 2023-04-12 16:44     ` Stefan Roesch
  2023-04-12 18:41       ` David Hildenbrand
  0 siblings, 1 reply; 17+ messages in thread
From: Stefan Roesch @ 2023-04-12 16:44 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, willy, Bagas Sanjaya


David Hildenbrand <david@redhat.com> writes:

> [...]
>
> Thanks for giving my suggestions a churn. I think we can further
> improve/simplify some things. I added some comments, but might have more
> regarding MMF_VM_MERGE_ANY / MMF_VM_MERGEABLE.
>
> [I'll try reworking your patch after I send this mail to play with some
> simplifications]
>
>>   arch/s390/mm/gmap.c            |   1 +
>>   include/linux/ksm.h            |  23 +++++--
>>   include/linux/sched/coredump.h |   1 +
>>   include/uapi/linux/prctl.h     |   2 +
>>   kernel/fork.c                  |   1 +
>>   kernel/sys.c                   |  23 +++++++
>>   mm/ksm.c                       | 111 ++++++++++++++++++++++++++-------
>>   mm/mmap.c                      |   7 +++
>>   8 files changed, 142 insertions(+), 27 deletions(-)
>> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
>> index 5a716bdcba05..9d85e5589474 100644
>> --- a/arch/s390/mm/gmap.c
>> +++ b/arch/s390/mm/gmap.c
>> @@ -2591,6 +2591,7 @@ int gmap_mark_unmergeable(void)
>>   	int ret;
>>   	VMA_ITERATOR(vmi, mm, 0);
>>   +	clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
>
> Okay, that should keep the existing mechanism working. (but users can still mess
> it up)
>
> Might be worth a comment
>
> /*
>  * Make sure to disable KSM (if enabled for the whole process or
>  * individual VMAs). Note that nothing currently hinders user space
>  * from re-enabling it.
>  */
>

I'll add the comment.

>>   	for_each_vma(vmi, vma) {
>>   		/* Copy vm_flags to avoid partial modifications in ksm_madvise */
>>   		vm_flags = vma->vm_flags;
>> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
>> index 7e232ba59b86..f24f9faf1561 100644
>> --- a/include/linux/ksm.h
>> +++ b/include/linux/ksm.h
>> @@ -18,20 +18,29 @@
>>   #ifdef CONFIG_KSM
>>   int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>>   		unsigned long end, int advice, unsigned long *vm_flags);
>> -int __ksm_enter(struct mm_struct *mm);
>> -void __ksm_exit(struct mm_struct *mm);
>> +
>> +int ksm_add_mm(struct mm_struct *mm);
>> +void ksm_add_vma(struct vm_area_struct *vma);
>> +void ksm_add_vmas(struct mm_struct *mm);
>> +
>> +int __ksm_enter(struct mm_struct *mm, int flag);
>> +void __ksm_exit(struct mm_struct *mm, int flag);
>>     static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
>>   {
>> +	if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
>> +		return ksm_add_mm(mm);
>
> ksm_fork() runs before copying any VMAs. Copying the bit should be sufficient.
>
> Would it be possible to rework to something like:
>
> if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
> 	set_bit(MMF_VM_MERGE_ANY, &mm->flags)
> if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
> 	return __ksm_enter(mm);
>

That will work.
> work? IOW, not exporting ksm_add_mm() and not passing a flag to __ksm_enter() --
> it would simply set MMF_VM_MERGEABLE ?
>

ksm_add_mm() is also used in prctl (kernel/sys.c). Do you want to make a
similar change there?
>
> I rememebr proposing that enabling MMF_VM_MERGE_ANY would simply enable
> MMF_VM_MERGEABLE.
>
>>   	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
>> -		return __ksm_enter(mm);
>> +		return __ksm_enter(mm, MMF_VM_MERGEABLE);
>>   	return 0;
>>   }
>>     static inline void ksm_exit(struct mm_struct *mm)
>>   {
>> -	if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
>> -		__ksm_exit(mm);
>> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
>> +		__ksm_exit(mm, MMF_VM_MERGE_ANY);
>> +	else if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
>> +		__ksm_exit(mm, MMF_VM_MERGEABLE);
>
> Can we do
>
> if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
> 	__ksm_exit(mm);
>
> And simply let __ksm_exit() clear both bits?
>
Yes, I'll make the change.
>>   }
>>     /*
>> @@ -53,6 +62,10 @@ void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
>>     #else  /* !CONFIG_KSM */
>>
>
> [...]
>
>>   #endif /* _LINUX_SCHED_COREDUMP_H */
>> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
>> index 1312a137f7fb..759b3f53e53f 100644
>> --- a/include/uapi/linux/prctl.h
>> +++ b/include/uapi/linux/prctl.h
>> @@ -290,4 +290,6 @@ struct prctl_mm_map {
>>   #define PR_SET_VMA		0x53564d41
>>   # define PR_SET_VMA_ANON_NAME		0
>>   +#define PR_SET_MEMORY_MERGE		67
>> +#define PR_GET_MEMORY_MERGE		68
>>   #endif /* _LINUX_PRCTL_H */
>> diff --git a/kernel/fork.c b/kernel/fork.c
>> index f68954d05e89..1520697cf6c7 100644
>> --- a/kernel/fork.c
>> +++ b/kernel/fork.c
>> @@ -686,6 +686,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
>>   		if (vma_iter_bulk_store(&vmi, tmp))
>>   			goto fail_nomem_vmi_store;
>>   +		ksm_add_vma(tmp);
>
> Is this really required? The relevant VMAs should have VM_MERGEABLE set.
>
I'll fix it.

>>   		mm->map_count++;
>>   		if (!(tmp->vm_flags & VM_WIPEONFORK))
>>   			retval = copy_page_range(tmp, mpnt);
>> diff --git a/kernel/sys.c b/kernel/sys.c
>> index 495cd87d9bf4..9bba163d2d04 100644
>> --- a/kernel/sys.c
>> +++ b/kernel/sys.c
>> @@ -15,6 +15,7 @@
>>   #include <linux/highuid.h>
>>   #include <linux/fs.h>
>>   #include <linux/kmod.h>
>> +#include <linux/ksm.h>
>>   #include <linux/perf_event.h>
>>   #include <linux/resource.h>
>>   #include <linux/kernel.h>
>> @@ -2661,6 +2662,28 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>>   	case PR_SET_VMA:
>>   		error = prctl_set_vma(arg2, arg3, arg4, arg5);
>>   		break;
>> +#ifdef CONFIG_KSM
>> +	case PR_SET_MEMORY_MERGE:
>> +		if (mmap_write_lock_killable(me->mm))
>> +			return -EINTR;
>> +
>> +		if (arg2) {
>> +			int err = ksm_add_mm(me->mm);
>> +
>> +			if (!err)
>> +				ksm_add_vmas(me->mm);
>> +		} else {
>> +			clear_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
>
> Okay, so disabling doesn't actually unshare anything.
>
>> +		}
>> +		mmap_write_unlock(me->mm);
>> +		break;
>> +	case PR_GET_MEMORY_MERGE:
>> +		if (arg2 || arg3 || arg4 || arg5)
>> +			return -EINVAL;
>> +
>> +		error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
>> +		break;
>> +#endif
>>   	default:
>>   		error = -EINVAL;
>>   		break;
>> diff --git a/mm/ksm.c b/mm/ksm.c
>> index d7bd28199f6c..ab95ae0f9def 100644
>> --- a/mm/ksm.c
>> +++ b/mm/ksm.c
>> @@ -534,10 +534,33 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
>>   	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
>>   }
>>   +static bool vma_ksm_compatible(struct vm_area_struct *vma)
>> +{
>> +	if (vma->vm_flags & (VM_SHARED  | VM_MAYSHARE   | VM_PFNMAP  |
>> +			     VM_IO      | VM_DONTEXPAND | VM_HUGETLB |
>> +			     VM_MIXEDMAP))
>> +		return false;		/* just ignore the advice */
>> +
>> +	if (vma_is_dax(vma))
>> +		return false;
>> +
>> +#ifdef VM_SAO
>> +	if (vma->vm_flags & VM_SAO)
>> +		return false;
>> +#endif
>> +#ifdef VM_SPARC_ADI
>> +	if (vma->vm_flags & VM_SPARC_ADI)
>> +		return false;
>> +#endif
>> +
>> +	return true;
>> +}
>> +
>>   static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
>>   		unsigned long addr)
>>   {
>>   	struct vm_area_struct *vma;
>> +
>
> unrelated change
>
Removed.

>>   	if (ksm_test_exit(mm))
>>   		return NULL;
>>   	vma = vma_lookup(mm, addr);
>> @@ -1065,6 +1088,7 @@ static int unmerge_and_remove_all_rmap_items(void)
>>     			mm_slot_free(mm_slot_cache, mm_slot);
>>   			clear_bit(MMF_VM_MERGEABLE, &mm->flags);
>> +			clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
>>   			mmdrop(mm);
>>   		} else
>>   			spin_unlock(&ksm_mmlist_lock);
>> @@ -2495,6 +2519,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>>     		mm_slot_free(mm_slot_cache, mm_slot);
>>   		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
>> +		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
>>   		mmap_read_unlock(mm);
>>   		mmdrop(mm);
>>   	} else {
>> @@ -2571,6 +2596,63 @@ static int ksm_scan_thread(void *nothing)
>>   	return 0;
>>   }
>>   +static void __ksm_add_vma(struct vm_area_struct *vma)
>> +{
>> +	unsigned long vm_flags = vma->vm_flags;
>> +
>> +	if (vm_flags & VM_MERGEABLE)
>> +		return;
>> +
>> +	if (vma_ksm_compatible(vma)) {
>> +		vm_flags |= VM_MERGEABLE;
>> +		vm_flags_reset(vma, vm_flags);
>> +	}
>> +}
>> +
>> +/**
>> + * ksm_add_vma - Mark vma as mergeable
>
> "if compatible"
>
I'll add the above.

>> + *
>> + * @vma:  Pointer to vma
>> + */
>> +void ksm_add_vma(struct vm_area_struct *vma)
>> +{
>> +	struct mm_struct *mm = vma->vm_mm;
>> +
>> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
>> +		__ksm_add_vma(vma);
>> +}
>> +
>> +/**
>> + * ksm_add_vmas - Mark all vma's of a process as mergeable
>> + *
>> + * @mm:  Pointer to mm
>> + */
>> +void ksm_add_vmas(struct mm_struct *mm)
>
> I'd suggest calling this
>
I guess you forgot your name suggestion?

>> +{
>> +	struct vm_area_struct *vma;
>> +
>> +	VMA_ITERATOR(vmi, mm, 0);
>> +	for_each_vma(vmi, vma)
>> +		__ksm_add_vma(vma);
>> +}
>> +
>> +/**
>> + * ksm_add_mm - Add mm to mm ksm list
>> + *
>> + * @mm:  Pointer to mm
>> + *
>> + * Returns 0 on success, otherwise error code
>> + */
>> +int ksm_add_mm(struct mm_struct *mm)
>> +{
>> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
>> +		return -EINVAL;
>> +	if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
>> +		return -EINVAL;
>> +
>> +	return __ksm_enter(mm, MMF_VM_MERGE_ANY);
>> +}
>> +
>>   int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>>   		unsigned long end, int advice, unsigned long *vm_flags)
>>   {
>> @@ -2579,28 +2661,13 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>>     	switch (advice) {
>>   	case MADV_MERGEABLE:
>> -		/*
>> -		 * Be somewhat over-protective for now!
>> -		 */
>> -		if (*vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
>> -				 VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
>> -				 VM_HUGETLB | VM_MIXEDMAP))
>> -			return 0;		/* just ignore the advice */
>> -
>> -		if (vma_is_dax(vma))
>> +		if (vma->vm_flags & VM_MERGEABLE)
>>   			return 0;
>> -
>> -#ifdef VM_SAO
>> -		if (*vm_flags & VM_SAO)
>> +		if (!vma_ksm_compatible(vma))
>>   			return 0;
>> -#endif
>> -#ifdef VM_SPARC_ADI
>> -		if (*vm_flags & VM_SPARC_ADI)
>> -			return 0;
>> -#endif
>>     		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
>> -			err = __ksm_enter(mm);
>> +			err = __ksm_enter(mm, MMF_VM_MERGEABLE);
>>   			if (err)
>>   				return err;
>>   		}
>> @@ -2626,7 +2693,7 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>>   }
>>   EXPORT_SYMBOL_GPL(ksm_madvise);
>>   -int __ksm_enter(struct mm_struct *mm)
>> +int __ksm_enter(struct mm_struct *mm, int flag)
>>   {
>>   	struct ksm_mm_slot *mm_slot;
>>   	struct mm_slot *slot;
>> @@ -2659,7 +2726,7 @@ int __ksm_enter(struct mm_struct *mm)
>>   		list_add_tail(&slot->mm_node, &ksm_scan.mm_slot->slot.mm_node);
>>   	spin_unlock(&ksm_mmlist_lock);
>>   -	set_bit(MMF_VM_MERGEABLE, &mm->flags);
>> +	set_bit(flag, &mm->flags);
>>   	mmgrab(mm);
>>     	if (needs_wakeup)
>> @@ -2668,7 +2735,7 @@ int __ksm_enter(struct mm_struct *mm)
>>   	return 0;
>>   }
>>   -void __ksm_exit(struct mm_struct *mm)
>> +void __ksm_exit(struct mm_struct *mm, int flag)
>>   {
>>   	struct ksm_mm_slot *mm_slot;
>>   	struct mm_slot *slot;
>> @@ -2700,7 +2767,7 @@ void __ksm_exit(struct mm_struct *mm)
>>     	if (easy_to_free) {
>>   		mm_slot_free(mm_slot_cache, mm_slot);
>> -		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
>> +		clear_bit(flag, &mm->flags);
>>   		mmdrop(mm);
>>   	} else if (mm_slot) {
>>   		mmap_write_lock(mm);
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index 740b54be3ed4..483e182e0b9d 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -46,6 +46,7 @@
>>   #include <linux/pkeys.h>
>>   #include <linux/oom.h>
>>   #include <linux/sched/mm.h>
>> +#include <linux/ksm.h>
>>     #include <linux/uaccess.h>
>>   #include <asm/cacheflush.h>
>> @@ -2213,6 +2214,8 @@ int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
>>   	/* vma_complete stores the new vma */
>>   	vma_complete(&vp, vmi, vma->vm_mm);
>>   +	ksm_add_vma(new);
>> +
>
> Splitting a VMA shouldn't modify VM_MERGEABLE, so I assume this is not required?
>
I'll fix it.

>>   	/* Success. */
>>   	if (new_below)
>>   		vma_next(vmi);
>> @@ -2664,6 +2667,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>>   	if (file && vm_flags & VM_SHARED)
>>   		mapping_unmap_writable(file->f_mapping);
>>   	file = vma->vm_file;
>> +	ksm_add_vma(vma);
>>   expanded:
>>   	perf_event_mmap(vma);
>>   @@ -2936,6 +2940,7 @@ static int do_brk_flags(struct vma_iterator *vmi,
>> struct vm_area_struct *vma,
>>   		goto mas_store_fail;
>>     	mm->map_count++;
>> +	ksm_add_vma(vma);
>>   out:
>>   	perf_event_mmap(vma);
>>   	mm->total_vm += len >> PAGE_SHIFT;
>> @@ -3180,6 +3185,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
>>   		if (vma_link(mm, new_vma))
>>   			goto out_vma_link;
>>   		*need_rmap_locks = false;
>> +		ksm_add_vma(new_vma);
>
> Copying shouldn't modify VM_MERGEABLE, so I think this is not required?
>
I'll fix it.

>>   	}
>>   	validate_mm_mt(mm);
>>   	return new_vma;
>> @@ -3356,6 +3362,7 @@ static struct vm_area_struct *__install_special_mapping(
>>   	vm_stat_account(mm, vma->vm_flags, len >> PAGE_SHIFT);
>>     	perf_event_mmap(vma);
>> +	ksm_add_vma(vma);
>
> IIUC, special mappings will never be considered a reasonable target for KSM
> (especially, because at least VM_DONTEXPAND is always set).
>
> I think you can just drop this call.
I dropped it.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v6 1/3] mm: add new api to enable ksm per process
  2023-04-12 16:44     ` Stefan Roesch
@ 2023-04-12 18:41       ` David Hildenbrand
  2023-04-12 19:08         ` David Hildenbrand
  0 siblings, 1 reply; 17+ messages in thread
From: David Hildenbrand @ 2023-04-12 18:41 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, willy, Bagas Sanjaya

[...]
> That will work.
>> work? IOW, not exporting ksm_add_mm() and not passing a flag to __ksm_enter() --
>> it would simply set MMF_VM_MERGEABLE ?
>>
> 
> ksm_add_mm() is also used in prctl (kernel/sys.c). Do you want to make a
> similar change there?

Yes.

>>> + *
>>> + * @vma:  Pointer to vma
>>> + */
>>> +void ksm_add_vma(struct vm_area_struct *vma)
>>> +{
>>> +	struct mm_struct *mm = vma->vm_mm;
>>> +
>>> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
>>> +		__ksm_add_vma(vma);
>>> +}
>>> +
>>> +/**
>>> + * ksm_add_vmas - Mark all vma's of a process as mergeable
>>> + *
>>> + * @mm:  Pointer to mm
>>> + */
>>> +void ksm_add_vmas(struct mm_struct *mm)
>>
>> I'd suggest calling this
>>
> I guess you forgot your name suggestion?

Yeah, I reconsidered because the first idea I had was not particularly 
good. Maybe

ksm_enable_for_all_vmas()

But not so sure. If you think the "add" terminology is a good fit, keep 
it like that.


Thanks for bearing with me :)

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v6 1/3] mm: add new api to enable ksm per process
  2023-04-12 18:41       ` David Hildenbrand
@ 2023-04-12 19:08         ` David Hildenbrand
  2023-04-12 19:55           ` Stefan Roesch
  0 siblings, 1 reply; 17+ messages in thread
From: David Hildenbrand @ 2023-04-12 19:08 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, willy, Bagas Sanjaya

On 12.04.23 20:41, David Hildenbrand wrote:
> [...]
>> That will work.
>>> work? IOW, not exporting ksm_add_mm() and not passing a flag to __ksm_enter() --
>>> it would simply set MMF_VM_MERGEABLE ?
>>>
>>
>> ksm_add_mm() is also used in prctl (kernel/sys.c). Do you want to make a
>> similar change there?
> 
> Yes.
> 
>>>> + *
>>>> + * @vma:  Pointer to vma
>>>> + */
>>>> +void ksm_add_vma(struct vm_area_struct *vma)
>>>> +{
>>>> +	struct mm_struct *mm = vma->vm_mm;
>>>> +
>>>> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
>>>> +		__ksm_add_vma(vma);
>>>> +}
>>>> +
>>>> +/**
>>>> + * ksm_add_vmas - Mark all vma's of a process as mergeable
>>>> + *
>>>> + * @mm:  Pointer to mm
>>>> + */
>>>> +void ksm_add_vmas(struct mm_struct *mm)
>>>
>>> I'd suggest calling this
>>>
>> I guess you forgot your name suggestion?
> 
> Yeah, I reconsidered because the first idea I had was not particularly
> good. Maybe
> 
> ksm_enable_for_all_vmas()
> 
> But not so sure. If you think the "add" terminology is a good fit, keep
> it like that.
> 
> 
> Thanks for bearing with me :)
> 

I briefly played with your patch to see how much it can be simplified.
Always enabling ksm (setting MMF_VM_MERGEABLE) before setting
MMF_VM_MERGE_ANY might simplify things. ksm_enable_merge_any() [or however it should
be called] and ksm_fork() contain the interesting bits.


Feel free to incorporate what you consider valuable (uncompiled,
untested).


diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 5a716bdcba05..5b2eef31398e 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -2591,6 +2591,12 @@ int gmap_mark_unmergeable(void)
  	int ret;
  	VMA_ITERATOR(vmi, mm, 0);
  
+	/*
+	 * Make sure to disable KSM (if enabled for the whole process or
+	 * individual VMAs). Note that nothing currently hinders user space
+	 * from re-enabling it.
+	 */
+	clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
  	for_each_vma(vmi, vma) {
  		/* Copy vm_flags to avoid partial modifications in ksm_madvise */
  		vm_flags = vma->vm_flags;
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 7e232ba59b86..c638b034d586 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -18,13 +18,24 @@
  #ifdef CONFIG_KSM
  int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
  		unsigned long end, int advice, unsigned long *vm_flags);
+
+void ksm_add_vma(struct vm_area_struct *vma);
+int ksm_enable_merge_any(struct mm_struct *mm);
+
  int __ksm_enter(struct mm_struct *mm);
  void __ksm_exit(struct mm_struct *mm);
  
  static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
  {
-	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
-		return __ksm_enter(mm);
+	int ret;
+
+	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags)) {
+		ret = __ksm_enter(mm);
+		if (ret)
+			return ret;
+	}
+	if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
+		set_bit(MMF_VM_MERGE_ANY, &mm->flags);
  	return 0;
  }
  
@@ -53,6 +64,10 @@ void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
  
  #else  /* !CONFIG_KSM */
  
+static inline void ksm_add_vma(struct vm_area_struct *vma)
+{
+}
+
  static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
  {
  	return 0;
diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index 0e17ae7fbfd3..0ee96ea7a0e9 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm)
  #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
  				 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK)
  
+#define MMF_VM_MERGE_ANY	29
  #endif /* _LINUX_SCHED_COREDUMP_H */
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 1312a137f7fb..759b3f53e53f 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -290,4 +290,6 @@ struct prctl_mm_map {
  #define PR_SET_VMA		0x53564d41
  # define PR_SET_VMA_ANON_NAME		0
  
+#define PR_SET_MEMORY_MERGE		67
+#define PR_GET_MEMORY_MERGE		68
  #endif /* _LINUX_PRCTL_H */
diff --git a/kernel/sys.c b/kernel/sys.c
index 495cd87d9bf4..8c2e50edeb18 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -15,6 +15,7 @@
  #include <linux/highuid.h>
  #include <linux/fs.h>
  #include <linux/kmod.h>
+#include <linux/ksm.h>
  #include <linux/perf_event.h>
  #include <linux/resource.h>
  #include <linux/kernel.h>
@@ -2661,6 +2662,30 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
  	case PR_SET_VMA:
  		error = prctl_set_vma(arg2, arg3, arg4, arg5);
  		break;
+#ifdef CONFIG_KSM
+	case PR_SET_MEMORY_MERGE:
+		if (mmap_write_lock_killable(me->mm))
+			return -EINTR;
+
+		if (arg2) {
+			error = ksm_enable_merge_any(me->mm);
+		} else {
+			/*
+			 * TODO: we might want disable KSM on all VMAs and
+			 * trigger unsharing to completely disable KSM.
+			 */
+			clear_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
+			error = 0;
+		}
+		mmap_write_unlock(me->mm);
+		break;
+	case PR_GET_MEMORY_MERGE:
+		if (arg2 || arg3 || arg4 || arg5)
+			return -EINVAL;
+
+		error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
+		break;
+#endif
  	default:
  		error = -EINVAL;
  		break;
diff --git a/mm/ksm.c b/mm/ksm.c
index 2b8d30068cbb..76ceec35395c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -512,6 +512,28 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
  	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
  }
  
+static bool vma_ksm_compatible(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & (VM_SHARED  | VM_MAYSHARE   | VM_PFNMAP  |
+			     VM_IO      | VM_DONTEXPAND | VM_HUGETLB |
+			     VM_MIXEDMAP))
+		return false;		/* just ignore the advice */
+
+	if (vma_is_dax(vma))
+		return false;
+
+#ifdef VM_SAO
+	if (vma->vm_flags & VM_SAO)
+		return false;
+#endif
+#ifdef VM_SPARC_ADI
+	if (vma->vm_flags & VM_SPARC_ADI)
+		return false;
+#endif
+
+	return true;
+}
+
  static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
  		unsigned long addr)
  {
@@ -1020,6 +1042,7 @@ static int unmerge_and_remove_all_rmap_items(void)
  
  			mm_slot_free(mm_slot_cache, mm_slot);
  			clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+			clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
  			mmdrop(mm);
  		} else
  			spin_unlock(&ksm_mmlist_lock);
@@ -2395,6 +2418,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
  
  		mm_slot_free(mm_slot_cache, mm_slot);
  		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
  		mmap_read_unlock(mm);
  		mmdrop(mm);
  	} else {
@@ -2471,6 +2495,52 @@ static int ksm_scan_thread(void *nothing)
  	return 0;
  }
  
+static void __ksm_add_vma(struct vm_area_struct *vma)
+{
+	unsigned long vm_flags = vma->vm_flags;
+
+	if (vm_flags & VM_MERGEABLE)
+		return;
+
+	if (vma_ksm_compatible(vma)) {
+		vm_flags |= VM_MERGEABLE;
+		vm_flags_reset(vma, vm_flags);
+	}
+}
+
+/**
+ * ksm_add_vma - Mark vma as mergeable
+ *
+ * @vma:  Pointer to vma
+ */
+void ksm_add_vma(struct vm_area_struct *vma)
+{
+	struct mm_struct *mm = vma->vm_mm;
+
+	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+		__ksm_add_vma(vma);
+}
+
+int ksm_enable_merge_any(struct mm_struct *mm)
+{
+	struct vm_area_struct *vma;
+	int ret;
+
+	if (test_bit(MMF_VM_MERGE_ANY, mm->flags))
+		return 0;
+
+	if (!test_bit(MMF_VM_MERGEABLE, mm->flags)) {
+		ret = __ksm_enter(mm);
+		if (ret)
+			return ret;
+	}
+	set_bit(MMF_VM_MERGE_ANY, &mm->flags);
+
+	VMA_ITERATOR(vmi, mm, 0);
+	for_each_vma(vmi, vma)
+		__ksm_add_vma(vma);
+}
+
  int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
  		unsigned long end, int advice, unsigned long *vm_flags)
  {
@@ -2479,25 +2549,10 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
  
  	switch (advice) {
  	case MADV_MERGEABLE:
-		/*
-		 * Be somewhat over-protective for now!
-		 */
-		if (*vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
-				 VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
-				 VM_HUGETLB | VM_MIXEDMAP))
-			return 0;		/* just ignore the advice */
-
-		if (vma_is_dax(vma))
+		if (vma->vm_flags & VM_MERGEABLE)
  			return 0;
-
-#ifdef VM_SAO
-		if (*vm_flags & VM_SAO)
-			return 0;
-#endif
-#ifdef VM_SPARC_ADI
-		if (*vm_flags & VM_SPARC_ADI)
+		if (!vma_ksm_compatible(vma))
  			return 0;
-#endif
  
  		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
  			err = __ksm_enter(mm);
@@ -2601,6 +2656,7 @@ void __ksm_exit(struct mm_struct *mm)
  	if (easy_to_free) {
  		mm_slot_free(mm_slot_cache, mm_slot);
  		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
  		mmdrop(mm);
  	} else if (mm_slot) {
  		mmap_write_lock(mm);
diff --git a/mm/mmap.c b/mm/mmap.c
index ff68a67a2a7c..1f8619ff58ca 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -46,6 +46,7 @@
  #include <linux/pkeys.h>
  #include <linux/oom.h>
  #include <linux/sched/mm.h>
+#include <linux/ksm.h>
  
  #include <linux/uaccess.h>
  #include <asm/cacheflush.h>
@@ -2659,6 +2660,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
  	if (file && vm_flags & VM_SHARED)
  		mapping_unmap_writable(file->f_mapping);
  	file = vma->vm_file;
+	ksm_add_vma(vma);
  expanded:
  	perf_event_mmap(vma);
  
@@ -2931,6 +2933,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
  		goto mas_store_fail;
  
  	mm->map_count++;
+	ksm_add_vma(vma);
  out:
  	perf_event_mmap(vma);
  	mm->total_vm += len >> PAGE_SHIFT;
-- 
2.39.2


-- 
Thanks,

David / dhildenb



^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v6 1/3] mm: add new api to enable ksm per process
  2023-04-12 19:08         ` David Hildenbrand
@ 2023-04-12 19:55           ` Stefan Roesch
  2023-04-13  9:46             ` David Hildenbrand
  0 siblings, 1 reply; 17+ messages in thread
From: Stefan Roesch @ 2023-04-12 19:55 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, willy, Bagas Sanjaya


David Hildenbrand <david@redhat.com> writes:

> On 12.04.23 20:41, David Hildenbrand wrote:
>> [...]
>>> That will work.
>>>> work? IOW, not exporting ksm_add_mm() and not passing a flag to __ksm_enter() --
>>>> it would simply set MMF_VM_MERGEABLE ?
>>>>
>>>
>>> ksm_add_mm() is also used in prctl (kernel/sys.c). Do you want to make a
>>> similar change there?
>> Yes.
>>
>>>>> + *
>>>>> + * @vma:  Pointer to vma
>>>>> + */
>>>>> +void ksm_add_vma(struct vm_area_struct *vma)
>>>>> +{
>>>>> +	struct mm_struct *mm = vma->vm_mm;
>>>>> +
>>>>> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
>>>>> +		__ksm_add_vma(vma);
>>>>> +}
>>>>> +
>>>>> +/**
>>>>> + * ksm_add_vmas - Mark all vma's of a process as mergeable
>>>>> + *
>>>>> + * @mm:  Pointer to mm
>>>>> + */
>>>>> +void ksm_add_vmas(struct mm_struct *mm)
>>>>
>>>> I'd suggest calling this
>>>>
>>> I guess you forgot your name suggestion?
>> Yeah, I reconsidered because the first idea I had was not particularly
>> good. Maybe
>> ksm_enable_for_all_vmas()
>> But not so sure. If you think the "add" terminology is a good fit, keep
>> it like that.
>> Thanks for bearing with me :)
>>
>
> I briefly played with your patch to see how much it can be simplified.
> Always enabling ksm (setting MMF_VM_MERGEABLE) before setting
> MMF_VM_MERGE_ANY might simplify things. ksm_enable_merge_any() [or however it should
> be called] and ksm_fork() contain the interesting bits.
>
>
> Feel free to incorporate what you consider valuable (uncompiled,
> untested).
>
I added most of it. The only change is that I kept ksm_add_vmas as a
static function, otherwise I need to define the VMA_ITERATOR at the top
of the function.
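
For reference, a sketch of the resulting shape (uncompiled, combining the two
quoted patches; __ksm_enter() and __ksm_add_vma() are as defined earlier in
this thread):

/* kept static so the iterator can be declared at the top of its own block */
static void ksm_add_vmas(struct mm_struct *mm)
{
	struct vm_area_struct *vma;

	VMA_ITERATOR(vmi, mm, 0);
	for_each_vma(vmi, vma)
		__ksm_add_vma(vma);
}

int ksm_enable_merge_any(struct mm_struct *mm)
{
	int err;

	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
		return 0;

	if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
		err = __ksm_enter(mm);
		if (err)
			return err;
	}

	set_bit(MMF_VM_MERGE_ANY, &mm->flags);
	ksm_add_vmas(mm);

	return 0;
}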

>
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index 5a716bdcba05..5b2eef31398e 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -2591,6 +2591,12 @@ int gmap_mark_unmergeable(void)
>  	int ret;
>  	VMA_ITERATOR(vmi, mm, 0);
>  +	/*
> +	 * Make sure to disable KSM (if enabled for the whole process or
> +	 * individual VMAs). Note that nothing currently hinders user space
> +	 * from re-enabling it.
> +	 */
> +	clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
>  	for_each_vma(vmi, vma) {
>  		/* Copy vm_flags to avoid partial modifications in ksm_madvise */
>  		vm_flags = vma->vm_flags;
> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
> index 7e232ba59b86..c638b034d586 100644
> --- a/include/linux/ksm.h
> +++ b/include/linux/ksm.h
> @@ -18,13 +18,24 @@
>  #ifdef CONFIG_KSM
>  int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>  		unsigned long end, int advice, unsigned long *vm_flags);
> +
> +void ksm_add_vma(struct vm_area_struct *vma);
> +int ksm_enable_merge_any(struct mm_struct *mm);
> +
>  int __ksm_enter(struct mm_struct *mm);
>  void __ksm_exit(struct mm_struct *mm);
>    static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
>  {
> -	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
> -		return __ksm_enter(mm);
> +	int ret;
> +
> +	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags)) {
> +		ret = __ksm_enter(mm);
> +		if (ret)
> +			return ret;
> +	}
> +	if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
> +		set_bit(MMF_VM_MERGE_ANY, &mm->flags);
>  	return 0;
>  }
>  @@ -53,6 +64,10 @@ void folio_migrate_ksm(struct folio *newfolio, struct folio
> *folio);
>    #else  /* !CONFIG_KSM */
>  +static inline void ksm_add_vma(struct vm_area_struct *vma)
> +{
> +}
> +
>  static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
>  {
>  	return 0;
> diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
> index 0e17ae7fbfd3..0ee96ea7a0e9 100644
> --- a/include/linux/sched/coredump.h
> +++ b/include/linux/sched/coredump.h
> @@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm)
>  #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
>  				 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK)
>  +#define MMF_VM_MERGE_ANY	29
>  #endif /* _LINUX_SCHED_COREDUMP_H */
> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
> index 1312a137f7fb..759b3f53e53f 100644
> --- a/include/uapi/linux/prctl.h
> +++ b/include/uapi/linux/prctl.h
> @@ -290,4 +290,6 @@ struct prctl_mm_map {
>  #define PR_SET_VMA		0x53564d41
>  # define PR_SET_VMA_ANON_NAME		0
>  +#define PR_SET_MEMORY_MERGE		67
> +#define PR_GET_MEMORY_MERGE		68
>  #endif /* _LINUX_PRCTL_H */
> diff --git a/kernel/sys.c b/kernel/sys.c
> index 495cd87d9bf4..8c2e50edeb18 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -15,6 +15,7 @@
>  #include <linux/highuid.h>
>  #include <linux/fs.h>
>  #include <linux/kmod.h>
> +#include <linux/ksm.h>
>  #include <linux/perf_event.h>
>  #include <linux/resource.h>
>  #include <linux/kernel.h>
> @@ -2661,6 +2662,30 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>  	case PR_SET_VMA:
>  		error = prctl_set_vma(arg2, arg3, arg4, arg5);
>  		break;
> +#ifdef CONFIG_KSM
> +	case PR_SET_MEMORY_MERGE:
> +		if (mmap_write_lock_killable(me->mm))
> +			return -EINTR;
> +
> +		if (arg2) {
> +			error = ksm_enable_merge_any(me->mm);
> +		} else {
> +			/*
> +			 * TODO: we might want disable KSM on all VMAs and
> +			 * trigger unsharing to completely disable KSM.
> +			 */
> +			clear_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
> +			error = 0;
> +		}
> +		mmap_write_unlock(me->mm);
> +		break;
> +	case PR_GET_MEMORY_MERGE:
> +		if (arg2 || arg3 || arg4 || arg5)
> +			return -EINVAL;
> +
> +		error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
> +		break;
> +#endif
>  	default:
>  		error = -EINVAL;
>  		break;
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 2b8d30068cbb..76ceec35395c 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -512,6 +512,28 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
>  	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
>  }
>  +static bool vma_ksm_compatible(struct vm_area_struct *vma)
> +{
> +	if (vma->vm_flags & (VM_SHARED  | VM_MAYSHARE   | VM_PFNMAP  |
> +			     VM_IO      | VM_DONTEXPAND | VM_HUGETLB |
> +			     VM_MIXEDMAP))
> +		return false;		/* just ignore the advice */
> +
> +	if (vma_is_dax(vma))
> +		return false;
> +
> +#ifdef VM_SAO
> +	if (vma->vm_flags & VM_SAO)
> +		return false;
> +#endif
> +#ifdef VM_SPARC_ADI
> +	if (vma->vm_flags & VM_SPARC_ADI)
> +		return false;
> +#endif
> +
> +	return true;
> +}
> +
>  static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
>  		unsigned long addr)
>  {
> @@ -1020,6 +1042,7 @@ static int unmerge_and_remove_all_rmap_items(void)
>    			mm_slot_free(mm_slot_cache, mm_slot);
>  			clear_bit(MMF_VM_MERGEABLE, &mm->flags);
> +			clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
>  			mmdrop(mm);
>  		} else
>  			spin_unlock(&ksm_mmlist_lock);
> @@ -2395,6 +2418,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>    		mm_slot_free(mm_slot_cache, mm_slot);
>  		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
> +		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
>  		mmap_read_unlock(mm);
>  		mmdrop(mm);
>  	} else {
> @@ -2471,6 +2495,52 @@ static int ksm_scan_thread(void *nothing)
>  	return 0;
>  }
>  +static void __ksm_add_vma(struct vm_area_struct *vma)
> +{
> +	unsigned long vm_flags = vma->vm_flags;
> +
> +	if (vm_flags & VM_MERGEABLE)
> +		return;
> +
> +	if (vma_ksm_compatible(vma)) {
> +		vm_flags |= VM_MERGEABLE;
> +		vm_flags_reset(vma, vm_flags);
> +	}
> +}
> +
> +/**
> + * ksm_add_vma - Mark vma as mergeable
> + *
> + * @vma:  Pointer to vma
> + */
> +void ksm_add_vma(struct vm_area_struct *vma)
> +{
> +	struct mm_struct *mm = vma->vm_mm;
> +
> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
> +		__ksm_add_vma(vma);
> +}
> +
> +int ksm_enable_merge_any(struct mm_struct *mm)
> +{
> +	struct vm_area_struct *vma;
> +	int ret;
> +
> +	if (test_bit(MMF_VM_MERGE_ANY, mm->flags))
> +		return 0;
> +
> +	if (!test_bit(MMF_VM_MERGEABLE, mm->flags)) {
> +		ret = __ksm_enter(mm);
> +		if (ret)
> +			return ret;
> +	}
> +	set_bit(MMF_VM_MERGE_ANY, &mm->flags);
> +
> +	VMA_ITERATOR(vmi, mm, 0);
> +	for_each_vma(vmi, vma)
> +		__ksm_add_vma(vma);
> +}
> +
>  int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>  		unsigned long end, int advice, unsigned long *vm_flags)
>  {
> @@ -2479,25 +2549,10 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>    	switch (advice) {
>  	case MADV_MERGEABLE:
> -		/*
> -		 * Be somewhat over-protective for now!
> -		 */
> -		if (*vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
> -				 VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
> -				 VM_HUGETLB | VM_MIXEDMAP))
> -			return 0;		/* just ignore the advice */
> -
> -		if (vma_is_dax(vma))
> +		if (vma->vm_flags & VM_MERGEABLE)
>  			return 0;
> -
> -#ifdef VM_SAO
> -		if (*vm_flags & VM_SAO)
> -			return 0;
> -#endif
> -#ifdef VM_SPARC_ADI
> -		if (*vm_flags & VM_SPARC_ADI)
> +		if (!vma_ksm_compatible(vma))
>  			return 0;
> -#endif
>    		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
>  			err = __ksm_enter(mm);
> @@ -2601,6 +2656,7 @@ void __ksm_exit(struct mm_struct *mm)
>  	if (easy_to_free) {
>  		mm_slot_free(mm_slot_cache, mm_slot);
>  		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
> +		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
>  		mmdrop(mm);
>  	} else if (mm_slot) {
>  		mmap_write_lock(mm);
> diff --git a/mm/mmap.c b/mm/mmap.c
> index ff68a67a2a7c..1f8619ff58ca 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -46,6 +46,7 @@
>  #include <linux/pkeys.h>
>  #include <linux/oom.h>
>  #include <linux/sched/mm.h>
> +#include <linux/ksm.h>
>    #include <linux/uaccess.h>
>  #include <asm/cacheflush.h>
> @@ -2659,6 +2660,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>  	if (file && vm_flags & VM_SHARED)
>  		mapping_unmap_writable(file->f_mapping);
>  	file = vma->vm_file;
> +	ksm_add_vma(vma);
>  expanded:
>  	perf_event_mmap(vma);
>  @@ -2931,6 +2933,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct
> vm_area_struct *vma,
>  		goto mas_store_fail;
>    	mm->map_count++;
> +	ksm_add_vma(vma);
>  out:
>  	perf_event_mmap(vma);
>  	mm->total_vm += len >> PAGE_SHIFT;
> --
> 2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v6 1/3] mm: add new api to enable ksm per process
  2023-04-12 19:55           ` Stefan Roesch
@ 2023-04-13  9:46             ` David Hildenbrand
  0 siblings, 0 replies; 17+ messages in thread
From: David Hildenbrand @ 2023-04-13  9:46 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, willy, Bagas Sanjaya

On 12.04.23 21:55, Stefan Roesch wrote:
> 
> David Hildenbrand <david@redhat.com> writes:
> 
>> On 12.04.23 20:41, David Hildenbrand wrote:
>>> [...]
>>>> That will work.
>>>>> work? IOW, not exporting ksm_add_mm() and not passing a flag to __ksm_enter() --
>>>>> it would simply set MMF_VM_MERGEABLE ?
>>>>>
>>>>
>>>> ksm_add_mm() is also used in prctl (kernel/sys.c). Do you want to make a
>>>> similar change there?
>>> Yes.
>>>
>>>>>> + *
>>>>>> + * @vma:  Pointer to vma
>>>>>> + */
>>>>>> +void ksm_add_vma(struct vm_area_struct *vma)
>>>>>> +{
>>>>>> +	struct mm_struct *mm = vma->vm_mm;
>>>>>> +
>>>>>> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
>>>>>> +		__ksm_add_vma(vma);
>>>>>> +}
>>>>>> +
>>>>>> +/**
>>>>>> + * ksm_add_vmas - Mark all vma's of a process as mergeable
>>>>>> + *
>>>>>> + * @mm:  Pointer to mm
>>>>>> + */
>>>>>> +void ksm_add_vmas(struct mm_struct *mm)
>>>>>
>>>>> I'd suggest calling this
>>>>>
>>>> I guess you forgot your name suggestion?
>>> Yeah, I reconsidered because the first idea I had was not particularly
>>> good. Maybe
>>> ksm_enable_for_all_vmas()
>>> But not so sure. If you think the "add" terminology is a good fit, keep
>>> it like that.
>>> Thanks for bearing with me :)
>>>
>>
>> I briefly played with your patch to see how much it can be simplified.
>> Always enabling ksm (setting MMF_VM_MERGEABLE) before setting
>> MMF_VM_MERGE_ANY might simplify things. ksm_enable_merge_any() [or however it should
>> be called] and ksm_fork() contain the interesting bits.
>>
>>
>> Feel free to incorporate what you consider valuable (uncompiled,
>> untested).
>>
> I added most of it. The only change is that I kept ksm_add_vmas as a
> static function, otherwise I need to define the VMA_ITERATOR at the top
> of the function.


Makes sense. I'll review patch #3 later, so we can hopefully get this 
into the 6.4 merge window after letting it rest at least some days in -next.

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v6 3/3] selftests/mm: add new selftests for KSM
  2023-04-12  3:16 ` [PATCH v6 3/3] selftests/mm: add new selftests for KSM Stefan Roesch
@ 2023-04-13 13:07   ` David Hildenbrand
  2023-04-13 13:08     ` David Hildenbrand
  2023-04-13 18:09     ` Stefan Roesch
  0 siblings, 2 replies; 17+ messages in thread
From: David Hildenbrand @ 2023-04-13 13:07 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	willy, Bagas Sanjaya

On 12.04.23 05:16, Stefan Roesch wrote:
> This adds three new tests to the selftests for KSM.  These tests use the
> new prctl API's to enable and disable KSM.
> 
> 1) add new prctl flags to prctl header file in tools dir
> 
>     This adds the new prctl flags to the include file prct.h in the
>     tools directory.  This makes sure they are available for testing.
> 
> 2) add KSM prctl merge test
> 
>     This adds the -t option to the ksm_tests program.  The -t flag
>     allows to specify if it should use madvise or prctl ksm merging.
> 
> 3) add KSM get merge type test
> 
>     This adds the -G flag to the ksm_tests program to query the KSM
>     status with prctl after KSM has been enabled with prctl.
> 
> 4) add KSM fork test
> 
>     Add fork test to verify that the MMF_VM_MERGE_ANY flag is inherited
>     by the child process.
> 
> 5) add two functions for debugging merge outcome
> 
>     This adds two functions to report the metrics in /proc/self/ksm_stat
>     and /sys/kernel/debug/mm/ksm.
> 
> The debugging can be enabled with the following command line:
> make -C tools/testing/selftests TARGETS="mm" --keep-going \
>          EXTRA_CFLAGS=-DDEBUG=1

Would it make sense to instead have a "-D" (if still unused) runtime 
option to print this data? Dead code that's not compiled is a bit 
unfortunate as it can easily bit-rot.



This patch essentially does two things

1) Add the option to run all tests/benchmarks with the PRCTL instead of 
MADVISE

2) Add some functional KSM tests for the new PRCTL (fork, enabling 
works, disabling works).

The latter should rather go into ksm_functional_tests().

[...]

>   
> -static int check_ksm_unmerge(int mapping, int prot, int timeout, size_t page_size)
> +/* Verify that prctl ksm flag is inherited. */
> +static int check_ksm_fork(void)
> +{
> +	int rc = KSFT_FAIL;
> +	pid_t child_pid;
> +
> +	if (prctl(PR_SET_MEMORY_MERGE, 1)) {
> +		perror("prctl");
> +		return KSFT_FAIL;
> +	}
> +
> +	child_pid = fork();
> +	if (child_pid == 0) {
> +		int is_on = prctl(PR_GET_MEMORY_MERGE, 0);
> +
> +		if (!is_on)
> +			exit(KSFT_FAIL);
> +
> +		exit(KSFT_PASS);
> +	}
> +
> +	if (child_pid < 0)
> +		goto out;
> +
> +	if (waitpid(child_pid, &rc, 0) < 0)
> +		rc = KSFT_FAIL;
> +
> +	if (prctl(PR_SET_MEMORY_MERGE, 0)) {
> +		perror("prctl");
> +		rc = KSFT_FAIL;
> +	}
> +
> +out:
> +	if (rc == KSFT_PASS)
> +		printf("OK\n");
> +	else
> +		printf("Not OK\n");
> +
> +	return rc;
> +}
> +
> +static int check_ksm_get_merge_type(void)
> +{
> +	if (prctl(PR_SET_MEMORY_MERGE, 1)) {
> +		perror("prctl set");
> +		return 1;
> +	}
> +
> +	int is_on = prctl(PR_GET_MEMORY_MERGE, 0);
> +
> +	if (prctl(PR_SET_MEMORY_MERGE, 0)) {
> +		perror("prctl set");
> +		return 1;
> +	}
> +
> +	int is_off = prctl(PR_GET_MEMORY_MERGE, 0);
> +
> +	if (is_on && is_off) {
> +		printf("OK\n");
> +		return KSFT_PASS;
> +	}
> +
> +	printf("Not OK\n");
> +	return KSFT_FAIL;
> +}

Yes, these two are better located in ksm_functional_tests() to just run 
them both automatically when the test is executed.

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v6 3/3] selftests/mm: add new selftests for KSM
  2023-04-13 13:07   ` David Hildenbrand
@ 2023-04-13 13:08     ` David Hildenbrand
  2023-04-13 16:32       ` Stefan Roesch
  2023-04-13 18:09     ` Stefan Roesch
  1 sibling, 1 reply; 17+ messages in thread
From: David Hildenbrand @ 2023-04-13 13:08 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	willy, Bagas Sanjaya

On 13.04.23 15:07, David Hildenbrand wrote:
> On 12.04.23 05:16, Stefan Roesch wrote:
>> This adds three new tests to the selftests for KSM.  These tests use the
>> new prctl API's to enable and disable KSM.
>>
>> 1) add new prctl flags to prctl header file in tools dir
>>
>>      This adds the new prctl flags to the include file prct.h in the
>>      tools directory.  This makes sure they are available for testing.
>>
>> 2) add KSM prctl merge test
>>
>>      This adds the -t option to the ksm_tests program.  The -t flag
>>      allows to specify if it should use madvise or prctl ksm merging.
>>
>> 3) add KSM get merge type test
>>
>>      This adds the -G flag to the ksm_tests program to query the KSM
>>      status with prctl after KSM has been enabled with prctl.
>>
>> 4) add KSM fork test
>>
>>      Add fork test to verify that the MMF_VM_MERGE_ANY flag is inherited
>>      by the child process.
>>
>> 5) add two functions for debugging merge outcome
>>
>>      This adds two functions to report the metrics in /proc/self/ksm_stat
>>      and /sys/kernel/debug/mm/ksm.
>>
>> The debugging can be enabled with the following command line:
>> make -C tools/testing/selftests TARGETS="mm" --keep-going \
>>           EXTRA_CFLAGS=-DDEBUG=1
> 
> Would it make sense to instead have a "-D" (if still unused) runtime
> option to print this data? Dead code that's not compiled is a bit
> unfortunate as it can easily bit-rot.
> 
> 
> 
> This patch essentially does two things
> 
> 1) Add the option to run all tests/benchmarks with the PRCTL instead of
> MADVISE
> 
> 2) Add some functional KSM tests for the new PRCTL (fork, enabling
> works, disabling works).
> 
> The latter should rather go into ksm_functional_tests().


"tools/testing/selftests/mm/ksm_functional_tests.c" is what I wanted to say.

-- 
Thanks,

David / dhildenb



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v6 3/3] selftests/mm: add new selftests for KSM
  2023-04-13 13:08     ` David Hildenbrand
@ 2023-04-13 16:32       ` Stefan Roesch
  0 siblings, 0 replies; 17+ messages in thread
From: Stefan Roesch @ 2023-04-13 16:32 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, willy, Bagas Sanjaya


David Hildenbrand <david@redhat.com> writes:

> On 13.04.23 15:07, David Hildenbrand wrote:
>> On 12.04.23 05:16, Stefan Roesch wrote:
>>> This adds three new tests to the selftests for KSM.  These tests use the
>>> new prctl API's to enable and disable KSM.
>>>
>>> 1) add new prctl flags to prctl header file in tools dir
>>>
>>>      This adds the new prctl flags to the include file prct.h in the
>>>      tools directory.  This makes sure they are available for testing.
>>>
>>> 2) add KSM prctl merge test
>>>
>>>      This adds the -t option to the ksm_tests program.  The -t flag
>>>      allows to specify if it should use madvise or prctl ksm merging.
>>>
>>> 3) add KSM get merge type test
>>>
>>>      This adds the -G flag to the ksm_tests program to query the KSM
>>>      status with prctl after KSM has been enabled with prctl.
>>>
>>> 4) add KSM fork test
>>>
>>>      Add fork test to verify that the MMF_VM_MERGE_ANY flag is inherited
>>>      by the child process.
>>>
>>> 5) add two functions for debugging merge outcome
>>>
>>>      This adds two functions to report the metrics in /proc/self/ksm_stat
>>>      and /sys/kernel/debug/mm/ksm.
>>>
>>> The debugging can be enabled with the following command line:
>>> make -C tools/testing/selftests TARGETS="mm" --keep-going \
>>>           EXTRA_CFLAGS=-DDEBUG=1
>> Would it make sense to instead have a "-D" (if still unused) runtime
>> option to print this data? Dead code that's not compiled is a bit
>> unfortunate as it can easily bit-rot.
>> This patch essentially does two things
>> 1) Add the option to run all tests/benchmarks with the PRCTL instead of
>> MADVISE
>> 2) Add some functional KSM tests for the new PRCTL (fork, enabling
>> works, disabling works).
>> The latter should rather go into ksm_functional_tests().
>
>
> "tools/testing/selftests/mm/ksm_functional_tests.c" is what I wanted to say.

I understood. I'll look into moving the fork check and the disabling
into the functional tests for the next version.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v6 3/3] selftests/mm: add new selftests for KSM
  2023-04-13 13:07   ` David Hildenbrand
  2023-04-13 13:08     ` David Hildenbrand
@ 2023-04-13 18:09     ` Stefan Roesch
  1 sibling, 0 replies; 17+ messages in thread
From: Stefan Roesch @ 2023-04-13 18:09 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, willy, Bagas Sanjaya


David Hildenbrand <david@redhat.com> writes:

> On 12.04.23 05:16, Stefan Roesch wrote:
>> This adds three new tests to the selftests for KSM.  These tests use the
>> new prctl API's to enable and disable KSM.
>> 1) add new prctl flags to prctl header file in tools dir
>>     This adds the new prctl flags to the include file prct.h in the
>>     tools directory.  This makes sure they are available for testing.
>> 2) add KSM prctl merge test
>>     This adds the -t option to the ksm_tests program.  The -t flag
>>     allows to specify if it should use madvise or prctl ksm merging.
>> 3) add KSM get merge type test
>>     This adds the -G flag to the ksm_tests program to query the KSM
>>     status with prctl after KSM has been enabled with prctl.
>> 4) add KSM fork test
>>     Add fork test to verify that the MMF_VM_MERGE_ANY flag is inherited
>>     by the child process.
>> 5) add two functions for debugging merge outcome
>>     This adds two functions to report the metrics in /proc/self/ksm_stat
>>     and /sys/kernel/debug/mm/ksm.
>> The debugging can be enabled with the following command line:
>> make -C tools/testing/selftests TARGETS="mm" --keep-going \
>>          EXTRA_CFLAGS=-DDEBUG=1
>
> Would it make sense to instead have a "-D" (if still unused) runtime options to
> print this data? Dead code that's not compiled is a bit unfortunate as it can
> easily bit-rot.
>
>

In the next version I'll add a -d option. I'll add a global debug variable
for this; otherwise we would need to pass the debug option down several
levels.
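
A possible shape for that, as a standalone sketch (the option letter and the
helper name are assumptions here, not the final selftest patch):

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool ksm_debug;	/* global, so deep helpers need no extra parameter */

/* hypothetical helper: dump /proc/self/ksm_stat when -d is given */
static void print_ksm_stat(void)
{
	char line[256];
	FILE *f = fopen("/proc/self/ksm_stat", "r");

	if (!f)
		return;
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
}

int main(int argc, char **argv)
{
	int opt;

	while ((opt = getopt(argc, argv, "d")) != -1)
		if (opt == 'd')
			ksm_debug = true;

	/* ... run the selected KSM test here ... */

	if (ksm_debug)
		print_ksm_stat();
	return 0;
}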

>
> This patch essentially does two things
>
> 1) Add the option to run all tests/benchmarks with the PRCTL instead of MADVISE
>
> 2) Add some functional KSM tests for the new PRCTL (fork, enabling works,
> disabling works).
>
> The latter should rather go into ksm_functional_tests().
>
> [...]
>
>>   -static int check_ksm_unmerge(int mapping, int prot, int timeout, size_t
>> page_size)
>> +/* Verify that prctl ksm flag is inherited. */
>> +static int check_ksm_fork(void)
>> +{
>> +	int rc = KSFT_FAIL;
>> +	pid_t child_pid;
>> +
>> +	if (prctl(PR_SET_MEMORY_MERGE, 1)) {
>> +		perror("prctl");
>> +		return KSFT_FAIL;
>> +	}
>> +
>> +	child_pid = fork();
>> +	if (child_pid == 0) {
>> +		int is_on = prctl(PR_GET_MEMORY_MERGE, 0);
>> +
>> +		if (!is_on)
>> +			exit(KSFT_FAIL);
>> +
>> +		exit(KSFT_PASS);
>> +	}
>> +
>> +	if (child_pid < 0)
>> +		goto out;
>> +
>> +	if (waitpid(child_pid, &rc, 0) < 0)
>> +		rc = KSFT_FAIL;
>> +
>> +	if (prctl(PR_SET_MEMORY_MERGE, 0)) {
>> +		perror("prctl");
>> +		rc = KSFT_FAIL;
>> +	}
>> +
>> +out:
>> +	if (rc == KSFT_PASS)
>> +		printf("OK\n");
>> +	else
>> +		printf("Not OK\n");
>> +
>> +	return rc;
>> +}
>> +
>> +static int check_ksm_get_merge_type(void)
>> +{
>> +	if (prctl(PR_SET_MEMORY_MERGE, 1)) {
>> +		perror("prctl set");
>> +		return 1;
>> +	}
>> +
>> +	int is_on = prctl(PR_GET_MEMORY_MERGE, 0);
>> +
>> +	if (prctl(PR_SET_MEMORY_MERGE, 0)) {
>> +		perror("prctl set");
>> +		return 1;
>> +	}
>> +
>> +	int is_off = prctl(PR_GET_MEMORY_MERGE, 0);
>> +
>> +	if (is_on && is_off) {
>> +		printf("OK\n");
>> +		return KSFT_PASS;
>> +	}
>> +
>> +	printf("Not OK\n");
>> +	return KSFT_FAIL;
>> +}
>
> Yes, these two are better located in ksm_functional_tests() to just run them
> both automatically when the test is executed.

I moved the check_ksm_get_merge_type() and check_ksm_fork() to the
ksm_functional_test executable. The change will be in the next version.


^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2023-04-13 18:12 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-12  3:16 [PATCH v6 0/3] mm: process/cgroup ksm support Stefan Roesch
2023-04-12  3:16 ` [PATCH v6 1/3] mm: add new api to enable ksm per process Stefan Roesch
2023-04-12 13:20   ` Matthew Wilcox
2023-04-12 16:08     ` Stefan Roesch
2023-04-12 16:29       ` Matthew Wilcox
2023-04-12 15:40   ` David Hildenbrand
2023-04-12 16:44     ` Stefan Roesch
2023-04-12 18:41       ` David Hildenbrand
2023-04-12 19:08         ` David Hildenbrand
2023-04-12 19:55           ` Stefan Roesch
2023-04-13  9:46             ` David Hildenbrand
2023-04-12  3:16 ` [PATCH v6 2/3] mm: add new KSM process and sysfs knobs Stefan Roesch
2023-04-12  3:16 ` [PATCH v6 3/3] selftests/mm: add new selftests for KSM Stefan Roesch
2023-04-13 13:07   ` David Hildenbrand
2023-04-13 13:08     ` David Hildenbrand
2023-04-13 16:32       ` Stefan Roesch
2023-04-13 18:09     ` Stefan Roesch
