* [PATCH v4 0/3] mm: process/cgroup ksm support
@ 2023-03-10 18:28 Stefan Roesch
       [not found] ` <20230310182851.2579138-2-shr@devkernel.io>
                   ` (3 more replies)
  0 siblings, 4 replies; 33+ messages in thread
From: Stefan Roesch @ 2023-03-10 18:28 UTC (permalink / raw)
  To: kernel-team
  Cc: shr, linux-mm, riel, mhocko, david, linux-kselftest, linux-doc,
	akpm, hannes

So far KSM can only be enabled by calling madvise for memory regions. To
be able to use KSM for more workloads, KSM needs to have the ability to be
enabled / disabled at the process / cgroup level.

Use case 1:
The madvise call is not available in the programming language. Examples of
this are programs with forked workloads using a garbage collected language
without pointers. In such a language madvise cannot be made available.

In addition, the addresses of objects get moved around as they are garbage
collected. KSM sharing needs to be enabled "from the outside" for these
types of workloads.

Use case 2:
The same interpreter can also be used for workloads where KSM brings no
benefit or even has overhead. We'd like to be able to enable KSM on a
workload-by-workload basis.

Use case 3:
With the madvise call, sharing opportunities are only enabled for the current
process: it is a workload-local decision. A considerable number of sharing
opportunities may exist across multiple workloads or jobs. Only a higher-level
entity like a job scheduler or container can know for certain if it is running
one or more instances of a job. That job scheduler however doesn't have
the necessary internal workload knowledge to make targeted madvise calls.

Security concerns:
In previous discussions security concerns have been brought up. The problem is
that an individual workload does not have the knowledge about what else is
running on a machine. Therefore it has to be very conservative about which
memory areas can be shared. However, if the system is dedicated to running
multiple jobs within the same security domain, it is the job scheduler that
has the knowledge that sharing can be safely enabled and is even desirable.

Performance:
Experiments with UKSM have shown a capacity increase of around 20%.


1. New options for the prctl system call
This patch series adds two new options to the prctl system call. The first
one enables KSM at the process level; the second one queries the setting.

The setting will be inherited by child processes.

With the above setting, KSM can be enabled for the seed process of a cgroup
and all processes in the cgroup will inherit the setting.
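
For illustration, a minimal sketch of the intended usage; the
PR_SET_MEMORY_MERGE / PR_GET_MEMORY_MERGE values below are taken from the
series' uapi header and may still change:

	#include <stdio.h>
	#include <sys/prctl.h>

	#ifndef PR_SET_MEMORY_MERGE
	#define PR_SET_MEMORY_MERGE	67
	#define PR_GET_MEMORY_MERGE	68
	#endif

	int main(void)
	{
		/* Opt the whole process into KSM; children inherit this. */
		if (prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0))
			perror("PR_SET_MEMORY_MERGE");

		/* Query the per-process setting. */
		printf("KSM enabled: %d\n",
		       prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0));
		return 0;
	}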

2. Changes to KSM processing
When KSM is enabled at the process level, the KSM code will iterate over all
the VMAs and enable KSM for the eligible VMAs.

When forking a process that has KSM enabled, the setting will be inherited by
the new child process.

In addition, when KSM is disabled for a process, it is also disabled for the
VMAs where it had been enabled.

3. Add general_profit metric
The general_profit metric of KSM is specified in the documentation, but not
calculated. This adds the general profit metric to /sys/kernel/debug/mm/ksm.

4. Add more metrics to ksm_stat
This adds the process profit and ksm type metric to /proc/<pid>/ksm_stat.
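
For reference, the KSM documentation (Documentation/admin-guide/mm/ksm.rst)
estimates these metrics roughly as follows; this is a sketch of the
documented formulas, not the exact kernel arithmetic:

	general_profit ~= pages_sharing * sizeof(page) -
			  all_rmap_items * sizeof(rmap_item)

	process_profit ~= ksm_merging_pages * sizeof(page) -
			  ksm_rmap_items * sizeof(rmap_item)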

5. Add more tests to ksm_tests
This adds an option to specify the merge type to the ksm_tests. This allows
testing both madvise and prctl KSM. It also adds a new option to query if
prctl KSM has been enabled, and a fork test to verify that the KSM process
setting is inherited by child processes (see the sketch below).
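
Conceptually, the fork test boils down to something like the following
(a hypothetical sketch, not the selftest's actual code; prctl constants as
above):

	#include <stdio.h>
	#include <sys/prctl.h>
	#include <sys/wait.h>
	#include <unistd.h>

	#ifndef PR_SET_MEMORY_MERGE
	#define PR_SET_MEMORY_MERGE	67
	#define PR_GET_MEMORY_MERGE	68
	#endif

	int main(void)
	{
		prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0);

		pid_t pid = fork();
		if (pid == 0)
			/* Child: the process-level KSM setting must be inherited. */
			_exit(prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0) == 1 ? 0 : 1);

		int status;
		waitpid(pid, &status, 0);
		printf("inherited: %s\n", WEXITSTATUS(status) == 0 ? "yes" : "no");
		return 0;
	}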


Changes:
- V4:
  - Removed the check in prctl for MMF_VM_MERGEABLE in PR_SET_MEMORY_MERGE
    handling
  - Checking for VM_MERGEABLE and MMF_VM_MERGE_ANY to avoid changing vm_flags
    - This requires also checking that the vma is compatible. The
      compatibility check is provided by a new helper
    - Processes which have set MMF_VM_MERGE_ANY only need to call the
      helper, not madvise
  - Removed the unmerge_vmas function; it is no longer necessary, since
    clearing the MMF_VM_MERGE_ANY bit is sufficient

- V3:
  - folded patch 1 - 6
  - folded patch 7 - 14
  - folded patch 15 - 19
  - Expanded on the use cases in the cover letter
  - Added a section on security concerns to the cover letter

- V2:
  - Added use cases to the cover letter
  - Removed the tracing patch from the patch series and posted it as an
    individual patch
  - Refreshed repo



Stefan Roesch (3):
  mm: add new api to enable ksm per process
  mm: add new KSM process and sysfs knobs
  selftests/mm: add new selftests for KSM

 Documentation/ABI/testing/sysfs-kernel-mm-ksm |   8 +
 Documentation/admin-guide/mm/ksm.rst          |   8 +-
 fs/proc/base.c                                |   5 +
 include/linux/ksm.h                           |  19 +-
 include/linux/sched/coredump.h                |   1 +
 include/uapi/linux/prctl.h                    |   2 +
 kernel/sys.c                                  |  27 ++
 mm/ksm.c                                      | 137 +++++++---
 tools/include/uapi/linux/prctl.h              |   2 +
 tools/testing/selftests/mm/Makefile           |   2 +-
 tools/testing/selftests/mm/ksm_tests.c        | 254 +++++++++++++++---
 11 files changed, 388 insertions(+), 77 deletions(-)

-- 
2.34.1



* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
       [not found] ` <20230310182851.2579138-2-shr@devkernel.io>
@ 2023-03-13 16:26   ` Johannes Weiner
  2023-04-03 10:37   ` David Hildenbrand
  1 sibling, 0 replies; 33+ messages in thread
From: Johannes Weiner @ 2023-03-13 16:26 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
	linux-doc, akpm, Bagas Sanjaya

On Fri, Mar 10, 2023 at 10:28:49AM -0800, Stefan Roesch wrote:
> @@ -534,16 +534,58 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
>  	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
>  }
>  
> +static bool vma_ksm_compatible(struct vm_area_struct *vma)
> +{
> +	/*
> +	 * Be somewhat over-protective for now!
> +	 */
> +	if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
> +			     VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
> +			     VM_HUGETLB | VM_MIXEDMAP))
> +		return false;		/* just ignore the advice */
> +
> +	if (vma_is_dax(vma))
> +		return false;
> +
> +#ifdef VM_SAO
> +	if (*vm_flags & VM_SAO)
> +		return false;
> +#endif
> +#ifdef VM_SPARC_ADI
> +	if (*vm_flags & VM_SPARC_ADI)
> +		return false;
> +#endif

These two also need to check vma->vm_flags, or it won't build on those
configs.
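
In other words, presumably:

	#ifdef VM_SAO
		if (vma->vm_flags & VM_SAO)
			return false;
	#endif
	#ifdef VM_SPARC_ADI
		if (vma->vm_flags & VM_SPARC_ADI)
			return false;
	#endif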

Otherwise, the patch looks good to me.


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-10 18:28 [PATCH v4 0/3] mm: process/cgroup ksm support Stefan Roesch
       [not found] ` <20230310182851.2579138-2-shr@devkernel.io>
@ 2023-03-15 20:03 ` David Hildenbrand
  2023-03-15 20:23   ` Mike Kravetz
  2023-03-15 21:05   ` Johannes Weiner
  2023-03-28 23:09 ` Andrew Morton
       [not found] ` <20230310182851.2579138-3-shr@devkernel.io>
  3 siblings, 2 replies; 33+ messages in thread
From: David Hildenbrand @ 2023-03-15 20:03 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	Mike Kravetz, Rik van Riel

On 10.03.23 19:28, Stefan Roesch wrote:
> So far KSM can only be enabled by calling madvise for memory regions. To
> be able to use KSM for more workloads, KSM needs to have the ability to be
> enabled / disabled at the process / cgroup level.
> 
> [...]
> 
> Performance:
> Experiments with UKSM have shown a capacity increase of around 20%.

Stefan, can you do me a favor and investigate which pages we end up 
deduplicating -- especially if it's mostly only the zeropage and if it's 
still that significant when disabling THP?


I'm currently investigating with some engineers on playing with enabling 
KSM on some selected processes (enabling it blindly on all VMAs of that 
process via madvise() ).

One thing we noticed is that such (~50 times) 20MiB processes end up 
saving ~2MiB of memory per process. That made me suspicious, because 
it's the THP size.

What I think happens is that we have a 2 MiB area (stack?) and only 
touch a single page. We get a whole 2 MiB THP populated. Most of that 
THP is zeroes.

KSM somehow ends up splitting that THP and deduplicates all resulting 
zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer 
"waste" 2 MiB. I think the processes with KSM have less (none) THP than 
the processes with THP enabled, but I only took a look at a sample of 
the process' smaps so far.

I recall that there was a proposal to split underutilized THP and free 
up the zeropages (IIRC Rik was involved).

I also recall that Mike reported memory waste due to THP.

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-15 20:03 ` [PATCH v4 0/3] mm: process/cgroup ksm support David Hildenbrand
@ 2023-03-15 20:23   ` Mike Kravetz
  2023-03-15 21:05   ` Johannes Weiner
  1 sibling, 0 replies; 33+ messages in thread
From: Mike Kravetz @ 2023-03-15 20:23 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, hannes

On 03/15/23 21:03, David Hildenbrand wrote:
> On 10.03.23 19:28, Stefan Roesch wrote:
> 
> Stefan, can you do me a favor and investigate which pages we end up
> deduplicating -- especially if it's mostly only the zeropage and if it's
> still that significant when disabling THP?
> 
> I'm currently investigating with some engineers on playing with enabling KSM
> on some selected processes (enabling it blindly on all VMAs of that process
> via madvise() ).
> 
> One thing we noticed is that such (~50 times) 20MiB processes end up saving
> ~2MiB of memory per process. That made me suspicious, because it's the THP
> size.
> 
> What I think happens is that we have a 2 MiB area (stack?) and only touch a
> single page. We get a whole 2 MiB THP populated. Most of that THP is zeroes.
> 
> KSM somehow ends up splitting that THP and deduplicates all resulting
> zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer
> "waste" 2 MiB. I think the processes with KSM have less (none) THP than the
> processes with THP enabled, but I only took a look at a sample of the
> process' smaps so far.
> 
> I recall that there was a proposal to split underutilized THP and free up
> the zeropages (IIRC Rik was involved).
> 
> I also recall that Mike reported memory waste due to THP.

Interesting!

2MB stacks were also involved in our case.  That stack would first get a
write fault allocating a THP.  The write fault would be followed by a
mprotect(PROT_NONE) of the 4K page at the bottom of the stack to create
a guard page.  The mprotect would split the THP, resulting in 510 zero-filled
pages.  I suppose KSM could dedup those zero pages.
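
A minimal sketch of that sequence (hypothetical; a plain mapping stands in
for the real stack setup, and 2MB THPs are assumed to be available):

	#include <sys/mman.h>
	#include <unistd.h>

	#define THP_SIZE	(2UL << 20)

	int main(void)
	{
		/* Over-allocate so the region can be aligned to 2MB. */
		char *raw = mmap(NULL, 2 * THP_SIZE, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (raw == MAP_FAILED)
			return 1;
		char *p = (char *)(((unsigned long)raw + THP_SIZE - 1) &
				   ~(THP_SIZE - 1));

		madvise(p, THP_SIZE, MADV_HUGEPAGE);
		p[THP_SIZE - 1] = 1;	/* write fault populates a whole 2MB THP */

		/* A guard page at the bottom forces the THP to be split,
		 * leaving the remaining zero-filled 4K pages behind. */
		mprotect(p, 4096, PROT_NONE);
		pause();
		return 0;
	}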
-- 
Mike Kravetz


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-15 20:03 ` [PATCH v4 0/3] mm: process/cgroup ksm support David Hildenbrand
  2023-03-15 20:23   ` Mike Kravetz
@ 2023-03-15 21:05   ` Johannes Weiner
  2023-03-15 21:19     ` Johannes Weiner
  1 sibling, 1 reply; 33+ messages in thread
From: Johannes Weiner @ 2023-03-15 21:05 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Mike Kravetz

On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote:
> On 10.03.23 19:28, Stefan Roesch wrote:
> > So far KSM can only be enabled by calling madvise for memory regions. To
> > be able to use KSM for more workloads, KSM needs to have the ability to be
> > enabled / disabled at the process / cgroup level.
> > 
> > [...]
> > 
> > Performance:
> > Experiments with UKSM have shown a capacity increase of around 20%.
> 
> Stefan, can you do me a favor and investigate which pages we end up
> deduplicating -- especially if it's mostly only the zeropage and if it's
> still that significant when disabling THP?
> 
> 
> I'm currently investigating with some engineers on playing with enabling KSM
> on some selected processes (enabling it blindly on all VMAs of that process
> via madvise() ).
> 
> One thing we noticed is that such (~50 times) 20MiB processes end up saving
> ~2MiB of memory per process. That made me suspicious, because it's the THP
> size.
> 
> What I think happens is that we have a 2 MiB area (stack?) and only touch a
> single page. We get a whole 2 MiB THP populated. Most of that THP is zeroes.
> 
> KSM somehow ends up splitting that THP and deduplicates all resulting
> zeropages. Thus, we "save" 2 MiB. Actually, it's more like we no longer
> "waste" 2 MiB. I think the processes with KSM have less (none) THP than the
> processes with THP enabled, but I only took a look at a sample of the
> process' smaps so far.

THP and KSM is indeed an interesting problem. Better TLB hits with
THPs, but reduced chance of deduplicating memory - which may or may
not result in more IO that outweighs any THP benefits.

That said, the service in the experiment referenced above has swap
turned on and is under significant memory pressure. Unused splitpages
would get swapped out. The difference from KSM was from deduplicating
pages that were in active use, not internal THP fragmentation.


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-15 21:05   ` Johannes Weiner
@ 2023-03-15 21:19     ` Johannes Weiner
  2023-03-15 21:45       ` David Hildenbrand
  0 siblings, 1 reply; 33+ messages in thread
From: Johannes Weiner @ 2023-03-15 21:19 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Mike Kravetz

On Wed, Mar 15, 2023 at 05:05:47PM -0400, Johannes Weiner wrote:
> On Wed, Mar 15, 2023 at 09:03:57PM +0100, David Hildenbrand wrote:
> > [...]
> 
> THP and KSM is indeed an interesting problem. Better TLB hits with
> THPs, but reduced chance of deduplicating memory - which may or may
> not result in more IO that outweighs any THP benefits.
> 
> That said, the service in the experiment referenced above has swap
> turned on and is under significant memory pressure. Unused splitpages
> would get swapped out. The difference from KSM was from deduplicating
> pages that were in active use, not internal THP fragmentation.

Brainfart, my apologies. It could have been the ksm-induced splits
themselves that allowed the unused subpages to get swapped out in the
first place.

But no, I double checked that workload just now. On a weekly average,
it has about 50 anon THPs and 12 million regular anon. THP is not a
factor in the reduction results.


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-15 21:19     ` Johannes Weiner
@ 2023-03-15 21:45       ` David Hildenbrand
  2023-03-15 21:47         ` David Hildenbrand
  2023-03-30 16:19         ` Stefan Roesch
  0 siblings, 2 replies; 33+ messages in thread
From: David Hildenbrand @ 2023-03-15 21:45 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Mike Kravetz

On 15.03.23 22:19, Johannes Weiner wrote:
> On Wed, Mar 15, 2023 at 05:05:47PM -0400, Johannes Weiner wrote:
>> [...]
>>
>> THP and KSM is indeed an interesting problem. Better TLB hits with
>> THPs, but reduced chance of deduplicating memory - which may or may
>> not result in more IO that outweighs any THP benefits.
>>
>> That said, the service in the experiment referenced above has swap
>> turned on and is under significant memory pressure. Unused splitpages
>> would get swapped out. The difference from KSM was from deduplicating
>> pages that were in active use, not internal THP fragmentation.
> 
> Brainfart, my apologies. It could have been the ksm-induced splits
> themselves that allowed the unused subpages to get swapped out in the
> first place.

Yes, it's not easy to spot that this is implemented. I just wrote a 
simple reproducer to confirm: modifying a single subpage in a bunch of 
THP ranges will populate a THP whereby most of the THP is zeroes.

As long as you keep accessing the single subpage via the PMD I assume 
chances of getting it swapped out are lower, because the folio will be 
referenced/dirty.

KSM will come around and split the THP filled mostly with zeroes and 
deduplicate the resulting zero pages.

[that's where a zeropage-only KSM could be very valuable eventually I think]
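
A hypothetical reconstruction of the reproducer described above (not the
actual code; assumes mmap happens to return 2MB-aligned ranges, or align
manually as in the earlier sketch, and that ksmd is running with merging
enabled):

	#include <sys/mman.h>
	#include <unistd.h>

	#define THP_SIZE	(2UL << 20)
	#define NR_RANGES	64

	int main(void)
	{
		for (int i = 0; i < NR_RANGES; i++) {
			char *p = mmap(NULL, THP_SIZE, PROT_READ | PROT_WRITE,
				       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
			if (p == MAP_FAILED)
				return 1;
			madvise(p, THP_SIZE, MADV_HUGEPAGE);
			p[0] = 1;	/* dirty one subpage; the rest stays zero */
			madvise(p, THP_SIZE, MADV_MERGEABLE);
		}
		/* Let ksmd scan: the mostly-zero THPs get split and their
		 * zero-filled subpages deduplicated. Watch pages_sharing. */
		pause();
		return 0;
	}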

> 
> But no, I double checked that workload just now. On a weekly average,
> it has about 50 anon THPs and 12 million regular anon. THP is not a
> factor in the reduction results.

You mean with KSM enabled or with KSM disabled for the process? Not sure 
if your observation reliably implies that the scenario described 
couldn't have happened, but it's late in Germany already :)

In any case, it would be nice to get a feeling for how much variety there
is in these 20% of deduplicated pages. For example, if it's 99% the same
page or just a wild collection.

Maybe "cat /sys/kernel/mm/ksm/pages_shared" would be expressive already. 
But I seem to be getting "126" in my simple example where only zeropages 
should get deduplicated, so I have to take another look at the stats 
tomorrow ...

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-15 21:45       ` David Hildenbrand
@ 2023-03-15 21:47         ` David Hildenbrand
  2023-03-30 16:19         ` Stefan Roesch
  1 sibling, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2023-03-15 21:47 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Mike Kravetz

On 15.03.23 22:45, David Hildenbrand wrote:
> [...]
> 
> In any case, it would be nice to get a feeling for how much variety there
> is in these 20% of deduplicated pages. For example, if it's 99% the same
> page or just a wild collection.
> 
> Maybe "cat /sys/kernel/mm/ksm/pages_shared" would be expressive already.
> But I seem to be getting "126" in my simple example where only zeropages
> should get deduplicated, so I have to take another look at the stats
> tomorrow ...
> 

On second thought, I guess it's because of "max_page_sharing". So one 
has to set that really high to make pages_shared more expressive.

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-10 18:28 [PATCH v4 0/3] mm: process/cgroup ksm support Stefan Roesch
       [not found] ` <20230310182851.2579138-2-shr@devkernel.io>
  2023-03-15 20:03 ` [PATCH v4 0/3] mm: process/cgroup ksm support David Hildenbrand
@ 2023-03-28 23:09 ` Andrew Morton
  2023-03-30  4:55   ` David Hildenbrand
       [not found] ` <20230310182851.2579138-3-shr@devkernel.io>
  3 siblings, 1 reply; 33+ messages in thread
From: Andrew Morton @ 2023-03-28 23:09 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
	linux-doc, hannes

On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:

> So far KSM can only be enabled by calling madvise for memory regions. To
> be able to use KSM for more workloads, KSM needs to have the ability to be
> enabled / disabled at the process / cgroup level.

Review on this series has been a bit thin.  Are we OK with moving this
into mm-stable for the next merge window?


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-28 23:09 ` Andrew Morton
@ 2023-03-30  4:55   ` David Hildenbrand
  2023-03-30 14:26     ` Johannes Weiner
  2023-03-30 20:18     ` Andrew Morton
  0 siblings, 2 replies; 33+ messages in thread
From: David Hildenbrand @ 2023-03-30  4:55 UTC (permalink / raw)
  To: Andrew Morton, Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	hannes, Hugh Dickins

On 29.03.23 01:09, Andrew Morton wrote:
> On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:
> 
>> So far KSM can only be enabled by calling madvise for memory regions. To
>> be able to use KSM for more workloads, KSM needs to have the ability to be
>> enabled / disabled at the process / cgroup level.
> 
> Review on this series has been a bit thin.  Are we OK with moving this
> into mm-stable for the next merge window?

I still want to review (traveling this week), but I also don't want to 
block this forever.

I think I didn't get a reply from Stefan to my question [1] yet (only 
some comments from Johannes). I would still be interested in the 
variance of pages we end up de-duplicating for processes.

The 20% statement in the cover letter is rather useless and possibly 
misleading if no details about the actual workload are shared.

Maybe Hugh has some comments as well (most probably he's also busy).

[1] 
https://lore.kernel.org/all/273a2f82-928f-5ad1-0988-1a886d169e83@redhat.com/

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-30  4:55   ` David Hildenbrand
@ 2023-03-30 14:26     ` Johannes Weiner
  2023-03-30 14:40       ` David Hildenbrand
  2023-03-30 20:18     ` Andrew Morton
  1 sibling, 1 reply; 33+ messages in thread
From: Johannes Weiner @ 2023-03-30 14:26 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Stefan Roesch, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins

On Thu, Mar 30, 2023 at 06:55:31AM +0200, David Hildenbrand wrote:
> On 29.03.23 01:09, Andrew Morton wrote:
> > On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:
> > 
> > > So far KSM can only be enabled by calling madvise for memory regions. To
> > > be able to use KSM for more workloads, KSM needs to have the ability to be
> > > enabled / disabled at the process / cgroup level.
> > 
> > Review on this series has been a bit thin.  Are we OK with moving this
> > into mm-stable for the next merge window?
> 
> I still want to review (traveling this week), but I also don't want to block
> this forever.
> 
> I think I didn't get a reply from Stefan to my question [1] yet (only some
> comments from Johannes). I would still be interested in the variance of
> pages we end up de-duplicating for processes.
> 
> The 20% statement in the cover letter is rather useless and possibly
> misleading if no details about the actual workload are shared.

The workload is Instagram. It forks off Django runtimes on-demand
until it saturates whatever hardware it's running on. This benefits
from merging common heap/stack state between instances. Since that
runtime is quite large, the 20% number is not surprising, and matches
our expectations of duplicative memory between instances.

Obviously we could spend months analysing which exact allocations are
identical, and then more months or years reworking the architecture to
deduplicate them by hand and in userspace. But this isn't practical,
and KSM is specifically for cases where this isn't practical.

Based on your request in the previous thread, we investigated whether
the boost was coming from the unintended side effects of KSM splitting
THPs. This wasn't the case.

If you have other theories on how the results could be bogus, we'd be
happy to investigate those as well. But you have to let us know what
you're looking for.

Beyond that, I don't think we need to prove from scratch that KSM can
be a worthwhile optimization. It's been established that it can
be. This series is about enabling it in scenarios where madvise()
isn't practical, that's it, and it's yielding the expected results.


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-30 14:26     ` Johannes Weiner
@ 2023-03-30 14:40       ` David Hildenbrand
  2023-03-30 16:41         ` Stefan Roesch
  0 siblings, 1 reply; 33+ messages in thread
From: David Hildenbrand @ 2023-03-30 14:40 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Morton, Stefan Roesch, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins

On 30.03.23 16:26, Johannes Weiner wrote:
> On Thu, Mar 30, 2023 at 06:55:31AM +0200, David Hildenbrand wrote:
>> On 29.03.23 01:09, Andrew Morton wrote:
>>> On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:
>>>
>>>> So far KSM can only be enabled by calling madvise for memory regions. To
>>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>>> enabled / disabled at the process / cgroup level.
>>>
>>> Review on this series has been a bit thin.  Are we OK with moving this
>>> into mm-stable for the next merge window?
>>
>> I still want to review (traveling this week), but I also don't want to block
>> this forever.
>>
>> I think I didn't get a reply from Stefan to my question [1] yet (only some
>> comments from Johannes). I would still be interested in the variance of
>> pages we end up de-duplicating for processes.
>>
>> The 20% statement in the cover letter is rather useless and possibly
>> misleading if no details about the actual workload are shared.
> 
> The workload is Instagram. It forks off Django runtimes on-demand
> until it saturates whatever hardware it's running on. This benefits
> from merging common heap/stack state between instances. Since that
> runtime is quite large, the 20% number is not surprising, and matches
> our expectations of duplicative memory between instances.

Thanks for this explanation. It's valuable to get at least a feeling for 
the workload because it doesn't seem to apply to other workloads at all.

> 
> Obviously we could spend months analysing which exact allocations are
> identical, and then more months or years reworking the architecture to
> deduplicate them by hand and in userspace. But this isn't practical,
> and KSM is specifically for cases where this isn't practical.
> 
> Based on your request in the previous thread, we investigated whether
> the boost was coming from the unintended side effects of KSM splitting
> THPs. This wasn't the case.
> 
> If you have other theories on how the results could be bogus, we'd be
> happy to investigate those as well. But you have to let us know what
> you're looking for.
> 

Maybe I'm bad at making such requests but

"Stefan, can you do me a favor and investigate which pages we end up
deduplicating -- especially if it's mostly only the zeropage and if it's
still that significant when disabling THP?"

"In any case, it would be nice to get a feeling for how much variety in
these 20% of deduplicated pages are. "

is pretty clear to me. And shouldn't take months.

> Beyond that, I don't think we need to prove from scratch that KSM can

I never expected a proof. I was merely trying to understand if it's 
really KSM that helps here. Also with the intention to figure out if KSM 
is really the right tool to use here or if it simply "helps by luck" as 
with the shared zeropage. That end result could have been valuable to 
your use case as well, because KSM overhead is real.

> be a worthwhile optimization. It's been established that it can
> be. This series is about enabling it in scenarios where madvise()
> isn't practical, that's it, and it's yielding the expected results.

I'm sorry to say, but you sound a bit aggressive and annoyed. I also 
have no idea why Stefan isn't replying to me but always you.

Am I asking the wrong questions? Do you want me to stop looking at KSM code?

-- 
Thanks,

David / dhildenb



* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-15 21:45       ` David Hildenbrand
  2023-03-15 21:47         ` David Hildenbrand
@ 2023-03-30 16:19         ` Stefan Roesch
  1 sibling, 0 replies; 33+ messages in thread
From: Stefan Roesch @ 2023-03-30 16:19 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Johannes Weiner, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Mike Kravetz


David Hildenbrand <david@redhat.com> writes:

> On 15.03.23 22:19, Johannes Weiner wrote:
>> [...]
>> Brainfart, my apologies. It could have been the ksm-induced splits
>> themselves that allowed the unused subpages to get swapped out in the
>> first place.
>
> Yes, it's not easy to spot that this is implemented. I just wrote a simple
> reproducer to confirm: modifying a single subpage in a bunch of THP ranges will
> populate a THP whereby most of the THP is zeroes.
>
> As long as you keep accessing the single subpage via the PMD I assume chances of
> getting it swapped out are lower, because the folio will be referenced/dirty.
>
> KSM will come around and split the THP filled mostly with zeroes and deduplicate
> the resulting zero pages.
>
> [that's where a zeropage-only KSM could be very valuable eventually I think]
>

We can certainly run an experiment where THP is turned off to verify if we
observe similar savings.

>> But no, I double checked that workload just now. On a weekly average,
>> it has about 50 anon THPs and 12 million regular anon. THP is not a
>> factor in the reduction results.
>
> You mean with KSM enabled or with KSM disabled for the process? Not sure if your
> observation reliably implies that the scenario described couldn't have happened,
> but it's late in Germany already :)
>
> In any case, it would be nice to get a feeling for how much variety there is
> in these 20% of deduplicated pages. For example, if it's 99% the same page or
> just a wild collection.
>
> Maybe "cat /sys/kernel/mm/ksm/pages_shared" would be expressive already. But I
> seem to be getting "126" in my simple example where only zeropages should get
> deduplicated, so I have to take another look at the stats tomorrow ...

/sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
Instagram workload. The workload consists of 36 processes plus a few
sidecar processes.

Also to give some idea for individual VMAs

7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM: 73160 KB)


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-30 14:40       ` David Hildenbrand
@ 2023-03-30 16:41         ` Stefan Roesch
  2023-04-03  9:48           ` David Hildenbrand
  0 siblings, 1 reply; 33+ messages in thread
From: Stefan Roesch @ 2023-03-30 16:41 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Johannes Weiner, Andrew Morton, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins


My mistake, I first replied to an older email.

David Hildenbrand <david@redhat.com> writes:

> [...]
>
> Maybe I'm bad at making such requests but
>
> "Stefan, can you do me a favor and investigate which pages we end up
> deduplicating -- especially if it's mostly only the zeropage and if it's
> still that significant when disabling THP?"
>
> "In any case, it would be nice to get a feeling for how much variety in
> these 20% of deduplicated pages are. "
>
> is pretty clear to me. And shouldn't take months.
>

/sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
Instagram workload. The workload consists of 36 processes plus a few
sidecar processes.

Each of these individual processes has around 500MB in KSM pages.

Also to give some idea for individual VMAs

7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM:
73160 KB)

>> Beyond that, I don't think we need to prove from scratch that KSM can
>
> I never expected a proof. I was merely trying to understand if it's really KSM
> that helps here. Also with the intention to figure out if KSM is really the
> right tool to use here or if it simply "helps by luck" as with the shared
> zeropage. That end result could have been valuable to your use case as well,
> because KSM overhead is real.
>
>> be a worthwhile optimization. It's been established that it can
>> be. This series is about enabling it in scenarios where madvise()
>> isn't practical, that's it, and it's yielding the expected results.
>
> I'm sorry to say, but you sound a bit aggressive and annoyed. I also have no
> idea why Stefan isn't replying to me but always you.
>
> Am I asking the wrong questions? Do you want me to stop looking at KSM code?
>

Your review is valuable; Johannes was just quicker than me.


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-30  4:55   ` David Hildenbrand
  2023-03-30 14:26     ` Johannes Weiner
@ 2023-03-30 20:18     ` Andrew Morton
  1 sibling, 0 replies; 33+ messages in thread
From: Andrew Morton @ 2023-03-30 20:18 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, hannes, Hugh Dickins

On Thu, 30 Mar 2023 06:55:31 +0200 David Hildenbrand <david@redhat.com> wrote:

> On 29.03.23 01:09, Andrew Morton wrote:
> > On Fri, 10 Mar 2023 10:28:48 -0800 Stefan Roesch <shr@devkernel.io> wrote:
> > 
> >> So far KSM can only be enabled by calling madvise for memory regions. To
> >> be able to use KSM for more workloads, KSM needs to have the ability to be
> >> enabled / disabled at the process / cgroup level.
> > 
> > Review on this series has been a bit thin.  Are we OK with moving this
> > into mm-stable for the next merge window?
> 
> I still want to review (traveling this week), but I also don't want to 
> block this forever.

No hurry, we're only at -rc4.  Holding this series in mm-unstable for
another 2-3 weeks isn't a problem.


* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-03-30 16:41         ` Stefan Roesch
@ 2023-04-03  9:48           ` David Hildenbrand
  2023-04-03 16:34             ` Stefan Roesch
  0 siblings, 1 reply; 33+ messages in thread
From: David Hildenbrand @ 2023-04-03  9:48 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: Johannes Weiner, Andrew Morton, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins

>>> [...]
>>
>> Maybe I'm bad at making such requests but
>>
>> "Stefan, can you do me a favor and investigate which pages we end up
>> deduplicating -- especially if it's mostly only the zeropage and if it's
>> still that significant when disabling THP?"
>>
>> "In any case, it would be nice to get a feeling for how much variety in
>> these 20% of deduplicated pages are. "
>>
>> is pretty clear to me. And shouldn't take months.
>>

Just to clarify: the details I requested are not meant to decide whether 
to reject the patch set (I understand that it can be beneficial to 
have); I primarily want to understand if we're really dealing with a 
workload where KSM is able to deduplicate pages that are non-trivial, to 
maybe figure out if there are other workloads that could similarly 
benefit -- or if we could optimize KSM for these specific cases or avoid 
the memory deduplication altogether.

In contrast to e.g.:

1) THP resulted in many zeropages we end up deduplicating again. The THP
    placement was unfortunate.

2) Unoptimized memory allocators that leave many identical pages mapped
    after freeing up memory (e.g., zeroed pages, pages all filled with
    poison values) instead of e.g., using MADV_DONTNEED to free up that
    memory.


> 
> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
> Instagram workload. The workload consists of 36 processes plus a few
> sidecar processes.

Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in 
that environment?

What would be interesting is pages_shared after max_page_sharing was set 
to a very high number such that pages_shared does not include 
duplicates. Then pages_shared actually expresses how many different 
pages we deduplicate. No need to run without THP in that case.

Similarly, enabling "use_zero_pages" could highlight if your workload 
ends up deduplicating a lot of zeropages. But maxing out 
max_page_sharing would be sufficient to understand what's happening.
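If it helps reproduce this, the two knob changes boil down to sysfs
writes; a minimal sketch in C (assuming the standard KSM sysfs paths;
note the kernel may refuse to change max_page_sharing with -EBUSY while
stable nodes already exist, so set it before merging starts):

#include <stdio.h>

static int write_knob(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	/* effectively unlimited sharing, so pages_shared counts unique pages */
	write_knob("/sys/kernel/mm/ksm/max_page_sharing", "1000000");
	/* make zeropage deduplication visible separately */
	write_knob("/sys/kernel/mm/ksm/use_zero_pages", "1");
	return 0;
}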


> 
> Each of these individual processes has around 500MB in KSM pages.
> 

That's really a lot, thanks.

> Also to give some idea for individual VMA's
> 
> 7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM:
> 73160 KB)
> 

I'll have a look at the patches today.

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
       [not found] ` <20230310182851.2579138-2-shr@devkernel.io>
  2023-03-13 16:26   ` [PATCH v4 1/3] mm: add new api to enable ksm per process Johannes Weiner
@ 2023-04-03 10:37   ` David Hildenbrand
  2023-04-03 11:03     ` David Hildenbrand
  2023-04-03 15:50     ` Stefan Roesch
  1 sibling, 2 replies; 33+ messages in thread
From: David Hildenbrand @ 2023-04-03 10:37 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	Bagas Sanjaya

On 10.03.23 19:28, Stefan Roesch wrote:
> Patch series "mm: process/cgroup ksm support", v3.
> 
> So far KSM can only be enabled by calling madvise for memory regions.  To
> be able to use KSM for more workloads, KSM needs to have the ability to be
> enabled / disabled at the process / cgroup level.
> 
> Use case 1:
> 
>    The madvise call is not available in the programming language.  An
>    example for this are programs with forked workloads using a garbage
>    collected language without pointers.  In such a language madvise cannot
>    be made available.
> 
>    In addition the addresses of objects get moved around as they are
>    garbage collected.  KSM sharing needs to be enabled "from the outside"
>    for these types of workloads.

I guess the interpreter could enable it (like a memory allocator could 
enable it for the whole heap). But I get that it's much easier to enable 
this per-process, and eventually only when a lot of the same processes 
are running in that particular environment.

> 
> Use case 2:
> 
>    The same interpreter can also be used for workloads where KSM brings
>    no benefit or even has overhead.  We'd like to be able to enable KSM on
>    a workload by workload basis.

Agreed. A per-process control is also helpful to identify workloads 
where KSM might be beneficial (and to which degree).

> 
> Use case 3:
> 
>    With the madvise call sharing opportunities are only enabled for the
>    current process: it is a workload-local decision.  A considerable number
>    of sharing opportunities may exist across multiple workloads or jobs.
>    Only a higher level entity like a job scheduler or container can know
>    for certain if it's running one or more instances of a job.  That job
>    scheduler however doesn't have the necessary internal workload knowledge
>    to make targeted madvise calls.
> 
> Security concerns:
> 
>    In previous discussions security concerns have been brought up.  The
>    problem is that an individual workload does not have the knowledge about
>    what else is running on a machine.  Therefore it has to be very
>    conservative in what memory areas can be shared or not.  However, if the
>    system is dedicated to running multiple jobs within the same security
>    domain, it's the job scheduler that has the knowledge that sharing can be
>    safely enabled and is even desirable.
> 
> Performance:
> 
>    Experiments with using UKSM have shown a capacity increase of around
>    20%.
> 

As raised, it would be great to include more details about the workload 
where this particularly helps (e.g., a lot of Django processes operating 
in the same domain).

> 
> 1. New options for prctl system command
> 
>     This patch series adds two new options to the prctl system call.
>     The first one allows to enable KSM at the process level and the second
>     one to query the setting.
> 
>     The setting will be inherited by child processes.
> 
>     With the above setting, KSM can be enabled for the seed process of a
>     cgroup and all processes in the cgroup will inherit the setting.
> 
> 2. Changes to KSM processing
> 
>     When KSM is enabled at the process level, the KSM code will iterate
>     over all the VMA's and enable KSM for the eligible VMA's.
> 
>     When forking a process that has KSM enabled, the setting will be
>     inherited by the new child process.
> 
>     In addition when KSM is disabled for a process, KSM will be disabled
>     for the VMA's where KSM has been enabled.

Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new 
prctl is enabled for a process?

> 
> 3. Add general_profit metric
> 
>     The general_profit metric of KSM is specified in the documentation,
>     but not calculated.  This adds the general profit metric to
>     /sys/kernel/debug/mm/ksm.
> 
> 4. Add more metrics to ksm_stat
> 
>     This adds the process profit and ksm type metric to
>     /proc/<pid>/ksm_stat.
> 
> 5. Add more tests to ksm_tests
> 
>     This adds an option to specify the merge type to the ksm_tests.
>     This allows to test madvise and prctl KSM.  It also adds a new option
>     to query if prctl KSM has been enabled.  It adds a fork test to verify
>     that the KSM process setting is inherited by client processes.
> 
> An update to the prctl(2) manpage has been proposed at [1].
> 
> This patch (of 3):
> 
> This adds a new prctl API to enable and disable KSM on a per process
> basis instead of only at the VMA basis (with madvise).
> 
> 1) Introduce new MMF_VM_MERGE_ANY flag
> 
>     This introduces the new flag MMF_VM_MERGE_ANY flag.  When this flag
>     is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>     process.
> 
> 2) add flag to __ksm_enter
> 
>     This change adds the flag parameter to __ksm_enter.  This allows to
>     distinguish if ksm was called by prctl or madvise.
> 
> 3) add flag to __ksm_exit call
> 
>     This adds the flag parameter to the __ksm_exit() call.  This allows
>     to distinguish if this call is for a prctl or madvise invocation.
> 
> 4) invoke madvise for all vmas in scan_get_next_rmap_item
> 
>     If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>     over all the vmas and enable ksm if possible.  For the vmas that can be
>     ksm enabled this is only done once.
> 
> 5) support disabling of ksm for a process
> 
>     This adds the ability to disable ksm for a process if ksm has been
>     enabled for the process.
> 
> 6) add new prctl option to get and set ksm for a process
> 
>     This adds two new options to the prctl system call
>     - enable ksm for all vmas of a process (if the vmas support it).
>     - query if ksm has been enabled for a process.


Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way, 
making it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE 
infrastructure? Especially:

1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable,
    compatible VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
    already set.

2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
    VM_MERGEABLE?

Then you can avoid all runtime checks for compatible VMAs and only look 
at the VM_MERGEABLE flag. In fact, VM_MERGEABLE will then be fully 
expressive for all VMAs. You don't need vma_ksm_mergeable() then.
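Concretely, 1) could look roughly like this (a sketch only: it assumes
the vma_ksm_compatible() helper from this patch, the one-argument
__ksm_enter() suggested below, and the 6.3 VMA iterator / vm_flags_set()
helpers; a hypothetical name, not the code as posted):

static int ksm_enable_merge_any(struct mm_struct *mm)
{
	struct vm_area_struct *vma;
	VMA_ITERATOR(vmi, mm, 0);
	int err;

	if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
		err = __ksm_enter(mm);
		if (err)
			return err;
	}

	/* mark every currently compatible VMA; new VMAs would be
	 * handled at creation time, as per 2) */
	for_each_vma(vmi, vma)
		if (vma_ksm_compatible(vma))
			vm_flags_set(vma, VM_MERGEABLE);

	set_bit(MMF_VM_MERGE_ANY, &mm->flags);
	return 0;
}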


Another thing to consider is interaction with arch/s390/mm/gmap.c: 
s390x/kvm does not support KSM and it has to disable it for all VMAs. We 
have to find a way to fence the prctl (for example, fail setting the 
prctl after gmap_mark_unmergeable() ran, and make 
gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully 
in some other way).


> 
> Link: https://lkml.kernel.org/r/20230227220206.436662-1-shr@devkernel.io [1]
> Link: https://lkml.kernel.org/r/20230224044000.3084046-1-shr@devkernel.io
> Link: https://lkml.kernel.org/r/20230224044000.3084046-2-shr@devkernel.io
> Signed-off-by: Stefan Roesch <shr@devkernel.io>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: Bagas Sanjaya <bagasdotme@gmail.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>   include/linux/ksm.h            | 14 ++++--
>   include/linux/sched/coredump.h |  1 +
>   include/uapi/linux/prctl.h     |  2 +
>   kernel/sys.c                   | 27 ++++++++++
>   mm/ksm.c                       | 90 +++++++++++++++++++++++-----------
>   5 files changed, 101 insertions(+), 33 deletions(-)
> 
> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
> index 7e232ba59b86..d38a05a36298 100644
> --- a/include/linux/ksm.h
> +++ b/include/linux/ksm.h
> @@ -18,20 +18,24 @@
>   #ifdef CONFIG_KSM
>   int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>   		unsigned long end, int advice, unsigned long *vm_flags);
> -int __ksm_enter(struct mm_struct *mm);
> -void __ksm_exit(struct mm_struct *mm);
> +int __ksm_enter(struct mm_struct *mm, int flag);
> +void __ksm_exit(struct mm_struct *mm, int flag);
>   
>   static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
>   {
> +	if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
> +		return __ksm_enter(mm, MMF_VM_MERGE_ANY);
>   	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
> -		return __ksm_enter(mm);
> +		return __ksm_enter(mm, MMF_VM_MERGEABLE);
>   	return 0;
>   }
>   
>   static inline void ksm_exit(struct mm_struct *mm)
>   {
> -	if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
> -		__ksm_exit(mm);
> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
> +		__ksm_exit(mm, MMF_VM_MERGE_ANY);
> +	else if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
> +		__ksm_exit(mm, MMF_VM_MERGEABLE);
>   }
>   
>   /*
> diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
> index 0e17ae7fbfd3..0ee96ea7a0e9 100644
> --- a/include/linux/sched/coredump.h
> +++ b/include/linux/sched/coredump.h
> @@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm)
>   #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
>   				 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK)
>   
> +#define MMF_VM_MERGE_ANY	29
>   #endif /* _LINUX_SCHED_COREDUMP_H */
> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
> index 1312a137f7fb..759b3f53e53f 100644
> --- a/include/uapi/linux/prctl.h
> +++ b/include/uapi/linux/prctl.h
> @@ -290,4 +290,6 @@ struct prctl_mm_map {
>   #define PR_SET_VMA		0x53564d41
>   # define PR_SET_VMA_ANON_NAME		0
>   
> +#define PR_SET_MEMORY_MERGE		67
> +#define PR_GET_MEMORY_MERGE		68
>   #endif /* _LINUX_PRCTL_H */
> diff --git a/kernel/sys.c b/kernel/sys.c
> index 495cd87d9bf4..edc439b1cae9 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -15,6 +15,7 @@
>   #include <linux/highuid.h>
>   #include <linux/fs.h>
>   #include <linux/kmod.h>
> +#include <linux/ksm.h>
>   #include <linux/perf_event.h>
>   #include <linux/resource.h>
>   #include <linux/kernel.h>
> @@ -2661,6 +2662,32 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>   	case PR_SET_VMA:
>   		error = prctl_set_vma(arg2, arg3, arg4, arg5);
>   		break;
> +#ifdef CONFIG_KSM
> +	case PR_SET_MEMORY_MERGE:
> +		if (!capable(CAP_SYS_RESOURCE))
> +			return -EPERM;
> +
> +		if (arg2) {
> +			if (mmap_write_lock_killable(me->mm))
> +				return -EINTR;
> +
> +			if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags))
> +				error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY);

Hm, I think this might be problematic if we already called __ksm_enter() 
via madvise(). Maybe we should really consider making MMF_VM_MERGE_ANY 
set MMF_VM_MERGEABLE instead. Like:

error = 0;
/* only enter KSM once, even if madvise() already did */
if (!test_bit(MMF_VM_MERGEABLE, &me->mm->flags))
	error = __ksm_enter(me->mm);
if (!error)
	set_bit(MMF_VM_MERGE_ANY, &me->mm->flags);

> +			mmap_write_unlock(me->mm);
> +		} else {
> +			__ksm_exit(me->mm, MMF_VM_MERGE_ANY);

Hm, I'd prefer to call __ksm_exit() only when we actually exit the 
process. Is there a strong requirement to optimize disabling of KSM, 
or would it be sufficient to clear the MMF_VM_MERGE_ANY flag here?

Also, I wonder what happens if we have another VMA in that process that 
has it enabled ..

Last but not least, wouldn't we want to do the same thing as 
MADV_UNMERGEABLE and actually unmerge the KSM pages?


It smells like it could be simpler and more consistent to handle by 
letting PR_SET_MEMORY_MERGE piggy-back on MMF_VM_MERGEABLE/VM_MERGEABLE 
and mimic what ksm_madvise() does simply for all VMAs.

> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -534,16 +534,58 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
>   	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
>   }
>   
> +static bool vma_ksm_compatible(struct vm_area_struct *vma)
> +{
> +	/*
> +	 * Be somewhat over-protective for now!
> +	 */
> +	if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
> +			     VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
> +			     VM_HUGETLB | VM_MIXEDMAP))
> +		return false;		/* just ignore the advice */

That comment is kind-of stale and ksm_madvise() specific.

> +

The VM_MERGEABLE check is really only used for ksm_madvise() to return 
immediately. I suggest keeping it in ksm_madvise() -- "Already enabled". 
Returning "false" in that case looks wrong (it's not broken because you 
do an early check in vma_ksm_mergeable(), it's just semantically weird).



-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-03 10:37   ` David Hildenbrand
@ 2023-04-03 11:03     ` David Hildenbrand
  2023-04-04 16:32       ` Stefan Roesch
                         ` (2 more replies)
  2023-04-03 15:50     ` Stefan Roesch
  1 sibling, 3 replies; 33+ messages in thread
From: David Hildenbrand @ 2023-04-03 11:03 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	Bagas Sanjaya, Janosch Frank, Christian Borntraeger

On 03.04.23 12:37, David Hildenbrand wrote:
> On 10.03.23 19:28, Stefan Roesch wrote:
>> Patch series "mm: process/cgroup ksm support", v3.
>>
>> So far KSM can only be enabled by calling madvise for memory regions.  To
>> be able to use KSM for more workloads, KSM needs to have the ability to be
>> enabled / disabled at the process / cgroup level.
>>
>> Use case 1:
>>
>>     The madvise call is not available in the programming language.  An
>>     example for this are programs with forked workloads using a garbage
>>     collected language without pointers.  In such a language madvise cannot
>>     be made available.
>>
>>     In addition the addresses of objects get moved around as they are
>>     garbage collected.  KSM sharing needs to be enabled "from the outside"
>>     for these types of workloads.
> 
> I guess the interpreter could enable it (like a memory allocator could
> enable it for the whole heap). But I get that it's much easier to enable
> this per-process, and eventually only when a lot of the same processes
> are running in that particular environment.
> 
>>
>> Use case 2:
>>
>>     The same interpreter can also be used for workloads where KSM brings
>>     no benefit or even has overhead.  We'd like to be able to enable KSM on
>>     a workload by workload basis.
> 
> Agreed. A per-process control is also helpful to identify workloads
> where KSM might be beneficial (and to which degree).
> 
>>
>> Use case 3:
>>
>>     With the madvise call sharing opportunities are only enabled for the
>>     current process: it is a workload-local decision.  A considerable number
>>     of sharing opportunities may exist across multiple workloads or jobs.
>>     Only a higher level entity like a job scheduler or container can know
>>     for certain if it's running one or more instances of a job.  That job
>>     scheduler however doesn't have the necessary internal workload knowledge
>>     to make targeted madvise calls.
>>
>> Security concerns:
>>
>>     In previous discussions security concerns have been brought up.  The
>>     problem is that an individual workload does not have the knowledge about
>>     what else is running on a machine.  Therefore it has to be very
>>     conservative in what memory areas can be shared or not.  However, if the
>>     system is dedicated to running multiple jobs within the same security
>>     domain, it's the job scheduler that has the knowledge that sharing can be
>>     safely enabled and is even desirable.
>>
>> Performance:
>>
>>     Experiments with using UKSM have shown a capacity increase of around
>>     20%.
>>
> 
> As raised, it would be great to include more details about the workload
> where this particularly helps (e.g., a lot of Django processes operating
> in the same domain).
> 
>>
>> 1. New options for prctl system command
>>
>>      This patch series adds two new options to the prctl system call.
>>      The first one allows to enable KSM at the process level and the second
>>      one to query the setting.
>>
>>      The setting will be inherited by child processes.
>>
>>      With the above setting, KSM can be enabled for the seed process of a
>>      cgroup and all processes in the cgroup will inherit the setting.
>>
>> 2. Changes to KSM processing
>>
>>      When KSM is enabled at the process level, the KSM code will iterate
>>      over all the VMA's and enable KSM for the eligible VMA's.
>>
>>      When forking a process that has KSM enabled, the setting will be
>>      inherited by the new child process.
>>
>>      In addition when KSM is disabled for a process, KSM will be disabled
>>      for the VMA's where KSM has been enabled.
> 
> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new
> prctl is enabled for a process?
> 
>>
>> 3. Add general_profit metric
>>
>>      The general_profit metric of KSM is specified in the documentation,
>>      but not calculated.  This adds the general profit metric to
>>      /sys/kernel/debug/mm/ksm.
>>
>> 4. Add more metrics to ksm_stat
>>
>>      This adds the process profit and ksm type metric to
>>      /proc/<pid>/ksm_stat.
>>
>> 5. Add more tests to ksm_tests
>>
>>      This adds an option to specify the merge type to the ksm_tests.
>>      This allows to test madvise and prctl KSM.  It also adds a new option
>>      to query if prctl KSM has been enabled.  It adds a fork test to verify
>>      that the KSM process setting is inherited by client processes.
>>
>> An update to the prctl(2) manpage has been proposed at [1].
>>
>> This patch (of 3):
>>
>> This adds a new prctl API to enable and disable KSM on a per process
>> basis instead of only at the VMA basis (with madvise).
>>
>> 1) Introduce new MMF_VM_MERGE_ANY flag
>>
>>      This introduces the new flag MMF_VM_MERGE_ANY flag.  When this flag
>>      is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>>      process.
>>
>> 2) add flag to __ksm_enter
>>
>>      This change adds the flag parameter to __ksm_enter.  This allows to
>>      distinguish if ksm was called by prctl or madvise.
>>
>> 3) add flag to __ksm_exit call
>>
>>      This adds the flag parameter to the __ksm_exit() call.  This allows
>>      to distinguish if this call is for a prctl or madvise invocation.
>>
>> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>>
>>      If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>>      over all the vmas and enable ksm if possible.  For the vmas that can be
>>      ksm enabled this is only done once.
>>
>> 5) support disabling of ksm for a process
>>
>>      This adds the ability to disable ksm for a process if ksm has been
>>      enabled for the process.
>>
>> 6) add new prctl option to get and set ksm for a process
>>
>>      This adds two new options to the prctl system call
>>      - enable ksm for all vmas of a process (if the vmas support it).
>>      - query if ksm has been enabled for a process.
> 
> 
> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way,
> making it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE
> infrastructure? Especially:
> 
> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable,
>      compatible VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
>      already set.
> 
> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
>      VM_MERGEABLE?
> 
> Then you can avoid all runtime checks for compatible VMAs and only look
> at the VM_MERGEABLE flag. In fact, VM_MERGEABLE will then be fully
> expressive for all VMAs. You don't need vma_ksm_mergeable() then.
> 
> 
> Another thing to consider is interaction with arch/s390/mm/gmap.c:
> s390x/kvm does not support KSM and it has to disable it for all VMAs. We
> have to find a way to fence the prctl (for example, fail setting the
> prctl after gmap_mark_unmergeable() ran, and make
> gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully
> in some other way).


Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" 
is doing what it's supposed to do. I don't think this effectively 
prevents madvise() from re-enabling merging on that VMA right now.

@Christian, Janosch, am I missing something?

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-03 10:37   ` David Hildenbrand
  2023-04-03 11:03     ` David Hildenbrand
@ 2023-04-03 15:50     ` Stefan Roesch
  2023-04-03 17:02       ` David Hildenbrand
  1 sibling, 1 reply; 33+ messages in thread
From: Stefan Roesch @ 2023-04-03 15:50 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya


David Hildenbrand <david@redhat.com> writes:

> On 10.03.23 19:28, Stefan Roesch wrote:
>> Patch series "mm: process/cgroup ksm support", v3.
>> So far KSM can only be enabled by calling madvise for memory regions.  To
>> be able to use KSM for more workloads, KSM needs to have the ability to be
>> enabled / disabled at the process / cgroup level.
>> Use case 1:
>>    The madvise call is not available in the programming language.  An
>>    example for this are programs with forked workloads using a garbage
>>    collected language without pointers.  In such a language madvise cannot
>>    be made available.
>>    In addition the addresses of objects get moved around as they are
>>    garbage collected.  KSM sharing needs to be enabled "from the outside"
>>    for these types of workloads.
>
> I guess the interpreter could enable it (like a memory allocator could enable it
> for the whole heap). But I get that it's much easier to enable this per-process,
> and eventually only when a lot of the same processes are running in that
> particular environment.
>

We don't want it to get enabled for all workloads of that interpreter;
instead we want to be able to select the workloads for which we enable KSM.

>> Use case 2:
>>    The same interpreter can also be used for workloads where KSM brings
>>    no benefit or even has overhead.  We'd like to be able to enable KSM on
>>    a workload by workload basis.
>
> Agreed. A per-process control is also helpful to identify workloads where KSM
> might be beneficial (and to which degree).
>
>> Use case 3:
>>    With the madvise call sharing opportunities are only enabled for the
>>    current process: it is a workload-local decision.  A considerable number
>>    of sharing opportunities may exist across multiple workloads or jobs.
>>    Only a higher level entity like a job scheduler or container can know
>>    for certain if it's running one or more instances of a job.  That job
>>    scheduler however doesn't have the necessary internal workload knowledge
>>    to make targeted madvise calls.
>> Security concerns:
>>    In previous discussions security concerns have been brought up.  The
>>    problem is that an individual workload does not have the knowledge about
>>    what else is running on a machine.  Therefore it has to be very
>>    conservative in what memory areas can be shared or not.  However, if the
>>    system is dedicated to running multiple jobs within the same security
>>    domain, it's the job scheduler that has the knowledge that sharing can be
>>    safely enabled and is even desirable.
>> Performance:
>>    Experiments with using UKSM have shown a capacity increase of around
>>    20%.
>>
>
> As raised, it would be great to include more details about the workload where
> this particularly helps (e.g., a lot of Django processes operating in the same
> domain).
>

With the next version of the patch series, I can add that the Django
processes are part of the same domain.

>> 1. New options for prctl system command
>>     This patch series adds two new options to the prctl system call.
>>     The first one allows to enable KSM at the process level and the second
>>     one to query the setting.
>>     The setting will be inherited by child processes.
>>     With the above setting, KSM can be enabled for the seed process of a
>>     cgroup and all processes in the cgroup will inherit the setting.
>> 2. Changes to KSM processing
>>     When KSM is enabled at the process level, the KSM code will iterate
>>     over all the VMA's and enable KSM for the eligible VMA's.
>>     When forking a process that has KSM enabled, the setting will be
>>     inherited by the new child process.
>>     In addition when KSM is disabled for a process, KSM will be disabled
>>     for the VMA's where KSM has been enabled.
>
> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new prctl is
> enabled for a process?

I decided to allow enabling KSM with prctl even when MADV_MERGEABLE has
already been used; this allows more flexibility.
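For reference, this is what the proposed interface looks like from
userspace; a minimal sketch using the constants this patch adds to
<linux/prctl.h> (defined locally in case the installed uapi headers
don't have them yet):

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_MEMORY_MERGE
#define PR_SET_MEMORY_MERGE	67
#define PR_GET_MEMORY_MERGE	68
#endif

int main(void)
{
	/* needs CAP_SYS_RESOURCE, see the kernel/sys.c hunk below */
	if (prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0))
		perror("PR_SET_MEMORY_MERGE");

	/* children forked from here inherit the setting */
	printf("merge-any: %d\n", prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0));
	return 0;
}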
>
>> 3. Add general_profit metric
>>     The general_profit metric of KSM is specified in the documentation,
>>     but not calculated.  This adds the general profit metric to
>>     /sys/kernel/debug/mm/ksm.
>> 4. Add more metrics to ksm_stat
>>     This adds the process profit and ksm type metric to
>>     /proc/<pid>/ksm_stat.
>> 5. Add more tests to ksm_tests
>>     This adds an option to specify the merge type to the ksm_tests.
>>     This allows to test madvise and prctl KSM.  It also adds a new option
>>     to query if prctl KSM has been enabled.  It adds a fork test to verify
>>     that the KSM process setting is inherited by client processes.
>> An update to the prctl(2) manpage has been proposed at [1].
>> This patch (of 3):
>> This adds a new prctl API to enable and disable KSM on a per process
>> basis instead of only at the VMA basis (with madvise).
>> 1) Introduce new MMF_VM_MERGE_ANY flag
>>     This introduces the new flag MMF_VM_MERGE_ANY flag.  When this flag
>>     is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>>     process.
>> 2) add flag to __ksm_enter
>>     This change adds the flag parameter to __ksm_enter.  This allows to
>>     distinguish if ksm was called by prctl or madvise.
>> 3) add flag to __ksm_exit call
>>     This adds the flag parameter to the __ksm_exit() call.  This allows
>>     to distinguish if this call is for a prctl or madvise invocation.
>> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>>     If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>>     over all the vmas and enable ksm if possible.  For the vmas that can be
>>     ksm enabled this is only done once.
>> 5) support disabling of ksm for a process
>>     This adds the ability to disable ksm for a process if ksm has been
>>     enabled for the process.
>> 6) add new prctl option to get and set ksm for a process
>>     This adds two new options to the prctl system call
>>     - enable ksm for all vmas of a process (if the vmas support it).
>>     - query if ksm has been enabled for a process.
>
>
> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way,
> making it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE infrastructure?
> Especially:
>
> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable,
>    compatible VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
>    already set.
>
> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
>    VM_MERGEABLE?
>
> Then you can avoid all runtime checks for compatible VMAs and only look at
> the VM_MERGEABLE flag. In fact, VM_MERGEABLE will then be fully expressive
> for all VMAs. You don't need vma_ksm_mergeable() then.
>

I didn't consider the above approach; I can have a look. I can see the
benefit of not needing vma_ksm_mergeable().

> Another thing to consider is interaction with arch/s390/mm/gmap.c: s390x/kvm
> does not support KSM and it has to disable it for all VMAs. We have to find a
> way to fence the prctl (for example, fail setting the prctl after
> gmap_mark_unmergeable() ran, and make gmap_mark_unmergeable() fail if the prctl
> ran -- or handle it gracefully in some other way).
>
>

I'll have a look.

>> Link: https://lkml.kernel.org/r/20230227220206.436662-1-shr@devkernel.io [1]
>> Link: https://lkml.kernel.org/r/20230224044000.3084046-1-shr@devkernel.io
>> Link: https://lkml.kernel.org/r/20230224044000.3084046-2-shr@devkernel.io
>> Signed-off-by: Stefan Roesch <shr@devkernel.io>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Rik van Riel <riel@surriel.com>
>> Cc: Bagas Sanjaya <bagasdotme@gmail.com>
>> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>> ---
>>   include/linux/ksm.h            | 14 ++++--
>>   include/linux/sched/coredump.h |  1 +
>>   include/uapi/linux/prctl.h     |  2 +
>>   kernel/sys.c                   | 27 ++++++++++
>>   mm/ksm.c                       | 90 +++++++++++++++++++++++-----------
>>   5 files changed, 101 insertions(+), 33 deletions(-)
>> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
>> index 7e232ba59b86..d38a05a36298 100644
>> --- a/include/linux/ksm.h
>> +++ b/include/linux/ksm.h
>> @@ -18,20 +18,24 @@
>>   #ifdef CONFIG_KSM
>>   int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>>   		unsigned long end, int advice, unsigned long *vm_flags);
>> -int __ksm_enter(struct mm_struct *mm);
>> -void __ksm_exit(struct mm_struct *mm);
>> +int __ksm_enter(struct mm_struct *mm, int flag);
>> +void __ksm_exit(struct mm_struct *mm, int flag);
>>     static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
>>   {
>> +	if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
>> +		return __ksm_enter(mm, MMF_VM_MERGE_ANY);
>>   	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
>> -		return __ksm_enter(mm);
>> +		return __ksm_enter(mm, MMF_VM_MERGEABLE);
>>   	return 0;
>>   }
>>     static inline void ksm_exit(struct mm_struct *mm)
>>   {
>> -	if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
>> -		__ksm_exit(mm);
>> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
>> +		__ksm_exit(mm, MMF_VM_MERGE_ANY);
>> +	else if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
>> +		__ksm_exit(mm, MMF_VM_MERGEABLE);
>>   }
>>     /*
>> diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
>> index 0e17ae7fbfd3..0ee96ea7a0e9 100644
>> --- a/include/linux/sched/coredump.h
>> +++ b/include/linux/sched/coredump.h
>> @@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm)
>>   #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
>>   				 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK)
>>   +#define MMF_VM_MERGE_ANY	29
>>   #endif /* _LINUX_SCHED_COREDUMP_H */
>> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
>> index 1312a137f7fb..759b3f53e53f 100644
>> --- a/include/uapi/linux/prctl.h
>> +++ b/include/uapi/linux/prctl.h
>> @@ -290,4 +290,6 @@ struct prctl_mm_map {
>>   #define PR_SET_VMA		0x53564d41
>>   # define PR_SET_VMA_ANON_NAME		0
>>   +#define PR_SET_MEMORY_MERGE		67
>> +#define PR_GET_MEMORY_MERGE		68
>>   #endif /* _LINUX_PRCTL_H */
>> diff --git a/kernel/sys.c b/kernel/sys.c
>> index 495cd87d9bf4..edc439b1cae9 100644
>> --- a/kernel/sys.c
>> +++ b/kernel/sys.c
>> @@ -15,6 +15,7 @@
>>   #include <linux/highuid.h>
>>   #include <linux/fs.h>
>>   #include <linux/kmod.h>
>> +#include <linux/ksm.h>
>>   #include <linux/perf_event.h>
>>   #include <linux/resource.h>
>>   #include <linux/kernel.h>
>> @@ -2661,6 +2662,32 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>>   	case PR_SET_VMA:
>>   		error = prctl_set_vma(arg2, arg3, arg4, arg5);
>>   		break;
>> +#ifdef CONFIG_KSM
>> +	case PR_SET_MEMORY_MERGE:
>> +		if (!capable(CAP_SYS_RESOURCE))
>> +			return -EPERM;
>> +
>> +		if (arg2) {
>> +			if (mmap_write_lock_killable(me->mm))
>> +				return -EINTR;
>> +
>> +			if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags))
>> +				error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY);
>
>> Hm, I think this might be problematic if we already called __ksm_enter() via
>> madvise(). Maybe we should really consider making MMF_VM_MERGE_ANY set
>> MMF_VM_MERGEABLE instead. Like:
>>
>> error = 0;
>> /* only enter KSM once, even if madvise() already did */
>> if (!test_bit(MMF_VM_MERGEABLE, &me->mm->flags))
>> 	error = __ksm_enter(me->mm);
>> if (!error)
>> 	set_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
>

If we make that change, we would no longer be able to distinguish
whether MMF_VM_MERGEABLE or MMF_VM_MERGE_ANY has been set.

>> +			mmap_write_unlock(me->mm);
>> +		} else {
>> +			__ksm_exit(me->mm, MMF_VM_MERGE_ANY);
>
>> Hm, I'd prefer to call __ksm_exit() only when we actually exit the
>> process. Is there a strong requirement to optimize disabling of KSM, or would
>> it be sufficient to clear the MMF_VM_MERGE_ANY flag here?
>
Then we still have the mm_slot allocated until the process gets
terminated.

> Also, I wonder what happens if we have another VMA in that process that has it
> enabled ..
>
> Last but not least, wouldn't we want to do the same thing as MADV_UNMERGEABLE
> and actually unmerge the KSM pages?
>
Do you want to call unmerge for all VMA's?

>
> It smells like it could be simpler and more consistent to handle by letting
> PR_SET_MEMORY_MERGE piggy-back on MMF_VM_MERGEABLE/VM_MERGEABLE and mimic what
> ksm_madvise() does simply for all VMAs.
>
>> --- a/mm/ksm.c
>> +++ b/mm/ksm.c
>> @@ -534,16 +534,58 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
>>   	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
>>   }
>>   +static bool vma_ksm_compatible(struct vm_area_struct *vma)
>> +{
>> +	/*
>> +	 * Be somewhat over-protective for now!
>> +	 */
>> +	if (vma->vm_flags & (VM_MERGEABLE | VM_SHARED  | VM_MAYSHARE   |
>> +			     VM_PFNMAP    | VM_IO      | VM_DONTEXPAND |
>> +			     VM_HUGETLB | VM_MIXEDMAP))
>> +		return false;		/* just ignore the advice */
>
> That comment is kind-of stale and ksm_madvise() specific.
>

I'll remove the comment.

>> +
>
> The VM_MERGEABLE check is really only used for ksm_madvise() to return
> immediately. I suggest keeping it in ksm_madvise() -- "Already enabled".
> Returning "false" in that case looks wrong (it's not broken because you do an
> early check in vma_ksm_mergeable(), it's just semantically weird).
>

I'll keep the check in ksm_madvise() and remove it here.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-04-03  9:48           ` David Hildenbrand
@ 2023-04-03 16:34             ` Stefan Roesch
  2023-04-03 17:04               ` David Hildenbrand
  2023-04-06 16:59               ` Stefan Roesch
  0 siblings, 2 replies; 33+ messages in thread
From: Stefan Roesch @ 2023-04-03 16:34 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Johannes Weiner, Andrew Morton, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins


David Hildenbrand <david@redhat.com> writes:

>>>> Obviously we could spend months analysing which exact allocations are
>>>> identical, and then more months or years reworking the architecture to
>>>> deduplicate them by hand and in userspace. But this isn't practical,
>>>> and KSM is specifically for cases where this isn't practical.
>>>> Based on your request in the previous thread, we investigated whether
>>>> the boost was coming from the unintended side effects of KSM splitting
>>>> THPs. This wasn't the case.
>>>> If you have other theories on how the results could be bogus, we'd be
>>>> happy to investigate those as well. But you have to let us know what
>>>> you're looking for.
>>>>
>>>
>>> Maybe I'm bad at making such requests but
>>>
>>> "Stefan, can you do me a favor and investigate which pages we end up
>>> deduplicating -- especially if it's mostly only the zeropage and if it's
>>> still that significant when disabling THP?"
>>>
>>> "In any case, it would be nice to get a feeling for how much variety in
>>> these 20% of deduplicated pages are. "
>>>
>>> is pretty clear to me. And shouldn't take months.
>>>
>
> Just to clarify: the details I requested are not meant to decide whether to
> reject the patch set (I understand that it can be beneficial to have); I
> primarily want to understand if we're really dealing with a workload where KSM
> is able to deduplicate pages that are non-trivial, to maybe figure out if there
> are other workloads that could similarly benefit -- or if we could optimize KSM
> for these specific cases or avoid the memory deduplication altogether.
>
> In contrast to e.g.:
>
> 1) THP resulted in many zeropages we end up deduplicating again. The THP
>    placement was unfortunate.
>
> 2) Unoptimized memory allocators that leave many identical pages mapped
>    after freeing up memory (e.g., zeroed pages, pages all filled with
>    poison values) instead of e.g., using MADV_DONTNEED to free up that
>    memory.
>
>

I repeated an experiment with and without KSM. In terms of THP there is
no huge difference between the two. On a machine with 64GB of main
memory I see between 100MB and 400MB in AnonHugePages.

>> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
>> Instagram workload. The workload consists of 36 processes plus a few
>> sidecar processes.
>
> Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in that
> environment?
>

It's set to the standard value of 256.

In the meantime I have run experiments with different settings for
pages_to_scan. With the default value of 100, we only get a relatively
small benefit from KSM. If I increase the value to, for instance, 2000
or 3000, the savings are substantial. (The workload is memory bound,
not CPU bound).

Here are some stats for setting pages_to_scan to 3000:

full_scans: 560
general_profit: 20620539008
max_page_sharing: 256
merge_across_nodes: 1
pages_shared: 125446
pages_sharing: 5259506
pages_to_scan: 3000
pages_unshared: 1897537
pages_volatile: 12389223
run: 1
sleep_millisecs: 20
stable_node_chains: 176
stable_node_chains_prune_millisecs: 2000
stable_node_dups: 2604
use_zero_pages: 0
zero_pages_sharing: 0
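For context, these counters line up with the general_profit
approximation from the KSM documentation (a back-of-the-envelope check,
assuming 4 KiB pages; all_rmap_items is the sum of pages_shared,
pages_sharing, pages_unshared and pages_volatile):

/*
 * general_profit =~ pages_sharing * PAGE_SIZE
 *                   - all_rmap_items * sizeof(rmap_item)
 *
 * pages_sharing * PAGE_SIZE = 5259506 * 4096 =~ 21.5 GB saved,
 * and the reported 20620539008 (~20.6 GB) is that minus roughly
 * 0.9 GB of rmap_item overhead for the ~19.7M rmap items above.
 */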


> What would be interesting is pages_shared after max_page_sharing was set to a
> very high number such that pages_shared does not include duplicates. Then
> pages_shared actually expresses how many different pages we deduplicate. No need
> to run without THP in that case.
>

That's on my list for the next set of experiments.
> Similarly, enabling "use_zero_pages" could highlight if your workload ends up
> deduplicating a lot of zeropages. But maxing out max_page_sharing would be
> sufficient to understand what's happening.
>
>

I already ran experiments with use_zero_pages, but they didn't make a
difference. I'll repeat the experiment with a higher pages_to_scan
value.

>> Each of these individual processes has around 500MB in KSM pages.
>>
>
> That's really a lot, thanks.
>
>> Also to give some idea for individual VMA's
>> 7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM:
>> 73160 KB)
>>
>
> I'll have a look at the patches today.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-03 15:50     ` Stefan Roesch
@ 2023-04-03 17:02       ` David Hildenbrand
  0 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2023-04-03 17:02 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya

On 03.04.23 17:50, Stefan Roesch wrote:
>> I guess the interpreter could enable it (like a memory allocator could enable it
>> for the whole heap). But I get that it's much easier to enable this per-process,
>> and eventually only when a lot of the same processes are running in that
>> particular environment.
>>
> 
> We don't want it to get enabled for all workloads of that interpreter;
> instead we want to be able to select the workloads for which we enable KSM.
> 

Right.

> 
>>> 1. New options for prctl system command
>>>      This patch series adds two new options to the prctl system call.
>>>      The first one allows to enable KSM at the process level and the second
>>>      one to query the setting.
>>>      The setting will be inherited by child processes.
>>>      With the above setting, KSM can be enabled for the seed process of a
>>>      cgroup and all processes in the cgroup will inherit the setting.
>>> 2. Changes to KSM processing
>>>      When KSM is enabled at the process level, the KSM code will iterate
>>>      over all the VMA's and enable KSM for the eligible VMA's.
>>>      When forking a process that has KSM enabled, the setting will be
>>>      inherited by the new child process.
>>>      In addition when KSM is disabled for a process, KSM will be disabled
>>>      for the VMA's where KSM has been enabled.
>>
>> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new prctl is
>> enabled for a process?
> 
> I decided to allow enabling KSM with prctl even when MADV_MERGEABLE has
> already been used; this allows more flexibility.

MADV_MERGEABLE will be a nop. But IIUC, MADV_UNMERGEABLE will end up 
calling unmerge_ksm_pages() and clearing VM_MERGEABLE. But then, the next 
KSM scan will merge the pages in there again.

Not sure if that flexibility is worth having.
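If we did want to fence it, the check could sit early in ksm_madvise();
a hypothetical sketch, not something this patch currently does:

	if (advice == MADV_UNMERGEABLE &&
	    test_bit(MMF_VM_MERGE_ANY, &vma->vm_mm->flags))
		return -EPERM;	/* per-VMA opt-out is owned by the prctl */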

[...]


>>> @@ -2661,6 +2662,32 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>>>    	case PR_SET_VMA:
>>>    		error = prctl_set_vma(arg2, arg3, arg4, arg5);
>>>    		break;
>>> +#ifdef CONFIG_KSM
>>> +	case PR_SET_MEMORY_MERGE:
>>> +		if (!capable(CAP_SYS_RESOURCE))
>>> +			return -EPERM;
>>> +
>>> +		if (arg2) {
>>> +			if (mmap_write_lock_killable(me->mm))
>>> +				return -EINTR;
>>> +
>>> +			if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags))
>>> +				error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY);
>>
>> Hm, I think this might be problematic if we already called __ksm_enter() via
>> madvise(). Maybe we should really consider making MMF_VM_MERGE_ANY set
>> MMF_VM_MERGEABLE instead. Like:
>>
>> error = 0;
>> /* only enter KSM once, even if madvise() already did */
>> if (!test_bit(MMF_VM_MERGEABLE, &me->mm->flags))
>> 	error = __ksm_enter(me->mm);
>> if (!error)
>> 	set_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
>>
> 
> If we make that change, we would no longer be able to distinguish
> whether MMF_VM_MERGEABLE or MMF_VM_MERGE_ANY has been set.

Why would you need that exactly? To clean up? See below.

> 
>>> +			mmap_write_unlock(me->mm);
>>> +		} else {
>>> +			__ksm_exit(me->mm, MMF_VM_MERGE_ANY);
>>
>> Hm, I'd prefer to call __ksm_exit() only when we actually exit the
>> process. Is there a strong requirement to optimize disabling of KSM, or would
>> it be sufficient to clear the MMF_VM_MERGE_ANY flag here?
>>
> Then we still have the mm_slot allocated until the process gets
> terminated.

Which is the same as using MADV_UNMERGEABLE, no?

> 
>> Also, I wonder what happens if we have another VMA in that process that has it
>> enabled ..
>>
>> Last but not least, wouldn't we want to do the same thing as MADV_UNMERGEABLE
>> and actually unmerge the KSM pages?
>>
> Do you want to call unmerge for all VMA's?

The question is what clearing MMF_VM_MERGE_ANY is supposed to do. If 
it's supposed to disable KSM (as MADV_UNMERGEABLE would), then I guess 
you should go over all VMA's and unmerge.

Also, it depends on how you want to handle VM_MERGEABLE with 
MMF_VM_MERGE_ANY. If MMF_VM_MERGE_ANY would not set VM_MERGEABLE, then 
you'd only unmerge where VM_MERGEABLE is not set. Otherwise, you'd 
unshare everywhere VM_MERGEABLE is set (and clear VM_MERGEABLE while 
at it).

Unsharing when clearing MMF_VM_MERGE_ANY might be the right thing to do 
IMHO.


I guess the main questions regarding implementation are:

1) Do we want setting MMF_VM_MERGE_ANY to set VM_MERGEABLE on all
    candidate VMA's (go over all VMA's and set VM_MERGEABLE)? Then,
    clearing MMF_VM_MERGE_ANY would simply unmerge and clear VM_MERGEABLE
    on all VMA's, as sketched below.

2) Do we want to make MMF_VM_MERGE_ANY imply MMF_VM_MERGEABLE? You could
    still disable KSM (__ksm_exit()) when clearing MMF_VM_MERGE_ANY
    after going over all VMA's (where you might want to unshare already
    either way).

I guess the code will end up simpler if you make MMF_VM_MERGE_ANY simply 
piggy-back on MMF_VM_MERGEABLE + VM_MERGEABLE. I might be wrong, of course.
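Sketched out, the disable path under 1) would essentially be the
MADV_UNMERGEABLE loop applied to all VMAs (a sketch assuming ksm's
existing unmerge_ksm_pages() helper and the 6.3 vm_flags_clear(); a
hypothetical name, error handling simplified):

static int ksm_disable_merge_any(struct mm_struct *mm)
{
	struct vm_area_struct *vma;
	VMA_ITERATOR(vmi, mm, 0);
	int err;

	for_each_vma(vmi, vma) {
		if (!(vma->vm_flags & VM_MERGEABLE))
			continue;
		err = unmerge_ksm_pages(vma, vma->vm_start, vma->vm_end);
		if (err)
			return err;
		vm_flags_clear(vma, VM_MERGEABLE);
	}

	clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
	return 0;
}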

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-04-03 16:34             ` Stefan Roesch
@ 2023-04-03 17:04               ` David Hildenbrand
  2023-04-06 16:59               ` Stefan Roesch
  1 sibling, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2023-04-03 17:04 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: Johannes Weiner, Andrew Morton, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins

On 03.04.23 18:34, Stefan Roesch wrote:
>>
>> In contrast to e.g.:
>>
>> 1) THP resulted in many zeropages we end up deduplicating again. The THP
>>     placement was unfortunate.
>>
>> 2) Unoptimized memory allocators that leave many identical pages mapped
>>     after freeing up memory (e.g., zeroed pages, pages all filled with
>>     poison values) instead of e.g., using MADV_DONTNEED to free up that
>>     memory.
>>
>>
> 
> I repeated an experiment with and without KSM. In terms of THP there is
> no huge difference between the two. On a 64GB main memory machine I see
> between 100 - 400MB in AnonHugePages.
> 
>>> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
>>> Instagram workload. The workload consists of 36 processes plus a few
>>> sidecar processes.
>>
>> Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in that
>> environment?
>>
> 
> It's set to the standard value of 256.
> 
> In the meantime I have run experiments with different settings for
> pages_to_scan. With the default value of 100, we only get a relatively
> small benefit of KSM. If I increase the value to for instance to 2000 or
> 3000 the savings are substantial. (The workload is memory bound, not
> CPU bound).

Interesting.

> 
> Here are some stats for setting pages_to_scan to 3000:
> 
> full_scans: 560
> general_profit: 20620539008
> max_page_sharing: 256
> merge_across_nodes: 1
> pages_shared: 125446
> pages_sharing: 5259506
> pages_to_scan: 3000
> pages_unshared: 1897537
> pages_volatile: 12389223
> run: 1
> sleep_millisecs: 20
> stable_node_chains: 176
> stable_node_chains_prune_millisecs: 2000
> stable_node_dups: 2604
> use_zero_pages: 0
> zero_pages_sharing: 0
> 
> 
>> What would be interesting is pages_shared after max_page_sharing was set to a
>> very high number such that pages_shared does not include duplicates. Then
>> pages_shared actually expresses how many different pages we deduplicate. No need
>> to run without THP in that case.
>>
> 
> That's on my list for the next set of experiments.

Splendid.

>> Similarly, enabling "use_zero_pages" could highlight if your workload ends up
>> deduplicating a lot of zeropages. But maxing out max_page_sharing would be
>> sufficient to understand what's happening.
>>
>>
> 
> I already ran experiments with use_zero_pages, but they didn't make a
> difference. I'll repeat the experiment with a higher pages_to_scan
> value.

Okay, so it's most certainly not the zeropage. Thanks for that 
information and for running the experiments!

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-03 11:03     ` David Hildenbrand
@ 2023-04-04 16:32       ` Stefan Roesch
  2023-04-04 16:43       ` Stefan Roesch
  2023-04-05  6:51       ` Christian Borntraeger
  2 siblings, 0 replies; 33+ messages in thread
From: Stefan Roesch @ 2023-04-04 16:32 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya, Janosch Frank,
	Christian Borntraeger


David Hildenbrand <david@redhat.com> writes:

> On 03.04.23 12:37, David Hildenbrand wrote:
>> On 10.03.23 19:28, Stefan Roesch wrote:
>>> Patch series "mm: process/cgroup ksm support", v3.
>>>
>>> So far KSM can only be enabled by calling madvise for memory regions.  To
>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>> enabled / disabled at the process / cgroup level.
>>>
>>> Use case 1:
>>>
>>>     The madvise call is not available in the programming language.  An
>>>     example for this are programs with forked workloads using a garbage
>>>     collected language without pointers.  In such a language madvise cannot
>>>     be made available.
>>>
>>>     In addition the addresses of objects get moved around as they are
>>>     garbage collected.  KSM sharing needs to be enabled "from the outside"
>>>     for these types of workloads.
>> I guess the interpreter could enable it (like a memory allocator could
>> enable it for the whole heap). But I get that it's much easier to enable
>> this per-process, and eventually only when a lot of the same processes
>> are running in that particular environment.
>>
>>>
>>> Use case 2:
>>>
>>>     The same interpreter can also be used for workloads where KSM brings
>>>     no benefit or even has overhead.  We'd like to be able to enable KSM on
>>>     a workload by workload basis.
>> Agreed. A per-process control is also helpful to identify workloads
>> where KSM might be beneficial (and to which degree).
>>
>>>
>>> Use case 3:
>>>
>>>     With the madvise call sharing opportunities are only enabled for the
>>>     current process: it is a workload-local decision.  A considerable number
>>>     of sharing opportunities may exist across multiple workloads or jobs.
>>>     Only a higher level entity like a job scheduler or container can know
>>>     for certain if it's running one or more instances of a job.  That job
>>>     scheduler however doesn't have the necessary internal workload knowledge
>>>     to make targeted madvise calls.
>>>
>>> Security concerns:
>>>
>>>     In previous discussions security concerns have been brought up.  The
>>>     problem is that an individual workload does not have the knowledge about
>>>     what else is running on a machine.  Therefore it has to be very
>>>     conservative in what memory areas can be shared or not.  However, if the
>>>     system is dedicated to running multiple jobs within the same security
>>>     domain, it's the job scheduler that has the knowledge that sharing can be
>>>     safely enabled and is even desirable.
>>>
>>> Performance:
>>>
>>>     Experiments with using UKSM have shown a capacity increase of around
>>>     20%.
>>>
>> As raised, it would be great to include more details about the workload
>> where this particularly helps (e.g., a lot of Django processes operating
>> in the same domain).
>>
>>>
>>> 1. New options for prctl system command
>>>
>>>      This patch series adds two new options to the prctl system call.
>>>      The first one allows to enable KSM at the process level and the second
>>>      one to query the setting.
>>>
>>>      The setting will be inherited by child processes.
>>>
>>>      With the above setting, KSM can be enabled for the seed process of a
>>>      cgroup and all processes in the cgroup will inherit the setting.
>>>
>>> 2. Changes to KSM processing
>>>
>>>      When KSM is enabled at the process level, the KSM code will iterate
>>>      over all the VMA's and enable KSM for the eligible VMA's.
>>>
>>>      When forking a process that has KSM enabled, the setting will be
>>>      inherited by the new child process.
>>>
>>>      In addition when KSM is disabled for a process, KSM will be disabled
>>>      for the VMA's where KSM has been enabled.
>> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new
>> prctl is enabled for a process?
>>
>>>
>>> 3. Add general_profit metric
>>>
>>>      The general_profit metric of KSM is specified in the documentation,
>>>      but not calculated.  This adds the general profit metric to
>>>      /sys/kernel/debug/mm/ksm.
>>>
>>> 4. Add more metrics to ksm_stat
>>>
>>>      This adds the process profit and ksm type metric to
>>>      /proc/<pid>/ksm_stat.
>>>
>>> 5. Add more tests to ksm_tests
>>>
>>>      This adds an option to specify the merge type to the ksm_tests.
>>>      This allows to test madvise and prctl KSM.  It also adds a new option
>>>      to query if prctl KSM has been enabled.  It adds a fork test to verify
>>>      that the KSM process setting is inherited by client processes.
>>>
>>> An update to the prctl(2) manpage has been proposed at [1].
>>>
>>> This patch (of 3):
>>>
>>> This adds a new prctl API to enable and disable KSM on a per process
>>> basis instead of only at the VMA basis (with madvise).
>>>
>>> 1) Introduce new MMF_VM_MERGE_ANY flag
>>>
>>>      This introduces the new flag MMF_VM_MERGE_ANY flag.  When this flag
>>>      is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>>>      process.
>>>
>>> 2) add flag to __ksm_enter
>>>
>>>      This change adds the flag parameter to __ksm_enter.  This allows to
>>>      distinguish if ksm was called by prctl or madvise.
>>>
>>> 3) add flag to __ksm_exit call
>>>
>>>      This adds the flag parameter to the __ksm_exit() call.  This allows
>>>      to distinguish if this call is for a prctl or madvise invocation.
>>>
>>> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>>>
>>>      If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>>>      over all the vmas and enable ksm if possible.  For the vmas that can be
>>>      ksm enabled this is only done once.
>>>
>>> 5) support disabling of ksm for a process
>>>
>>>      This adds the ability to disable ksm for a process if ksm has been
>>>      enabled for the process.
>>>
>>> 6) add new prctl option to get and set ksm for a process
>>>
>>>      This adds two new options to the prctl system call
>>>      - enable ksm for all vmas of a process (if the vmas support it).
>>>      - query if ksm has been enabled for a process.
>> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way,
>> making it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE
>> infrastructure? Especially:
>> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable,
>>      compatible VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
>>      already set.
>> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
>>      VM_MERGEABLE?
>> Then you can avoid all runtime checks for compatible VMAs and only look
>> at the VM_MERGEABLE flag. In fact, VM_MERGEABLE will then be fully
>> expressive for all VMAs. You don't need vma_ksm_mergeable() then.
>> Another thing to consider is interaction with arch/s390/mm/gmap.c:
>> s390x/kvm does not support KSM and it has to disable it for all VMAs. We
>> have to find a way to fence the prctl (for example, fail setting the
>> prctl after gmap_mark_unmergeable() ran, and make
>> gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully
>> in some other way).

gmap_mark_unmergeable() seems to have a problem today. We can execute
gmap_mark_unmergeable() and mark the vma's as unmergeable, but shortly
after that the process can run madvise on it again and make it
mergeable. Am I missing something here?

Once prctl is run, we can check for the MMF_VM_MERGE_ANY flag in
gmap_mark_unmergeable(). In case it is set, we can return an error. The
error code path looks like it can handle that case.
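
Roughly like this (a sketch against the naming in this series; the
exact errno is an assumption, and the existing unmerge loop is elided):

int gmap_mark_unmergeable(void)
{
	struct mm_struct *mm = current->mm;

	/* Fail if KSM was force-enabled for this process via prctl. */
	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
		return -EINVAL;

	/* ... existing MADV_UNMERGEABLE loop over all VMAs ... */
}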

For the opposite case: gmap_mark_unmergeable() has already been run, we
would need some kind of flag or other means to be able to detect it.
Any recommendations?

>
> Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" is doing
> what it's supposed to do. I don't think this effectively prevents madvise()
> from re-enabling merging on that VMA right now.
>
> @Christian, Janosch, am I missing something?

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-03 11:03     ` David Hildenbrand
  2023-04-04 16:32       ` Stefan Roesch
@ 2023-04-04 16:43       ` Stefan Roesch
  2023-04-05  6:51       ` Christian Borntraeger
  2 siblings, 0 replies; 33+ messages in thread
From: Stefan Roesch @ 2023-04-04 16:43 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya, Janosch Frank,
	Christian Borntraeger


David Hildenbrand <david@redhat.com> writes:

> On 03.04.23 12:37, David Hildenbrand wrote:
>> On 10.03.23 19:28, Stefan Roesch wrote:
>>> Patch series "mm: process/cgroup ksm support", v3.
>>>
>>> So far KSM can only be enabled by calling madvise for memory regions.  To
>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>> enabled / disabled at the process / cgroup level.
>>>
>>> Use case 1:
>>>
>>>     The madvise call is not available in the programming language.  An
>>>     example of this is programs with forked workloads using a garbage
>>>     collected language without pointers.  In such a language madvise cannot
>>>     be made available.
>>>
>>>     In addition the addresses of objects get moved around as they are
>>>     garbage collected.  KSM sharing needs to be enabled "from the outside"
>>>     for these types of workloads.
>> I guess the interpreter could enable it (like a memory allocator could
>> enable it for the whole heap). But I get that it's much easier to enable
>> this per-process, and eventually only when a lot of the same processes
>> are running in that particular environment.
>>
>>>
>>> Use case 2:
>>>
>>>     The same interpreter can also be used for workloads where KSM brings
>>>     no benefit or even has overhead.  We'd like to be able to enable KSM on
>>>     a workload by workload basis.
>> Agreed. A per-process control is also helpful to identify workloads
>> where KSM might be beneficial (and to which degree).
>>
>>>
>>> Use case 3:
>>>
>>>     With the madvise call sharing opportunities are only enabled for the
>>>     current process: it is a workload-local decision.  A considerable number
>>>     of sharing opportunities may exist across multiple workloads or jobs.
>>>     Only a higher level entity like a job scheduler or container can know
>>>     for certain if it's running one or more instances of a job.  That job
>>>     scheduler, however, doesn't have the necessary internal workload knowledge
>>>     to make targeted madvise calls.
>>>
>>> Security concerns:
>>>
>>>     In previous discussions security concerns have been brought up.  The
>>>     problem is that an individual workload does not have the knowledge about
>>>     what else is running on a machine.  Therefore it has to be very
>>>     conservative in what memory areas can be shared or not.  However, if the
>>>     system is dedicated to running multiple jobs within the same security
>>>     domain, it's the job scheduler that has the knowledge that sharing can be
>>>     safely enabled and is even desirable.
>>>
>>> Performance:
>>>
>>>     Experiments with using UKSM have shown a capacity increase of around
>>>     20%.
>>>
>> As raised, it would be great to include more details about the workload
>> where this particularly helps (e.g., a lot of Django processes operating
>> in the same domain).
>>
>>>
>>> 1. New options for prctl system command
>>>
>>>      This patch series adds two new options to the prctl system call.
>>>      The first one enables KSM at the process level and the second
>>>      one queries the setting.
>>>
>>>      The setting will be inherited by child processes.
>>>
>>>      With the above setting, KSM can be enabled for the seed process of a
>>>      cgroup and all processes in the cgroup will inherit the setting.
>>>
>>> 2. Changes to KSM processing
>>>
>>>      When KSM is enabled at the process level, the KSM code will iterate
>>>      over all the VMA's and enable KSM for the eligible VMA's.
>>>
>>>      When forking a process that has KSM enabled, the setting will be
>>>      inherited by the new child process.
>>>
>>>      In addition when KSM is disabled for a process, KSM will be disabled
>>>      for the VMA's where KSM has been enabled.
>> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new
>> prctl is enabled for a process?
>>
>>>
>>> 3. Add general_profit metric
>>>
>>>      The general_profit metric of KSM is specified in the documentation,
>>>      but not calculated.  This adds the general profit metric to
>>>      /sys/kernel/debug/mm/ksm.
>>>
>>> 4. Add more metrics to ksm_stat
>>>
>>>      This adds the process profit and ksm type metric to
>>>      /proc/<pid>/ksm_stat.
>>>
>>> 5. Add more tests to ksm_tests
>>>
>>>      This adds an option to specify the merge type to the ksm_tests.
>>>      This allows testing madvise and prctl KSM.  It also adds a new option
>>>      to query if prctl KSM has been enabled.  It adds a fork test to verify
>>>      that the KSM process setting is inherited by child processes.
>>>
>>> An update to the prctl(2) manpage has been proposed at [1].
>>>
>>> This patch (of 3):
>>>
>>> This adds a new prctl API to enable and disable KSM on a per-process
>>> basis instead of only at the VMA level (with madvise).
>>>
>>> 1) Introduce new MMF_VM_MERGE_ANY flag
>>>
>>>      This introduces the new MMF_VM_MERGE_ANY flag.  When this flag
>>>      is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>>>      process.
>>>
>>> 2) add flag to __ksm_enter
>>>
>>>      This change adds the flag parameter to __ksm_enter.  This makes it
>>>      possible to distinguish whether ksm was enabled by prctl or madvise.
>>>
>>> 3) add flag to __ksm_exit call
>>>
>>>      This adds the flag parameter to the __ksm_exit() call.  This makes it
>>>      possible to distinguish whether the call is for a prctl or madvise
>>>      invocation.
>>>
>>> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>>>
>>>      If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>>>      over all the vmas and enable ksm if possible.  For the vmas that can be
>>>      ksm enabled this is only done once.
>>>
>>> 5) support disabling of ksm for a process
>>>
>>>      This adds the ability to disable ksm for a process if ksm has been
>>>      enabled for the process.
>>>
>>> 6) add new prctl option to get and set ksm for a process
>>>
>>>      This adds two new options to the prctl system call:
>>>      - enable ksm for all vmas of a process (if the vmas support it).
>>>      - query if ksm has been enabled for a process.
>> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way,
>> to instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE
>> infrastructure. Especially:
>> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable
>>      (compatible) VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
>>      already set.
>> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
>>      VM_MERGEABLE?
>> Then you can avoid all runtime checks for compatible VMAs and only look
>> at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE flag will then be
>> completely expressive for all VMAs. You don't need vma_ksm_mergeable() then.
>> Another thing to consider is interaction with arch/s390/mm/gmap.c:
>> s390x/kvm does not support KSM and it has to disable it for all VMAs. We
>> have to find a way to fence the prctl (for example, fail setting the
>> prctl after gmap_mark_unmergeable() ran, and make
>> gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully
>> in some other way).

gmap_mark_unmergeable() seems to have a problem today. We can execute
gmap_mark_unmergeable() and mark the vma's as unmergeable, but shortly
after that the process can run madvise on it again and make it
mergeable. Am I missing something here?

Once prctl is run, we can check for the MMF_VM_MERGE_ANY flag in
gmap_mark_unmergeable(). In case it is set, we can return an error. The
error code path looks like it can handle that case.

For the opposite case: gmap_mark_unmergeable() has already been run, we
would need some kind of flag or other means to be able to detect it.
Any recommendations?

>
>
> Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" is doing
> what it's supposed to do. I don't think this effectively prevents madvise()
> from re-enabling merging on that VMA right now.
>
> @Christian, Janosch, am I missing something?

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-03 11:03     ` David Hildenbrand
  2023-04-04 16:32       ` Stefan Roesch
  2023-04-04 16:43       ` Stefan Roesch
@ 2023-04-05  6:51       ` Christian Borntraeger
  2023-04-05 16:04         ` David Hildenbrand
  2 siblings, 1 reply; 33+ messages in thread
From: Christian Borntraeger @ 2023-04-05  6:51 UTC (permalink / raw)
  To: David Hildenbrand, Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	Bagas Sanjaya, Janosch Frank

Am 03.04.23 um 13:03 schrieb David Hildenbrand:
> On 03.04.23 12:37, David Hildenbrand wrote:
>> On 10.03.23 19:28, Stefan Roesch wrote:
>>> Patch series "mm: process/cgroup ksm support", v3.
>>>
>>> So far KSM can only be enabled by calling madvise for memory regions.  To
>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>> enabled / disabled at the process / cgroup level.
>>>
>>> Use case 1:
>>>
>>>     The madvise call is not available in the programming language.  An
>>>     example of this is programs with forked workloads using a garbage
>>>     collected language without pointers.  In such a language madvise cannot
>>>     be made available.
>>>
>>>     In addition the addresses of objects get moved around as they are
>>>     garbage collected.  KSM sharing needs to be enabled "from the outside"
>>>     for these types of workloads.
>>
>> I guess the interpreter could enable it (like a memory allocator could
>> enable it for the whole heap). But I get that it's much easier to enable
>> this per-process, and eventually only when a lot of the same processes
>> are running in that particular environment.
>>
>>>
>>> Use case 2:
>>>
>>>     The same interpreter can also be used for workloads where KSM brings
>>>     no benefit or even has overhead.  We'd like to be able to enable KSM on
>>>     a workload by workload basis.
>>
>> Agreed. A per-process control is also helpful to identify workloads
>> where KSM might be beneficial (and to which degree).
>>
>>>
>>> Use case 3:
>>>
>>>     With the madvise call sharing opportunities are only enabled for the
>>>     current process: it is a workload-local decision.  A considerable number
>>>     of sharing opportunities may exist across multiple workloads or jobs.
>>>     Only a higher level entity like a job scheduler or container can know
>>>     for certain if it's running one or more instances of a job.  That job
>>>     scheduler, however, doesn't have the necessary internal workload knowledge
>>>     to make targeted madvise calls.
>>>
>>> Security concerns:
>>>
>>>     In previous discussions security concerns have been brought up.  The
>>>     problem is that an individual workload does not have the knowledge about
>>>     what else is running on a machine.  Therefore it has to be very
>>>     conservative in what memory areas can be shared or not.  However, if the
>>>     system is dedicated to running multiple jobs within the same security
>>>     domain, it's the job scheduler that has the knowledge that sharing can be
>>>     safely enabled and is even desirable.
>>>
>>> Performance:
>>>
>>>     Experiments with using UKSM have shown a capacity increase of around
>>>     20%.
>>>
>>
>> As raised, it would be great to include more details about the workload
>> where this particularly helps (e.g., a lot of Django processes operating
>> in the same domain).
>>
>>>
>>> 1. New options for prctl system command
>>>
>>>      This patch series adds two new options to the prctl system call.
>>>      The first one enables KSM at the process level and the second
>>>      one queries the setting.
>>>
>>>      The setting will be inherited by child processes.
>>>
>>>      With the above setting, KSM can be enabled for the seed process of a
>>>      cgroup and all processes in the cgroup will inherit the setting.
>>>
>>> 2. Changes to KSM processing
>>>
>>>      When KSM is enabled at the process level, the KSM code will iterate
>>>      over all the VMA's and enable KSM for the eligible VMA's.
>>>
>>>      When forking a process that has KSM enabled, the setting will be
>>>      inherited by the new child process.
>>>
>>>      In addition when KSM is disabled for a process, KSM will be disabled
>>>      for the VMA's where KSM has been enabled.
>>
>> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new
>> prctl is enabled for a process?
>>
>>>
>>> 3. Add general_profit metric
>>>
>>>      The general_profit metric of KSM is specified in the documentation,
>>>      but not calculated.  This adds the general profit metric to
>>>      /sys/kernel/debug/mm/ksm.
>>>
>>> 4. Add more metrics to ksm_stat
>>>
>>>      This adds the process profit and ksm type metric to
>>>      /proc/<pid>/ksm_stat.
>>>
>>> 5. Add more tests to ksm_tests
>>>
>>>      This adds an option to specify the merge type to the ksm_tests.
>>>      This allows testing madvise and prctl KSM.  It also adds a new option
>>>      to query if prctl KSM has been enabled.  It adds a fork test to verify
>>>      that the KSM process setting is inherited by child processes.
>>>
>>> An update to the prctl(2) manpage has been proposed at [1].
>>>
>>> This patch (of 3):
>>>
>>> This adds a new prctl API to enable and disable KSM on a per-process
>>> basis instead of only at the VMA level (with madvise).
>>>
>>> 1) Introduce new MMF_VM_MERGE_ANY flag
>>>
>>>      This introduces the new MMF_VM_MERGE_ANY flag.  When this flag
>>>      is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>>>      process.
>>>
>>> 2) add flag to __ksm_enter
>>>
>>>      This change adds the flag parameter to __ksm_enter.  This makes it
>>>      possible to distinguish whether ksm was enabled by prctl or madvise.
>>>
>>> 3) add flag to __ksm_exit call
>>>
>>>      This adds the flag parameter to the __ksm_exit() call.  This makes it
>>>      possible to distinguish whether the call is for a prctl or madvise
>>>      invocation.
>>>
>>> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>>>
>>>      If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>>>      over all the vmas and enable ksm if possible.  For the vmas that can be
>>>      ksm enabled this is only done once.
>>>
>>> 5) support disabling of ksm for a process
>>>
>>>      This adds the ability to disable ksm for a process if ksm has been
>>>      enabled for the process.
>>>
>>> 6) add new prctl option to get and set ksm for a process
>>>
>>>      This adds two new options to the prctl system call:
>>>      - enable ksm for all vmas of a process (if the vmas support it).
>>>      - query if ksm has been enabled for a process.
>>
>>
>> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way,
>> to instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE
>> infrastructure. Especially:
>>
>> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable
>>      (compatible) VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
>>      already set.
>>
>> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
>>      VM_MERGEABLE?
>>
>> Then you can avoid all runtime checks for compatible VMAs and only look
>> at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE flag will then be
>> completely expressive for all VMAs. You don't need vma_ksm_mergeable() then.
>>
>>
>> Another thing to consider is interaction with arch/s390/mm/gmap.c:
>> s390x/kvm does not support KSM and it has to disable it for all VMAs. We

Normally we do support KSM on s390. This is a special case for guests using
storage keys. Those are attributes of the physical page and might differ even
if the content of the page is the same.
Newer Linux no longer uses them (unless a debug option is set during build),
so we enable the guest storage keys lazily and break KSM pages in that
process. Ideally we would continue this semantic (e.g. even after a prctl,
if the guest enables storage keys, disable ksm for this VM).

>> have to find a way to fence the prctl (for example, fail setting the
>> prctl after gmap_mark_unmergeable() ran, and make
>> gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully
>> in some other way).
> 
> 
> Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" is doing what it's supposed to do. I don't think this effectively prevents madvise() from re-enabling merging on that VMA right now.
> 
> @Christian, Janosch, am I missing something?

Yes, if QEMU did a madvise later on instead of just at the start, it would
result in guest storage keys being messed up on KSM merges. One could argue
that this is a bug in the hypervisor then (QEMU) but yes, we should try
to make this more reliable in the kernel.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 1/3] mm: add new api to enable ksm per process
  2023-04-05  6:51       ` Christian Borntraeger
@ 2023-04-05 16:04         ` David Hildenbrand
  0 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2023-04-05 16:04 UTC (permalink / raw)
  To: Christian Borntraeger, Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	Bagas Sanjaya, Janosch Frank

On 05.04.23 08:51, Christian Borntraeger wrote:
> Am 03.04.23 um 13:03 schrieb David Hildenbrand:
>> On 03.04.23 12:37, David Hildenbrand wrote:
>>> On 10.03.23 19:28, Stefan Roesch wrote:
>>>> Patch series "mm: process/cgroup ksm support", v3.
>>>>
>>>> So far KSM can only be enabled by calling madvise for memory regions.  To
>>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>>> enabled / disabled at the process / cgroup level.
>>>>
>>>> Use case 1:
>>>>
>>>>      The madvise call is not available in the programming language.  An
>>>>      example of this is programs with forked workloads using a garbage
>>>>      collected language without pointers.  In such a language madvise cannot
>>>>      be made available.
>>>>
>>>>      In addition the addresses of objects get moved around as they are
>>>>      garbage collected.  KSM sharing needs to be enabled "from the outside"
>>>>      for these types of workloads.
>>>
>>> I guess the interpreter could enable it (like a memory allocator could
>>> enable it for the whole heap). But I get that it's much easier to enable
>>> this per-process, and eventually only when a lot of the same processes
>>> are running in that particular environment.
>>>
>>>>
>>>> Use case 2:
>>>>
>>>>      The same interpreter can also be used for workloads where KSM brings
>>>>      no benefit or even has overhead.  We'd like to be able to enable KSM on
>>>>      a workload by workload basis.
>>>
>>> Agreed. A per-process control is also helpful to identify workloads
>>> where KSM might be beneficial (and to which degree).
>>>
>>>>
>>>> Use case 3:
>>>>
>>>>      With the madvise call sharing opportunities are only enabled for the
>>>>      current process: it is a workload-local decision.  A considerable number
>>>>      of sharing opportunities may exist across multiple workloads or jobs.
>>>>      Only a higher level entity like a job scheduler or container can know
>>>>      for certain if it's running one or more instances of a job.  That job
>>>>      scheduler, however, doesn't have the necessary internal workload knowledge
>>>>      to make targeted madvise calls.
>>>>
>>>> Security concerns:
>>>>
>>>>      In previous discussions security concerns have been brought up.  The
>>>>      problem is that an individual workload does not have the knowledge about
>>>>      what else is running on a machine.  Therefore it has to be very
>>>>      conservative in what memory areas can be shared or not.  However, if the
>>>>      system is dedicated to running multiple jobs within the same security
>>>>      domain, it's the job scheduler that has the knowledge that sharing can be
>>>>      safely enabled and is even desirable.
>>>>
>>>> Performance:
>>>>
>>>>      Experiments with using UKSM have shown a capacity increase of around
>>>>      20%.
>>>>
>>>
>>> As raised, it would be great to include more details about the workload
>>> where this particularly helps (e.g., a lot of Django processes operating
>>> in the same domain).
>>>
>>>>
>>>> 1. New options for prctl system command
>>>>
>>>>       This patch series adds two new options to the prctl system call.
>>>>       The first one enables KSM at the process level and the second
>>>>       one queries the setting.
>>>>
>>>>       The setting will be inherited by child processes.
>>>>
>>>>       With the above setting, KSM can be enabled for the seed process of a
>>>>       cgroup and all processes in the cgroup will inherit the setting.
>>>>
>>>> 2. Changes to KSM processing
>>>>
>>>>       When KSM is enabled at the process level, the KSM code will iterate
>>>>       over all the VMA's and enable KSM for the eligible VMA's.
>>>>
>>>>       When forking a process that has KSM enabled, the setting will be
>>>>       inherited by the new child process.
>>>>
>>>>       In addition when KSM is disabled for a process, KSM will be disabled
>>>>       for the VMA's where KSM has been enabled.
>>>
>>> Do we want to make MADV_MERGEABLE/MADV_UNMERGEABLE fail while the new
>>> prctl is enabled for a process?
>>>
>>>>
>>>> 3. Add general_profit metric
>>>>
>>>>       The general_profit metric of KSM is specified in the documentation,
>>>>       but not calculated.  This adds the general profit metric to
>>>>       /sys/kernel/debug/mm/ksm.
>>>>
>>>> 4. Add more metrics to ksm_stat
>>>>
>>>>       This adds the process profit and ksm type metric to
>>>>       /proc/<pid>/ksm_stat.
>>>>
>>>> 5. Add more tests to ksm_tests
>>>>
>>>>       This adds an option to specify the merge type to the ksm_tests.
>>>>       This allows testing madvise and prctl KSM.  It also adds a new option
>>>>       to query if prctl KSM has been enabled.  It adds a fork test to verify
>>>>       that the KSM process setting is inherited by child processes.
>>>>
>>>> An update to the prctl(2) manpage has been proposed at [1].
>>>>
>>>> This patch (of 3):
>>>>
>>>> This adds a new prctl API to enable and disable KSM on a per-process
>>>> basis instead of only at the VMA level (with madvise).
>>>>
>>>> 1) Introduce new MMF_VM_MERGE_ANY flag
>>>>
>>>>       This introduces the new MMF_VM_MERGE_ANY flag.  When this flag
>>>>       is set, kernel samepage merging (ksm) gets enabled for all vma's of a
>>>>       process.
>>>>
>>>> 2) add flag to __ksm_enter
>>>>
>>>>       This change adds the flag parameter to __ksm_enter.  This makes it
>>>>       possible to distinguish whether ksm was enabled by prctl or madvise.
>>>>
>>>> 3) add flag to __ksm_exit call
>>>>
>>>>       This adds the flag parameter to the __ksm_exit() call.  This makes it
>>>>       possible to distinguish whether the call is for a prctl or madvise
>>>>       invocation.
>>>>
>>>> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>>>>
>>>>       If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>>>>       over all the vmas and enable ksm if possible.  For the vmas that can be
>>>>       ksm enabled this is only done once.
>>>>
>>>> 5) support disabling of ksm for a process
>>>>
>>>>       This adds the ability to disable ksm for a process if ksm has been
>>>>       enabled for the process.
>>>>
>>>> 6) add new prctl option to get and set ksm for a process
>>>>
>>>>       This adds two new options to the prctl system call:
>>>>       - enable ksm for all vmas of a process (if the vmas support it).
>>>>       - query if ksm has been enabled for a process.
>>>
>>>
>>> Did you consider, instead of handling MMF_VM_MERGE_ANY in a special way,
>>> to instead make it reuse the existing MMF_VM_MERGEABLE/VM_MERGEABLE
>>> infrastructure. Especially:
>>>
>>> 1) During prctl(MMF_VM_MERGE_ANY), set VM_MERGEABLE on all applicable
>>>       (compatible) VMAs. Further, set MMF_VM_MERGEABLE and enter KSM if not
>>>       already set.
>>>
>>> 2) When creating a new, compatible VMA and MMF_VM_MERGE_ANY is set, set
>>>       VM_MERGEABLE?
>>>
>>> Then you can avoid all runtime checks for compatible VMAs and only look
>>> at the VM_MERGEABLE flag. In fact, the VM_MERGEABLE flag will then be
>>> completely expressive for all VMAs. You don't need vma_ksm_mergeable() then.
>>>
>>>
>>> Another thing to consider is interaction with arch/s390/mm/gmap.c:
>>> s390x/kvm does not support KSM and it has to disable it for all VMAs. We
> 
> Normally we do support KSM on s390. This is a special case for guests using
> storage keys. Those are attributes of the physical page and might differ even
> if the content of the page is the same.
> Newer Linux no longer uses them (unless a debug option is set during build),
> so we enable the guest storage keys lazily and break KSM pages in that
> process. Ideally we would continue this semantic (e.g. even after a prctl,
> if the guest enables storage keys, disable ksm for this VM).

IIRC, KSM also gets disabled when switching to protected VMs. I recall 
that we really wanted to stop KSM scanning pages that are possibly 
protected. (don't remember if one could harm the system enabling it 
before/after the switch)

> 
>>> have to find a way to fence the prctl (for example, fail setting the
>>> prctl after gmap_mark_unmergeable() ran, and make
>>> gmap_mark_unmergeable() fail if the prctl ran -- or handle it gracefully
>>> in some other way).
>>
>>
>> Staring at that code, I wonder if the "mm->def_flags &= ~VM_MERGEABLE" is doing what it's supposed to do. I don't think this effectively prevents madvise() from re-enabling merging on that VMA right now.
>>
>> @Christian, Janosch, am I missing something?
> 
> Yes, if QEMU did a madvise later on instead of just at the start, it would
> result in guest storage keys being messed up on KSM merges. One could argue
> that this is a bug in the hypervisor then (QEMU) but yes, we should try
> to make this more reliable in the kernel.

It looks like the "mm->def_flags &= ~VM_MERGEABLE" wanted to achieve 
that, but failed. At least it looks like completely unnecessary code if 
I am not wrong.

Maybe inspired by similar code in thp_split_mm(), which enforces
VM_NOHUGEPAGE.
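
For comparison, the two def_flags uses side by side (abridged from
arch/s390/mm/gmap.c; the annotations are my reading of why one is
effective and the other is not):

/* thp_split_mm(): works, because mm->def_flags is OR-ed into the
 * flags of every VMA created afterwards, so a set bit sticks. */
mm->def_flags |= VM_NOHUGEPAGE;

/* gmap_mark_unmergeable(): clearing a bit in def_flags cannot remove
 * it from existing VMAs, and nothing ever sets VM_MERGEABLE in
 * def_flags in the first place -- so this does not stop a later
 * MADV_MERGEABLE. */
mm->def_flags &= ~VM_MERGEABLE;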

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 2/3] mm: add new KSM process and sysfs knobs
       [not found] ` <20230310182851.2579138-3-shr@devkernel.io>
@ 2023-04-05 17:04   ` David Hildenbrand
  2023-04-05 21:20     ` Stefan Roesch
  0 siblings, 1 reply; 33+ messages in thread
From: David Hildenbrand @ 2023-04-05 17:04 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team
  Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes,
	Bagas Sanjaya

On 10.03.23 19:28, Stefan Roesch wrote:
> This adds the general_profit KSM sysfs knob and the process profit metric
> and process merge type knobs to ksm_stat.
> 
> 1) split off pages_volatile function
> 
>     This splits off the pages_volatile function.  The next patch will
>     use this function.
> 
> 2) expose general_profit metric
> 
>     The documentation mentions a general profit metric; however, this
>     metric is not calculated.  In addition the formula depends on the size
>     of internal structures, which makes it more difficult for an
>     administrator to make the calculation.  Adding the metric for a better
>     user experience.
> 
> 3) document general_profit sysfs knob
> 
> 4) calculate ksm process profit metric
> 
>     The ksm documentation mentions the process profit metric and how to
>     calculate it.  This adds the calculation of the metric.
> 
> 5) add ksm_merge_type() function
> 
>     This adds the ksm_merge_type function.  The function returns the
>     merge type for the process.  For madvise it returns "madvise", for
>     prctl it returns "process" and otherwise it returns "none".
> 
> 6) mm: expose ksm process profit metric and merge type in ksm_stat
> 
>     This exposes the ksm process profit metric in /proc/<pid>/ksm_stat.
>     The name of the value is ksm_merge_type.  The documentation mentions
>     the formula for the ksm process profit metric; however, it does not
>     calculate it.  In addition the formula depends on the size of internal
>     structures.  So it makes sense to expose it.
> 
> 7) document new procfs ksm knobs
> 

Often, when you have to start making a list of things that a patch does, 
it might make sense to split some of the items into separate patches 
such that you can avoid lists and just explain in list-free text how the 
pieces in the patch fit together.

I'd suggest splitting this patch into logical pieces. For example, 
separating the general profit calculation/exposure from the per-mm 
profit and the per-mm ksm type indication.

> Link: https://lkml.kernel.org/r/20230224044000.3084046-3-shr@devkernel.io
> Signed-off-by: Stefan Roesch <shr@devkernel.io>
> Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Rik van Riel <riel@surriel.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---


[...]

>   KSM_ATTR_RO(pages_volatile);
>   
> @@ -3280,6 +3305,21 @@ static ssize_t zero_pages_sharing_show(struct kobject *kobj,
>   }
>   KSM_ATTR_RO(zero_pages_sharing);
>   
> +static ssize_t general_profit_show(struct kobject *kobj,
> +				   struct kobj_attribute *attr, char *buf)
> +{
> +	long general_profit;
> +	long all_rmap_items;
> +
> +	all_rmap_items = ksm_max_page_sharing + ksm_pages_shared +
> +				ksm_pages_unshared + pages_volatile();

Are you sure you want to count a config knob (ksm_max_page_sharing) into 
that formula? I have yet to digest what this calculation implies, but it 
does feel odd.


Further, maybe just avoid pages_volatile(). Expanding the formula 
(excluding ksm_max_page_sharing for now):
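
For reference, the helper being expanded (abridged; the computation
matches the pages_volatile logic in mm/ksm.c):

static long pages_volatile(void)
{
	long pages = ksm_rmap_items - ksm_pages_shared -
		     ksm_pages_sharing - ksm_pages_unshared;

	/* Unlocked computation, may transiently be negative: conceal that. */
	return pages < 0 ? 0 : pages;
}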

all_rmap = ksm_pages_shared + ksm_pages_unshared + pages_volatile();

-> expand pages_volatile() (ignoring the < 0 case)

all_rmap = ksm_pages_shared + ksm_pages_unshared + ksm_rmap_items - 
ksm_pages_shared - ksm_pages_sharing - ksm_pages_unshared;

-> simplify

all_rmap = ksm_rmap_items - ksm_pages_sharing;

Or is the < 0 case relevant here?

> +	general_profit = ksm_pages_sharing * PAGE_SIZE -
> +				all_rmap_items * sizeof(struct ksm_rmap_item);
> +
> +	return sysfs_emit(buf, "%ld\n", general_profit);
> +}
> +KSM_ATTR_RO(general_profit);
> +
>   static ssize_t stable_node_dups_show(struct kobject *kobj,
>   				     struct kobj_attribute *attr, char *buf)
>   {
> @@ -3345,6 +3385,7 @@ static struct attribute *ksm_attrs[] = {
>   	&stable_node_dups_attr.attr,
>   	&stable_node_chains_prune_millisecs_attr.attr,
>   	&use_zero_pages_attr.attr,
> +	&general_profit_attr.attr,
>   	NULL,
>   };
>   

The calculations (profit) don't include the case when KSM places the shared
zeropage, I guess. Accounting for that per MM (and eventually globally) is in
the works. [1]


[1] 
https://lore.kernel.org/lkml/20230328153852.26c2577e4bd921c371c47a7e@linux-foundation.org/t/

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 2/3] mm: add new KSM process and sysfs knobs
  2023-04-05 17:04   ` [PATCH v4 2/3] mm: add new KSM process and sysfs knobs David Hildenbrand
@ 2023-04-05 21:20     ` Stefan Roesch
  2023-04-06 13:23       ` David Hildenbrand
  0 siblings, 1 reply; 33+ messages in thread
From: Stefan Roesch @ 2023-04-05 21:20 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya


David Hildenbrand <david@redhat.com> writes:

> On 10.03.23 19:28, Stefan Roesch wrote:
>> This adds the general_profit KSM sysfs knob and the process profit metric
>> and process merge type knobs to ksm_stat.
>> 1) split off pages_volatile function
>>     This splits off the pages_volatile function.  The next patch will
>>     use this function.
>> 2) expose general_profit metric
>>     The documentation mentions a general profit metric; however, this
>>     metric is not calculated.  In addition the formula depends on the size
>>     of internal structures, which makes it more difficult for an
>>     administrator to make the calculation.  Adding the metric for a better
>>     user experience.
>> 3) document general_profit sysfs knob
>> 4) calculate ksm process profit metric
>>     The ksm documentation mentions the process profit metric and how to
>>     calculate it.  This adds the calculation of the metric.
>> 5) add ksm_merge_type() function
>>     This adds the ksm_merge_type function.  The function returns the
>>     merge type for the process.  For madvise it returns "madvise", for
>>     prctl it returns "process" and otherwise it returns "none".
>> 6) mm: expose ksm process profit metric and merge type in ksm_stat
>>     This exposes the ksm process profit metric in /proc/<pid>/ksm_stat.
>>     The name of the value is ksm_merge_type.  The documentation mentions
>>     the formula for the ksm process profit metric; however, it does not
>>     calculate it.  In addition the formula depends on the size of internal
>>     structures.  So it makes sense to expose it.
>> 7) document new procfs ksm knobs
>>
>
> Often, when you have to start making a list of things that a patch does, it
> might make sense to split some of the items into separate patches such that you
> can avoid lists and just explain in list-free text how the pieces in the patch
> fit together.
>
> I'd suggest splitting this patch into logical pieces. For example, separating
> the general profit calculation/exposure from the per-mm profit and the per-mm
> ksm type indication.
>

Originally these were individual patches. If I recall correctly Johannes
Weiner wanted them as one patch. I can certainly split them again.

>> Link: https://lkml.kernel.org/r/20230224044000.3084046-3-shr@devkernel.io
>> Signed-off-by: Stefan Roesch <shr@devkernel.io>
>> Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Rik van Riel <riel@surriel.com>
>> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>> ---
>
>
> [...]
>
>>   KSM_ATTR_RO(pages_volatile);
>>   @@ -3280,6 +3305,21 @@ static ssize_t zero_pages_sharing_show(struct kobject
>> *kobj,
>>   }
>>   KSM_ATTR_RO(zero_pages_sharing);
>>   +static ssize_t general_profit_show(struct kobject *kobj,
>> +				   struct kobj_attribute *attr, char *buf)
>> +{
>> +	long general_profit;
>> +	long all_rmap_items;
>> +
>> +	all_rmap_items = ksm_max_page_sharing + ksm_pages_shared +
>> +				ksm_pages_unshared + pages_volatile();
>
> Are you sure you want to count a config knob (ksm_max_page_sharing) into that
>> formula? I have yet to digest what this calculation implies, but it does feel
> odd.
>

This was a mistake. I wanted ksm_pages_sharing instead of
ksm_max_page_sharing.

>
> Further, maybe just avoid pages_volatile(). Expanding the formula (excluding
> ksm_max_page_sharing for now):
>
>
> all_rmap = ksm_pages_shared + ksm_pages_unshared + pages_volatile();
>
> -> expand pages_volatile() (ignoring the < 0 case)
>
> all_rmap = ksm_pages_shared + ksm_pages_unshared + ksm_rmap_items -
> ksm_pages_shared - ksm_pages_sharing - ksm_pages_unshared;
>
> -> simplify
>
>> all_rmap = ksm_rmap_items - ksm_pages_sharing;
>
I'll simplify it.

> Or is the < 0 case relevant here?
>

A negative profit is ok.
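
The simplified version would then be (a sketch; with the intended
ksm_pages_sharing term instead of ksm_max_page_sharing, all_rmap_items
collapses to ksm_rmap_items):

static ssize_t general_profit_show(struct kobject *kobj,
				   struct kobj_attribute *attr, char *buf)
{
	long general_profit;

	general_profit = ksm_pages_sharing * PAGE_SIZE -
			 ksm_rmap_items * sizeof(struct ksm_rmap_item);

	return sysfs_emit(buf, "%ld\n", general_profit);
}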

>> +	general_profit = ksm_pages_sharing * PAGE_SIZE -
>> +				all_rmap_items * sizeof(struct ksm_rmap_item);
>> +
>> +	return sysfs_emit(buf, "%ld\n", general_profit);
>> +}
>> +KSM_ATTR_RO(general_profit);
>> +
>>   static ssize_t stable_node_dups_show(struct kobject *kobj,
>>   				     struct kobj_attribute *attr, char *buf)
>>   {
>> @@ -3345,6 +3385,7 @@ static struct attribute *ksm_attrs[] = {
>>   	&stable_node_dups_attr.attr,
>>   	&stable_node_chains_prune_millisecs_attr.attr,
>>   	&use_zero_pages_attr.attr,
>> +	&general_profit_attr.attr,
>>   	NULL,
>>   };
>>
>
> The calculations (profit) don't include the case when KSM places the shared
> zeropage, I guess. Accounting for that per MM (and eventually globally) is in
> the works. [1]
>
>
> [1]
> https://lore.kernel.org/lkml/20230328153852.26c2577e4bd921c371c47a7e@linux-foundation.org/t/

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 2/3] mm: add new KSM process and sysfs knobs
  2023-04-05 21:20     ` Stefan Roesch
@ 2023-04-06 13:23       ` David Hildenbrand
  2023-04-06 14:16         ` Johannes Weiner
  0 siblings, 1 reply; 33+ messages in thread
From: David Hildenbrand @ 2023-04-06 13:23 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: kernel-team, linux-mm, riel, mhocko, linux-kselftest, linux-doc,
	akpm, hannes, Bagas Sanjaya

>>
>> Often, when you have to start making a list of things that a patch does, it
>> might make sense to split some of the items into separate patches such that you
>> can avoid lists and just explain in list-free text how the pieces in the patch
>> fit together.
>>
>> I'd suggest splitting this patch into logical pieces. For example, separating
>> the general profit calculation/exposure from the per-mm profit and the per-mm
>> ksm type indication.
>>
> 
> Originally these were individual patches. If I recall correctly Johannes
> Weiner wanted them as one patch. I can certainly split them again.

That's why I remember that v1 contained more patches :)

Again, just my opinion on patches that require a description in the form of 
a list ...

> 
>>> Link: https://lkml.kernel.org/r/20230224044000.3084046-3-shr@devkernel.io
>>> Signed-off-by: Stefan Roesch <shr@devkernel.io>
>>> Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
>>> Cc: David Hildenbrand <david@redhat.com>
>>> Cc: Johannes Weiner <hannes@cmpxchg.org>
>>> Cc: Michal Hocko <mhocko@suse.com>
>>> Cc: Rik van Riel <riel@surriel.com>
>>> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>>> ---
>>
>>
>> [...]
>>
>>>    KSM_ATTR_RO(pages_volatile);
>>>    @@ -3280,6 +3305,21 @@ static ssize_t zero_pages_sharing_show(struct kobject
>>> *kobj,
>>>    }
>>>    KSM_ATTR_RO(zero_pages_sharing);
>>>    +static ssize_t general_profit_show(struct kobject *kobj,
>>> +				   struct kobj_attribute *attr, char *buf)
>>> +{
>>> +	long general_profit;
>>> +	long all_rmap_items;
>>> +
>>> +	all_rmap_items = ksm_max_page_sharing + ksm_pages_shared +
>>> +				ksm_pages_unshared + pages_volatile();
>>
>> Are you sure you want to count a config knob (ksm_max_page_sharing) into that
>> formula? I have yet to digest what this calculation implies, but it does feel
>> odd.
>>
> 
> This was a mistake. I wanted ksm_pages_sharing instead of
> ksm_max_page_sharing.
> 
>>
>> Further, maybe just avoid pages_volatile(). Expanding the formula (excluding
>> ksm_max_page_sharing for now):
>>
>>
>> all_rmap = ksm_pages_shared + ksm_pages_unshared + pages_volatile();
>>
>> -> expand pages_volatile() (ignoring the < 0 case)
>>
>> all_rmap = ksm_pages_shared + ksm_pages_unshared + ksm_rmap_items -
>> ksm_pages_shared - ksm_pages_sharing - ksm_pages_unshared;
>>
>> -> simplify
>>
>> all_rmap = ksm_rmap_items - ksm_pages_sharing;
>>
> I'll simplify it.


Cool.

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 2/3] mm: add new KSM process and sysfs knobs
  2023-04-06 13:23       ` David Hildenbrand
@ 2023-04-06 14:16         ` Johannes Weiner
  2023-04-06 14:32           ` David Hildenbrand
  0 siblings, 1 reply; 33+ messages in thread
From: Johannes Weiner @ 2023-04-06 14:16 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Bagas Sanjaya

On Thu, Apr 06, 2023 at 03:23:11PM +0200, David Hildenbrand wrote:
> > > 
> > > Often, when you have to start making a list of things that a patch does, it
> > > might make sense to split some of the items into separate patches such that you
> > > can avoid lists and just explain in list-free text how the pieces in the patch
> > > fit together.
> > > 
> > > I'd suggest splitting this patch into logical pieces. For example, separating
> > > the general profit calculation/exposure from the per-mm profit and the per-mm
> > > ksm type indication.
> > > 
> > 
> > Originally these were individual patches. If I recall correctly Johannes
> > Weiner wanted them as one patch. I can certainly split them again.
> 
> That's why I remember that v1 contained more patches :)
> 
> Again, just my opinion on patches that require a description in the form of a
> list ...

My concern was the initial splitting of patch 1. I found it easier to
review with the new prctl() being one logical change, and fully
implemented in one go. The changelog is still in the form of a list,
but it essentially enumerates diff hunks that make up the interface.

No objection to splitting out the multiple sysfs knobs, though!

The strategy I usually follow is this:

1. Split out changes based on user-visible behavior (new prctl(), new
   sysfs knob)

2. Extract changes made along the way that have stand-alone value in
   existing code (bug fixes, simplifying/documenting tricky sections,
   cleanups).

3. Split out noisy prep work such as renames and refactors that make
   the user-visible functional changes more difficult to understand.

and then order the series into sections 2, 3, and 1.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 2/3] mm: add new KSM process and sysfs knobs
  2023-04-06 14:16         ` Johannes Weiner
@ 2023-04-06 14:32           ` David Hildenbrand
  0 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2023-04-06 14:32 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
	linux-kselftest, linux-doc, akpm, Bagas Sanjaya

On 06.04.23 16:16, Johannes Weiner wrote:
> On Thu, Apr 06, 2023 at 03:23:11PM +0200, David Hildenbrand wrote:
>>>>
>>>> Often, when you have to start making a list of things that a patch does, it
>>>> might make sense to split some of the items into separate patches such that you
>>>> can avoid lists and just explain in list-free text how the pieces in the patch
>>>> fit together.
>>>>
>>>> I'd suggest splitting this patch into logical pieces. For example, separating
>>>> the general profit calculation/exposure from the per-mm profit and the per-mm
>>>> ksm type indication.
>>>>
>>>
>>> Originally these were individual patches. If I recall correctly Johannes
>>> Weiner wanted them as one patch. I can certainly split them again.
>>
>> That's why I remember that v1 contained more patches :)
>>
>> Again, just my opinion on patches that require a description in the form of a
>> list ...
> 
> My concern was the initial splitting of patch 1. I found it easier to
> review with the new prctl() being one logical change, and fully
> implemented in one go. The changelog is still in the form of a list,
> but it essentially enumerates diff hunks that make up the interface.
> 
> No objection to splitting out the multiple sysfs knobs, though!
> 
> The strategy I usually follow is this:
> 
> 1. Split out changes based on user-visible behavior (new prctl(), new
>     sysfs knob)
> 
> 2. Extract changes made along the way that have stand-alone value in
>     existing code (bug fixes, simplifying/documenting tricky sections,
>     cleanups).
> 
> 3. Split out noisy prep work such as renames and refactors that make
>     the user-visible functional changes more difficult to understand.
> 
> and then order the series into sections 2, 3, and 1.
> 

I agree. The most important part is the "logical change" part. Once it's 
down to a list of 3 items or so for a single commit, we can usually 
express it like the following (just an example that no longer applies, 
since pages_volatile() is not required) when combining items 1+2+3 from 
the list:

"
Let's expose the general KSM profit via sysfs and document that new 
toggle. [add details about that and especially why we are doing that]

As we need the number of volatile pages to calculate the general KSM 
profit, factor out existing functionality into a helper.
"

Much easier to read.

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-04-03 16:34             ` Stefan Roesch
  2023-04-03 17:04               ` David Hildenbrand
@ 2023-04-06 16:59               ` Stefan Roesch
  2023-04-06 17:10                 ` David Hildenbrand
  1 sibling, 1 reply; 33+ messages in thread
From: Stefan Roesch @ 2023-04-06 16:59 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Johannes Weiner, Andrew Morton, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins


Stefan Roesch <shr@devkernel.io> writes:

> David Hildenbrand <david@redhat.com> writes:
>
>>>>> Obviously we could spend months analysing which exact allocations are
>>>>> identical, and then more months or years reworking the architecture to
>>>>> deduplicate them by hand and in userspace. But this isn't practical,
>>>>> and KSM is specifically for cases where this isn't practical.
>>>>> Based on your request in the previous thread, we investigated whether
>>>>> the boost was coming from the unintended side effects of KSM splitting
>>>>> THPs. This wasn't the case.
>>>>> If you have other theories on how the results could be bogus, we'd be
>>>>> happy to investigate those as well. But you have to let us know what
>>>>> you're looking for.
>>>>>
>>>>
>>>> Maybe I'm bad at making such requests but
>>>>
>>>> "Stefan, can you do me a favor and investigate which pages we end up
>>>> deduplicating -- especially if it's mostly only the zeropage and if it's
>>>> still that significant when disabling THP?"
>>>>
>>>> "In any case, it would be nice to get a feeling for how much variety in
>>>> these 20% of deduplicated pages are. "
>>>>
>>>> is pretty clear to me. And shouldn't take months.
>>>>
>>
>> Just to clarify: the details I requested are not meant to decide whether to
>> reject the patch set (I understand that it can be beneficial to have); I
>> primarily want to understand if we're really dealing with a workload where KSM
>> is able to deduplicate pages that are non-trivial, to maybe figure out if there
>> are other workloads that could similarly benefit -- or if we could optimize KSM
>> for these specific cases or avoid the memory deduplication altogether.
>>
>> In contrast to e.g.:
>>
>> 1) THP resulted in many zeropages we end up deduplicating again. The THP
>>    placement was unfortunate.
>>
>> 2) Unoptimized memory allocators that leave many identical pages mapped
>>    after freeing up memory (e.g., zeroed pages, pages all filled with
>>    poison values) instead of e.g., using MADV_DONTNEED to free up that
>>    memory.
>>
>>
>
> I repeated an experiment with and without KSM. In terms of THP there is
> no huge difference between the two. On a 64GB main memory machine I see
> between 100 - 400MB in AnonHugePages.
>
>>> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
>>> Instagram workload. The workload consists of 36 processes plus a few
>>> sidecar processes.
>>
>> Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in that
>> environment?
>>
>
> It's set to the standard value of 256.
>
> In the meantime I have run experiments with different settings for
> pages_to_scan. With the default value of 100, we only get a relatively
> small benefit from KSM. If I increase the value to, for instance, 2000 or
> 3000, the savings are substantial. (The workload is memory bound, not
> CPU bound).
>
> Here are some stats for setting pages_to_scan to 3000:
>
> full_scans: 560
> general_profit: 20620539008
> max_page_sharing: 256
> merge_across_nodes: 1
> pages_shared: 125446
> pages_sharing: 5259506
> pages_to_scan: 3000
> pages_unshared: 1897537
> pages_volatile: 12389223
> run: 1
> sleep_millisecs: 20
> stable_node_chains: 176
> stable_node_chains_prune_millisecs: 2000
> stable_node_dups: 2604
> use_zero_pages: 0
> zero_pages_sharing: 0
>
>
>> What would be interesting is pages_shared after max_page_sharing was set to a
>> very high number such that pages_shared does not include duplicates. Then
>> pages_shared actually expresses how many different pages we deduplicate. No need
>> to run without THP in that case.
>>
>
> That's on my list for the next set of experiments.
>

In the new experiment I increased the max_page_sharing value to 16384.
This reduced the number of stable_node_dups considerably (it's around 3%
of the previous value). However, pages_sharing is still very high for
this workload.

full_scans: 138
general_profit: 24442268608
max_page_sharing: 16384
merge_across_nodes: 1
pages_shared: 144590
pages_sharing: 6230983
pages_to_scan: 3000
pages_unshared: 2120307
pages_volatile: 14590780
run: 1
sleep_millisecs: 20
stable_node_chains: 23
stable_node_chains_prune_millisecs: 2000
stable_node_dups: 78
use_zero_pages: 0
zero_pages_sharing: 0

>> Similarly, enabling "use_zero_pages" could highlight if your workload ends up
>> deduplciating a lot of zeropages. But maxing out max_page_sharing would be
>> sufficient to understand what's happening.
>>
>>
>
> I already ran experiments with use_zero_pages, but they didn't make a
> difference. I'll repeat the experiment with a higher pages_to_scan
> value.
>
>>> Each of these individual processes has around 500MB in KSM pages.
>>>
>>
>> That's really a lot, thanks.
>>
>>> Also to give some idea for individual VMA's
>>> 7ef5d5600000-7ef5e5600000 rw-p 00000000 00:00 0 (Size: 262144 KB, KSM:
>>> 73160 KB)
>>>
>>
>> I'll have a look at the patches today.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v4 0/3] mm: process/cgroup ksm support
  2023-04-06 16:59               ` Stefan Roesch
@ 2023-04-06 17:10                 ` David Hildenbrand
  0 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2023-04-06 17:10 UTC (permalink / raw)
  To: Stefan Roesch
  Cc: Johannes Weiner, Andrew Morton, kernel-team, linux-mm, riel,
	mhocko, linux-kselftest, linux-doc, Hugh Dickins

On 06.04.23 18:59, Stefan Roesch wrote:
> 
> Stefan Roesch <shr@devkernel.io> writes:
> 
>> David Hildenbrand <david@redhat.com> writes:
>>
>>>>>> Obviously we could spend months analysing which exact allocations are
>>>>>> identical, and then more months or years reworking the architecture to
>>>>>> deduplicate them by hand and in userspace. But this isn't practical,
>>>>>> and KSM is specifically for cases where this isn't practical.
>>>>>> Based on your request in the previous thread, we investigated whether
>>>>>> the boost was coming from the unintended side effects of KSM splitting
>>>>>> THPs. This wasn't the case.
>>>>>> If you have other theories on how the results could be bogus, we'd be
>>>>>> happy to investigate those as well. But you have to let us know what
>>>>>> you're looking for.
>>>>>>
>>>>>
>>>>> Maybe I'm bad at making such requests but
>>>>>
>>>>> "Stefan, can you do me a favor and investigate which pages we end up
>>>>> deduplicating -- especially if it's mostly only the zeropage and if it's
>>>>> still that significant when disabling THP?"
>>>>>
>>>>> "In any case, it would be nice to get a feeling for how much variety in
>>>>> these 20% of deduplicated pages are. "
>>>>>
>>>>> is pretty clear to me. And shouldn't take months.
>>>>>
>>>
>>> Just to clarify: the details I requested are not meant to decide whether to
>>> reject the patch set (I understand that it can be beneficial to have); I
>>> primarily want to understand if we're really dealing with a workload where KSM
>>> is able to deduplicate pages that are non-trivial, to maybe figure out if there
>>> are other workloads that could similarly benefit -- or if we could optimize KSM
>>> for these specific cases or avoid the memory deduplication altogether.
>>>
>>> In contrast to e.g.:
>>>
>>> 1) THP resulted in many zeropages we end up deduplicating again. The THP
>>>     placement was unfortunate.
>>>
>>> 2) Unoptimized memory allocators that leave many identical pages mapped
>>>     after freeing up memory (e.g., zeroed pages, pages all filled with
>>>     poison values) instead of e.g., using MADV_DONTNEED to free up that
>>>     memory.
>>>
>>>
>>
>> I repeated an experiment with and without KSM. In terms of THP there is
>> no huge difference between the two. On a 64GB main memory machine I see
>> between 100 - 400MB in AnonHugePages.
>>
>>>> /sys/kernel/mm/ksm/pages_shared is over 10000 when we run this on an
>>>> Instagram workload. The workload consists of 36 processes plus a few
>>>> sidecar processes.
>>>
>>> Thanks! To which value is /sys/kernel/mm/ksm/max_page_sharing set in that
>>> environment?
>>>
>>
>> It's set to the standard value of 256.
>>
>> In the meantime I have run experiments with different settings for
>> pages_to_scan. With the default value of 100, we only get a relatively
>> small benefit of KSM. If I increase the value to for instance to 2000 or
>> 3000 the savings are substantial. (The workload is memory bound, not
>> CPU bound).
>>
>> Here are some stats for setting pages_to_scan to 3000:
>>
>> full_scans: 560
>> general_profit: 20620539008
>> max_page_sharing: 256
>> merge_across_nodes: 1
>> pages_shared: 125446
>> pages_sharing: 5259506
>> pages_to_scan: 3000
>> pages_unshared: 1897537
>> pages_volatile: 12389223
>> run: 1
>> sleep_millisecs: 20
>> stable_node_chains: 176
>> stable_node_chains_prune_millisecs: 2000
>> stable_node_dups: 2604
>> use_zero_pages: 0
>> zero_pages_sharing: 0
>>
>>
>>> What would be interesting is pages_shared after max_page_sharing was set to a
>>> very high number such that pages_shared does not include duplicates. Then
>>> pages_shared actually expresses how many different pages we deduplicate. No need
>>> to run without THP in that case.
>>>
>>
>> That's on my list for the next set of experiments.
>>
> 
> In the new experiment I increased the max_page_sharing value to 16384.
> This reduced the number of stable_node_dups considerably (it's around 3%
> of the previous value). However, pages_sharing is still very high for
> this workload.
> 
> full_scans: 138
> general_profit: 24442268608
> max_page_sharing: 16384
> merge_across_nodes: 1
> pages_shared: 144590
> pages_sharing: 6230983
> pages_to_scan: 3000
> pages_unshared: 2120307
> pages_volatile: 14590780
> run: 1
> sleep_millisecs: 20
> stable_node_chains: 23
> stable_node_chains_prune_millisecs: 2000
> stable_node_dups: 78
> use_zero_pages: 0
> zero_pages_sharing: 0

Interesting, thanks!

I wonder if it's really many interpreters performing (and caching?) 
essentially the same blobs (for example, for a JIT the IR and/or target 
executable code). So maybe in general, such multi-instance interpreters 
are a good candidate for KSM. (I recall there were some processes where 
a server would perform and cache the translations instead.) But that's 
just pure speculation :)

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2023-04-06 17:12 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-03-10 18:28 [PATCH v4 0/3] mm: process/cgroup ksm support Stefan Roesch
     [not found] ` <20230310182851.2579138-2-shr@devkernel.io>
2023-03-13 16:26   ` [PATCH v4 1/3] mm: add new api to enable ksm per process Johannes Weiner
2023-04-03 10:37   ` David Hildenbrand
2023-04-03 11:03     ` David Hildenbrand
2023-04-04 16:32       ` Stefan Roesch
2023-04-04 16:43       ` Stefan Roesch
2023-04-05  6:51       ` Christian Borntraeger
2023-04-05 16:04         ` David Hildenbrand
2023-04-03 15:50     ` Stefan Roesch
2023-04-03 17:02       ` David Hildenbrand
2023-03-15 20:03 ` [PATCH v4 0/3] mm: process/cgroup ksm support David Hildenbrand
2023-03-15 20:23   ` Mike Kravetz
2023-03-15 21:05   ` Johannes Weiner
2023-03-15 21:19     ` Johannes Weiner
2023-03-15 21:45       ` David Hildenbrand
2023-03-15 21:47         ` David Hildenbrand
2023-03-30 16:19         ` Stefan Roesch
2023-03-28 23:09 ` Andrew Morton
2023-03-30  4:55   ` David Hildenbrand
2023-03-30 14:26     ` Johannes Weiner
2023-03-30 14:40       ` David Hildenbrand
2023-03-30 16:41         ` Stefan Roesch
2023-04-03  9:48           ` David Hildenbrand
2023-04-03 16:34             ` Stefan Roesch
2023-04-03 17:04               ` David Hildenbrand
2023-04-06 16:59               ` Stefan Roesch
2023-04-06 17:10                 ` David Hildenbrand
2023-03-30 20:18     ` Andrew Morton
     [not found] ` <20230310182851.2579138-3-shr@devkernel.io>
2023-04-05 17:04   ` [PATCH v4 2/3] mm: add new KSM process and sysfs knobs David Hildenbrand
2023-04-05 21:20     ` Stefan Roesch
2023-04-06 13:23       ` David Hildenbrand
2023-04-06 14:16         ` Johannes Weiner
2023-04-06 14:32           ` David Hildenbrand
