From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Liam Howlett <liam.howlett@oracle.com>
Cc: "Andi Kleen" <ak@linux.intel.com>,
	"Aneesh Kumar" <aneesh.kumar@linux.ibm.com>,
	"Catalin Marinas" <catalin.marinas@arm.com>,
	"Dave Hansen" <dave.hansen@linux.intel.com>,
	"Hillf Danton" <hdanton@sina.com>, "Jens Axboe" <axboe@kernel.dk>,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Linus Torvalds" <torvalds@linux-foundation.org>,
	"Matthew Wilcox" <willy@infradead.org>,
	"Mel Gorman" <mgorman@suse.de>,
	"Michael Larabel" <Michael@michaellarabel.com>,
	"Michal Hocko" <mhocko@kernel.org>,
	"Mike Rapoport" <rppt@kernel.org>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Tejun Heo" <tj@kernel.org>, "Vlastimil Babka" <vbabka@suse.cz>,
	"Will Deacon" <will@kernel.org>,
	linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
	page-reclaim@google.com, "Brian Geffon" <bgeffon@google.com>,
	"Jan Alexander Steffens" <heftig@archlinux.org>,
	"Oleksandr Natalenko" <oleksandr@natalenko.name>,
	"Steven Barrett" <steven@liquorix.net>,
	"Suleiman Souhlal" <suleiman@google.com>,
	"Daniel Byrne" <djbyrne@mtu.edu>,
	"Donald Carr" <d@chaos-reins.com>,
	"Holger Hoffstätte" <holger@applied-asynchrony.com>,
	"Konstantin Kharlamov" <Hi-Angel@yandex.ru>,
	"Shuang Zhai" <szhai2@cs.rochester.edu>,
	"Sofia Trinh" <sofia.trinh@edi.works>,
	"Vaibhav Jain" <vaibhav@linux.ibm.com>
Subject: Re: [PATCH v13 08/14] mm: multi-gen LRU: support page table walks
Date: Wed, 6 Jul 2022 16:26:27 -0600	[thread overview]
Message-ID: <YsYMEwJCL4GE0Cx6@google.com> (raw)
In-Reply-To: <20220706220022.968789-9-yuzhao@google.com>

On Wed, Jul 06, 2022 at 04:00:17PM -0600, Yu Zhao wrote:

...

> +/*
> + * Some userspace memory allocators map many single-page VMAs. Instead of
> + * returning to the PGD table for each such VMA, finish an entire PMD table
> + * to reduce zigzags and improve cache performance.
> + */
> +static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk *args,
> +			 unsigned long *vm_start, unsigned long *vm_end)
> +{
> +	unsigned long start = round_up(*vm_end, size);
> +	unsigned long end = (start | ~mask) + 1;
> +
> +	VM_WARN_ON_ONCE(mask & size);
> +	VM_WARN_ON_ONCE((start & mask) != (*vm_start & mask));
> +
> +	while (args->vma) {
> +		if (start >= args->vma->vm_end) {
> +			args->vma = args->vma->vm_next;
> +			continue;
> +		}
> +
> +		if (end && end <= args->vma->vm_start)
> +			return false;
> +
> +		if (should_skip_vma(args->vma->vm_start, args->vma->vm_end, args)) {
> +			args->vma = args->vma->vm_next;
> +			continue;
> +		}
> +
> +		*vm_start = max(start, args->vma->vm_start);
> +		*vm_end = min(end - 1, args->vma->vm_end - 1) + 1;
> +
> +		return true;
> +	}
> +
> +	return false;
> +}
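
For intuition, the window arithmetic above can be checked in isolation:
start advances to the next size-aligned boundary at or past *vm_end, and
(start | ~mask) + 1 is the first byte past the enclosing table's coverage
(or 0 when that window is the last one in the address space, which is why
the loop tests "if (end && ...)"). A minimal standalone sketch, assuming
4 KiB pages and 2 MiB PMD coverage, and assuming the PTE-level caller
passes mask = PMD_MASK and size = PAGE_SIZE (the callers are not shown in
the excerpt), with the kernel's round_up() inlined:

#include <stdio.h>

/* Hypothetical 4 KiB-page / 2 MiB-PMD configuration. */
#define PAGE_SIZE	(1UL << 12)
#define PMD_SIZE	(1UL << 21)
#define PMD_MASK	(~(PMD_SIZE - 1))

/* The kernel's round_up() for power-of-two alignment. */
#define round_up(x, y)	((((x) - 1) | ((y) - 1)) + 1)

int main(void)
{
	unsigned long vm_end = 0x7f0000101234UL;	/* arbitrary example */
	unsigned long start = round_up(vm_end, PAGE_SIZE);
	unsigned long end = (start | ~PMD_MASK) + 1;

	/* start: next page boundary at or above vm_end;
	 * end: first byte past the PMD table covering start.
	 */
	printf("start = %#lx\n", start);	/* 0x7f0000102000 */
	printf("end   = %#lx\n", end);		/* 0x7f0000200000 */

	return 0;
}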

To make the above work on top of the Maple Tree:

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7096ff7836db..c0c1195da803 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3689,23 +3689,14 @@ static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk
 {
 	unsigned long start = round_up(*vm_end, size);
 	unsigned long end = (start | ~mask) + 1;
+	VMA_ITERATOR(vmi, args->mm, start);
 
 	VM_WARN_ON_ONCE(mask & size);
 	VM_WARN_ON_ONCE((start & mask) != (*vm_start & mask));
 
-	while (args->vma) {
-		if (start >= args->vma->vm_end) {
-			args->vma = args->vma->vm_next;
+	for_each_vma_range(vmi, args->vma, end) {
+		if (should_skip_vma(args->vma->vm_start, args->vma->vm_end, args))
 			continue;
-		}
-
-		if (end && end <= args->vma->vm_start)
-			return false;
-
-		if (should_skip_vma(args->vma->vm_start, args->vma->vm_end, args)) {
-			args->vma = args->vma->vm_next;
-			continue;
-		}
 
 		*vm_start = max(start, args->vma->vm_start);
 		*vm_end = min(end - 1, args->vma->vm_end - 1) + 1;
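
For reference, the two helpers used above come from the VMA iterator API
added by the maple tree series. Their shape, paraphrased from that series
rather than quoted here (treat this as an approximation, not authoritative):

/* Approximate definitions, paraphrased from the maple tree series: */
#define VMA_ITERATOR(name, __mm, __addr)				\
	struct vma_iterator name = {					\
		.mas = {						\
			.tree = &(__mm)->mm_mt,				\
			.index = __addr,				\
			.node = MAS_START,				\
		},							\
	}

#define for_each_vma_range(__vmi, __vma, __end)				\
	while (((__vma) = vma_find(&(__vmi), (__end))) != NULL)

vma_find() returns NULL once no VMA starts below end, so the iterator
subsumes both the explicit vm_next chaining and the early
"end <= args->vma->vm_start" return in the old loop, leaving only the
should_skip_vma() filter in the body.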
