linux-mm.kvack.org archive mirror
* [PATCH v2 0/3] mm/gup: Fix pin page write cache bouncing on has_pinned
@ 2021-05-07 15:05 Peter Xu
  2021-05-07 15:05 ` [PATCH v2 1/3] mm/gup_benchmark: Support threading Peter Xu
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Peter Xu @ 2021-05-07 15:05 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: Jan Kara, John Hubbard, peterx, Linus Torvalds, Michal Hocko,
	Kirill Tkhai, Kirill Shutemov, Oleg Nesterov, Andrew Morton,
	Jann Horn, Andrea Arcangeli, Jason Gunthorpe, Matthew Wilcox,
	Hugh Dickins

v2:
- patch 1: rename s/threads/nthreads/; assert() on pthread create/destroy [John]
- patch 2:
  - rewrite commit message [John, Linus]
  - use parentheses [Linus]
- patch 3:
  - define mm_set_has_pinned_flag() helper and use it [John, Linus, Matthew]
  - keep has_pinned comment but move to MMF_HAS_PINNED [John]

This series contains 3 patches: the 1st one enables threading for gup_benchmark
in the kselftests, while the latter two patches are collected from Andrea's
local branch and fix a write cache bouncing issue with fast-gup pinning.

To be explicit about the latter two patches:

  - the 2nd patch fixes the performance degradation introduced along with
    has_pinned, then

  - the last patch removes has_pinned altogether by replacing it with a bit in
    mm->flags

For patch 3: originally I think we had a plan to turn has_pinned into a counter
fairly soon, but that hasn't happened so far, which may suggest we can drop it
until we really need such a counter for whatever reason.  As the commit message
states, it saves 4 bytes for each mm without observable regressions.

Regarding testing: the commit message of patch 2 has some detailed testing done
with will-it-scale.  Meanwhile, patch 1 exists mostly so that we can easily
verify the patchset with the existing kselftest facilities, and even regression
test it in the future with the same repo if we want.
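
(For reference: assuming the usual kselftest layout, and that the running
kernel has CONFIG_GUP_TEST=y as required by patch 2, the test binaries can be
built with something like:

  $ make -C tools/testing/selftests/vm

and the benchmark then run as shown below.)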

Below are some extra verification tests I did, besides what is in the commit
message of patch 2, using the new gup_benchmark and 256 cpus.  The test below
was done on a 40-cpu host with Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz, and
it gives a similar result (of course the write cache bouncing gets more severe
with even more cores).

After patch 1 applied (only test patch, so using old kernel):

  $ sudo chrt -f 1 ./gup_test -a  -m 512 -j 40
  PIN_FAST_BENCHMARK: Time: get:459632 put:5990 us
  PIN_FAST_BENCHMARK: Time: get:461967 put:5840 us
  PIN_FAST_BENCHMARK: Time: get:464521 put:6140 us
  PIN_FAST_BENCHMARK: Time: get:465176 put:7100 us
  PIN_FAST_BENCHMARK: Time: get:465960 put:6733 us
  PIN_FAST_BENCHMARK: Time: get:465324 put:6781 us
  PIN_FAST_BENCHMARK: Time: get:466018 put:7130 us
  PIN_FAST_BENCHMARK: Time: get:466362 put:7118 us
  PIN_FAST_BENCHMARK: Time: get:465118 put:6975 us
  PIN_FAST_BENCHMARK: Time: get:466422 put:6602 us
  PIN_FAST_BENCHMARK: Time: get:465791 put:6818 us
  PIN_FAST_BENCHMARK: Time: get:467091 put:6298 us
  PIN_FAST_BENCHMARK: Time: get:467694 put:5432 us
  PIN_FAST_BENCHMARK: Time: get:469575 put:5581 us
  PIN_FAST_BENCHMARK: Time: get:468124 put:6055 us
  PIN_FAST_BENCHMARK: Time: get:468877 put:6720 us
  PIN_FAST_BENCHMARK: Time: get:467212 put:4961 us
  PIN_FAST_BENCHMARK: Time: get:467834 put:6697 us
  PIN_FAST_BENCHMARK: Time: get:470778 put:6398 us
  PIN_FAST_BENCHMARK: Time: get:469788 put:6310 us
  PIN_FAST_BENCHMARK: Time: get:488277 put:7113 us
  PIN_FAST_BENCHMARK: Time: get:486613 put:7085 us
  PIN_FAST_BENCHMARK: Time: get:486940 put:7202 us
  PIN_FAST_BENCHMARK: Time: get:488728 put:7101 us
  PIN_FAST_BENCHMARK: Time: get:487570 put:7327 us
  PIN_FAST_BENCHMARK: Time: get:489260 put:7027 us
  PIN_FAST_BENCHMARK: Time: get:488846 put:6866 us
  PIN_FAST_BENCHMARK: Time: get:488521 put:6745 us
  PIN_FAST_BENCHMARK: Time: get:489950 put:6459 us
  PIN_FAST_BENCHMARK: Time: get:489777 put:6617 us
  PIN_FAST_BENCHMARK: Time: get:488224 put:6591 us
  PIN_FAST_BENCHMARK: Time: get:488644 put:6477 us
  PIN_FAST_BENCHMARK: Time: get:488754 put:6711 us
  PIN_FAST_BENCHMARK: Time: get:488875 put:6743 us
  PIN_FAST_BENCHMARK: Time: get:489290 put:6657 us
  PIN_FAST_BENCHMARK: Time: get:490264 put:6684 us
  PIN_FAST_BENCHMARK: Time: get:489631 put:6737 us
  PIN_FAST_BENCHMARK: Time: get:488434 put:6655 us
  PIN_FAST_BENCHMARK: Time: get:492213 put:6297 us
  PIN_FAST_BENCHMARK: Time: get:491124 put:6173 us

After the whole series applied (new fixed kernel):

  $ sudo chrt -f 1 ./gup_test -a  -m 512 -j 40
  PIN_FAST_BENCHMARK: Time: get:82038 put:7041 us
  PIN_FAST_BENCHMARK: Time: get:82144 put:6817 us
  PIN_FAST_BENCHMARK: Time: get:83417 put:6674 us
  PIN_FAST_BENCHMARK: Time: get:82540 put:6594 us
  PIN_FAST_BENCHMARK: Time: get:83214 put:6681 us
  PIN_FAST_BENCHMARK: Time: get:83444 put:6889 us
  PIN_FAST_BENCHMARK: Time: get:83194 put:7499 us
  PIN_FAST_BENCHMARK: Time: get:84876 put:7369 us
  PIN_FAST_BENCHMARK: Time: get:86092 put:10289 us
  PIN_FAST_BENCHMARK: Time: get:86153 put:10415 us
  PIN_FAST_BENCHMARK: Time: get:85026 put:7751 us
  PIN_FAST_BENCHMARK: Time: get:85458 put:7944 us
  PIN_FAST_BENCHMARK: Time: get:85735 put:8154 us
  PIN_FAST_BENCHMARK: Time: get:85851 put:8299 us
  PIN_FAST_BENCHMARK: Time: get:86323 put:9617 us
  PIN_FAST_BENCHMARK: Time: get:86288 put:10496 us
  PIN_FAST_BENCHMARK: Time: get:87697 put:9346 us
  PIN_FAST_BENCHMARK: Time: get:87980 put:8382 us
  PIN_FAST_BENCHMARK: Time: get:88719 put:8400 us
  PIN_FAST_BENCHMARK: Time: get:87616 put:8588 us
  PIN_FAST_BENCHMARK: Time: get:86730 put:9563 us
  PIN_FAST_BENCHMARK: Time: get:88167 put:8673 us
  PIN_FAST_BENCHMARK: Time: get:86844 put:9777 us
  PIN_FAST_BENCHMARK: Time: get:88068 put:11774 us
  PIN_FAST_BENCHMARK: Time: get:86170 put:15676 us
  PIN_FAST_BENCHMARK: Time: get:87967 put:12827 us
  PIN_FAST_BENCHMARK: Time: get:95773 put:7652 us
  PIN_FAST_BENCHMARK: Time: get:87734 put:13650 us
  PIN_FAST_BENCHMARK: Time: get:89833 put:14237 us
  PIN_FAST_BENCHMARK: Time: get:96186 put:8029 us
  PIN_FAST_BENCHMARK: Time: get:95532 put:8886 us
  PIN_FAST_BENCHMARK: Time: get:95351 put:5826 us
  PIN_FAST_BENCHMARK: Time: get:96401 put:8407 us
  PIN_FAST_BENCHMARK: Time: get:96473 put:8287 us
  PIN_FAST_BENCHMARK: Time: get:97177 put:8430 us
  PIN_FAST_BENCHMARK: Time: get:98120 put:5263 us
  PIN_FAST_BENCHMARK: Time: get:96271 put:7757 us
  PIN_FAST_BENCHMARK: Time: get:99628 put:10467 us
  PIN_FAST_BENCHMARK: Time: get:99344 put:10045 us
  PIN_FAST_BENCHMARK: Time: get:94212 put:15485 us

Summary (average "get" time across the 40 threads, in us):

  Old kernel: 477729.97 (+-3.79%)
  New kernel:  89144.65 (+-11.76%)
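
  (That is, the average pin-fast "get" time drops to roughly 1/5.4 of the old
  value on this 40-core host; the >4000% figure quoted in patch 2 is from a
  256-cpu host, where the bouncing is much worse.)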

I'm not sure whether I should add a Fixes tag for patch 2.  If so, it would be:

Fixes: 008cfe4418b3d ("mm: Introduce mm_struct.has_pinned")

and then cc stable for 5.9+.  However I'll skip adding it unless someone asks,
as this is a performance fix and frequent, concurrent pinning should not really
happen that often.

Please review, thanks.

Andrea Arcangeli (2):
  mm: gup: allow FOLL_PIN to scale in SMP
  mm: gup: pack has_pinned in MMF_HAS_PINNED

Peter Xu (1):
  mm/gup_benchmark: Support threading

 fs/proc/task_mmu.c                    |  2 +-
 include/linux/mm.h                    |  2 +-
 include/linux/mm_types.h              | 10 ---
 include/linux/sched/coredump.h        |  8 +++
 kernel/fork.c                         |  1 -
 mm/gup.c                              | 15 ++++-
 tools/testing/selftests/vm/gup_test.c | 96 ++++++++++++++++++---------
 7 files changed, 88 insertions(+), 46 deletions(-)

-- 
2.31.1




^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v2 1/3] mm/gup_benchmark: Support threading
  2021-05-07 15:05 [PATCH v2 0/3] mm/gup: Fix pin page write cache bouncing on has_pinned Peter Xu
@ 2021-05-07 15:05 ` Peter Xu
  2021-05-07 15:05 ` [PATCH v2 2/3] mm: gup: allow FOLL_PIN to scale in SMP Peter Xu
  2021-05-07 15:05 ` [PATCH v2 3/3] mm: gup: pack has_pinned in MMF_HAS_PINNED Peter Xu
  2 siblings, 0 replies; 8+ messages in thread
From: Peter Xu @ 2021-05-07 15:05 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: Jan Kara, John Hubbard, peterx, Linus Torvalds, Michal Hocko,
	Kirill Tkhai, Kirill Shutemov, Oleg Nesterov, Andrew Morton,
	Jann Horn, Andrea Arcangeli, Jason Gunthorpe, Matthew Wilcox,
	Hugh Dickins

Add a new parameter "-j N" to support concurrent gup test.
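
For example (thread count chosen arbitrarily here), running the pin-fast
benchmark over 256MB of memory with 8 threads:

  $ ./gup_test -a -m 256 -j 8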

Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 tools/testing/selftests/vm/gup_test.c | 96 ++++++++++++++++++---------
 1 file changed, 65 insertions(+), 31 deletions(-)

diff --git a/tools/testing/selftests/vm/gup_test.c b/tools/testing/selftests/vm/gup_test.c
index 1e662d59c502a..fe043f67798b0 100644
--- a/tools/testing/selftests/vm/gup_test.c
+++ b/tools/testing/selftests/vm/gup_test.c
@@ -6,6 +6,8 @@
 #include <sys/mman.h>
 #include <sys/stat.h>
 #include <sys/types.h>
+#include <pthread.h>
+#include <assert.h>
 #include "../../../../mm/gup_test.h"
 
 #define MB (1UL << 20)
@@ -15,6 +17,12 @@
 #define FOLL_WRITE	0x01	/* check pte is writable */
 #define FOLL_TOUCH	0x02	/* mark page accessed */
 
+static unsigned long cmd = GUP_FAST_BENCHMARK;
+static int gup_fd, repeats = 1;
+static unsigned long size = 128 * MB;
+/* Serialize prints */
+static pthread_mutex_t print_mutex = PTHREAD_MUTEX_INITIALIZER;
+
 static char *cmd_to_str(unsigned long cmd)
 {
 	switch (cmd) {
@@ -34,17 +42,55 @@ static char *cmd_to_str(unsigned long cmd)
 	return "Unknown command";
 }
 
+void *gup_thread(void *data)
+{
+	struct gup_test gup = *(struct gup_test *)data;
+	int i;
+
+	/* Only report timing information on the *_BENCHMARK commands: */
+	if ((cmd == PIN_FAST_BENCHMARK) || (cmd == GUP_FAST_BENCHMARK) ||
+	     (cmd == PIN_LONGTERM_BENCHMARK)) {
+		for (i = 0; i < repeats; i++) {
+			gup.size = size;
+			if (ioctl(gup_fd, cmd, &gup))
+				perror("ioctl"), exit(1);
+
+			pthread_mutex_lock(&print_mutex);
+			printf("%s: Time: get:%lld put:%lld us",
+			       cmd_to_str(cmd), gup.get_delta_usec,
+			       gup.put_delta_usec);
+			if (gup.size != size)
+				printf(", truncated (size: %lld)", gup.size);
+			printf("\n");
+			pthread_mutex_unlock(&print_mutex);
+		}
+	} else {
+		gup.size = size;
+		if (ioctl(gup_fd, cmd, &gup)) {
+			perror("ioctl");
+			exit(1);
+		}
+
+		pthread_mutex_lock(&print_mutex);
+		printf("%s: done\n", cmd_to_str(cmd));
+		if (gup.size != size)
+			printf("Truncated (size: %lld)\n", gup.size);
+		pthread_mutex_unlock(&print_mutex);
+	}
+
+	return NULL;
+}
+
 int main(int argc, char **argv)
 {
 	struct gup_test gup = { 0 };
-	unsigned long size = 128 * MB;
-	int i, fd, filed, opt, nr_pages = 1, thp = -1, repeats = 1, write = 1;
-	unsigned long cmd = GUP_FAST_BENCHMARK;
+	int filed, i, opt, nr_pages = 1, thp = -1, write = 1, nthreads = 1, ret;
 	int flags = MAP_PRIVATE, touch = 0;
 	char *file = "/dev/zero";
+	pthread_t *tid;
 	char *p;
 
-	while ((opt = getopt(argc, argv, "m:r:n:F:f:abctTLUuwWSHpz")) != -1) {
+	while ((opt = getopt(argc, argv, "m:r:n:F:f:abcj:tTLUuwWSHpz")) != -1) {
 		switch (opt) {
 		case 'a':
 			cmd = PIN_FAST_BENCHMARK;
@@ -74,6 +120,9 @@ int main(int argc, char **argv)
 			/* strtol, so you can pass flags in hex form */
 			gup.gup_flags = strtol(optarg, 0, 0);
 			break;
+		case 'j':
+			nthreads = atoi(optarg);
+			break;
 		case 'm':
 			size = atoi(optarg) * MB;
 			break;
@@ -154,8 +203,8 @@ int main(int argc, char **argv)
 	if (write)
 		gup.gup_flags |= FOLL_WRITE;
 
-	fd = open("/sys/kernel/debug/gup_test", O_RDWR);
-	if (fd == -1) {
+	gup_fd = open("/sys/kernel/debug/gup_test", O_RDWR);
+	if (gup_fd == -1) {
 		perror("open");
 		exit(1);
 	}
@@ -185,32 +234,17 @@ int main(int argc, char **argv)
 			p[0] = 0;
 	}
 
-	/* Only report timing information on the *_BENCHMARK commands: */
-	if ((cmd == PIN_FAST_BENCHMARK) || (cmd == GUP_FAST_BENCHMARK) ||
-	     (cmd == PIN_LONGTERM_BENCHMARK)) {
-		for (i = 0; i < repeats; i++) {
-			gup.size = size;
-			if (ioctl(fd, cmd, &gup))
-				perror("ioctl"), exit(1);
-
-			printf("%s: Time: get:%lld put:%lld us",
-			       cmd_to_str(cmd), gup.get_delta_usec,
-			       gup.put_delta_usec);
-			if (gup.size != size)
-				printf(", truncated (size: %lld)", gup.size);
-			printf("\n");
-		}
-	} else {
-		gup.size = size;
-		if (ioctl(fd, cmd, &gup)) {
-			perror("ioctl");
-			exit(1);
-		}
-
-		printf("%s: done\n", cmd_to_str(cmd));
-		if (gup.size != size)
-			printf("Truncated (size: %lld)\n", gup.size);
+	tid = malloc(sizeof(pthread_t) * nthreads);
+	assert(tid);
+	for (i = 0; i < nthreads; i++) {
+		ret = pthread_create(&tid[i], NULL, gup_thread, &gup);
+		assert(ret == 0);
+	}
+	for (i = 0; i < nthreads; i++) {
+		ret = pthread_join(tid[i], NULL);
+		assert(ret == 0);
 	}
+	free(tid);
 
 	return 0;
 }
-- 
2.31.1



^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH v2 2/3] mm: gup: allow FOLL_PIN to scale in SMP
  2021-05-07 15:05 [PATCH v2 0/3] mm/gup: Fix pin page write cache bouncing on has_pinned Peter Xu
  2021-05-07 15:05 ` [PATCH v2 1/3] mm/gup_benchmark: Support threading Peter Xu
@ 2021-05-07 15:05 ` Peter Xu
  2021-05-07 15:05 ` [PATCH v2 3/3] mm: gup: pack has_pinned in MMF_HAS_PINNED Peter Xu
  2 siblings, 0 replies; 8+ messages in thread
From: Peter Xu @ 2021-05-07 15:05 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: Jan Kara, John Hubbard, peterx, Linus Torvalds, Michal Hocko,
	Kirill Tkhai, Kirill Shutemov, Oleg Nesterov, Andrew Morton,
	Jann Horn, Andrea Arcangeli, Jason Gunthorpe, Matthew Wilcox,
	Hugh Dickins

From: Andrea Arcangeli <aarcange@redhat.com>

has_pinned cannot be written by each pin-fast or it won't scale on SMP.
This isn't "false sharing" strictly speaking (it's more like "true
non-sharing"), but it creates the same SMP scalability bottleneck as
"false sharing" does.
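
To illustrate the pattern (a minimal user-space sketch, not the kernel code;
the C11 atomics and the "has_pinned" variable here only stand in for the
field in struct mm_struct):

  #include <stdatomic.h>

  static atomic_int has_pinned;

  /*
   * Old behaviour: every pin-fast writes, so the cacheline holding
   * has_pinned keeps bouncing between the CPUs of all pinning threads.
   */
  static void pin_fast_old(void)
  {
          atomic_store(&has_pinned, 1);
  }

  /*
   * New behaviour: read first and write at most once, so after the first
   * writer every other thread only reads a shared cacheline.
   */
  static void pin_fast_new(void)
  {
          if (!atomic_load(&has_pinned))
                  atomic_store(&has_pinned, 1);
  }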

To verify the improvement, the test below was done on a 40-cpu host with
Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz (CONFIG_GUP_TEST=y is required):

  $ sudo chrt -f 1 ./gup_test -a  -m 512 -j 40

From which we get (average value across the 40 threads):

  Old kernel: 477729.97 (+- 3.79%)
  New kernel:  89144.65 (+-11.76%)

Under a similar setup but with 256 cpus, this commit increases the SMP
scalability of pin_user_pages_fast() executed by different threads of the same
process by more than 4000%.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
[peterx: rewrite commit message, add parentheses against "(A & B)"]
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 63a079e361a3d..9933bc5c2eff2 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1292,7 +1292,7 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 		BUG_ON(*locked != 1);
 	}
 
-	if (flags & FOLL_PIN)
+	if ((flags & FOLL_PIN) && !atomic_read(&mm->has_pinned))
 		atomic_set(&mm->has_pinned, 1);
 
 	/*
@@ -2617,7 +2617,7 @@ static int internal_get_user_pages_fast(unsigned long start,
 				       FOLL_FAST_ONLY)))
 		return -EINVAL;
 
-	if (gup_flags & FOLL_PIN)
+	if ((gup_flags & FOLL_PIN) && !atomic_read(&current->mm->has_pinned))
 		atomic_set(&current->mm->has_pinned, 1);
 
 	if (!(gup_flags & FOLL_FAST_ONLY))
-- 
2.31.1



^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH v2 3/3] mm: gup: pack has_pinned in MMF_HAS_PINNED
  2021-05-07 15:05 [PATCH v2 0/3] mm/gup: Fix pin page write cache bouncing on has_pinned Peter Xu
  2021-05-07 15:05 ` [PATCH v2 1/3] mm/gup_benchmark: Support threading Peter Xu
  2021-05-07 15:05 ` [PATCH v2 2/3] mm: gup: allow FOLL_PIN to scale in SMP Peter Xu
@ 2021-05-07 15:05 ` Peter Xu
  2021-05-08  1:12   ` John Hubbard
                     ` (2 more replies)
  2 siblings, 3 replies; 8+ messages in thread
From: Peter Xu @ 2021-05-07 15:05 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: Jan Kara, John Hubbard, peterx, Linus Torvalds, Michal Hocko,
	Kirill Tkhai, Kirill Shutemov, Oleg Nesterov, Andrew Morton,
	Jann Horn, Andrea Arcangeli, Jason Gunthorpe, Matthew Wilcox,
	Hugh Dickins

From: Andrea Arcangeli <aarcange@redhat.com>

has_pinned 32bit can be packed in the MMF_HAS_PINNED bit as a noop
cleanup.

Any atomic_inc/dec to the mm cacheline shared by all threads in
pin-fast would reintroduce a loss of SMP scalability to pin-fast, so
there's no future potential usefulness to keep an atomic in the mm for
this.

set_bit(MMF_HAS_PINNED) will be theoretically a bit slower than
WRITE_ONCE (atomic_set is equivalent to WRITE_ONCE), but the set_bit
(just like atomic_set after this commit) has to be still issued only
once per "mm", so the difference between the two will be lost in the
noise.

will-it-scale "mmap2" shows no change in performance with enterprise
config as expected.

will-it-scale "pin_fast" retains the > 4000% SMP scalability
performance improvement against upstream as expected.

This is a noop as far as overall performance and SMP scalability are
concerned.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
[peterx: Fix build for task_mmu.c, introduce mm_set_has_pinned_flag, fix
 comment here and there]
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 fs/proc/task_mmu.c             |  2 +-
 include/linux/mm.h             |  2 +-
 include/linux/mm_types.h       | 10 ----------
 include/linux/sched/coredump.h |  8 ++++++++
 kernel/fork.c                  |  1 -
 mm/gup.c                       | 19 +++++++++++++++----
 6 files changed, 25 insertions(+), 17 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 4c95cc57a66a8..6144571942db9 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1049,7 +1049,7 @@ static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr,
 		return false;
 	if (!is_cow_mapping(vma->vm_flags))
 		return false;
-	if (likely(!atomic_read(&vma->vm_mm->has_pinned)))
+	if (likely(!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags)))
 		return false;
 	page = vm_normal_page(vma, addr, pte);
 	if (!page)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d6790ab0cf575..94dc84f6d8658 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1331,7 +1331,7 @@ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
 	if (!is_cow_mapping(vma->vm_flags))
 		return false;
 
-	if (!atomic_read(&vma->vm_mm->has_pinned))
+	if (!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags))
 		return false;
 
 	return page_maybe_dma_pinned(page);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6613b26a88946..15d79858fadbd 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -435,16 +435,6 @@ struct mm_struct {
 		 */
 		atomic_t mm_count;
 
-		/**
-		 * @has_pinned: Whether this mm has pinned any pages.  This can
-		 * be either replaced in the future by @pinned_vm when it
-		 * becomes stable, or grow into a counter on its own. We're
-		 * aggresive on this bit now - even if the pinned pages were
-		 * unpinned later on, we'll still keep this bit set for the
-		 * lifecycle of this mm just for simplicity.
-		 */
-		atomic_t has_pinned;
-
 		/**
 		 * @write_protect_seq: Locked when any thread is write
 		 * protecting pages mapped by this mm to enforce a later COW,
diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index dfd82eab29025..4d9e3a6568758 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -73,6 +73,14 @@ static inline int get_dumpable(struct mm_struct *mm)
 #define MMF_OOM_VICTIM		25	/* mm is the oom victim */
 #define MMF_OOM_REAP_QUEUED	26	/* mm was queued for oom_reaper */
 #define MMF_MULTIPROCESS	27	/* mm is shared between processes */
+/*
+ * MMF_HAS_PINNED: Whether this mm has pinned any pages.  This can be either
+ * replaced in the future by mm.pinned_vm when it becomes stable, or grow into
+ * a counter on its own. We're aggresive on this bit for now: even if the
+ * pinned pages were unpinned later on, we'll still keep this bit set for the
+ * lifecycle of this mm, just for simplicity.
+ */
+#define MMF_HAS_PINNED		28	/* FOLL_PIN has run, never cleared */
 #define MMF_DISABLE_THP_MASK	(1 << MMF_DISABLE_THP)
 
 #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
diff --git a/kernel/fork.c b/kernel/fork.c
index 502dc046fbc62..a71e73707ef59 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1026,7 +1026,6 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	mm_pgtables_bytes_init(mm);
 	mm->map_count = 0;
 	mm->locked_vm = 0;
-	atomic_set(&mm->has_pinned, 0);
 	atomic64_set(&mm->pinned_vm, 0);
 	memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
 	spin_lock_init(&mm->page_table_lock);
diff --git a/mm/gup.c b/mm/gup.c
index 9933bc5c2eff2..bb130723a6717 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1270,6 +1270,17 @@ int fixup_user_fault(struct mm_struct *mm,
 }
 EXPORT_SYMBOL_GPL(fixup_user_fault);
 
+/*
+ * Set the MMF_HAS_PINNED if not set yet; after set it'll be there for the mm's
+ * lifecycle.  Avoid setting the bit unless necessary, or it might cause write
+ * cache bouncing on large SMP machines for concurrent pinned gups.
+ */
+static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
+{
+       if (!test_bit(MMF_HAS_PINNED, mm_flags))
+               set_bit(MMF_HAS_PINNED, mm_flags);
+}
+
 /*
  * Please note that this function, unlike __get_user_pages will not
  * return 0 for nr_pages > 0 without FOLL_NOWAIT
@@ -1292,8 +1303,8 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 		BUG_ON(*locked != 1);
 	}
 
-	if ((flags & FOLL_PIN) && !atomic_read(&mm->has_pinned))
-		atomic_set(&mm->has_pinned, 1);
+	if (flags & FOLL_PIN)
+		mm_set_has_pinned_flag(&mm->flags);
 
 	/*
 	 * FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
@@ -2617,8 +2628,8 @@ static int internal_get_user_pages_fast(unsigned long start,
 				       FOLL_FAST_ONLY)))
 		return -EINVAL;
 
-	if ((gup_flags & FOLL_PIN) && !atomic_read(&current->mm->has_pinned))
-		atomic_set(&current->mm->has_pinned, 1);
+	if (gup_flags & FOLL_PIN)
+		mm_set_has_pinned_flag(&current->mm->flags);
 
 	if (!(gup_flags & FOLL_FAST_ONLY))
 		might_lock_read(&current->mm->mmap_lock);
-- 
2.31.1



^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH v2 3/3] mm: gup: pack has_pinned in MMF_HAS_PINNED
  2021-05-07 15:05 ` [PATCH v2 3/3] mm: gup: pack has_pinned in MMF_HAS_PINNED Peter Xu
@ 2021-05-08  1:12   ` John Hubbard
  2021-05-12  9:49   ` Geert Uytterhoeven
  2021-05-12  9:49   ` Naresh Kamboju
  2 siblings, 0 replies; 8+ messages in thread
From: John Hubbard @ 2021-05-08  1:12 UTC (permalink / raw)
  To: Peter Xu, linux-mm, linux-kernel
  Cc: Jan Kara, Linus Torvalds, Michal Hocko, Kirill Tkhai,
	Kirill Shutemov, Oleg Nesterov, Andrew Morton, Jann Horn,
	Andrea Arcangeli, Jason Gunthorpe, Matthew Wilcox, Hugh Dickins

On 5/7/21 8:05 AM, Peter Xu wrote:
> From: Andrea Arcangeli <aarcange@redhat.com>
> 
> has_pinned 32bit can be packed in the MMF_HAS_PINNED bit as a noop
> cleanup.
> 
> Any atomic_inc/dec to the mm cacheline shared by all threads in
> pin-fast would reintroduce a loss of SMP scalability to pin-fast, so
> there's no future potential usefulness to keep an atomic in the mm for
> this.
> 
> set_bit(MMF_HAS_PINNED) will be theoretically a bit slower than
> WRITE_ONCE (atomic_set is equivalent to WRITE_ONCE), but the set_bit
> (just like atomic_set after this commit) has to be still issued only
> once per "mm", so the difference between the two will be lost in the
> noise.
> 
> will-it-scale "mmap2" shows no change in performance with enterprise
> config as expected.
> 
> will-it-scale "pin_fast" retains the > 4000% SMP scalability
> performance improvement against upstream as expected.
> 
> This is a noop as far as overall performance and SMP scalability are
> concerned.
> 
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> [peterx: Fix build for task_mmu.c, introduce mm_set_has_pinned_flag, fix
>   comment here and there]
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>   fs/proc/task_mmu.c             |  2 +-
>   include/linux/mm.h             |  2 +-
>   include/linux/mm_types.h       | 10 ----------
>   include/linux/sched/coredump.h |  8 ++++++++
>   kernel/fork.c                  |  1 -
>   mm/gup.c                       | 19 +++++++++++++++----
>   6 files changed, 25 insertions(+), 17 deletions(-)
> 

Looks good.

Reviewed-by: John Hubbard <jhubbard@nvidia.com>


thanks,
-- 
John Hubbard
NVIDIA

> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 4c95cc57a66a8..6144571942db9 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1049,7 +1049,7 @@ static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr,
>   		return false;
>   	if (!is_cow_mapping(vma->vm_flags))
>   		return false;
> -	if (likely(!atomic_read(&vma->vm_mm->has_pinned)))
> +	if (likely(!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags)))
>   		return false;
>   	page = vm_normal_page(vma, addr, pte);
>   	if (!page)
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index d6790ab0cf575..94dc84f6d8658 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1331,7 +1331,7 @@ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
>   	if (!is_cow_mapping(vma->vm_flags))
>   		return false;
>   
> -	if (!atomic_read(&vma->vm_mm->has_pinned))
> +	if (!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags))
>   		return false;
>   
>   	return page_maybe_dma_pinned(page);
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 6613b26a88946..15d79858fadbd 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -435,16 +435,6 @@ struct mm_struct {
>   		 */
>   		atomic_t mm_count;
>   
> -		/**
> -		 * @has_pinned: Whether this mm has pinned any pages.  This can
> -		 * be either replaced in the future by @pinned_vm when it
> -		 * becomes stable, or grow into a counter on its own. We're
> -		 * aggresive on this bit now - even if the pinned pages were
> -		 * unpinned later on, we'll still keep this bit set for the
> -		 * lifecycle of this mm just for simplicity.
> -		 */
> -		atomic_t has_pinned;
> -
>   		/**
>   		 * @write_protect_seq: Locked when any thread is write
>   		 * protecting pages mapped by this mm to enforce a later COW,
> diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
> index dfd82eab29025..4d9e3a6568758 100644
> --- a/include/linux/sched/coredump.h
> +++ b/include/linux/sched/coredump.h
> @@ -73,6 +73,14 @@ static inline int get_dumpable(struct mm_struct *mm)
>   #define MMF_OOM_VICTIM		25	/* mm is the oom victim */
>   #define MMF_OOM_REAP_QUEUED	26	/* mm was queued for oom_reaper */
>   #define MMF_MULTIPROCESS	27	/* mm is shared between processes */
> +/*
> + * MMF_HAS_PINNED: Whether this mm has pinned any pages.  This can be either
> + * replaced in the future by mm.pinned_vm when it becomes stable, or grow into
> + * a counter on its own. We're aggresive on this bit for now: even if the
> + * pinned pages were unpinned later on, we'll still keep this bit set for the
> + * lifecycle of this mm, just for simplicity.
> + */
> +#define MMF_HAS_PINNED		28	/* FOLL_PIN has run, never cleared */
>   #define MMF_DISABLE_THP_MASK	(1 << MMF_DISABLE_THP)
>   
>   #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 502dc046fbc62..a71e73707ef59 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -1026,7 +1026,6 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
>   	mm_pgtables_bytes_init(mm);
>   	mm->map_count = 0;
>   	mm->locked_vm = 0;
> -	atomic_set(&mm->has_pinned, 0);
>   	atomic64_set(&mm->pinned_vm, 0);
>   	memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
>   	spin_lock_init(&mm->page_table_lock);
> diff --git a/mm/gup.c b/mm/gup.c
> index 9933bc5c2eff2..bb130723a6717 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1270,6 +1270,17 @@ int fixup_user_fault(struct mm_struct *mm,
>   }
>   EXPORT_SYMBOL_GPL(fixup_user_fault);
>   
> +/*
> + * Set the MMF_HAS_PINNED if not set yet; after set it'll be there for the mm's
> + * lifecycle.  Avoid setting the bit unless necessary, or it might cause write
> + * cache bouncing on large SMP machines for concurrent pinned gups.
> + */
> +static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
> +{
> +       if (!test_bit(MMF_HAS_PINNED, mm_flags))
> +               set_bit(MMF_HAS_PINNED, mm_flags);
> +}
> +
>   /*
>    * Please note that this function, unlike __get_user_pages will not
>    * return 0 for nr_pages > 0 without FOLL_NOWAIT
> @@ -1292,8 +1303,8 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
>   		BUG_ON(*locked != 1);
>   	}
>   
> -	if ((flags & FOLL_PIN) && !atomic_read(&mm->has_pinned))
> -		atomic_set(&mm->has_pinned, 1);
> +	if (flags & FOLL_PIN)
> +		mm_set_has_pinned_flag(&mm->flags);
>   
>   	/*
>   	 * FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
> @@ -2617,8 +2628,8 @@ static int internal_get_user_pages_fast(unsigned long start,
>   				       FOLL_FAST_ONLY)))
>   		return -EINVAL;
>   
> -	if ((gup_flags & FOLL_PIN) && !atomic_read(&current->mm->has_pinned))
> -		atomic_set(&current->mm->has_pinned, 1);
> +	if (gup_flags & FOLL_PIN)
> +		mm_set_has_pinned_flag(&current->mm->flags);
>   
>   	if (!(gup_flags & FOLL_FAST_ONLY))
>   		might_lock_read(&current->mm->mmap_lock);
> 


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2 3/3] mm: gup: pack has_pinned in MMF_HAS_PINNED
  2021-05-07 15:05 ` [PATCH v2 3/3] mm: gup: pack has_pinned in MMF_HAS_PINNED Peter Xu
  2021-05-08  1:12   ` John Hubbard
@ 2021-05-12  9:49   ` Geert Uytterhoeven
  2021-05-12 12:34     ` Peter Xu
  2021-05-12  9:49   ` Naresh Kamboju
  2 siblings, 1 reply; 8+ messages in thread
From: Geert Uytterhoeven @ 2021-05-12  9:49 UTC (permalink / raw)
  To: Peter Xu, Andrea Arcangeli
  Cc: Linux MM, Linux Kernel Mailing List, Jan Kara, John Hubbard,
	Linus Torvalds, Michal Hocko, Kirill Tkhai, Kirill Shutemov,
	Oleg Nesterov, Andrew Morton, Jann Horn, Jason Gunthorpe,
	Matthew Wilcox, Hugh Dickins

Hi Peter, Andrea,

On Fri, May 7, 2021 at 7:26 PM Peter Xu <peterx@redhat.com> wrote:
> From: Andrea Arcangeli <aarcange@redhat.com>
>
> has_pinned 32bit can be packed in the MMF_HAS_PINNED bit as a noop
> cleanup.
>
> Any atomic_inc/dec to the mm cacheline shared by all threads in
> pin-fast would reintroduce a loss of SMP scalability to pin-fast, so
> there's no future potential usefulness to keep an atomic in the mm for
> this.
>
> set_bit(MMF_HAS_PINNED) will be theoretically a bit slower than
> WRITE_ONCE (atomic_set is equivalent to WRITE_ONCE), but the set_bit
> (just like atomic_set after this commit) has to be still issued only
> once per "mm", so the difference between the two will be lost in the
> noise.
>
> will-it-scale "mmap2" shows no change in performance with enterprise
> config as expected.
>
> will-it-scale "pin_fast" retains the > 4000% SMP scalability
> performance improvement against upstream as expected.
>
> This is a noop as far as overall performance and SMP scalability are
> concerned.
>
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> [peterx: Fix build for task_mmu.c, introduce mm_set_has_pinned_flag, fix
>  comment here and there]
> Signed-off-by: Peter Xu <peterx@redhat.com>

Thanks for your patch, which is now in linux-next.

> diff --git a/mm/gup.c b/mm/gup.c
> index 9933bc5c2eff2..bb130723a6717 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1270,6 +1270,17 @@ int fixup_user_fault(struct mm_struct *mm,
>  }
>  EXPORT_SYMBOL_GPL(fixup_user_fault);
>
> +/*
> + * Set the MMF_HAS_PINNED if not set yet; after set it'll be there for the mm's
> + * lifecycle.  Avoid setting the bit unless necessary, or it might cause write
> + * cache bouncing on large SMP machines for concurrent pinned gups.
> + */
> +static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
> +{
> +       if (!test_bit(MMF_HAS_PINNED, mm_flags))
> +               set_bit(MMF_HAS_PINNED, mm_flags);
> +}
> +
>  /*
>   * Please note that this function, unlike __get_user_pages will not
>   * return 0 for nr_pages > 0 without FOLL_NOWAIT
> @@ -1292,8 +1303,8 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
>                 BUG_ON(*locked != 1);
>         }
>
> -       if ((flags & FOLL_PIN) && !atomic_read(&mm->has_pinned))
> -               atomic_set(&mm->has_pinned, 1);
> +       if (flags & FOLL_PIN)
> +               mm_set_has_pinned_flag(&mm->flags);
>
>         /*
>          * FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
> @@ -2617,8 +2628,8 @@ static int internal_get_user_pages_fast(unsigned long start,
>                                        FOLL_FAST_ONLY)))
>                 return -EINVAL;
>
> -       if ((gup_flags & FOLL_PIN) && !atomic_read(&current->mm->has_pinned))
> -               atomic_set(&current->mm->has_pinned, 1);
> +       if (gup_flags & FOLL_PIN)
> +               mm_set_has_pinned_flag(&current->mm->flags);

noreply@ellerman.id.au reports:

    FAILED linux-next/m5272c3_defconfig/m68k-gcc8 Wed May 12, 19:30
    http://kisskb.ellerman.id.au/kisskb/buildresult/14543658/
    Commit:   Add linux-next specific files for 20210512
          ec85c95b0c90a17413901b018e8ade7b9eae7cad
    Compiler: m68k-linux-gcc (GCC) 8.1.0 / GNU ld (GNU Binutils) 2.30

    mm/gup.c:2698:3: error: implicit declaration of function
'mm_set_has_pinned_flag'; did you mean 'set_tsk_thread_flag'?
[-Werror=implicit-function-declaration]

Its definition is inside the #ifdef CONFIG_MMU section, but the last
user isn't.

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2 3/3] mm: gup: pack has_pinned in MMF_HAS_PINNED
  2021-05-07 15:05 ` [PATCH v2 3/3] mm: gup: pack has_pinned in MMF_HAS_PINNED Peter Xu
  2021-05-08  1:12   ` John Hubbard
  2021-05-12  9:49   ` Geert Uytterhoeven
@ 2021-05-12  9:49   ` Naresh Kamboju
  2 siblings, 0 replies; 8+ messages in thread
From: Naresh Kamboju @ 2021-05-12  9:49 UTC (permalink / raw)
  To: Peter Xu, linux-mm, open list
  Cc: Jan Kara, John Hubbard, Linus Torvalds, Michal Hocko,
	Kirill Tkhai, Kirill Shutemov, Oleg Nesterov, Andrew Morton,
	Jann Horn, Andrea Arcangeli, Jason Gunthorpe, Matthew Wilcox,
	Hugh Dickins, Linux-Next Mailing List, lkft-triage, regressions

On Fri, 7 May 2021 at 20:36, Peter Xu <peterx@redhat.com> wrote:
>
> From: Andrea Arcangeli <aarcange@redhat.com>
>
> has_pinned 32bit can be packed in the MMF_HAS_PINNED bit as a noop
> cleanup.
>
> Any atomic_inc/dec to the mm cacheline shared by all threads in
> pin-fast would reintroduce a loss of SMP scalability to pin-fast, so
> there's no future potential usefulness to keep an atomic in the mm for
> this.
>
> set_bit(MMF_HAS_PINNED) will be theoretically a bit slower than
> WRITE_ONCE (atomic_set is equivalent to WRITE_ONCE), but the set_bit
> (just like atomic_set after this commit) has to be still issued only
> once per "mm", so the difference between the two will be lost in the
> noise.
>
> will-it-scale "mmap2" shows no change in performance with enterprise
> config as expected.
>
> will-it-scale "pin_fast" retains the > 4000% SMP scalability
> performance improvement against upstream as expected.
>
> This is a noop as far as overall performance and SMP scalability are
> concerned.
>
> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> [peterx: Fix build for task_mmu.c, introduce mm_set_has_pinned_flag, fix
>  comment here and there]
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  fs/proc/task_mmu.c             |  2 +-
>  include/linux/mm.h             |  2 +-
>  include/linux/mm_types.h       | 10 ----------
>  include/linux/sched/coredump.h |  8 ++++++++
>  kernel/fork.c                  |  1 -
>  mm/gup.c                       | 19 +++++++++++++++----
>  6 files changed, 25 insertions(+), 17 deletions(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 4c95cc57a66a8..6144571942db9 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -1049,7 +1049,7 @@ static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr,
>                 return false;
>         if (!is_cow_mapping(vma->vm_flags))
>                 return false;
> -       if (likely(!atomic_read(&vma->vm_mm->has_pinned)))
> +       if (likely(!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags)))
>                 return false;
>         page = vm_normal_page(vma, addr, pte);
>         if (!page)
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index d6790ab0cf575..94dc84f6d8658 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1331,7 +1331,7 @@ static inline bool page_needs_cow_for_dma(struct vm_area_struct *vma,
>         if (!is_cow_mapping(vma->vm_flags))
>                 return false;
>
> -       if (!atomic_read(&vma->vm_mm->has_pinned))
> +       if (!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags))
>                 return false;
>
>         return page_maybe_dma_pinned(page);
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 6613b26a88946..15d79858fadbd 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -435,16 +435,6 @@ struct mm_struct {
>                  */
>                 atomic_t mm_count;
>
> -               /**
> -                * @has_pinned: Whether this mm has pinned any pages.  This can
> -                * be either replaced in the future by @pinned_vm when it
> -                * becomes stable, or grow into a counter on its own. We're
> -                * aggresive on this bit now - even if the pinned pages were
> -                * unpinned later on, we'll still keep this bit set for the
> -                * lifecycle of this mm just for simplicity.
> -                */
> -               atomic_t has_pinned;
> -
>                 /**
>                  * @write_protect_seq: Locked when any thread is write
>                  * protecting pages mapped by this mm to enforce a later COW,
> diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
> index dfd82eab29025..4d9e3a6568758 100644
> --- a/include/linux/sched/coredump.h
> +++ b/include/linux/sched/coredump.h
> @@ -73,6 +73,14 @@ static inline int get_dumpable(struct mm_struct *mm)
>  #define MMF_OOM_VICTIM         25      /* mm is the oom victim */
>  #define MMF_OOM_REAP_QUEUED    26      /* mm was queued for oom_reaper */
>  #define MMF_MULTIPROCESS       27      /* mm is shared between processes */
> +/*
> + * MMF_HAS_PINNED: Whether this mm has pinned any pages.  This can be either
> + * replaced in the future by mm.pinned_vm when it becomes stable, or grow into
> + * a counter on its own. We're aggresive on this bit for now: even if the
> + * pinned pages were unpinned later on, we'll still keep this bit set for the
> + * lifecycle of this mm, just for simplicity.
> + */
> +#define MMF_HAS_PINNED         28      /* FOLL_PIN has run, never cleared */
>  #define MMF_DISABLE_THP_MASK   (1 << MMF_DISABLE_THP)
>
>  #define MMF_INIT_MASK          (MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 502dc046fbc62..a71e73707ef59 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -1026,7 +1026,6 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
>         mm_pgtables_bytes_init(mm);
>         mm->map_count = 0;
>         mm->locked_vm = 0;
> -       atomic_set(&mm->has_pinned, 0);
>         atomic64_set(&mm->pinned_vm, 0);
>         memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
>         spin_lock_init(&mm->page_table_lock);
> diff --git a/mm/gup.c b/mm/gup.c
> index 9933bc5c2eff2..bb130723a6717 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1270,6 +1270,17 @@ int fixup_user_fault(struct mm_struct *mm,
>  }
>  EXPORT_SYMBOL_GPL(fixup_user_fault);
>
> +/*
> + * Set the MMF_HAS_PINNED if not set yet; after set it'll be there for the mm's
> + * lifecycle.  Avoid setting the bit unless necessary, or it might cause write
> + * cache bouncing on large SMP machines for concurrent pinned gups.
> + */
> +static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
> +{
> +       if (!test_bit(MMF_HAS_PINNED, mm_flags))
> +               set_bit(MMF_HAS_PINNED, mm_flags);
> +}
> +
>  /*
>   * Please note that this function, unlike __get_user_pages will not
>   * return 0 for nr_pages > 0 without FOLL_NOWAIT
> @@ -1292,8 +1303,8 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
>                 BUG_ON(*locked != 1);
>         }
>
> -       if ((flags & FOLL_PIN) && !atomic_read(&mm->has_pinned))
> -               atomic_set(&mm->has_pinned, 1);
> +       if (flags & FOLL_PIN)
> +               mm_set_has_pinned_flag(&mm->flags);
>
>         /*
>          * FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
> @@ -2617,8 +2628,8 @@ static int internal_get_user_pages_fast(unsigned long start,
>                                        FOLL_FAST_ONLY)))
>                 return -EINVAL;
>
> -       if ((gup_flags & FOLL_PIN) && !atomic_read(&current->mm->has_pinned))
> -               atomic_set(&current->mm->has_pinned, 1);
> +       if (gup_flags & FOLL_PIN)
> +               mm_set_has_pinned_flag(&current->mm->flags);

Linux next tag next-20210512 builds failed on arm, riscv, mips and sh
for the tinyconfig and allnoconfig configs due to this patch.

 arm, mips, riscv and sh (tinyconfig) with gcc-8
 arm, mips, riscv and sh (allnoconfig) with gcc-8
 arm, mips, riscv and sh (tinyconfig) with gcc-9
 arm, mips, riscv and sh (allnoconfig) with gcc-9
 arm, mips, riscv and sh (tinyconfig) with gcc-10
 arm, mips, riscv and sh (allnoconfig) with gcc-10

mm/gup.c: In function 'internal_get_user_pages_fast':
mm/gup.c:2698:3: error: implicit declaration of function
'mm_set_has_pinned_flag' [-Werror=implicit-function-declaration]
 2698 |   mm_set_has_pinned_flag(&current->mm->flags);
      |   ^~~~~~~~~~~~~~~~~~~~~~
cc1: some warnings being treated as errors
make[2]: *** [/builds/linux/scripts/Makefile.build:273: mm/gup.o] Error 1

Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>

#regzb introduced: 354a2e3604e2 ("mm: gup: pack has_pinned in MMF_HAS_PINNED")

Build url:
https://gitlab.com/Linaro/lkft/mirrors/next/linux-next/-/jobs/1255567072#L315

--
Linaro LKFT
https://lkft.linaro.org


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v2 3/3] mm: gup: pack has_pinned in MMF_HAS_PINNED
  2021-05-12  9:49   ` Geert Uytterhoeven
@ 2021-05-12 12:34     ` Peter Xu
  0 siblings, 0 replies; 8+ messages in thread
From: Peter Xu @ 2021-05-12 12:34 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Andrea Arcangeli, Linux MM, Linux Kernel Mailing List, Jan Kara,
	John Hubbard, Linus Torvalds, Michal Hocko, Kirill Tkhai,
	Kirill Shutemov, Oleg Nesterov, Andrew Morton, Jann Horn,
	Jason Gunthorpe, Matthew Wilcox, Hugh Dickins

On Wed, May 12, 2021 at 11:49:05AM +0200, Geert Uytterhoeven wrote:
> Hi Peter, Andrea,

Hi, Geert, Naresh,

(Adding Naresh too since Naresh reported the same issue at the meantime)

> 
> On Fri, May 7, 2021 at 7:26 PM Peter Xu <peterx@redhat.com> wrote:
> > From: Andrea Arcangeli <aarcange@redhat.com>
> >
> > has_pinned 32bit can be packed in the MMF_HAS_PINNED bit as a noop
> > cleanup.
> >
> > Any atomic_inc/dec to the mm cacheline shared by all threads in
> > pin-fast would reintroduce a loss of SMP scalability to pin-fast, so
> > there's no future potential usefulness to keep an atomic in the mm for
> > this.
> >
> > set_bit(MMF_HAS_PINNED) will be theoretically a bit slower than
> > WRITE_ONCE (atomic_set is equivalent to WRITE_ONCE), but the set_bit
> > (just like atomic_set after this commit) has to be still issued only
> > once per "mm", so the difference between the two will be lost in the
> > noise.
> >
> > will-it-scale "mmap2" shows no change in performance with enterprise
> > config as expected.
> >
> > will-it-scale "pin_fast" retains the > 4000% SMP scalability
> > performance improvement against upstream as expected.
> >
> > This is a noop as far as overall performance and SMP scalability are
> > concerned.
> >
> > Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
> > [peterx: Fix build for task_mmu.c, introduce mm_set_has_pinned_flag, fix
> >  comment here and there]
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> 
> Thanks for your patch, which is now in linux-next.
> 
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 9933bc5c2eff2..bb130723a6717 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -1270,6 +1270,17 @@ int fixup_user_fault(struct mm_struct *mm,
> >  }
> >  EXPORT_SYMBOL_GPL(fixup_user_fault);
> >
> > +/*
> > + * Set the MMF_HAS_PINNED if not set yet; after set it'll be there for the mm's
> > + * lifecycle.  Avoid setting the bit unless necessary, or it might cause write
> > + * cache bouncing on large SMP machines for concurrent pinned gups.
> > + */
> > +static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
> > +{
> > +       if (!test_bit(MMF_HAS_PINNED, mm_flags))
> > +               set_bit(MMF_HAS_PINNED, mm_flags);
> > +}
> > +
> >  /*
> >   * Please note that this function, unlike __get_user_pages will not
> >   * return 0 for nr_pages > 0 without FOLL_NOWAIT
> > @@ -1292,8 +1303,8 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
> >                 BUG_ON(*locked != 1);
> >         }
> >
> > -       if ((flags & FOLL_PIN) && !atomic_read(&mm->has_pinned))
> > -               atomic_set(&mm->has_pinned, 1);
> > +       if (flags & FOLL_PIN)
> > +               mm_set_has_pinned_flag(&mm->flags);
> >
> >         /*
> >          * FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
> > @@ -2617,8 +2628,8 @@ static int internal_get_user_pages_fast(unsigned long start,
> >                                        FOLL_FAST_ONLY)))
> >                 return -EINVAL;
> >
> > -       if ((gup_flags & FOLL_PIN) && !atomic_read(&current->mm->has_pinned))
> > -               atomic_set(&current->mm->has_pinned, 1);
> > +       if (gup_flags & FOLL_PIN)
> > +               mm_set_has_pinned_flag(&current->mm->flags);
> 
> noreply@ellerman.id.au reports:
> 
>     FAILED linux-next/m5272c3_defconfig/m68k-gcc8 Wed May 12, 19:30
>     http://kisskb.ellerman.id.au/kisskb/buildresult/14543658/
>     Commit:   Add linux-next specific files for 20210512
>           ec85c95b0c90a17413901b018e8ade7b9eae7cad
>     Compiler: m68k-linux-gcc (GCC) 8.1.0 / GNU ld (GNU Binutils) 2.30
> 
>     mm/gup.c:2698:3: error: implicit declaration of function
> 'mm_set_has_pinned_flag'; did you mean 'set_tsk_thread_flag'?
> [-Werror=implicit-function-declaration]
> 
> It's definition is inside the #ifdef CONFIG_MMU section, but the last
> user isn't.

Indeed that's wrong; I replied to the mm-commits email yesterday with a fixup,
but didn't follow up here:

https://lore.kernel.org/mm-commits/20210511220029.m6tGcxUIw%25akpm@linux-foundation.org/

I'll remember to reply to the thread next time. Sorry for that!

-- 
Peter Xu



^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2021-05-12 12:34 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-07 15:05 [PATCH v2 0/3] mm/gup: Fix pin page write cache bouncing on has_pinned Peter Xu
2021-05-07 15:05 ` [PATCH v2 1/3] mm/gup_benchmark: Support threading Peter Xu
2021-05-07 15:05 ` [PATCH v2 2/3] mm: gup: allow FOLL_PIN to scale in SMP Peter Xu
2021-05-07 15:05 ` [PATCH v2 3/3] mm: gup: pack has_pinned in MMF_HAS_PINNED Peter Xu
2021-05-08  1:12   ` John Hubbard
2021-05-12  9:49   ` Geert Uytterhoeven
2021-05-12 12:34     ` Peter Xu
2021-05-12  9:49   ` Naresh Kamboju
