From: Daniel Jordan <daniel.m.jordan@oracle.com>
To: linux-mm@kvack.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: aarcange@redhat.com, aaron.lu@intel.com,
akpm@linux-foundation.org, alex.williamson@redhat.com,
bsd@redhat.com, daniel.m.jordan@oracle.com,
darrick.wong@oracle.com, dave.hansen@linux.intel.com,
jgg@mellanox.com, jwadams@google.com, jiangshanlai@gmail.com,
mhocko@kernel.org, mike.kravetz@oracle.com,
Pavel.Tatashin@microsoft.com, prasad.singamsetty@oracle.com,
rdunlap@infradead.org, steven.sistare@oracle.com,
tim.c.chen@intel.com, tj@kernel.org, vbabka@suse.cz
Subject: [RFC PATCH v4 12/13] mm: parallelize clear_gigantic_page
Date: Mon, 5 Nov 2018 11:55:57 -0500
Message-ID: <20181105165558.11698-13-daniel.m.jordan@oracle.com>
In-Reply-To: <20181105165558.11698-1-daniel.m.jordan@oracle.com>
Parallelize clear_gigantic_page, which zeroes any page larger than
8M (e.g. a 1G page on x86).
Performance results (the default number of threads is 4; higher thread
counts shown for context only):
Machine: Intel(R) Xeon(R) CPU E7-8895 v3 @ 2.60GHz, 288 CPUs, 1T memory
Test: Clear a range of gigantic pages (triggered via fallocate)
    nthread   speedup   size (GiB)   min time (s)   stdev
          1                    100          41.13    0.03
          2     2.03x          100          20.26    0.14
          4     4.28x          100           9.62    0.09
          8     8.39x          100           4.90    0.05
         16    10.44x          100           3.94    0.03

          1                    200          89.68    0.35
          2     2.21x          200          40.64    0.18
          4     4.64x          200          19.33    0.32
          8     8.99x          200           9.98    0.04
         16    11.27x          200           7.96    0.04

          1                    400         188.20    1.57
          2     2.30x          400          81.84    0.09
          4     4.63x          400          40.62    0.26
          8     8.92x          400          21.09    0.50
         16    11.78x          400          15.97    0.25

          1                    800         434.91    1.81
          2     2.54x          800         170.97    1.46
          4     4.98x          800          87.38    1.91
          8    10.15x          800          42.86    2.59
         16    12.99x          800          33.48    0.83
The speedups come mostly from the additional memory bandwidth
available to more threads. The loop being stressed on this x86 chip is
clear_page_erms, which tops out at 2550 MiB/s with one thread. Per-thread
bandwidth holds at that level for 2, 4, or 8 threads, but drops to
1420 MiB/s at 16 threads.

Beyond bandwidth, performance also improves over a single thread because
of the ktask threads' NUMA awareness (ktask migrates worker threads to
the node local to the work being done). This becomes a bigger factor as
the number of pages to zero grows to span memory on multiple nodes, which
is why the speedups increase with size.
Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
---
mm/memory.c | 32 ++++++++++++++++++++++++--------
1 file changed, 24 insertions(+), 8 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 15c417e8e31d..445d06537905 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -69,6 +69,7 @@
#include <linux/userfaultfd_k.h>
#include <linux/dax.h>
#include <linux/oom.h>
+#include <linux/ktask.h>
#include <asm/io.h>
#include <asm/mmu_context.h>
@@ -4415,19 +4416,28 @@ static inline void process_huge_page(
}
}
-static void clear_gigantic_page(struct page *page,
- unsigned long addr,
- unsigned int pages_per_huge_page)
+struct cgp_args {
+ struct page *base_page;
+ unsigned long addr;
+};
+
+static int clear_gigantic_page_chunk(unsigned long start, unsigned long end,
+ struct cgp_args *args)
{
- int i;
- struct page *p = page;
+ struct page *base_page = args->base_page;
+ struct page *p = mem_map_offset(base_page, start);
+ unsigned long addr = args->addr;
+ unsigned long i;
might_sleep();
- for (i = 0; i < pages_per_huge_page;
- i++, p = mem_map_next(p, page, i)) {
+ for (i = start; i < end; ++i) {
cond_resched();
clear_user_highpage(p, addr + i * PAGE_SIZE);
+
+ p = mem_map_next(p, base_page, i + 1);
}
+
+ return KTASK_RETURN_SUCCESS;
}
static void clear_subpage(unsigned long addr, int idx, void *arg)
@@ -4444,7 +4454,13 @@ void clear_huge_page(struct page *page,
~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
- clear_gigantic_page(page, addr, pages_per_huge_page);
+ struct cgp_args args = {page, addr};
+ struct ktask_node node = {0, pages_per_huge_page,
+ page_to_nid(page)};
+ DEFINE_KTASK_CTL(ctl, clear_gigantic_page_chunk, &args,
+ KTASK_MEM_CHUNK);
+
+ ktask_run_numa(&node, 1, &ctl);
return;
}
--
2.19.1