linux-mm.kvack.org archive mirror
From: Daniel Jordan <daniel.m.jordan@oracle.com>
To: Zi Yan <zi.yan@cs.rutgers.edu>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>,
	linux-mm@kvack.org, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, aarcange@redhat.com,
	aaron.lu@intel.com, akpm@linux-foundation.org,
	alex.williamson@redhat.com, bsd@redhat.com,
	darrick.wong@oracle.com, dave.hansen@linux.intel.com,
	jgg@mellanox.com, jwadams@google.com, jiangshanlai@gmail.com,
	mhocko@kernel.org, mike.kravetz@oracle.com,
	Pavel.Tatashin@microsoft.com, prasad.singamsetty@oracle.com,
	rdunlap@infradead.org, steven.sistare@oracle.com,
	tim.c.chen@intel.com, tj@kernel.org, vbabka@suse.cz
Subject: Re: [RFC PATCH v4 00/13] ktask: multithread CPU-intensive kernel work
Date: Tue, 6 Nov 2018 11:00:29 -0800
Message-ID: <20181106190029.epktpxhimrca4f4a@ca-dmjordan1.us.oracle.com>
In-Reply-To: <7E53DD63-4955-480D-8C0D-EB07E4FF011B@cs.rutgers.edu>

On Mon, Nov 05, 2018 at 09:48:56PM -0500, Zi Yan wrote:
> On 5 Nov 2018, at 21:20, Daniel Jordan wrote:
> 
> > Hi Zi,
> >
> > On Mon, Nov 05, 2018 at 01:49:14PM -0500, Zi Yan wrote:
> >> On 5 Nov 2018, at 11:55, Daniel Jordan wrote:
> >>
> >> Do you think it makes sense to use ktask for huge page migration (the data
> >> copy part)?
> >
> > It certainly could.
> >
> >> I did some experiments back in 2016[1], which showed that migrating one 2MB page
> >> with 8 threads could achieve 2.8x the throughput of the existing single-threaded
> >> method.  The problem with my parallel page migration patchset at that time was
> >> that it had no CPU-utilization awareness, a problem your patches now solve.
> >
> > Did you run with fewer than 8 threads?  I'd want a bigger speedup than 2.8x for
> > 8, and a smaller thread count might improve thread utilization.
> 
> Yes. When migrating one 2MB THP across sockets with the migrate_pages() system call
> on a two-socket server with two E5-2650 v3 CPUs (10 cores per socket), here are the
> page migration throughput numbers:
> 
>              throughput       factor
> 1 thread      2.15 GB/s         1x
> 2 threads     3.05 GB/s         1.42x
> 4 threads     4.50 GB/s         2.09x
> 8 threads     5.98 GB/s         2.78x
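
For anyone who wants to reproduce a single-threaded number like these, here
is a rough userspace sketch (not from Zi's experiments) that times migrating
one 2MB THP between nodes.  It uses move_pages(2) so that only the test page
moves; the node numbers and the missing error handling are illustrative.
Build with gcc -O2 thp_migrate.c -lnuma:

  #define _GNU_SOURCE
  #include <numaif.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <time.h>

  #define THP_SIZE (2UL << 20)

  int main(void)
  {
          /* Over-allocate so the buffer can be aligned to a 2MB boundary. */
          char *raw = mmap(NULL, 2 * THP_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          char *buf = (char *)(((unsigned long)raw + THP_SIZE - 1) &
                               ~(THP_SIZE - 1));
          unsigned long node0_mask = 1UL << 0;    /* allocate on node 0 */
          int to_node = 1, status = -1;           /* migrate to node 1  */
          void *page = buf;
          struct timespec t0, t1;
          double secs;

          /* Prefer node 0, ask for a THP, and fault the memory in. */
          mbind(buf, THP_SIZE, MPOL_PREFERRED, &node0_mask, 2, 0);
          madvise(buf, THP_SIZE, MADV_HUGEPAGE);
          memset(buf, 0x5a, THP_SIZE);

          clock_gettime(CLOCK_MONOTONIC, &t0);
          /* One head address is enough; the whole THP migrates. */
          move_pages(0, 1, &page, &to_node, &status, MPOL_MF_MOVE);
          clock_gettime(CLOCK_MONOTONIC, &t1);

          secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
          printf("status=%d  %.2f GB/s\n", status,
                 THP_SIZE / secs / (1UL << 30));
          return 0;
  }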

Thanks.  Looks like in your patches you start a worker for every piece of the
huge page copy and have the main thread wait.  I'm curious what the workqueue
overhead is like on your machine.  On a newer Xeon it's ~50usec from queueing a
work to starting to execute it and another ~20usec to flush a work
(barrier_func), which could happen after the work is already done.  That's a
pretty significant piece of the copy time for part of a THP.

            bash 60728 [087] 155865.157116:                   probe:ktask_run: (ffffffffb7ee7a80)
            bash 60728 [087] 155865.157119:    workqueue:workqueue_queue_work: work struct=0xffff95fb73276000
            bash 60728 [087] 155865.157119: workqueue:workqueue_activate_work: work struct 0xffff95fb73276000
 kworker/u194:3- 86730 [095] 155865.157168: workqueue:workqueue_execute_start: work struct 0xffff95fb73276000: function ktask_thread
 kworker/u194:3- 86730 [095] 155865.157170:   workqueue:workqueue_execute_end: work struct 0xffff95fb73276000
 kworker/u194:3- 86730 [095] 155865.157171: workqueue:workqueue_execute_start: work struct 0xffffa676995bfb90: function wq_barrier_func
 kworker/u194:3- 86730 [095] 155865.157190:   workqueue:workqueue_execute_end: work struct 0xffffa676995bfb90
            bash 60728 [087] 155865.157207:       probe:ktask_run_ret__return: (ffffffffb7ee7a80 <- ffffffffb7ee7b7b)
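
For reference, the round trip being timed above is just a queue/flush pair.
A minimal sketch of that pattern (not the ktask code itself), showing where
the two latencies land:

  #include <linux/workqueue.h>

  static void thp_copy_piece(struct work_struct *work)
  {
          /* copy one piece of the huge page here */
  }

  static DECLARE_WORK(copy_work, thp_copy_piece);

  static void copy_one_piece(void)
  {
          /* ~50usec can elapse before a worker picks this up... */
          queue_work(system_unbound_wq, &copy_work);

          /*
           * ...and ~20usec more to queue and wait on the barrier work
           * (wq_barrier_func), even if copy_work has already finished.
           */
          flush_work(&copy_work);
  }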

> >
> > It would be nice to multithread at a higher granularity than 2M, too: a range
> > of THPs might also perform better than a single page.
> 
> Sure. But the kernel currently does not copy multiple pages together even when a range
> of THPs is migrated: the page copy is interleaved with the page table operations
> for every single page.
> 
> I also studied this and modified the kernel to improve it, an approach I called
> concurrent page migration in https://lwn.net/Articles/714991/. It further
> improves page migration throughput.

Ok, over 4x with 8 threads for 16 THPs.  Is 16 a typical number for migration,
or does it get larger?  What workloads do you have in mind with this change?
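
For the copy itself, here is a rough, hypothetical sketch of how the
data-copy part of THP migration could sit on top of ktask, assuming the
ktask_run()/DEFINE_KTASK_CTL() interface described in patches 01 and 02 of
this series; the helper names, the minimum chunk size, and the exact ktask
signatures are illustrative, not from any posted patch:

  #include <linux/highmem.h>
  #include <linux/huge_mm.h>
  #include <linux/ktask.h>

  struct thp_copy_args {
          struct page *dst;
          struct page *src;
  };

  /* Runs in each helper thread; copies base pages [start, end). */
  static int thp_copy_chunk(void *start, void *end, void *arg)
  {
          struct thp_copy_args *args = arg;
          unsigned long i = (unsigned long)start;
          unsigned long last = (unsigned long)end;

          for (; i < last; i++)
                  copy_highpage(args->dst + i, args->src + i);

          return KTASK_RETURN_SUCCESS;
  }

  static void thp_copy(struct page *dst, struct page *src)
  {
          struct thp_copy_args args = { .dst = dst, .src = src };
          /* Illustrative minimum: 64 base pages (256KB) per thread. */
          struct ktask_ctl ctl = DEFINE_KTASK_CTL(thp_copy_chunk, &args, 64);

          ktask_run((void *)0, HPAGE_PMD_NR, &ctl);
  }

Batching a range of THPs, as in the concurrent page migration work above,
would then let a single ktask_run() cover many pages' worth of copying and
amortize the per-work overhead measured earlier in this thread.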

Thread overview: 54+ messages
2018-11-05 16:55 [RFC PATCH v4 00/13] ktask: multithread CPU-intensive kernel work Daniel Jordan
2018-11-05 16:55 ` [RFC PATCH v4 01/13] ktask: add documentation Daniel Jordan
2018-11-05 21:19   ` Randy Dunlap
2018-11-06  2:27     ` Daniel Jordan
2018-11-06  8:49   ` Peter Zijlstra
2018-11-06 20:34     ` Daniel Jordan
2018-11-06 20:51       ` Jason Gunthorpe
2018-11-07 10:27         ` Peter Zijlstra
2018-11-07 20:21           ` Daniel Jordan
2018-11-07 10:35       ` Peter Zijlstra
2018-11-07 21:20         ` Daniel Jordan
2018-11-08 17:26   ` Jonathan Corbet
2018-11-08 19:15     ` Daniel Jordan
2018-11-08 19:24       ` Jonathan Corbet
2018-11-27 19:50   ` Pavel Machek
2018-11-28 16:56     ` Daniel Jordan
2018-11-05 16:55 ` [RFC PATCH v4 02/13] ktask: multithread CPU-intensive kernel work Daniel Jordan
2018-11-05 20:51   ` Randy Dunlap
2018-11-06  2:24     ` Daniel Jordan
2018-11-05 16:55 ` [RFC PATCH v4 03/13] ktask: add undo support Daniel Jordan
2018-11-05 16:55 ` [RFC PATCH v4 04/13] ktask: run helper threads at MAX_NICE Daniel Jordan
2018-11-05 16:55 ` [RFC PATCH v4 05/13] workqueue, ktask: renice helper threads to prevent starvation Daniel Jordan
2018-11-13 16:34   ` Tejun Heo
2018-11-19 16:45     ` Daniel Jordan
2018-11-20 16:33       ` Tejun Heo
2018-11-20 17:03         ` Daniel Jordan
2018-11-05 16:55 ` [RFC PATCH v4 06/13] vfio: parallelize vfio_pin_map_dma Daniel Jordan
2018-11-05 21:51   ` Alex Williamson
2018-11-06  2:42     ` Daniel Jordan
2018-11-05 16:55 ` [RFC PATCH v4 07/13] mm: change locked_vm's type from unsigned long to atomic_long_t Daniel Jordan
2018-11-05 16:55 ` [RFC PATCH v4 08/13] vfio: remove unnecessary mmap_sem writer acquisition around locked_vm Daniel Jordan
2018-11-05 16:55 ` [RFC PATCH v4 09/13] vfio: relieve mmap_sem reader cacheline bouncing by holding it longer Daniel Jordan
2018-11-05 16:55 ` [RFC PATCH v4 10/13] mm: enlarge type of offset argument in mem_map_offset and mem_map_next Daniel Jordan
2018-11-05 16:55 ` [RFC PATCH v4 11/13] mm: parallelize deferred struct page initialization within each node Daniel Jordan
2018-11-10  3:48   ` Elliott, Robert (Persistent Memory)
2018-11-12 16:54     ` Daniel Jordan
2018-11-12 22:15       ` Elliott, Robert (Persistent Memory)
2018-11-19 16:01         ` Daniel Jordan
2018-11-27  0:12           ` Elliott, Robert (Persistent Memory)
2018-11-27 20:23             ` Daniel Jordan
2018-11-19 16:29       ` Daniel Jordan
2018-11-05 16:55 ` [RFC PATCH v4 12/13] mm: parallelize clear_gigantic_page Daniel Jordan
2018-11-05 16:55 ` [RFC PATCH v4 13/13] hugetlbfs: parallelize hugetlbfs_fallocate with ktask Daniel Jordan
2018-11-05 17:29 ` [RFC PATCH v4 00/13] ktask: multithread CPU-intensive kernel work Michal Hocko
2018-11-06  1:29   ` Daniel Jordan
2018-11-06  9:21     ` Michal Hocko
2018-11-07 20:17       ` Daniel Jordan
2018-11-05 18:49 ` Zi Yan
2018-11-06  2:20   ` Daniel Jordan
2018-11-06  2:48     ` Zi Yan
2018-11-06 19:00       ` Daniel Jordan [this message]
2018-11-30 19:18 ` Tejun Heo
2018-12-01  0:13   ` Daniel Jordan
2018-12-03 16:16     ` Tejun Heo
