Linux-mm Archive on
From: Daniel Jordan <>
To: Andrew Morton <>
Subject: Re: [RFC PATCH v3 0/7] ktask: multithread CPU-intensive kernel work
Date: Wed, 6 Dec 2017 09:21:49 -0500
Message-ID: <> (raw)
In-Reply-To: <>

On 12/05/2017 05:23 PM, Andrew Morton wrote:
> On Tue,  5 Dec 2017 14:52:13 -0500 Daniel Jordan <> wrote:
>> This patchset is based on 4.15-rc2 plus one mmots fix[*] and contains three
>> ktask users:
>>   - deferred struct page initialization at boot time
>>   - clearing gigantic pages
>>   - fallocate for HugeTLB pages
> Performance improvements are nice.  How much overall impact is there in
> real-world workloads?

All of the users so far are mainly for initialization/startup, so the 
impact depends on how often users are rebooting (deferred struct page 
init) and starting applications such as RDBMSes (hugetlbfs_fallocate).

ktask saves 5 seconds of boot time on the two-socket machine I tested on 
with deferred init, which is half the time it takes for the kernel to 
get to systemd, so for big machines that are frequently updated, the 
savings would add up.

>> Work in progress:
>>   - Parallelizing page freeing in the exit/munmap paths
> Also sounds interesting.

Parallelizing this efficiently depends on scaling lru_lock and 
zone->lock, which I've been working on separately.

> Have you identified any other parallelizable
> operations?  vfs object teardown at umount time may be one...

By vfs object teardown, are you referring to evict_inodes/dispose_list?

If so, I have actually tried parallelizing that, and there were good 
speedups during unmount with many cached pages.  It's just a matter of 
parallelizing well across inodes with different numbers of pages in cache.
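To illustrate the load-balancing problem, here's a minimal userspace sketch (all names hypothetical, not the actual VFS or ktask code) of splitting items with unequal weights, such as inodes with different numbers of cached pages, into contiguous chunks of roughly equal total weight:

```c
#include <stddef.h>

/* Hypothetical sketch: given per-item weights (e.g. pages cached per
 * inode), find the end of the next chunk whose total weight reaches
 * the per-thread target.  Illustration only, not kernel code. */
static size_t next_chunk_end(const size_t *weights, size_t nr_items,
			     size_t start, size_t target)
{
	size_t sum = 0, i = start;

	/* Always take at least one item so we make forward progress. */
	do {
		sum += weights[i++];
	} while (i < nr_items && sum < target);

	return i;	/* one past the last item in this chunk */
}
```

Each thread would then dispose of the inodes in its own chunk, so one inode with a huge page cache doesn't serialize the whole unmount.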

I've also gotten good results with __get_user_pages.  If we want to keep 
the return value of __get_user_pages consistent on error (and I'm 
assuming that's a given), there needs to be logic that undoes the work 
past the first non-pinned page in the range so we continue to return the 
number of pages pinned from the start.  That seems ok since it's a slow path.

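The undo logic amounts to finding the first failed page and releasing everything pinned after it, so the return value stays the length of the contiguously-pinned prefix.  A userspace sketch (hypothetical names; in the kernel the release would be put_page()):

```c
/* Sketch: after parallel workers have each pinned a sub-range,
 * pinned[i] records whether page i was pinned.  Undo everything past
 * the first failure so the count matches __get_user_pages' semantics
 * of "pages pinned from the start".  Illustration only. */
static long fixup_pinned(int *pinned, long nr_pages)
{
	long i, first_fail = nr_pages;

	for (i = 0; i < nr_pages; i++) {
		if (!pinned[i]) {
			first_fail = i;
			break;
		}
	}

	/* Release pins past the first failure (put_page() in-kernel). */
	for (i = first_fail; i < nr_pages; i++)
		pinned[i] = 0;

	return first_fail;	/* contiguous pages pinned from the start */
}
```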
The shmem page free path (shmem_undo_range), struct page initialization 
on memory hotplug, and huge page copying are others I've considered but 
haven't implemented yet.

>>   - CPU hotplug support
> Of what?  The ktask infrastructure itself?

Yes, ktask itself.  When CPUs come up or down, ktask's resource limits 
and preallocated data (the struct ktask_work's passed to the workqueue 
code) need to be adjusted for the new CPU count, at least as it's 
written now.
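The bookkeeping looks roughly like this userspace sketch (structure and names are illustrative, not the real ktask code): on a hotplug event, the preallocated per-CPU work items are reallocated and the thread limit recomputed for the new CPU count.

```c
#include <stdlib.h>

/* Hypothetical sketch of the adjustment described above. */
struct ktask_work {
	int cpu;			/* CPU this work item targets */
};

struct ktask_ctl {
	struct ktask_work *works;	/* one preallocated item per CPU */
	int nr_cpus;
	int max_threads;		/* resource limit */
};

static int ktask_cpus_changed(struct ktask_ctl *ctl, int new_nr_cpus)
{
	struct ktask_work *w;
	int i;

	/* Resize the preallocated work array for the new CPU count. */
	w = realloc(ctl->works, new_nr_cpus * sizeof(*w));
	if (!w)
		return -1;
	for (i = 0; i < new_nr_cpus; i++)
		w[i].cpu = i;

	ctl->works = w;
	ctl->nr_cpus = new_nr_cpus;
	/* Example limit policy: half the CPUs, at least one thread. */
	ctl->max_threads = new_nr_cpus > 1 ? new_nr_cpus / 2 : 1;
	return 0;
}
```

In the kernel this would hang off the CPU hotplug notification path rather than being called by hand.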

Thanks for the comments,



Thread overview: 17+ messages
2017-12-05 19:52 Daniel Jordan
2017-12-05 19:52 ` [RFC PATCH v3 1/7] ktask: add documentation Daniel Jordan
2017-12-05 20:59   ` Daniel Jordan
2017-12-06 14:35   ` Michal Hocko
2017-12-06 20:32     ` Daniel Jordan
2017-12-08 12:43       ` Michal Hocko
2017-12-08 13:46         ` Daniel Jordan
2017-12-05 19:52 ` [RFC PATCH v3 2/7] ktask: multithread CPU-intensive kernel work Daniel Jordan
2017-12-05 22:21   ` Andrew Morton
2017-12-06 14:21     ` Daniel Jordan
2017-12-05 19:52 ` [RFC PATCH v3 3/7] ktask: add /proc/sys/debug/ktask_max_threads Daniel Jordan
2017-12-05 19:52 ` [RFC PATCH v3 4/7] mm: enlarge type of offset argument in mem_map_offset and mem_map_next Daniel Jordan
2017-12-05 19:52 ` [RFC PATCH v3 5/7] mm: parallelize clear_gigantic_page Daniel Jordan
2017-12-05 19:52 ` [RFC PATCH v3 6/7] hugetlbfs: parallelize hugetlbfs_fallocate with ktask Daniel Jordan
2017-12-05 19:52 ` [RFC PATCH v3 7/7] mm: parallelize deferred struct page initialization within each node Daniel Jordan
2017-12-05 22:23 ` [RFC PATCH v3 0/7] ktask: multithread CPU-intensive kernel work Andrew Morton
2017-12-06 14:21   ` Daniel Jordan [this message]

