From: "Huang\, Ying" <ying.huang@intel.com>
To: Prathu Baronia <prathu.baronia@oneplus.com>
Cc: <alexander.duyck@gmail.com>,  <chintan.pandya@oneplus.com>,
	 <mhocko@suse.com>,  <akpm@linux-foundation.com>,
	 <linux-mm@kvack.org>,  <gregkh@linuxfoundation.com>,
	 <gthelen@google.com>,  <jack@suse.cz>,  <ken.lin@oneplus.com>,
	 <gasine.xu@oneplus.com>
Subject: Re: [PATCH v2] mm: Optimized hugepage zeroing & copying from user
Date: Wed, 15 Apr 2020 11:27:39 +0800
Message-ID: <87r1wpzavo.fsf@yhuang-dev.intel.com>
In-Reply-To: <20200414153829.GA15230@oneplus.com> (Prathu Baronia's message of "Tue, 14 Apr 2020 21:08:32 +0530")

Prathu Baronia <prathu.baronia@oneplus.com> writes:

> In !HIGHMEM cases, especially on 64-bit architectures, we don't need temporary
> mappings of pages, so k(map|unmap)_atomic() amounts to nothing more than a
> series of barrier() calls. For example, for a 2MB hugepage in clear_huge_page()
> these are called 512 times, i.e. to map and unmap each subpage, which adds up
> to 2048 barrier calls in total. This calls for optimization: simply getting
> the vaddr from the page does the job for us. The same applies to
> copy_user_huge_page().
>
> With kmap_atomic() out of the picture we can use memset and memcpy for sizes
> larger than 4K. In a simple experiment, replacing the left-right traversal of
> the subpages with a direct memset on the vaddr obtained from the page gave us
> a 64% improvement in time over the current approach.
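>
> As an illustration (a sketch only, assuming !HIGHMEM so the page is always
> mapped; not the exact patch code):
>
>   unsigned int i;
>
>   /* Per-subpage path today: 512 map/unmap round trips for a 2MB page. */
>   for (i = 0; i < pages_per_huge_page; i++) {
>           void *addr = kmap_atomic(page + i);
>
>           clear_page(addr);
>           kunmap_atomic(addr);
>   }
>
>   /* With !HIGHMEM the whole hugepage is already mapped: */
>   memset(page_address(page), 0, pages_per_huge_page * PAGE_SIZE);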
>
> With this (v2) patch we observe a 65.85% improvement (under controlled
> conditions) over the current approach.

Can you describe your test?

> Currently process_huge_page() iterates over the subpages in a left-right
> manner, processing the subpage that was faulted on last so as to keep the
> cache hot around the faulting address. This causes a latency issue because,
> as we observed on ARM64, reverse access is much slower than forward access,
> and much slower still than oneshot access, due to the prefetcher's behaviour.
> The following simple userspace experiment, which allocates 100MB (total_size)
> of pages and writes to them using memset (oneshot), a forward-order loop, and
> a reverse-order loop, gave us good insight:
>
> --------------------------------------------------------------------------------
> Test code snippet:
> --------------------------------------------------------------------------------
>   /* Setup as described above: 100MB split into 4K pages. */
>   #define SZ_4K 4096UL
>   size_t total_size = 100 * 1024 * 1024;
>   size_t total_pages = total_size / SZ_4K;
>   char *p = malloc (total_size);
>   char *q = malloc (total_size);
>   char *r = malloc (total_size);
>   size_t i, j;
>
>   /* One shot memset */
>   memset (r, 0xd, total_size);
>
>   /* traverse in forward order */
>   for (j = 0; j < total_pages; j++)
>     {
>       memset (q + (j * SZ_4K), 0xc, SZ_4K);
>     }
>
>   /* traverse in reverse order */
>   for (i = 0; i < total_pages; i++)
>     {
>       memset (p + total_size - (i + 1) * SZ_4K, 0xb, SZ_4K);
>     }

You have tested the chunk sizes 4KB and 2MB; can you test some values in
between?  For example 32KB or 64KB?  Maybe there's a sweet spot with
some smaller granularity and good performance.
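
That is, something along these lines could be timed for each candidate size
(a hypothetical sketch; buf and total_size as in your experiment, with CHUNK
swept from 4KB up to 2MB):

  /* Clear total_size bytes in CHUNK-sized pieces, forward order. */
  for (off = 0; off < total_size; off += CHUNK)
          memset (buf + off, 0xa, CHUNK);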

> ----------------------------------------------------------------------
> Results:
> ----------------------------------------------------------------------
> Results for ARM64 target (SM8150, CPU0 & CPU6 online, running at max
> frequency)
> All numbers are the mean of 100 iterations. Variation is negligible.
> ----------------------------------------------------------------------
> - Oneshot : 3389.26 us
> - Forward : 8876.16 us
> - Reverse : 18157.6 us
> ----------------------------------------------------------------------
>
> ----------------------------------------------------------------------
> Results for x86-64 (Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz, only CPU0 at max
> frequency, DDR also running at max frequency)
> All numbers are the mean of 100 iterations. Variation is negligible.
> ----------------------------------------------------------------------
> - Oneshot : 3203.49 us
> - Forward : 5766.46 us
> - Reverse : 5187.86 us
> ----------------------------------------------------------------------
>
> Hence, refactor process_huge_page() to process the hugepage in a oneshot
> manner, using oneshot versions of the clear_huge_page() and
> copy_user_huge_page() routines in the !HIGHMEM case.
>
> These oneshot routines zero with memset and copy with memcpy: after extensive
> testing on ARM64 and some local testing on x86, we observed that the memset
> and memcpy routines are highly optimized, and with the above data points in
> hand it made sense to use them directly instead of looping over all subpages.
> The oneshot routines zero and copy with a small offset (kept as 32KB by
> default for now) so as to keep the cache hot around the faulting address.
> Since this offset depends on the cache size, it can be made a tunable
> configuration option.
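>
> A minimal sketch of the oneshot zeroing idea, with the region around the
> faulting address handled last (illustrative only; the names and the
> CHUNK_SIZE default are assumptions, not the exact patch code):
>
>   #define CHUNK_SIZE (32 * 1024)  /* tunable, cache-size dependent */
>
>   static void clear_huge_page_oneshot(struct page *page,
>                                       unsigned long addr_hint,
>                                       unsigned int pages_per_huge_page)
>   {
>           unsigned long size = pages_per_huge_page * PAGE_SIZE;
>           void *vaddr = page_address(page);  /* valid in !HIGHMEM */
>           unsigned long off = (addr_hint & (size - 1)) & ~(CHUNK_SIZE - 1);
>
>           /* Zero everything except the chunk around the faulting address,
>            * then that chunk last so it stays cache-hot for userspace. */
>           memset(vaddr, 0, off);
>           memset(vaddr + off + CHUNK_SIZE, 0, size - off - CHUNK_SIZE);
>           memset(vaddr + off, 0, CHUNK_SIZE);
>   }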
>
> The below profiles are for ARM64 (SM8150, CPU0 & CPU6 online, running at max
> frequency, DDR also running at max frequency).
>
> ----------------------------------------------------------------------
> Ftrace Results(clear_huge_page_profile()):
> ----------------------------------------------------------------------
> All timing values are in microseconds (us)
> ----------------------------------------------------------------------
> Base:
>         - CPU0:
>                 - Samples: 95
>                 - Mean: 242.099 us
>                 - Std dev: 45.0096 us
>         - CPU6:
>                 - Samples: 61
>                 - Mean: 258.372 us
>                 - Std dev: 22.0754 us
> ----------------------------------------------------------------------
> v2:
>         - CPU0:
>                 - Samples: 63
>                 - Mean: 112.297 us
>                 - Std dev: 0.310989 us
>         - CPU6:
>                 - Samples: 99
>                 - Mean: 67.359 us
>                 - Std dev: 1.15997 us
> ----------------------------------------------------------------------

In addition to the cost of clearing the huge page itself, we need to consider
how this impacts the application that accesses the huge page.  For example,
the huge page may be accessed twice, once in the kernel (zeroing) and once in
user space (initializing).  So please find a way to test that too.  As
pointed out by Alexander, please also consider the cache contention among the
logical CPUs.
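
For example, something along these lines (a hypothetical userspace
micro-benchmark; the THP setup and timing scaffolding are illustrative) would
capture both the fault-time zeroing and the user access that follows:

  #define SZ_2M (2UL << 20)

  char *buf = mmap(NULL, SZ_2M, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  madvise(buf, SZ_2M, MADV_HUGEPAGE);

  clock_gettime(CLOCK_MONOTONIC, &t0);
  buf[0] = 1;                        /* fault: kernel zeroes the hugepage */
  for (off = 0; off < SZ_2M; off += 64)
          sum += buf[off];           /* user access right after zeroing */
  clock_gettime(CLOCK_MONOTONIC, &t1);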

Best Regards,
Huang, Ying

