From: Alexander Halbuer <halbuer@sra.uni-hannover.de>
To: Vlastimil Babka <vbabka@suse.cz>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Mel Gorman <mgorman@techsingularity.net>
Subject: Re: [PATCH] mm: reduce lock contention of pcp buffer refill
Date: Thu, 9 Feb 2023 11:34:39 +0100	[thread overview]
Message-ID: <610eb535-e2cc-15cf-d776-5486aba118ff@sra.uni-hannover.de> (raw)
In-Reply-To: <8dc0a48f-f29a-27de-0ced-d39a920b2afd@suse.cz>



On 2/8/23 16:11, Vlastimil Babka wrote:
> On 2/7/23 17:11, Alexander Halbuer wrote:
>> On 2/3/23 00:25, Andrew Morton wrote:
>>> On Wed,  1 Feb 2023 17:25:49 +0100 Alexander Halbuer <halbuer@sra.uni-hannover.de> wrote:
>>>
>>>> The `rmqueue_bulk` function batches the allocation of multiple elements to
>>>> refill the per-CPU buffers into a single hold of the zone lock. Each
>>>> element is allocated and checked using the `check_pcp_refill` function.
>>>> The check touches every related struct page, which is especially expensive
>>>> for higher order allocations (huge pages). This patch reduces the time the
>>>> lock is held by moving the check out of the critical section, similar to
>>>> the `rmqueue_buddy` function, which allocates a single element.
>>>> Measurements of parallel allocation-heavy workloads show a reduction of
>>>> the average huge page allocation latency by 50 percent for two cores and
>>>> nearly 90 percent for 24 cores.
>>> Sounds nice.
>>>
>>> Were you able to test how much benefit we get by simply removing the
>>> check_new_pages() call from rmqueue_bulk()?
>> I did some further investigations and measurements to quantify potential
>> performance gains. Benchmarks ran on a machine with 24 physical cores
>> fixed at 2.1 GHz. The results show significant performance gains with the
>> patch applied in parallel scenarios. Eliminating the check reduces
>> allocation latency further, especially for low core counts.
> 
> Are the benchmarks done via kernel module calling bulk allocation, or from
> userspace?

The benchmarks are done in a kernel module that uses the alloc_pages_node()
function to allocate from a specific NUMA node. (The test machine is a
dual-socket system with two Intel Xeon Gold 6252 processors of 24 physical
cores each; the benchmark threads are pinned to the first CPU to prevent
additional NUMA effects.)
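
For context, the core of each benchmark iteration is essentially a
node-targeted allocation followed by a free. This is only a minimal sketch
of that call pattern; the helper name, the nid variable, the GFP flags, and
whether the page is freed immediately are assumptions here, not the exact
module code:

#include <linux/gfp.h>
#include <linux/mm.h>

/* One benchmark iteration: allocate from a fixed NUMA node, then free.
 * order 9 corresponds to the huge page case, order 0 to normal pages. */
static void bench_alloc_once(int nid, unsigned int order)
{
	struct page *page;

	page = alloc_pages_node(nid, GFP_KERNEL, order);
	if (page)
		__free_pages(page, order);
}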

The module was originally developed as part of a discussion about
different allocator designs, to get real numbers from an actual buddy
allocator implementation. The measurements cover different allocation
scenarios (bulk, repeat, random) for different allocation sizes (orders)
and degrees of parallelism. The parallel bulk allocation benchmark
showed an outlier for huge page allocations (order 9), which were much
worse than order 8 and even order 10, despite the lack of a pcp buffer
for those sizes. This was the starting point of my further investigations.

The benchmark module internally uses multiple threads that independently
perform a specified number of allocations in a loop around the
alloc_pages_node() function mentioned above. A barrier synchronizes all
threads at the start. The values shown in the tables are the average
duration across all threads, divided by the number of allocations, to
break the result down to the latency of a single allocation.
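
To make that structure concrete, here is a rough sketch of one worker
thread; the thread count, the spin barrier, and the bench_args /
bench_alloc_once names are assumptions for illustration, not the actual
module code:

#include <linux/atomic.h>
#include <linux/ktime.h>

struct bench_args {
	int nid;		/* NUMA node to allocate from */
	unsigned int order;	/* allocation order under test */
	unsigned long count;	/* allocations per thread */
	u64 avg_ns;		/* result: average latency per allocation */
};

static atomic_t bench_started;	/* workers that reached the barrier */
static int bench_nthreads;	/* degree of parallelism */

static int bench_worker(void *data)
{
	struct bench_args *args = data;
	unsigned long i;
	u64 start, end;

	/* Simple spin barrier so all workers start allocating at once. */
	atomic_inc(&bench_started);
	while (atomic_read(&bench_started) < bench_nthreads)
		cpu_relax();

	start = ktime_get_ns();
	for (i = 0; i < args->count; i++)
		bench_alloc_once(args->nid, args->order);
	end = ktime_get_ns();

	/* Break the total duration down to a single allocation. */
	args->avg_ns = (end - start) / args->count;
	return 0;
}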

>> The following tables show the average allocation latencies for huge pages
>> and normal pages for three different configurations:
>> The unpatched kernel, the patched kernel, and an additional version
>> without the check.
>>
>> Huge pages
>> +-------+---------+-------+----------+---------+----------+
>> | Cores | Default | Patch |    Patch | NoCheck |  NoCheck |
>> |       |    (ns) |  (ns) |     Diff |    (ns) |     Diff |
>> +-------+---------+-------+----------+---------+----------+
>> |     1 |     127 |   124 |  (-2.4%) |     118 |  (-7.1%) |
>> |     2 |     140 |   140 |  (-0.0%) |     134 |  (-4.3%) |
>> |     3 |     143 |   142 |  (-0.7%) |     134 |  (-6.3%) |
>> |     4 |     178 |   159 | (-10.7%) |     156 | (-12.4%) |
>> |     6 |     269 |   239 | (-11.2%) |     237 | (-11.9%) |
>> |     8 |     363 |   321 | (-11.6%) |     319 | (-12.1%) |
>> |    10 |     454 |   409 |  (-9.9%) |     409 |  (-9.9%) |
>> |    12 |     545 |   494 |  (-9.4%) |     488 | (-10.5%) |
>> |    14 |     639 |   578 |  (-9.5%) |     574 | (-10.2%) |
>> |    16 |     735 |   660 | (-10.2%) |     653 | (-11.2%) |
>> |    20 |     915 |   826 |  (-9.7%) |     815 | (-10.9%) |
>> |    24 |    1105 |   992 | (-10.2%) |     982 | (-11.1%) |
>> +-------+---------+-------+----------+---------+----------+
>>
>> Normal pages
>> +-------+---------+-------+----------+---------+----------+
>> | Cores | Default | Patch |    Patch | NoCheck |  NoCheck |
>> |       |    (ns) |  (ns) |     Diff |    (ns) |     Diff |
>> +-------+---------+-------+----------+---------+----------+
>> |     1 |    2790 |  2767 |  (-0.8%) |     171 | (-93.9%) |
>> |     2 |    6685 |  3484 | (-47.9%) |     519 | (-92.2%) |
>> |     3 |   10501 |  3599 | (-65.7%) |     855 | (-91.9%) |
>> |     4 |   14264 |  3635 | (-74.5%) |    1139 | (-92.0%) |
>> |     6 |   21800 |  3551 | (-83.7%) |    1713 | (-92.1%) |
>> |     8 |   29563 |  3570 | (-87.9%) |    2268 | (-92.3%) |
>> |    10 |   37210 |  3845 | (-89.7%) |    2872 | (-92.3%) |
>> |    12 |   44780 |  4452 | (-90.1%) |    3417 | (-92.4%) |
>> |    14 |   52576 |  5100 | (-90.3%) |    4020 | (-92.4%) |
>> |    16 |   60118 |  5785 | (-90.4%) |    4604 | (-92.3%) |
>> |    20 |   75037 |  7270 | (-90.3%) |    6486 | (-91.4%) |
>> |    24 |   90226 |  8712 | (-90.3%) |    7061 | (-92.2%) |
>> +-------+---------+-------+----------+---------+----------+
> 
> Nice improvements. I assume the table headings are switched? Why would
> Normal be more expensive than Huge?

Yes, you are right, I messed up the table headings.

>>>
>>> Vlastimil, I find this quite confusing:
>>>
>>> #ifdef CONFIG_DEBUG_VM
>>> /*
>>>  * With DEBUG_VM enabled, order-0 pages are checked for expected state when
>>>  * being allocated from pcp lists. With debug_pagealloc also enabled, they are
>>>  * also checked when pcp lists are refilled from the free lists.
>>>  */
>>> static inline bool check_pcp_refill(struct page *page, unsigned int order)
>>> {
>>> 	if (debug_pagealloc_enabled_static())
>>> 		return check_new_pages(page, order);
>>> 	else
>>> 		return false;
>>> }
>>>
>>> static inline bool check_new_pcp(struct page *page, unsigned int order)
>>> {
>>> 	return check_new_pages(page, order);
>>> }
>>> #else
>>> /*
>>>  * With DEBUG_VM disabled, free order-0 pages are checked for expected state
>>>  * when pcp lists are being refilled from the free lists. With debug_pagealloc
>>>  * enabled, they are also checked when being allocated from the pcp lists.
>>>  */
>>> static inline bool check_pcp_refill(struct page *page, unsigned int order)
>>> {
>>> 	return check_new_pages(page, order);
>>> }
>>> static inline bool check_new_pcp(struct page *page, unsigned int order)
>>> {
>>> 	if (debug_pagealloc_enabled_static())
>>> 		return check_new_pages(page, order);
>>> 	else
>>> 		return false;
>>> }
>>> #endif /* CONFIG_DEBUG_VM */
>>>
>>> and the 4462b32c9285b5 changelog is a struggle to follow.
>>>
>>> Why are we performing *any* checks when CONFIG_DEBUG_VM=n and when
>>> debug_pagealloc_enabled is false?
>>>
>>> Anyway, these checks sounds quite costly so let's revisit their
>>> desirability?
>>>
> 

Thread overview: 9+ messages
2023-02-01 16:25 [PATCH] mm: reduce lock contention of pcp buffer refill Alexander Halbuer
2023-02-02 23:25 ` Andrew Morton
2023-02-07 16:11   ` Alexander Halbuer
2023-02-08 15:11     ` Vlastimil Babka
2023-02-09 10:34       ` Alexander Halbuer [this message]
2023-02-08 10:45   ` Vlastimil Babka
2023-02-14 17:27     ` Kees Cook
2023-02-08 15:20 ` Vlastimil Babka
2023-03-29  9:31 ` Mel Gorman
