mm-commits.vger.kernel.org archive mirror
* + mm-page_alloc-inline-__rmqueue_pcplist.patch added to -mm tree
@ 2021-03-28 22:30 akpm
From: akpm @ 2021-03-28 22:30 UTC (permalink / raw)
  To: alexander.duyck, alobakin, brouer, chuck.lever, davem, hch,
	ilias.apalodimas, mgorman, mm-commits, vbabka, willy


The patch titled
     Subject: mm/page_alloc: inline __rmqueue_pcplist
has been added to the -mm tree.  Its filename is
     mm-page_alloc-inline-__rmqueue_pcplist.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-page_alloc-inline-__rmqueue_pcplist.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-page_alloc-inline-__rmqueue_pcplist.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included in linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Jesper Dangaard Brouer <brouer@redhat.com>
Subject: mm/page_alloc: inline __rmqueue_pcplist

When __alloc_pages_bulk() was introduced, __rmqueue_pcplist gained a
second caller, and with two call sites the compiler chose not to inline
this function.  Mark it inline explicitly so that its body is folded into
both callers.
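
As an aside, here is a minimal sketch of the pattern the patch applies
(hypothetical helper names, not the kernel code): a small static function
with two callers may be kept out of line by the compiler, and an explicit
inline hint asks it to fold the body into each call site, which is what
the diff below does for __rmqueue_pcplist.

/*
 * Hypothetical example: without the "inline" hint the compiler may emit
 * one out-of-line copy of first_item() and call it from both callers;
 * with the hint it is encouraged (not forced) to inline the body into
 * each call site.
 */
static inline
int first_item(const int *items, int n)
{
	return n > 0 ? items[0] : -1;
}

static int caller_a(const int *items, int n)
{
	return first_item(items, n);
}

static int caller_b(const int *items, int n)
{
	return n + first_item(items, n);
}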

 ./scripts/bloat-o-meter vmlinux-before vmlinux-inline__rmqueue_pcplist
add/remove: 0/1 grow/shrink: 2/0 up/down: 164/-125 (39)
Function                                     old     new   delta
rmqueue                                     2197    2296     +99
__alloc_pages_bulk                          1921    1986     +65
__rmqueue_pcplist                            125       -    -125
Total: Before=19374127, After=19374166, chg +0.00%
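
The two remaining callers grow because the helper's body is now duplicated
into each of them (+99 and +65 bytes), while the 125-byte out-of-line copy
of __rmqueue_pcplist disappears, for a net growth of 99 + 65 - 125 = 39
bytes, i.e. +0.00% overall.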

modprobe page_bench04_bulk loops=$((10**7))

Type:time_bulk_page_alloc_free_array
 -  Per elem: 106 cycles(tsc) 29.595 ns (step:64)
 - (measurement period time:0.295955434 sec time_interval:295955434)
 - (invoke count:10000000 tsc_interval:1065447105)

Before:
 - Per elem: 110 cycles(tsc) 30.633 ns (step:64)
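
The per-element figures are simply the measurement interval divided by the
invocation count; for the patched kernel above:

    29.595 ns/elem   =  295955434 ns         / 10000000 invocations
    ~106 cycles/elem =  1065447105 tsc ticks / 10000000 invocations

so the change saves roughly 4 cycles (about 1 ns) per element compared
with the 110-cycle baseline.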

Link: https://lkml.kernel.org/r/20210325114228.27719-6-mgorman@techsingularity.net
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Reviewed-by: Alexander Lobakin <alobakin@pm.me>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: David Miller <davem@davemloft.net>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/page_alloc.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/mm/page_alloc.c~mm-page_alloc-inline-__rmqueue_pcplist
+++ a/mm/page_alloc.c
@@ -3450,7 +3450,8 @@ static inline void zone_statistics(struc
 }
 
 /* Remove page from the per-cpu list, caller must protect the list */
-static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
+static inline
+struct page *__rmqueue_pcplist(struct zone *zone, int migratetype,
 			unsigned int alloc_flags,
 			struct per_cpu_pages *pcp,
 			struct list_head *list)
_

Patches currently in -mm which might be from brouer@redhat.com are

mm-page_alloc-optimize-code-layout-for-__alloc_pages_bulk.patch
mm-page_alloc-inline-__rmqueue_pcplist.patch
net-page_pool-refactor-dma_map-into-own-function-page_pool_dma_map.patch
net-page_pool-use-alloc_pages_bulk-in-refill-code-path.patch

