From: Hanjun Guo <guohanjun@huawei.com>
To: Will Deacon <will.deacon@arm.com>,
	Ard Biesheuvel <ard.biesheuvel@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>,
	Michal Hocko <mhocko@suse.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Wei Yang <richard.weiyang@gmail.com>,
	Linux-MM <linux-mm@kvack.org>, Jia He <hejianet@gmail.com>,
	Eugeniu Rosca <erosca@de.adit-jv.com>,
	Petr Tesarik <ptesarik@suse.com>,
	Nikolay Borisov <nborisov@suse.com>,
	Russell King <linux@armlinux.org.uk>,
	Daniel Jordan <daniel.m.jordan@oracle.com>,
	"AKASHI Takahiro" <takahiro.akashi@linaro.org>,
	Gioh Kim <gi-oh.kim@profitbricks.com>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	Laura Abbott <labbott@redhat.com>,
	Daniel Vacek <neelx@redhat.com>, Mel Gorman <mgorman@suse.de>,
	"Vladimir Murzin" <vladimir.murzin@arm.com>,
	Kees Cook <keescook@chromium.org>,
	"Philip Derrin" <philip@cog.systems>,
	YASUAKI ISHIMATSU <yasu.isimatu@gmail.com>,
	"Jia He" <jia.he@hxt-semitech.com>,
	Kemi Wang <kemi.wang@intel.com>,
	"Vlastimil Babka" <vbabka@suse.cz>,
	linux-arm-kernel <linux-arm-kernel@lists.infradead.org>,
	Steve Capper <steve.capper@arm.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	James Morse <james.morse@arm.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH v11 0/3] remain and optimize memblock_next_valid_pfn on arm and arm64
Date: Sat, 8 Jun 2019 12:22:13 +0800	[thread overview]
Message-ID: <84b8e874-2a52-274c-4806-968470e66a08@huawei.com> (raw)
In-Reply-To: <20180907144447.GD12788@arm.com>

Hi Ard, Will,

This week we were debugging an issue with excessive time spent in mem_init(),
which led us to a solution similar to Jia He's, so I would like to bring this
thread back. Please see my detailed test results below.

On 2018/9/7 22:44, Will Deacon wrote:
> On Thu, Sep 06, 2018 at 01:24:22PM +0200, Ard Biesheuvel wrote:
>> On 22 August 2018 at 05:07, Jia He <hejianet@gmail.com> wrote:
>>> Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
>>> where possible") optimized the loop in memmap_init_zone(). But it causes
>>> possible panic bug. So Daniel Vacek reverted it later.
>>>
>>> But as suggested by Daniel Vacek, it is fine to using memblock to skip
>>> gaps and finding next valid frame with CONFIG_HAVE_ARCH_PFN_VALID.
>>>
>>> More from what Daniel said:
>>> "On arm and arm64, memblock is used by default. But generic version of
>>> pfn_valid() is based on mem sections and memblock_next_valid_pfn() does
>>> not always return the next valid one but skips more resulting in some
>>> valid frames to be skipped (as if they were invalid). And that's why
>>> kernel was eventually crashing on some !arm machines."
>>>
>>> About the performance consideration:
>>> As said by James in b92df1de5,
>>> "I have tested this patch on a virtual model of a Samurai CPU with a
>>> sparse memory map.  The kernel boot time drops from 109 to 62 seconds."
>>> Thus it would be better if we remain memblock_next_valid_pfn on arm/arm64.
>>>
>>> Besides we can remain memblock_next_valid_pfn, there is still some room
>>> for improvement. After this set, I can see the time overhead of memmap_init
>>> is reduced from 27956us to 13537us in my armv8a server(QDF2400 with 96G
>>> memory, pagesize 64k). I believe arm server will benefit more if memory is
>>> larger than TBs
>>>
>>
>> OK so we can summarize the benefits of this series as follows:
>> - boot time on a virtual model of a Samurai CPU drops from 109 to 62 seconds
>> - boot time on a QDF2400 arm64 server with 96 GB of RAM drops by ~15
>> *milliseconds*
>>
>> Google was not very helpful in figuring out what a Samurai CPU is and
>> why we should care about the boot time of Linux running on a virtual
>> model of it, and the 15 ms speedup is not that compelling either.

Testing this patch set on top of a Kunpeng 920 based ARM64 server with
384G of memory in total, we measured the following times:

             without this patch set      with this patch set
mem_init()        13310ms                      1415ms

So we got roughly a 9x speedup on this machine, which is very impressive.

The time spent depends on the memory DIMM size and on which slots the DIMMs
are populated in. In the case above we are using 16G DIMMs. We also tested
1T of memory with 64G DIMMs on another ARM64 machine, where the time dropped
from 20s to 2s (I think the difference comes from the firmware implementation).

>>
>> Apologies to Jia that it took 11 revisions to reach this conclusion,
>> but in /my/ opinion, tweaking the fragile memblock/pfn handling code
>> for this reason is totally unjustified, and we're better off
>> disregarding these patches.

Indeed this patch set has a bug. For example, if we have 3 regions
[a, b], [c, d], [e, f] and the pfn's address is bigger than the end address
of the last region, we will increase early_region_idx up to the region count,
which is out of bounds of the regions array. This is fixed by the patch below:

 mm/memblock.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 8279295..8283bf0 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1252,13 +1252,17 @@ unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn)
 		if (pfn >= start_pfn && pfn < end_pfn)
 			return pfn;

-		early_region_idx++;
+		/* try slow path */
+		if (++early_region_idx == type->cnt)
+			goto slow_path;
+
 		next_start_pfn = PFN_DOWN(regions[early_region_idx].base);

 		if (pfn >= end_pfn && pfn <= next_start_pfn)
 			return next_start_pfn;
 	}

+slow_path:
 	/* slow path, do the binary searching */
 	do {
 		mid = (right + left) / 2;
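
For reference, a minimal standalone sketch of the control flow after the fix
(using made-up region data and simplified types, not the real memblock code):
the fast path resumes from the cached early_region_idx and now falls through
to the slow path once the index reaches the region count, instead of indexing
past the end of the regions array.

#include <stdio.h>

/* Simplified stand-in for memblock regions; the values are made up. */
struct region { unsigned long start_pfn, end_pfn; };

static struct region regions[] = {
        { 0x000, 0x100 },       /* [a, b] */
        { 0x200, 0x300 },       /* [c, d] */
        { 0x400, 0x500 },       /* [e, f] */
};
static const int region_cnt = 3;
static int early_region_idx;    /* cached index, as in the patch set */

/* Return the next valid pfn >= pfn, or ~0UL if there is none. */
static unsigned long next_valid_pfn(unsigned long pfn)
{
        /* Fast path: resume from the cached region index. */
        while (early_region_idx < region_cnt) {
                unsigned long start = regions[early_region_idx].start_pfn;
                unsigned long end = regions[early_region_idx].end_pfn;

                if (pfn >= start && pfn < end)
                        return pfn;

                /* The fix: stop before reading regions[region_cnt]. */
                if (++early_region_idx == region_cnt)
                        goto slow_path;

                /* pfn sits in the gap before the next region. */
                if (pfn >= end && pfn <= regions[early_region_idx].start_pfn)
                        return regions[early_region_idx].start_pfn;
        }

slow_path:
        /* Slow path placeholder: the real code does a binary search here. */
        for (int i = 0; i < region_cnt; i++) {
                if (pfn < regions[i].start_pfn)
                        return regions[i].start_pfn;
                if (pfn < regions[i].end_pfn)
                        return pfn;
        }
        return ~0UL;
}

int main(void)
{
        /* A pfn beyond the last region now falls through to the slow path. */
        printf("0x%lx\n", next_valid_pfn(0x600));       /* no valid pfn: ~0UL */
        return 0;
}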

Given the really impressive speedup on our ARM64 server systems, could you
reconsider this patch set for merging? If you want more data, I'm happy to
clarify and run more tests.

Thanks
Hanjun


Thread overview: 30+ messages
2018-08-22  3:07 [PATCH v11 0/3] remain and optimize memblock_next_valid_pfn on arm and arm64 Jia He
2018-08-22  3:07 ` [PATCH v11 1/3] arm: arm64: introduce CONFIG_HAVE_MEMBLOCK_PFN_VALID Jia He
2018-08-22  3:07 ` [PATCH v11 2/3] mm: page_alloc: remain memblock_next_valid_pfn() on arm/arm64 Jia He
2018-08-22  3:07 ` [PATCH v11 3/3] mm: page_alloc: reduce unnecessary binary search in memblock_next_valid_pfn Jia He
2018-09-05 21:57 ` [PATCH v11 0/3] remain and optimize memblock_next_valid_pfn on arm and arm64 Andrew Morton
2018-09-06 10:47   ` Will Deacon
2018-09-06 11:24 ` Ard Biesheuvel
2018-09-07 14:44   ` Will Deacon
2018-09-14 18:50     ` Eugeniu Rosca
2019-06-08  4:22     ` Hanjun Guo [this message]
2019-06-10 13:16       ` Ard Biesheuvel
2019-06-11 15:18         ` Hanjun Guo
2019-06-12  1:05           ` Jia He
2019-06-12 12:48             ` Hanjun Guo
