From: Yinghai Lu <yinghai@kernel.org>
To: Russ Anderson <rja@sgi.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Tejun Heo <tj@kernel.org>, Linux MM <linux-mm@kvack.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] memblock, numa: Binary search node id
Date: Fri, 16 Aug 2013 12:15:21 -0700	[thread overview]
Message-ID: <CAE9FiQUYccFLzfHcjx+cgLky0UH8h99msDsNdAR7WdLpzwFQ2A@mail.gmail.com> (raw)
In-Reply-To: <20130816190106.GD22182@sgi.com>

On Fri, Aug 16, 2013 at 12:01 PM, Russ Anderson <rja@sgi.com> wrote:
> On Thu, Aug 15, 2013 at 01:43:48PM -0700, Andrew Morton wrote:
>> On Wed, 14 Aug 2013 22:46:29 -0700 Yinghai Lu <yinghai@kernel.org> wrote:
>>
>> > The current early_pfn_to_nid() on arches that support memblock
>> > walks memblock.memory one entry at a time, so it takes too many
>> > tries near the end of the array.
>> >
>> > We can use the existing memblock_search() to find the node id for
>> > a given pfn, which could save some time on bigger systems whose
>> > memblock.memory array has many entries.
>>
>> Looks nice.  I wonder how much difference it makes.
>
> Here are the timing differences for several machines.
> In each case, less time was spent in __early_pfn_to_nid() with the patch applied.
>
>
>                         3.11-rc5        with patch      difference (%)
>                         --------        ----------      --------------
> UV1: 256 nodes  9TB:     411.66          402.47         -9.19 (2.23%)
> UV2: 255 nodes 16TB:    1141.02         1138.12         -2.90 (0.25%)
> UV2:  64 nodes  2TB:     128.15          126.53         -1.62 (1.26%)
> UV2:  32 nodes  2TB:     121.87          121.07         -0.80 (0.66%)
>                         Time in seconds.
>

Thanks.

Does the 9TB one have more entries in memblock.memory?

Yinghai

Thread overview: 7 messages

2013-08-15  5:46 [PATCH] memblock, numa: Binary search node id Yinghai Lu
2013-08-15 20:43 ` Andrew Morton
2013-08-15 21:06   ` Yinghai Lu
2013-08-15 21:37     ` Russ Anderson
2013-08-16 19:01   ` Russ Anderson
2013-08-16 19:15     ` Yinghai Lu [this message]
2013-08-16 19:31       ` Russ Anderson
