From: Nathan Fontenot <nfont@austin.ibm.com>
To: Anton Blanchard <anton@samba.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linuxppc-dev@ozlabs.org,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Dave Hansen <dave@linux.vnet.ibm.com>, Greg KH <greg@kroah.com>,
	akpm@linux-foundation.org
Subject: Re: [PATCH 0/8] v5 De-couple sysfs memory directories from memory sections
Date: Thu, 02 Sep 2010 12:39:47 -0500	[thread overview]
Message-ID: <4C7FE163.4000906@austin.ibm.com> (raw)
In-Reply-To: <20100831215745.GA7641@kryten>

On 08/31/2010 04:57 PM, Anton Blanchard wrote:
> 
> Hi Nathan,
> 
>> This set of patches de-couples the idea that there is a single
>> directory in sysfs for each memory section.  The intent of the
>> patches is to reduce the number of sysfs directories created to
>> resolve a boot-time performance issue.  On very large systems
>> boot times are getting very long (as seen on powerpc hardware)
>> due to the enormous number of sysfs directories being created.
>> On a system with 1 TB of memory we create ~63,000 directories.
>> For even larger systems boot times are being measured in hours.
>>
>> This set of patches allows for each directory created in sysfs
>> to cover more than one memory section.  The default behavior for
>> sysfs directory creation is the same, in that each directory
>> represents a single memory section.  A new file 'end_phys_index'
>> in each directory contains the physical_id of the last memory
>> section covered by the directory so that users can easily
>> determine the memory section range of a directory.
> 
> I tested this on a POWER7 with 2TB of memory, and the boot time improved
> from greater than 6 hours (I gave up) to under 5 minutes. Nice!

Thanks for testing this out.  I was able to test this on a 1 TB system
and saw memory sysfs creation times go from 10 minutes to a few seconds.
It's good to see the difference for a 2 TB system.
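For reference, the section range a directory covers can be computed from the
two index files described in the cover letter. A minimal sketch (the 16 MiB
section size is only an illustrative assumption here; the real value depends
on the architecture's SECTION_SIZE_BITS):

```python
# Sketch: compute the physical address range covered by one sysfs memory
# directory from the values read out of its phys_index and end_phys_index
# files. The 16 MiB section size is an assumption for illustration only.
SECTION_SIZE = 16 * 1024 * 1024

def section_range(phys_index, end_phys_index, section_size=SECTION_SIZE):
    """Return the (start, end) physical addresses, inclusive, of the
    sections phys_index..end_phys_index."""
    start = phys_index * section_size
    end = (end_phys_index + 1) * section_size - 1
    return start, end

# Example: a directory whose phys_index is 0x10 and end_phys_index is 0x13
# covers four sections.
start, end = section_range(0x10, 0x13)
print(hex(start), hex(end))  # 0x10000000 0x13ffffff
```

In the default configuration each directory still covers a single section,
so phys_index and end_phys_index are equal and the range is one section wide.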

-Nathan


Thread overview: 56+ messages
2010-08-09 17:53 [PATCH 0/8] v5 De-couple sysfs memory directories from memory sections Nathan Fontenot
2010-08-09 18:35 ` [PATCH 1/8] v5 Move the find_memory_block() routine up Nathan Fontenot
2010-08-09 18:36 ` [PATCH 2/8] v5 Add new phys_index properties Nathan Fontenot
2010-08-09 18:37 ` [PATCH 3/8] v5 Add section count to memory_block Nathan Fontenot
2010-08-09 18:38 ` [PATCH 4/8] v5 Add mutex for add/remove of memory blocks Nathan Fontenot
2010-08-09 18:39 ` [PATCH 5/8] v5 Allow memory_block to span multiple memory sections Nathan Fontenot
2010-08-09 18:41 ` [PATCH 6/8] v5 Update the node sysfs code Nathan Fontenot
2010-08-09 18:42 ` [PATCH 7/8] v5 Define memory_block_size_bytes() for ppc/pseries Nathan Fontenot
2010-08-09 18:43 ` [PATCH 8/8] v5 Update memory-hotplug documentation Nathan Fontenot
2010-08-09 20:44   ` Nishanth Aravamudan
2010-08-09 20:48     ` Nishanth Aravamudan
2010-08-10 12:17     ` Nathan Fontenot
2010-08-11 15:18 ` [PATCH 0/8] v5 De-couple sysfs memory directories from memory sections Dave Hansen
2010-08-12 19:08 ` Andrew Morton
2010-08-12 20:07   ` Dave Hansen
2010-08-16 14:34   ` Nathan Fontenot
2010-08-31 18:12     ` Dave Hansen
2010-08-31 21:57 ` Anton Blanchard
2010-09-02 17:39   ` Nathan Fontenot [this message]
