From: Wei Yang <richard.weiyang@gmail.com>
To: John Hubbard <jhubbard@nvidia.com>
Cc: Wei Yang <richard.weiyang@gmail.com>,
mhocko@suse.com, linux-mm@kvack.org
Subject: Re: [RFC PATCH 3/4] mm/hotplug: make __add_pages() iterate on memory_block and split __add_section()
Date: Tue, 27 Jun 2017 07:53:12 +0800 [thread overview]
Message-ID: <20170626235312.GE53180@WeideMacBook-Pro.local> (raw)
In-Reply-To: <559864c6-6ad6-297a-3094-8abecbd251b9@nvidia.com>
On Mon, Jun 26, 2017 at 12:50:14AM -0700, John Hubbard wrote:
>On 06/24/2017 07:52 PM, Wei Yang wrote:
>> Memory hotplug unit is memory_block which contains one or several
>> mem_section. The current logic is iterating on each mem_section and add or
>> adjust the memory_block every time.
>>
>> This patch makes the __add_pages() iterate on memory_block and split
>> __add_section() to two functions: __add_section() and __add_memory_block().
>>
>> The first one would take care of each section data and the second one would
>> register the memory_block at once, which makes the function more clear and
>> natural.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>> drivers/base/memory.c | 17 +++++------------
>> mm/memory_hotplug.c | 27 +++++++++++++++++----------
>> 2 files changed, 22 insertions(+), 22 deletions(-)
>>
>> diff --git a/drivers/base/memory.c b/drivers/base/memory.c
>> index b54cfe9cd98b..468e5ad1bc87 100644
>> --- a/drivers/base/memory.c
>> +++ b/drivers/base/memory.c
>> @@ -705,19 +705,12 @@ int register_new_memory(int nid, struct mem_section *section)
>>
>> mutex_lock(&mem_sysfs_mutex);
>>
>> - mem = find_memory_block(section);
>> - if (mem) {
>> - mem->section_count++;
>> - put_device(&mem->dev);
>> - } else {
>> - ret = init_memory_block(&mem, section, MEM_OFFLINE);
>> - if (ret)
>> - goto out;
>> - mem->section_count++;
>> - }
>> + ret = init_memory_block(&mem, section, MEM_OFFLINE);
>> + if (ret)
>> + goto out;
>> + mem->section_count = sections_per_block;
>
>Hi Wei,
>
>Things have changed...the register_new_memory() routine is accepting a single section,
>but instead of registering just that section, it is registering a containing block.
>(That works, because apparently the approach is to make sections_per_block == 1,
>and eventually kill sections, if I am reading all this correctly.)
>
The original function is a little confusing: it tries to register one
memory_block, yet it runs once per present mem_section, so the same block
is touched several times.

This change registers the whole memory_block at once.

You will see in the next patch that it accepts the start section number
instead of a section pointer, which may make this easier to understand.

BTW, I don't get your point about killing sections when
sections_per_block == 1. Would you rephrase this?
>So, how about this: let's add a line to the function comment:
>
>* Register an entire memory_block.
>
Looks good, let me give it a try.
>That makes it clearer that we're dealing in blocks, even though the memsection*
>argument is passed in.
>
>>
>> - if (mem->section_count == sections_per_block)
>> - ret = register_mem_sect_under_node(mem, nid);
>> + ret = register_mem_sect_under_node(mem, nid);
>> out:
>> mutex_unlock(&mem_sysfs_mutex);
>> return ret;
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index a79a83ec965f..14a08b980b59 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -302,8 +302,7 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
>> }
>> #endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
>>
>> -static int __meminit __add_section(int nid, unsigned long phys_start_pfn,
>> - bool want_memblock)
>> +static int __meminit __add_section(int nid, unsigned long phys_start_pfn)
>> {
>> int ret;
>> int i;
>> @@ -332,6 +331,18 @@ static int __meminit __add_section(int nid, unsigned long phys_start_pfn,
>> SetPageReserved(page);
>> }
>>
>> + return 0;
>> +}
>> +
>> +static int __meminit __add_memory_block(int nid, unsigned long phys_start_pfn,
>> + bool want_memblock)
>> +{
>> + int ret;
>> +
>> + ret = __add_section(nid, phys_start_pfn);
>> + if (ret)
>> + return ret;
>> +
>> if (!want_memblock)
>> return 0;
>>
>> @@ -347,15 +358,10 @@ static int __meminit __add_section(int nid, unsigned long phys_start_pfn,
>> int __ref __add_pages(int nid, unsigned long phys_start_pfn,
>> unsigned long nr_pages, bool want_memblock)
>> {
>> - unsigned long i;
>> + unsigned long pfn;
>> int err = 0;
>> - int start_sec, end_sec;
>> struct vmem_altmap *altmap;
>>
>> - /* during initialize mem_map, align hot-added range to section */
>> - start_sec = pfn_to_section_nr(phys_start_pfn);
>> - end_sec = pfn_to_section_nr(phys_start_pfn + nr_pages - 1);
>> -
>> altmap = to_vmem_altmap((unsigned long) pfn_to_page(phys_start_pfn));
>> if (altmap) {
>> /*
>> @@ -370,8 +376,9 @@ int __ref __add_pages(int nid, unsigned long phys_start_pfn,
>> altmap->alloc = 0;
>> }
>>
>> - for (i = start_sec; i <= end_sec; i++) {
>> - err = __add_section(nid, section_nr_to_pfn(i), want_memblock);
>> + for (pfn; pfn < phys_start_pfn + nr_pages;
>> + pfn += sections_per_block * PAGES_PER_SECTION) {
>
yep
>A pages_per_block variable would be nice here, too.
>
>thanks,
>john h
>
>> + err = __add_memory_block(nid, pfn, want_memblock);
>>
>> /*
>> * EEXIST is finally dealt with by ioresource collision
>>
--
Wei Yang
Help you, Help me