From: David Hildenbrand <david@redhat.com>
To: Oscar Salvador <osalvador@suse.de>
Cc: akpm@linux-foundation.org, mhocko@suse.com,
dan.j.williams@intel.com, pasha.tatashin@soleen.com,
Jonathan.Cameron@huawei.com, anshuman.khandual@arm.com,
vbabka@suse.cz, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 4/5] mm,memory_hotplug: allocate memmap from the added memory range for sparse-vmemmap
Date: Wed, 26 Jun 2019 10:15:45 +0200 [thread overview]
Message-ID: <47246a73-7df4-9ac3-8b09-c8bd6bef1098@redhat.com> (raw)
In-Reply-To: <20190626081325.GB30863@linux>
On 26.06.19 10:13, Oscar Salvador wrote:
> On Tue, Jun 25, 2019 at 10:49:10AM +0200, David Hildenbrand wrote:
>> On 25.06.19 09:52, Oscar Salvador wrote:
>>> Physical memory hotadd has to allocate a memmap (struct page array) for
>>> the newly added memory section. Currently, alloc_pages_node() is used
>>> for those allocations.
>>>
>>> This has some disadvantages:
>>> a) existing memory is consumed for that purpose
>>>    (~2MB per 128MB memory section on x86_64)
>>> b) if the whole node is movable then we have off-node struct pages,
>>> which have performance drawbacks.
>>>
>>> a) has turned out to be a problem for memory hotplug based ballooning
>>> because userspace might not react in time to online memory, while
>>> the memory consumed during physical hotadd is enough to
>>> push the system to OOM. 31bc3858ea3e ("memory-hotplug: add automatic onlining
>>> policy for the newly added memory") has been added to work around that
>>> problem.
>>>
>>> I have also seen hot-add operations failing on powerpc because we try
>>> to use order-8 pages. With a 64KB base page size that is 16MB, and if
>>> we run out of those, we simply fail.
>>> One could argue that we can fall back to base pages as we do on x86_64, but
>>> we can do better when CONFIG_SPARSEMEM_VMEMMAP is enabled.
>>>
>>> Vmemmap page tables can map arbitrary memory.
>>> That means that we can simply use the beginning of each memory section and
>>> map struct pages there.
>>> struct pages which back the allocated space then just need to be treated
>>> carefully.
>>>
>>> Implementation-wise, we reuse the vmem_altmap infrastructure to override
>>> the default allocator used by __vmemmap_populate. Once the memmap is
>>> allocated we need a way to mark altmap pfns used for the allocation.
>>> If the MHP_MEMMAP_{DEVICE,MEMBLOCK} flag was passed, we set up the layout of the
>>> altmap structure at the beginning of __add_pages(), and then we call
>>> mark_vmemmap_pages().
>>>
>>> Depending on which flag is passed (MHP_MEMMAP_DEVICE or MHP_MEMMAP_MEMBLOCK),
>>> mark_vmemmap_pages() gets called at a different stage.
>>> With MHP_MEMMAP_MEMBLOCK, we call it once we have populated the sections
>>> fitting in a single memblock, while with MHP_MEMMAP_DEVICE we wait until all
>>> sections have been populated.
>>
>> So, only MHP_MEMMAP_DEVICE will be used. Would it make sense to only
>> implement one for now (after we decide which one to use), to make things
>> simpler?
>>
>> Or do you have a real user in mind for the other?
>
> Currently, only MHP_MEMMAP_DEVICE will be used, as we only pass flags from
> the acpi memory-hotplug path.
>
> All the others (hyper-v, Xen, ...) will have to be evaluated to see which one
> they want to use.
>
> Although MHP_MEMMAP_DEVICE is the only one used right now, I introduced
> MHP_MEMMAP_MEMBLOCK to give callers the choice of using it
> if they think that a strategy where hot-removing works at a different
> granularity makes sense.
>
> Moreover, since they both use the same API, there is no extra code needed to
> handle it. (Just two lines in __add_pages())
>
> This arose here [1].
>
> [1] https://patchwork.kernel.org/project/linux-mm/list/?submitter=137061
>
Just noting that you can emulate MHP_MEMMAP_MEMBLOCK via
MHP_MEMMAP_DEVICE by adding memory in memory block granularity (which is
what hyper-v and xen do if I am not wrong!).
Not yet convinced that both MHP_MEMMAP_MEMBLOCK and MHP_MEMMAP_DEVICE
are needed. But we can sort that out later.
--
Thanks,
David / dhildenb