From: David Hildenbrand <david@redhat.com>
To: Oscar Salvador <osalvador@suse.de>, akpm@linux-foundation.org
Cc: mhocko@suse.com, dan.j.williams@intel.com,
	pasha.tatashin@soleen.com, Jonathan.Cameron@huawei.com,
	anshuman.khandual@arm.com, vbabka@suse.cz, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 0/5] Allocate memmap from hotadded memory
Date: Tue, 25 Jun 2019 10:33:41 +0200
Message-ID: <f986c09d-8554-855e-0b47-fcc6205bbb20@redhat.com>
In-Reply-To: <2ebfbd36-11bd-9576-e373-2964c458185b@redhat.com>

On 25.06.19 10:25, David Hildenbrand wrote:
> On 25.06.19 09:52, Oscar Salvador wrote:
>> Hi,
>>
>> It has been a while since I sent the previous version [1].
>>
>> In this version I addressed some of the feedback I got back then, such as
>> letting the caller decide whether to allocate per memory block or
>> per memory range (patch#2), and adding the option to disable vmemmap when
>> users want to expose all hotpluggable memory to userspace (patch#5).
>>
>> [Testing]
>>
>> While I was able to test the last version on powerpc, and colleagues from Huawei
>> helped me test it on arm64, this time I could only test it on x86_64.
>> The codebase is largely the same, so I would not expect surprises.
>>
>>  - x86_64: small and large memblocks (128MB, 1G and 2G)
>>  - Kernel module that adds memory spanning multiple memblocks
>>    and removes that memory at a different granularity.
>>
>> So far, only ACPI memory hotplug uses the new flag.
>> The other callers can be changed depending on their needs.
>>
>> Of course, more testing and feedback is appreciated.
>>
>> [Coverletter]
>>
>> This is another step to make memory hotplug more usable. The primary
>> goal of this patchset is to reduce the memory overhead of hot-added
>> memory (at least for the SPARSEMEM_VMEMMAP memory model). The way we
>> currently populate the memmap (the struct page array) has two main drawbacks:
> 
> Mental note: How will it be handled if a caller specifies "Allocate
> memmap from hotadded memory", but we are running under SPARSEMEM where
> we can't do this.
> 
>>
>> a) it consumes additional memory until the hot-added memory itself is
>>    onlined, and
>> b) the memmap might end up on a different NUMA node, which is especially
>>    true for the movable_node configuration.
>>
>> a) is a problem especially for memory-hotplug-based memory "ballooning"
>>    solutions, where the delay between physical memory hotplug and
>>    onlining can lead to OOM; that led to the introduction of hacks like
>>    auto-onlining (see 31bc3858ea3e ("memory-hotplug: add automatic onlining
>>    policy for the newly added memory")).
>>
>> b) can have performance drawbacks.
>>
>> Another, minor, case is that I have seen hot-add operations fail on some
>> architectures because they ran out of order-x pages.
>> E.g. on powerpc, in certain configurations, we use order-8 pages,
>> which, given a 64KB base page size, amounts to 16MB.
>> If we run out of those, we simply fail the operation and cannot add
>> more memory.
> 
> At least for SPARSEMEM, we fallback to vmalloc() to work around this
> issue. I haven't looked into the populate_section_memmap() internals
> yet. Can you point me at the code that performs this allocation?
> 
>> We could fall back to base pages as x86_64 does, but we can do better.
>>
>> One way to mitigate all these issues is to simply allocate the memmap array
>> (which accounts for the largest memory footprint of physical memory hotplug)
>> from the hot-added memory itself. The SPARSEMEM_VMEMMAP memory model allows
>> us to map any pfn range, so the memory doesn't need to be online to be
>> usable for the array. See patch#3 for more details.
>> This feature is only usable when CONFIG_SPARSEMEM_VMEMMAP is set.
>>
>> [Overall design]:
>>
>> Implementation-wise, we reuse the vmem_altmap infrastructure to override
>> the default allocator used by vmemmap_populate. Once the memmap is
>> allocated, we need a way to mark the altmap pfns used for the allocation.
>> If an MHP_MEMMAP_{DEVICE,MEMBLOCK} flag was passed, we set up the layout of the
>> altmap structure at the beginning of __add_pages(), and then we call
>> mark_vmemmap_pages().
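>>
>> To illustrate (a rough sketch only; the field names follow the existing
>> struct vmem_altmap, but the exact setup done by the patchset may differ):
>>
>>         /* Illustrative only: reserve pfns at the start of the hot-added
>>          * range so that vmemmap_populate() takes its pages from there
>>          * instead of from the page/memblock allocator.
>>          */
>>         struct vmem_altmap altmap = {
>>                 .base_pfn = PHYS_PFN(start),  /* first pfn of the range */
>>                 .free     = PHYS_PFN(size),   /* pfns the memmap may use */
>>         };
>>
>>         /* The altmap is then handed down from __add_pages() towards
>>          * vmemmap_populate(), and mark_vmemmap_pages() is later called
>>          * on the pages that were actually consumed.
>>          */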
>>
>> The flags are either MHP_MEMMAP_DEVICE or MHP_MEMMAP_MEMBLOCK, and only differ
>> in the way they allocate vmemmap pages within the memory blocks.
>>
>> MHP_MEMMAP_MEMBLOCK:
>>         - With this flag, we will allocate vmemmap pages in each memory block.
>>           This means that if we hot-add a range that spans multiple memory blocks,
>>           we will use the beginning of each memory block for the vmemmap pages.
>>           This strategy is good for cases where the caller wants the flexibility
>>           to hot-remove memory at a different granularity than it was added with.
>>
>> MHP_MEMMAP_DEVICE:
>>         - With this flag, we will store all vmemmap pages at the beginning of
>>           hot-added memory.
>>
>> So it is a tradeoff between flexibility and contiguous memory.
>> More info on the above can be found in patch#2.
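>>
>> As a rough illustration of the intended use (hypothetical, simplified
>> call sites; how the flag actually gets passed around is what patch#2
>> implements and may look different):
>>
>>         /* All vmemmap pages at the start of the hot-added range. */
>>         add_memory(nid, start, SZ_1G, MHP_MEMMAP_DEVICE);
>>
>>         /* One vmemmap chunk at the start of every 128MB memory block,
>>          * so blocks can later be removed independently.
>>          */
>>         add_memory(nid, start, SZ_1G, MHP_MEMMAP_MEMBLOCK);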
>>
>> Depending on which flag is passed (MHP_MEMMAP_DEVICE or MHP_MEMMAP_MEMBLOCK),
>> mark_vmemmap_pages() gets called at a different stage.
>> With MHP_MEMMAP_MEMBLOCK, we call it once we have populated the sections
>> fitting in a single memblock, while with MHP_MEMMAP_DEVICE we wait until all
>> sections have been populated.
>>
>> mark_vmemmap_pages() marks the pages as vmemmap and sets some metadata:
>>
>> The current layout of the vmemmap pages is:
>>
>>         [Head->refcount] : Nr sections used by this altmap
>>         [Head->private]  : Nr of vmemmap pages
>>         [Tail->freelist] : Pointer to the head page
>>
>> This is done to ease the computations we need in some places.
>> E.g:
>>
>> Example 1)
>> We hot-add 1GB on x86_64 (memory block 128MB) using
>> MHP_MEMMAP_DEVICE:
>>
>> head->_refcount = 8 sections
>> head->private = 4096 vmemmap pages
>> each tail->freelist = head
>>
>> Example 2)
>> We hot-add 1GB on x86_64 using MHP_MEMMAP_MEMBLOCK:
>>
>> [at the beginning of each memblock]
>> head->_refcount = 1 section
>> head->private = 512 vmemmap pages
>> each tail->freelist = head
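>>
>> For reference, these numbers follow from the usual x86_64 assumptions
>> (4KB base pages, 64-byte struct page, 128MB sections):
>>
>>         1GB   / 4KB = 262144 struct pages * 64B = 16MB memmap = 4096 pages (8 sections)
>>         128MB / 4KB =  32768 struct pages * 64B =  2MB memmap =  512 pages (1 section)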
>>
>> We keep the refcount because, when using MHP_MEMMAP_DEVICE, we need to know
>> for how long we have to defer the call to vmemmap_free().
>> The point is that the first pages of the hot-added range are used to create
>> the memmap mapping, so we cannot remove those first; otherwise we would blow up
>> when accessing the other pages.
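>>
>> Conceptually, the removal path then looks something like this (a rough
>> sketch only; the helpers here are made up and the real logic is in the
>> patchset):
>>
>>         /* Sketch, not actual patch code: vmemmap_head() and
>>          * free_vmemmap_range() are made-up helpers.
>>          */
>>         struct page *head = vmemmap_head(pfn_to_page(start_pfn));
>>
>>         if (!page_ref_dec_and_test(head)) {
>>                 /* Other sections still have their memmap living in this
>>                  * range: defer the teardown.
>>                  */
>>                 return;
>>         }
>>
>>         /* Last section using this altmap is gone; only now is it safe
>>          * to free the head->private vmemmap pages at the start of the
>>          * range.
>>          */
>>         free_vmemmap_range(head, head->private);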
> 
> So, assuming we add_memory(1GB, MHP_MEMMAP_DEVICE) and then
> remove_memory(128MB) of the added memory, this will work?

Hmm, I guess this won't work - especially when removing the first 128MB,
where the memmap resides.

Do we need MHP_MEMMAP_DEVICE at this point, or could we start with
MHP_MEMMAP_MEMBLOCK? That "smells" like the easier case.

-- 

Thanks,

David / dhildenb

