From: David Hildenbrand <david@redhat.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>,
	akpm@linux-foundation.org, dan.j.williams@intel.com,
	Jonathan.Cameron@huawei.com, anshuman.khandual@arm.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 0/4] mm,memory_hotplug: allocate memmap from hotadded memory
Date: Wed, 3 Apr 2019 10:41:35 +0200
Message-ID: <04a5b856-c8e0-937b-72bb-b9d17a12ccc7@redhat.com>
In-Reply-To: <20190403083757.GC15605@dhcp22.suse.cz>

On 03.04.19 10:37, Michal Hocko wrote:
> On Wed 03-04-19 10:17:26, David Hildenbrand wrote:
>> On 03.04.19 10:12, Michal Hocko wrote:
>>> On Wed 03-04-19 10:01:16, Oscar Salvador wrote:
>>>> On Tue, Apr 02, 2019 at 02:48:45PM +0200, Michal Hocko wrote:
>>>>> So what is going to happen when you hot-add two memblocks, the first one
>>>>> holds the memmaps, and then you want to hot-remove (not just offline) it?
>>>>
>>>> If you hot-add two memblocks, this means that either:
>>>>
>>>> a) you hot-add one 256MB memory device (128MB per memblock)
>>>> b) you hot-add two 128MB memory devices
>>>>
>>>> Either way, hot-removing only works on a memory device as a whole, so
>>>> there is no problem.
>>>>
>>>> Vmemmaps are created per hot-add operation, which means that a vmemmap
>>>> will be created for the hot-added range. And since hot-add/hot-remove
>>>> operations work with the same granularity, there is no problem.
>>>
>>> What prevents somebody from calling arch_add_memory() for a range spanning
>>> multiple memblocks directly from a driver? In other words, aren't you
>>
>> To drivers, we only expose add_memory() and friends. And I think this is
>> a good idea.
>>
>>> making assumptions about future usage based on the QEMU use case?
>>>
>>
>> As I noted, we only have an issue if add_memory() and
>> remove_memory() are called with different granularity. I gave two
>> examples where this might not be the case, but we will have to look into
>> the details.
> 
> It seems natural that a DIMM will be hot removed all at once because
> you cannot hot remove half of a DIMM, right? But I can envision that
> people might want to hot remove a faulty part of a really large DIMM
> because they would like to save some resources.

Even for virtio-mem, something like that would be useful, but I could
try to live without it :) Add a lot of memory in one go when starting up
(add_memory()) - much faster than doing individual remove_memory()
calls. When removing memory, as soon as all parts of a memblock are
offline, remove only that memblock to save memory (remove_memory()).

There, I would need to allocate the vmemmap per memblock.
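
Just to make the asymmetry I have in mind concrete, here is a rough
sketch (not code from this series; the example_* names are invented,
error handling is omitted, and exact signatures differ between kernel
versions):

/*
 * Rough sketch only: a driver that hot-adds a large range in one go but
 * hot-removes it one memory block at a time once each block is offline.
 */
#include <linux/memory.h>
#include <linux/memory_hotplug.h>

static int example_add_big_range(int nid, u64 start, u64 size)
{
	/* One hot-add operation for the whole range. */
	return add_memory(nid, start, size);
}

static void example_remove_offlined_blocks(int nid, u64 start, u64 size)
{
	u64 block = memory_block_size_bytes();
	u64 addr;

	/*
	 * Hot-remove with memblock granularity, i.e. finer than the
	 * granularity of the original add_memory() call. This assumes
	 * the caller already offlined all blocks in the range.
	 */
	for (addr = start; addr < start + size; addr += block)
		remove_memory(nid, addr, block);
}

If the vmemmap for the whole range were packed into the first memblock,
removing that memblock before the others is exactly the case discussed
above.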

> 
> With different users asking for the hotplug functionality, I do not
> think we want to make such a strong assumption as hot remove having
> the same granularity as hot add.
> 

Then we have to make sure it works for all use cases.

> 
> That being said, it should be the caller of the hotplug code that tells
> the vmemmap allocation strategy. For starters, I would only pack vmemmaps
> for "regular" kernel zone memory; movable zones should be handled more
> carefully. We can always re-evaluate later when there is strong demand
> for huge pages on movable zones, but this is not the case now because
> those pages are not really movable in practice.

There remains the issue of a potentially different user trying to remove
memory it didn't add, at some other granularity. We then really have to
identify and isolate that case.
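
For completeness, the direction I read into "the caller tells the
vmemmap allocation strategy" could look roughly like the sketch below;
the struct and flag names are purely invented for illustration and are
not the interface proposed in this series:

/*
 * Hypothetical illustration only: let the caller opt in to packing the
 * vmemmap into the hot-added range, and be conservative for movable
 * zone memory.
 */
#include <linux/types.h>

struct example_hotplug_restrictions {
	unsigned long flags;
#define EXAMPLE_MEMMAP_FROM_RANGE	(1UL << 0)
};

static bool example_pack_vmemmap(const struct example_hotplug_restrictions *r,
				 bool movable_zone)
{
	/* Only pack the memmap for "regular" kernel zone memory. */
	if (movable_zone)
		return false;

	return r->flags & EXAMPLE_MEMMAP_FROM_RANGE;
}

A caller that cannot guarantee matching add/remove granularity would
then simply not set such a flag.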

-- 

Thanks,

David / dhildenb


Thread overview: 42+ messages
2019-03-28 13:43 [PATCH 0/4] mm,memory_hotplug: allocate memmap from hotadded memory Oscar Salvador
2019-03-28 13:43 ` [PATCH 1/4] mm, memory_hotplug: cleanup memory offline path Oscar Salvador
2019-04-03  8:43   ` Michal Hocko
2019-03-28 13:43 ` [PATCH 2/4] mm, memory_hotplug: provide a more generic restrictions for memory hotplug Oscar Salvador
2019-04-03  8:46   ` Michal Hocko
2019-04-03  8:48     ` David Hildenbrand
2019-04-04 10:04     ` Oscar Salvador
2019-04-04 10:06       ` David Hildenbrand
2019-04-04 10:31       ` Michal Hocko
2019-04-04 12:04         ` Oscar Salvador
2019-03-28 13:43 ` [PATCH 3/4] mm, memory_hotplug: allocate memmap from the added memory range for sparse-vmemmap Oscar Salvador
2019-03-28 13:43 ` [PATCH 4/4] mm, sparse: rename kmalloc_section_memmap, __kfree_section_memmap Oscar Salvador
2019-03-28 15:09 ` [PATCH 0/4] mm,memory_hotplug: allocate memmap from hotadded memory David Hildenbrand
2019-03-28 15:31   ` David Hildenbrand
2019-03-29  8:45     ` Oscar Salvador
2019-03-29  8:56       ` David Hildenbrand
2019-03-29  9:01         ` David Hildenbrand
2019-03-29  9:20         ` Oscar Salvador
2019-03-29 13:42       ` Michal Hocko
2019-04-01  7:59         ` Oscar Salvador
2019-04-01 11:53           ` Michal Hocko
2019-04-02  8:28             ` Oscar Salvador
2019-04-02  8:39               ` David Hildenbrand
2019-04-02 12:48               ` Michal Hocko
2019-04-03  8:01                 ` Oscar Salvador
2019-04-03  8:12                   ` Michal Hocko
2019-04-03  8:17                     ` David Hildenbrand
2019-04-03  8:37                       ` Michal Hocko
2019-04-03  8:41                         ` David Hildenbrand [this message]
2019-04-03  8:49                           ` Michal Hocko
2019-04-03  8:53                             ` David Hildenbrand
2019-04-03  8:50                           ` Oscar Salvador
2019-04-03  8:54                             ` David Hildenbrand
2019-04-03  9:40                         ` Oscar Salvador
2019-04-03 10:46                           ` Michal Hocko
2019-04-04 10:25                           ` Vlastimil Babka
2019-04-03  8:34                     ` Oscar Salvador
2019-04-03  8:36                       ` David Hildenbrand
2019-03-29  8:30   ` Oscar Salvador
2019-03-29  8:51     ` David Hildenbrand
2019-03-29 22:23 ` John Hubbard
2019-04-01  7:52   ` Oscar Salvador
