* [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory
@ 2019-09-19 14:22 David Hildenbrand
  2019-09-19 14:22 ` [PATCH RFC v3 1/9] ACPI: NUMA: export pxm_to_node David Hildenbrand
                   ` (9 more replies)
  0 siblings, 10 replies; 39+ messages in thread
From: David Hildenbrand @ 2019-09-19 14:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, virtualization, Andrea Arcangeli, David Hildenbrand,
	Alexander Duyck, Alexander Potapenko, Andrew Morton,
	Andrey Ryabinin, Anshuman Khandual, Anthony Yznaga, Dan Williams,
	Dave Young, Igor Mammedov, Ira Weiny, Jason Gunthorpe,
	Jason Wang, Johannes Weiner, Juergen Gross, Len Brown,
	Matthew Wilcox, Mauro Carvalho Chehab, Mel Gorman,
	Michael S. Tsirkin, Michal Hocko, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Minchan Kim, Oscar Salvador, Oscar Salvador,
	Pavel Tatashin, Pavel Tatashin, Pingfan Liu, Qian Cai,
	Rafael J. Wysocki, Stefan Hajnoczi, Stephen Rothwell,
	Vlastimil Babka, Wei Yang, Wei Yang, Yang Shi, Yu Zhao

Long time no RFC! I finally had time to get the next version of the Linux
driver side of virtio-mem into shape, incorporating ideas and feedback from
previous discussions.

This RFC is based on the series currently on the mm list:
- [PATCH 0/3] Remove __online_page_set_limits()
- [PATCH v1 0/3] mm/memory_hotplug: Export generic_online_page()
- [PATCH v4 0/8] mm/memory_hotplug: Shrink zones before removing memory
The current state (kept updated) is available on
- https://github.com/davidhildenbrand/linux/tree/virtio-mem

The basic idea of virtio-mem is to provide a flexible,
cross-architecture memory hot(un)plug solution that avoids many limitations
imposed by existing technologies, architectures, and interfaces. More
details can be found below and in linked material.

This RFC was only tested on x86-64; however, it should theoretically work
on any Linux architecture that implements memory hot(un)plug - like
s390x. On x86-64, it is currently possible to add/remove memory to/from the
system in >= 4MB granularity. Memory hotplug works very reliably. For
memory unplug, there are no guarantees on how much memory can actually get
unplugged; it depends on the setup. I have plans to improve that in the
future.

--------------------------------------------------------------------------
1. virtio-mem
--------------------------------------------------------------------------

The basic idea behind virtio-mem was presented at KVM Forum 2018. The
slides can be found at [1]. The previous RFC can be found at [2]. The
first RFC can be found at [3]. However, the concept evolved over time. The
KVM Forum slides roughly match the current design.

Patch #2 ("virtio-mem: Paravirtualized memory hotplug") contains quite some
information, especially in "include/uapi/linux/virtio_mem.h":

  Each virtio-mem device manages a dedicated region in physical address
  space. Each device can belong to a single NUMA node, multiple devices
  for a single NUMA node are possible. A virtio-mem device is like a
  "resizable DIMM" consisting of small memory blocks that can be plugged
  or unplugged. The device driver is responsible for (un)plugging memory
  blocks on demand.

  Virtio-mem devices can only operate on their assigned memory region in
  order to (un)plug memory. A device cannot (un)plug memory belonging to
  other devices.

  The "region_size" corresponds to the maximum amount of memory that can
  be provided by a device. The "size" corresponds to the amount of memory
  that is currently plugged. "requested_size" corresponds to a request
  from the device to the device driver to (un)plug blocks. The
  device driver should try to (un)plug blocks in order to reach the
  "requested_size". It is impossible to plug more memory than requested.

  The "usable_region_size" represents the memory region that can actually
  be used to (un)plug memory. It is always at least as big as the
  "requested_size" and will grow dynamically. It will only shrink when
  explicitly triggered (VIRTIO_MEM_REQ_UNPLUG).

  Memory in the usable region can usually be read, however, there are no
  guarantees. It can happen that the device cannot process a request,
  because it is busy. The device driver has to retry later.

  Usually, during system resets all memory will get unplugged, so the
  device driver can start with a clean state. However, in specific
  scenarios (if the device is busy) it can happen that the device still
  has memory plugged. The device driver can request to unplug all memory
  (VIRTIO_MEM_REQ_UNPLUG) - which might take a while to succeed if the
  device is busy.
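
To make the above a bit more concrete, the device config space roughly
exposes the fields referenced above and in the driver code. Note that the
following is only an illustrative sketch - field order, types and padding
are guesses here; the authoritative definition is
"include/uapi/linux/virtio_mem.h" in patch #2:

  struct virtio_mem_config_sketch {     /* illustrative only */
          /* Device block size: (un)plug requests use this granularity. */
          __u64 block_size;
          /* Node id (ACPI PXM) the device belongs to, if any. */
          __u16 node_id;
          /* Start address and maximum size of the managed region. */
          __u64 addr;
          __u64 region_size;
          /* Part of the region that may currently be (un)plugged. */
          __u64 usable_region_size;
          /* Amount of memory currently plugged / requested to be plugged. */
          __u64 plugged_size;
          __u64 requested_size;
  };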

--------------------------------------------------------------------------
2. Linux Implementation
--------------------------------------------------------------------------

This RFC reuses quite some existing MM infrastructure; however, it has to
expose some additional functionality.

Memory blocks (e.g., 128MB) are added/removed on demand. Within these
memory blocks, subblocks (e.g., 4MB) are plugged/unplugged. The sizes
depend on the target architecture, MAX_ORDER + pageblock_order, and
the block size of a virtio-mem device.
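
As a rough illustration (this is not the literal init code, and the helper
name is made up for this example), the subblock size boils down to:

  /*
   * Illustrative sketch: a subblock must be at least as big as the largest
   * granularity the buddy/onlining code cares about (MAX_ORDER - 1 and
   * pageblock_order) and at least as big as the device block size.
   */
  static uint64_t sketch_subblock_size(uint64_t device_block_size)
  {
          uint64_t sb_size = PAGE_SIZE * max_t(uint64_t, MAX_ORDER_NR_PAGES,
                                               pageblock_nr_pages);

          return max_t(uint64_t, sb_size, device_block_size);
  }

With 4k pages, MAX_ORDER - 1 == 10 and 2MB pageblocks, this yields the 4MB
subblocks mentioned above.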

add_memory()/try_remove_memory() is used to add/remove memory blocks.
virtio-mem will not online memory blocks itself. This has to be done by
user space, or configured into the kernel
(CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE). virtio-mem will only unplug memory
that was onlined to ZONE_NORMAL. For now, it is suggested to online memory
to ZONE_NORMAL.

The memory hotplug notifier is used to properly synchronize against
onlining/offlining of memory blocks and to track the states of memory
blocks (including the zone a memory block is onlined to). Locking is
done similarly to the PPC CMA implementation.

The set_online_page() callback is used to keep unplugged subblocks
of a memory block fake-offline when onlining the memory block.
generic_online_page() is used to fake-online plugged subblocks. This
handling is similar to the Hyper-V balloon driver.

PG_offline is used to mark unplugged subblocks as offline, so e.g.,
dumping tools (makedumpfile) will skip these pages. This is similar to
other balloon drivers like virtio-balloon and Hyper-V.

PG_offline + reference count of 0 [new] is now also used to mark pages to
"skip" when offlining memory blocks. This allows offlining memory blocks
that have partially unplugged subblocks - or that are completely unplugged.
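
The mm-side check is conceptually simple; a minimal sketch of the idea (the
helper name is invented here, the actual changes are in patch #6):

  /*
   * Sketch only: a PageOffline() page with a reference count of 0 is
   * logically unplugged and owned by a driver (e.g., virtio-mem). It is
   * not movable, but it can simply be skipped when offlining the memory
   * block it belongs to.
   */
  static bool sketch_can_skip_when_offlining(struct page *page)
  {
          return PageOffline(page) && !page_count(page);
  }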

alloc_contig_range()/free_contig_range() [now exposed] is used to
unplug/plug subblocks of memory blocks that are already exposed to Linux.
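
For an online memory block, unplugging a subblock essentially means
allocating it from the buddy and handing it back to the hypervisor. A
simplified sketch of that flow (the function name is invented here; the
real code in the hotunplug patches additionally updates the subblock
bitmap, the block state and handles retries):

  static int sketch_unplug_online_sb(struct virtio_mem *vm,
                                     unsigned long mb_id, int sb_id)
  {
          const unsigned long start_pfn =
                  PFN_DOWN(virtio_mem_mb_id_to_phys(mb_id) +
                           sb_id * vm->subblock_size);
          const unsigned long nr_pages = PFN_DOWN(vm->subblock_size);
          int rc;

          /* Grab the whole subblock, so nobody else can use it anymore. */
          rc = alloc_contig_range(start_pfn, start_pfn + nr_pages,
                                  MIGRATE_MOVABLE, GFP_KERNEL);
          if (rc)
                  return rc;

          /* Tell the hypervisor to unplug the subblock. */
          rc = virtio_mem_mb_unplug_sb(vm, mb_id, sb_id, 1);
          if (rc) {
                  /* Give the memory back to the buddy. */
                  free_contig_range(start_pfn, nr_pages);
                  return rc;
          }

          /* Mark the pages PG_offline, so dump tools will skip them. */
          virtio_mem_set_fake_offline(start_pfn, nr_pages);
          return 0;
  }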

offline_and_remove_memory() [new] is used to offline a fully unplugged
memory block and remove it from Linux.
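
Conceptually, this new helper is just a combination of the two existing
steps; a rough sketch of the idea (this is not the actual code from
patch #8 - among other things, proper locking is omitted here):

  /* Sketch only: offline a fully unplugged block, then remove it. */
  static int sketch_offline_and_remove_memory(int nid, u64 start, u64 size)
  {
          int rc;

          rc = offline_pages(PFN_DOWN(start), PFN_DOWN(size));
          if (rc)
                  return rc;
          return remove_memory(nid, start, size);
  }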


A lot of additional information can be found in the separate patches and
as comments in the code itself.

--------------------------------------------------------------------------
3. Major changes since last RFC
--------------------------------------------------------------------------

A lot of things changed, especially also on the QEMU + virtio side. The
biggest changes on the Linux driver side are:
- Onlining/offlining of subblocks is now emulated on top of memory blocks.
  set_online_page()+alloc_contig_range()+free_contig_range() is now used
  for that. Core MM does not have to be modified and will continue to
  online/offline full memory blocks.
- Onlining/offlining of memory blocks is no longer performed by virtio-mem.
- PG_offline is upstream and can be used. It is also used to allow
  offlining of partially unplugged memory blocks.
- Memory block states + subblocks are now tracked more space-efficiently.
- Proper kexec(), kdump(), driver unload, driver reload, ZONE_MOVABLE, ...
  handling.

--------------------------------------------------------------------------
4. Future work
--------------------------------------------------------------------------

The separate patches contain a lot of future work items. One of the next
steps is to make memory unplug more likely to succeed - currently, there
are no guarantees on how much memory can get unplugged again. I have
various ideas on how to limit fragmentation of all memory blocks that
virtio-mem added.

Memory hotplug:
- Reduce the amount of memory resources if that turns out to be an
  issue. Or try to speed up relevant code paths to deal with many
  resources.
- Allocate the vmemmap from the added memory. This makes hotplug more
  likely to succeed, keeps the vmemmap on the same NUMA node, and ensures
  that this unmovable memory will later not hinder unplug.

Memory hotunplug:
- Performance improvements:
-- Check (locklessly) whether it makes sense to try alloc_contig_range()
   at all, before directly trying to isolate and taking locks.
-- Try to unplug bigger chunks if possible first.
-- First identify free areas that don't have to be evacuated.
- Make unplug more likely to succeed:
-- The "issue" is that in the ZONE_NORMAL, the buddy will randomly
   allocate memory. Only pageblocks somewhat limit fragmentation,
   however we would want to limit fragmentation on subblock granularity
   and even memory block granularity. One idea is to have a new
   ZONE_PREFER_MOVABLE. Memory blocks will then be onlined to ZONE_NORMAL
   / ZONE_PREFER_MOVABLE in a certain ratio per node (e.g.,
   1:4). This makes unplug of a significant amount of memory likely to
   succeed in most setups. ZONE_PREFER_MOVABLE is then a mixture of
   ZONE_NORMAL and ZONE_MOVABLE. Especially, movable data can end up in
   that zone, but only if really required - avoiding running out of memory
   due to zone imbalances. The zone fallback order would be
   MOVABLE=>PREFER_MOVABLE=>HIGHMEM=>NORMAL=>PREFER_MOVABLE=>DMA32=>DMA
-- Allocate memmap from added memory. This way, less unmovable data can
   end up on the memory blocks.
-- Call drop_slab() before trying to unplug. Eventually shrink other
   caches.
- Better retry handling in case memory is busy. We certainly don't want
  to retry forever at short intervals, trying to get some memory back.
- OOM handling, e.g., via an OOM handler.

--------------------------------------------------------------------------
5. Example Usage
--------------------------------------------------------------------------

A very basic QEMU prototype (kept updated) is available at:
- https://github.com/davidhildenbrand/qemu/tree/virtio-mem

It lacks various features; however, it works for testing the guest driver side:
- No support for resizable memory regions / memory backends
- No protection of unplugged memory (esp., userfaultfd-wp)
- No dump/migration/XXX optimizations to skip over unplugged memory

Start QEMU with two virtio-mem devices (one per NUMA node):
 $ qemu-system-x86_64 -m 4G,maxmem=20G \
  -smp sockets=2,cores=2 \
  -numa node,nodeid=0,cpus=0-1 -numa node,nodeid=1,cpus=2-3 \
  [...]
  -object memory-backend-ram,id=mem0,size=8G \
  -device virtio-mem-pci,id=vm0,memdev=mem0,node=0,requested-size=128M \
  -object memory-backend-ram,id=mem1,size=8G \
  -device virtio-mem-pci,id=vm1,memdev=mem1,node=1,requested-size=80M

Query the configuration:
 QEMU 4.1.50 monitor - type 'help' for more information
 (qemu) info memory-devices
 Memory device [virtio-mem]: "vm0"
   memaddr: 0x140000000
   node: 0
   requested-size: 134217728
   size: 134217728
   max-size: 8589934592
   block-size: 2097152
   memdev: /objects/mem0
 Memory device [virtio-mem]: "vm1"
   memaddr: 0x340000000
   node: 1
   requested-size: 83886080
   size: 83886080
   max-size: 8589934592
   block-size: 2097152
   memdev: /objects/mem1

Add some memory to node 1:
 QEMU 4.1.50 monitor - type 'help' for more information
 (qemu) qom-set vm1 requested-size 1G

Remove some memory from node 0:
 QEMU 4.1.50 monitor - type 'help' for more information
 (qemu) qom-set vm0 requested-size 64M

Query the configuration:
(qemu) info memory-devices
Memory device [virtio-mem]: "vm0"
  memaddr: 0x140000000
  node: 0
  requested-size: 67108864
  size: 67108864
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/mem0
Memory device [virtio-mem]: "vm1"
  memaddr: 0x340000000
  node: 1
  requested-size: 1073741824
  size: 1073741824
  max-size: 8589934592
  block-size: 2097152
  memdev: /objects/mem1

--------------------------------------------------------------------------
6. Q/A
--------------------------------------------------------------------------

Q: Why add/remove parts ("subblocks") of memory blocks/sections?
A: Flexibility (section size depends on the architecture) - e.g., some
   architectures have a section size of 2GB. Also, the memory block size
   is variable (e.g., on x86-64). I want to avoid any such restrictions.
   Some use cases want to add/remove memory in smaller granularities to a
   VM (e.g., the Hyper-V balloon also implements this) - especially
   smaller VMs like the ones used for kata containers. Also, on memory
   unplug, it is more reliable to free up and unplug multiple small chunks
   instead
   of one big chunk. E.g., if one page of a DIMM is either unmovable or
   pinned, the DIMM can't get unplugged. This approach is basically a
   compromise between DIMM-based memory hot(un)plug and balloon
   inflation/deflation, which works mostly on page granularity.

Q: Why care about memory blocks?
A: They are the way to tell user space about new memory. This way,
   memory can get onlined/offlined by user space. Also, e.g., kdump
   relies on udev events to reload kexec when memory blocks are
   onlined/offlined. Memory blocks are the "real" memory hot(un)plug
   granularity. Everything that's smaller has to be emulated "on top".

Q: Won't memory unplug of subblocks fragment memory?
A: Yes and no. Unplugging e.g., >=4MB subblocks on x86-64 will not really
   fragment memory the way unplugging random pages (like a balloon driver
   does) would. Buddy merging will not be limited. However, any allocation
   that requires
   bigger consecutive memory chunks (e.g., gigantic pages) might observe
   the fragmentation. Possible solutions: Allocate gigantic huge pages
   before unplugging memory, don't unplug memory, combine virtio-mem with
   DIMM based memory or bigger initial memory. Remember, a virtio-mem
   device will only unplug memory in the range it manages, not on other
   DIMMs. Unplug of single memory blocks will result in similar
   fragmentation with respect to gigantic huge pages.

Q: How reliable is memory unplug?
A: There are no guarantees on how much memory can get unplugged
   again. However, it is more likely to find 4MB chunks to unplug than
   e.g., 128MB chunks. If memory is terribly fragmented, there is nothing
   we can do - for now. I consider memory hotplug the primary use case
   of virtio-mem. Memory unplug might usually work, but we want to improve
   the performance and the amount of memory we can actually unplug later.

Q: Why not unplug from the ZONE_MOVABLE?
A: Unplugged memory chunks are unmovable. Unmovable data must not end up
   in ZONE_MOVABLE - similar to gigantic pages, which will never be
   allocated from ZONE_MOVABLE. Teaching MM to move unplugged chunks within
   a device might be problematic and will require a new guest->hypervisor
   command to move unplugged chunks. virtio-mem added memory can be onlined
   to the ZONE_MOVABLE, but subblocks will not get unplugged from it.

Q: How big should the initial (!virtio-mem) memory of a VM be?
A: virtio-mem memory will not go to the DMA zones. So to avoid running out
   of DMA memory, I suggest something like 2-3GB on x86-64. But many
   VMs can most probably deal with less DMA memory - depends on the use
   case.

[1] https://events.linuxfoundation.org/wp-content/uploads/2017/12/virtio-mem-Paravirtualized-Memory-David-Hildenbrand-Red-Hat-1.pdf
[2] https://lwn.net/Articles/755423/
[3] https://lists.gnu.org/archive/html/qemu-devel/2017-06/msg03870.html

---

David Hildenbrand (9):
  ACPI: NUMA: export pxm_to_node
  virtio-mem: Paravirtualized memory hotplug
  virtio-mem: Paravirtualized memory hotunplug part 1
  mm: Export alloc_contig_range() / free_contig_range()
  virtio-mem: Paravirtualized memory hotunplug part 2
  mm: Allow to offline PageOffline() pages with a reference count of 0
  virtio-mem: Allow to offline partially unplugged memory blocks
  mm/memory_hotplug: Introduce offline_and_remove_memory()
  virtio-mem: Offline and remove completely unplugged memory blocks

 drivers/acpi/numa.c             |    1 +
 drivers/virtio/Kconfig          |   18 +
 drivers/virtio/Makefile         |    1 +
 drivers/virtio/virtio_mem.c     | 1900 +++++++++++++++++++++++++++++++
 include/linux/memory_hotplug.h  |    1 +
 include/linux/page-flags.h      |    4 +
 include/linux/page-isolation.h  |    4 +-
 include/uapi/linux/virtio_ids.h |    1 +
 include/uapi/linux/virtio_mem.h |  204 ++++
 mm/memory_hotplug.c             |   44 +-
 mm/page_alloc.c                 |   24 +-
 mm/page_isolation.c             |   18 +-
 mm/swap.c                       |    9 +
 13 files changed, 2219 insertions(+), 10 deletions(-)
 create mode 100644 drivers/virtio/virtio_mem.c
 create mode 100644 include/uapi/linux/virtio_mem.h

-- 
2.21.0



* [PATCH RFC v3 1/9] ACPI: NUMA: export pxm_to_node
  2019-09-19 14:22 [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand
@ 2019-09-19 14:22 ` David Hildenbrand
  2019-09-23 10:13   ` David Hildenbrand
  2019-09-19 14:22 ` [PATCH RFC v3 2/9] virtio-mem: Paravirtualized memory hotplug David Hildenbrand
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-09-19 14:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, virtualization, Andrea Arcangeli, David Hildenbrand,
	Rafael J. Wysocki, Len Brown, linux-acpi

Will be needed by virtio-mem to identify the node from a pxm.

Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Len Brown <lenb@kernel.org>
Cc: linux-acpi@vger.kernel.org
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/acpi/numa.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/acpi/numa.c b/drivers/acpi/numa.c
index eadbf90e65d1..d5847fa7ac69 100644
--- a/drivers/acpi/numa.c
+++ b/drivers/acpi/numa.c
@@ -35,6 +35,7 @@ int pxm_to_node(int pxm)
 		return NUMA_NO_NODE;
 	return pxm_to_node_map[pxm];
 }
+EXPORT_SYMBOL(pxm_to_node);
 
 int node_to_pxm(int node)
 {
-- 
2.21.0



* [PATCH RFC v3 2/9] virtio-mem: Paravirtualized memory hotplug
  2019-09-19 14:22 [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand
  2019-09-19 14:22 ` [PATCH RFC v3 1/9] ACPI: NUMA: export pxm_to_node David Hildenbrand
@ 2019-09-19 14:22 ` David Hildenbrand
  2019-09-19 14:22 ` [PATCH RFC v3 3/9] virtio-mem: Paravirtualized memory hotunplug part 1 David Hildenbrand
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2019-09-19 14:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, virtualization, Andrea Arcangeli, David Hildenbrand,
	Michael S. Tsirkin, Jason Wang, Oscar Salvador, Michal Hocko,
	Igor Mammedov, Dave Young, Andrew Morton, Dan Williams,
	Pavel Tatashin, Stefan Hajnoczi, Vlastimil Babka

Each virtio-mem device owns exactly one memory region. It is responsible
for adding/removing memory from that memory region on request.

When the device driver starts up, the requested amount of memory is
queried and then plugged to Linux. On request, further memory can be
plugged or unplugged. This patch only implements the plugging part.

On x86-64, memory can currently be plugged in 4MB ("subblock") granularity.
When required, a new memory block will be added (e.g., usually 128MB on
x86-64) in order to plug more subblocks. Only x86-64 was tested for now.

The online_page callback is used to keep unplugged subblocks offline
when onlining memory - similar to the Hyper-V balloon driver. Unplugged
pages are marked PG_offline, to tell dump tools (e.g., makedumpfile) to
skip them.

User space is usually responsible for onlining the added memory. The
memory hotplug notifier is used to synchronize virtio-mem activity
against memory onlining/offlining.

Each virtio-mem device can belong to a NUMA node, which allows us to
easily add/remove small chunks of memory to/from a specific NUMA node by
using multiple virtio-mem devices. Something that works even when the
guest has no idea about the NUMA topology.

One way to view virtio-mem is as a "resizable DIMM" or a DIMM with many
"sub-DIMMs".

This patch directly introduces the basic infrastructure to implement memory
unplug. Especially the memory block states and subblock bitmaps will be
heavily used there.

Notes:
- Memory blocks that are partially unplugged must not be onlined to
  ZONE_MOVABLE. They contain unmovable parts and might e.g., result
  in warnings when trying to offline them.
- In the kdump kernel, we don't want to load the driver, to not mess
  with memory to be dumped.
- In case memory is to be onlined by user space, we limit the amount of
  offline memory blocks, to not run out of memory.
- Suspend/Hibernate is not supported due to the way virtio-mem devices
  behave. Limited support might be possible in the future.
- Reloading the device driver is not supported.
- When unloading the driver, we have to remove partially plugged offline
  memory blocks. Otherwise they could get fully onlined later on, when
  we no longer have any control via the online_page callback.
- As the hypervisor might suddenly be busy (during reboot, in-between
  requests, when adding of memory fails), we have to take care of some
  corner cases - especially virtio_mem_unplug_pending_mb() and
  virtio_mem_send_unplug_all_request(). The hypervisor could for example
  be busy if it is currently migrating the guest.

Future work:
- Reduce the amount of memory resources if that turns out to be an
  issue. Or try to speed up relevant code paths to deal with many
  resources.
- Allocate the vmemmap from the added memory. This makes hotplug more
  likely to succeed, keeps the vmemmap on the same NUMA node, and ensures
  that this unmovable memory will later not hinder unplug.

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/virtio/Kconfig          |   17 +
 drivers/virtio/Makefile         |    1 +
 drivers/virtio/virtio_mem.c     | 1580 +++++++++++++++++++++++++++++++
 include/uapi/linux/virtio_ids.h |    1 +
 include/uapi/linux/virtio_mem.h |  204 ++++
 5 files changed, 1803 insertions(+)
 create mode 100644 drivers/virtio/virtio_mem.c
 create mode 100644 include/uapi/linux/virtio_mem.h

diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index 078615cf2afc..294720d53057 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -64,6 +64,23 @@ config VIRTIO_BALLOON
 
 	 If unsure, say M.
 
+config VIRTIO_MEM
+	tristate "Virtio mem driver"
+	default m
+	depends on X86_64
+	depends on VIRTIO
+	depends on MEMORY_HOTPLUG_SPARSE
+	depends on MEMORY_HOTREMOVE
+	help
+	 This driver provides access to virtio-mem paravirtualized memory
+	 devices, allowing memory to be hotplugged and hotunplugged.
+
+	 This driver was only tested under x86-64, but should
+	 theoretically work on all architectures that support memory
+	 hotplug and hotremove.
+
+	 If unsure, say M.
+
 config VIRTIO_INPUT
 	tristate "Virtio input driver"
 	depends on VIRTIO
diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
index 3a2b5c5dcf46..906d5a00ac85 100644
--- a/drivers/virtio/Makefile
+++ b/drivers/virtio/Makefile
@@ -6,3 +6,4 @@ virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
 virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o
 obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
 obj-$(CONFIG_VIRTIO_INPUT) += virtio_input.o
+obj-$(CONFIG_VIRTIO_MEM) += virtio_mem.o
diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
new file mode 100644
index 000000000000..597fab4933f6
--- /dev/null
+++ b/drivers/virtio/virtio_mem.c
@@ -0,0 +1,1580 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Virtio-mem device driver.
+ *
+ * Copyright Red Hat, Inc. 2019
+ *
+ * Author(s): David Hildenbrand <david@redhat.com>
+ */
+
+#include <linux/virtio.h>
+#include <linux/virtio_mem.h>
+#include <linux/workqueue.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/memory_hotplug.h>
+#include <linux/memory.h>
+#include <linux/hrtimer.h>
+#include <linux/crash_dump.h>
+#include <linux/mutex.h>
+#include <linux/bitmap.h>
+
+#include <acpi/acpi_numa.h>
+
+enum virtio_mem_mb_state {
+	/* Unplugged, not added to Linux. Can be reused later. */
+	VIRTIO_MEM_MB_STATE_UNUSED = 0,
+	/* (Partially) plugged, not added to Linux. Error on add_memory(). */
+	VIRTIO_MEM_MB_STATE_PLUGGED,
+	/* Fully plugged, fully added to Linux, offline. */
+	VIRTIO_MEM_MB_STATE_OFFLINE,
+	/* Partially plugged, fully added to Linux, offline. */
+	VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL,
+	/* Fully plugged, fully added to Linux, online (!ZONE_MOVABLE). */
+	VIRTIO_MEM_MB_STATE_ONLINE,
+	/* Partially plugged, fully added to Linux, online (!ZONE_MOVABLE). */
+	VIRTIO_MEM_MB_STATE_ONLINE_PARTIAL,
+	/*
+	 * Fully plugged, fully added to Linux, online (ZONE_MOVABLE).
+	 * We are not allowed to allocate (unplug) parts of this block that
+	 * are not movable (similar to gigantic pages). We will never allow
+	 * to online OFFLINE_PARTIAL to ZONE_MOVABLE (as they would contain
+	 * unmovable parts).
+	 */
+	VIRTIO_MEM_MB_STATE_ONLINE_MOVABLE,
+	VIRTIO_MEM_MB_STATE_COUNT
+};
+
+struct virtio_mem {
+	struct virtio_device *vdev;
+
+	/* We might first have to unplug all memory when starting up. */
+	bool unplug_all_required;
+
+	/* Workqueue that processes the plug/unplug requests. */
+	struct work_struct wq;
+	atomic_t config_changed;
+
+	/* Virtqueue for guest->host requests. */
+	struct virtqueue *vq;
+
+	/* Wait for a host response to a guest request. */
+	wait_queue_head_t host_resp;
+
+	/* Space for one guest request and the host response. */
+	struct virtio_mem_req req;
+	struct virtio_mem_resp resp;
+
+	/* The current size of the device. */
+	uint64_t plugged_size;
+	/* The requested size of the device. */
+	uint64_t requested_size;
+
+	/* The device block size (for communicating with the device). */
+	uint32_t device_block_size;
+	/* The translated node id. NUMA_NO_NODE in case not specified. */
+	int nid;
+	/* Physical start address of the memory region. */
+	uint64_t addr;
+
+	/* The subblock size. */
+	uint32_t subblock_size;
+	/* The number of subblocks per memory block. */
+	uint32_t nb_sb_per_mb;
+
+	/* Id of the first memory block of this device. */
+	unsigned long first_mb_id;
+	/* Id of the last memory block of this device. */
+	unsigned long last_mb_id;
+	/* Id of the last usable memory block of this device. */
+	unsigned long last_usable_mb_id;
+	/* Id of the next memory block to prepare when needed. */
+	unsigned long next_mb_id;
+
+	/* Summary of all memory block states. */
+	unsigned long nb_mb_state[VIRTIO_MEM_MB_STATE_COUNT];
+#define VIRTIO_MEM_NB_OFFLINE_THRESHOLD		10
+
+	/*
+	 * One byte state per memory block.
+	 *
+	 * Allocated via vmalloc(). When preparing new blocks, resized
+	 * (alloc+copy+free) when needed, i.e., when the next mb would cross
+	 * a page boundary.
+	 *
+	 * With 128MB memory blocks, we have states for 512GB of memory on one
+	 * page.
+	 */
+	uint8_t *mb_state;
+
+	/*
+	 * $nb_sb_per_mb bit per memory block. Handled similar to mb_state.
+	 *
+	 * With 4MB subblocks, we manage 128GB of memory in one page.
+	 */
+	unsigned long *sb_bitmap;
+
+	/*
+	 * Mutex that protects the nb_mb_state, mb_state, and sb_bitmap.
+	 *
+	 * When this lock is held the pointers can't change, ONLINE and
+	 * OFFLINE blocks can't change the state and no subblocks will get
+	 * plugged.
+	 */
+	struct mutex hotplug_mutex;
+	bool hotplug_active;
+
+	/* An error occurred that we cannot handle - stop processing requests. */
+	bool broken;
+
+	/* The driver is being removed. */
+	spinlock_t removal_lock;
+	bool removing;
+
+	/* Timer for retrying to plug/unplug memory. */
+	struct hrtimer retry_timer;
+	ktime_t retry_interval;
+#define VIRTIO_MEM_RETRY_MS		10000
+
+	/* Memory notifier (online/offline events). */
+	struct notifier_block memory_notifier;
+
+	/* Next device in the list of virtio-mem devices. */
+	struct list_head next;
+};
+
+/*
+ * We have to share a single online_page callback among all virtio-mem
+ * devices. We use RCU to iterate the list in the callback.
+ */
+static DEFINE_MUTEX(virtio_mem_mutex);
+static LIST_HEAD(virtio_mem_devices);
+
+static void virtio_mem_online_page_cb(struct page *page, unsigned int order);
+
+/*
+ * Register a virtio-mem device so it will be considered for the online_page
+ * callback.
+ */
+static int register_virtio_mem_device(struct virtio_mem *vm)
+{
+	int rc = 0;
+
+	/* First device registers the callback. */
+	mutex_lock(&virtio_mem_mutex);
+	if (list_empty(&virtio_mem_devices))
+		rc = set_online_page_callback(&virtio_mem_online_page_cb);
+	if (!rc)
+		list_add_rcu(&vm->next, &virtio_mem_devices);
+	mutex_unlock(&virtio_mem_mutex);
+
+	return rc;
+}
+
+/*
+ * Unregister a virtio-mem device so it will no longer be considered for the
+ * online_page callback.
+ */
+static void unregister_virtio_mem_device(struct virtio_mem *vm)
+{
+	/* Last device unregisters the callback. */
+	mutex_lock(&virtio_mem_mutex);
+	list_del_rcu(&vm->next);
+	if (list_empty(&virtio_mem_devices))
+		restore_online_page_callback(&virtio_mem_online_page_cb);
+	mutex_unlock(&virtio_mem_mutex);
+
+	synchronize_rcu();
+}
+
+/*
+ * Calculate the memory block id of a given address.
+ */
+static unsigned long virtio_mem_phys_to_mb_id(unsigned long addr)
+{
+	return addr / memory_block_size_bytes();
+}
+
+/*
+ * Calculate the physical start address of a given memory block id.
+ */
+static unsigned long virtio_mem_mb_id_to_phys(unsigned long mb_id)
+{
+	return mb_id * memory_block_size_bytes();
+}
+
+/*
+ * Calculate the subblock id of a given address.
+ */
+static unsigned long virtio_mem_phys_to_sb_id(struct virtio_mem *vm,
+					      unsigned long addr)
+{
+	const unsigned long mb_id = virtio_mem_phys_to_mb_id(addr);
+	const unsigned long mb_addr = virtio_mem_mb_id_to_phys(mb_id);
+
+	return (addr - mb_addr) / vm->subblock_size;
+}
+
+/*
+ * Set the state of a memory block, taking care of the state counter.
+ */
+static void virtio_mem_mb_set_state(struct virtio_mem *vm, unsigned long mb_id,
+				    enum virtio_mem_mb_state state)
+{
+	const unsigned long idx = mb_id - vm->first_mb_id;
+	enum virtio_mem_mb_state old_state;
+
+	old_state = vm->mb_state[idx];
+	vm->mb_state[idx] = state;
+
+	BUG_ON(vm->nb_mb_state[old_state] == 0);
+	vm->nb_mb_state[old_state]--;
+	vm->nb_mb_state[state]++;
+}
+
+/*
+ * Get the state of a memory block, taking care of the state counter.
+ */
+static enum virtio_mem_mb_state virtio_mem_mb_get_state(struct virtio_mem *vm,
+							unsigned long mb_id)
+{
+	const unsigned long idx = mb_id - vm->first_mb_id;
+
+	return vm->mb_state[idx];
+}
+
+/*
+ * Prepare the state array for the next memory block.
+ */
+static int virtio_mem_mb_state_prepare_next_mb(struct virtio_mem *vm)
+{
+	unsigned long old_bytes = vm->next_mb_id - vm->first_mb_id + 1;
+	unsigned long new_bytes = vm->next_mb_id - vm->first_mb_id + 2;
+	int old_pages = PFN_UP(old_bytes);
+	int new_pages = PFN_UP(new_bytes);
+	uint8_t *new_mb_state;
+
+	if (vm->mb_state && old_pages == new_pages)
+		return 0;
+
+	new_mb_state = vzalloc(new_pages * PAGE_SIZE);
+	if (!new_mb_state)
+		return -ENOMEM;
+
+	mutex_lock(&vm->hotplug_mutex);
+	if (vm->mb_state)
+		memcpy(new_mb_state, vm->mb_state, old_pages * PAGE_SIZE);
+	vfree(vm->mb_state);
+	vm->mb_state = new_mb_state;
+	mutex_unlock(&vm->hotplug_mutex);
+
+	return 0;
+}
+
+#define virtio_mem_for_each_mb_state(_vm, _mb_id, _state) \
+	for (_mb_id = _vm->first_mb_id; \
+	     _mb_id < _vm->next_mb_id && _vm->nb_mb_state[_state]; \
+	     _mb_id++) \
+		if (virtio_mem_mb_get_state(_vm, _mb_id) == _state)
+
+/*
+ * Mark all selected subblocks plugged.
+ *
+ * Will not modify the state of the memory block.
+ */
+static void virtio_mem_mb_set_sb_plugged(struct virtio_mem *vm,
+					 unsigned long mb_id, int sb_id,
+					 int count)
+{
+	const int bit = (mb_id - vm->first_mb_id) * vm->nb_sb_per_mb + sb_id;
+
+	__bitmap_set(vm->sb_bitmap, bit, count);
+}
+
+/*
+ * Mark all selected subblocks unplugged.
+ *
+ * Will not modify the state of the memory block.
+ */
+static void virtio_mem_mb_set_sb_unplugged(struct virtio_mem *vm,
+					   unsigned long mb_id, int sb_id,
+					   int count)
+{
+	const int bit = (mb_id - vm->first_mb_id) * vm->nb_sb_per_mb + sb_id;
+
+	__bitmap_clear(vm->sb_bitmap, bit, count);
+}
+
+/*
+ * Test if all selected subblocks are plugged.
+ */
+static bool virtio_mem_mb_test_sb_plugged(struct virtio_mem *vm,
+					  unsigned long mb_id, int sb_id,
+					  int count)
+{
+	const int bit = (mb_id - vm->first_mb_id) * vm->nb_sb_per_mb + sb_id;
+
+	if (count == 1)
+		return test_bit(bit, vm->sb_bitmap);
+
+	/* TODO: Helper similar to bitmap_set() */
+	return find_next_zero_bit(vm->sb_bitmap, bit + count, bit) >=
+	       bit + count;
+}
+
+/*
+ * Find the first plugged subblock. Returns vm->nb_sb_per_mb in case there is
+ * none.
+ */
+static int virtio_mem_mb_first_plugged_sb(struct virtio_mem *vm,
+					  unsigned long mb_id)
+{
+	const int bit = (mb_id - vm->first_mb_id) * vm->nb_sb_per_mb;
+
+	return find_next_bit(vm->sb_bitmap, bit + vm->nb_sb_per_mb, bit) - bit;
+}
+
+/*
+ * Find the first unplugged subblock. Returns vm->nb_sb_per_mb in case there is
+ * none.
+ */
+static int virtio_mem_mb_first_unplugged_sb(struct virtio_mem *vm,
+					    unsigned long mb_id)
+{
+	const int bit = (mb_id - vm->first_mb_id) * vm->nb_sb_per_mb;
+
+	return find_next_zero_bit(vm->sb_bitmap, bit + vm->nb_sb_per_mb, bit) -
+	       bit;
+}
+
+/*
+ * Prepare the subblock bitmap for the next memory block.
+ */
+static int virtio_mem_sb_bitmap_prepare_next_mb(struct virtio_mem *vm)
+{
+	const unsigned long old_nb_mb = vm->next_mb_id - vm->first_mb_id;
+	const unsigned long old_nb_bits = old_nb_mb * vm->nb_sb_per_mb;
+	const unsigned long new_nb_bits = (old_nb_mb + 1) * vm->nb_sb_per_mb;
+	int old_pages = PFN_UP(BITS_TO_LONGS(old_nb_bits) * sizeof(long));
+	int new_pages = PFN_UP(BITS_TO_LONGS(new_nb_bits) * sizeof(long));
+	unsigned long *new_sb_bitmap, *old_sb_bitmap;
+
+	if (vm->sb_bitmap && old_pages == new_pages)
+		return 0;
+
+	new_sb_bitmap = vzalloc(new_pages * PAGE_SIZE);
+	if (!new_sb_bitmap)
+		return -ENOMEM;
+
+	mutex_lock(&vm->hotplug_mutex);
+	if (vm->sb_bitmap)
+		memcpy(new_sb_bitmap, vm->sb_bitmap, old_pages * PAGE_SIZE);
+
+	old_sb_bitmap = vm->sb_bitmap;
+	vm->sb_bitmap = new_sb_bitmap;
+	mutex_unlock(&vm->hotplug_mutex);
+
+	vfree(old_sb_bitmap);
+	return 0;
+}
+
+/*
+ * Try to add a memory block to Linux. This will usually only fail
+ * if out of memory.
+ *
+ * Must not be called with the vm->hotplug_mutex held (possible deadlock with
+ * onlining code).
+ *
+ * Will not modify the state of the memory block.
+ */
+static int virtio_mem_mb_add(struct virtio_mem *vm, unsigned long mb_id)
+{
+	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id);
+	int nid = vm->nid;
+
+	if (nid == NUMA_NO_NODE)
+		nid = memory_add_physaddr_to_nid(addr);
+
+	dev_dbg(&vm->vdev->dev, "adding memory block: %lu\n", mb_id);
+	return add_memory(nid, addr, memory_block_size_bytes());
+}
+
+/*
+ * Try to remove a memory block from Linux. Will only fail if the memory block
+ * is not offline.
+ *
+ * Must not be called with the vm->hotplug_mutex held (possible deadlock with
+ * onlining code).
+ *
+ * Will not modify the state of the memory block.
+ */
+static int virtio_mem_mb_remove(struct virtio_mem *vm, unsigned long mb_id)
+{
+	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id);
+	int nid = vm->nid;
+
+	if (nid == NUMA_NO_NODE)
+		nid = memory_add_physaddr_to_nid(addr);
+
+	dev_dbg(&vm->vdev->dev, "removing memory block: %lu\n", mb_id);
+	return remove_memory(nid, addr, memory_block_size_bytes());
+}
+
+/*
+ * Trigger the workqueue so the device can perform its magic.
+ */
+static void virtio_mem_retry(struct virtio_mem *vm)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&vm->removal_lock, flags);
+	if (!vm->removing)
+		queue_work(system_freezable_wq, &vm->wq);
+	spin_unlock_irqrestore(&vm->removal_lock, flags);
+}
+
+static int virtio_mem_translate_node_id(struct virtio_mem *vm, uint16_t node_id)
+{
+	int node = NUMA_NO_NODE;
+
+	if (IS_ENABLED(CONFIG_NUMA) && IS_ENABLED(CONFIG_ACPI_NUMA) &&
+	    virtio_has_feature(vm->vdev, VIRTIO_MEM_F_ACPI_PXM))
+		node = pxm_to_node(node_id);
+	return node;
+}
+
+/*
+ * Test if a virtio-mem device overlaps with the given range. Can be called
+ * from (notifier) callbacks lockless.
+ */
+static bool virtio_mem_overlaps_range(struct virtio_mem *vm,
+				      unsigned long start, unsigned long size)
+{
+	unsigned long dev_start = virtio_mem_mb_id_to_phys(vm->first_mb_id);
+	unsigned long dev_end = virtio_mem_mb_id_to_phys(vm->last_mb_id) +
+				memory_block_size_bytes();
+	unsigned long end = start + size;
+
+	return (start >= dev_start && start < dev_end) ||
+	       (end > dev_start && end <= dev_end) ||
+	       (start < dev_start && end > dev_end);
+}
+
+/*
+ * Test if a virtio-mem device owns a memory block. Can be called from
+ * (notifier) callbacks lockless.
+ */
+static bool virtio_mem_owned_mb(struct virtio_mem *vm, unsigned long mb_id)
+{
+	return mb_id >= vm->first_mb_id && mb_id <= vm->last_mb_id;
+}
+
+static int virtio_mem_notify_going_online(struct virtio_mem *vm,
+					  unsigned long mb_id,
+					  enum zone_type zone)
+{
+	switch (virtio_mem_mb_get_state(vm, mb_id)) {
+	case VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL:
+		/*
+		 * We won't allow to online a partially plugged memory block
+		 * to the MOVABLE zone - it would contain unmovable parts.
+		 */
+		if (zone == ZONE_MOVABLE) {
+			dev_warn_ratelimited(&vm->vdev->dev,
+					     "memory block has holes, MOVABLE not supported\n");
+			return NOTIFY_BAD;
+		}
+		return NOTIFY_OK;
+	case VIRTIO_MEM_MB_STATE_OFFLINE:
+		return NOTIFY_OK;
+	default:
+		break;
+	}
+	dev_warn_ratelimited(&vm->vdev->dev,
+			     "memory block onlining denied\n");
+	return NOTIFY_BAD;
+}
+
+static void virtio_mem_notify_offline(struct virtio_mem *vm,
+				      unsigned long mb_id)
+{
+	switch (virtio_mem_mb_get_state(vm, mb_id)) {
+	case VIRTIO_MEM_MB_STATE_ONLINE_PARTIAL:
+		virtio_mem_mb_set_state(vm, mb_id,
+					VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL);
+		break;
+	case VIRTIO_MEM_MB_STATE_ONLINE:
+	case VIRTIO_MEM_MB_STATE_ONLINE_MOVABLE:
+		virtio_mem_mb_set_state(vm, mb_id,
+					VIRTIO_MEM_MB_STATE_OFFLINE);
+		break;
+	default:
+		BUG();
+		break;
+	}
+}
+
+static void virtio_mem_notify_online(struct virtio_mem *vm, unsigned long mb_id,
+				     enum zone_type zone)
+{
+	unsigned long nb_offline;
+
+	switch (virtio_mem_mb_get_state(vm, mb_id)) {
+	case VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL:
+		BUG_ON(zone == ZONE_MOVABLE);
+		virtio_mem_mb_set_state(vm, mb_id,
+					VIRTIO_MEM_MB_STATE_ONLINE_PARTIAL);
+		break;
+	case VIRTIO_MEM_MB_STATE_OFFLINE:
+		if (zone == ZONE_MOVABLE)
+			virtio_mem_mb_set_state(vm, mb_id,
+						VIRTIO_MEM_MB_STATE_ONLINE_MOVABLE);
+		else
+			virtio_mem_mb_set_state(vm, mb_id,
+						VIRTIO_MEM_MB_STATE_ONLINE);
+		break;
+	default:
+		BUG();
+		break;
+	}
+	nb_offline = vm->nb_mb_state[VIRTIO_MEM_MB_STATE_OFFLINE] +
+		     vm->nb_mb_state[VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL];
+
+	/* see if we can add new blocks now that we onlined one block */
+	if (nb_offline == VIRTIO_MEM_NB_OFFLINE_THRESHOLD - 1)
+		virtio_mem_retry(vm);
+}
+
+/*
+ * This callback will either be called synchronously from add_memory() or
+ * asynchronously (e.g., triggered via user space). We have to be careful
+ * with locking when calling add_memory().
+ */
+static int virtio_mem_memory_notifier_cb(struct notifier_block *nb,
+					 unsigned long action, void *arg)
+{
+	struct virtio_mem *vm = container_of(nb, struct virtio_mem,
+					     memory_notifier);
+	struct memory_notify *mhp = arg;
+	const unsigned long start = PFN_PHYS(mhp->start_pfn);
+	const unsigned long size = PFN_PHYS(mhp->nr_pages);
+	const unsigned long mb_id = virtio_mem_phys_to_mb_id(start);
+	enum zone_type zone;
+	int rc = NOTIFY_OK;
+
+	/*
+	 * Memory is usually onlined/offlined in memory block granularity
+	 * (via device_online()/device_offline()). This means, we
+	 * cannot cross virtio-mem device boundaries and memory block
+	 * boundaries.
+	 *
+	 * Only obscure users (powernv/memtrace.c) play dirty games
+	 * manually with offline_pages() with multiple memory blocks -
+	 * let's NACK these requests in case virtio-mem is involved.
+	 */
+	if (size != memory_block_size_bytes() ||
+	    !IS_ALIGNED(start, memory_block_size_bytes())) {
+		if (virtio_mem_overlaps_range(vm, start, size))
+			return NOTIFY_BAD;
+		return NOTIFY_DONE;
+	}
+
+	if (!virtio_mem_owned_mb(vm, mb_id))
+		return NOTIFY_DONE;
+
+	switch (action) {
+	case MEM_GOING_OFFLINE:
+		spin_lock_irq(&vm->removal_lock);
+		if (vm->removing)
+			rc = notifier_to_errno(-EBUSY);
+		spin_unlock_irq(&vm->removal_lock);
+		if (rc == NOTIFY_OK) {
+			mutex_lock(&vm->hotplug_mutex);
+			vm->hotplug_active = true;
+		}
+		break;
+	case MEM_GOING_ONLINE:
+		spin_lock_irq(&vm->removal_lock);
+		if (vm->removing)
+			rc = notifier_to_errno(-EBUSY);
+		spin_unlock_irq(&vm->removal_lock);
+		if (rc == NOTIFY_OK) {
+			mutex_lock(&vm->hotplug_mutex);
+			vm->hotplug_active = true;
+			zone = page_zonenum(pfn_to_page(mhp->start_pfn));
+			rc = virtio_mem_notify_going_online(vm, mb_id, zone);
+		}
+		break;
+	case MEM_OFFLINE:
+		virtio_mem_notify_offline(vm, mb_id);
+		vm->hotplug_active = false;
+		mutex_unlock(&vm->hotplug_mutex);
+		break;
+	case MEM_ONLINE:
+		zone = page_zonenum(pfn_to_page(mhp->start_pfn));
+		virtio_mem_notify_online(vm, mb_id, zone);
+		vm->hotplug_active = false;
+		mutex_unlock(&vm->hotplug_mutex);
+		break;
+	case MEM_CANCEL_OFFLINE:
+	case MEM_CANCEL_ONLINE:
+		/* We might not get a MEM_GOING* if somebody else canceled */
+		if (vm->hotplug_active) {
+			vm->hotplug_active = false;
+			mutex_unlock(&vm->hotplug_mutex);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+/*
+ * Set a range of pages PG_offline.
+ */
+static void virtio_mem_set_fake_offline(unsigned long pfn,
+					unsigned int nr_pages)
+{
+	for (; nr_pages--; pfn++)
+		__SetPageOffline(pfn_to_page(pfn));
+}
+
+/*
+ * Clear PG_offline from a range of pages.
+ */
+static void virtio_mem_clear_fake_offline(unsigned long pfn,
+					  unsigned int nr_pages)
+{
+	for (; nr_pages--; pfn++)
+		__ClearPageOffline(pfn_to_page(pfn));
+}
+
+/*
+ * Release a range of fake-offline pages to the buddy, effectively
+ * fake-onlining them.
+ */
+static void virtio_mem_fake_online(unsigned long pfn, unsigned int nr_pages)
+{
+	const int order = MAX_ORDER - 1;
+	int i;
+
+	/*
+	 * We are always called with subblock granularity, which is at least
+	 * aligned to MAX_ORDER - 1. All pages in a subblock are either
+	 * reserved or not.
+	 */
+	BUG_ON(!IS_ALIGNED(pfn, 1 << order));
+	BUG_ON(!IS_ALIGNED(nr_pages, 1 << order));
+
+	virtio_mem_clear_fake_offline(pfn, nr_pages);
+
+	for (i = 0; i < nr_pages; i += 1 << order) {
+		struct page *page = pfn_to_page(pfn + i);
+
+		generic_online_page(page, order);
+	}
+}
+
+static void virtio_mem_online_page_cb(struct page *page, unsigned int order)
+{
+	const unsigned long addr = page_to_phys(page);
+	const unsigned long mb_id = virtio_mem_phys_to_mb_id(addr);
+	struct virtio_mem *vm;
+	int sb_id;
+
+	/*
+	 * We exploit here that subblocks have at least MAX_ORDER - 1
+	 * size/alignment and that this callback is called with such a
+	 * size/alignment. So we cannot cross subblocks and therefore
+	 * also not memory blocks.
+	 */
+	rcu_read_lock();
+	list_for_each_entry_rcu(vm, &virtio_mem_devices, next) {
+		if (!virtio_mem_owned_mb(vm, mb_id))
+			continue;
+
+		sb_id = virtio_mem_phys_to_sb_id(vm, addr);
+		/*
+		 * If plugged, online the pages, otherwise, set them fake
+		 * offline (PageOffline).
+		 */
+		if (virtio_mem_mb_test_sb_plugged(vm, mb_id, sb_id, 1))
+			generic_online_page(page, order);
+		else
+			virtio_mem_set_fake_offline(PFN_DOWN(addr), 1 << order);
+		rcu_read_unlock();
+		return;
+	}
+	rcu_read_unlock();
+
+	/* not virtio-mem memory, but e.g., a DIMM. online it */
+	generic_online_page(page, order);
+}
+
+static uint64_t virtio_mem_send_request(struct virtio_mem *vm,
+					const struct virtio_mem_req *req)
+{
+	struct scatterlist *sgs[2], sg_req, sg_resp;
+	unsigned int len;
+	int rc;
+
+	/* don't use the request residing on the stack (vaddr) */
+	vm->req = *req;
+
+	/* out: buffer for request */
+	sg_init_one(&sg_req, &vm->req, sizeof(vm->req));
+	sgs[0] = &sg_req;
+
+	/* in: buffer for response */
+	sg_init_one(&sg_resp, &vm->resp, sizeof(vm->resp));
+	sgs[1] = &sg_resp;
+
+	rc = virtqueue_add_sgs(vm->vq, sgs, 1, 1, vm, GFP_KERNEL);
+	if (rc < 0)
+		return rc;
+
+	virtqueue_kick(vm->vq);
+
+	/* wait for a response */
+	wait_event(vm->host_resp, virtqueue_get_buf(vm->vq, &len));
+
+	return virtio16_to_cpu(vm->vdev, vm->resp.type);
+}
+
+static int virtio_mem_send_plug_request(struct virtio_mem *vm, uint64_t addr,
+					uint64_t size)
+{
+	const uint64_t nb_vm_blocks = size / vm->device_block_size;
+	const struct virtio_mem_req req = {
+		.type = cpu_to_virtio16(vm->vdev, VIRTIO_MEM_REQ_PLUG),
+		.u.plug.addr = cpu_to_virtio64(vm->vdev, addr),
+		.u.plug.nb_blocks = cpu_to_virtio16(vm->vdev, nb_vm_blocks),
+	};
+
+	if (atomic_read(&vm->config_changed))
+		return -EAGAIN;
+
+	switch (virtio_mem_send_request(vm, &req)) {
+	case VIRTIO_MEM_RESP_ACK:
+		vm->plugged_size += size;
+		return 0;
+	case VIRTIO_MEM_RESP_NACK:
+		return -EAGAIN;
+	case VIRTIO_MEM_RESP_BUSY:
+		return -EBUSY;
+	case VIRTIO_MEM_RESP_ERROR:
+		return -EINVAL;
+	default:
+		return -ENOMEM;
+	}
+}
+
+static int virtio_mem_send_unplug_request(struct virtio_mem *vm, uint64_t addr,
+					  uint64_t size)
+{
+	const uint64_t nb_vm_blocks = size / vm->device_block_size;
+	const struct virtio_mem_req req = {
+		.type = cpu_to_virtio16(vm->vdev, VIRTIO_MEM_REQ_UNPLUG),
+		.u.unplug.addr = cpu_to_virtio64(vm->vdev, addr),
+		.u.unplug.nb_blocks = cpu_to_virtio16(vm->vdev, nb_vm_blocks),
+	};
+
+	if (atomic_read(&vm->config_changed))
+		return -EAGAIN;
+
+	switch (virtio_mem_send_request(vm, &req)) {
+	case VIRTIO_MEM_RESP_ACK:
+		vm->plugged_size -= size;
+		return 0;
+	case VIRTIO_MEM_RESP_BUSY:
+		return -EBUSY;
+	case VIRTIO_MEM_RESP_ERROR:
+		return -EINVAL;
+	default:
+		return -ENOMEM;
+	}
+}
+
+static int virtio_mem_send_unplug_all_request(struct virtio_mem *vm)
+{
+	const struct virtio_mem_req req = {
+		.type = cpu_to_virtio16(vm->vdev, VIRTIO_MEM_REQ_UNPLUG_ALL),
+	};
+
+	switch (virtio_mem_send_request(vm, &req)) {
+	case VIRTIO_MEM_RESP_ACK:
+		vm->unplug_all_required = false;
+		vm->plugged_size = 0;
+		/* usable region might have shrunk */
+		atomic_set(&vm->config_changed, 1);
+		return 0;
+	case VIRTIO_MEM_RESP_BUSY:
+		return -EBUSY;
+	default:
+		return -ENOMEM;
+	}
+}
+
+/*
+ * Plug selected subblocks. Updates the plugged state, but not the state
+ * of the memory block.
+ */
+static int virtio_mem_mb_plug_sb(struct virtio_mem *vm, unsigned long mb_id,
+				 int sb_id, int count)
+{
+	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id) +
+			      sb_id * vm->subblock_size;
+	const uint64_t size = count * vm->subblock_size;
+	int rc;
+
+	dev_dbg(&vm->vdev->dev, "plugging memory block: %lu : %i - %i\n", mb_id,
+		sb_id, sb_id + count - 1);
+
+	rc = virtio_mem_send_plug_request(vm, addr, size);
+	if (!rc)
+		virtio_mem_mb_set_sb_plugged(vm, mb_id, sb_id, count);
+	return rc;
+}
+
+/*
+ * Unplug selected subblocks. Updates the plugged state, but not the state
+ * of the memory block.
+ */
+static int virtio_mem_mb_unplug_sb(struct virtio_mem *vm, unsigned long mb_id,
+				   int sb_id, int count)
+{
+	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id) +
+			      sb_id * vm->subblock_size;
+	const uint64_t size = count * vm->subblock_size;
+	int rc;
+
+	dev_dbg(&vm->vdev->dev, "unplugging memory block: %lu : %i - %i\n",
+		mb_id, sb_id, sb_id + count - 1);
+
+	rc = virtio_mem_send_unplug_request(vm, addr, size);
+	if (!rc)
+		virtio_mem_mb_set_sb_unplugged(vm, mb_id, sb_id, count);
+	return rc;
+}
+
+/*
+ * Unplug the desired number of plugged subblocks of an offline or unadded
+ * memory block. Will fail if any subblock cannot get unplugged (instead of
+ * skipping it).
+ *
+ * Will not modify the state of the memory block.
+ *
+ * Note: can fail after some subblocks were unplugged.
+ */
+static int virtio_mem_mb_unplug_any_sb(struct virtio_mem *vm,
+				       unsigned long mb_id, uint64_t *nb_sb)
+{
+	int sb_id, count;
+	int rc;
+
+	while (*nb_sb) {
+		sb_id = virtio_mem_mb_first_plugged_sb(vm, mb_id);
+		if (sb_id >= vm->nb_sb_per_mb)
+			break;
+		count = 1;
+		while (count < *nb_sb &&
+		       sb_id + count  < vm->nb_sb_per_mb &&
+		       virtio_mem_mb_test_sb_plugged(vm, mb_id, sb_id + count,
+						     1))
+			count++;
+
+		rc = virtio_mem_mb_unplug_sb(vm, mb_id, sb_id, count);
+		if (rc)
+			return rc;
+		*nb_sb -= count;
+	}
+
+	return 0;
+}
+
+/*
+ * Unplug all plugged subblocks of an offline or unadded memory block.
+ *
+ * Will not modify the state of the memory block.
+ *
+ * Note: can fail after some subblocks were unplugged.
+ */
+static int virtio_mem_mb_unplug(struct virtio_mem *vm, unsigned long mb_id)
+{
+	uint64_t nb_sb = vm->nb_sb_per_mb;
+
+	return virtio_mem_mb_unplug_any_sb(vm, mb_id, &nb_sb);
+}
+
+/*
+ * Prepare tracking data for the next memory block.
+ */
+static int virtio_mem_prepare_next_mb(struct virtio_mem *vm,
+				      unsigned long *mb_id)
+{
+	int rc;
+
+	if (vm->next_mb_id > vm->last_usable_mb_id)
+		return -ENOSPC;
+
+	/* Resize the state array if required. */
+	rc = virtio_mem_mb_state_prepare_next_mb(vm);
+	if (rc)
+		return rc;
+
+	/* Resize the subblock bitmap if required. */
+	rc = virtio_mem_sb_bitmap_prepare_next_mb(vm);
+	if (rc)
+		return rc;
+
+	vm->nb_mb_state[VIRTIO_MEM_MB_STATE_UNUSED]++;
+	*mb_id = vm->next_mb_id++;
+	return 0;
+}
+
+/*
+ * Don't add too many blocks that are not onlined yet to avoid running OOM.
+ */
+static bool virtio_mem_too_many_mb_offline(struct virtio_mem *vm)
+{
+	unsigned long nb_offline;
+
+	nb_offline = vm->nb_mb_state[VIRTIO_MEM_MB_STATE_OFFLINE] +
+		     vm->nb_mb_state[VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL];
+	return nb_offline >= VIRTIO_MEM_NB_OFFLINE_THRESHOLD;
+}
+
+/*
+ * Try to plug the desired number of subblocks and add the memory block
+ * to Linux.
+ *
+ * Will modify the state of the memory block.
+ */
+static int virtio_mem_mb_plug_and_add(struct virtio_mem *vm,
+				      unsigned long mb_id,
+				      uint64_t *nb_sb)
+{
+	const int count = min_t(int, *nb_sb, vm->nb_sb_per_mb);
+	int rc, rc2;
+
+	if (WARN_ON_ONCE(!count))
+		return -EINVAL;
+
+	/*
+	 * Plug the requested number of subblocks before adding it to linux,
+	 * so that onlining will directly online all plugged subblocks.
+	 */
+	rc = virtio_mem_mb_plug_sb(vm, mb_id, 0, count);
+	if (rc)
+		return rc;
+
+	/*
+	 * Mark the block properly offline before adding it to Linux,
+	 * so the memory notifiers will find the block in the right state.
+	 */
+	if (count == vm->nb_sb_per_mb)
+		virtio_mem_mb_set_state(vm, mb_id,
+					VIRTIO_MEM_MB_STATE_OFFLINE);
+	else
+		virtio_mem_mb_set_state(vm, mb_id,
+					VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL);
+
+	/* Add the memory block to linux - if that fails, try to unplug. */
+	rc = virtio_mem_mb_add(vm, mb_id);
+	if (rc) {
+		enum virtio_mem_mb_state new_state = VIRTIO_MEM_MB_STATE_UNUSED;
+
+		dev_err(&vm->vdev->dev,
+			"adding memory block %lu failed with %d\n", mb_id, rc);
+		rc2 = virtio_mem_mb_unplug_sb(vm, mb_id, 0, count);
+
+		/*
+		 * TODO: Linux MM does not properly clean up yet in all cases
+		 * where adding of memory failed - especially on -ENOMEM.
+		 */
+		if (rc2)
+			new_state = VIRTIO_MEM_MB_STATE_PLUGGED;
+		virtio_mem_mb_set_state(vm, mb_id, new_state);
+		return rc;
+	}
+
+	*nb_sb -= count;
+	return 0;
+}
+
+/*
+ * Try to plug the desired number of subblocks of a memory block that
+ * is already added to Linux.
+ *
+ * Will modify the state of the memory block.
+ *
+ * Note: Can fail after some subblocks were successfully plugged.
+ */
+static int virtio_mem_mb_plug_any_sb(struct virtio_mem *vm, unsigned long mb_id,
+				     uint64_t *nb_sb, bool online)
+{
+	unsigned long pfn, nr_pages;
+	int sb_id, count;
+	int rc = 0;
+
+	if (WARN_ON_ONCE(!*nb_sb))
+		return -EINVAL;
+
+	while (*nb_sb) {
+		sb_id = virtio_mem_mb_first_unplugged_sb(vm, mb_id);
+		if (sb_id >= vm->nb_sb_per_mb)
+			break;
+		count = 1;
+		while (count < *nb_sb &&
+		       sb_id + count < vm->nb_sb_per_mb &&
+		       !virtio_mem_mb_test_sb_plugged(vm, mb_id, sb_id + count,
+						      1))
+			count++;
+
+		rc = virtio_mem_mb_plug_sb(vm, mb_id, sb_id, count);
+		if (rc)
+			return rc;
+		*nb_sb -= count;
+		if (!online)
+			continue;
+
+		/* fake-online the pages if the memory block is online */
+		pfn = PFN_DOWN(virtio_mem_mb_id_to_phys(mb_id) +
+			       sb_id * vm->subblock_size);
+		nr_pages = PFN_DOWN(count * vm->subblock_size);
+		virtio_mem_fake_online(pfn, nr_pages);
+	}
+
+	if (virtio_mem_mb_test_sb_plugged(vm, mb_id, 0, vm->nb_sb_per_mb)) {
+		if (online)
+			virtio_mem_mb_set_state(vm, mb_id,
+						VIRTIO_MEM_MB_STATE_ONLINE);
+		else
+			virtio_mem_mb_set_state(vm, mb_id,
+						VIRTIO_MEM_MB_STATE_OFFLINE);
+	}
+
+	return rc;
+}
+
+/*
+ * Try to plug the requested amount of memory.
+ */
+static int virtio_mem_plug_request(struct virtio_mem *vm, uint64_t diff)
+{
+	uint64_t nb_sb = diff / vm->subblock_size;
+	unsigned long mb_id;
+	int rc;
+
+	if (!nb_sb)
+		return 0;
+
+	/* Don't race with onlining/offlining */
+	mutex_lock(&vm->hotplug_mutex);
+
+	/* Try to plug subblocks of partially plugged online blocks. */
+	virtio_mem_for_each_mb_state(vm, mb_id,
+				     VIRTIO_MEM_MB_STATE_ONLINE_PARTIAL) {
+		rc = virtio_mem_mb_plug_any_sb(vm, mb_id, &nb_sb, true);
+		if (rc || !nb_sb)
+			goto out_unlock;
+		cond_resched();
+	}
+
+	/* Try to plug subblocks of partially plugged offline blocks. */
+	virtio_mem_for_each_mb_state(vm, mb_id,
+				     VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL) {
+		rc = virtio_mem_mb_plug_any_sb(vm, mb_id, &nb_sb, false);
+		if (rc || !nb_sb)
+			goto out_unlock;
+		cond_resched();
+	}
+
+	/*
+	 * We won't be working on online/offline memory blocks from this point,
+	 * so we can't race with memory onlining/offlining. Drop the mutex.
+	 */
+	mutex_unlock(&vm->hotplug_mutex);
+
+	/* Try to plug and add unused blocks */
+	virtio_mem_for_each_mb_state(vm, mb_id, VIRTIO_MEM_MB_STATE_UNUSED) {
+		if (virtio_mem_too_many_mb_offline(vm))
+			return -ENOSPC;
+
+		rc = virtio_mem_mb_plug_and_add(vm, mb_id, &nb_sb);
+		if (rc || !nb_sb)
+			return rc;
+		cond_resched();
+	}
+
+	/* Try to prepare, plug and add new blocks */
+	while (nb_sb) {
+		if (virtio_mem_too_many_mb_offline(vm))
+			return -ENOSPC;
+
+		rc = virtio_mem_prepare_next_mb(vm, &mb_id);
+		if (rc)
+			return rc;
+		rc = virtio_mem_mb_plug_and_add(vm, mb_id, &nb_sb);
+		if (rc)
+			return rc;
+		cond_resched();
+	}
+
+	return 0;
+out_unlock:
+	mutex_unlock(&vm->hotplug_mutex);
+	return rc;
+}
+
+/*
+ * Try to unplug all blocks that couldn't be unplugged before, for example,
+ * because the hypervisor was busy.
+ */
+static int virtio_mem_unplug_pending_mb(struct virtio_mem *vm)
+{
+	unsigned long mb_id;
+	int rc;
+
+	virtio_mem_for_each_mb_state(vm, mb_id, VIRTIO_MEM_MB_STATE_PLUGGED) {
+		rc = virtio_mem_mb_unplug(vm, mb_id);
+		if (rc)
+			return rc;
+		virtio_mem_mb_set_state(vm, mb_id, VIRTIO_MEM_MB_STATE_UNUSED);
+	}
+
+	return 0;
+}
+
+/*
+ * Update all parts of the config that could have changed.
+ */
+static void virtio_mem_refresh_config(struct virtio_mem *vm)
+{
+	const uint64_t phys_limit = 1UL << MAX_PHYSMEM_BITS;
+	uint64_t new_plugged_size, usable_region_size, end_addr;
+
+	/* the plugged_size is just a reflection of what _we_ did previously */
+	virtio_cread(vm->vdev, struct virtio_mem_config, plugged_size,
+		     &new_plugged_size);
+	if (WARN_ON_ONCE(new_plugged_size != vm->plugged_size))
+		vm->plugged_size = new_plugged_size;
+
+	/* calculate the last usable memory block id */
+	virtio_cread(vm->vdev, struct virtio_mem_config,
+		     usable_region_size, &usable_region_size);
+	end_addr = vm->addr + usable_region_size;
+	end_addr = min(end_addr, phys_limit);
+	vm->last_usable_mb_id = virtio_mem_phys_to_mb_id(end_addr) - 1;
+
+	/* see if there is a request to change the size */
+	virtio_cread(vm->vdev, struct virtio_mem_config, requested_size,
+		     &vm->requested_size);
+
+	dev_info(&vm->vdev->dev, "plugged size: 0x%llx", vm->plugged_size);
+	dev_info(&vm->vdev->dev, "requested size: 0x%llx", vm->requested_size);
+}
+
+/*
+ * Workqueue function for handling plug/unplug requests and config updates.
+ */
+static void virtio_mem_run_wq(struct work_struct *work)
+{
+	struct virtio_mem *vm = container_of(work, struct virtio_mem, wq);
+	uint64_t diff;
+	int rc = 0;
+
+	hrtimer_cancel(&vm->retry_timer);
+
+	if (vm->broken)
+		return;
+
+	/* Make sure we start with a clean state if there are leftovers. */
+	if (unlikely(vm->unplug_all_required))
+		rc = virtio_mem_send_unplug_all_request(vm);
+
+config_update:
+	if (atomic_read(&vm->config_changed)) {
+		atomic_set(&vm->config_changed, 0);
+		virtio_mem_refresh_config(vm);
+	}
+
+	/* Unplug any leftovers from previous runs */
+	if (!rc)
+		rc = virtio_mem_unplug_pending_mb(vm);
+
+	if (!rc && vm->requested_size != vm->plugged_size) {
+		if (vm->requested_size > vm->plugged_size) {
+			diff = vm->requested_size - vm->plugged_size;
+			rc = virtio_mem_plug_request(vm, diff);
+		}
+		/* TODO: try to unplug memory */
+	}
+
+	switch (rc) {
+	case 0:
+		break;
+	case -ENOSPC:
+		/*
+		 * We cannot add any more memory (alignment, physical limit)
+		 * or we have too many offline memory blocks.
+		 */
+		break;
+	case -EBUSY:
+		/*
+		 * The hypervisor cannot process our request right now
+		 * (e.g., out of memory, migrating).
+		 */
+	case -ENOMEM:
+		/* Out of memory, try again later. */
+		hrtimer_start(&vm->retry_timer, vm->retry_interval,
+			      HRTIMER_MODE_REL);
+		/*
+		 * TODO: more advanced retry handling. For example stop
+		 * after X attempts or increase the waiting time between
+		 * retries.
+		 */
+		break;
+	case -EAGAIN:
+		/* Retry immediately (e.g., the config changed). */
+		goto config_update;
+	default:
+		/* Unknown error, mark as broken */
+		dev_err(&vm->vdev->dev,
+			"unknown error, marking device broken: %d\n", rc);
+		vm->broken = true;
+	}
+}
+
+static enum hrtimer_restart virtio_mem_timer_expired(struct hrtimer *timer)
+{
+	struct virtio_mem *vm = container_of(timer, struct virtio_mem,
+					     retry_timer);
+
+	virtio_mem_retry(vm);
+	return HRTIMER_NORESTART;
+}
+
+static void virtio_mem_handle_response(struct virtqueue *vq)
+{
+	struct virtio_mem *vm = vq->vdev->priv;
+
+	wake_up(&vm->host_resp);
+}
+
+static int virtio_mem_init_vq(struct virtio_mem *vm)
+{
+	struct virtqueue *vq;
+
+	vq = virtio_find_single_vq(vm->vdev, virtio_mem_handle_response,
+				   "guest-request");
+	if (IS_ERR(vq))
+		return PTR_ERR(vq);
+	vm->vq = vq;
+
+	return 0;
+}
+
+/*
+ * Test if any memory in the range is present in Linux.
+ */
+static bool virtio_mem_any_memory_present(unsigned long start,
+					  unsigned long size)
+{
+	const unsigned long start_pfn = PFN_DOWN(start);
+	const unsigned long end_pfn = PFN_UP(start + size);
+	unsigned long pfn;
+
+	for (pfn = start_pfn; pfn != end_pfn; pfn++)
+		if (present_section_nr(pfn_to_section_nr(pfn)))
+			return true;
+
+	return false;
+}
+
+static int virtio_mem_init(struct virtio_mem *vm)
+{
+	const uint64_t phys_limit = 1UL << MAX_PHYSMEM_BITS;
+	uint64_t region_size;
+	uint16_t node_id;
+
+	if (!vm->vdev->config->get) {
+		dev_err(&vm->vdev->dev, "config access disabled\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * We don't want to (un)plug or reuse any memory when in kdump. The
+	 * memory is still accessible (but not mapped).
+	 */
+	if (is_kdump_kernel()) {
+		dev_warn(&vm->vdev->dev, "disabled in kdump kernel\n");
+		return -EBUSY;
+	}
+
+	/* Fetch all properties that can't change. */
+	virtio_cread(vm->vdev, struct virtio_mem_config, plugged_size,
+		     &vm->plugged_size);
+	virtio_cread(vm->vdev, struct virtio_mem_config, block_size,
+		     &vm->device_block_size);
+	virtio_cread(vm->vdev, struct virtio_mem_config, node_id,
+		     &node_id);
+	vm->nid = virtio_mem_translate_node_id(vm, node_id);
+	virtio_cread(vm->vdev, struct virtio_mem_config, addr, &vm->addr);
+	virtio_cread(vm->vdev, struct virtio_mem_config, region_size,
+		     &region_size);
+
+	/*
+	 * If we still have memory plugged, we might have to unplug all
+	 * memory first. However, if somebody simply unloaded the driver
+	 * we would have to reinitialize the old state - something we don't
+	 * support yet. Detect if we have any memory in the area present.
+	 */
+	if (vm->plugged_size) {
+		uint64_t usable_region_size;
+
+		virtio_cread(vm->vdev, struct virtio_mem_config,
+			     usable_region_size, &usable_region_size);
+
+		if (virtio_mem_any_memory_present(vm->addr,
+						  usable_region_size)) {
+			dev_err(&vm->vdev->dev,
+				"reloading the driver is not supported\n");
+			return -EINVAL;
+		}
+		/*
+		 * Note: it might happen that the device is busy and
+		 * unplugging all memory might take some time.
+		 */
+		dev_info(&vm->vdev->dev, "unplugging all memory required\n");
+		vm->unplug_all_required = 1;
+	}
+
+	/*
+	 * We always hotplug memory in memory block granularity. This way,
+	 * we have to wait for exactly one memory block to online.
+	 */
+	if (vm->device_block_size > memory_block_size_bytes()) {
+		dev_err(&vm->vdev->dev,
+			"The block size is not supported (too big).\n");
+		return -EINVAL;
+	}
+
+	/* bad device setup - warn only */
+	if (!IS_ALIGNED(vm->addr, memory_block_size_bytes()))
+		dev_warn(&vm->vdev->dev,
+			 "The alignment of the physical start address can make some memory unusable.\n");
+	if (!IS_ALIGNED(vm->addr + region_size, memory_block_size_bytes()))
+		dev_warn(&vm->vdev->dev,
+			 "The alignment of the physical end address can make some memory unusable.\n");
+	if (vm->addr + region_size > phys_limit)
+		dev_warn(&vm->vdev->dev,
+			 "Some memory is not addressable. This can make some memory unusable.\n");
+
+	/*
+	 * Calculate the subblock size:
+	 * - At least MAX_ORDER - 1 / pageblock_order.
+	 * - At least the device block size.
+	 * In the worst case, a single subblock per memory block.
+	 */
+	vm->subblock_size = PAGE_SIZE * 1u << max_t(uint32_t, MAX_ORDER - 1,
+						    pageblock_order);
+	vm->subblock_size = max_t(uint32_t, vm->device_block_size,
+				  vm->subblock_size);
+	vm->nb_sb_per_mb = memory_block_size_bytes() / vm->subblock_size;
+
+	/* Round up to the next full memory block */
+	vm->first_mb_id = virtio_mem_phys_to_mb_id(vm->addr - 1 +
+						   memory_block_size_bytes());
+	vm->next_mb_id = vm->first_mb_id;
+	vm->last_mb_id = virtio_mem_phys_to_mb_id(vm->addr + region_size) - 1;
+
+	dev_info(&vm->vdev->dev, "start address: 0x%llx", vm->addr);
+	dev_info(&vm->vdev->dev, "region size: 0x%llx", region_size);
+	dev_info(&vm->vdev->dev, "device block size: 0x%x",
+		 vm->device_block_size);
+	dev_info(&vm->vdev->dev, "memory block size: 0x%lx",
+		 memory_block_size_bytes());
+	dev_info(&vm->vdev->dev, "subblock size: 0x%x",
+		 vm->subblock_size);
+
+	return 0;
+}
+
+static int virtio_mem_probe(struct virtio_device *vdev)
+{
+	struct virtio_mem *vm;
+	int rc = -EINVAL;
+
+	vdev->priv = vm = kzalloc(sizeof(*vm), GFP_KERNEL);
+	if (!vm)
+		return -ENOMEM;
+
+	init_waitqueue_head(&vm->host_resp);
+	vm->vdev = vdev;
+	INIT_WORK(&vm->wq, virtio_mem_run_wq);
+	mutex_init(&vm->hotplug_mutex);
+	/*
+	 * Avoid circular locking lockdep warnings. We lock the mutex
+	 * e.g., in MEM_GOING_ONLINE and unlock it in MEM_ONLINE. The
+	 * blocking_notifier_call_chain() has its own lock, which gets unlocked
+	 * between both notifier calls.
+	 */
+	lockdep_set_novalidate_class(&vm->hotplug_mutex);
+	INIT_LIST_HEAD(&vm->next);
+	spin_lock_init(&vm->removal_lock);
+	hrtimer_init(&vm->retry_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	vm->retry_timer.function = virtio_mem_timer_expired;
+	vm->retry_interval = ms_to_ktime(VIRTIO_MEM_RETRY_MS);
+
+	/* register the virtqueue */
+	rc = virtio_mem_init_vq(vm);
+	if (rc)
+		goto out_free_vm;
+
+	/* initialize the device by querying the config */
+	rc = virtio_mem_init(vm);
+	if (rc)
+		goto out_del_vq;
+
+	/* register callbacks */
+	vm->memory_notifier.notifier_call = virtio_mem_memory_notifier_cb;
+	rc = register_memory_notifier(&vm->memory_notifier);
+	if (rc)
+		goto out_del_vq;
+	rc = register_virtio_mem_device(vm);
+	if (rc)
+		goto out_unreg_mem;
+
+	virtio_device_ready(vdev);
+
+	/* trigger a config update to start processing the requested_size */
+	atomic_set(&vm->config_changed, 1);
+	queue_work(system_freezable_wq, &vm->wq);
+
+	return 0;
+out_unreg_mem:
+	unregister_memory_notifier(&vm->memory_notifier);
+out_del_vq:
+	vdev->config->del_vqs(vdev);
+out_free_vm:
+	kfree(vm);
+	vdev->priv = NULL;
+
+	return rc;
+}
+
+static void virtio_mem_remove(struct virtio_device *vdev)
+{
+	struct virtio_mem *vm = vdev->priv;
+	unsigned long mb_id;
+	int rc;
+
+	/*
+	 * Make sure the workqueue won't be triggered anymore and no memory
+	 * blocks can be onlined/offlined until we're finished here.
+	 */
+	spin_lock_irq(&vm->removal_lock);
+	vm->removing = true;
+	spin_unlock_irq(&vm->removal_lock);
+
+	/* wait until the workqueue stopped */
+	cancel_work_sync(&vm->wq);
+	hrtimer_cancel(&vm->retry_timer);
+
+	mutex_lock(&vm->hotplug_mutex);
+	/*
+	 * After we unregistered our callbacks, user space can online partially
+	 * plugged offline blocks. Make sure to remove them.
+	 */
+	virtio_mem_for_each_mb_state(vm, mb_id,
+				     VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL) {
+		mutex_unlock(&vm->hotplug_mutex);
+		rc = virtio_mem_mb_remove(vm, mb_id);
+		BUG_ON(rc);
+		mutex_lock(&vm->hotplug_mutex);
+	}
+	mutex_unlock(&vm->hotplug_mutex);
+
+	/* unregister callbacks */
+	unregister_virtio_mem_device(vm);
+	unregister_memory_notifier(&vm->memory_notifier);
+
+	/*
+	 * There is no way we could deterministically remove all memory we have
+	 * added to the system. And there is no way to stop the driver/device
+	 * from going away.
+	 */
+	if (vm->plugged_size)
+		dev_warn(&vdev->dev, "device still has memory plugged\n");
+
+	/* remove all tracking data - no locking needed */
+	vfree(vm->mb_state);
+	vfree(vm->sb_bitmap);
+
+	/* reset the device and cleanup the queues */
+	vdev->config->reset(vdev);
+	vdev->config->del_vqs(vdev);
+
+	kfree(vm);
+	vdev->priv = NULL;
+}
+
+static void virtio_mem_config_changed(struct virtio_device *vdev)
+{
+	struct virtio_mem *vm = vdev->priv;
+
+	atomic_set(&vm->config_changed, 1);
+	virtio_mem_retry(vm);
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int virtio_mem_freeze(struct virtio_device *vdev)
+{
+	/*
+	 * When restarting the VM, all memory is usually unplugged. Don't
+	 * allow to suspend/hibernate.
+	 */
+	dev_err(&vdev->dev, "save/restore not supported.\n");
+	return -EPERM;
+}
+
+static int virtio_mem_restore(struct virtio_device *vdev)
+{
+	return -EPERM;
+}
+#endif
+
+static unsigned int virtio_mem_features[] = {
+#if defined(CONFIG_NUMA) && defined(CONFIG_ACPI_NUMA)
+	VIRTIO_MEM_F_ACPI_PXM,
+#endif
+};
+
+static struct virtio_device_id virtio_mem_id_table[] = {
+	{ VIRTIO_ID_MEM, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static struct virtio_driver virtio_mem_driver = {
+	.feature_table = virtio_mem_features,
+	.feature_table_size = ARRAY_SIZE(virtio_mem_features),
+	.driver.name = KBUILD_MODNAME,
+	.driver.owner = THIS_MODULE,
+	.id_table = virtio_mem_id_table,
+	.probe = virtio_mem_probe,
+	.remove = virtio_mem_remove,
+	.config_changed = virtio_mem_config_changed,
+#ifdef CONFIG_PM_SLEEP
+	.freeze	=	virtio_mem_freeze,
+	.restore =	virtio_mem_restore,
+#endif
+};
+
+module_virtio_driver(virtio_mem_driver);
+MODULE_DEVICE_TABLE(virtio, virtio_mem_id_table);
+MODULE_AUTHOR("David Hildenbrand <david@redhat.com>");
+MODULE_DESCRIPTION("Virtio-mem driver");
+MODULE_LICENSE("GPL");
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index 585e07b27333..934e0b444454 100644
--- a/include/uapi/linux/virtio_ids.h
+++ b/include/uapi/linux/virtio_ids.h
@@ -44,6 +44,7 @@
 #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
 #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
 #define VIRTIO_ID_IOMMU        23 /* virtio IOMMU */
+#define VIRTIO_ID_MEM          24 /* virtio mem */
 #define VIRTIO_ID_FS           26 /* virtio filesystem */
 #define VIRTIO_ID_PMEM         27 /* virtio pmem */
 
diff --git a/include/uapi/linux/virtio_mem.h b/include/uapi/linux/virtio_mem.h
new file mode 100644
index 000000000000..bac73fca6c0e
--- /dev/null
+++ b/include/uapi/linux/virtio_mem.h
@@ -0,0 +1,204 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/*
+ * Virtio Mem Device
+ *
+ * Copyright Red Hat, Inc. 2019
+ *
+ * Authors:
+ *     David Hildenbrand <david@redhat.com>
+ *
+ * This header is BSD licensed so anyone can use the definitions
+ * to implement compatible drivers/servers:
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of IBM nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+ * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#ifndef _LINUX_VIRTIO_MEM_H
+#define _LINUX_VIRTIO_MEM_H
+
+#include <linux/types.h>
+#include <linux/virtio_types.h>
+#include <linux/virtio_ids.h>
+#include <linux/virtio_config.h>
+
+/*
+ * Each virtio-mem device manages a dedicated region in physical address
+ * space. Each device can belong to a single NUMA node, multiple devices
+ * for a single NUMA node are possible. A virtio-mem device is like a
+ * "resizable DIMM" consisting of small memory blocks that can be plugged
+ * or unplugged. The device driver is responsible for (un)plugging memory
+ * blocks on demand.
+ *
+ * Virtio-mem devices can only operate on their assigned memory region in
+ * order to (un)plug memory. A device cannot (un)plug memory belonging to
+ * other devices.
+ *
+ * The "region_size" corresponds to the maximum amount of memory that can
+ * be provided by a device. The "size" corresponds to the amount of memory
+ * that is currently plugged. "requested_size" corresponds to a request
+ * from the device to the device driver to (un)plug blocks. The
+ * device driver should try to (un)plug blocks in order to reach the
+ * "requested_size". It is impossible to plug more memory than requested.
+ *
+ * The "usable_region_size" represents the memory region that can actually
+ * be used to (un)plug memory. It is always at least as big as the
+ * "requested_size" and will grow dynamically. It will only shrink when
+ * explicitly triggered (VIRTIO_MEM_REQ_UNPLUG).
+ *
+ * Memory in the usable region can usually be read, however, there are no
+ * guarantees. It can happen that the device cannot process a request,
+ * because it is busy. The device driver has to retry later.
+ *
+ * Usually, during system resets all memory will get unplugged, so the
+ * device driver can start with a clean state. However, in specific
+ * scenarios (if the device is busy) it can happen that the device still
+ * has memory plugged. The device driver can request to unplug all memory
+ * (VIRTIO_MEM_REQ_UNPLUG) - which might take a while to succeed if the
+ * device is busy.
+ */
+
+/* --- virtio-mem: feature bits --- */
+
+/* node_id is an ACPI PXM and is valid */
+#define VIRTIO_MEM_F_ACPI_PXM		0
+
+
+/* --- virtio-mem: guest -> host requests --- */
+
+/* request to plug memory blocks */
+#define VIRTIO_MEM_REQ_PLUG			0
+/* request to unplug memory blocks */
+#define VIRTIO_MEM_REQ_UNPLUG			1
+/* request to unplug all blocks and shrink the usable size */
+#define VIRTIO_MEM_REQ_UNPLUG_ALL		2
+/* request information about the plugged state of memory blocks */
+#define VIRTIO_MEM_REQ_STATE			3
+
+struct virtio_mem_req_plug {
+	__virtio64 addr;
+	__virtio16 nb_blocks;
+};
+
+struct virtio_mem_req_unplug {
+	__virtio64 addr;
+	__virtio16 nb_blocks;
+};
+
+struct virtio_mem_req_state {
+	__virtio64 addr;
+	__virtio16 nb_blocks;
+};
+
+struct virtio_mem_req {
+	__virtio16 type;
+	__virtio16 padding[3];
+
+	union {
+		struct virtio_mem_req_plug plug;
+		struct virtio_mem_req_unplug unplug;
+		struct virtio_mem_req_state state;
+	} u;
+};
+
+
+/* --- virtio-mem: host -> guest response --- */
+
+/*
+ * Request processed successfully, applicable for
+ * - VIRTIO_MEM_REQ_PLUG
+ * - VIRTIO_MEM_REQ_UNPLUG
+ * - VIRTIO_MEM_REQ_UNPLUG_ALL
+ * - VIRTIO_MEM_REQ_STATE
+ */
+#define VIRTIO_MEM_RESP_ACK			0
+/*
+ * Request denied - e.g. trying to plug more than requested, applicable for
+ * - VIRTIO_MEM_REQ_PLUG
+ */
+#define VIRTIO_MEM_RESP_NACK			1
+/*
+ * Request cannot be processed right now, try again later, applicable for
+ * - VIRTIO_MEM_REQ_PLUG
+ * - VIRTIO_MEM_REQ_UNPLUG
+ * - VIRTIO_MEM_REQ_UNPLUG_ALL
+ */
+#define VIRTIO_MEM_RESP_BUSY			2
+/*
+ * Error in request (e.g. addresses/alignment), applicable for
+ * - VIRTIO_MEM_REQ_PLUG
+ * - VIRTIO_MEM_REQ_UNPLUG
+ * - VIRTIO_MEM_REQ_STATE
+ */
+#define VIRTIO_MEM_RESP_ERROR			3
+
+
+/* State of memory blocks is "plugged" */
+#define VIRTIO_MEM_STATE_PLUGGED		0
+/* State of memory blocks is "unplugged" */
+#define VIRTIO_MEM_STATE_UNPLUGGED		1
+/* State of memory blocks is "mixed" */
+#define VIRTIO_MEM_STATE_MIXED			2
+
+struct virtio_mem_resp_state {
+	__virtio16 state;
+};
+
+struct virtio_mem_resp {
+	__virtio16 type;
+	__virtio16 padding[3];
+
+	union {
+		struct virtio_mem_resp_state state;
+	} u;
+};
+
+/* --- virtio-mem: configuration --- */
+
+struct virtio_mem_config {
+	/* Block size and alignment. Cannot change. */
+	uint32_t block_size;
+	/* Valid with VIRTIO_MEM_F_ACPI_PXM. Cannot change. */
+	uint16_t node_id;
+	uint16_t padding;
+	/* Start address of the memory region. Cannot change. */
+	uint64_t addr;
+	/* Region size (maximum). Cannot change. */
+	uint64_t region_size;
+	/*
+	 * Currently usable region size. Can grow up to region_size. Can
+	 * shrink due to VIRTIO_MEM_REQ_UNPLUG_ALL (in which case no config
+	 * update will be sent).
+	 */
+	uint64_t usable_region_size;
+	/*
+	 * Currently used size. Changes due to plug/unplug requests, but no
+	 * config updates will be sent.
+	 */
+	uint64_t plugged_size;
+	/* Requested size. New plug requests cannot exceed it. Can change. */
+	uint64_t requested_size;
+};
+
+#endif /* _LINUX_VIRTIO_MEM_H */
-- 
2.21.0
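
(Illustration only, not part of the patch: given the request/response
definitions in the header above, a guest driver holding a
struct virtio_device *vdev would build a PLUG request for two device
blocks at a made-up address roughly as below; the device then answers
with a struct virtio_mem_resp of type VIRTIO_MEM_RESP_ACK, _NACK,
_BUSY or _ERROR.)

    struct virtio_mem_req req = {
        .type = cpu_to_virtio16(vdev, VIRTIO_MEM_REQ_PLUG),
        .u.plug.addr = cpu_to_virtio64(vdev, 0x140000000ULL), /* example address */
        .u.plug.nb_blocks = cpu_to_virtio16(vdev, 2),
    };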


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH RFC v3 3/9] virtio-mem: Paravirtualized memory hotunplug part 1
  2019-09-19 14:22 [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand
  2019-09-19 14:22 ` [PATCH RFC v3 1/9] ACPI: NUMA: export pxm_to_node David Hildenbrand
  2019-09-19 14:22 ` [PATCH RFC v3 2/9] virtio-mem: Paravirtualized memory hotplug David Hildenbrand
@ 2019-09-19 14:22 ` David Hildenbrand
  2019-09-19 14:22 ` [PATCH RFC v3 4/9] mm: Export alloc_contig_range() / free_contig_range() David Hildenbrand
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2019-09-19 14:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, virtualization, Andrea Arcangeli, David Hildenbrand,
	Michael S. Tsirkin, Jason Wang, Oscar Salvador, Michal Hocko,
	Igor Mammedov, Dave Young, Andrew Morton, Dan Williams,
	Pavel Tatashin, Stefan Hajnoczi, Vlastimil Babka

Unplugging subblocks of memory blocks that are offline is easy. All we
have to do is watch out for concurrent onlining activity.

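Condensed (illustration only, not a literal excerpt of the code below):
the whole walk over offline blocks happens under the hotplug_mutex, which
the memory notifier also takes in MEM_GOING_ONLINE, so a block cannot get
onlined while its subblocks are being unplugged:

    mutex_lock(&vm->hotplug_mutex);
    virtio_mem_for_each_mb_state(vm, mb_id, VIRTIO_MEM_MB_STATE_OFFLINE) {
        rc = virtio_mem_mb_unplug_any_sb_offline(vm, mb_id, &nb_sb);
        if (rc || !nb_sb)
            break;
    }
    mutex_unlock(&vm->hotplug_mutex);
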
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/virtio/virtio_mem.c | 109 +++++++++++++++++++++++++++++++++++-
 1 file changed, 107 insertions(+), 2 deletions(-)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index 597fab4933f6..6fb55d4b6f6c 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -119,7 +119,7 @@ struct virtio_mem {
 	 *
 	 * When this lock is held the pointers can't change, ONLINE and
 	 * OFFLINE blocks can't change the state and no subblocks will get
-	 * plugged.
+	 * plugged/unplugged.
 	 */
 	struct mutex hotplug_mutex;
 	bool hotplug_active;
@@ -322,6 +322,19 @@ static bool virtio_mem_mb_test_sb_plugged(struct virtio_mem *vm,
 	       bit + count;
 }
 
+/*
+ * Test if all selected subblocks are unplugged.
+ */
+static bool virtio_mem_mb_test_sb_unplugged(struct virtio_mem *vm,
+					    unsigned long mb_id, int sb_id,
+					    int count)
+{
+	const int bit = (mb_id - vm->first_mb_id) * vm->nb_sb_per_mb + sb_id;
+
+	/* TODO: Helper similar to bitmap_set() */
+	return find_next_bit(vm->sb_bitmap, bit + count, bit) >= bit + count;
+}
+
 /*
  * Find the first plugged subblock. Returns vm->nb_sb_per_mb in case there is
  * none.
@@ -512,6 +525,9 @@ static void virtio_mem_notify_offline(struct virtio_mem *vm,
 		BUG();
 		break;
 	}
+
+	/* trigger the workqueue, maybe we can now unplug memory. */
+	virtio_mem_retry(vm);
 }
 
 static void virtio_mem_notify_online(struct virtio_mem *vm, unsigned long mb_id,
@@ -1129,6 +1145,93 @@ static int virtio_mem_plug_request(struct virtio_mem *vm, uint64_t diff)
 	return rc;
 }
 
+/*
+ * Unplug the desired number of plugged subblocks of an offline memory block.
+ * Will fail if any subblock cannot get unplugged (instead of skipping it).
+ *
+ * Will modify the state of the memory block. Might temporarily drop the
+ * hotplug_mutex.
+ *
+ * Note: Can fail after some subblocks were successfully unplugged.
+ */
+static int virtio_mem_mb_unplug_any_sb_offline(struct virtio_mem *vm,
+					       unsigned long mb_id,
+					       uint64_t *nb_sb)
+{
+	int rc;
+
+	rc = virtio_mem_mb_unplug_any_sb(vm, mb_id, nb_sb);
+
+	/* some subblocks might have been unplugged even on failure */
+	if (!virtio_mem_mb_test_sb_plugged(vm, mb_id, 0, vm->nb_sb_per_mb))
+		virtio_mem_mb_set_state(vm, mb_id,
+					VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL);
+	if (rc)
+		return rc;
+
+	if (virtio_mem_mb_test_sb_unplugged(vm, mb_id, 0, vm->nb_sb_per_mb)) {
+		/*
+		 * Remove the block from Linux - this should never fail.
+		 * Hinder the block from getting onlined by marking it
+		 * unplugged. Temporarily drop the mutex, so
+		 * any pending GOING_ONLINE requests can be serviced/rejected.
+		 */
+		virtio_mem_mb_set_state(vm, mb_id,
+					VIRTIO_MEM_MB_STATE_UNUSED);
+
+		mutex_unlock(&vm->hotplug_mutex);
+		rc = virtio_mem_mb_remove(vm, mb_id);
+		BUG_ON(rc);
+		mutex_lock(&vm->hotplug_mutex);
+	}
+	return 0;
+}
+
+/*
+ * Try to unplug the requested amount of memory.
+ */
+static int virtio_mem_unplug_request(struct virtio_mem *vm, uint64_t diff)
+{
+	uint64_t nb_sb = diff / vm->subblock_size;
+	unsigned long mb_id;
+	int rc;
+
+	if (!nb_sb)
+		return 0;
+
+	/*
+	 * We'll drop the mutex a couple of times when it is safe to do so.
+	 * This might result in some blocks switching the state (online/offline)
+	 * and we could miss them in this run - we will retry again later.
+	 */
+	mutex_lock(&vm->hotplug_mutex);
+
+	/* Try to unplug subblocks of partially plugged offline blocks. */
+	virtio_mem_for_each_mb_state(vm, mb_id,
+				     VIRTIO_MEM_MB_STATE_OFFLINE_PARTIAL) {
+		rc = virtio_mem_mb_unplug_any_sb_offline(vm, mb_id,
+							 &nb_sb);
+		if (rc || !nb_sb)
+			goto out_unlock;
+		cond_resched();
+	}
+
+	/* Try to unplug subblocks of plugged offline blocks. */
+	virtio_mem_for_each_mb_state(vm, mb_id, VIRTIO_MEM_MB_STATE_OFFLINE) {
+		rc = virtio_mem_mb_unplug_any_sb_offline(vm, mb_id,
+							 &nb_sb);
+		if (rc || !nb_sb)
+			goto out_unlock;
+		cond_resched();
+	}
+
+	mutex_unlock(&vm->hotplug_mutex);
+	return 0;
+out_unlock:
+	mutex_unlock(&vm->hotplug_mutex);
+	return rc;
+}
+
 /*
  * Try to unplug all blocks that couldn't be unplugged before, for example,
  * because the hypervisor was busy.
@@ -1209,8 +1312,10 @@ static void virtio_mem_run_wq(struct work_struct *work)
 		if (vm->requested_size > vm->plugged_size) {
 			diff = vm->requested_size - vm->plugged_size;
 			rc = virtio_mem_plug_request(vm, diff);
+		} else {
+			diff = vm->plugged_size - vm->requested_size;
+			rc = virtio_mem_unplug_request(vm, diff);
 		}
-		/* TODO: try to unplug memory */
 	}
 
 	switch (rc) {
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH RFC v3 4/9] mm: Export alloc_contig_range() / free_contig_range()
  2019-09-19 14:22 [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand
                   ` (2 preceding siblings ...)
  2019-09-19 14:22 ` [PATCH RFC v3 3/9] virtio-mem: Paravirtualized memory hotunplug part 1 David Hildenbrand
@ 2019-09-19 14:22 ` David Hildenbrand
  2019-10-16 11:20   ` Michal Hocko
  2019-09-19 14:22 ` [PATCH RFC v3 5/9] virtio-mem: Paravirtualized memory hotunplug part 2 David Hildenbrand
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-09-19 14:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, virtualization, Andrea Arcangeli, David Hildenbrand,
	Andrew Morton, Michal Hocko, Vlastimil Babka, Oscar Salvador,
	Mel Gorman, Mike Rapoport, Dan Williams, Alexander Duyck,
	Pavel Tatashin, Alexander Potapenko

A virtio-mem device wants to allocate memory from the memory region it
manages in order to unplug it in the hypervisor - similar to
a balloon driver. Also, it might want to plug previously unplugged
(allocated) memory and give it back to Linux. alloc_contig_range() /
free_contig_range() seem to be the perfect interface for this task.

In contrast to existing balloon devices, a virtio-mem device operates
on bigger chunks (e.g., 4MB) and only on physical memory it manages. It
tracks which chunks (subblocks) are still plugged, so it can go ahead
and try to alloc_contig_range()+unplug them on unplug request, or
plug+free_contig_range() unplugged chunks on plug requests.

A virtio-mem device will use alloc_contig_range() / free_contig_range()
only on ranges that belong to the same node/zone in at least
MAX(MAX_ORDER - 1, pageblock_order) order granularity - e.g., 4MB on
x86-64. The virtio-mem device added that memory, so the memory
exists and does not contain any holes. virtio-mem will only try to allocate
on ZONE_NORMAL, never on ZONE_MOVABLE, just like when allocating
gigantic pages (we don't put unmovable data into the movable zone).

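As an illustration (not part of this patch), driver-side usage of the two
exports could look roughly like the hypothetical helpers below, where
start_pfn/nr_pages describe a suitably aligned chunk of memory the driver
added itself:

    static int example_unplug_chunk(unsigned long start_pfn,
                                    unsigned long nr_pages)
    {
        int rc;

        /* Migrate everything out of the range and take ownership of it. */
        rc = alloc_contig_range(start_pfn, start_pfn + nr_pages,
                                MIGRATE_MOVABLE, GFP_KERNEL);
        if (rc)
            return rc;      /* e.g., -EBUSY: unmovable data in the range */

        /* ... tell the hypervisor to unplug the chunk ... */
        return 0;
    }

    static void example_replug_chunk(unsigned long start_pfn,
                                     unsigned long nr_pages)
    {
        /* ... tell the hypervisor to plug the chunk again ... */

        /* Hand the pages back to the buddy. */
        free_contig_range(start_pfn, nr_pages);
    }
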
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Pavel Tatashin <pavel.tatashin@microsoft.com>
Cc: Alexander Potapenko <glider@google.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/page_alloc.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3334a769eb91..d5d7944954b3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8469,6 +8469,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 				pfn_max_align_up(end), migratetype);
 	return ret;
 }
+EXPORT_SYMBOL_GPL(alloc_contig_range);
 #endif /* CONFIG_CONTIG_ALLOC */
 
 void free_contig_range(unsigned long pfn, unsigned int nr_pages)
@@ -8483,6 +8484,7 @@ void free_contig_range(unsigned long pfn, unsigned int nr_pages)
 	}
 	WARN(count != 0, "%d pages are still in use!\n", count);
 }
+EXPORT_SYMBOL_GPL(free_contig_range);
 
 #ifdef CONFIG_MEMORY_HOTPLUG
 /*
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH RFC v3 5/9] virtio-mem: Paravirtualized memory hotunplug part 2
  2019-09-19 14:22 [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand
                   ` (3 preceding siblings ...)
  2019-09-19 14:22 ` [PATCH RFC v3 4/9] mm: Export alloc_contig_range() / free_contig_range() David Hildenbrand
@ 2019-09-19 14:22 ` David Hildenbrand
  2019-09-19 14:22 ` [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0 David Hildenbrand
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2019-09-19 14:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, virtualization, Andrea Arcangeli, David Hildenbrand,
	Michael S. Tsirkin, Jason Wang, Oscar Salvador, Michal Hocko,
	Igor Mammedov, Dave Young, Andrew Morton, Dan Williams,
	Pavel Tatashin, Stefan Hajnoczi, Vlastimil Babka

We can use alloc_contig_range() to try to unplug subblocks. Unplugged
blocks will be marked PG_offline, however, don't have the PG_reserved
flag set. This way, we can differentiate these allocated subblocks from
subblocks that were never onlined and handle them properly in
virtio_mem_fake_online(). free_contig_range() is used to hand back
subblocks to Linux.

It is worth noting that there are no guarantees on how much memory can
actually get unplugged again. All device memory might be completely
fragmented with unmovable data, such that no subblock can get unplugged.
We might want to improve the unplugging capability in the future.

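Condensed, the per-subblock unplug path added below boils down to the
following (illustration only, error handling simplified - the real code
skips busy subblocks and continues with the next candidate):

    rc = alloc_contig_range(start_pfn, start_pfn + nr_pages,
                            MIGRATE_MOVABLE, GFP_KERNEL);
    if (rc)
        return rc;  /* range busy or out of memory */

    /* Mark it PG_offline (without PG_reserved) and fix up the stats. */
    virtio_mem_set_fake_offline(start_pfn, nr_pages);
    totalram_pages_add(-nr_pages);

    rc = virtio_mem_mb_unplug_sb(vm, mb_id, sb_id, 1);
    if (rc)
        /* The device is busy - give the memory back to the buddy. */
        virtio_mem_fake_online(start_pfn, nr_pages);
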
We are not touching the ZONE_MOVABLE. If memory is onlined to the
ZONE_MOVABLE, it can only get unplugged after that memory was offlined
manually by user space. In normal operation, virtio-mem memory is
suggested to be onlined to ZONE_NORMAL. In the future, we will try to
make unplug more likely to succeed.

Future work:
- Offline + remove memory blocks once all subblocks were unplugged. This
  might then free up unmovable data on other memory blocks.
- Performance improvements:
-- Sense (lockless) if it makes sense to try alloc_contig_range() at all
   before directly trying to isolate and taking locks.
-- Try to unplug bigger chunks if possible first.
-- Identify free areas first, that don't have to be evacuated.
- Make unplug more likely to succeed:
-- The "issue" is that in the ZONE_NORMAL, the buddy will randomly
   allocate memory. Only pageblocks somewhat limit fragmentation,
   however we would want to limit fragmentation on subblock granularity
   and even memory block granularity. One idea is to have a new
   ZONE_PREFER_MOVABLE. Memory blocks will then be onlined to ZONE_NORMAL
   / ZONE_PREFER_MOVABLE in a certain ratio per node (e.g.,
   1:4). This makes unplug of quite some memory likely to succeed in most
   setups. ZONE_PREFER_MOVABLE is then a mixture of ZONE_NORMAL and
   ZONE_MOVABLE. Especially, movable data can end up on that zone, but
   only if really required - avoiding running out of memory on ZONE
   imbalances. The zone fallback order would be
   MOVABLE=>PREFER_MOVABLE=>HIGHMEM=>NORMAL=>PREFER_MOVABLE=>DMA32=>DMA
-- Allocate memmap from added memory. This way, less unmovable data can
   end up on the memory blocks.
-- Call drop_slab() before trying to unplug. Eventually shrink other
   caches.
- Better retry handling in case memory is busy. We certainly don't want
  to try for ever in a short interval to try to get some memory back.
- OOM handling, e.g., via an OOM handler.

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/virtio/Kconfig      |   1 +
 drivers/virtio/virtio_mem.c | 106 +++++++++++++++++++++++++++++++++++-
 2 files changed, 104 insertions(+), 3 deletions(-)

diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index 294720d53057..75a760f32ec7 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -71,6 +71,7 @@ config VIRTIO_MEM
 	depends on VIRTIO
 	depends on MEMORY_HOTPLUG_SPARSE
 	depends on MEMORY_HOTREMOVE
+	select CONTIG_ALLOC
 	help
 	 This driver provides access to virtio-mem paravirtualized memory
 	 devices, allowing to hotplug and hotunplug memory.
diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index 6fb55d4b6f6c..91052a37d10d 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -689,7 +689,17 @@ static void virtio_mem_fake_online(unsigned long pfn, unsigned int nr_pages)
 	for (i = 0; i < nr_pages; i += 1 << order) {
 		struct page *page = pfn_to_page(pfn + i);
 
-		generic_online_page(page, order);
+		/*
+		 * If the page is reserved, it was kept fake-offline when
+		 * onlining the memory block. Otherwise, it was allocated
+		 * using alloc_contig_range().
+		 */
+		if (PageReserved(page))
+			generic_online_page(page, order);
+		else {
+			free_contig_range(pfn + i, 1 << order);
+			totalram_pages_add(1 << order);
+		}
 	}
 }
 
@@ -1187,6 +1197,72 @@ static int virtio_mem_mb_unplug_any_sb_offline(struct virtio_mem *vm,
 	return 0;
 }
 
+/*
+ * Unplug the desired number of plugged subblocks of an online memory block.
+ * Will skip subblocks that are busy.
+ *
+ * Will modify the state of the memory block.
+ *
+ * Note: Can fail after some subblocks were successfully unplugged. Can
+ *       return 0 even if subblocks were busy and could not get unplugged.
+ */
+static int virtio_mem_mb_unplug_any_sb_online(struct virtio_mem *vm,
+					      unsigned long mb_id,
+					      uint64_t *nb_sb)
+{
+	const unsigned long nr_pages = PFN_DOWN(vm->subblock_size);
+	unsigned long start_pfn;
+	int rc, sb_id;
+
+	/*
+	 * TODO: To increase the performance we want to try bigger, consecutive
+	 * subblocks first before falling back to single subblocks. Also,
+	 * we should sense via something like is_mem_section_removable()
+	 * first if it makes sense to go ahead and try to allocate.
+	 */
+	for (sb_id = 0; sb_id < vm->nb_sb_per_mb && *nb_sb; sb_id++) {
+		/* Find the next candidate subblock */
+		while (sb_id < vm->nb_sb_per_mb &&
+		       !virtio_mem_mb_test_sb_plugged(vm, mb_id, sb_id, 1))
+			sb_id++;
+		if (sb_id >= vm->nb_sb_per_mb)
+			break;
+
+		start_pfn = PFN_DOWN(virtio_mem_mb_id_to_phys(mb_id) +
+				     sb_id * vm->subblock_size);
+		rc = alloc_contig_range(start_pfn, start_pfn + nr_pages,
+					MIGRATE_MOVABLE, GFP_KERNEL);
+		if (rc == -ENOMEM)
+			/* whoops, out of memory */
+			return rc;
+		if (rc)
+			/* memory busy, we can't unplug this chunk */
+			continue;
+
+		/* Mark it as fake-offline before unplugging it */
+		virtio_mem_set_fake_offline(start_pfn, nr_pages);
+		totalram_pages_add(-nr_pages);
+
+		/* Try to unplug the allocated memory */
+		rc = virtio_mem_mb_unplug_sb(vm, mb_id, sb_id, 1);
+		if (rc) {
+			/* Return the memory to the buddy. */
+			virtio_mem_fake_online(start_pfn, nr_pages);
+			return rc;
+		}
+
+		virtio_mem_mb_set_state(vm, mb_id,
+					VIRTIO_MEM_MB_STATE_ONLINE_PARTIAL);
+		*nb_sb -= 1;
+	}
+
+	/*
+	 * TODO: Once all subblocks of a memory block were unplugged, we want
+	 * to offline the memory block and remove it.
+	 */
+	return 0;
+}
+
 /*
  * Try to unplug the requested amount of memory.
  */
@@ -1225,8 +1301,31 @@ static int virtio_mem_unplug_request(struct virtio_mem *vm, uint64_t diff)
 		cond_resched();
 	}
 
+	/* Try to unplug subblocks of partially plugged online blocks. */
+	virtio_mem_for_each_mb_state(vm, mb_id,
+				     VIRTIO_MEM_MB_STATE_ONLINE_PARTIAL) {
+		rc = virtio_mem_mb_unplug_any_sb_online(vm, mb_id,
+							&nb_sb);
+		if (rc || !nb_sb)
+			goto out_unlock;
+		mutex_unlock(&vm->hotplug_mutex);
+		cond_resched();
+		mutex_lock(&vm->hotplug_mutex);
+	}
+
+	/* Try to unplug subblocks of plugged online blocks. */
+	virtio_mem_for_each_mb_state(vm, mb_id, VIRTIO_MEM_MB_STATE_ONLINE) {
+		rc = virtio_mem_mb_unplug_any_sb_online(vm, mb_id,
+							&nb_sb);
+		if (rc || !nb_sb)
+			goto out_unlock;
+		mutex_unlock(&vm->hotplug_mutex);
+		cond_resched();
+		mutex_lock(&vm->hotplug_mutex);
+	}
+
 	mutex_unlock(&vm->hotplug_mutex);
-	return 0;
+	return nb_sb ? -EBUSY : 0;
 out_unlock:
 	mutex_unlock(&vm->hotplug_mutex);
 	return rc;
@@ -1330,7 +1429,8 @@ static void virtio_mem_run_wq(struct work_struct *work)
 	case -EBUSY:
 		/*
 		 * The hypervisor cannot process our request right now
-		 * (e.g., out of memory, migrating).
+		 * (e.g., out of memory, migrating) or we cannot free up
+		 * any memory to unplug it (all plugged memory is busy).
 		 */
 	case -ENOMEM:
 		/* Out of memory, try again later. */
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-09-19 14:22 [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand
                   ` (4 preceding siblings ...)
  2019-09-19 14:22 ` [PATCH RFC v3 5/9] virtio-mem: Paravirtualized memory hotunplug part 2 David Hildenbrand
@ 2019-09-19 14:22 ` David Hildenbrand
  2019-10-16 11:43   ` Michal Hocko
  2019-09-19 14:22 ` [PATCH RFC v3 7/9] virtio-mem: Allow to offline partially unplugged memory blocks David Hildenbrand
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-09-19 14:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, virtualization, Andrea Arcangeli, David Hildenbrand,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Michal Hocko, Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman,
	Mike Rapoport, Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

virtio-mem wants to allow to offline memory blocks of which some parts
were unplugged, especially, to later offline and remove completely
unplugged memory blocks. The important part is that PageOffline() has
to remain set until the section is offline, so these pages will never
get accessed (e.g., when dumping). The pages should not be handed
back to the buddy (which would require clearing PageOffline() and
result in issues if offlining fails and the pages are suddenly in the
buddy).

Let's use "PageOffline() + reference count = 0" as a sign to
memory offlining code that these pages can simply be skipped when
offlining, similar to free or HWPoison pages.

Pass flags to test_pages_isolated(), similar as already done for
has_unmovable_pages(). Use a new flag to indicate the
requirement of memory offlining to skip over these special pages.

In has_unmovable_pages(), make sure the pages won't be detected as
movable. This is not strictly necessary, however, it makes e.g.,
alloc_contig_range() bail out early when trying to isolate such page
blocks - instead of failing later when testing if all pages were isolated.

Also, make sure that when a reference to a PageOffline() page is
dropped, the page will not be returned to the buddy.

Memory devices (like virtio-mem) that want to make use of this
functionality have to make sure to synchronize against memory offlining,
using the memory hotplug notifier.

Alternative: Allow to offline with a reference count of 1
and use some other sign in the struct page that offlining is permitted.

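As an illustration (not part of this patch), a memory device driver would
mark the pages it logically unplugged roughly as sketched below - and it
additionally has to register a memory hotplug notifier to synchronize
against concurrent offlining:

    /* Pages were exposed via memory onlining before (reference count of 1). */
    static void example_mark_fake_offline(unsigned long pfn,
                                          unsigned int nr_pages)
    {
        for (; nr_pages--; pfn++) {
            struct page *page = pfn_to_page(pfn);

            __SetPageOffline(page);  /* PageOffline() is now true */
            page_ref_dec(page);      /* reference count drops to 0 */
        }
    }
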
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Pavel Tatashin <pavel.tatashin@microsoft.com>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Anthony Yznaga <anthony.yznaga@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pingfan Liu <kernelfans@gmail.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Wei Yang <richardw.yang@linux.intel.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/page-flags.h     |  4 ++++
 include/linux/page-isolation.h |  4 +++-
 mm/memory_hotplug.c            |  9 ++++++---
 mm/page_alloc.c                | 22 +++++++++++++++++++++-
 mm/page_isolation.c            | 18 +++++++++++++-----
 mm/swap.c                      |  9 +++++++++
 6 files changed, 56 insertions(+), 10 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index f91cb8898ff0..7e563eab6b4b 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -745,6 +745,10 @@ PAGE_TYPE_OPS(Buddy, buddy)
  * not onlined when onlining the section).
  * The content of these pages is effectively stale. Such pages should not
  * be touched (read/write/dump/save) except by their owner.
+ *
+ * PageOffline() pages that have a reference count of 0 will be treated
+ * like free pages when offlining pages, allowing the containing memory
+ * block to get offlined.
  */
 PAGE_TYPE_OPS(Offline, offline)
 
diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 1099c2fee20f..024e02b60346 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -32,6 +32,8 @@ static inline bool is_migrate_isolate(int migratetype)
 
 #define SKIP_HWPOISON	0x1
 #define REPORT_FAILURE	0x2
+/* Skip PageOffline() pages with a reference count of 0. */
+#define SKIP_OFFLINE	0x4
 
 bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 			 int migratetype, int flags);
@@ -58,7 +60,7 @@ undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
  * Test all pages in [start_pfn, end_pfn) are isolated or not.
  */
 int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
-			bool skip_hwpoisoned_pages);
+			int flags);
 
 struct page *alloc_migrate_target(struct page *page, unsigned long private);
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f08eb429b8f3..d23ff7c5c96b 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1127,7 +1127,8 @@ static bool is_pageblock_removable_nolock(unsigned long pfn)
 	if (!zone_spans_pfn(zone, pfn))
 		return false;
 
-	return !has_unmovable_pages(zone, page, 0, MIGRATE_MOVABLE, SKIP_HWPOISON);
+	return !has_unmovable_pages(zone, page, 0, MIGRATE_MOVABLE,
+				    SKIP_HWPOISON | SKIP_OFFLINE);
 }
 
 /* Checks if this range of memory is likely to be hot-removable. */
@@ -1344,7 +1345,8 @@ static int
 check_pages_isolated_cb(unsigned long start_pfn, unsigned long nr_pages,
 			void *data)
 {
-	return test_pages_isolated(start_pfn, start_pfn + nr_pages, true);
+	return test_pages_isolated(start_pfn, start_pfn + nr_pages,
+				   SKIP_HWPOISON | SKIP_OFFLINE);
 }
 
 static int __init cmdline_parse_movable_node(char *p)
@@ -1455,7 +1457,8 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	/* set above range as isolated */
 	ret = start_isolate_page_range(start_pfn, end_pfn,
 				       MIGRATE_MOVABLE,
-				       SKIP_HWPOISON | REPORT_FAILURE);
+				       SKIP_HWPOISON | REPORT_FAILURE |
+				       SKIP_OFFLINE);
 	if (ret < 0) {
 		reason = "failure to isolate range";
 		goto failed_removal;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d5d7944954b3..fef74720d8b4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8221,6 +8221,15 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		if (!page_ref_count(page)) {
 			if (PageBuddy(page))
 				iter += (1 << page_order(page)) - 1;
+			/*
+			 * Memory devices allow to offline a page if it is
+			 * marked PG_offline and has a reference count of 0.
+			 * However, the pages are not movable as it would be
+			 * required e.g., for alloc_contig_range().
+			 */
+			if (PageOffline(page) && !(flags & SKIP_OFFLINE))
+				if (++found > count)
+					goto unmovable;
 			continue;
 		}
 
@@ -8444,7 +8453,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	}
 
 	/* Make sure the range is really isolated. */
-	if (test_pages_isolated(outer_start, end, false)) {
+	if (test_pages_isolated(outer_start, end, 0)) {
 		pr_info_ratelimited("%s: [%lx, %lx) PFNs busy\n",
 			__func__, outer_start, end);
 		ret = -EBUSY;
@@ -8563,6 +8572,17 @@ __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
 			offlined_pages++;
 			continue;
 		}
+		/*
+		 * Memory devices allow to offline a page if it is marked
+		 * PG_offline and has a reference count of 0.
+		 */
+		if (PageOffline(page) && !page_count(page)) {
+			BUG_ON(PageBuddy(page));
+			pfn++;
+			SetPageReserved(page);
+			offlined_pages++;
+			continue;
+		}
 
 		BUG_ON(page_count(page));
 		BUG_ON(!PageBuddy(page));
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 89c19c0feadb..0a75019d7e7c 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -171,6 +171,8 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  *			SKIP_HWPOISON - ignore hwpoison pages
  *			REPORT_FAILURE - report details about the failure to
  *			isolate the range
+ *			SKIP_OFFLINE - ignore PageOffline() pages with a
+ *			reference count of 0
  *
  * Making page-allocation-type to be MIGRATE_ISOLATE means free pages in
  * the range will never be allocated. Any free pages and pages freed in the
@@ -257,7 +259,7 @@ void undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
  */
 static unsigned long
 __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
-				  bool skip_hwpoisoned_pages)
+				  int flags)
 {
 	struct page *page;
 
@@ -274,9 +276,16 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
 			 * simple way to verify that as VM_BUG_ON(), though.
 			 */
 			pfn += 1 << page_order(page);
-		else if (skip_hwpoisoned_pages && PageHWPoison(page))
+		else if ((flags & SKIP_HWPOISON) && PageHWPoison(page))
 			/* A HWPoisoned page cannot be also PageBuddy */
 			pfn++;
+		else if ((flags & SKIP_OFFLINE) && PageOffline(page) &&
+			 !page_count(page))
+			/*
+			 * Memory devices allow to offline a page if it is
+			 * marked PG_offline and has a reference count of 0.
+			 */
+			pfn++;
 		else
 			break;
 	}
@@ -286,7 +295,7 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
 
 /* Caller should ensure that requested range is in a single zone */
 int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
-			bool skip_hwpoisoned_pages)
+			int isol_flags)
 {
 	unsigned long pfn, flags;
 	struct page *page;
@@ -308,8 +317,7 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 	/* Check all pages are free or marked as ISOLATED */
 	zone = page_zone(page);
 	spin_lock_irqsave(&zone->lock, flags);
-	pfn = __test_page_isolated_in_pageblock(start_pfn, end_pfn,
-						skip_hwpoisoned_pages);
+	pfn = __test_page_isolated_in_pageblock(start_pfn, end_pfn, isol_flags);
 	spin_unlock_irqrestore(&zone->lock, flags);
 
 	trace_test_pages_isolated(start_pfn, end_pfn, pfn);
diff --git a/mm/swap.c b/mm/swap.c
index 38c3fa4308e2..f98987656ecc 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -107,6 +107,15 @@ void __put_page(struct page *page)
 		 * not return it to page allocator.
 		 */
 		return;
+	} else if (PageOffline(page)) {
+		/*
+		 * Memory devices allow to offline a page if it is
+		 * marked PG_offline and has a reference count of 0. So if
+		 * somebody puts a reference of such a page and the
+		 * reference count drops to 0, don't return the page to the
+		 * buddy.
+		 */
+		return;
 	}
 
 	if (unlikely(PageCompound(page)))
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH RFC v3 7/9] virtio-mem: Allow to offline partially unplugged memory blocks
  2019-09-19 14:22 [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand
                   ` (5 preceding siblings ...)
  2019-09-19 14:22 ` [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0 David Hildenbrand
@ 2019-09-19 14:22 ` David Hildenbrand
  2019-09-19 14:22 ` [PATCH RFC v3 8/9] mm/memory_hotplug: Introduce offline_and_remove_memory() David Hildenbrand
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2019-09-19 14:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, virtualization, Andrea Arcangeli, David Hildenbrand,
	Michael S. Tsirkin, Jason Wang, Oscar Salvador, Michal Hocko,
	Igor Mammedov, Dave Young, Andrew Morton, Dan Williams,
	Pavel Tatashin, Stefan Hajnoczi, Vlastimil Babka

Dropping the reference count of PageOffline() pages allows offlining
code to skip them. However, we also have to convert PG_reserved to
another flag - let's use PG_dirty - so has_unmovable_pages() will
properly handle them. PG_reserved pages get detected as unmovable right
away.

We need the flag to tell whether we are onlining pages for the first
time, or whether we allocated them via alloc_contig_range().

Properly take care of offlining code also modifying the stats and
special handling in case the driver gets unloaded.

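As an illustration (not part of this patch), the resulting state of a
fake-offline page while its memory block is online could be tested with a
hypothetical helper like this, matching the helpers added below:

    static bool example_is_fake_offline(struct page *page)
    {
        /*
         * PG_offline + reference count 0 lets offlining skip the page
         * (see the previous patch); PG_dirty replaces PG_reserved while
         * the block is online so isolation code does not immediately
         * consider the page unmovable.
         */
        return PageOffline(page) && !page_count(page) && PageDirty(page);
    }
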
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/virtio/virtio_mem.c | 102 ++++++++++++++++++++++++++++++++----
 1 file changed, 92 insertions(+), 10 deletions(-)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index 91052a37d10d..9cb31459b211 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -561,6 +561,30 @@ static void virtio_mem_notify_online(struct virtio_mem *vm, unsigned long mb_id,
 		virtio_mem_retry(vm);
 }
 
+/*
+ * When we unplug subblocks, we already modify stats (e.g., subtract them
+ * from totalram_pages). Offlining code will modify the stats, too. So
+ * properly fixup the stats when GOING_OFFLINE and revert that when
+ * CANCEL_OFFLINE.
+ */
+static void virtio_mem_mb_going_offline_fixup_stats(struct virtio_mem *vm,
+						    unsigned long mb_id,
+						    bool cancel)
+{
+	const unsigned long nr_pages = PFN_DOWN(vm->subblock_size);
+	int sb_id;
+
+	for (sb_id = 0; sb_id < vm->nb_sb_per_mb; sb_id++) {
+		if (virtio_mem_mb_test_sb_plugged(vm, mb_id, sb_id, 1))
+			continue;
+
+		if (cancel)
+			totalram_pages_add(-nr_pages);
+		else
+			totalram_pages_add(nr_pages);
+	}
+}
+
 /*
  * This callback will either be called synchonously from add_memory() or
  * asynchronously (e.g., triggered via user space). We have to be careful
@@ -608,6 +632,7 @@ static int virtio_mem_memory_notifier_cb(struct notifier_block *nb,
 			mutex_lock(&vm->hotplug_mutex);
 			vm->hotplug_active = true;
 		}
+		virtio_mem_mb_going_offline_fixup_stats(vm, mb_id, false);
 		break;
 	case MEM_GOING_ONLINE:
 		spin_lock_irq(&vm->removal_lock);
@@ -633,6 +658,8 @@ static int virtio_mem_memory_notifier_cb(struct notifier_block *nb,
 		mutex_unlock(&vm->hotplug_mutex);
 		break;
 	case MEM_CANCEL_OFFLINE:
+		virtio_mem_mb_going_offline_fixup_stats(vm, mb_id, true);
+		/* fall through */
 	case MEM_CANCEL_ONLINE:
 		/* We might not get a MEM_GOING* if somebody else canceled */
 		if (vm->hotplug_active) {
@@ -648,23 +675,55 @@ static int virtio_mem_memory_notifier_cb(struct notifier_block *nb,
 }
 
 /*
- * Set a range of pages PG_offline.
+ * Convert PG_reserved to PG_dirty. Needed to allow isolation code to
+ * not immediately consider them as unmovable.
+ */
+static void virtio_mem_reserved_to_dirty(unsigned long pfn,
+					 unsigned int nr_pages)
+{
+	for (; nr_pages--; pfn++) {
+		SetPageDirty(pfn_to_page(pfn));
+		ClearPageReserved(pfn_to_page(pfn));
+	}
+}
+
+/*
+ * Convert PG_dirty to PG_reserved. Needed so generic_online_page()
+ * works correctly.
+ */
+static void virtio_mem_dirty_to_reserved(unsigned long pfn,
+					 unsigned int nr_pages)
+{
+	for (; nr_pages--; pfn++) {
+		SetPageReserved(pfn_to_page(pfn));
+		ClearPageDirty(pfn_to_page(pfn));
+	}
+}
+
+/*
+ * Set a range of pages PG_offline and drop the reference. The dropped
+ * reference (0) and the flag allows isolation code to isolate this range
+ * and offline code to offline it.
  */
 static void virtio_mem_set_fake_offline(unsigned long pfn,
 					unsigned int nr_pages)
 {
-	for (; nr_pages--; pfn++)
+	for (; nr_pages--; pfn++) {
 		__SetPageOffline(pfn_to_page(pfn));
+		page_ref_dec(pfn_to_page(pfn));
+	}
 }
 
 /*
- * Clear PG_offline from a range of pages.
+ * Get a reference and clear PG_offline from a range of pages.
  */
 static void virtio_mem_clear_fake_offline(unsigned long pfn,
 					  unsigned int nr_pages)
 {
-	for (; nr_pages--; pfn++)
+	for (; nr_pages--; pfn++) {
+		page_ref_inc(pfn_to_page(pfn));
 		__ClearPageOffline(pfn_to_page(pfn));
+	}
 }
 
 /*
@@ -679,7 +738,7 @@ static void virtio_mem_fake_online(unsigned long pfn, unsigned int nr_pages)
 	/*
 	 * We are always called with subblock granularity, which is at least
 	 * aligned to MAX_ORDER - 1. All pages in a subblock are either
-	 * reserved or not.
+	 * PG_dirty (converted PG_reserved) or not.
 	 */
 	BUG_ON(!IS_ALIGNED(pfn, 1 << order));
 	BUG_ON(!IS_ALIGNED(nr_pages, 1 << order));
@@ -690,13 +749,14 @@ static void virtio_mem_fake_online(unsigned long pfn, unsigned int nr_pages)
 		struct page *page = pfn_to_page(pfn + i);
 
 		/*
-		 * If the page is reserved, it was kept fake-offline when
+		 * If the page is PG_dirty, it was kept fake-offline when
 		 * onlining the memory block. Otherwise, it was allocated
 		 * using alloc_contig_range().
 		 */
-		if (PageReserved(page))
+		if (PageDirty(page)) {
+			virtio_mem_dirty_to_reserved(pfn + i, 1 << order);
 			generic_online_page(page, order);
-		else {
+		} else {
 			free_contig_range(pfn + i, 1 << order);
 			totalram_pages_add(1 << order);
 		}
@@ -728,8 +788,10 @@ static void virtio_mem_online_page_cb(struct page *page, unsigned int order)
 		 */
 		if (virtio_mem_mb_test_sb_plugged(vm, mb_id, sb_id, 1))
 			generic_online_page(page, order);
-		else
+		else {
 			virtio_mem_set_fake_offline(PFN_DOWN(addr), 1 << order);
+			virtio_mem_reserved_to_dirty(PFN_DOWN(addr), 1 << order);
+		}
 		rcu_read_unlock();
 		return;
 	}
@@ -1674,7 +1736,8 @@ static int virtio_mem_probe(struct virtio_device *vdev)
 static void virtio_mem_remove(struct virtio_device *vdev)
 {
 	struct virtio_mem *vm = vdev->priv;
-	unsigned long mb_id;
+	unsigned long nr_pages = PFN_DOWN(vm->subblock_size);
+	unsigned long pfn, mb_id, sb_id;
 	int rc;
 
 	/*
@@ -1701,6 +1764,25 @@ static void virtio_mem_remove(struct virtio_device *vdev)
 		BUG_ON(rc);
 		mutex_lock(&vm->hotplug_mutex);
 	}
+	/*
+	 * After we unregistered our callbacks, user space can offline +
+	 * re-online partially plugged online blocks. Make sure they can't
+	 * get offlined by getting a reference. Also, restore PG_reserved.
+	 */
+	virtio_mem_for_each_mb_state(vm, mb_id,
+				     VIRTIO_MEM_MB_STATE_ONLINE_PARTIAL) {
+		for (sb_id = 0; sb_id < vm->nb_sb_per_mb; sb_id++) {
+			if (virtio_mem_mb_test_sb_plugged(vm, mb_id, sb_id, 1))
+				continue;
+			pfn = PFN_DOWN(virtio_mem_mb_id_to_phys(mb_id) +
+			      sb_id * vm->subblock_size);
+
+			if (PageDirty(pfn_to_page(pfn)))
+				virtio_mem_dirty_to_reserved(pfn, nr_pages);
+			for (; nr_pages--; pfn++)
+				page_ref_inc(pfn_to_page(pfn));
+		}
+	}
 	mutex_unlock(&vm->hotplug_mutex);
 
 	/* unregister callbacks */
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH RFC v3 8/9] mm/memory_hotplug: Introduce offline_and_remove_memory()
  2019-09-19 14:22 [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand
                   ` (6 preceding siblings ...)
  2019-09-19 14:22 ` [PATCH RFC v3 7/9] virtio-mem: Allow to offline partially unplugged memory blocks David Hildenbrand
@ 2019-09-19 14:22 ` David Hildenbrand
  2019-10-16 11:47   ` Michal Hocko
  2019-09-19 14:22 ` [PATCH RFC v3 9/9] virtio-mem: Offline and remove completely unplugged memory blocks David Hildenbrand
  2019-10-16  8:12 ` [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand
  9 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-09-19 14:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, virtualization, Andrea Arcangeli, David Hildenbrand,
	Andrew Morton, Oscar Salvador, Michal Hocko, Pavel Tatashin,
	Wei Yang, Dan Williams, Qian Cai

virtio-mem wants to offline and remove a memory block once it unplugged
all subblocks (e.g., using alloc_contig_range()). Let's provide
an interface to do that from a driver. virtio-mem already supports
offlining partially unplugged memory blocks. Offlining a fully unplugged
memory block will not require migrating any pages. All unplugged
subblocks are PageOffline() and have a reference count of 0 - so
offlining code will simply skip them.

All we need is an interface to trigger the "offlining" and the removing in a
single operation - to make sure the memory block cannot get onlined by
user space again before it gets removed.

To keep things simple, only allow working on a single memory block.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Oscar Salvador <osalvador@suse.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Qian Cai <cai@lca.pw>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/memory_hotplug.h |  1 +
 mm/memory_hotplug.c            | 35 ++++++++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+)

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 384ffb3d69ab..a1bf868aaeba 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -313,6 +313,7 @@ extern void try_offline_node(int nid);
 extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages);
 extern int remove_memory(int nid, u64 start, u64 size);
 extern void __remove_memory(int nid, u64 start, u64 size);
+extern int offline_and_remove_memory(int nid, u64 start, u64 size);
 
 #else
 static inline bool is_mem_section_removable(unsigned long pfn,
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index d23ff7c5c96b..caf3c93f5f7c 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1742,4 +1742,39 @@ int remove_memory(int nid, u64 start, u64 size)
 	return rc;
 }
 EXPORT_SYMBOL_GPL(remove_memory);
+
+/*
+ * Try to offline and remove a memory block. Might take a long time to
+ * finish in case memory is still in use. Primarily useful for memory devices
+ * that logically unplugged all memory (so it's no longer in use) and want to
+ * offline + remove the memory block.
+ */
+int offline_and_remove_memory(int nid, u64 start, u64 size)
+{
+	struct memory_block *mem;
+	int rc = -EINVAL;
+
+	if (!IS_ALIGNED(start, memory_block_size_bytes()) ||
+	    size != memory_block_size_bytes())
+		return rc;
+
+	lock_device_hotplug();
+	mem = find_memory_block(__pfn_to_section(PFN_DOWN(start)));
+	if (mem)
+		rc = device_offline(&mem->dev);
+	/* Ignore if the device is already offline. */
+	if (rc > 0)
+		rc = 0;
+
+	/*
+	 * In case we succeeded to offline the memory block, remove it.
+	 * This cannot fail as it cannot get onlined in the meantime.
+	 */
+	if (!rc && try_remove_memory(nid, start, size))
+		BUG();
+	unlock_device_hotplug();
+
+	return rc;
+}
+EXPORT_SYMBOL_GPL(offline_and_remove_memory);
 #endif /* CONFIG_MEMORY_HOTREMOVE */
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH RFC v3 9/9] virtio-mem: Offline and remove completely unplugged memory blocks
  2019-09-19 14:22 [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand
                   ` (7 preceding siblings ...)
  2019-09-19 14:22 ` [PATCH RFC v3 8/9] mm/memory_hotplug: Introduce offline_and_remove_memory() David Hildenbrand
@ 2019-09-19 14:22 ` David Hildenbrand
  2019-10-16  8:12 ` [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand
  9 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2019-09-19 14:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, virtualization, Andrea Arcangeli, David Hildenbrand,
	Michael S. Tsirkin, Jason Wang, Oscar Salvador, Michal Hocko,
	Igor Mammedov, Dave Young, Andrew Morton, Dan Williams,
	Pavel Tatashin, Stefan Hajnoczi, Vlastimil Babka

Let's offline+remove memory blocks once all subblocks are unplugged. We
can use the new Linux MM interface for that. As no memory is in use
anymore, this shouldn't take a long time and shouldn't fail. There might
be corner cases where the offlining could still fail (especially if
another notifier NACKs the offlining request).

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/virtio/virtio_mem.c | 39 ++++++++++++++++++++++++++++++++++---
 1 file changed, 36 insertions(+), 3 deletions(-)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index 9cb31459b211..01d5fc784d5d 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -433,6 +433,28 @@ static int virtio_mem_mb_remove(struct virtio_mem *vm, unsigned long mb_id)
 	return remove_memory(nid, addr, memory_block_size_bytes());
 }
 
+/*
+ * Try to offline and remove a memory block from Linux.
+ *
+ * Must not be called with the vm->hotplug_mutex held (possible deadlock with
+ * onlining code).
+ *
+ * Will not modify the state of the memory block.
+ */
+static int virtio_mem_mb_offline_and_remove(struct virtio_mem *vm,
+					    unsigned long mb_id)
+{
+	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id);
+	int nid = vm->nid;
+
+	if (nid == NUMA_NO_NODE)
+		nid = memory_add_physaddr_to_nid(addr);
+
+	dev_dbg(&vm->vdev->dev, "offlining and removing memory block: %lu\n",
+		mb_id);
+	return offline_and_remove_memory(nid, addr, memory_block_size_bytes());
+}
+
 /*
  * Trigger the workqueue so the device can perform its magic.
  */
@@ -1263,7 +1285,8 @@ static int virtio_mem_mb_unplug_any_sb_offline(struct virtio_mem *vm,
  * Unplug the desired number of plugged subblocks of an online memory block.
  * Will skip subblock that are busy.
  *
- * Will modify the state of the memory block.
+ * Will modify the state of the memory block. Might temporarily drop the
+ * hotplug_mutex.
  *
  * Note: Can fail after some subblocks were successfully unplugged. Can
  *       return 0 even if subblocks were busy and could not get unplugged.
@@ -1319,9 +1342,19 @@ static int virtio_mem_mb_unplug_any_sb_online(struct virtio_mem *vm,
 	}
 
 	/*
-	 * TODO: Once all subblocks of a memory block were unplugged, we want
-	 * to offline the memory block and remove it.
+	 * Once all subblocks of a memory block were unplugged, offline and
+	 * remove it. This will usually not fail, as no memory is in use
+	 * anymore - however some other notifiers might NACK the request.
 	 */
+	if (virtio_mem_mb_test_sb_unplugged(vm, mb_id, 0, vm->nb_sb_per_mb)) {
+		mutex_unlock(&vm->hotplug_mutex);
+		rc = virtio_mem_mb_offline_and_remove(vm, mb_id);
+		mutex_lock(&vm->hotplug_mutex);
+		if (!rc)
+			virtio_mem_mb_set_state(vm, mb_id,
+						VIRTIO_MEM_MB_STATE_UNUSED);
+	}
+
 	return 0;
 }
 
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 1/9] ACPI: NUMA: export pxm_to_node
  2019-09-19 14:22 ` [PATCH RFC v3 1/9] ACPI: NUMA: export pxm_to_node David Hildenbrand
@ 2019-09-23 10:13   ` David Hildenbrand
  2019-09-23 10:36     ` Michal Hocko
  0 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-09-23 10:13 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, virtualization, Andrea Arcangeli, Rafael J. Wysocki,
	Len Brown, linux-acpi

On 19.09.19 16:22, David Hildenbrand wrote:
> Will be needed by virtio-mem to identify the node from a pxm.
> 
> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
> Cc: Len Brown <lenb@kernel.org>
> Cc: linux-acpi@vger.kernel.org
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  drivers/acpi/numa.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/acpi/numa.c b/drivers/acpi/numa.c
> index eadbf90e65d1..d5847fa7ac69 100644
> --- a/drivers/acpi/numa.c
> +++ b/drivers/acpi/numa.c
> @@ -35,6 +35,7 @@ int pxm_to_node(int pxm)
>  		return NUMA_NO_NODE;
>  	return pxm_to_node_map[pxm];
>  }
> +EXPORT_SYMBOL(pxm_to_node);

FWIW, this is a fairly old patch I dragged along and I think I'll
convert this to EXPORT_SYMBOL_GPL now that I know better :)

>  
>  int node_to_pxm(int node)
>  {
> 


-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 1/9] ACPI: NUMA: export pxm_to_node
  2019-09-23 10:13   ` David Hildenbrand
@ 2019-09-23 10:36     ` Michal Hocko
  2019-09-23 10:39       ` David Hildenbrand
  0 siblings, 1 reply; 39+ messages in thread
From: Michal Hocko @ 2019-09-23 10:36 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Rafael J. Wysocki, Len Brown, linux-acpi

On Mon 23-09-19 12:13:11, David Hildenbrand wrote:
> On 19.09.19 16:22, David Hildenbrand wrote:
> > Will be needed by virtio-mem to identify the node from a pxm.
> > 
> > Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
> > Cc: Len Brown <lenb@kernel.org>
> > Cc: linux-acpi@vger.kernel.org
> > Signed-off-by: David Hildenbrand <david@redhat.com>
> > ---
> >  drivers/acpi/numa.c | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/drivers/acpi/numa.c b/drivers/acpi/numa.c
> > index eadbf90e65d1..d5847fa7ac69 100644
> > --- a/drivers/acpi/numa.c
> > +++ b/drivers/acpi/numa.c
> > @@ -35,6 +35,7 @@ int pxm_to_node(int pxm)
> >  		return NUMA_NO_NODE;
> >  	return pxm_to_node_map[pxm];
> >  }
> > +EXPORT_SYMBOL(pxm_to_node);
> 
> FWIW, this is a fairly old patch I dragged along and I think I'll
> convert this to EXPORT_SYMBOL_GPL now that I know better :)

All other exports in this file are EXPORT_SYMBOL. Why is this one
considered special enough to restrict the access?
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 1/9] ACPI: NUMA: export pxm_to_node
  2019-09-23 10:36     ` Michal Hocko
@ 2019-09-23 10:39       ` David Hildenbrand
  0 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2019-09-23 10:39 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Rafael J. Wysocki, Len Brown, linux-acpi

On 23.09.19 12:36, Michal Hocko wrote:
> On Mon 23-09-19 12:13:11, David Hildenbrand wrote:
>> On 19.09.19 16:22, David Hildenbrand wrote:
>>> Will be needed by virtio-mem to identify the node from a pxm.
>>>
>>> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
>>> Cc: Len Brown <lenb@kernel.org>
>>> Cc: linux-acpi@vger.kernel.org
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>>  drivers/acpi/numa.c | 1 +
>>>  1 file changed, 1 insertion(+)
>>>
>>> diff --git a/drivers/acpi/numa.c b/drivers/acpi/numa.c
>>> index eadbf90e65d1..d5847fa7ac69 100644
>>> --- a/drivers/acpi/numa.c
>>> +++ b/drivers/acpi/numa.c
>>> @@ -35,6 +35,7 @@ int pxm_to_node(int pxm)
>>>  		return NUMA_NO_NODE;
>>>  	return pxm_to_node_map[pxm];
>>>  }
>>> +EXPORT_SYMBOL(pxm_to_node);
>>
>> FWIW, this is a fairly old patch I dragged along and I think I'll
>> convert this to EXPORT_SYMBOL_GPL now that I know better :)
> 
> All other exports in this file are EXPORT_SYMBOL. Why is this one
> considered special to restrict the access?
> 

Uh, so I did actually use my brain two years ago back when I was
crafting this patch :)

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory
  2019-09-19 14:22 [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand
                   ` (8 preceding siblings ...)
  2019-09-19 14:22 ` [PATCH RFC v3 9/9] virtio-mem: Offline and remove completely unplugged memory blocks David Hildenbrand
@ 2019-10-16  8:12 ` David Hildenbrand
  9 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2019-10-16  8:12 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, virtualization, Andrea Arcangeli, Alexander Duyck,
	Alexander Potapenko, Andrew Morton, Andrey Ryabinin,
	Anshuman Khandual, Anthony Yznaga, Dan Williams, Dave Young,
	Igor Mammedov, Ira Weiny, Jason Gunthorpe, Jason Wang,
	Johannes Weiner, Juergen Gross, Len Brown, Matthew Wilcox,
	Mauro Carvalho Chehab, Mel Gorman, Michael S. Tsirkin,
	Michal Hocko, Michal Hocko, Mike Rapoport, Mike Rapoport,
	Minchan Kim, Oscar Salvador, Oscar Salvador, Pavel Tatashin,
	Pavel Tatashin, Pingfan Liu, Qian Cai, Rafael J. Wysocki,
	Stefan Hajnoczi, Stephen Rothwell, Vlastimil Babka, Wei Yang,
	Wei Yang, Yang Shi, Yu Zhao

On 19.09.19 16:22, David Hildenbrand wrote:
> Long time no RFC! I finally had time to get the next version of the Linux
> driver side of virtio-mem into shape, incorporating ideas and feedback from
> previous discussions.
> 
> This RFC is based on the series currently on the mm list:
> - [PATCH 0/3] Remove __online_page_set_limits()
> - [PATCH v1 0/3] mm/memory_hotplug: Export generic_online_page()
> - [PATCH v4 0/8] mm/memory_hotplug: Shrink zones before removing memory
> The current state (kept updated) is available on
> - https://github.com/davidhildenbrand/linux/tree/virtio-mem
> 
> The basic idea of virtio-mem is to provide a flexible,
> cross-architecture memory hot(un)plug solution that avoids many limitations
> imposed by existing technologies, architectures, and interfaces. More
> details can be found below and in linked material.
> 
> This RFC was only tested on x86-64, however, should theoretically work
> on any Linux architecture that implements memory hot(un)plug - like
> s390x. On x86-64, it is currently possible to add/remove memory to the
> system in >= 4MB granularity. Memory hotplug works very reliable. For
> memory unplug, there are no guarantees how much memory can actually get
> unplugged, it depends on the setup. I have plans to improve that in the
> future.
> 
> --------------------------------------------------------------------------
> 1. virtio-mem
> --------------------------------------------------------------------------
> 
> The basic idea behind virtio-mem was presented at KVM Forum 2018. The
> slides can be found at [1]. The previous RFC can be found at [2]. The
> first RFC can be found at [3]. However, the concept evolved over time. The
> KVM Forum slides roughly match the current design.
> 
> Patch #2 ("virtio-mem: Paravirtualized memory hotplug") contains quite some
> information, especially in "include/uapi/linux/virtio_mem.h":
> 
>    Each virtio-mem device manages a dedicated region in physical address
>    space. Each device can belong to a single NUMA node, multiple devices
>    for a single NUMA node are possible. A virtio-mem device is like a
>    "resizable DIMM" consisting of small memory blocks that can be plugged
>    or unplugged. The device driver is responsible for (un)plugging memory
>    blocks on demand.
> 
>    Virtio-mem devices can only operate on their assigned memory region in
>    order to (un)plug memory. A device cannot (un)plug memory belonging to
>    other devices.
> 
>    The "region_size" corresponds to the maximum amount of memory that can
>    be provided by a device. The "size" corresponds to the amount of memory
>    that is currently plugged. "requested_size" corresponds to a request
>    from the device to the device driver to (un)plug blocks. The
>    device driver should try to (un)plug blocks in order to reach the
>    "requested_size". It is impossible to plug more memory than requested.
> 
>    The "usable_region_size" represents the memory region that can actually
>    be used to (un)plug memory. It is always at least as big as the
>    "requested_size" and will grow dynamically. It will only shrink when
>    explicitly triggered (VIRTIO_MEM_REQ_UNPLUG).
> 
>    Memory in the usable region can usually be read, however, there are no
>    guarantees. It can happen that the device cannot process a request,
>    because it is busy. The device driver has to retry later.
> 
>    Usually, during system resets all memory will get unplugged, so the
>    device driver can start with a clean state. However, in specific
>    scenarios (if the device is busy) it can happen that the device still
>    has memory plugged. The device driver can request to unplug all memory
>    (VIRTIO_MEM_REQ_UNPLUG) - which might take a while to succeed if the
>    device is busy.
> 
> --------------------------------------------------------------------------
> 2. Linux Implementation
> --------------------------------------------------------------------------
> 
> This RFC reuses quite some existing MM infrastructure, however, has to
> expose some additional functionality.
> 
> Memory blocks (e.g., 128MB) are added/removed on demand. Within these
> memory blocks, subblocks (e.g., 4MB) are plugged/unplugged. The sizes
> depend on the target architecture, MAX_ORDER + pageblock_order, and
> the block size of a virtio-mem device.
> 
> add_memory()/try_remove_memory() is used to add/remove memory blocks.
> virtio-mem will not online memory blocks itself. This has to be done by
> user space, or configured into the kernel
> (CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE). virtio-mem will only unplug memory
> that was online to the ZONE_NORMAL. Memory is suggested to be onlined to
> the ZONE_NORMAL for now.
> 
> The memory hotplug notifier is used to properly synchronize against
> onlining/offlining of memory blocks and to track the states of memory
> blocks (including the zone memory blocks are onlined to). Locking is
> done similar to the PPC CMA implementation.
> 
> The set_online_page() callback is used to keep unplugged subblocks
> of a memory block fake-offline when onlining the memory block.
> generic_online_page() is used to fake-online plugged subblocks. This
> handling is similar to the Hyper-V balloon driver.
> 
> PG_offline is used to mark unplugged subblocks as offline, so e.g.,
> dumping tools (makedumpfile) will skip these pages. This is similar to
> other balloon drivers like virtio-balloon and Hyper-V.
> 
> PG_offline + reference count of 0 [new] is now also used to mark pages as
> a "skip" when offlining memory blocks. This allows to offline memory blocks
> that have partially unplugged subblocks - or are completely unplugged.
> 
> alloc_contig_range()/free_contig_range() [now exposed] is used to
> unplug/plug subblocks of memory blocks the are already exposed to Linux.
> 
> offline_and_remove_memory() [new] is used to offline a fully unplugged
> memory block and remove it from Linux.
> 
> 
> A lot of additional information can be found in the separate patches and
> as comments in the code itself.
> 
> --------------------------------------------------------------------------
> 3. Major changes since last RFC
> --------------------------------------------------------------------------
> 
> A lot of things changed, especially also on the QEMU + virtio side. The
> biggest changes on the Linux driver side are:
> - Onlining/offlining of subblocks is now emulated on top of memory blocks.
>    set_online_page()+alloc_contig_range()+free_contig_range() is now used
>    for that. Core MM does not have to be modified and will continue to
>    online/offline full memory blocks.
> - Onlining/offlining of memory blocks is no longer performed by virtio-mem.
> - Pg_offline is upstream and can be used. It is also used to allow
>    offlining of partially unplugged memory blocks.
> - Memory block states + subblocks are now tracked more space-efficient.
> - Proper kexec(), kdump(), driver unload, driver reload, ZONE_MOVABLE, ...
>    handling.
> 
> --------------------------------------------------------------------------
> 4. Future work
> --------------------------------------------------------------------------
> 
> The separate patches contain a lot of future work items. One of the next
> steps is to make memory unplug more likely to succeed - currently, there
> are no guarantees on how much memory can get unplugged again. I have
> various ideas on how to limit fragmentation of all memory blocks that
> virtio-mem added.
> 
> Memory hotplug:
> - Reduce the amount of memory resources if that turnes out to be an
>    issue. Or try to speed up relevant code paths to deal with many
>    resources.
> - Allocate the vmemmap from the added memory. Makes hotplug more likely
>    to succeed, the vmemmap is stored on the same NUMA node and that
>    unmovable memory will later not hinder unplug.
> 
> Memory hotunplug:
> - Performance improvements:
> -- Sense (lockless) if it make sense to try alloc_contig_range() at all
>     before directly trying to isolate and taking locks.
> -- Try to unplug bigger chunks if possible first.
> -- Identify free areas first, that don't have to be evacuated.
> - Make unplug more likely to succeed:
> -- The "issue" is that in the ZONE_NORMAL, the buddy will randomly
>     allocate memory. Only pageblocks somewhat limit fragmentation,
>     however we would want to limit fragmentation on subblock granularity
>     and even memory block granularity. One idea is to have a new
>     ZONE_PREFER_MOVABLE. Memory blocks will then be onlined to ZONE_NORMAL
>     / ZONE_PREFER_MOVABLE in a certain ratio per node (e.g.,
>     1:4). This makes unplug of quite some memory likely to succeed in most
>     setups. ZONE_PREFER_MOVABLE is then a mixture of ZONE_NORMAL and
>     ZONE_MOVABlE. Especially, movable data can end up on that zone, but
>     only if really required - avoiding running out of memory on ZONE
>     imbalances. The zone fallback order would be
>     MOVABLE=>PREFER_MOVABLE=>HIGHMEM=>NORMAL=>PREFER_MOVABLE=>DMA32=>DMA
> -- Allocate memmap from added memory. This way, less unmovable data can
>     end up on the memory blocks.
> -- Call drop_slab() before trying to unplug. Eventually shrink other
>     caches.
> - Better retry handling in case memory is busy. We certainly don't want
>    to try for ever in a short interval to try to get some memory back.
> - OOM handling, e.g., via an OOM handler.
> 
> --------------------------------------------------------------------------
> 5. Example Usage
> --------------------------------------------------------------------------
> 
> A very basic QEMU prototype (kept updated) is available at:
> - https://github.com/davidhildenbrand/qemu/tree/virtio-mem
> 
> It lacks various features, however works to test the guest driver side:
> - No support for resizable memory regions / memory backends
> - No protection of unplugged memory (esp., userfaultfd-wp)
> - No dump/migration/XXX optimizations to skip over unplugged memory
> 
> Start QEMU with two virtio-mem devices (one per NUMA node):
>   $ qemu-system-x86_64 -m 4G,maxmem=20G \
>    -smp sockets=2,cores=2 \
>    -numa node,nodeid=0,cpus=0-1 -numa node,nodeid=1,cpus=2-3 \
>    [...]
>    -object memory-backend-ram,id=mem0,size=8G \
>    -device virtio-mem-pci,id=vm0,memdev=mem0,node=0,requested-size=128M \
>    -object memory-backend-ram,id=mem1,size=8G \
>    -device virtio-mem-pci,id=vm1,memdev=mem1,node=1,requested-size=80M
> 
> Query the configuration:
>   QEMU 4.1.50 monitor - type 'help' for more information
>   (qemu) info memory-devices
>   Memory device [virtio-mem]: "vm0"
>     memaddr: 0x140000000
>     node: 0
>     requested-size: 134217728
>     size: 134217728
>     max-size: 8589934592
>     block-size: 2097152
>     memdev: /objects/mem0
>   Memory device [virtio-mem]: "vm1"
>     memaddr: 0x340000000
>     node: 1
>     requested-size: 83886080
>     size: 83886080
>     max-size: 8589934592
>     block-size: 2097152
>     memdev: /objects/mem1
> 
> Add some memory to node 1:
>   QEMU 4.1.50 monitor - type 'help' for more information
>   (qemu) qom-set vm1 requested-size 1G
> 
> Remove some memory from node 0:
>   QEMU 4.1.50 monitor - type 'help' for more information
>   (qemu) qom-set vm0 requested-size 64M
> 
> Query the configuration:
> (qemu) info memory-devices
> Memory device [virtio-mem]: "vm0"
>    memaddr: 0x140000000
>    node: 0
>    requested-size: 67108864
>    size: 67108864
>    max-size: 8589934592
>    block-size: 2097152
>    memdev: /objects/mem0
> Memory device [virtio-mem]: "vm1"
>    memaddr: 0x340000000
>    node: 1
>    requested-size: 1073741824
>    size: 1073741824
>    max-size: 8589934592
>    block-size: 2097152
>    memdev: /objects/mem1
> 
> --------------------------------------------------------------------------
> 6. Q/A
> --------------------------------------------------------------------------
> 
> Q: Why add/remove parts ("subblocks") of memory blocks/sections?
> A: Flexibility (section size depends on the architecture) - e.g., some
>     architectures have a section size of 2GB. Also, the memory block size
>     is variable (e.g., on x86-64). I want to avoid any such restrictions.
>     Some use cases want to add/remove memory in smaller granularities to a
>     VM (e.g., the Hyper-V balloon also implements this) - especially smaller
>     VMs like used for kata containers. Also, on memory unplug, it is more
>     reliable to free-up and unplug multiple small chunks instead
>     of one big chunk. E.g., if one page of a DIMM is either unmovable or
>     pinned, the DIMM can't get unplugged. This approach is basically a
>     compromise between DIMM-based memory hot(un)plug and balloon
>     inflation/deflation, which works mostly on page granularity.
> 
> Q: Why care about memory blocks?
> A: They are the way to tell user space about new memory. This way,
>     memory can get onlined/offlined by user space. Also, e.g., kdump
>     relies on udev events to reload kexec when memory blocks are
>     onlined/offlined. Memory blocks are the "real" memory hot(un)plug
>     granularity. Everything that's smaller has to be emulated "on top".
> 
> Q: Won't memory unplug of subblocks fragment memory?
> A: Yes and no. Unplugging e.g., >=4MB subblocks on x86-64 will not really
>     fragment memory like unplugging random pages like a balloon driver does.
>     Buddy merging will not be limited. However, any allocation that requires
>     bigger consecutive memory chunks (e.g., gigantic pages) might observe
>     the fragmentation. Possible solutions: Allocate gigantic huge pages
>     before unplugging memory, don't unplug memory, combine virtio-mem with
>     DIMM based memory or bigger initial memory. Remember, a virtio-mem
>     device will only unplug on the memory range it manages, not on other
>     DIMMs. Unplug of single memory blocks will result in similar
>     fragmentation in respect to gigantic huge pages.
> 
> Q: How reliable is memory unplug?
> A: There are no guarantees on how much memory can get unplugged
>     again. However, it is more likely to find 4MB chunks to unplug than
>     e.g., 128MB chunks. If memory is terribly fragmented, there is nothing
>     we can do - for now. I consider memory hotplug the first primary use
>     of virtio-mem. Memory unplug might usually work, but we want to improve
>     the performance and the amount of memory we can actually unplug later.
> 
> Q: Why not unplug from the ZONE_MOVABLE?
> A: Unplugged memory chunks are unmovable. Unmovable data must not end up
>     on the ZONE_MOVABLE - similar to gigantic pages - they will never be
>     allocated from ZONE_MOVABLE. Teaching MM to move unplugged chunks within
>     a device might be problematic and will require a new guest->hypervisor
>     command to move unplugged chunks. virtio-mem added memory can be onlined
>     to the ZONE_MOVABLE, but subblocks will not get unplugged from it.
> 
> Q: How big should the initial (!virtio-mem) memory of a VM be?
> A: virtio-mem memory will not go to the DMA zones. So to avoid running out
>     of DMA memory, I suggest something like 2-3GB on x86-64. But many
>     VMs can most probably deal with less DMA memory - depends on the use
>     case.
> 
> [1] https://events.linuxfoundation.org/wp-content/uploads/2017/12/virtio-mem-Paravirtualized-Memory-David-Hildenbrand-Red-Hat-1.pdf
> [2] https://lwn.net/Articles/755423/
> [3] https://lists.gnu.org/archive/html/qemu-devel/2017-06/msg03870.html
> 
> ---
> 

Gentle ping. I want to continue working on this. I need feedback from MM 
people on ...

> David Hildenbrand (9):
>    ACPI: NUMA: export pxm_to_node
>    virtio-mem: Paravirtualized memory hotplug
>    virtio-mem: Paravirtualized memory hotunplug part 1
>    mm: Export alloc_contig_range() / free_contig_range()

... this patch, if it is okay to export alloc_contig_range() / 
free_contig_range() (see the next patch for how it is used)

>    virtio-mem: Paravirtualized memory hotunplug part 2
>    mm: Allow to offline PageOffline() pages with a reference count of 0

... this patch, to see if we can use "PageOffline() + refcount == 0" as 
a way to allow offlining memory with unplugged pieces (see the next patch 
for how this is used).

>    virtio-mem: Allow to offline partially unplugged memory blocks
>    mm/memory_hotplug: Introduce offline_and_remove_memory()

... I assume this patch is not that debatable.

>    virtio-mem: Offline and remove completely unplugged memory blocks

So yeah, please give feedback on

"mm: Export alloc_contig_range() / free_contig_range()" and "mm: Allow 
to offline PageOffline() pages with a reference count of 0", because 
they are the essential pieces apart from what we already have 
(generic_online_page(), memory notifiers ...)

Thanks!

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 4/9] mm: Export alloc_contig_range() / free_contig_range()
  2019-09-19 14:22 ` [PATCH RFC v3 4/9] mm: Export alloc_contig_range() / free_contig_range() David Hildenbrand
@ 2019-10-16 11:20   ` Michal Hocko
  2019-10-16 12:31     ` David Hildenbrand
  0 siblings, 1 reply; 39+ messages in thread
From: Michal Hocko @ 2019-10-16 11:20 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Vlastimil Babka, Oscar Salvador, Mel Gorman,
	Mike Rapoport, Dan Williams, Alexander Duyck, Pavel Tatashin,
	Alexander Potapenko

On Thu 19-09-19 16:22:23, David Hildenbrand wrote:
> A virtio-mem device wants to allocate memory from the memory region it
> manages in order to unplug it in the hypervisor - similar to
> a balloon driver. Also, it might want to plug previously unplugged
> (allocated) memory and give it back to Linux. alloc_contig_range() /
> free_contig_range() seem to be the perfect interface for this task.
> 
> In contrast to existing balloon devices, a virtio-mem device operates
> on bigger chunks (e.g., 4MB) and only on physical memory it manages. It
> tracks which chunks (subblocks) are still plugged, so it can go ahead
> and try to alloc_contig_range()+unplug them on unplug request, or
> plug+free_contig_range() unplugged chunks on plug requests.
> 
> A virtio-mem device will use alloc_contig_range() / free_contig_range()
> only on ranges that belong to the same node/zone in at least
> MAX(MAX_ORDER - 1, pageblock_order) order granularity - e.g., 4MB on
> x86-64. The virtio-mem device added that memory, so the memory
> exists and does not contain any holes. virtio-mem will only try to allocate
> on ZONE_NORMAL, never on ZONE_MOVABLE, just like when allocating
> gigantic pages (we don't put unmovable data into the movable zone).

Is there any real reason to export as GPL rather than generic
EXPORT_SYMBOL? In other words do we need to restrict the usage this
interface only to GPL modules and why if so. All other allocator APIs
are EXPORT_SYMBOL so there should better be a good reason for this one
to differ. I can understand that this one is slightly different by
requesting a specific range of the memory but it is still under a full
control of the core MM to say no.

> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Mike Rapoport <rppt@linux.ibm.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Pavel Tatashin <pavel.tatashin@microsoft.com>
> Cc: Alexander Potapenko <glider@google.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Other than that, I do not think exporting this function is harmful. It
would be worse to reinvent it and do it wrong.

I usually prefer to add a caller in the same patch, though, because it
makes the usage explicit and clear.

Acked-by: Michal Hocko <mhocko@suse.com> # to export contig range allocator API

> ---
>  mm/page_alloc.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3334a769eb91..d5d7944954b3 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8469,6 +8469,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>  				pfn_max_align_up(end), migratetype);
>  	return ret;
>  }
> +EXPORT_SYMBOL_GPL(alloc_contig_range);
>  #endif /* CONFIG_CONTIG_ALLOC */
>  
>  void free_contig_range(unsigned long pfn, unsigned int nr_pages)
> @@ -8483,6 +8484,7 @@ void free_contig_range(unsigned long pfn, unsigned int nr_pages)
>  	}
>  	WARN(count != 0, "%d pages are still in use!\n", count);
>  }
> +EXPORT_SYMBOL_GPL(free_contig_range);
>  
>  #ifdef CONFIG_MEMORY_HOTPLUG
>  /*
> -- 
> 2.21.0

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-09-19 14:22 ` [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0 David Hildenbrand
@ 2019-10-16 11:43   ` Michal Hocko
  2019-10-16 12:50     ` David Hildenbrand
  2019-10-16 13:45     ` David Hildenbrand
  0 siblings, 2 replies; 39+ messages in thread
From: Michal Hocko @ 2019-10-16 11:43 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On Thu 19-09-19 16:22:25, David Hildenbrand wrote:
> virtio-mem wants to allow to offline memory blocks of which some parts
> were unplugged, especially, to later offline and remove completely
> unplugged memory blocks. The important part is that PageOffline() has
> to remain set until the section is offline, so these pages will never
> get accessed (e.g., when dumping). The pages should not be handed
> back to the buddy (which would require clearing PageOffline() and
> result in issues if offlining fails and the pages are suddenly in the
> buddy).
> 
> Let's use "PageOffline() + reference count = 0" as a sign to
> memory offlining code that these pages can simply be skipped when
> offlining, similar to free or HWPoison pages.
> 
> Pass flags to test_pages_isolated(), similar as already done for
> has_unmovable_pages(). Use a new flag to indicate the
> requirement of memory offlining to skip over these special pages.
> 
> In has_unmovable_pages(), make sure the pages won't be detected as
> movable. This is not strictly necessary, however makes e.g.,
> alloc_contig_range() stop early, trying to isolate such page blocks -
> compared to failing later when testing if all pages were isolated.
> 
> Also, make sure that when a reference to a PageOffline() page is
> dropped, that the page will not be returned to the buddy.
> 
> memory devices (like virtio-mem) that want to make use of this
> functionality have to make sure to synchronize against memory offlining,
> using the memory hotplug notifier.
> 
> Alternative: Allow to offline with a reference count of 1
> and use some other sign in the struct page that offlining is permitted.

Few questions. I do not see onlining code to take care of this special
case. What should happen when offline && online?
Should we allow try_remove_memory to succeed with these pages?
Do we really have to hook into __put_page? Why do we even care about the
reference count of those pages? Wouldn't it be just more consistent to
elevate the reference count (I guess this is what you suggest in the
last paragraph) and the virtio driver would return that page to the
buddy by regular put_page. This is also related to the above question
about the physical memory remove.

[...]
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d5d7944954b3..fef74720d8b4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8221,6 +8221,15 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>  		if (!page_ref_count(page)) {
>  			if (PageBuddy(page))
>  				iter += (1 << page_order(page)) - 1;
> +			/*
> +			* Memory devices allow to offline a page if it is
> +			* marked PG_offline and has a reference count of 0.
> +			* However, the pages are not movable as it would be
> +			* required e.g., for alloc_contig_range().
> +			*/
> +			if (PageOffline(page) && !(flags & SKIP_OFFLINE))
> +				if (++found > count)
> +					goto unmovable;
>  			continue;
>  		}

Do we really need to distinguish offline and hwpoison pages? They are
both unmovable for allocator purposes and offlineable for the hotplug,
right? Should we just hide them behind a helper and use it rather than
an explicit SKIP_$FOO?
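
For illustration, one possible shape of such a helper (hypothetical name
and semantics, not something proposed in this series):

/*
 * Hypothetical helper: pages that memory offlining may skip over, even
 * though the page allocator has to treat them as unmovable.
 */
static inline bool page_offline_skippable(struct page *page)
{
	if (PageHWPoison(page))
		return true;
	/* Fake-offline pages as introduced by this series. */
	return PageOffline(page) && !page_ref_count(page);
}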
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 8/9] mm/memory_hotplug: Introduce offline_and_remove_memory()
  2019-09-19 14:22 ` [PATCH RFC v3 8/9] mm/memory_hotplug: Introduce offline_and_remove_memory() David Hildenbrand
@ 2019-10-16 11:47   ` Michal Hocko
  2019-10-16 12:57     ` David Hildenbrand
  0 siblings, 1 reply; 39+ messages in thread
From: Michal Hocko @ 2019-10-16 11:47 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Oscar Salvador, Pavel Tatashin, Wei Yang,
	Dan Williams, Qian Cai

On Thu 19-09-19 16:22:27, David Hildenbrand wrote:
> virtio-mem wants to offline and remove a memory block once it unplugged
> all subblocks (e.g., using alloc_contig_range()). Let's provide
> an interface to do that from a driver. virtio-mem already supports to
> offline partially unplugged memory blocks. Offlining a fully unplugged
> memory block will not require to migrate any pages. All unplugged
> subblocks are PageOffline() and have a reference count of 0 - so
> offlining code will simply skip them.
> 
> All we need an interface to trigger the "offlining" and the removing in a
> single operation - to make sure the memory block cannot get onlined by
> user space again before it gets removed.
> 
> To keep things simple, allow to only work on a single memory block.

Without a user it is not really clear why we need this interface.
I am also not really sure why you want/need to control beyond the
offlining stage. Care to explain some more?
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 4/9] mm: Export alloc_contig_range() / free_contig_range()
  2019-10-16 11:20   ` Michal Hocko
@ 2019-10-16 12:31     ` David Hildenbrand
  0 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2019-10-16 12:31 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Vlastimil Babka, Oscar Salvador, Mel Gorman,
	Mike Rapoport, Dan Williams, Alexander Duyck, Pavel Tatashin,
	Alexander Potapenko

On 16.10.19 13:20, Michal Hocko wrote:
> On Thu 19-09-19 16:22:23, David Hildenbrand wrote:
>> A virtio-mem device wants to allocate memory from the memory region it
>> manages in order to unplug it in the hypervisor - similar to
>> a balloon driver. Also, it might want to plug previously unplugged
>> (allocated) memory and give it back to Linux. alloc_contig_range() /
>> free_contig_range() seem to be the perfect interface for this task.
>>
>> In contrast to existing balloon devices, a virtio-mem device operates
>> on bigger chunks (e.g., 4MB) and only on physical memory it manages. It
>> tracks which chunks (subblocks) are still plugged, so it can go ahead
>> and try to alloc_contig_range()+unplug them on unplug request, or
>> plug+free_contig_range() unplugged chunks on plug requests.
>>
>> A virtio-mem device will use alloc_contig_range() / free_contig_range()
>> only on ranges that belong to the same node/zone in at least
>> MAX(MAX_ORDER - 1, pageblock_order) order granularity - e.g., 4MB on
>> x86-64. The virtio-mem device added that memory, so the memory
>> exists and does not contain any holes. virtio-mem will only try to allocate
>> on ZONE_NORMAL, never on ZONE_MOVABLE, just like when allocating
>> gigantic pages (we don't put unmovable data into the movable zone).
> 
> Is there any real reason to export as GPL rather than generic
> EXPORT_SYMBOL? In other words do we need to restrict the usage this
> interface only to GPL modules and why if so. All other allocator APIs
> are EXPORT_SYMBOL so there should better be a good reason for this one
> to differ. I can understand that this one is slightly different by
> requesting a specific range of the memory but it is still under a full
> control of the core MM to say no.

I thought that we might - at least initially - want to know all 
users. If you prefer, I can drop the GPL.

> 
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Michal Hocko <mhocko@suse.com>
>> Cc: Vlastimil Babka <vbabka@suse.cz>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Mel Gorman <mgorman@techsingularity.net>
>> Cc: Mike Rapoport <rppt@linux.ibm.com>
>> Cc: Dan Williams <dan.j.williams@intel.com>
>> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>> Cc: Pavel Tatashin <pavel.tatashin@microsoft.com>
>> Cc: Alexander Potapenko <glider@google.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
> 
> Other than that, I do not think exporting this function is harmful. It
> would be worse to reinvent it and do it wrong.
> 
> I usually prefer to add a caller in the same patch, though, because it
> makes the usage explicit and clear.
> 

It's the next patch in this series (I prefer to split this from the 
actual driver):

https://lkml.org/lkml/2019/9/19/486

> Acked-by: Michal Hocko <mhocko@suse.com> # to export contig range allocator API

Thanks!

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-16 11:43   ` Michal Hocko
@ 2019-10-16 12:50     ` David Hildenbrand
  2019-10-16 13:45       ` Michal Hocko
  2019-10-16 13:45     ` David Hildenbrand
  1 sibling, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-10-16 12:50 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On 16.10.19 13:43, Michal Hocko wrote:
> On Thu 19-09-19 16:22:25, David Hildenbrand wrote:
>> virtio-mem wants to allow to offline memory blocks of which some parts
>> were unplugged, especially, to later offline and remove completely
>> unplugged memory blocks. The important part is that PageOffline() has
>> to remain set until the section is offline, so these pages will never
>> get accessed (e.g., when dumping). The pages should not be handed
>> back to the buddy (which would require clearing PageOffline() and
>> result in issues if offlining fails and the pages are suddenly in the
>> buddy).
>>
>> Let's use "PageOffline() + reference count = 0" as a sign to
>> memory offlining code that these pages can simply be skipped when
>> offlining, similar to free or HWPoison pages.
>>
>> Pass flags to test_pages_isolated(), similar as already done for
>> has_unmovable_pages(). Use a new flag to indicate the
>> requirement of memory offlining to skip over these special pages.
>>
>> In has_unmovable_pages(), make sure the pages won't be detected as
>> movable. This is not strictly necessary, however makes e.g.,
>> alloc_contig_range() stop early, trying to isolate such page blocks -
>> compared to failing later when testing if all pages were isolated.
>>
>> Also, make sure that when a reference to a PageOffline() page is
>> dropped, that the page will not be returned to the buddy.
>>
>> memory devices (like virtio-mem) that want to make use of this
>> functionality have to make sure to synchronize against memory offlining,
>> using the memory hotplug notifier.
>>
>> Alternative: Allow to offline with a reference count of 1
>> and use some other sign in the struct page that offlining is permitted.
> 
> Few questions. I do not see onlining code to take care of this special
> case. What should happen when offline && online?

Once offline, the memmap is garbage. When onlining again:

a) memmap will be re-initialized
b) online_page_callback_t will be called for every page in the section. 
The driver can mark them offline again and not give them to the buddy.
c) section will be marked online.

The driver that marked these pages to be skipped when offlining is 
responsible for registering the online_page_callback_t callback where 
these pages will get excluded.

This is exactly the same as when onlining a memory block that is partially 
populated (e.g., the Hyper-V balloon right now).

So it's effectively "re-initializing the memmap using the driver 
knowledge" when onlining.

> Should we allow to try_remove_memory to succeed with these pages?

I think we should first properly offline them (mark sections and memory 
blocks offline, fix up numbers, shrink zones ...). Then we can cleanly 
remove the memory. (see [PATCH RFC v3 8/9] mm/memory_hotplug: Introduce 
offline_and_remove_memory())

Once offline, the memmap is irrelevant and try_remove_memory() can do 
its job.

> Do we really have hook into __put_page? Why do we even care about the
> reference count of those pages? Wouldn't it be just more consistent to
> elevate the reference count (I guess this is what you suggest in the
> last paragraph) and the virtio driver would return that page to the
> buddy by regular put_page. This is also related to the above question
> about the physical memory remove.

Returning them to the buddy is problematic for various reasons. Let's 
have a look at __offline_pages():

1) start_isolate_page_range()
-> offline pages with a reference count of one will be detected as 
unmovable -> BAD, we abort right away. We could hack around that.

2) memory_notify(MEM_GOING_OFFLINE, &arg);
-> Here, we could release all pages to the buddy, clearing PG_offline
-> BAD, PG_offline must not be cleared, so that dumping tools will not touch
    these pages. I don't see a way to hack around that.

3) scan_movable_pages() ...

4a) memory_notify(MEM_OFFLINE, &arg);

Perfect, it worked. Sections are offline.

4b) undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
     memory_notify(MEM_CANCEL_OFFLINE, &arg);

-> Offlining failed for whatever reason.
-> Pages are in the buddy, but we already un-isolated them. BAD.

By not going via the buddy we avoid these issues and can leave 
PG_offline set until the section is fully offline - something that is 
very desirable for virtio-mem (and, as far as I can tell, also Hyper-V 
in the future).

> 
> [...]
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index d5d7944954b3..fef74720d8b4 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -8221,6 +8221,15 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>>   		if (!page_ref_count(page)) {
>>   			if (PageBuddy(page))
>>   				iter += (1 << page_order(page)) - 1;
>> +			/*
>> +			* Memory devices allow to offline a page if it is
>> +			* marked PG_offline and has a reference count of 0.
>> +			* However, the pages are not movable as it would be
>> +			* required e.g., for alloc_contig_range().
>> +			*/
>> +			if (PageOffline(page) && !(flags & SKIP_OFFLINE))
>> +				if (++found > count)
>> +					goto unmovable;
>>   			continue;
>>   		}
> 
> Do we really need to distinguish offline and hwpoison pages? They are
> both unmovable for allocator purposes and offlineable for the hotplug,
> right? Should we just hide them behind a helper and use it rather than
> an explicit SKIP_$FOO?

Makes sense. It really boils down to "offline" vs. "allocate" use cases.

So maybe instead of "SKIP_FOO" something like "MEMORY_OFFLINE"?


-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 8/9] mm/memory_hotplug: Introduce offline_and_remove_memory()
  2019-10-16 11:47   ` Michal Hocko
@ 2019-10-16 12:57     ` David Hildenbrand
  0 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2019-10-16 12:57 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Oscar Salvador, Pavel Tatashin, Wei Yang,
	Dan Williams, Qian Cai

On 16.10.19 13:47, Michal Hocko wrote:
> On Thu 19-09-19 16:22:27, David Hildenbrand wrote:
>> virtio-mem wants to offline and remove a memory block once it unplugged
>> all subblocks (e.g., using alloc_contig_range()). Let's provide
>> an interface to do that from a driver. virtio-mem already supports to
>> offline partially unplugged memory blocks. Offlining a fully unplugged
>> memory block will not require to migrate any pages. All unplugged
>> subblocks are PageOffline() and have a reference count of 0 - so
>> offlining code will simply skip them.
>>
>> All we need an interface to trigger the "offlining" and the removing in a
>> single operation - to make sure the memory block cannot get onlined by
>> user space again before it gets removed.
>>
>> To keep things simple, allow to only work on a single memory block.
> 
> Without a user it is not really clear why do we need this interface.
> I am also not really sure why do you want/need to control beyond the
> offlining stage. Care to explain some more?
> 

The user is the next (small) patch in this series:

https://lkml.org/lkml/2019/9/19/475

Let's assume virtio-mem added a memory block (e.g., 128MB) and that block 
was onlined (e.g., by user space).

On request, virtio-mem used alloc_contig_range() to logically unplug all 
chunks (e.g., 4MB) of that memory block. virtio-mem marked all pages 
PG_offline and dropped the reference count to 0 (to allow the memory 
block to get offlined). Basically no memory of the memory block is still 
in use by the system. So it is very desirable to remove that memory 
block along with the vmemmap and the page tables. This frees up memory.

In order to remove the memory block, it first has to be officially 
offlined (i.e., mark the memory block as offline). Then, the memory 
block can get cleanly removed. Otherwise, try_remove_memory() will fail.

To do this, virtio-mem needs an interface to perform both steps (offline 
+ remove).

There is no interface for a driver to offline a memory block. What I 
propose here performs both steps (offline + remove) in a single operation, 
as that is really what the driver wants.
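
To condense the flow described above into code, a rough, hypothetical
sketch (the real logic lives in virtio_mem_mb_unplug_any_sb_online() and
virtio_mem_mb_offline_and_remove() in patches 7-9; names below are made
up for illustration):

static int unplug_and_remove_block(int nid, u64 addr, u64 sb_size)
{
	const u64 mb_size = memory_block_size_bytes();
	u64 offset;
	int rc;

	/* 1) Logically unplug the block, one subblock at a time. */
	for (offset = 0; offset < mb_size; offset += sb_size) {
		rc = alloc_contig_range(PFN_DOWN(addr + offset),
					PFN_DOWN(addr + offset + sb_size),
					MIGRATE_MOVABLE, GFP_KERNEL);
		if (rc)
			return rc;
		/* ... mark the range PG_offline, drop refcounts to 0 ... */
	}

	/* 2) Nothing is in use anymore: offline + remove in one operation. */
	return offline_and_remove_memory(nid, addr, mb_size);
}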

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-16 11:43   ` Michal Hocko
  2019-10-16 12:50     ` David Hildenbrand
@ 2019-10-16 13:45     ` David Hildenbrand
  2019-10-16 14:03       ` Michal Hocko
  1 sibling, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-10-16 13:45 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On 16.10.19 13:43, Michal Hocko wrote:
> On Thu 19-09-19 16:22:25, David Hildenbrand wrote:
>> virtio-mem wants to allow to offline memory blocks of which some parts
>> were unplugged, especially, to later offline and remove completely
>> unplugged memory blocks. The important part is that PageOffline() has
>> to remain set until the section is offline, so these pages will never
>> get accessed (e.g., when dumping). The pages should not be handed
>> back to the buddy (which would require clearing PageOffline() and
>> result in issues if offlining fails and the pages are suddenly in the
>> buddy).
>>
>> Let's use "PageOffline() + reference count = 0" as a sign to
>> memory offlining code that these pages can simply be skipped when
>> offlining, similar to free or HWPoison pages.
>>
>> Pass flags to test_pages_isolated(), similar as already done for
>> has_unmovable_pages(). Use a new flag to indicate the
>> requirement of memory offlining to skip over these special pages.
>>
>> In has_unmovable_pages(), make sure the pages won't be detected as
>> movable. This is not strictly necessary, however makes e.g.,
>> alloc_contig_range() stop early, trying to isolate such page blocks -
>> compared to failing later when testing if all pages were isolated.
>>
>> Also, make sure that when a reference to a PageOffline() page is
>> dropped, that the page will not be returned to the buddy.
>>
>> memory devices (like virtio-mem) that want to make use of this
>> functionality have to make sure to synchronize against memory offlining,
>> using the memory hotplug notifier.
>>
>> Alternative: Allow to offline with a reference count of 1
>> and use some other sign in the struct page that offlining is permitted.
> 
> Few questions. I do not see onlining code to take care of this special
> case. What should happen when offline && online?
> Should we allow to try_remove_memory to succeed with these pages?
> Do we really have hook into __put_page? Why do we even care about the
> reference count of those pages?

Oh, I forgot to answer these questions. The __put_page() change is 
necessary for the following race I identified:

Page has a refcount of 1 (e.g., allocated by virtio-mem using 
alloc_contig_range()).

a) kernel: get_page_unless_zero(page): refcount = 2
b) virtio-mem: set page PG_offline, reduce refcount: refcount = 1
c) kernel: put_page(page): refcount = 0

The page would suddenly be given to the buddy, which is bad.
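
To illustrate the fix (a rough sketch of the idea, not the exact hunk 
from the patch; the helper name is made up and stands in for the 
modified put_page()/__put_page() path):

/*
 * When the last reference is dropped, a fake-offline page must stay under
 * the control of the driver's metadata instead of going to the buddy.
 */
static inline void put_page_careful(struct page *page)
{
	if (!put_page_testzero(page))
		return;		/* somebody still holds a reference */

	if (PageOffline(page))
		return;		/* fake-offline: do not free to the buddy */

	__put_page(page);	/* normal path */
}

With such a check, the race above ends with the page simply remaining 
PG_offline at refcount 0, still owned by virtio-mem.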

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-16 12:50     ` David Hildenbrand
@ 2019-10-16 13:45       ` Michal Hocko
  2019-10-16 13:55         ` David Hildenbrand
  2019-10-16 13:59         ` David Hildenbrand
  0 siblings, 2 replies; 39+ messages in thread
From: Michal Hocko @ 2019-10-16 13:45 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On Wed 16-10-19 14:50:30, David Hildenbrand wrote:
> On 16.10.19 13:43, Michal Hocko wrote:
> > On Thu 19-09-19 16:22:25, David Hildenbrand wrote:
> > > virtio-mem wants to allow to offline memory blocks of which some parts
> > > were unplugged, especially, to later offline and remove completely
> > > unplugged memory blocks. The important part is that PageOffline() has
> > > to remain set until the section is offline, so these pages will never
> > > get accessed (e.g., when dumping). The pages should not be handed
> > > back to the buddy (which would require clearing PageOffline() and
> > > result in issues if offlining fails and the pages are suddenly in the
> > > buddy).
> > > 
> > > Let's use "PageOffline() + reference count = 0" as a sign to
> > > memory offlining code that these pages can simply be skipped when
> > > offlining, similar to free or HWPoison pages.
> > > 
> > > Pass flags to test_pages_isolated(), similar as already done for
> > > has_unmovable_pages(). Use a new flag to indicate the
> > > requirement of memory offlining to skip over these special pages.
> > > 
> > > In has_unmovable_pages(), make sure the pages won't be detected as
> > > movable. This is not strictly necessary, however makes e.g.,
> > > alloc_contig_range() stop early, trying to isolate such page blocks -
> > > compared to failing later when testing if all pages were isolated.
> > > 
> > > Also, make sure that when a reference to a PageOffline() page is
> > > dropped, that the page will not be returned to the buddy.
> > > 
> > > memory devices (like virtio-mem) that want to make use of this
> > > functionality have to make sure to synchronize against memory offlining,
> > > using the memory hotplug notifier.
> > > 
> > > Alternative: Allow to offline with a reference count of 1
> > > and use some other sign in the struct page that offlining is permitted.
> > 
> > Few questions. I do not see onlining code to take care of this special
> > case. What should happen when offline && online?
> 
> Once offline, the memmap is garbage. When onlining again:
> 
> a) memmap will be re-initialized
> b) online_page_callback_t will be called for every page in the section. The
> driver can mark them offline again and not give them to the buddy.
> c) section will be marked online.

But we can skip those pages when onlining and keep them in the offline
state, right? We do not poison offlined pages.

There is state stored in the struct page. In other words, this shouldn't
really be different from HWPoison pages. I cannot find the code that is
doing that and maybe we don't handle that. But we cannot simply online a
hwpoisoned page. Offlining the range will not make broken memory OK
all of a sudden. And your use case sounds similar to me.

> The driver that marked these pages to be skipped when offlining is
> responsible for registering the online_page_callback_t callback where these
> pages will get excluded.
> 
> This is exactly the same as when onlining a memory block that is partially
> populated (e.g., HyperV balloon right now).
> 
> So it's effectively "re-initializing the memmap using the driver knowledge"
> when onlining.

I am not sure I follow. So you exclude those pages when onlining?

> > Should we allow to try_remove_memory to succeed with these pages?
> 
> I think we should first properly offline them (mark sections offline and
> memory blocks, fixup numbers, shrink zones ...). Then we can cleanly remove
> the memory. (see [PATCH RFC v3 8/9] mm/memory_hotplug: Introduce
> offline_and_remove_memory())

I will have a look, but just a quick question: try_remove_memory would
fail if the range is offline (via the user interface) but there are still
some pages in the driver's Offline state?

> Once offline, the memmap is irrelevant and try_remove_memory() can do its
> job.
> 
> > Do we really have hook into __put_page? Why do we even care about the
> > reference count of those pages? Wouldn't it be just more consistent to
> > elevate the reference count (I guess this is what you suggest in the
> > last paragraph) and the virtio driver would return that page to the
> > buddy by regular put_page. This is also related to the above question
> > about the physical memory remove.
> 
> Returning them to the buddy is problematic for various reasons. Let's have a
> look at __offline_pages():
> 
> 1) start_isolate_page_range()
> -> offline pages with a reference count of one will be detected as unmovable
> -> BAD, we abort right away. We could hack around that.
> 
> 2) memory_notify(MEM_GOING_OFFLINE, &arg);
> -> Here, we could release all pages to the buddy, clearing PG_offline
> -> BAD, PG_offline must not be cleared so dumping tools will not touch
>    these pages. I don't see a way to hack around that.
> 
> 3) scan_movable_pages() ...
> 
> 4a) memory_notify(MEM_OFFLINE, &arg);
> 
> Perfect, it worked. Sections are offline.
> 
> 4b) undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
>     memory_notify(MEM_CANCEL_OFFLINE, &arg);
> 
> -> Offlining failed for whatever reason.
> -> Pages are in the buddy, but we already un-isolated them. BAD.
> 
> By not going via the buddy we avoid these issues and can leave PG_offline
> set until the section is fully offline. Something that is very desirable for
> virtio-mem (and as far as I can tell also HyperV in the future).

I am not sure I follow. Maybe my original question was confusing. Let me
ask again. Why do we need to hook into __put_page?


> > [...]
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index d5d7944954b3..fef74720d8b4 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -8221,6 +8221,15 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
> > >   		if (!page_ref_count(page)) {
> > >   			if (PageBuddy(page))
> > >   				iter += (1 << page_order(page)) - 1;
> > > +			/*
> > > +			* Memory devices allow to offline a page if it is
> > > +			* marked PG_offline and has a reference count of 0.
> > > +			* However, the pages are not movable as it would be
> > > +			* required e.g., for alloc_contig_range().
> > > +			*/
> > > +			if (PageOffline(page) && !(flags & SKIP_OFFLINE))
> > > +				if (++found > count)
> > > +					goto unmovable;
> > >   			continue;
> > >   		}
> > 
> > Do we really need to distinguish offline and hwpoison pages? They are
> > both unmovable for allocator purposes and offlineable for the hotplug,
> > right? Should we just hide them behind a helper and use it rather than
> > an explicit SKIP_$FOO?
> 
> Makes sense. It really boils down to "offline" vs. "allocate" use cases.
> 
> So maybe instead of "SKIP_FOO" something like "MEMORY_OFFLINE". ?

Yes, that would be a better fit.
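
Roughly, and purely as an illustration of that naming (not the final 
diff), something like:

/*
 * Both HWPoison pages and fake-offline pages (PageOffline() with a
 * reference count of 0) can be skipped when offlining, but neither may
 * be handed out by the allocator, e.g., via alloc_contig_range().
 */
static inline bool page_skippable_when_offlining(struct page *page)
{
	return PageHWPoison(page) ||
	       (PageOffline(page) && !page_ref_count(page));
}

has_unmovable_pages()/test_pages_isolated() would then only consult such 
a helper when the caller passed the MEMORY_OFFLINE flag, keeping the 
"offline" and "allocate" use cases apart.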

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-16 13:45       ` Michal Hocko
@ 2019-10-16 13:55         ` David Hildenbrand
  2019-10-16 14:09           ` Michal Hocko
  2019-10-16 13:59         ` David Hildenbrand
  1 sibling, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-10-16 13:55 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On 16.10.19 15:45, Michal Hocko wrote:
> On Wed 16-10-19 14:50:30, David Hildenbrand wrote:
>> On 16.10.19 13:43, Michal Hocko wrote:
>>> On Thu 19-09-19 16:22:25, David Hildenbrand wrote:
>>>> virtio-mem wants to allow to offline memory blocks of which some parts
>>>> were unplugged, especially, to later offline and remove completely
>>>> unplugged memory blocks. The important part is that PageOffline() has
>>>> to remain set until the section is offline, so these pages will never
>>>> get accessed (e.g., when dumping). The pages should not be handed
>>>> back to the buddy (which would require clearing PageOffline() and
>>>> result in issues if offlining fails and the pages are suddenly in the
>>>> buddy).
>>>>
>>>> Let's use "PageOffline() + reference count = 0" as a sign to
>>>> memory offlining code that these pages can simply be skipped when
>>>> offlining, similar to free or HWPoison pages.
>>>>
>>>> Pass flags to test_pages_isolated(), similar as already done for
>>>> has_unmovable_pages(). Use a new flag to indicate the
>>>> requirement of memory offlining to skip over these special pages.
>>>>
>>>> In has_unmovable_pages(), make sure the pages won't be detected as
>>>> movable. This is not strictly necessary, however makes e.g.,
>>>> alloc_contig_range() stop early, trying to isolate such page blocks -
>>>> compared to failing later when testing if all pages were isolated.
>>>>
>>>> Also, make sure that when a reference to a PageOffline() page is
>>>> dropped, that the page will not be returned to the buddy.
>>>>
>>>> memory devices (like virtio-mem) that want to make use of this
>>>> functionality have to make sure to synchronize against memory offlining,
>>>> using the memory hotplug notifier.
>>>>
>>>> Alternative: Allow to offline with a reference count of 1
>>>> and use some other sign in the struct page that offlining is permitted.
>>>
>>> Few questions. I do not see onlining code to take care of this special
>>> case. What should happen when offline && online?
>>
>> Once offline, the memmap is garbage. When onlining again:
>>
>> a) memmap will be re-initialized
>> b) online_page_callback_t will be called for every page in the section. The
>> driver can mark them offline again and not give them to the buddy.
>> c) section will be marked online.
> 
> But we can skip those pages when onlining and keep them in the offline
> state right? We do not poison offlined pages.

Right now, I do that via the online_page_callback_t call (similar to 
HyperV), as the memmap is basically garbage and not trustworthy.

> 
> There is state stored in the struct page. In other words this shouldn't
> be really different from HWPoison pages. I cannot find the code that is
> doing that and maybe we don't handle that. But we cannot simply online
> hwpoisoned page. Offlining the range will not make a broken memory OK
> all of the sudden. And your usecase sounds similar to me.

Sorry to say, but whenever we online memory the memmap is overwritten, 
because there is no way to tell whether it contains garbage or not. You 
have to assume it is garbage. (My recent patch even poisons the memmap 
when offlining, which helped to find a lot of these "garbage memmap" BUGs.)

online_pages()
	...
	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL);
	...
		memmap_init_zone()
			-> memmap initialized

So yes, offlining memory with HWPoison and re-onlining it effectively 
drops HWPoison markers. On the next access, you will trigger a new HWPoison.

> 
>> The driver that marked these pages to be skipped when offlining is
>> responsible for registering the online_page_callback_t callback where these
>> pages will get excluded.
>>
>> This is exactly the same as when onlining a memory block that is partially
>> populated (e.g., HyperV balloon right now).
>>
>> So it's effectively "re-initializing the memmap using the driver knowledge"
>> when onlining.
> 
> I am not sure I follow. So you exclude those pages when onlining?

Exactly, using the online_page callback. The pages will, again, be 
marked PG_offline with a refcount of 0. They will not be handed to the buddy.
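
Roughly like this (a sketch only: the callback and range-tracking helper 
names are made up, generic_online_page() is the exported default, and 
the driver registers the callback via set_online_page_callback()):

static void virtio_mem_online_page_cb(struct page *page, unsigned int order)
{
	const unsigned long pfn = page_to_pfn(page);
	unsigned long i;

	if (virtio_mem_pfn_range_is_plugged(pfn, 1ul << order)) {
		/* plugged memory: hand the pages to the buddy as usual */
		generic_online_page(page, order);
	} else {
		/* unplugged memory: mark fake-offline again, refcount 0 */
		for (i = 0; i < (1ul << order); i++) {
			__SetPageOffline(pfn_to_page(pfn + i));
			page_ref_dec(pfn_to_page(pfn + i));
		}
	}
}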

> 
>>> Should we allow to try_remove_memory to succeed with these pages?
>>
>> I think we should first properly offline them (mark sections offline and
>> memory blocks, fixup numbers, shrink zones ...). Then we can cleanly remove
>> the memory. (see [PATCH RFC v3 8/9] mm/memory_hotplug: Introduce
>> offline_and_remove_memory())
> 
> I will have a look, but just to quick question. try_remove_memory would
> fail if the range is offline (via user interface) but there are still some
> pages in the driver Offline state?

try_remove_memory() does not check any memmap (because it is garbage); 
it only makes sure that the memory blocks are properly marked as 
offline (IOW, that device_offline() was called on the memory block).

> 
>> Once offline, the memmap is irrelevant and try_remove_memory() can do its
>> job.
>>
>>> Do we really have hook into __put_page? Why do we even care about the
>>> reference count of those pages? Wouldn't it be just more consistent to
>>> elevate the reference count (I guess this is what you suggest in the
>>> last paragraph) and the virtio driver would return that page to the
>>> buddy by regular put_page. This is also related to the above question
>>> about the physical memory remove.
>>
>> Returning them to the buddy is problematic for various reasons. Let's have a
>> look at __offline_pages():
>>
>> 1) start_isolate_page_range()
>> -> offline pages with a reference count of one will be detected as unmovable
>> -> BAD, we abort right away. We could hack around that.
>>
>> 2) memory_notify(MEM_GOING_OFFLINE, &arg);
>> -> Here, we could release all pages to the buddy, clearing PG_offline
>> -> BAD, PG_offline must not be cleared so dumping tools will not touch
>>     these pages. I don't see a way to hack around that.
>>
>> 3) scan_movable_pages() ...
>>
>> 4a) memory_notify(MEM_OFFLINE, &arg);
>>
>> Perfect, it worked. Sections are offline.
>>
>> 4b) undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
>>      memory_notify(MEM_CANCEL_OFFLINE, &arg);
>>
>> -> Offlining failed for whatever reason.
>> -> Pages are in the buddy, but we already un-isolated them. BAD.
>>
>> By not going via the buddy we avoid these issues and can leave PG_offline
>> set until the section is fully offline. Something that is very desirable for
>> virtio-mem (and as far as I can tell also HyperV in the future).
> 
> I am not sure I follow. Maybe my original question was confusing. Let me
> ask again. Why do we need to hook into __put_page?

Just replied again answering this question, before I read this mail :)

Thanks!

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-16 13:45       ` Michal Hocko
  2019-10-16 13:55         ` David Hildenbrand
@ 2019-10-16 13:59         ` David Hildenbrand
  1 sibling, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2019-10-16 13:59 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On 16.10.19 15:45, Michal Hocko wrote:
> On Wed 16-10-19 14:50:30, David Hildenbrand wrote:
>> On 16.10.19 13:43, Michal Hocko wrote:
>>> On Thu 19-09-19 16:22:25, David Hildenbrand wrote:
>>>> virtio-mem wants to allow to offline memory blocks of which some parts
>>>> were unplugged, especially, to later offline and remove completely
>>>> unplugged memory blocks. The important part is that PageOffline() has
>>>> to remain set until the section is offline, so these pages will never
>>>> get accessed (e.g., when dumping). The pages should not be handed
>>>> back to the buddy (which would require clearing PageOffline() and
>>>> result in issues if offlining fails and the pages are suddenly in the
>>>> buddy).
>>>>
>>>> Let's use "PageOffline() + reference count = 0" as a sign to
>>>> memory offlining code that these pages can simply be skipped when
>>>> offlining, similar to free or HWPoison pages.
>>>>
>>>> Pass flags to test_pages_isolated(), similar as already done for
>>>> has_unmovable_pages(). Use a new flag to indicate the
>>>> requirement of memory offlining to skip over these special pages.
>>>>
>>>> In has_unmovable_pages(), make sure the pages won't be detected as
>>>> movable. This is not strictly necessary, however makes e.g.,
>>>> alloc_contig_range() stop early, trying to isolate such page blocks -
>>>> compared to failing later when testing if all pages were isolated.
>>>>
>>>> Also, make sure that when a reference to a PageOffline() page is
>>>> dropped, that the page will not be returned to the buddy.
>>>>
>>>> memory devices (like virtio-mem) that want to make use of this
>>>> functionality have to make sure to synchronize against memory offlining,
>>>> using the memory hotplug notifier.
>>>>
>>>> Alternative: Allow to offline with a reference count of 1
>>>> and use some other sign in the struct page that offlining is permitted.
>>>
>>> Few questions. I do not see onlining code to take care of this special
>>> case. What should happen when offline && online?
>>
>> Once offline, the memmap is garbage. When onlining again:
>>
>> a) memmap will be re-initialized
>> b) online_page_callback_t will be called for every page in the section. The
>> driver can mark them offline again and not give them to the buddy.
>> c) section will be marked online.
> 
> But we can skip those pages when onlining and keep them in the offline
> state right? We do not poison offlined pages.

https://lkml.org/lkml/2019/10/6/60

But again, onlining will overwrite the whole memmap right now and there 
is no way to identify whether a memmap contains garbage or not.

We would have to identify/remember that we are re-onlining, but I am not 
yet sure if re-using memmaps when onlining is such a good idea ...

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-16 13:45     ` David Hildenbrand
@ 2019-10-16 14:03       ` Michal Hocko
  2019-10-16 14:14         ` David Hildenbrand
  0 siblings, 1 reply; 39+ messages in thread
From: Michal Hocko @ 2019-10-16 14:03 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On Wed 16-10-19 15:45:06, David Hildenbrand wrote:
> On 16.10.19 13:43, Michal Hocko wrote:
> > On Thu 19-09-19 16:22:25, David Hildenbrand wrote:
> > > virtio-mem wants to allow to offline memory blocks of which some parts
> > > were unplugged, especially, to later offline and remove completely
> > > unplugged memory blocks. The important part is that PageOffline() has
> > > to remain set until the section is offline, so these pages will never
> > > get accessed (e.g., when dumping). The pages should not be handed
> > > back to the buddy (which would require clearing PageOffline() and
> > > result in issues if offlining fails and the pages are suddenly in the
> > > buddy).
> > > 
> > > Let's use "PageOffline() + reference count = 0" as a sign to
> > > memory offlining code that these pages can simply be skipped when
> > > offlining, similar to free or HWPoison pages.
> > > 
> > > Pass flags to test_pages_isolated(), similar as already done for
> > > has_unmovable_pages(). Use a new flag to indicate the
> > > requirement of memory offlining to skip over these special pages.
> > > 
> > > In has_unmovable_pages(), make sure the pages won't be detected as
> > > movable. This is not strictly necessary, however makes e.g.,
> > > alloc_contig_range() stop early, trying to isolate such page blocks -
> > > compared to failing later when testing if all pages were isolated.
> > > 
> > > Also, make sure that when a reference to a PageOffline() page is
> > > dropped, that the page will not be returned to the buddy.
> > > 
> > > memory devices (like virtio-mem) that want to make use of this
> > > functionality have to make sure to synchronize against memory offlining,
> > > using the memory hotplug notifier.
> > > 
> > > Alternative: Allow to offline with a reference count of 1
> > > and use some other sign in the struct page that offlining is permitted.
> > 
> > Few questions. I do not see onlining code to take care of this special
> > case. What should happen when offline && online?
> > Should we allow to try_remove_memory to succeed with these pages?
> > Do we really have hook into __put_page? Why do we even care about the
> > reference count of those pages?
> 
> Oh, I forgot to answer these questions. The __put_page() change is necessary
> for the following race I identified:
> 
> Page has a refcount of 1 (e.g., allocated by virtio-mem using
> alloc_contig_range()).
> 
> a) kernel: get_page_unless_zero(page): refcount = 2
> b) virtio-mem: set page PG_offline, reduce refcount: refcount = 1
> c) kernel: put_page(page): refcount = 0
> 
> The page would suddenly be given to the buddy, which is bad.

But why can you not keep the reference count at 1 (do a get_page when
offlining the page)? In other words, as long as the driver knows the page
has been returned to the host, it has a ref count of 1. Once the page
is returned to the guest for whatever reason, it can free it to the
system by clearing the offline state and calling put_page().

An elevated ref count could help to detect that the memory hotremove is
not safe until the driver removes all potential metadata it might still
hold. You also know that memory online should skip such a page.

All in all, your page is still in use by the driver and the life cycle is
controlled by that driver.

Or am I am missing something?
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-16 13:55         ` David Hildenbrand
@ 2019-10-16 14:09           ` Michal Hocko
  2019-10-16 14:16             ` David Hildenbrand
  0 siblings, 1 reply; 39+ messages in thread
From: Michal Hocko @ 2019-10-16 14:09 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On Wed 16-10-19 15:55:00, David Hildenbrand wrote:
> On 16.10.19 15:45, Michal Hocko wrote:
[...]
> > There is state stored in the struct page. In other words this shouldn't
> > be really different from HWPoison pages. I cannot find the code that is
> > doing that and maybe we don't handle that. But we cannot simply online
> > hwpoisoned page. Offlining the range will not make a broken memory OK
> > all of the sudden. And your usecase sounds similar to me.
> 
> Sorry to say, but whenever we online memory the memmap is overwritten,
> because there is no way you could tell it contains garbage or not. You have
> to assume it is garbage. (my recent patch even poisons the memmap when
> offlining, which helped to find a lot of these "garbage memmap" BUGs)
> 
> online_pages()
> 	...
> 	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL);
> 	...
> 		memmap_init_zone()
> 			-> memmap initialized
> 
> So yes, offlining memory with HWPoison and re-onlining it effectively drops
> HWPoison markers. On the next access, you will trigger a new HWPoison.

Right you are! I need to sit on this much more and think about it with a
clean head.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-16 14:03       ` Michal Hocko
@ 2019-10-16 14:14         ` David Hildenbrand
  2019-10-18  8:15           ` Michal Hocko
  0 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-10-16 14:14 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On 16.10.19 16:03, Michal Hocko wrote:
> On Wed 16-10-19 15:45:06, David Hildenbrand wrote:
>> On 16.10.19 13:43, Michal Hocko wrote:
>>> On Thu 19-09-19 16:22:25, David Hildenbrand wrote:
>>>> virtio-mem wants to allow to offline memory blocks of which some parts
>>>> were unplugged, especially, to later offline and remove completely
>>>> unplugged memory blocks. The important part is that PageOffline() has
>>>> to remain set until the section is offline, so these pages will never
>>>> get accessed (e.g., when dumping). The pages should not be handed
>>>> back to the buddy (which would require clearing PageOffline() and
>>>> result in issues if offlining fails and the pages are suddenly in the
>>>> buddy).
>>>>
>>>> Let's use "PageOffline() + reference count = 0" as a sign to
>>>> memory offlining code that these pages can simply be skipped when
>>>> offlining, similar to free or HWPoison pages.
>>>>
>>>> Pass flags to test_pages_isolated(), similar as already done for
>>>> has_unmovable_pages(). Use a new flag to indicate the
>>>> requirement of memory offlining to skip over these special pages.
>>>>
>>>> In has_unmovable_pages(), make sure the pages won't be detected as
>>>> movable. This is not strictly necessary, however makes e.g.,
>>>> alloc_contig_range() stop early, trying to isolate such page blocks -
>>>> compared to failing later when testing if all pages were isolated.
>>>>
>>>> Also, make sure that when a reference to a PageOffline() page is
>>>> dropped, that the page will not be returned to the buddy.
>>>>
>>>> memory devices (like virtio-mem) that want to make use of this
>>>> functionality have to make sure to synchronize against memory offlining,
>>>> using the memory hotplug notifier.
>>>>
>>>> Alternative: Allow to offline with a reference count of 1
>>>> and use some other sign in the struct page that offlining is permitted.
>>>
>>> Few questions. I do not see onlining code to take care of this special
>>> case. What should happen when offline && online?
>>> Should we allow to try_remove_memory to succeed with these pages?
>>> Do we really have hook into __put_page? Why do we even care about the
>>> reference count of those pages?
>>
>> Oh, I forgot to answer these questions. The __put_page() change is necessary
>> for the following race I identified:
>>
>> Page has a refcount of 1 (e.g., allocated by virtio-mem using
>> alloc_contig_range()).
>>
>> a) kernel: get_page_unless_zero(page): refcount = 2
>> b) virtio-mem: set page PG_offline, reduce refcount: refcount = 1
>> c) kernel: put_page(page): refcount = 0
>>
>> The page would suddenly be given to the buddy, which is bad.
> 
> But why cannot you keep the reference count at 1 (do get_page when
> offlining the page)? In other words as long as the driver knows the page
> has been returned to the host then it has ref count at 1. Once the page
> is returned to the guest for whatever reason it can free it to the
> system by clearing the offline state and put_page.

I think I explained how a reference count of 1 is problematic when 
wanting to offline the memory. After all, that's the problem I am trying 
to solve: keep PG_offline set until the memory is offline and make sure 
nobody will touch the page.

See below on details on how to revive such unplugged memory.

> 
> An elevated ref count could help to detect that the memory hotremove is
> not safe until the driver removes all potential metadata it might still
> hold. You also know that memory online should skip such a page.

Again, when onlining memory you have to assume the memmap is complete 
garbage. It cannot be trusted *at all* as of now. This represents a BIG 
change to what we have right now. I'm not totally against that, but it 
sounds like a bigger rework that mainly helps to fix HWPoison.

> 
> All in all your page is still in use by the driver and the life cycle is
> controlled by that driver.

The driver let go of all direct references. The pages (or rather, chunks) 
are on no list. The "knowledge" that these pages are offline is stored 
in some external metadata. This metadata is updated when notified via 
the memory notifier.

What happens in case virtio-mem wants to revive a chunk (IOW, plug 
unplugged memory)? Roughly the following (a minimal sketch follows the list):

a) it makes sure no concurrent memory onlining/offlining can happen 
(locking via memory notifiers)
b) it grabs a reference to the page (increasing the refcount)
c) it clears PG_offline and issues free_contig_range().
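
A minimal sketch of b) + c) (the function name is made up; the caller is 
assumed to hold the synchronization from a), so no onlining/offlining 
can run concurrently):

static void virtio_mem_fake_online(unsigned long pfn, unsigned int nr_pages)
{
	unsigned int i;

	for (i = 0; i < nr_pages; i++) {
		struct page *page = pfn_to_page(pfn + i);

		page_ref_inc(page);		/* b) refcount 0 -> 1 */
		__ClearPageOffline(page);	/* c) page may be used again */
	}
	/* c) hand the chunk back to the buddy */
	free_contig_range(pfn, nr_pages);
}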

> 
> Or am I am missing something?
> 
It's just complex stuff :) I guess the part you are missing is that the 
driver officially signals "I have no direct reference, you can offline 
this memory, I know how to deal with that". It's not like "this is a 
balloon inflated page I keep in a list".

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-16 14:09           ` Michal Hocko
@ 2019-10-16 14:16             ` David Hildenbrand
  0 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2019-10-16 14:16 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On 16.10.19 16:09, Michal Hocko wrote:
> On Wed 16-10-19 15:55:00, David Hildenbrand wrote:
>> On 16.10.19 15:45, Michal Hocko wrote:
> [...]
>>> There is state stored in the struct page. In other words this shouldn't
>>> be really different from HWPoison pages. I cannot find the code that is
>>> doing that and maybe we don't handle that. But we cannot simply online
>>> hwpoisoned page. Offlining the range will not make a broken memory OK
>>> all of the sudden. And your usecase sounds similar to me.
>>
>> Sorry to say, but whenever we online memory the memmap is overwritten,
>> because there is no way you could tell it contains garbage or not. You have
>> to assume it is garbage. (my recent patch even poisons the memmap when
>> offlining, which helped to find a lot of these "garbage memmap" BUGs)
>>
>> online_pages()
>> 	...
>> 	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL);
>> 	...
>> 		memmap_init_zone()
>> 			-> memmap initialized
>>
>> So yes, offlining memory with HWPoison and re-onlining it effectively drops
>> HWPoison markers. On the next access, you will trigger a new HWPoison.
> 
> Right you are! I need to sit on this much more and think about it with a
> clean head.
> 

Sure, take your time and let me know if you need any details on how 
virtio-mem works, how it uses these interfaces, and how the mm internals 
it uses play along.

Thanks Michal!

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-16 14:14         ` David Hildenbrand
@ 2019-10-18  8:15           ` Michal Hocko
  2019-10-18  8:50             ` David Hildenbrand
  0 siblings, 1 reply; 39+ messages in thread
From: Michal Hocko @ 2019-10-18  8:15 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On Wed 16-10-19 16:14:52, David Hildenbrand wrote:
> On 16.10.19 16:03, Michal Hocko wrote:
[...]
> > But why cannot you keep the reference count at 1 (do get_page when
> > offlining the page)? In other words as long as the driver knows the page
> > has been returned to the host then it has ref count at 1. Once the page
> > is returned to the guest for whatever reason it can free it to the
> > system by clearing the offline state and put_page.
> 
> I think I explained how the reference count of 1 is problematic when wanting
> to offline the memory. After all that's the problem I try to solve: Keep
> PG_offline set until the memory is offline and make sure nobody will touch
> the page.

Please bear with me, but I still believe that an elevated reference count
has some merits. I do understand that you maintain your metadata to
recognize that the memory handed over to the hypervisor will not
magically appear after onlining. But I believe that you can achieve
the same with an elevated reference count and have a more robust design
as well.

Let's say that you still keep a reference to your offlined pages and
mark them offline. That should make sure that no consumer of
pfn_to_online_page() will touch the page's content or state. Now the
admin might want to offline/hotremove the whole memory block via sysfs.
An elevated reference count would prevent offlining from finishing. And I
believe this is a good thing, because the owner of the offline page might
still need to do something to "untrack" that page. We have an interface
for that - the MEM_GOING_OFFLINE notification. This sounds like a good
place for the driver to decide whether it is safe to let the page go or
not. If you can let the page go, then just drop the reference count. The
page is isolated already by that time. If you cannot let it go for
whatever reason, you can fail the offlining.
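
Roughly, such a notifier could look like this (illustrative only; the 
my_drv_* helpers are made up, and this is the alternative being 
discussed here, not what the RFC currently implements):

static int my_drv_memory_notifier(struct notifier_block *nb,
				  unsigned long action, void *arg)
{
	struct memory_notify *mn = arg;

	switch (action) {
	case MEM_GOING_OFFLINE:
		/* safe to let the pages go: drop the extra references */
		my_drv_drop_refs(mn->start_pfn, mn->nr_pages);
		break;
	case MEM_CANCEL_OFFLINE:
		/* offlining failed: take the references back */
		my_drv_take_refs(mn->start_pfn, mn->nr_pages);
		break;
	}
	return NOTIFY_OK;
}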

An advantage is that the driver has full control over offlining, and
you also do not really have to track a new online request to do the
right thing.

Or do I still see it too simplistically and the notifier is not a good
place to handle the reference count?
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-18  8:15           ` Michal Hocko
@ 2019-10-18  8:50             ` David Hildenbrand
  2019-10-18 11:20               ` Michal Hocko
  0 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-10-18  8:50 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On 18.10.19 10:15, Michal Hocko wrote:
> On Wed 16-10-19 16:14:52, David Hildenbrand wrote:
>> On 16.10.19 16:03, Michal Hocko wrote:
> [...]
>>> But why cannot you keep the reference count at 1 (do get_page when
>>> offlining the page)? In other words as long as the driver knows the page
>>> has been returned to the host then it has ref count at 1. Once the page
>>> is returned to the guest for whatever reason it can free it to the
>>> system by clearing the offline state and put_page.
>>
>> I think I explained how the reference count of 1 is problematic when wanting
>> to offline the memory. After all that's the problem I try to solve: Keep
>> PG_offline set until the memory is offline and make sure nobody will touch
>> the page.
> 
> Please bear with me but I still believe that elevated reference count
> has some merits. I do understand that you maintain your metadata to
> recognize that the memory handed over to the hypervisor will not
> magically appear after onlining. But I believe that you can achieve
> the same with an elevated reference count and have a more robust design
> as well.

Thanks for thinking about this. I still believe it is problematic. And I 
don't like releasing "pages that should not be used" to the buddy. It 
comes with some problems if offlining fails (see below).

> 
> Let's say that you still keep a reference to your offlined pages and
> mark them offlined. That should make sure that no consumer of the
> pfn_to_online_page will touch the page's content nor the state. Now

pfn_to_online_page() does not check against PG_offline (which is 
correct IMHO). As documented, PG_offline means "logically offline": the 
memmap is valid, however the page content should not be accessed. 
(PG_logically_offline would have been ugly ;) ).

But that shouldn't be an issue right now.

> admin might want to offline/hotremove the whole memory block via sysfs.

The admin can only trigger offlining via sysfs. Hotremove (removing the 
whole memory block) is under the control of the driver.

And with the current virtio-mem prototype, this works like a charm.

> An elevated reference count would prevent offlining to finish. And I
> believe this is a good thing because the owner of the offline page might
> still need to do something to "untrack" that page. We have an interface

And here is the thing: the owner of the page does not have to do 
anything to untrack the page. I mean, that's what a reference count of 
zero actually means - no direct reference.

I could emphasize how this is to be used in the documentation:

PageOffline() pages that have a reference count of 0 will be treated
like free pages when offlining pages, allowing the containing memory
block to get offlined. In case a driver wants to revive such a page, it 
has to synchronize against memory onlining/offlining (e.g., using memory 
notifiers) while incrementing the reference count. Also, a driver that 
relies on this feature is aware that re-onlining the memory will require 
re-setting the pages PageOffline() - e.g., via the online_page_callback_t.


> for that - MEM_GOING_OFFLINE notification. This sounds like a good place
> for the driver to decide whether it is safe to let the page go or not.

As I explained, this is too late and fragile. I'll post again what I 
posted before, with some further explanations.

__offline_pages() works like this:

1) start_isolate_page_range()
-> offline pages with a reference count of one will be detected as
unmovable -> offlining aborted. (see below on the memory isolation notifier)

2) memory_notify(MEM_GOING_OFFLINE, &arg);
-> Here, we could release all pages to the buddy, clearing PG_offline
-> PG_offline must not be cleared so dumping tools will not touch
    these pages. There is a time where pages are !PageBuddy() and
    !PageOffline().

3) scan_movable_pages() ...

4a) Memory offlining succeeded: memory_notify(MEM_OFFLINE, &arg);

Perfect, it worked. Sections are offline.

4b) Memory offlining failed

     undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
     memory_notify(MEM_CANCEL_OFFLINE, &arg);

-> Offlining failed for whatever reason. Once we reach the notifier, the
    pages are already back in use by the buddy.
-> Pages are in the buddy, they are no longer under control of the
    driver.


To summarize the important parts

1. PG_offline has to remain set until the section is offline.

2. When we reach MEM_GOING_OFFLINE, it is too late. The pages were 
already detected as unmovable.

3. When we release pages that are supposed to be PG_offline to the buddy 
and offlining fails, we are in big trouble.


> If you can let the page go then just drop the reference count. The page
> is isolated already by that time. If you cannot let it go for whatever
> reason you can fail the offlining.

We do have one hack in current MM code, which is the memory isolation 
notifier, only used by CMM on PPC. It allows "cheating" 
has_unmovable_pages() into skipping over unmovable pages. But quite 
frankly, I would rather get rid of that crap (something I am working on 
right now) than introduce new users. This stuff is racy as hell and, for 
CMM, if memory offlining fails, the ballooned pages are suddenly part of 
the buddy. Fragile.

> 
> An advantage is that the driver has the full control over offlining and
> also you do not really have to track a new online request to do the
> right thing.

The driver still has to synchronize against onlining/offlining requests 
and track the state of the memory blocks.

Simple example: virtio-mem wants to try to unplug a 4MB chunk. If the 
memory block is online, it has to use alloc_contig_range(). If the 
memory block is offline (e.g., user space has not onlined it yet), it is 
sufficient to update metadata. It has to be aware of the state of the 
memory blocks and synchronize against onlining/offlining.
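
As a sketch (the virtio_mem_chunk_* helpers are made up; 
virtio_mem_set_fake_offline() is the helper quoted later in this thread):

static int virtio_mem_unplug_chunk(unsigned long pfn, unsigned int nr_pages)
{
	int rc;

	if (virtio_mem_chunk_is_online(pfn)) {
		/* online block: migrate everything out of the chunk first */
		rc = alloc_contig_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE,
					GFP_KERNEL);
		if (rc)
			return rc;
		/* the chunk is now unused: mark it fake-offline */
		virtio_mem_set_fake_offline(pfn, nr_pages);
	}

	/* in both cases, remember the chunk as unplugged in the metadata */
	virtio_mem_chunk_mark_unplugged(pfn, nr_pages);
	return 0;
}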

> 
> Or do I still see it too simplistically and the notifier is not a good
> place to handle the reference count?

Yes :) I could hack something together via the memory isolation notifier 
(which is ugly) and MEM_GOING_OFFLINE. But if offlining fails, the pages 
are in the buddy and the driver has lost control (which is one issue with 
the PPC CMM implementation right now).

Again, I think routing pages via the buddy is the problematic part (if 
offlining fails, but also the period where the pages are neither 
PageBuddy() nor PageOffline()).


-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-18  8:50             ` David Hildenbrand
@ 2019-10-18 11:20               ` Michal Hocko
  2019-10-18 12:35                 ` David Hildenbrand
  0 siblings, 1 reply; 39+ messages in thread
From: Michal Hocko @ 2019-10-18 11:20 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On Fri 18-10-19 10:50:24, David Hildenbrand wrote:
> On 18.10.19 10:15, Michal Hocko wrote:
> > On Wed 16-10-19 16:14:52, David Hildenbrand wrote:
> > > On 16.10.19 16:03, Michal Hocko wrote:
> > [...]
> > > > But why cannot you keep the reference count at 1 (do get_page when
> > > > offlining the page)? In other words as long as the driver knows the page
> > > > has been returned to the host then it has ref count at 1. Once the page
> > > > is returned to the guest for whatever reason it can free it to the
> > > > system by clearing the offline state and put_page.
> > > 
> > > I think I explained how the reference count of 1 is problematic when wanting
> > > to offline the memory. After all that's the problem I try to solve: Keep
> > > PG_offline set until the memory is offline and make sure nobody will touch
> > > the page.
> > 
> > Please bear with me but I still believe that elevated reference count
> > has some merits. I do understand that you maintain your metadata to
> > recognize that the memory handed over to the hypervisor will not
> > magically appear after onlining. But I believe that you can achieve
> > the same with an elevated reference count and have a more robust design
> > as well.
> 
> Thanks for thinking about this. I still believe it is problematic. And I
> don't like releasing "pages that should not be used" to the buddy. It comes
> with some problems if offlining fails (see below).

Sure I will not be pushing if that turns out to be unfeasible. But I
would like to explore that path before giving up on it.
 
[...]
> > An elevated reference count would prevent offlining to finish. And I
> > believe this is a good thing because the owner of the offline page might
> > still need to do something to "untrack" that page. We have an interface
> 
> And here is the thing: The owner of the page does not have to do anything to
> untrack the page. I mean that's what a reference count of zero actually
> means - no direct reference.

Will this be the case for other potential users of the similar/same
mechanism? I thought that this would become a more widespread mechanism.
 
[...]

> > for that - MEM_GOING_OFFLINE notification. This sounds like a good place
> > for the driver to decide whether it is safe to let the page go or not.
> 
> As I explained, this is too late and fragile. I post again what I posted
> before with some further explanations
> 
> __offline_pages() works like this:
> 
> 1) start_isolate_page_range()
> -> offline pages with a reference count of one will be detected as
> unmovable -> offlining aborted. (see below on the memory isolation notifier)

I am assuming that has_unmovable_pages would skip over those pages. Your
patch already does that, no?

> 2) memory_notify(MEM_GOING_OFFLINE, &arg);
> -> Here, we could release all pages to the buddy, clearing PG_offline
> -> PF_offline must not be cleared so dumping tools will not touch
>    these pages. There is a time where pages are !PageBuddy() and
>    !PageOffline().

Well, this is fully under control of the driver, no? Reference count
shouldn't play any role here AFAIU.
 
> 3) scan_movable_pages() ...
> 
> 4a) Memory offlining succeeded: memory_notify(MEM_OFFLINE, &arg);
> 
> Perfect, it worked. Sections are offline.
> 
> 4b) Memory offlining failed
> 
>     undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
>     memory_notify(MEM_CANCEL_OFFLINE, &arg);

Doesn't this return pages back to buddy only when they were marked Buddy
already?

MEM_CANCEL_OFFLINE could gain the reference back to balance the
MEM_GOING_OFFLINE step.

One thing that I would like to clarify, because my previous email could
be misleading a bit: you do not really have to drop the reference by
releasing the page to the allocator (via put_page). You can also set
the reference count to 0 explicitly. The driver is in control of the
page, right? And that is the whole point I wanted to make. There is
explicit control via the reference count, which is the standard way to
control the struct page life cycle.

Anyway, hooking into __put_page (which tends to be a hot path) with
something that is barely used on most systems doesn't sound nice to me.
This is the whole point that made me think about the reference count
approach in the first place.

I do realize that the reference count doesn't solve the problem with
onlining. Drivers still have to special-case the onlining, and that is
something I really dislike but do not have a good answer for.

> > If you can let the page go then just drop the reference count. The page
> > is isolated already by that time. If you cannot let it go for whatever
> > reason you can fail the offlining.
> 
> We do have one hack in current MM code, which is the memory isolation
> notifier only used by CMM on PPC. It allows to "cheat" has_unmovable_pages()
> to skip over unmovable pages. But quite frankly, I rather want to get rid of
> that crap (something I am working on right now) than introduce new users.
> This stuff is racy as hell and for CMM, if memory offlining fails, the
> ballooned pages are suddenly part of the buddy. Fragile.

Could you be more specific please?

> > An advantage is that the driver has the full control over offlining and
> > also you do not really have to track a new online request to do the
> > right thing.
> 
> The driver still has to synchronize against onlining/offlining requests and
> track the state of the memory blocks.
> 
> Simple example: virtio-mem wants to try to unplug a 4MB chunk. If the memory
> block is online, it has to use alloc_contig_range(). If the memory block is
> offline (e.g., user space has not onlined it yet), it is sufficient to
> update metadata. It has to be aware of the state of the memory blocks and
> synchronize against onlining/offlining.

Hmm, so the driver might offline a part of the memory which never got
onlined? Is there really a sensible use case for that?
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-18 11:20               ` Michal Hocko
@ 2019-10-18 12:35                 ` David Hildenbrand
  2019-10-22 12:23                   ` Michal Hocko
  0 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-10-18 12:35 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On 18.10.19 13:20, Michal Hocko wrote:
> On Fri 18-10-19 10:50:24, David Hildenbrand wrote:
>> On 18.10.19 10:15, Michal Hocko wrote:
>>> On Wed 16-10-19 16:14:52, David Hildenbrand wrote:
>>>> On 16.10.19 16:03, Michal Hocko wrote:
>>> [...]
>>>>> But why cannot you keep the reference count at 1 (do get_page when
>>>>> offlining the page)? In other words as long as the driver knows the page
>>>>> has been returned to the host then it has ref count at 1. Once the page
>>>>> is returned to the guest for whatever reason it can free it to the
>>>>> system by clearing the offline state and put_page.
>>>>
>>>> I think I explained how the reference count of 1 is problematic when wanting
>>>> to offline the memory. After all that's the problem I try to solve: Keep
>>>> PG_offline set until the memory is offline and make sure nobody will touch
>>>> the page.
>>>
>>> Please bear with me but I still believe that elevated reference count
>>> has some merits. I do understand that you maintain your metadata to
>>> recognize that the memory handed over to the hypervisor will not
>>> magically appear after onlining. But I believe that you can achieve
>>> the same with an elevated reference count and have a more robust design
>>> as well.
>>
>> Thanks for thinking about this. I still believe it is problematic. And I
>> don't like releasing "pages that should not be used" to the buddy. It comes
>> with some problems if offlining fails (see below).
> 
> Sure I will not be pushing if that turns out to be unfeasible. But I
> would like to explore that path before giving up on it.

Sure, I appreciate that. I wrapped my head around a lot of these things 
for a long time and came to the conclusion that it is complicated :)

[read below, I think it makes sense if you can summarize the idea you 
have that avoids releasing pages to the buddy - similar to this approach]

>   
> [...]
>>> An elevated reference count would prevent offlining to finish. And I
>>> believe this is a good thing because the owner of the offline page might
>>> still need to do something to "untrack" that page. We have an interface
>>
>> And here is the thing: The owner of the page does not have to do anything to
>> untrack the page. I mean that's what a reference count of zero actually
>> means - no direct reference.
> 
> Will this be the case for other potential users of the similar/same
> mechanism? I thought that this would become a more spread mechanism.

Anybody who wants to use that mechanism has to respect these rules. I am 
going to document that. It does not make sense for balloon drivers that 
can simply implement page compaction - e.g., for PPC CMM it makes much 
more sense to switch to balloon compaction instead (because that's what 
it really is, a balloon driver).

I could imagine that HyperV might want to use that in the future. And it 
should be possible to make them play by the rules. They already use 
memory notifiers and the online_page_callback_t.

> 
>>> for that - MEM_GOING_OFFLINE notification. This sounds like a good place
>>> for the driver to decide whether it is safe to let the page go or not.
>>
>> As I explained, this is too late and fragile. I post again what I posted
>> before with some further explanations
>>
>> __offline_pages() works like this:
>>
>> 1) start_isolate_page_range()
>> -> offline pages with a reference count of one will be detected as
>> unmovable -> offlining aborted. (see below on the memory isolation notifier)
> 
> I am assuming that has_unmovable_pages would skip over those pages. Your
> patch already does that, no?

Yes, this works IFF the reference count is 0 (IOW, this patch). Not with 
a reference count of 1 (unless the pages are movable, like with balloon 
compaction).

Please note that we have other users that use PG_offline + refcount >= 1 
(HyperV balloon, XEN). We should not affect these users (IOW, 
has_unmovable_pages() has to stop right there if we see one of these pages).

> 
>> 2) memory_notify(MEM_GOING_OFFLINE, &arg);
>> -> Here, we could release all pages to the buddy, clearing PG_offline
>> -> PF_offline must not be cleared so dumping tools will not touch
>>     these pages. There is a time where pages are !PageBuddy() and
>>     !PageOffline().
> 
> Well, this is fully under control of the driver, no? Reference count
> shouldn't play any role here AFAIU.

Yes, this is more a PG_offline issue. The reference count is an issue of 
reaching this call :) If we want to go via the buddy:

1. Clear PG_offline
2. Free the page (PG_buddy gets set)

Between 1 and 2, a dumping tool could not exclude these pages and would 
therefore try to read from them. That is an issue IFF we want to 
return the pages to the buddy instead of doing what I propose here.

>   
>> 3) scan_movable_pages() ...

Please note that when we don't put the pages back to the buddy and don't 
implement something like what I have in this patch, we'll loop/fail here, 
especially if we have pages with PG_offline + refcount >= 1.

>>
>> 4a) Memory offlining succeeded: memory_notify(MEM_OFFLINE, &arg);
>>
>> Perfect, it worked. Sections are offline.
>>
>> 4b) Memory offlining failed
>>
>>      undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
>>      memory_notify(MEM_CANCEL_OFFLINE, &arg);
> 
> Doesn't this return pages back to buddy only when they were marked Buddy
> already?

Yes, but I think you asked for evaluating what it would take to make the 
reference count stay at 1 and return the pages to the buddy. I tried to 
explain the pitfalls of that approach here :)

Can you clarify what exactly you have in mind right now (how to make 
isolation work, how to make offlining work without putting pages back to 
the buddy, etc.)? I have the feeling I am missing the one piece that 
avoids putting the pages back to the buddy.

> 
> MEM_CANCEL_OFFLINE could gain the reference back to balance the
> MEM_GOING_OFFLINE step.

The pages are already unisolated and could be used by the buddy. But 
again, I think you have an idea that tries to avoid putting pages to the 
buddy.

> 
> One thing that I would like to clarify because my previous email could
> be misleading a bit. You do not really have to drop the reference by
> releasing the page to the allocator (via put_page). You can also set
> the reference count to 0 explicitly. The driver is in control of the
> page, right?  And that is the whole point I wanted to make. There is an

This is what virtio-mem does whenever it allocates a range using 
alloc_contig_range(). It uses page_ref_dec() instead of put_page():

/*
  * Set a range of pages PG_offline and drop the reference. The dropped
  * reference (now 0) and the flag allow the isolation code to isolate this
  * range and the offlining code to offline it.
  */
static void virtio_mem_set_fake_offline(unsigned long pfn,
                                         unsigned int nr_pages)
{
        for (; nr_pages--; pfn++) {
                __SetPageOffline(pfn_to_page(pfn));
                page_ref_dec(pfn_to_page(pfn));
        }
}

The put_page() change is really only needed to avoid the mentioned race 
if somebody succeeds with a get_page_unless_zero(page) followed by a 
put_page() (below).


> explicit control via the reference count which is the standard way to
> control the struct page life cycle.
> 
> Anyway hooking into __put_page (which tends to be a hot path with
> something that is barely used on most systems) doesn't sound nice to me.
> This is the whole point which made me think about the whole reference
> count approach in the first place.

Again, the race I think is possible:

somebody: get_page_unless_zero(page)
virtio_mem: page_ref_dec(pfn_to_page(pfn))
somebody: put_page() -> straight to the buddy

I am not yet sure if this will really be a performance issue in 
put_page(). The branch predictor should do an excellent job here 
(especially on systems without any users of this mechanism).
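
To illustrate what the safety net boils down to (a simplified sketch of 
the idea only, not the actual hunk from this patch, and the helper name 
is made up):

/*
 * Simplified sketch: never let a PageOffline() page fall through to the
 * buddy, even if a speculative get_page_unless_zero()/put_page() pair
 * ends up dropping the last reference.
 */
static inline void put_page_safely(struct page *page)
{
        if (put_page_testzero(page) && !PageOffline(page))
                __put_page(page);       /* only !PageOffline() pages get freed */
        /* PageOffline() pages stay untouched until memory offlining */
}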

> 
> I do realize that the reference count doesn't solve the problem with
> onlining. Drivers still have to special case the onlining and that is
> something I really dislike but I do not have a good answer for.

Yeah, especially due to HWPoison, this is to be solved differently. The 
PG_offline part might not be nice, but certainly easier to handle 
(online_page_callback_t).
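
For reference, the onlining side then roughly looks like this (a 
condensed sketch only; virtio_mem_pfn_unplugged() is a made-up stand-in 
for the driver's own bookkeeping):

/* condensed sketch of the online_page_callback_t hook */
static void virtio_mem_online_page_cb(struct page *page, unsigned int order)
{
        unsigned long pfn = page_to_pfn(page);

        if (virtio_mem_pfn_unplugged(pfn))
                /* keep it fake-offline, don't give it to the buddy */
                virtio_mem_set_fake_offline(pfn, 1 << order);
        else
                generic_online_page(page, order);
}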

> 
>>> If you can let the page go then just drop the reference count. The page
>>> is isolated already by that time. If you cannot let it go for whatever
>>> reason you can fail the offlining.
>>
>> We do have one hack in current MM code, which is the memory isolation
>> notifier only used by CMM on PPC. It allows to "cheat" has_unmovable_pages()
>> to skip over unmovable pages. But quite frankly, I rather want to get rid of
>> that crap (something I am working on right now) than introduce new users.
>> This stuff is racy as hell and for CMM, if memory offlining fails, the
>> ballooned pages are suddenly part of the buddy. Fragile.
> 
> Could you be more specific please?

Let's take a look at how arch/powerpc/platforms/pseries/cmm.c handles it:

cmm_memory_isolate_cb() -> cmm_count_pages(arg):
- Memory Isolation notifier callback
- Count how many pages in the range to be isolated are in the balloon
- This makes has_unmovable_pages() succeed. Pages can be isolated.

cmm_memory_cb -> cmm_mem_going_offline(arg):
- Memory notifier (online/offline)
- Release all pages in the range to the buddy

If offlining fails, the pages are now in the buddy, no longer in the 
balloon. MEM_CANCEL_OFFLINE is too late, because the range is already 
unisolated again and the pages might be in use.

For CMM it might not be that bad, because it can actually "reloan" any 
pages. In contrast, virtio-mem cannot simply go ahead and reuse random 
memory it has unplugged. Any access to these pages would be evil. Giving 
them back to the buddy is dangerous.

> 
>>> An advantage is that the driver has the full control over offlining and
>>> also you do not really have to track a new online request to do the
>>> right thing.
>>
>> The driver still has to synchronize against onlining/offlining requests and
>> track the state of the memory blocks.
>>
>> Simple example: virtio-mem wants to try to unplug a 4MB chunk. If the memory
>> block is online, it has to use alloc_contig_range(). If the memory block is
>> offline (e.g., user space has not onlined it yet), it is sufficient to
>> update metadata. It has to be aware of the state of the memory blocks and
>> synchronize against onlining/offlining.
> 
> Hmm, so the driver might offline a part of the memory which never got
> onlined? Is there really a sensible usecase for that?

Yes, there is in general a demand for unplugging offline memory. E.g., 
some configurable mode in the future could be that virtio-mem only 
unplugs offline memory, because this way user space can control which 
memory will actually get unplugged (compared to virtio-mem going ahead 
and trying to find online chunks to unplug).

Also, virtio-mem will be very careful with ZONE_MOVABLE memory. 
ZONE_MOVABLE memory first has to be offlined by user space before 
virtio-mem will go ahead and unplug parts (of the now offline memory). 
The reason is that I don't want unmovable pages (IOW, allocated via 
alloc_contig_range()) to end up in ZONE_MOVABLE - similar to gigantic 
pages. But that's yet another discussion :)

Thanks Michal!

-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-18 12:35                 ` David Hildenbrand
@ 2019-10-22 12:23                   ` Michal Hocko
  2019-10-22 14:02                     ` David Hildenbrand
  0 siblings, 1 reply; 39+ messages in thread
From: Michal Hocko @ 2019-10-22 12:23 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On Fri 18-10-19 14:35:06, David Hildenbrand wrote:
> On 18.10.19 13:20, Michal Hocko wrote:
> > On Fri 18-10-19 10:50:24, David Hildenbrand wrote:
> > > On 18.10.19 10:15, Michal Hocko wrote:
[...]
> > > > for that - MEM_GOING_OFFLINE notification. This sounds like a good place
> > > > for the driver to decide whether it is safe to let the page go or not.
> > > 
> > > As I explained, this is too late and fragile. I post again what I posted
> > > before with some further explanations
> > > 
> > > __offline_pages() works like this:
> > > 
> > > 1) start_isolate_page_range()
> > > -> offline pages with a reference count of one will be detected as
> > > unmovable -> offlining aborted. (see below on the memory isolation notifier)
> > 
> > I am assuming that has_unmovable_pages would skip over those pages. Your
> > patch already does that, no?
> 
> Yes, this works IFF the reference count is 0 (IOW, this patch). Not with a
> reference count of 1 (unless the pages are movable, like with balloon
> compaction).

I am pretty sure that has_unmovable_pages can special-case these pages
regardless of the reference count for memory hotplug. We already do
that for HWPoison pages.
 
> Please note that we have other users that use PG_offline + refcount >= 1
> (HyperV balloon, XEN). We should not affect these users (IOW,
> has_unmovable_pages() has to stop right there if we see one of these pages).

OK, this is exactly what I was worried about. I can see why you might
want to go an easier way and rule those users out, but wouldn't it be
actually more reasonable to explicitly request PageOffline users to
implement MEM_GOING_OFFLINE and prepare their offlined pages for the
offlining operation, or fail right there if that is not possible?
If you fail right there during the isolation phase then there is no way
to allow the offlining to proceed from that context.
 
> > > 2) memory_notify(MEM_GOING_OFFLINE, &arg);
> > > -> Here, we could release all pages to the buddy, clearing PG_offline
> > > -> PG_offline must not be cleared so dumping tools will not touch
> > >     these pages. There is a time where pages are !PageBuddy() and
> > >     !PageOffline().
> > 
> > Well, this is fully under control of the driver, no? Reference count
> > shouldn't play any role here AFAIU.
> 
> Yes, this is more a PG_offline issue. The reference count is an issue of
> reaching this call :) If we want to go via the buddy:
> 
> 1. Clear PG_offline
> 2. Free page (gets set PG_buddy)
> 
> Between 1 and 2, a dumping tool could not exclude these pages and therefore
> try to read from these pages. That is an issue IFF we want to return the
> pages back to the buddy instead of doing what I propose here.

If the driver is going to free the page to the allocator then it would have
to claim the page back and so it is usable again. If it cannot free it
then it would simply set the reference count to 0. It can even keep the
PG_offline if necessary although I have to admit I am not really sure it
is necessary.

> > > 3) scan_movable_pages() ...
> 
> Please note that when we don't put the pages back to the buddy and don't
> implement something like I have in this patch, we'll loop/fail here.
> Especially if we have pages with PG_offline + refcount >= 1 .

You should have your reference count 0 at this stage as it is after
MEM_GOING_OFFLINE.
 
> > MEM_CANCEL_OFFLINE could gain the reference back to balance the
> > MEM_GOING_OFFLINE step.
> 
> The pages are already unisolated and could be used by the buddy. But again,
> I think you have an idea that tries to avoid putting pages to the buddy.

Yeah, set_page_count(page, 0) if you do not want to release that page
from the notifier context to reflect that the page is ok to be offlined
with the rest.
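
Roughly, on the driver side (a sketch only - names are made up and
whether you use set_page_count() or a page_ref_dec()/page_ref_inc() pair
is up to the driver):

/* my_driver_owns_pfn() is a hypothetical lookup into the driver's own
 * tracking of fake-offline pages */
static bool my_driver_owns_pfn(unsigned long pfn);

static int my_memory_notifier(struct notifier_block *nb,
                              unsigned long action, void *arg)
{
        const struct memory_notify *mn = arg;
        unsigned long pfn;

        switch (action) {
        case MEM_GOING_OFFLINE:
                for (pfn = mn->start_pfn; pfn < mn->start_pfn + mn->nr_pages; pfn++)
                        if (my_driver_owns_pfn(pfn))
                                /* ok to offline this page with the rest */
                                page_ref_dec(pfn_to_page(pfn));
                break;
        case MEM_CANCEL_OFFLINE:
                for (pfn = mn->start_pfn; pfn < mn->start_pfn + mn->nr_pages; pfn++)
                        if (my_driver_owns_pfn(pfn))
                                /* offlining failed, take the reference back */
                                page_ref_inc(pfn_to_page(pfn));
                break;
        }
        return NOTIFY_OK;
}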
 
[...]

> > explicit control via the reference count which is the standard way to
> > control the struct page life cycle.
> > 
> > Anyway hooking into __put_page (which tends to be a hot path with
> > something that is barely used on most systems) doesn't sound nice to me.
> > This is the whole point which made me think about the whole reference
> > count approach in the first place.
> 
> Again, the race I think that is possible
> 
> somebody: get_page_unless_zero(page)
> virtio_mem: page_ref_dec(pfn_to_page(pfn)
> somebody: put_page() -> straight to the buddy

Who is that somebody? I thought that only the owner/driver has control
over the page. Also the above is not possible as long as the
owner/driver keeps a reference to the PageOffline page throughout the
time it is marked that way.
 
[...]

> > > > If you can let the page go then just drop the reference count. The page
> > > > is isolated already by that time. If you cannot let it go for whatever
> > > > reason you can fail the offlining.
> > > 
> > > We do have one hack in current MM code, which is the memory isolation
> > > notifier only used by CMM on PPC. It allows to "cheat" has_unmovable_pages()
> > > to skip over unmovable pages. But quite frankly, I rather want to get rid of
> > > that crap (something I am working on right now) than introduce new users.
> > > This stuff is racy as hell and for CMM, if memory offlining fails, the
> > > ballooned pages are suddenly part of the buddy. Fragile.
> > 
> > Could you be more specific please?
> 
> Let's take a look at how arch/powerpc/platforms/pseries/cmm.c handles it:
> 
> cmm_memory_isolate_cb() -> cmm_count_pages(arg):
> - Memory Isolation notifier callback
> - Count how many pages in the range to be isolated are in the ballooon
> - This makes has_unmovable_pages() succeed. Pages can be isolated.
> 
> cmm_memory_cb -> cmm_mem_going_offline(arg):
> - Memory notifier (online/offline)
> - Release all pages in the range to the buddy
> 
> If offlining fails, the pages are now in the buddy, no longer in the
> balloon. MEM_CANCEL_ONLINE is too late, because the range is already
> unisolated again and the pages might be in use.
> 
> For CMM it might not be that bad, because it can actually "reloan" any
> pages. In contrast, virtio-mem cannot simply go ahead and reuse random
> memory in unplugged. Any access to these pages would be evil. Giving them
> back to the buddy is dangerous.

Thanks, I was not aware of that code. But from what I understood this is
an outright bug in this code because cmm_mem_going_offline releases
pages to the buddy allocator which is something that is not recoverable
on a later failure.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-22 12:23                   ` Michal Hocko
@ 2019-10-22 14:02                     ` David Hildenbrand
  2019-10-23  9:43                       ` Michal Hocko
  0 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-10-22 14:02 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

>> Please note that we have other users that use PG_offline + refcount >= 1
>> (HyperV balloon, XEN). We should not affect these users (IOW,
>> has_unmovable_pages() has to stop right there if we see one of these pages).
> 
> OK, this is exactly what I was worried about. I can see why you might
> want to go an easier way and rule those users out but wouldn't be it
> actually more reasonable to explicitly request PageOffline users to
> implement MEM_GOING_OFFLINE and prepare their offlined pages for the
> offlining operation or fail right there if that is not possible.
> If you fail right there during the isolation phase then there is no way
> to allow the offlining to proceed from that context.

I am not sure I agree. But let's discuss the details. See below.

>   
>>>> 2) memory_notify(MEM_GOING_OFFLINE, &arg);
>>>> -> Here, we could release all pages to the buddy, clearing PG_offline
>>>> -> PG_offline must not be cleared so dumping tools will not touch
>>>>      these pages. There is a time where pages are !PageBuddy() and
>>>>      !PageOffline().
>>>
>>> Well, this is fully under control of the driver, no? Reference count
>>> shouldn't play any role here AFAIU.
>>
>> Yes, this is more a PG_offline issue. The reference count is an issue of
>> reaching this call :) If we want to go via the buddy:
>>
>> 1. Clear PG_offline
>> 2. Free page (gets set PG_buddy)
>>
>> Between 1 and 2, a dumping tool could not exclude these pages and therefore
>> try to read from these pages. That is an issue IFF we want to return the
>> pages back to the buddy instead of doing what I propose here.
> 
> If the driver is going to free page to the allocator then it would have
> to claim the page back and so it is usable again. If it cannot free it
> then it would simply set the reference count to 0. It can even keep the
> PG_offline if necessary although I have to admit I am not really sure it
> is necessary.

Yes, it is necessary to keep PG_offline set to avoid anybody (especially
dumping tools) touching the page until the section is offline.
But that's another discussion. The important part is not to go via the buddy.

> 
>>>> 3) scan_movable_pages() ...
>>
>> Please note that when we don't put the pages back to the buddy and don't
>> implement something like I have in this patch, we'll loop/fail here.
>> Especially if we have pages with PG_offline + refcount >= 1 .
> 
> You should have your reference count 0 at this stage as it is after
> MEM_GOING_OFFLINE.
>   
>>> MEM_CANCEL_OFFLINE could gain the reference back to balance the
>>> MEM_GOING_OFFLINE step.
>>
>> The pages are already unisolated and could be used by the buddy. But again,
>> I think you have an idea that tries to avoid putting pages to the buddy.
> 
> Yeah, set_page_count(page, 0) if you do not want to release that page
> from the notifier context to reflect that the page is ok to be offlined
> with the rest.
>   

I see neither how you deal with __test_page_isolated_in_pageblock() nor with
__offline_isolated_pages(). Sorry, but what I read is incomplete and you
probably have a full proposal in your head. Please read below how I think
you want to solve it.

> 
>>> explicit control via the reference count which is the standard way to
>>> control the struct page life cycle.
>>>
>>> Anyway hooking into __put_page (which tends to be a hot path with
>>> something that is barely used on most systems) doesn't sound nice to me.
>>> This is the whole point which made me think about the whole reference
>>> count approach in the first place.
>>
>> Again, the race I think that is possible
>>
>> somebody: get_page_unless_zero(page)
>> virtio_mem: page_ref_dec(pfn_to_page(pfn)
>> somebody: put_page() -> straight to the buddy
> 
> Who is that somebody? I thought that it is only the owner/driver to have
> a control over the page. Also the above is not possible as long as the
> owner/driver keeps a reference to the PageOffline page throughout the
> time it is marked that way.
>   

I was reading

include/linux/mm_types.h:

"If you want to use the refcount field, it must be used in such a way
 that other CPUs temporarily incrementing and then decrementing the
 refcount does not cause problems"

And that made me think "anybody can go ahead and try get_page_unless_zero()".

If I am missing something here and this can indeed not happen (e.g.,
because PageOffline() pages are never mapped to user space), then I'll
happily remove this code.

> 
>>>>> If you can let the page go then just drop the reference count. The page
>>>>> is isolated already by that time. If you cannot let it go for whatever
>>>>> reason you can fail the offlining.
>>>>
>>>> We do have one hack in current MM code, which is the memory isolation
>>>> notifier only used by CMM on PPC. It allows to "cheat" has_unmovable_pages()
>>>> to skip over unmovable pages. But quite frankly, I rather want to get rid of
>>>> that crap (something I am working on right now) than introduce new users.
>>>> This stuff is racy as hell and for CMM, if memory offlining fails, the
>>>> ballooned pages are suddenly part of the buddy. Fragile.
>>>
>>> Could you be more specific please?
>>
>> Let's take a look at how arch/powerpc/platforms/pseries/cmm.c handles it:
>>
>> cmm_memory_isolate_cb() -> cmm_count_pages(arg):
>> - Memory Isolation notifier callback
>> - Count how many pages in the range to be isolated are in the ballooon
>> - This makes has_unmovable_pages() succeed. Pages can be isolated.
>>
>> cmm_memory_cb -> cmm_mem_going_offline(arg):
>> - Memory notifier (online/offline)
>> - Release all pages in the range to the buddy
>>
>> If offlining fails, the pages are now in the buddy, no longer in the
>> balloon. MEM_CANCEL_ONLINE is too late, because the range is already
>> unisolated again and the pages might be in use.
>>
>> For CMM it might not be that bad, because it can actually "reloan" any
>> pages. In contrast, virtio-mem cannot simply go ahead and reuse random
>> memory in unplugged. Any access to these pages would be evil. Giving them
>> back to the buddy is dangerous.
> 
> Thanks, I was not aware of that code. But from what I understood this is
> an outright bug in this code because cmm_mem_going_offline releases
> pages to the buddy allocator which is something that is not recoverable
> on a later failure.
> 

Yes, and that should be gone if we switch to balloon compaction.



Let's recap what I suggest:

"PageOffline() pages that have a reference count of 0 will be treated
 like free pages when offlining pages, allowing the containing memory
 block to get offlined. In case a driver wants to revive such a page, it
 has to synchronize against memory onlining/offlining (e.g., using memory
 notifiers) while incrementing the reference count. Also, a driver that
 relies on this feature is aware that re-onlining the memory will require
 to re-set the pages PageOffline() - e.g., via the online_page_callback_t."

a) has_unmovable_pages() already skips over pages with a refcount of zero.
   The code I add to not skip over these pages when !MEMORY_OFFLINE is a pure
   optimization to fail early when trying to allocate from that range.

b) __test_page_isolated_in_pageblock() is modified to skip over
   PageOffline() pages with a refcount of zero

c) __offline_isolated_pages() is modified to skip over
   PageOffline() pages with a refcount of zero

d) __put_page() is modified to not return pages to the buddy in any
   case as a safety net. We might be able to get rid of that.



What I think you suggest:

a) has_unmovable_pages() skips over all PageOffline() pages.
   This results in a lot of false negatives when trying to offline. Might be ok.

b) The driver decrements the reference count of the PageOffline pages
   in MEM_GOING_OFFLINE.

c) The driver increments the reference count of the PageOffline pages
   in MEM_CANCEL_OFFLINE. One issue might be that the pages are no longer
   isolated once we get that call. Might be ok.

d) How to make __test_page_isolated_in_pageblock() succeed?
   Like I propose in this patch (PageOffline() + refcount == 0)?

e) How to make __offline_isolated_pages() succeed?
   Like I propose in this patch (PageOffline() + refcount == 0)?

In summary, is what you suggest simply delaying setting the reference count to 0
in MEM_GOING_OFFLINE instead of right away when the driver unplugs the pages?
What's the big benefit you see and I fail to see?

-- 

Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-22 14:02                     ` David Hildenbrand
@ 2019-10-23  9:43                       ` Michal Hocko
  2019-10-23 10:03                         ` David Hildenbrand
  0 siblings, 1 reply; 39+ messages in thread
From: Michal Hocko @ 2019-10-23  9:43 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On Tue 22-10-19 16:02:09, David Hildenbrand wrote:
[...]
> >>> MEM_CANCEL_OFFLINE could gain the reference back to balance the
> >>> MEM_GOING_OFFLINE step.
> >>
> >> The pages are already unisolated and could be used by the buddy. But again,
> >> I think you have an idea that tries to avoid putting pages to the buddy.
> > 
> > Yeah, set_page_count(page, 0) if you do not want to release that page
> > from the notifier context to reflect that the page is ok to be offlined
> > with the rest.
> >   
> 
> I neither see how you deal with __test_page_isolated_in_pageblock() nor with
> __offline_isolated_pages(). Sorry, but what I read is incomplete and you
> probably have a full proposal in your head. Please read below how I think
> you want to solve it.

Yeah, sorry that I am throwing incomplete ideas at you. I am just trying
to really nail down how to deal with reference counting here because it
is an important aspect.
 
> >>> explicit control via the reference count which is the standard way to
> >>> control the struct page life cycle.
> >>>
> >>> Anyway hooking into __put_page (which tends to be a hot path with
> >>> something that is barely used on most systems) doesn't sound nice to me.
> >>> This is the whole point which made me think about the whole reference
> >>> count approach in the first place.
> >>
> >> Again, the race I think that is possible
> >>
> >> somebody: get_page_unless_zero(page)
> >> virtio_mem: page_ref_dec(pfn_to_page(pfn)
> >> somebody: put_page() -> straight to the buddy
> > 
> > Who is that somebody? I thought that it is only the owner/driver to have
> > a control over the page. Also the above is not possible as long as the
> > owner/driver keeps a reference to the PageOffline page throughout the
> > time it is marked that way.
> >   
> 
> I was reading
> 
> include/linux/mm_types.h:
> 
> "If you want to use the refcount field, it must be used in such a way
>  that other CPUs temporarily incrementing and then decrementing the
>  refcount does not cause problems"
> 
> And that made me think "anybody can go ahead and try get_page_unless_zero()".
> 
> If I am missing something here and this can indeed not happen (e.g.,
> because PageOffline() pages are never mapped to user space), then I'll
> happily remove this code.

The point is that if the owner of the page is holding the only reference
to the page then it is clear that nothing like that's happened.
 
[...]

> Let's recap what I suggest:
> 
> "PageOffline() pages that have a reference count of 0 will be treated
>  like free pages when offlining pages, allowing the containing memory
>  block to get offlined. In case a driver wants to revive such a page, it
>  has to synchronize against memory onlining/offlining (e.g., using memory
>  notifiers) while incrementing the reference count. Also, a driver that
>  relies in this feature is aware that re-onlining the memory will require
>  to re-set the pages PageOffline() - e.g., via the online_page_callback_t."

OK

[...]
> d) __put_page() is modified to not return pages to the buddy in any
>    case as a safety net. We might be able to get rid of that.

I do not like exactly this part
 
> What I think you suggest:
> 
> a) has_unmovable_pages() skips over all PageOffline() pages.
>    This results in a lot of false negatives when trying to offline. Might be ok.
> 
> b) The driver decrements the reference count of the PageOffline pages
>    in MEM_GOING_OFFLINE.

Well, the driver should make the page unreferenced or fail. What is done
really depends on the specific driver.

> c) The driver increments the reference count of the PageOffline pages
>    in MEM_CANCEL_OFFLINE. One issue might be that the pages are no longer
>    isolated once we get that call. Might be ok.

Only previous PageBuddy pages are returned to the allocator IIRC. Mostly
because of MovablePage()

> d) How to make __test_page_isolated_in_pageblock() succeed?
>    Like I propose in this patch (PageOffline() + refcount == 0)?

Yep

> e) How to make __offline_isolated_pages() succeed?
>    Like I propose in this patch (PageOffline() + refcount == 0)?

Simply skip over PageOffline pages. Reference count should never be != 0
at this stage.
 
> In summary, is what you suggest simply delaying setting the reference count to 0
> in MEM_GOING_OFFLINE instead of right away when the driver unpluggs the pages?

Yes

> What's the big benefit you see and I fail to see?

Apart from no hooks into __put_page, it also gives explicit control over
the page via reference counting. Do you see any downsides?
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-23  9:43                       ` Michal Hocko
@ 2019-10-23 10:03                         ` David Hildenbrand
  2019-10-24  8:42                           ` Michal Hocko
  0 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-10-23 10:03 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On 23.10.19 11:43, Michal Hocko wrote:
> On Tue 22-10-19 16:02:09, David Hildenbrand wrote:
> [...]
>>>>> MEM_CANCEL_OFFLINE could gain the reference back to balance the
>>>>> MEM_GOING_OFFLINE step.
>>>>
>>>> The pages are already unisolated and could be used by the buddy. But again,
>>>> I think you have an idea that tries to avoid putting pages to the buddy.
>>>
>>> Yeah, set_page_count(page, 0) if you do not want to release that page
>>> from the notifier context to reflect that the page is ok to be offlined
>>> with the rest.
>>>    
>>
>> I neither see how you deal with __test_page_isolated_in_pageblock() nor with
>> __offline_isolated_pages(). Sorry, but what I read is incomplete and you
>> probably have a full proposal in your head. Please read below how I think
>> you want to solve it.
> 
> Yeah, sorry that I am throwing incomplete ideas at you. I am just trying
> to really nail down how to deal with reference counting here because it
> is an important aspect.

I think we collected all the missing pieces now :) Thanks!

[...]

>>
>> I was reading
>>
>> include/linux/mm_types.h:
>>
>> "If you want to use the refcount field, it must be used in such a way
>>   that other CPUs temporarily incrementing and then decrementing the
>>   refcount does not cause problems"
>>
>> And that made me think "anybody can go ahead and try get_page_unless_zero()".
>>
>> If I am missing something here and this can indeed not happen (e.g.,
>> because PageOffline() pages are never mapped to user space), then I'll
>> happily remove this code.
> 
> The point is that if the owner of the page is holding the only reference
> to the page then it is clear that nothing like that's happened.

Right, and I think the race I described won't happen in practice. Nobody 
should be trying to do a get_page_unless_zero() on random pages that are 
not even mapped to user space. I was (as so often) very careful :)

>> Let's recap what I suggest:
>>
>> "PageOffline() pages that have a reference count of 0 will be treated
>>   like free pages when offlining pages, allowing the containing memory
>>   block to get offlined. In case a driver wants to revive such a page, it
>>   has to synchronize against memory onlining/offlining (e.g., using memory
>>   notifiers) while incrementing the reference count. Also, a driver that
>>   relies in this feature is aware that re-onlining the memory will require
>>   to re-set the pages PageOffline() - e.g., via the online_page_callback_t."
> 
> OK
> 
> [...]
>> d) __put_page() is modified to not return pages to the buddy in any
>>     case as a safety net. We might be able to get rid of that.
> 
> I do not like exactly this part

Yeah, and I think I can drop it from this patch.

>   
>> What I think you suggest:
>>
>> a) has_unmovable_pages() skips over all PageOffline() pages.
>>     This results in a lot of false negatives when trying to offline. Might be ok.
>>
>> b) The driver decrements the reference count of the PageOffline pages
>>     in MEM_GOING_OFFLINE.
> 
> Well, driver should make the page unreferenced or fail. What is done
> really depends on the specific driver
> 
>> c) The driver increments the reference count of the PageOffline pages
>>     in MEM_CANCEL_OFFLINE. One issue might be that the pages are no longer
>>     isolated once we get that call. Might be ok.
> 
> Only previous PageBuddy pages are returned to the allocator IIRC. Mostly
> because of MovablePage()
> 
>> d) How to make __test_page_isolated_in_pageblock() succeed?
>>     Like I propose in this patch (PageOffline() + refcount == 0)?
> 
> Yep
> 
>> e) How to make __offline_isolated_pages() succeed?
>>     Like I propose in this patch (PageOffline() + refcount == 0)?
> 
> Simply skip over PageOffline pages. Reference count should never be != 0
> at this stage.

Right, that should be guaranteed by d). (as long as people play by the 
rules) Same applies to my current patch.

>   
>> In summary, is what you suggest simply delaying setting the reference count to 0
>> in MEM_GOING_OFFLINE instead of right away when the driver unpluggs the pages?
> 
> Yes
> 
>> What's the big benefit you see and I fail to see?
> 
> Aparat from no hooks into __put_page it is also an explicit control over
> the page via reference counting. Do you see any downsides?

The only downside I see is that we get more false negatives on 
has_unmovable_pages(), eventually resulting in the offlining stage after 
isolation looping forever (as some PageOffline() pages are not movable 
(especially XEN balloon, HyperV balloon), there won't be progress).

I somewhat don't like forcing everybody that uses PageOffline() 
(especially all users of balloon compaction) to implement memory 
notifiers just to avoid that. Maybe, we even want to use PageOffline() 
in the future in the core (e.g., for memory holes instead of PG_reserved 
or similar).

Thanks!

-- 

Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-23 10:03                         ` David Hildenbrand
@ 2019-10-24  8:42                           ` Michal Hocko
  2019-10-24  8:51                             ` David Hildenbrand
  0 siblings, 1 reply; 39+ messages in thread
From: Michal Hocko @ 2019-10-24  8:42 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On Wed 23-10-19 12:03:51, David Hildenbrand wrote:
> >Do you see any downsides?
> 
> The only downside I see is that we get more false negatives on
> has_unmovable_pages(), eventually resulting in the offlining stage after
> isolation to loop forever (as some PageOffline() pages are not movable
> (especially, XEN balloon, HyperV balloon), there won't be progress).
> 
> I somewhat don't like forcing everybody that uses PageOffline() (especially
> all users of balloon compaction) to implement memory notifiers just to avoid
> that. Maybe, we even want to use PageOffline() in the future in the core
> (e.g., for memory holes instead of PG_reserved or similar).

There is only a handful of those and we need to deal with them anyway.
If you do not want to force them to create their own notifiers then we
can accommodate the hotplug code. __test_page_isolated_in_pageblock resp.
the call chain up can distinguish temporary and permanent failures
(EAGAIN vs. EBUSY). The current state, where we always return EBUSY and
keep retrying forever, is not optimal at all, right? A referenced PageOffline
page would be an example of EBUSY; all other failures, where we are effectively
waiting for pages to finally get freed, would be EAGAIN.
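
Something along these lines (a sketch only, with a made-up helper name;
the plumbing to propagate the error code up the call chain is not shown,
and movable PageOffline pages - balloon compaction - would have to be
migrated before we ever get here):

/* sketch: classify a page that is still not isolated when offlining */
static int isolated_page_status(struct page *page)
{
        if (PageBuddy(page))
                return 0;               /* free, fine */
        if (PageHWPoison(page))
                return 0;               /* skipped when offlining */
        if (PageOffline(page))
                /* still referenced: the owner did not agree -> permanent */
                return page_count(page) ? -EBUSY : 0;
        return -EAGAIN;                 /* still waiting for a free/migration */
}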

It is a bit late in the process because a large portion of the work has
been done already but this doesn't sound like something to lose sleep
over.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
  2019-10-24  8:42                           ` Michal Hocko
@ 2019-10-24  8:51                             ` David Hildenbrand
  2019-10-25 11:28                               ` [PATCH RFC] mm: Allow to offline unmovable PageOffline() pages if the driver agrees David Hildenbrand
  0 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2019-10-24  8:51 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, virtualization, Andrea Arcangeli,
	Andrew Morton, Juergen Gross, Pavel Tatashin, Alexander Duyck,
	Anthony Yznaga, Vlastimil Babka, Johannes Weiner, Oscar Salvador,
	Pingfan Liu, Qian Cai, Dan Williams, Mel Gorman, Mike Rapoport,
	Wei Yang, Alexander Potapenko, Anshuman Khandual,
	Jason Gunthorpe, Stephen Rothwell, Mauro Carvalho Chehab,
	Matthew Wilcox, Yu Zhao, Minchan Kim, Yang Shi, Ira Weiny,
	Andrey Ryabinin

On 24.10.19 10:42, Michal Hocko wrote:
> On Wed 23-10-19 12:03:51, David Hildenbrand wrote:
>>> Do you see any downsides?
>>
>> The only downside I see is that we get more false negatives on
>> has_unmovable_pages(), eventually resulting in the offlining stage after
>> isolation to loop forever (as some PageOffline() pages are not movable
>> (especially, XEN balloon, HyperV balloon), there won't be progress).
>>
>> I somewhat don't like forcing everybody that uses PageOffline() (especially
>> all users of balloon compaction) to implement memory notifiers just to avoid
>> that. Maybe, we even want to use PageOffline() in the future in the core
>> (e.g., for memory holes instead of PG_reserved or similar).
> 
> There is only a handful of those and we need to deal with them anyway.
> If you do not want to enforce them to create their own notifiers then we
> can accomodate the hotplug code. __test_page_isolated_in_pageblock resp.

Yeah, I would prefer offlining code to be able to deal with that without 
notifier changes for all users.

> the call chain up can distinguish temporary and permanent failures
> (EAGAIN vs. EBUSY). The current state when we always return EBUSY and
> keep retrying for ever is not optimal at all, right? A referenced PageOffline

Very right!

> could be an example of EBUSY all other failures where we are effectively
> waiting for pages to get freed finaly would be EAGAIN.

We have to watch out for PageOffline() pages that are actually movable 
(balloon compaction). But that doesn't sound too hard.
> 
> It is a bit late in the process because a large portion of the work has
> been done already but this doesn't sound like something to lose sleep
> over.
> 

Right. I'll look into that to find out if this would work. And see if I 
can reproduce what I described at all (theoretical thoughts) :)

Again, thanks for looking into this Michal!

-- 

Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 39+ messages in thread

* [PATCH RFC] mm: Allow to offline unmovable PageOffline() pages if the driver agrees
  2019-10-24  8:51                             ` David Hildenbrand
@ 2019-10-25 11:28                               ` David Hildenbrand
  0 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2019-10-25 11:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, virtualization, David Hildenbrand, Andrew Morton,
	Juergen Gross, Konrad Rzeszutek Wilk, Pavel Tatashin,
	Alexander Duyck, Vlastimil Babka, Johannes Weiner,
	Anthony Yznaga, Michal Hocko, Oscar Salvador, Mel Gorman,
	Mike Rapoport, Dan Williams, Anshuman Khandual, Qian Cai,
	Pingfan Liu

virtio-mem wants to allow offlining memory blocks of which some parts
were unplugged, especially to later offline and remove completely
unplugged memory blocks. The important part is that PageOffline() has
to remain set until the section is offline, so these pages will never
get accessed (e.g., when dumping). The pages should not be handed
back to the buddy (which would require clearing PageOffline() and
result in issues if offlining fails and the pages are suddenly in the
buddy).

Let's allow that by allowing any PageOffline() page to be isolated
when offlining. This way, we can reach the memory hotplug notifier
MEM_GOING_OFFLINE, where the driver can signal that it is fine with
offlining this page by dropping its reference count. PageOffline() pages
with a reference count of 0 can then be skipped when offlining the
pages (as if they were free, even though they are not in the buddy).

Anybody who uses PageOffline() pages and does not agree to offline them
(e.g., Hyper-V balloon, XEN balloon, VMWare balloon for 2MB pages) will not
decrement the reference count, making offlining fail when trying to
migrate such an unmovable page. So there should be no observable change.
The same applies to balloon compaction users (movable PageOffline() pages);
the pages will simply be migrated.

Note 1: If offlining fails, a driver has to increment the reference
	count again in MEM_CANCEL_OFFLINE.

Note 2: A driver that makes use of this has to be aware that re-onlining
	the memory block has to be handled by hooking into onlining code
	(online_page_callback_t), resetting the page PageOffline() and
	not giving them to the buddy.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Pavel Tatashin <pavel.tatashin@microsoft.com>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Anthony Yznaga <anthony.yznaga@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Pingfan Liu <kernelfans@gmail.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---

Michal,

this is the approach where we allow has_unmovable_pages() to succeed to
reach MEM_GOING_OFFLINE and fail later in case the driver did not agree.

Thoughts?

---
 include/linux/page-flags.h | 10 ++++++++++
 mm/memory_hotplug.c        | 41 ++++++++++++++++++++++++++++----------
 mm/page_alloc.c            | 24 ++++++++++++++++++++++
 mm/page_isolation.c        |  9 +++++++++
 4 files changed, 74 insertions(+), 10 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 3b8e5c5f7e1f..4897cc585af6 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -757,6 +757,16 @@ PAGE_TYPE_OPS(Buddy, buddy)
  * not onlined when onlining the section).
  * The content of these pages is effectively stale. Such pages should not
  * be touched (read/write/dump/save) except by their owner.
+ *
+ * If a driver wants to allow to offline unmovable PageOffline() pages without
+ * putting them back to the buddy, it can do so via the memory notifier by
+ * decrementing the reference count in MEM_GOING_OFFLINE and incrementing the
+ * reference count in MEM_CANCEL_OFFLINE. When offlining, the PageOffline()
+ * pages (now with a reference count of zero) are treated like free pages,
+ * allowing the containing memory block to get offlined. A driver that
+ * relies on this feature is aware that re-onlining the memory block will
+ * require to re-set the pages PageOffline() and not giving them to the
+ * buddy via online_page_callback_t.
  */
 PAGE_TYPE_OPS(Offline, offline)
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 561371ead39a..7a18b0bba045 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1223,11 +1223,15 @@ int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn,
 
 /*
  * Scan pfn range [start,end) to find movable/migratable pages (LRU pages,
- * non-lru movable pages and hugepages). We scan pfn because it's much
- * easier than scanning over linked list. This function returns the pfn
- * of the first found movable page if it's found, otherwise 0.
+ * non-lru movable pages and hugepages).
+ *
+ * Returns:
+ *	0 in case a movable page is found and movable_pfn was updated.
+ *	-ENOENT in case no movable page was found.
+ *	-EBUSY in case a definitely unmovable page was found.
  */
-static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
+static int scan_movable_pages(unsigned long start, unsigned long end,
+			      unsigned long *movable_pfn)
 {
 	unsigned long pfn;
 
@@ -1239,18 +1243,29 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
 			continue;
 		page = pfn_to_page(pfn);
 		if (PageLRU(page))
-			return pfn;
+			goto found;
 		if (__PageMovable(page))
-			return pfn;
+			goto found;
+
+		/*
+		 * Unmovable PageOffline() pages where somebody still holds
+		 * a reference count (after MEM_GOING_OFFLINE) can definitely
+		 * not be offlined.
+		 */
+		if (PageOffline(page) && page_count(page))
+			return -EBUSY;
 
 		if (!PageHuge(page))
 			continue;
 		head = compound_head(page);
 		if (page_huge_active(head))
-			return pfn;
+			goto found;
 		skip = compound_nr(head) - (page - head);
 		pfn += skip - 1;
 	}
+	return -ENOENT;
+found:
+	*movable_pfn = pfn;
 	return 0;
 }
 
@@ -1496,7 +1511,8 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	}
 
 	do {
-		for (pfn = start_pfn; pfn;) {
+		pfn = start_pfn;
+		do {
 			if (signal_pending(current)) {
 				ret = -EINTR;
 				reason = "signal backoff";
@@ -1506,14 +1522,19 @@ static int __ref __offline_pages(unsigned long start_pfn,
 			cond_resched();
 			lru_add_drain_all();
 
-			pfn = scan_movable_pages(pfn, end_pfn);
-			if (pfn) {
+			ret = scan_movable_pages(pfn, end_pfn, &pfn);
+			if (!ret) {
 				/*
 				 * TODO: fatal migration failures should bail
 				 * out
 				 */
 				do_migrate_range(pfn, end_pfn);
 			}
+		} while (!ret);
+
+		if (ret != -ENOENT) {
+			reason = "unmovable page";
+			goto failed_removal_isolated;
 		}
 
 		/*
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e2b0bdfdd586..1594f480532a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8273,6 +8273,19 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
 		if ((flags & MEMORY_OFFLINE) && PageHWPoison(page))
 			continue;
 
+		/*
+		 * We treat all PageOffline() pages as movable when offlining
+		 * to give drivers a chance to decrement their reference count
+		 * in MEM_GOING_OFFLINE in order to signal that these pages
+		 * can be offlined as there are no direct references anymore.
+		 * For actually unmovable PageOffline() pages where the driver does
+		 * not support this, we will fail later when trying to actually
+		 * move these pages that still have a reference count > 0.
+		 * (false negatives in this function only)
+		 */
+		if ((flags & MEMORY_OFFLINE) && PageOffline(page))
+			continue;
+
 		if (__PageMovable(page))
 			continue;
 
@@ -8702,6 +8715,17 @@ __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
 			offlined_pages++;
 			continue;
 		}
+		/*
+		 * At this point all remaining PageOffline() pages have a
+		 * reference count of 0 and can simply be skipped.
+		 */
+		if (PageOffline(page)) {
+			BUG_ON(page_count(page));
+			BUG_ON(PageBuddy(page));
+			pfn++;
+			offlined_pages++;
+			continue;
+		}
 
 		BUG_ON(page_count(page));
 		BUG_ON(!PageBuddy(page));
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 04ee1663cdbe..43b4dabfedc8 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -170,6 +170,7 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  *			a bit mask)
  *			MEMORY_OFFLINE - isolate to offline (!allocate) memory
  *					 e.g., skip over PageHWPoison() pages
+ *					 and PageOffline() pages.
  *			REPORT_FAILURE - report details about the failure to
  *			isolate the range
  *
@@ -278,6 +279,14 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
 		else if ((flags & MEMORY_OFFLINE) && PageHWPoison(page))
 			/* A HWPoisoned page cannot be also PageBuddy */
 			pfn++;
+		else if ((flags & MEMORY_OFFLINE) && PageOffline(page) &&
+			 !page_count(page))
+			/*
+			 * The responsible driver agreed to offline
+			 * PageOffline() pages by dropping its reference in
+			 * MEM_GOING_OFFLINE.
+			 */
+			pfn++;
 		else
 			break;
 	}
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 39+ messages in thread

end of thread, other threads:[~2019-10-25 11:28 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-09-19 14:22 [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand
2019-09-19 14:22 ` [PATCH RFC v3 1/9] ACPI: NUMA: export pxm_to_node David Hildenbrand
2019-09-23 10:13   ` David Hildenbrand
2019-09-23 10:36     ` Michal Hocko
2019-09-23 10:39       ` David Hildenbrand
2019-09-19 14:22 ` [PATCH RFC v3 2/9] virtio-mem: Paravirtualized memory hotplug David Hildenbrand
2019-09-19 14:22 ` [PATCH RFC v3 3/9] virtio-mem: Paravirtualized memory hotunplug part 1 David Hildenbrand
2019-09-19 14:22 ` [PATCH RFC v3 4/9] mm: Export alloc_contig_range() / free_contig_range() David Hildenbrand
2019-10-16 11:20   ` Michal Hocko
2019-10-16 12:31     ` David Hildenbrand
2019-09-19 14:22 ` [PATCH RFC v3 5/9] virtio-mem: Paravirtualized memory hotunplug part 2 David Hildenbrand
2019-09-19 14:22 ` [PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0 David Hildenbrand
2019-10-16 11:43   ` Michal Hocko
2019-10-16 12:50     ` David Hildenbrand
2019-10-16 13:45       ` Michal Hocko
2019-10-16 13:55         ` David Hildenbrand
2019-10-16 14:09           ` Michal Hocko
2019-10-16 14:16             ` David Hildenbrand
2019-10-16 13:59         ` David Hildenbrand
2019-10-16 13:45     ` David Hildenbrand
2019-10-16 14:03       ` Michal Hocko
2019-10-16 14:14         ` David Hildenbrand
2019-10-18  8:15           ` Michal Hocko
2019-10-18  8:50             ` David Hildenbrand
2019-10-18 11:20               ` Michal Hocko
2019-10-18 12:35                 ` David Hildenbrand
2019-10-22 12:23                   ` Michal Hocko
2019-10-22 14:02                     ` David Hildenbrand
2019-10-23  9:43                       ` Michal Hocko
2019-10-23 10:03                         ` David Hildenbrand
2019-10-24  8:42                           ` Michal Hocko
2019-10-24  8:51                             ` David Hildenbrand
2019-10-25 11:28                               ` [PATCH RFC] mm: Allow to offline unmovable PageOffline() pages if the driver agrees David Hildenbrand
2019-09-19 14:22 ` [PATCH RFC v3 7/9] virtio-mem: Allow to offline partially unplugged memory blocks David Hildenbrand
2019-09-19 14:22 ` [PATCH RFC v3 8/9] mm/memory_hotplug: Introduce offline_and_remove_memory() David Hildenbrand
2019-10-16 11:47   ` Michal Hocko
2019-10-16 12:57     ` David Hildenbrand
2019-09-19 14:22 ` [PATCH RFC v3 9/9] virtio-mem: Offline and remove completely unplugged memory blocks David Hildenbrand
2019-10-16  8:12 ` [PATCH RFC v3 0/9] virtio-mem: paravirtualized memory David Hildenbrand

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).