LKML Archive on lore.kernel.org
* [PATCH v35 0/5] Virtio-balloon: support free page reporting
@ 2018-07-10  9:31 Wei Wang
  2018-07-10  9:31 ` [PATCH v35 1/5] mm: support to get hints of free page blocks Wei Wang
                   ` (4 more replies)
  0 siblings, 5 replies; 27+ messages in thread
From: Wei Wang @ 2018-07-10  9:31 UTC (permalink / raw)
  To: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mst,
	mhocko, akpm
  Cc: torvalds, pbonzini, wei.w.wang, liliang.opensource,
	yang.zhang.wz, quan.xu0, nilal, riel, peterx

This patch series is separated from the previous "Virtio-balloon
Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,  
implemented by this series enables the virtio-balloon driver to report
hints of guest free pages to the host. It can be used to accelerate live
migration of VMs. Here is an introduction of this usage:

Live migration needs to transfer the VM's memory from the source machine
to the destination round by round. For the 1st round, all the VM's memory
is transferred. From the 2nd round, only the pieces of memory that were
written by the guest (after the 1st round) are transferred. One method
that is popularly used by the hypervisor to track which part of memory is
written is to write-protect all the guest memory.

This feature enables an optimization that skips the transfer of guest
free pages during VM live migration. It is not a problem if a page is
used by the guest after it has been given to the hypervisor as a free
page hint, because the hypervisor keeps tracking writes to guest memory,
and a hinted page that is later written will simply be transferred in a
subsequent round.
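To make the interplay with dirty tracking concrete, here is a small
userspace sketch (an illustration only, not code from this series) of
the hypervisor-side decision of which pages to send in a round:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Illustrative sketch (not from the patches): hypervisor-side logic
 * for one migration round. In the first round, a page is transferred
 * unless it was hinted as free and has not been written since; in
 * later rounds only pages dirtied meanwhile are sent, so a free page
 * that gets reused is still caught by write-protection tracking.
 */
static size_t pages_to_send(const bool *dirty, const bool *hinted_free,
			    size_t npages, bool first_round)
{
	size_t count = 0;

	for (size_t i = 0; i < npages; i++) {
		if (first_round) {
			/* Round 1: send everything except clean free pages. */
			if (!hinted_free[i] || dirty[i])
				count++;
		} else {
			/* Later rounds: send only pages dirtied meanwhile. */
			if (dirty[i])
				count++;
		}
	}
	return count;
}
```
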

* Tests
- Test Environment
    Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
    Guest: 8G RAM, 4 vCPU
    Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second

- Test Results
    - Idle Guest Live Migration Time (results are averaged over 10 runs):
        - Optimization vs. Legacy = 291ms vs. 1757ms --> ~84% reduction
          (setting the page poisoning value to zero and enabling KSM do
          not affect the comparison result)
    - Guest with Linux Compilation Workload (make bzImage -j4):
        - Live Migration Time (average)
          Optimization vs. Legacy = 1420ms vs. 2528ms --> ~44% reduction
        - Linux Compilation Time
          Optimization vs. Legacy = 5min8s vs. 5min12s
          --> no obvious difference

ChangeLog:
v34->v35:
    - mm:
       - get_from_free_page_list: use a list of page blocks as buffers to
        store addresses, instead of an array of buffers.
    - virtio-balloon:
        - Allocate a list of buffers, instead of an array of buffers.
        - Used buffers are freed after host puts the buffer to the used
          ring; unused buffers are freed immediately when guest finishes
          reporting.
        - change uint32_t to u32;
        - patch 2 is split out as an independent patch, as it's unrelated
          to the free page hinting feature.
v33->v34:
    - mm:
        - add a new API max_free_page_blocks, which estimates the max
          number of free page blocks that a free page list may have
        - get_from_free_page_list: store addresses to multiple arrays,
          instead of just one array. This removes the limitation of being
          able to report only 2TB free memory (the largest array memory
          that can be allocated on x86 is 4MB, which can store 2^19
          addresses of 4MB free page blocks).
    - virtio-balloon:
        - Allocate multiple arrays to load free page hints;
        - Use the same method as in v32 for guest/host interaction; the
          differences are
              - the hints are transferred array by array, instead of
                one by one;
	      - send the free page block size of a hint along with the cmd
                id to host, so that host knows each address represents e.g.
                a 4MB memory in our case. 
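As a sanity check of the 2TB figure above (illustrative userspace
arithmetic, not code from this series):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Back-of-envelope check (illustration only): the largest physically
 * contiguous buffer on x86 is an order-(MAX_ORDER - 1) = order-10
 * allocation, i.e. 4MB. Each stored address is 8 bytes (__le64), and
 * each address names a 4MB free page block, so a single buffer can
 * describe at most 2^19 * 4MB = 2TB of free memory.
 */
static uint64_t max_reported_bytes(uint64_t buf_bytes, uint64_t block_bytes)
{
	uint64_t entries = buf_bytes / sizeof(uint64_t);

	return entries * block_bytes;
}
```
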
v32->v33:
    - mm/get_from_free_page_list: The new implementation to get free page
      hints based on the suggestions from Linus:
      https://lkml.org/lkml/2018/6/11/764
      This avoids the complex call chain, and looks more prudent.
    - virtio-balloon: 
      - use a fix-sized buffer to get free page hints;
      - remove the cmd id related interface. Now host can just send a free
        page hint command to the guest (via the host_cmd config register)
        to start the reporting. Currently the guest reports only the max
        order free page hints to host, which has generated similarly good
        results as before. But the interface used by virtio-balloon to
        report can support reporting more orders in the future when there
        is a need.
v31->v32:
    - virtio-balloon:
        - rename cmd_id_use to cmd_id_active;
        - report_free_page_func: detach used buffers after host sends a vq
          interrupt, instead of busy waiting for used buffers.
v30->v31:
    - virtio-balloon:
        - virtio_balloon_send_free_pages: return -EINTR rather than 1 to
          indicate an active stop requested by host; and add more
          comments to explain about access to cmd_id_received without
          locks;
        -  add_one_sg: add TODO to comments about possible improvement.
v29->v30:
    - mm/walk_free_mem_block: add cond_resched() for each order
v28->v29:
    - mm/page_poison: only expose page_poison_enabled(), rather than the
      broader changes made in v28, as we are not 100% confident about
      those for now.
    - virtio-balloon: use a separate buffer for the stop cmd, instead of
      having the start and stop cmd use the same buffer. This avoids the
      corner case that the start cmd is overridden by the stop cmd when
      the host has a delay in reading the start cmd.
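The buffer corner case above can be modeled in a few lines (a toy
illustration, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model (illustration only) of the corner case fixed in v29:
 * if the start and stop cmds share one buffer and the host is slow
 * to read, a quick stop overwrites the start and the host never sees
 * the start cmd. With separate buffers both values survive.
 */
struct shared { uint32_t buf; };
struct split  { uint32_t start_buf, stop_buf; };

static uint32_t host_reads_shared(struct shared *s,
				  uint32_t start, uint32_t stop)
{
	s->buf = start;	/* guest writes start cmd */
	s->buf = stop;	/* guest writes stop before host reads */
	return s->buf;	/* host reads: the start cmd is lost */
}

static uint32_t host_reads_split_start(struct split *s,
				       uint32_t start, uint32_t stop)
{
	s->start_buf = start;
	s->stop_buf = stop;
	return s->start_buf;	/* start cmd still intact */
}
```
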
v27->v28:
    - mm/page_poison: Move PAGE_POISON to page_poison.c and add a function
      to expose page poison val to kernel modules.
v26->v27:
    - add a new patch to expose page_poisoning_enabled to kernel modules
    - virtio-balloon: set poison_val to 0xaaaaaaaa, instead of 0xaa
v25->v26: virtio-balloon changes only
    - remove kicking free page vq since the host now polls the vq after
      initiating the reporting
    - report_free_page_func: detach all the used buffers after sending
      the stop cmd id. This avoids leaving the detaching burden (i.e.
      overhead) to the next cmd id. Detaching here isn't considered
      overhead since the stop cmd id has been sent, and host has already
      moved forward.
v24->v25:
    - mm: change walk_free_mem_block to return 0 (instead of true) on
          completing the report, and return a non-zero value from the
          callback, which stops the reporting.
    - virtio-balloon:
        - use enum instead of define for VIRTIO_BALLOON_VQ_INFLATE etc.
        - avoid __virtio_clear_bit when bailing out;
        - a new method to avoid reporting the same cmd id to host twice
        - destroy_workqueue can cancel free page work when the feature is
          negotiated;
        - fail probe when the free page vq size is less than 2.
v23->v24:
    - change feature name VIRTIO_BALLOON_F_FREE_PAGE_VQ to
      VIRTIO_BALLOON_F_FREE_PAGE_HINT
    - kick when vq->num_free < half full, instead of "= half full"
    - replace BUG_ON with bailing out
    - check vb->balloon_wq in probe(), if null, bail out
    - add a new feature bit for page poisoning
    - solve the corner case that one cmd id being sent to host twice
v22->v23:
    - change to kick the device when the vq is half-way full;
    - open-code batch_free_page_sg into add_one_sg;
    - change cmd_id from "uint32_t" to "__virtio32";
    - reserve one entry in the vq for the driver to send cmd_id, instead
      of busy-waiting for an available entry;
    - add a "stop_update" check before queue_work for prudence for now;
      a separate patch will discuss this flag check later;
    - init_vqs: change to put some variables on stack to have simpler
      implementation;
    - add destroy_workqueue(vb->balloon_wq);
v21->v22:
    - add_one_sg: some code and comment re-arrangement
    - send_cmd_id: handle a corner case

For previous ChangeLog, please reference
https://lwn.net/Articles/743660/

Wei Wang (5):
  mm: support to get hints of free page blocks
  virtio-balloon: remove BUG() in init_vqs
  virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
  mm/page_poison: expose page_poisoning_enabled to kernel modules
  virtio-balloon: VIRTIO_BALLOON_F_PAGE_POISON

 drivers/virtio/virtio_balloon.c     | 419 +++++++++++++++++++++++++++++++++---
 include/linux/mm.h                  |   3 +
 include/uapi/linux/virtio_balloon.h |  14 ++
 mm/page_alloc.c                     |  98 +++++++++
 mm/page_poison.c                    |   6 +
 5 files changed, 511 insertions(+), 29 deletions(-)

-- 
2.7.4


^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-10  9:31 [PATCH v35 0/5] Virtio-balloon: support free page reporting Wei Wang
@ 2018-07-10  9:31 ` Wei Wang
  2018-07-10 10:16   ` Wang, Wei W
  2018-07-10 17:33   ` Linus Torvalds
  2018-07-10  9:31 ` [PATCH v35 2/5] virtio-balloon: remove BUG() in init_vqs Wei Wang
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 27+ messages in thread
From: Wei Wang @ 2018-07-10  9:31 UTC (permalink / raw)
  To: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mst,
	mhocko, akpm
  Cc: torvalds, pbonzini, wei.w.wang, liliang.opensource,
	yang.zhang.wz, quan.xu0, nilal, riel, peterx

This patch adds support for getting free page blocks from a free page
list. The physical addresses of the blocks are stored in a list of
buffers passed in by the caller. The obtained free page blocks are only
hints about free pages, because there is no guarantee that they are
still on the free page list after the function returns.

One example use of this patch is to accelerate live migration by skipping
the transfer of free pages reported by the guest. A popular method used
by the hypervisor to track which part of memory is written during live
migration is to write-protect all the guest memory. So, those pages that
are hinted as free pages but are written after this function returns will
be captured by the hypervisor, and they will be added to the next round of
memory transfer.
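The buffer-filling contract described above can be sketched in userspace
as follows (a hypothetical helper mirroring the 0 / -ENOSPC convention,
not the kernel function itself):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Userspace sketch (hypothetical helper, not get_from_free_page_list
 * itself): store up to 'entries' 64-bit addresses per buffer across
 * 'nbufs' buffers, reporting how many were loaded. Returns 0 on
 * success, or -1 when the buffers run out of space, mirroring the
 * kernel function's 0 / -ENOSPC convention.
 */
static int load_addrs(uint64_t **bufs, size_t nbufs, size_t entries,
		      const uint64_t *addrs, size_t naddrs, size_t *loaded)
{
	size_t i;

	for (i = 0; i < naddrs; i++) {
		size_t buf = i / entries;

		if (buf == nbufs) {
			/* All the buffers are consumed. */
			*loaded = i;
			return -1;	/* analogous to -ENOSPC */
		}
		bufs[buf][i % entries] = addrs[i];
	}
	*loaded = naddrs;
	return 0;
}
```
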

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Liang Li <liang.z.li@intel.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
---
 include/linux/mm.h |  3 ++
 mm/page_alloc.c    | 98 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 101 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a0fbb9f..5ce654f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2007,6 +2007,9 @@ extern void free_area_init(unsigned long * zones_size);
 extern void free_area_init_node(int nid, unsigned long * zones_size,
 		unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
+unsigned long max_free_page_blocks(int order);
+int get_from_free_page_list(int order, struct list_head *pages,
+			    unsigned int size, unsigned long *loaded_num);
 
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1521100..b67839b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5043,6 +5043,104 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 	show_swap_cache_info();
 }
 
+/**
+ * max_free_page_blocks - estimate the max number of free page blocks
+ * @order: the order of the free page blocks to estimate
+ *
+ * This function gives a rough estimation of the possible maximum number of
+ * free page blocks a free list may have. The estimation works on an assumption
+ * that all the system pages are on that list.
+ *
+ * Context: Any context.
+ *
+ * Return: The largest number of free page blocks that the free list can have.
+ */
+unsigned long max_free_page_blocks(int order)
+{
+	return totalram_pages / (1 << order);
+}
+EXPORT_SYMBOL_GPL(max_free_page_blocks);
+
+/**
+ * get_from_free_page_list - get hints of free pages from a free page list
+ * @order: the order of the free page list to check
+ * @pages: the list of page blocks used as buffers to load the addresses
+ * @size: the size of each buffer in bytes
+ * @loaded_num: the number of addresses loaded to the buffers
+ *
+ * This function offers hints about free pages. The addresses of free page
+ * blocks are stored to the list of buffers passed from the caller. There is
+ * no guarantee that the obtained free pages are still on the free page list
+ * after the function returns. pfn_to_page on the obtained free pages is
+ * strongly discouraged and if there is an absolute need for that, make sure
+ * to contact MM people to discuss potential problems.
+ *
+ * The addresses are currently stored to a buffer in little endian. This
+ * avoids the overhead of converting endianness by the caller who needs data
+ * in the little endian format. Big endian support can be added on demand in
+ * the future.
+ *
+ * Context: Process context.
+ *
+ * Return: 0 if all the free page block addresses are stored to the buffers;
+ *         -ENOSPC if the buffers are not sufficient to store all the
+ *         addresses; or -EINVAL if an unexpected argument is received (e.g.
+ *         incorrect @order, empty buffer list).
+ */
+int get_from_free_page_list(int order, struct list_head *pages,
+			    unsigned int size, unsigned long *loaded_num)
+{
+	struct zone *zone;
+	enum migratetype mt;
+	struct list_head *free_list;
+	struct page *free_page, *buf_page;
+	unsigned long addr;
+	__le64 *buf;
+	unsigned int used_buf_num = 0, entry_index = 0,
+		     entries = size / sizeof(__le64);
+	*loaded_num = 0;
+
+	/* Validity check */
+	if (order < 0 || order >= MAX_ORDER)
+		return -EINVAL;
+
+	buf_page = list_first_entry_or_null(pages, struct page, lru);
+	if (!buf_page)
+		return -EINVAL;
+	buf = (__le64 *)page_address(buf_page);
+
+	for_each_populated_zone(zone) {
+		spin_lock_irq(&zone->lock);
+		for (mt = 0; mt < MIGRATE_TYPES; mt++) {
+			free_list = &zone->free_area[order].free_list[mt];
+			list_for_each_entry(free_page, free_list, lru) {
+				addr = page_to_pfn(free_page) << PAGE_SHIFT;
+				/* This buffer is full, so use the next one */
+				if (entry_index == entries) {
+					buf_page = list_next_entry(buf_page,
+								   lru);
+					/* All the buffers are consumed */
+					if (!buf_page) {
+						spin_unlock_irq(&zone->lock);
+						*loaded_num = used_buf_num *
+							      entries;
+						return -ENOSPC;
+					}
+					buf = (__le64 *)page_address(buf_page);
+					entry_index = 0;
+					used_buf_num++;
+				}
+				buf[entry_index++] = cpu_to_le64(addr);
+			}
+		}
+		spin_unlock_irq(&zone->lock);
+	}
+
+	*loaded_num = used_buf_num * entries + entry_index;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(get_from_free_page_list);
+
 static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
 {
 	zoneref->zone = zone;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v35 2/5] virtio-balloon: remove BUG() in init_vqs
  2018-07-10  9:31 [PATCH v35 0/5] Virtio-balloon: support free page reporting Wei Wang
  2018-07-10  9:31 ` [PATCH v35 1/5] mm: support to get hints of free page blocks Wei Wang
@ 2018-07-10  9:31 ` Wei Wang
  2018-07-10  9:31 ` [PATCH v35 3/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT Wei Wang
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 27+ messages in thread
From: Wei Wang @ 2018-07-10  9:31 UTC (permalink / raw)
  To: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mst,
	mhocko, akpm
  Cc: torvalds, pbonzini, wei.w.wang, liliang.opensource,
	yang.zhang.wz, quan.xu0, nilal, riel, peterx

It's a bit overkill to use BUG when failing to add an entry to the
stats_vq in init_vqs. So remove it and just return the error to the
caller to bail out nicely.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
---
 drivers/virtio/virtio_balloon.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 6b237e3..9356a1a 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -455,9 +455,13 @@ static int init_vqs(struct virtio_balloon *vb)
 		num_stats = update_balloon_stats(vb);
 
 		sg_init_one(&sg, vb->stats, sizeof(vb->stats[0]) * num_stats);
-		if (virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, GFP_KERNEL)
-		    < 0)
-			BUG();
+		err = virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb,
+					   GFP_KERNEL);
+		if (err) {
+			dev_warn(&vb->vdev->dev, "%s: add stat_vq failed\n",
+				 __func__);
+			return err;
+		}
 		virtqueue_kick(vb->stats_vq);
 	}
 	return 0;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v35 3/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
  2018-07-10  9:31 [PATCH v35 0/5] Virtio-balloon: support free page reporting Wei Wang
  2018-07-10  9:31 ` [PATCH v35 1/5] mm: support to get hints of free page blocks Wei Wang
  2018-07-10  9:31 ` [PATCH v35 2/5] virtio-balloon: remove BUG() in init_vqs Wei Wang
@ 2018-07-10  9:31 ` Wei Wang
  2018-07-10  9:31 ` [PATCH v35 4/5] mm/page_poison: expose page_poisoning_enabled to kernel modules Wei Wang
  2018-07-10  9:31 ` [PATCH v35 5/5] virtio-balloon: VIRTIO_BALLOON_F_PAGE_POISON Wei Wang
  4 siblings, 0 replies; 27+ messages in thread
From: Wei Wang @ 2018-07-10  9:31 UTC (permalink / raw)
  To: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mst,
	mhocko, akpm
  Cc: torvalds, pbonzini, wei.w.wang, liliang.opensource,
	yang.zhang.wz, quan.xu0, nilal, riel, peterx

Negotiation of the VIRTIO_BALLOON_F_FREE_PAGE_HINT feature indicates
support for reporting hints of guest free pages to host via
virtio-balloon.

Host requests the guest to report free page hints by sending a new cmd id
to the guest via the free_page_report_cmd_id configuration register.

As the first step here, virtio-balloon only reports free page hints from
the max order (i.e. 10) free page list to host. In our tests, this has
generated similarly good results to reporting all free page hints.

When the guest starts reporting, it first sends a start cmd to host via
the free page vq, which acks the cmd id received from host and tells
host the hint size (e.g. 4MB each on x86). When the guest finishes the
reporting, a stop cmd is sent to host via the vq.
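The reporting pass described above can be modeled minimally (an
illustration only, not the driver code; the stop id value of 0 for
VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID is an assumption here):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed value of VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID. */
#define STOP_ID 0u

/*
 * Minimal model of one reporting pass: a start cmd would be sent,
 * then hint buffers while the host's cmd id still matches the one
 * being worked on, then a stop cmd. Returns the number of hint
 * buffers sent before the pass ended (the host can cut it short by
 * writing a stop id or a new cmd id to the config register).
 */
static unsigned int report_pass(uint32_t cmd_id_active,
				const uint32_t *cmd_id_seen,
				unsigned int nbufs)
{
	unsigned int sent = 0;

	/* start cmd would be queued here, carrying cmd_id_active */
	for (unsigned int i = 0; i < nbufs; i++) {
		/* config register is re-checked before each buffer */
		if (cmd_id_seen[i] != cmd_id_active)
			break;	/* host requested stop or a new pass */
		sent++;
	}
	/* stop cmd would be queued here */
	return sent;
}
```
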

TODO:
- support reporting free page hints from smaller order free page lists
  when there is a need/request from users.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Liang Li <liang.z.li@intel.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 drivers/virtio/virtio_balloon.c     | 399 +++++++++++++++++++++++++++++++++---
 include/uapi/linux/virtio_balloon.h |  11 +
 2 files changed, 384 insertions(+), 26 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 9356a1a..8754154 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -43,6 +43,14 @@
 #define OOM_VBALLOON_DEFAULT_PAGES 256
 #define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
 
+/* The order used to allocate a buffer to load free page hints */
+#define VIRTIO_BALLOON_HINT_BUF_ORDER (MAX_ORDER - 1)
+/* The number of pages a hint buffer has */
+#define VIRTIO_BALLOON_HINT_BUF_PAGES (1 << VIRTIO_BALLOON_HINT_BUF_ORDER)
+/* The size of a hint buffer in bytes */
+#define VIRTIO_BALLOON_HINT_BUF_SIZE (VIRTIO_BALLOON_HINT_BUF_PAGES << \
+				      PAGE_SHIFT)
+
 static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES;
 module_param(oom_pages, int, S_IRUSR | S_IWUSR);
 MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
@@ -51,9 +59,22 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
 static struct vfsmount *balloon_mnt;
 #endif
 
+enum virtio_balloon_vq {
+	VIRTIO_BALLOON_VQ_INFLATE,
+	VIRTIO_BALLOON_VQ_DEFLATE,
+	VIRTIO_BALLOON_VQ_STATS,
+	VIRTIO_BALLOON_VQ_FREE_PAGE,
+	VIRTIO_BALLOON_VQ_MAX
+};
+
 struct virtio_balloon {
 	struct virtio_device *vdev;
-	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
+	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;
+
+	/* Balloon's own wq for cpu-intensive work items */
+	struct workqueue_struct *balloon_wq;
+	/* The free page reporting work item submitted to the balloon wq */
+	struct work_struct report_free_page_work;
 
 	/* The balloon servicing is delegated to a freezable workqueue. */
 	struct work_struct update_balloon_stats_work;
@@ -63,6 +84,15 @@ struct virtio_balloon {
 	spinlock_t stop_update_lock;
 	bool stop_update;
 
+	/* Command buffers to start and stop the reporting of hints to host */
+	struct virtio_balloon_free_page_hints_cmd cmd_start;
+	struct virtio_balloon_free_page_hints_cmd cmd_stop;
+
+	/* The cmd id received from host */
+	u32 cmd_id_received;
+	/* The cmd id that is actively in use */
+	u32 cmd_id_active;
+
 	/* Waiting for host to ack the pages we released. */
 	wait_queue_head_t acked;
 
@@ -326,17 +356,6 @@ static void stats_handle_request(struct virtio_balloon *vb)
 	virtqueue_kick(vq);
 }
 
-static void virtballoon_changed(struct virtio_device *vdev)
-{
-	struct virtio_balloon *vb = vdev->priv;
-	unsigned long flags;
-
-	spin_lock_irqsave(&vb->stop_update_lock, flags);
-	if (!vb->stop_update)
-		queue_work(system_freezable_wq, &vb->update_balloon_size_work);
-	spin_unlock_irqrestore(&vb->stop_update_lock, flags);
-}
-
 static inline s64 towards_target(struct virtio_balloon *vb)
 {
 	s64 target;
@@ -353,6 +372,35 @@ static inline s64 towards_target(struct virtio_balloon *vb)
 	return target - vb->num_pages;
 }
 
+static void virtballoon_changed(struct virtio_device *vdev)
+{
+	struct virtio_balloon *vb = vdev->priv;
+	unsigned long flags;
+	s64 diff = towards_target(vb);
+
+	if (diff) {
+		spin_lock_irqsave(&vb->stop_update_lock, flags);
+		if (!vb->stop_update)
+			queue_work(system_freezable_wq,
+				   &vb->update_balloon_size_work);
+		spin_unlock_irqrestore(&vb->stop_update_lock, flags);
+	}
+
+	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
+		virtio_cread(vdev, struct virtio_balloon_config,
+			     free_page_report_cmd_id, &vb->cmd_id_received);
+		if (vb->cmd_id_received !=
+		    VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID &&
+		    vb->cmd_id_received != vb->cmd_id_active) {
+			spin_lock_irqsave(&vb->stop_update_lock, flags);
+			if (!vb->stop_update)
+				queue_work(vb->balloon_wq,
+					   &vb->report_free_page_work);
+			spin_unlock_irqrestore(&vb->stop_update_lock, flags);
+		}
+	}
+}
+
 static void update_balloon_size(struct virtio_balloon *vb)
 {
 	u32 actual = vb->num_pages;
@@ -425,28 +473,61 @@ static void update_balloon_size_func(struct work_struct *work)
 		queue_work(system_freezable_wq, work);
 }
 
+static void virtio_balloon_free_used_hint_buf(struct virtqueue *vq)
+{
+	unsigned int len;
+	void *buf;
+	struct virtio_balloon *vb = vq->vdev->priv;
+
+	do {
+		buf = virtqueue_get_buf(vq, &len);
+		if (buf == &vb->cmd_start || buf == &vb->cmd_stop)
+			continue;
+		free_pages((unsigned long)buf, VIRTIO_BALLOON_HINT_BUF_ORDER);
+	} while (buf);
+}
+
 static int init_vqs(struct virtio_balloon *vb)
 {
-	struct virtqueue *vqs[3];
-	vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
-	static const char * const names[] = { "inflate", "deflate", "stats" };
-	int err, nvqs;
+	struct virtqueue *vqs[VIRTIO_BALLOON_VQ_MAX];
+	vq_callback_t *callbacks[VIRTIO_BALLOON_VQ_MAX];
+	const char *names[VIRTIO_BALLOON_VQ_MAX];
+	int err;
 
 	/*
-	 * We expect two virtqueues: inflate and deflate, and
-	 * optionally stat.
+	 * Inflateq and deflateq are used unconditionally. The names[]
+	 * will be NULL if the related feature is not enabled, which will
+	 * cause no allocation for the corresponding virtqueue in find_vqs.
 	 */
-	nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
-	err = virtio_find_vqs(vb->vdev, nvqs, vqs, callbacks, names, NULL);
+	callbacks[VIRTIO_BALLOON_VQ_INFLATE] = balloon_ack;
+	names[VIRTIO_BALLOON_VQ_INFLATE] = "inflate";
+	callbacks[VIRTIO_BALLOON_VQ_DEFLATE] = balloon_ack;
+	names[VIRTIO_BALLOON_VQ_DEFLATE] = "deflate";
+	names[VIRTIO_BALLOON_VQ_STATS] = NULL;
+	names[VIRTIO_BALLOON_VQ_FREE_PAGE] = NULL;
+
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
+		names[VIRTIO_BALLOON_VQ_STATS] = "stats";
+		callbacks[VIRTIO_BALLOON_VQ_STATS] = stats_request;
+	}
+
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
+		names[VIRTIO_BALLOON_VQ_FREE_PAGE] = "free_page_vq";
+		callbacks[VIRTIO_BALLOON_VQ_FREE_PAGE] =
+					virtio_balloon_free_used_hint_buf;
+	}
+
+	err = vb->vdev->config->find_vqs(vb->vdev, VIRTIO_BALLOON_VQ_MAX,
+					 vqs, callbacks, names, NULL, NULL);
 	if (err)
 		return err;
 
-	vb->inflate_vq = vqs[0];
-	vb->deflate_vq = vqs[1];
+	vb->inflate_vq = vqs[VIRTIO_BALLOON_VQ_INFLATE];
+	vb->deflate_vq = vqs[VIRTIO_BALLOON_VQ_DEFLATE];
 	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
 		struct scatterlist sg;
 		unsigned int num_stats;
-		vb->stats_vq = vqs[2];
+		vb->stats_vq = vqs[VIRTIO_BALLOON_VQ_STATS];
 
 		/*
 		 * Prime this virtqueue with one buffer so the hypervisor can
@@ -464,9 +545,246 @@ static int init_vqs(struct virtio_balloon *vb)
 		}
 		virtqueue_kick(vb->stats_vq);
 	}
+
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+		vb->free_page_vq = vqs[VIRTIO_BALLOON_VQ_FREE_PAGE];
+
 	return 0;
 }
 
+static int send_start_cmd_id(struct virtio_balloon *vb)
+{
+	struct scatterlist sg;
+	struct virtqueue *vq = vb->free_page_vq;
+	int err;
+
+	virtio_balloon_free_used_hint_buf(vq);
+
+	vb->cmd_start.id = cpu_to_virtio32(vb->vdev, vb->cmd_id_active);
+	vb->cmd_start.size = cpu_to_virtio32(vb->vdev,
+					     MAX_ORDER_NR_PAGES * PAGE_SIZE);
+	sg_init_one(&sg, &vb->cmd_start,
+		    sizeof(struct virtio_balloon_free_page_hints_cmd));
+
+	err = virtqueue_add_outbuf(vq, &sg, 1, &vb->cmd_start, GFP_KERNEL);
+	if (!err)
+		virtqueue_kick(vq);
+	return err;
+}
+
+static int send_stop_cmd_id(struct virtio_balloon *vb)
+{
+	struct scatterlist sg;
+	struct virtqueue *vq = vb->free_page_vq;
+	int err;
+
+	virtio_balloon_free_used_hint_buf(vq);
+
+	vb->cmd_stop.id = cpu_to_virtio32(vb->vdev,
+				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
+	vb->cmd_stop.size = 0;
+	sg_init_one(&sg, &vb->cmd_stop,
+		    sizeof(struct virtio_balloon_free_page_hints_cmd));
+	err = virtqueue_add_outbuf(vq, &sg, 1, &vb->cmd_stop, GFP_KERNEL);
+	if (!err)
+		virtqueue_kick(vq);
+	return err;
+}
+
+static int send_hint_buf(struct virtio_balloon *vb, void *buf,
+			 unsigned int size)
+{
+	int err;
+	struct scatterlist sg;
+	struct virtqueue *vq = vb->free_page_vq;
+
+	virtio_balloon_free_used_hint_buf(vq);
+
+	/*
+	 * If a stop id or a new cmd id was just received from host,
+	 * stop the reporting, return -EINTR to indicate an active stop.
+	 */
+	if (vb->cmd_id_received != vb->cmd_id_active)
+		return -EINTR;
+
+	/* There is always one entry reserved for the cmd id to use. */
+	if (vq->num_free < 2)
+		return -ENOSPC;
+
+	sg_init_one(&sg, buf, size);
+	err = virtqueue_add_inbuf(vq, &sg, 1, buf, GFP_KERNEL);
+	if (!err)
+		virtqueue_kick(vq);
+	return err;
+}
+
+static void virtio_balloon_free_hint_bufs(struct list_head *pages)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, pages, lru) {
+		__free_pages(page, VIRTIO_BALLOON_HINT_BUF_ORDER);
+		list_del(&page->lru);
+	}
+}
+
+/*
+ * virtio_balloon_send_hints - send buffers of hints to host
+ * @vb: the virtio_balloon struct
+ * @pages: the list of page blocks used as buffers
+ * @hint_num: the total number of hints
+ *
+ * Send buffers of hints to host. This begins by sending a start cmd, which
+ * contains a cmd id received from host and the free page block size in bytes
+ * of each hint. At the end, a stop cmd is sent to host to indicate the end
+ * of this reporting. If host actively requests to stop the reporting, free
+ * the buffers that have not been sent.
+ */
+static void virtio_balloon_send_hints(struct virtio_balloon *vb,
+				      struct list_head *pages,
+				      unsigned long hint_num)
+{
+	struct page *page, *next;
+	void *buf;
+	unsigned int buf_size, hint_size;
+	int err;
+
+	/* Start by sending the received cmd id to host with an outbuf. */
+	err = send_start_cmd_id(vb);
+	if (unlikely(err))
+		goto out_free;
+
+	list_for_each_entry_safe(page, next, pages, lru) {
+		/* We've sent all the hints. */
+		if (!hint_num)
+			break;
+		hint_size = hint_num * sizeof(__le64);
+		buf = page_address(page);
+		buf_size = hint_size > VIRTIO_BALLOON_HINT_BUF_SIZE ?
+			   VIRTIO_BALLOON_HINT_BUF_SIZE : hint_size;
+		hint_num -= buf_size / sizeof(__le64);
+		err = send_hint_buf(vb, buf, buf_size);
+		/*
+		 * If host actively stops the reporting or no space to add more
+		 * hint bufs, just stop adding hints and continue to add the
+		 * stop cmd. Other device errors need to bail out with an error
+		 * message.
+		 */
+		if (unlikely(err == -EINTR || err == -ENOSPC))
+			break;
+		else if (unlikely(err))
+			goto out_free;
+		/*
+		 * Remove the buffer from the list only when it has been given
+		 * to host. Otherwise, it will stay on the list and will be
+		 * freed via virtio_balloon_free_hint_bufs.
+		 */
+		list_del(&page->lru);
+	}
+
+	/* End by sending a stop id to host with an outbuf. */
+	err = send_stop_cmd_id(vb);
+out_free:
+	if (err)
+		dev_err(&vb->vdev->dev, "%s: err = %d\n", __func__, err);
+	/* Free all the buffers that are not sent to host. */
+	virtio_balloon_free_hint_bufs(pages);
+}
+
+/*
+ * Allocate a list of buffers to load free page hints. Those buffers are
+ * allocated based on the estimation of the max number of free page blocks
+ * that the system may have, so that they are sufficient to store all the
+ * free page addresses.
+ *
+ * Return: 0 on success, or -ENOMEM on failure.
+ */
+static int virtio_balloon_alloc_hint_bufs(struct list_head *pages)
+{
+	struct page *page;
+	unsigned long max_entries, entries_per_page, entries_per_buf,
+		      max_buf_num;
+	int i;
+
+	max_entries = max_free_page_blocks(VIRTIO_BALLOON_HINT_BUF_ORDER);
+	entries_per_page = PAGE_SIZE / sizeof(__le64);
+	entries_per_buf = entries_per_page * VIRTIO_BALLOON_HINT_BUF_PAGES;
+	max_buf_num = max_entries / entries_per_buf +
+		      !!(max_entries % entries_per_buf);
+
+	for (i = 0; i < max_buf_num; i++) {
+		page = alloc_pages(__GFP_ATOMIC | __GFP_NOMEMALLOC,
+				   VIRTIO_BALLOON_HINT_BUF_ORDER);
+		if (!page) {
+			/*
+			 * If any one of the buffers fails to be allocated, it
+			 * implies that the free list that we are interested
+			 * in is empty, and there is no need to continue the
+			 * reporting. So just free what's allocated and return
+			 * -ENOMEM.
+			 */
+			virtio_balloon_free_hint_bufs(pages);
+			return -ENOMEM;
+		}
+		list_add(&page->lru, pages);
+	}
+
+	return 0;
+}
+
+/*
+ * virtio_balloon_load_hints - load free page hints into buffers
+ * @vb: the virtio_balloon struct
+ * @pages: the list of page blocks used as buffers
+ *
+ * Only free pages blocks of MAX_ORDER - 1 are loaded into the buffers.
+ * Each buffer size is MAX_ORDER_NR_PAGES * PAGE_SIZE (e.g. 4MB on x86).
+ * Failing to allocate such a buffer essentially implies that no such free
+ * page blocks could be reported.
+ *
+ * Return the total number of hints loaded into the buffers.
+ */
+static unsigned long virtio_balloon_load_hints(struct virtio_balloon *vb,
+					       struct list_head *pages)
+{
+	unsigned long loaded_hints = 0;
+	int ret;
+
+	do {
+		ret = virtio_balloon_alloc_hint_bufs(pages);
+		if (ret)
+			return 0;
+
+		ret = get_from_free_page_list(VIRTIO_BALLOON_HINT_BUF_ORDER,
+					pages, VIRTIO_BALLOON_HINT_BUF_SIZE,
+					&loaded_hints);
+		/*
+		 * If memory is onlined quickly, the allocated buffers may be
+		 * insufficient to store all the free page addresses. In that
+		 * case, free the hint buffers and retry.
+		 */
+		if (unlikely(ret == -ENOSPC))
+			virtio_balloon_free_hint_bufs(pages);
+	} while (ret == -ENOSPC);
+
+	return loaded_hints;
+}
+
+static void report_free_page_func(struct work_struct *work)
+{
+	struct virtio_balloon *vb;
+	unsigned long loaded_hints = 0;
+	LIST_HEAD(pages);
+
+	vb = container_of(work, struct virtio_balloon, report_free_page_work);
+	vb->cmd_id_active = vb->cmd_id_received;
+
+	loaded_hints = virtio_balloon_load_hints(vb, &pages);
+	if (loaded_hints)
+		virtio_balloon_send_hints(vb, &pages, loaded_hints);
+}
+
 #ifdef CONFIG_BALLOON_COMPACTION
 /*
  * virtballoon_migratepage - perform the balloon page migration on behalf of
@@ -580,18 +898,38 @@ static int virtballoon_probe(struct virtio_device *vdev)
 	if (err)
 		goto out_free_vb;
 
+	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
+		/*
+		 * There is always one entry reserved for cmd id, so the ring
+		 * size needs to be at least two to report free page hints.
+		 */
+		if (virtqueue_get_vring_size(vb->free_page_vq) < 2) {
+			err = -ENOSPC;
+			goto out_del_vqs;
+		}
+		vb->balloon_wq = alloc_workqueue("balloon-wq",
+					WQ_FREEZABLE | WQ_CPU_INTENSIVE, 0);
+		if (!vb->balloon_wq) {
+			err = -ENOMEM;
+			goto out_del_vqs;
+		}
+		INIT_WORK(&vb->report_free_page_work, report_free_page_func);
+		vb->cmd_id_received = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID;
+		vb->cmd_id_active = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID;
+	}
+
 	vb->nb.notifier_call = virtballoon_oom_notify;
 	vb->nb.priority = VIRTBALLOON_OOM_NOTIFY_PRIORITY;
 	err = register_oom_notifier(&vb->nb);
 	if (err < 0)
-		goto out_del_vqs;
+		goto out_del_balloon_wq;
 
 #ifdef CONFIG_BALLOON_COMPACTION
 	balloon_mnt = kern_mount(&balloon_fs);
 	if (IS_ERR(balloon_mnt)) {
 		err = PTR_ERR(balloon_mnt);
 		unregister_oom_notifier(&vb->nb);
-		goto out_del_vqs;
+		goto out_del_balloon_wq;
 	}
 
 	vb->vb_dev_info.migratepage = virtballoon_migratepage;
@@ -601,7 +939,7 @@ static int virtballoon_probe(struct virtio_device *vdev)
 		kern_unmount(balloon_mnt);
 		unregister_oom_notifier(&vb->nb);
 		vb->vb_dev_info.inode = NULL;
-		goto out_del_vqs;
+		goto out_del_balloon_wq;
 	}
 	vb->vb_dev_info.inode->i_mapping->a_ops = &balloon_aops;
 #endif
@@ -612,6 +950,9 @@ static int virtballoon_probe(struct virtio_device *vdev)
 		virtballoon_changed(vdev);
 	return 0;
 
+out_del_balloon_wq:
+	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+		destroy_workqueue(vb->balloon_wq);
 out_del_vqs:
 	vdev->config->del_vqs(vdev);
 out_free_vb:
@@ -645,6 +986,11 @@ static void virtballoon_remove(struct virtio_device *vdev)
 	cancel_work_sync(&vb->update_balloon_size_work);
 	cancel_work_sync(&vb->update_balloon_stats_work);
 
+	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
+		cancel_work_sync(&vb->report_free_page_work);
+		destroy_workqueue(vb->balloon_wq);
+	}
+
 	remove_common(vb);
 #ifdef CONFIG_BALLOON_COMPACTION
 	if (vb->vb_dev_info.inode)
@@ -696,6 +1042,7 @@ static unsigned int features[] = {
 	VIRTIO_BALLOON_F_MUST_TELL_HOST,
 	VIRTIO_BALLOON_F_STATS_VQ,
 	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
+	VIRTIO_BALLOON_F_FREE_PAGE_HINT,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index 13b8cb5..b77919b 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -34,15 +34,26 @@
 #define VIRTIO_BALLOON_F_MUST_TELL_HOST	0 /* Tell before reclaiming pages */
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
+#define VIRTIO_BALLOON_F_FREE_PAGE_HINT	3 /* VQ to report free pages */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
 
+#define VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID	0
 struct virtio_balloon_config {
 	/* Number of pages host wants Guest to give up. */
 	__u32 num_pages;
 	/* Number of pages we've actually got in balloon. */
 	__u32 actual;
+	/* Free page report command id, read-only for the guest */
+	__u32 free_page_report_cmd_id;
+};
+
+struct virtio_balloon_free_page_hints_cmd {
+	/* The command id received from host */
+	__virtio32 id;
+	/* The free page block size in bytes */
+	__virtio32 size;
 };
 
 #define VIRTIO_BALLOON_S_SWAP_IN  0   /* Amount of memory swapped in */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v35 4/5] mm/page_poison: expose page_poisoning_enabled to kernel modules
  2018-07-10  9:31 [PATCH v35 0/5] Virtio-balloon: support free page reporting Wei Wang
                   ` (2 preceding siblings ...)
  2018-07-10  9:31 ` [PATCH v35 3/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT Wei Wang
@ 2018-07-10  9:31 ` Wei Wang
  2018-07-10  9:31 ` [PATCH v35 5/5] virtio-balloon: VIRTIO_BALLOON_F_PAGE_POISON Wei Wang
  4 siblings, 0 replies; 27+ messages in thread
From: Wei Wang @ 2018-07-10  9:31 UTC (permalink / raw)
  To: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mst,
	mhocko, akpm
  Cc: torvalds, pbonzini, wei.w.wang, liliang.opensource,
	yang.zhang.wz, quan.xu0, nilal, riel, peterx

In some usages, e.g. virtio-balloon, a kernel module needs to know if
page poisoning is in use. This patch exposes the page_poisoning_enabled
function to kernel modules.

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
---
 mm/page_poison.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/page_poison.c b/mm/page_poison.c
index aa2b3d3..830f604 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -17,6 +17,11 @@ static int __init early_page_poison_param(char *buf)
 }
 early_param("page_poison", early_page_poison_param);
 
+/**
+ * page_poisoning_enabled - check if page poisoning is enabled
+ *
+ * Return true if page poisoning is enabled, or false if not.
+ */
 bool page_poisoning_enabled(void)
 {
 	/*
@@ -29,6 +34,7 @@ bool page_poisoning_enabled(void)
 		(!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
 		debug_pagealloc_enabled()));
 }
+EXPORT_SYMBOL_GPL(page_poisoning_enabled);
 
 static void poison_page(struct page *page)
 {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v35 5/5] virtio-balloon: VIRTIO_BALLOON_F_PAGE_POISON
  2018-07-10  9:31 [PATCH v35 0/5] Virtio-balloon: support free page reporting Wei Wang
                   ` (3 preceding siblings ...)
  2018-07-10  9:31 ` [PATCH v35 4/5] mm/page_poison: expose page_poisoning_enabled to kernel modules Wei Wang
@ 2018-07-10  9:31 ` Wei Wang
  4 siblings, 0 replies; 27+ messages in thread
From: Wei Wang @ 2018-07-10  9:31 UTC (permalink / raw)
  To: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mst,
	mhocko, akpm
  Cc: torvalds, pbonzini, wei.w.wang, liliang.opensource,
	yang.zhang.wz, quan.xu0, nilal, riel, peterx

The VIRTIO_BALLOON_F_PAGE_POISON feature bit is used to indicate whether
the guest is using page poisoning. The guest writes to the poison_val
config field to tell the host which page poisoning value is in use.

Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 drivers/virtio/virtio_balloon.c     | 10 ++++++++++
 include/uapi/linux/virtio_balloon.h |  3 +++
 2 files changed, 13 insertions(+)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 8754154..dd61660 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -869,6 +869,7 @@ static struct file_system_type balloon_fs = {
 static int virtballoon_probe(struct virtio_device *vdev)
 {
 	struct virtio_balloon *vb;
+	__u32 poison_val;
 	int err;
 
 	if (!vdev->config->get) {
@@ -916,6 +917,11 @@ static int virtballoon_probe(struct virtio_device *vdev)
 		INIT_WORK(&vb->report_free_page_work, report_free_page_func);
 		vb->cmd_id_received = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID;
 		vb->cmd_id_active = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID;
+		if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON)) {
+			memset(&poison_val, PAGE_POISON, sizeof(poison_val));
+			virtio_cwrite(vb->vdev, struct virtio_balloon_config,
+				      poison_val, &poison_val);
+		}
 	}
 
 	vb->nb.notifier_call = virtballoon_oom_notify;
@@ -1034,6 +1040,9 @@ static int virtballoon_restore(struct virtio_device *vdev)
 
 static int virtballoon_validate(struct virtio_device *vdev)
 {
+	if (!page_poisoning_enabled())
+		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
+
 	__virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM);
 	return 0;
 }
@@ -1043,6 +1052,7 @@ static unsigned int features[] = {
 	VIRTIO_BALLOON_F_STATS_VQ,
 	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
 	VIRTIO_BALLOON_F_FREE_PAGE_HINT,
+	VIRTIO_BALLOON_F_PAGE_POISON,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index b77919b..97415ba 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -35,6 +35,7 @@
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
 #define VIRTIO_BALLOON_F_FREE_PAGE_HINT	3 /* VQ to report free pages */
+#define VIRTIO_BALLOON_F_PAGE_POISON	4 /* Guest is using page poisoning */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
@@ -47,6 +48,8 @@ struct virtio_balloon_config {
 	__u32 actual;
 	/* Free page report command id, read-only for the guest */
 	__u32 free_page_report_cmd_id;
+	/* Stores PAGE_POISON if page poisoning is in use */
+	__u32 poison_val;
 };
 
 struct virtio_balloon_free_page_hints_cmd {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-10  9:31 ` [PATCH v35 1/5] mm: support to get hints of free page blocks Wei Wang
@ 2018-07-10 10:16   ` Wang, Wei W
  2018-07-10 17:33   ` Linus Torvalds
  1 sibling, 0 replies; 27+ messages in thread
From: Wang, Wei W @ 2018-07-10 10:16 UTC (permalink / raw)
  To: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mst,
	mhocko, akpm
  Cc: torvalds, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel, peterx

On Tuesday, July 10, 2018 5:31 PM, Wang, Wei W wrote:
> Subject: [PATCH v35 1/5] mm: support to get hints of free page blocks
> 
> This patch adds support to get free page blocks from a free page list.
> The physical addresses of the blocks are stored to a list of buffers passed
> from the caller. The obtained free page blocks are hints about free pages,
> because there is no guarantee that they are still on the free page list after the
> function returns.
> 
> One use example of this patch is to accelerate live migration by skipping the
> transfer of free pages reported from the guest. A popular method used by
> the hypervisor to track which part of memory is written during live migration
> is to write-protect all the guest memory. So, those pages that are hinted as
> free pages but are written after this function returns will be captured by the
> hypervisor, and they will be added to the next round of memory transfer.
> 
> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> Signed-off-by: Liang Li <liang.z.li@intel.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> ---
>  include/linux/mm.h |  3 ++
>  mm/page_alloc.c    | 98 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 101 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a0fbb9f..5ce654f 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2007,6 +2007,9 @@ extern void free_area_init(unsigned long * zones_size);
>  extern void free_area_init_node(int nid, unsigned long * zones_size,
>  		unsigned long zone_start_pfn, unsigned long *zholes_size);
>  extern void free_initmem(void);
> +unsigned long max_free_page_blocks(int order);
> +int get_from_free_page_list(int order, struct list_head *pages,
> +			    unsigned int size, unsigned long *loaded_num);
> 
>  /*
>   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1521100..b67839b 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5043,6 +5043,104 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>  	show_swap_cache_info();
>  }
> 
> +/**
> + * max_free_page_blocks - estimate the max number of free page blocks
> + * @order: the order of the free page blocks to estimate
> + *
> + * This function gives a rough estimation of the possible maximum number
> + * of free page blocks a free list may have. The estimation works on an
> + * assumption that all the system pages are on that list.
> + *
> + * Context: Any context.
> + *
> + * Return: The largest number of free page blocks that the free list can have.
> + */
> +unsigned long max_free_page_blocks(int order)
> +{
> +	return totalram_pages / (1 << order);
> +}
> +EXPORT_SYMBOL_GPL(max_free_page_blocks);
> +
> +/**
> + * get_from_free_page_list - get hints of free pages from a free page list
> + * @order: the order of the free page list to check
> + * @pages: the list of page blocks used as buffers to load the addresses
> + * @size: the size of each buffer in bytes
> + * @loaded_num: the number of addresses loaded to the buffers
> + *
> + * This function offers hints about free pages. The addresses of free
> + * page blocks are stored to the list of buffers passed from the caller.
> + * There is no guarantee that the obtained free pages are still on the
> + * free page list after the function returns. pfn_to_page on the obtained
> + * free pages is strongly discouraged and if there is an absolute need
> + * for that, make sure to contact MM people to discuss potential problems.
> + *
> + * The addresses are currently stored to a buffer in little endian. This
> + * avoids the overhead of converting endianness by the caller who needs
> + * data in the little endian format. Big endian support can be added on
> + * demand in the future.
> + *
> + * Context: Process context.
> + *
> + * Return: 0 if all the free page block addresses are stored to the buffers;
> + *         -ENOSPC if the buffers are not sufficient to store all the
> + *         addresses; or -EINVAL if an unexpected argument is received (e.g.
> + *         incorrect @order, empty buffer list).
> + */
> +int get_from_free_page_list(int order, struct list_head *pages,
> +			    unsigned int size, unsigned long *loaded_num)
> +{


Hi Linus,

We took your original suggestion: pass in pre-allocated buffers to load the
addresses (now we use a list of pre-allocated page blocks as buffers). Hope
that suggestion is still acceptable; the advantage of this method was
explained here: https://lkml.org/lkml/2018/6/28/184.
Look forward to getting your feedback. Thanks.

Best,
Wei 

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-10  9:31 ` [PATCH v35 1/5] mm: support to get hints of free page blocks Wei Wang
  2018-07-10 10:16   ` Wang, Wei W
@ 2018-07-10 17:33   ` Linus Torvalds
  2018-07-11  1:28     ` Wei Wang
  2018-07-11  4:00     ` Michael S. Tsirkin
  1 sibling, 2 replies; 27+ messages in thread
From: Linus Torvalds @ 2018-07-10 17:33 UTC (permalink / raw)
  To: wei.w.wang
  Cc: virtio-dev, Linux Kernel Mailing List, virtualization, KVM list,
	linux-mm, Michael S. Tsirkin, Michal Hocko, Andrew Morton,
	Paolo Bonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, Rik van Riel, peterx

NAK.

On Tue, Jul 10, 2018 at 2:56 AM Wei Wang <wei.w.wang@intel.com> wrote:
>
> +
> +       buf_page = list_first_entry_or_null(pages, struct page, lru);
> +       if (!buf_page)
> +               return -EINVAL;
> +       buf = (__le64 *)page_address(buf_page);

Stop this garbage.

Why the hell would you pass in some crazy "list of pages" that uses
that lru list?

That's just insane shit.

Just pass in an array to fill in. No idiotic games like this with
odd list entries (what's the locking?) and crazy casting to

So if you want an array of page addresses, pass that in as such. If
you want to do it in a page, do it with

    u64 *array = page_address(page);
    int nr = PAGE_SIZE / sizeof(u64);

and now you pass that array in to the thing. None of this completely
insane crazy crap interfaces.

Plus, I still haven't heard an explanation for why you want so many
pages in the first place, and why you want anything but MAX_ORDER-1.

So no. This kind of unnecessarily complex code with completely insane
calling interfaces does not make it into the VM layer.

Maybe that crazy "let's pass a chain of pages that uses the lru list"
makes sense to the virtio-balloon code. But you need to understand
that it makes ZERO conceptual sense to anybody else. And the core VM
code is about a million times more important than the balloon code in
this case, so you had better make the interface make sense to *it*.

               Linus

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-10 17:33   ` Linus Torvalds
@ 2018-07-11  1:28     ` Wei Wang
  2018-07-11  1:44       ` Linus Torvalds
  2018-07-11  4:00     ` Michael S. Tsirkin
  1 sibling, 1 reply; 27+ messages in thread
From: Wei Wang @ 2018-07-11  1:28 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: virtio-dev, Linux Kernel Mailing List, virtualization, KVM list,
	linux-mm, Michael S. Tsirkin, Michal Hocko, Andrew Morton,
	Paolo Bonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, Rik van Riel, peterx

On 07/11/2018 01:33 AM, Linus Torvalds wrote:
> NAK.
>
> On Tue, Jul 10, 2018 at 2:56 AM Wei Wang <wei.w.wang@intel.com> wrote:
>> +
>> +       buf_page = list_first_entry_or_null(pages, struct page, lru);
>> +       if (!buf_page)
>> +               return -EINVAL;
>> +       buf = (__le64 *)page_address(buf_page);
> Stop this garbage.
>
> Why the hell would you pass in some crazy "list of pages" that uses
> that lru list?
>
> That's just insane shit.
>
> Just pass in an array to fill in. No idiotic games like this with
> odd list entries (what's the locking?) and crazy casting to
>
> So if you want an array of page addresses, pass that in as such. If
> you want to do it in a page, do it with
>
>      u64 *array = page_address(page);
>      int nr = PAGE_SIZE / sizeof(u64);
>
> and now you pass that array in to the thing. None of this completely
> insane crazy crap interfaces.
>
> Plus, I still haven't heard an explanation for why you want so many
> pages in the first place, and why you want anything but MAX_ORDER-1.

Sorry for missing that explanation.
We only get addresses of the "MAX_ORDER-1" blocks into the array. The 
max size of the array that could be allocated by kmalloc is 
KMALLOC_MAX_SIZE (i.e. 4MB on x86). With that max array, we could load 
"4MB / sizeof(u64)" addresses of "MAX_ORDER-1" blocks, that is, 2TB free 
memory at most. We thought about removing that 2TB limitation by passing 
in multiple such max arrays (a list of them).

But 2TB has been enough for our use cases so far, and agree it would be 
better to have a simpler API in the first place. So I plan to go back to 
the previous version of just passing in one simple array 
(https://lkml.org/lkml/2018/6/15/21) if no objections.

Best,
Wei

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-11  1:28     ` Wei Wang
@ 2018-07-11  1:44       ` Linus Torvalds
  2018-07-11  9:21         ` Michal Hocko
  0 siblings, 1 reply; 27+ messages in thread
From: Linus Torvalds @ 2018-07-11  1:44 UTC (permalink / raw)
  To: wei.w.wang
  Cc: virtio-dev, Linux Kernel Mailing List, virtualization, KVM list,
	linux-mm, Michael S. Tsirkin, Michal Hocko, Andrew Morton,
	Paolo Bonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, Rik van Riel, peterx

On Tue, Jul 10, 2018 at 6:24 PM Wei Wang <wei.w.wang@intel.com> wrote:
>
> We only get addresses of the "MAX_ORDER-1" blocks into the array. The
> max size of the array that could be allocated by kmalloc is
> KMALLOC_MAX_SIZE (i.e. 4MB on x86). With that max array, we could load
> "4MB / sizeof(u64)" addresses of "MAX_ORDER-1" blocks, that is, 2TB free
> memory at most. We thought about removing that 2TB limitation by passing
> in multiple such max arrays (a list of them).

No.

Stop this already.

You're doing everything wrong.

If the array has to describe *all* memory you will ever free, then you
have already lost.

Just do it in chunks.

I don't want the VM code to even fill in that big of an array anyway -
this all happens under the zone lock, and you're walking a list that
is bad for caching anyway.

So plan on an interface that allows _incremental_ freeing, because any
plan that starts with "I worry that maybe two TERABYTES of memory
isn't big enough" is so broken that it's laughable.

That was what I tried to encourage with actually removing the pages
from the page list. That would be an _incremental_ interface. You can
remove MAX_ORDER-1 pages one by one (or a hundred at a time), and mark
them free for ballooning that way. And if you still feel you have tons
of free memory, just continue removing more pages from the free list.

Notice? Incremental. Not "I want to have a crazy array that is enough
to hold 2TB at one time".

So here's the rule:

 - make it a simple array interface

 - make the array *small*. Not megabytes. Kilobytes. Because if you're
filling in megabytes worth of free pointers while holding the zone
lock, you're doing something wrong.

 - design the interface so that you do not *need* to have this crazy
"all or nothing" approach.

See what I'm trying to push for. Think "low latency". Think "small
arrays". Think "simple and straightforward interfaces".

At no point should you ever worry about "2TB". Never.

           Linus

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-10 17:33   ` Linus Torvalds
  2018-07-11  1:28     ` Wei Wang
@ 2018-07-11  4:00     ` Michael S. Tsirkin
  2018-07-11  4:04       ` Michael S. Tsirkin
  1 sibling, 1 reply; 27+ messages in thread
From: Michael S. Tsirkin @ 2018-07-11  4:00 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: wei.w.wang, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michal Hocko, Andrew Morton,
	Paolo Bonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, Rik van Riel, peterx

On Tue, Jul 10, 2018 at 10:33:08AM -0700, Linus Torvalds wrote:
> NAK.
> 
> On Tue, Jul 10, 2018 at 2:56 AM Wei Wang <wei.w.wang@intel.com> wrote:
> >
> > +
> > +       buf_page = list_first_entry_or_null(pages, struct page, lru);
> > +       if (!buf_page)
> > +               return -EINVAL;
> > +       buf = (__le64 *)page_address(buf_page);
> 
> Stop this garbage.
> 
> Why the hell would you pass in some crazy "list of pages" that uses
> that lru list?
> 
> That's just insane shit.
> 
> Just pass in an array to fill in.
> No idiotic games like this with
> odd list entries (what's the locking?) and crazy casting to
> 
> So if you want an array of page addresses, pass that in as such. If
> you want to do it in a page, do it with
> 
>     u64 *array = page_address(page);
>     int nr = PAGE_SIZE / sizeof(u64);
> 
> and now you pass that array in to the thing. None of this completely
> insane crazy crap interfaces.

A question was raised: what do we do if there are so many free
MAX_ORDER pages that their addresses don't fit in a single MAX_ORDER
page? Yes, only a huge guest would trigger that, but it seems
theoretically possible.

I guess an array of arrays then?

An alternative suggestion was not to pass an array at all, but
instead to peel enough pages off the list to contain
all the free entries. Maybe that's too hacky.


> 
> Plus, I still haven't heard an explanation for why you want so many
> pages in the first place, and why you want anything but MAX_ORDER-1.
> 
> So no. This kind of unnecessarily complex code with completely insane
> calling interfaces does not make it into the VM layer.
> 
> Maybe that crazy "let's pass a chain of pages that uses the lru list"
> makes sense to the virtio-balloon code. But you need to understand
> that it makes ZERO conceptual sense to anybody else. And the core VM
> code is about a million times more important than the balloon code in
> this case, so you had better make the interface make sense to *it*.
> 
>                Linus

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-11  4:00     ` Michael S. Tsirkin
@ 2018-07-11  4:04       ` Michael S. Tsirkin
  0 siblings, 0 replies; 27+ messages in thread
From: Michael S. Tsirkin @ 2018-07-11  4:04 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: wei.w.wang, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michal Hocko, Andrew Morton,
	Paolo Bonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, Rik van Riel, peterx

On Wed, Jul 11, 2018 at 07:00:37AM +0300, Michael S. Tsirkin wrote:
> On Tue, Jul 10, 2018 at 10:33:08AM -0700, Linus Torvalds wrote:
> > NAK.
> > 
> > On Tue, Jul 10, 2018 at 2:56 AM Wei Wang <wei.w.wang@intel.com> wrote:
> > >
> > > +
> > > +       buf_page = list_first_entry_or_null(pages, struct page, lru);
> > > +       if (!buf_page)
> > > +               return -EINVAL;
> > > +       buf = (__le64 *)page_address(buf_page);
> > 
> > Stop this garbage.
> > 
> > Why the hell would you pass in some crazy "list of pages" that uses
> > that lru list?
> > 
> > That's just insane shit.
> > 
> > Just pass in an array to fill in.
> > No idiotic games like this with
> > odd list entries (what's the locking?) and crazy casting to
> > 
> > So if you want an array of page addresses, pass that in as such. If
> > you want to do it in a page, do it with
> > 
> >     u64 *array = page_address(page);
> >     int nr = PAGE_SIZE / sizeof(u64);
> > 
> > and now you pass that array in to the thing. None of this completely
> > insane crazy crap interfaces.
> 
> Question was raised what to do if there are so many free
> MAX_ORDER pages that their addresses don't fit in a single MAX_ORDER
> page.

Oh you answered already, I spoke too soon. Nevermind, pls ignore me.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-11  1:44       ` Linus Torvalds
@ 2018-07-11  9:21         ` Michal Hocko
  2018-07-11 10:52           ` Wei Wang
  2018-07-11 16:23           ` Linus Torvalds
  0 siblings, 2 replies; 27+ messages in thread
From: Michal Hocko @ 2018-07-11  9:21 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: wei.w.wang, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michael S. Tsirkin,
	Andrew Morton, Paolo Bonzini, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, Rik van Riel, peterx

On Tue 10-07-18 18:44:34, Linus Torvalds wrote:
[...]
> That was what I tried to encourage with actually removing the pages
> from the page list. That would be an _incremental_ interface. You can
> remove MAX_ORDER-1 pages one by one (or a hundred at a time), and mark
> them free for ballooning that way. And if you still feel you have tons
> of free memory, just continue removing more pages from the free list.

We already have an interface for that. alloc_pages(GFP_NOWAIT, MAX_ORDER -1).
So why do we need any array based interface?
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-11  9:21         ` Michal Hocko
@ 2018-07-11 10:52           ` Wei Wang
  2018-07-11 11:09             ` Michal Hocko
  2018-07-11 16:23           ` Linus Torvalds
  1 sibling, 1 reply; 27+ messages in thread
From: Wei Wang @ 2018-07-11 10:52 UTC (permalink / raw)
  To: Michal Hocko, Linus Torvalds
  Cc: virtio-dev, Linux Kernel Mailing List, virtualization, KVM list,
	linux-mm, Michael S. Tsirkin, Andrew Morton, Paolo Bonzini,
	liliang.opensource, yang.zhang.wz, quan.xu0, nilal, Rik van Riel,
	peterx

On 07/11/2018 05:21 PM, Michal Hocko wrote:
> On Tue 10-07-18 18:44:34, Linus Torvalds wrote:
> [...]
>> That was what I tried to encourage with actually removing the pages
>> from the page list. That would be an _incremental_ interface. You can
>> remove MAX_ORDER-1 pages one by one (or a hundred at a time), and mark
>> them free for ballooning that way. And if you still feel you have tons
>> of free memory, just continue removing more pages from the free list.
> We already have an interface for that. alloc_pages(GFP_NOWAIT, MAX_ORDER -1).
> So why do we need any array based interface?

Yes, I'm trying to get free pages directly via alloc_pages, so there 
will be no new mm APIs.
I plan to let free page allocation stop when the remaining system free 
memory becomes close to min_free_kbytes (to prevent swapping).


Best,
Wei



^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-11 10:52           ` Wei Wang
@ 2018-07-11 11:09             ` Michal Hocko
  2018-07-11 13:55               ` Wang, Wei W
  2018-07-11 19:36               ` Michael S. Tsirkin
  0 siblings, 2 replies; 27+ messages in thread
From: Michal Hocko @ 2018-07-11 11:09 UTC (permalink / raw)
  To: Wei Wang
  Cc: Linus Torvalds, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michael S. Tsirkin,
	Andrew Morton, Paolo Bonzini, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, Rik van Riel, peterx

On Wed 11-07-18 18:52:45, Wei Wang wrote:
> On 07/11/2018 05:21 PM, Michal Hocko wrote:
> > On Tue 10-07-18 18:44:34, Linus Torvalds wrote:
> > [...]
> > > That was what I tried to encourage with actually removing the pages
> > > from the page list. That would be an _incremental_ interface. You can
> > > remove MAX_ORDER-1 pages one by one (or a hundred at a time), and mark
> > > them free for ballooning that way. And if you still feel you have tons
> > > of free memory, just continue removing more pages from the free list.
> > We already have an interface for that. alloc_pages(GFP_NOWAIT, MAX_ORDER -1).
> > So why do we need any array based interface?
> 
> Yes, I'm trying to get free pages directly via alloc_pages, so there will be
> no new mm APIs.

OK. The above was just a rough example. In fact you would need a more
complex gfp mask. I assume you only want to balloon only memory directly
usable by the kernel so it will be
	(GFP_KERNEL | __GFP_NOWARN) & ~__GFP_RECLAIM

> I plan to let free page allocation stop when the remaining system free
> memory becomes close to min_free_kbytes (prevent swapping).

~__GFP_RECLAIM will make sure you can allocate as long as there is any
memory without reclaim. It will not even poke the kswapd to do the
background work. So I do not think you would need much more than that.

But let me note that I am not really convinced how this (or previous)
approach will really work in most workloads. We tend to cache heavily so
there is rarely any memory free.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-11 11:09             ` Michal Hocko
@ 2018-07-11 13:55               ` Wang, Wei W
  2018-07-11 14:38                 ` Michal Hocko
  2018-07-11 19:36               ` Michael S. Tsirkin
  1 sibling, 1 reply; 27+ messages in thread
From: Wang, Wei W @ 2018-07-11 13:55 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Linus Torvalds, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michael S. Tsirkin,
	Andrew Morton, Paolo Bonzini, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, Rik van Riel, peterx

On Wednesday, July 11, 2018 7:10 PM, Michal Hocko wrote:
> On Wed 11-07-18 18:52:45, Wei Wang wrote:
> > On 07/11/2018 05:21 PM, Michal Hocko wrote:
> > > On Tue 10-07-18 18:44:34, Linus Torvalds wrote:
> > > [...]
> > > > That was what I tried to encourage with actually removing the
> > > > pages from the page list. That would be an _incremental_
> > > > interface. You can remove MAX_ORDER-1 pages one by one (or a
> > > > hundred at a time), and mark them free for ballooning that way.
> > > > And if you still feel you have tons of free memory, just continue
> removing more pages from the free list.
> > > We already have an interface for that. alloc_pages(GFP_NOWAIT,
> MAX_ORDER -1).
> > > So why do we need any array based interface?
> >
> > Yes, I'm trying to get free pages directly via alloc_pages, so there
> > will be no new mm APIs.
> 
> OK. The above was just a rough example. In fact you would need a more
> complex gfp mask. I assume you only want to balloon memory directly
> usable by the kernel, so it will be
> 	(GFP_KERNEL | __GFP_NOWARN) & ~__GFP_RECLAIM

Sounds good to me, thanks.

> 
> > I plan to let free page allocation stop when the remaining system free
> > memory becomes close to min_free_kbytes (prevent swapping).
> 
> ~__GFP_RECLAIM will make sure you can allocate as long as there is any
> memory available without reclaim. It will not even poke kswapd to do the
> background work. So I do not think you would need much more than that.

"Close to min_free_kbytes" - I meant that when doing the allocations, we
intentionally reserve some small amount of memory, e.g. 2 free page blocks
of "MAX_ORDER - 1". So when other applications happen to do some
allocation, they may easily get some from the reserved memory left on the
free list. Without that reserved memory, other allocations may cause the
system free memory to drop below WMARK[MIN], and kswapd would start to do
swapping. This is actually just a small optimization to reduce the
probability of causing swapping (nice to have, but not mandatory, because
we will allocate free page blocks one by one).

> But let me note that I am not really convinced how this (or previous)
> approach will really work in most workloads. We tend to cache heavily so
> there is rarely any memory free.

With less free memory the improvement becomes smaller, but it is still
better than no optimization. For example, the Linux build workload causes
4~5 GB (out of 8 GB) of memory to be used as page cache at the final
stage, and there is still a ~44% live migration time reduction.

Since we have many cloud customers interested in this feature, I think we can let them test the usefulness.

Best,
Wei

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-11 13:55               ` Wang, Wei W
@ 2018-07-11 14:38                 ` Michal Hocko
  0 siblings, 0 replies; 27+ messages in thread
From: Michal Hocko @ 2018-07-11 14:38 UTC (permalink / raw)
  To: Wang, Wei W
  Cc: Linus Torvalds, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michael S. Tsirkin,
	Andrew Morton, Paolo Bonzini, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, Rik van Riel, peterx

On Wed 11-07-18 13:55:15, Wang, Wei W wrote:
> On Wednesday, July 11, 2018 7:10 PM, Michal Hocko wrote:
> > On Wed 11-07-18 18:52:45, Wei Wang wrote:
> > > On 07/11/2018 05:21 PM, Michal Hocko wrote:
> > > > On Tue 10-07-18 18:44:34, Linus Torvalds wrote:
> > > > [...]
> > > > > That was what I tried to encourage with actually removing the
> > > > > pages from the page list. That would be an _incremental_
> > > > > interface. You can remove MAX_ORDER-1 pages one by one (or a
> > > > > hundred at a time), and mark them free for ballooning that way.
> > > > > And if you still feel you have tons of free memory, just continue
> > removing more pages from the free list.
> > > > We already have an interface for that. alloc_pages(GFP_NOWAIT,
> > MAX_ORDER -1).
> > > > So why do we need any array based interface?
> > >
> > > Yes, I'm trying to get free pages directly via alloc_pages, so there
> > > will be no new mm APIs.
> > 
> > OK. The above was just a rough example. In fact you would need a more
> > complex gfp mask. I assume you only want to balloon memory directly
> > usable by the kernel, so it will be
> > 	(GFP_KERNEL | __GFP_NOWARN) & ~__GFP_RECLAIM
> 
> Sounds good to me, thanks.
> 
> > 
> > > I plan to let free page allocation stop when the remaining system free
> > > memory becomes close to min_free_kbytes (prevent swapping).
> > 
> > ~__GFP_RECLAIM will make sure you can allocate as long as there is any
> > memory available without reclaim. It will not even poke kswapd to do the
> > background work. So I do not think you would need much more than that.
> 
> "Close to min_free_kbytes" - I meant that when doing the allocations, we
> intentionally reserve some small amount of memory, e.g. 2 free page
> blocks of "MAX_ORDER - 1". So when other applications happen to do
> some allocation, they may easily get some from the reserved memory
> left on the free list. Without that reserved memory, other allocations
> may cause the system free memory to drop below WMARK[MIN], and kswapd
> would start to do swapping. This is actually just a small optimization
> to reduce the probability of causing swapping (nice to have, but not
> mandatory, because we will allocate free page blocks one by one).

I really have a hard time following you here. Nothing outside of the core
MM proper should play with watermarks.
 
> > But let me note that I am not really convinced how this (or previous)
> > approach will really work in most workloads. We tend to cache heavily so
> > there is rarely any memory free.
> 
> With less free memory the improvement becomes smaller, but it is still
> better than no optimization. For example, the Linux build workload
> causes 4~5 GB (out of 8 GB) of memory to be used as page cache at the
> final stage, and there is still a ~44% live migration time reduction.

But most systems will stay somewhere around the high watermark if there
is any page cache activity. Especially after a longer uptime.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-11  9:21         ` Michal Hocko
  2018-07-11 10:52           ` Wei Wang
@ 2018-07-11 16:23           ` Linus Torvalds
  2018-07-12  2:21             ` Wei Wang
  2018-07-12 13:12             ` Michal Hocko
  1 sibling, 2 replies; 27+ messages in thread
From: Linus Torvalds @ 2018-07-11 16:23 UTC (permalink / raw)
  To: Michal Hocko
  Cc: wei.w.wang, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michael S. Tsirkin,
	Andrew Morton, Paolo Bonzini, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, Rik van Riel, peterx

On Wed, Jul 11, 2018 at 2:21 AM Michal Hocko <mhocko@kernel.org> wrote:
>
> We already have an interface for that. alloc_pages(GFP_NOWAIT, MAX_ORDER -1).
> So why do we need any array based interface?

That was actually my original argument in the original thread - that
the only new interface people might want is one that just tells how
many of those MAX_ORDER-1 pages there are.

See the thread in v33 with the subject

  "[PATCH v33 1/4] mm: add a function to get free page blocks"

and look for me suggesting just using

    #define GFP_MINFLAGS (__GFP_NORETRY | __GFP_NOWARN |
__GFP_THISNODE | __GFP_NOMEMALLOC)

    struct page *page =  alloc_pages(GFP_MINFLAGS, MAX_ORDER-1);

for this all.

But I could also see an argument for "allocate N pages of size
MAX_ORDER-1", with some small N, simply because I can see the
advantage of not taking and releasing the locking and looking up the
zone individually N times.

If you want to get gigabytes of memory (or terabytes), doing it in
bigger chunks than one single maximum-sized page sounds fairly
reasonable.

I just don't think that "thousands of pages" is reasonable. But "tens
of max-sized pages" sounds fair enough to me, and it would certainly
not be a pain for the VM.

So I'm open to new interfaces. I just want those new interfaces to
make sense, and be low latency and simple for the VM to do. I'm
objecting to the incredibly baroque and heavy-weight one that can
return near-infinite amounts of memory.

The real advantage of just the existing "alloc_pages()" model is
that I think the ballooning people can use that to *test* things out.
If it turns out that taking and releasing the VM locks is a big cost,
we can see if a batch interface that allows you to get tens of pages
at the same time is worth it.

So yes, I'd suggest starting with just the existing alloc_pages. Maybe
it's not enough, but it should be good enough for testing.

                    Linus

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-11 11:09             ` Michal Hocko
  2018-07-11 13:55               ` Wang, Wei W
@ 2018-07-11 19:36               ` Michael S. Tsirkin
  1 sibling, 0 replies; 27+ messages in thread
From: Michael S. Tsirkin @ 2018-07-11 19:36 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Wei Wang, Linus Torvalds, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Andrew Morton, Paolo Bonzini,
	liliang.opensource, yang.zhang.wz, quan.xu0, nilal, Rik van Riel,
	peterx

On Wed, Jul 11, 2018 at 01:09:49PM +0200, Michal Hocko wrote:
> But let me note that I am not really convinced how this (or previous)
> approach will really work in most workloads. We tend to cache heavily so
> there is rarely any memory free.

It might be that it's worth flushing the cache when VM is
migrating. Or maybe we should implement virtio-tmem or add
transcendent memory support to the balloon.

-- 
MST

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-11 16:23           ` Linus Torvalds
@ 2018-07-12  2:21             ` Wei Wang
  2018-07-12  2:30               ` Linus Torvalds
  2018-07-12 13:12             ` Michal Hocko
  1 sibling, 1 reply; 27+ messages in thread
From: Wei Wang @ 2018-07-12  2:21 UTC (permalink / raw)
  To: Linus Torvalds, Michal Hocko
  Cc: virtio-dev, Linux Kernel Mailing List, virtualization, KVM list,
	linux-mm, Michael S. Tsirkin, Andrew Morton, Paolo Bonzini,
	liliang.opensource, yang.zhang.wz, quan.xu0, nilal, Rik van Riel,
	peterx

On 07/12/2018 12:23 AM, Linus Torvalds wrote:
> On Wed, Jul 11, 2018 at 2:21 AM Michal Hocko <mhocko@kernel.org> wrote:
>> We already have an interface for that. alloc_pages(GFP_NOWAIT, MAX_ORDER -1).
>> So why do we need any array based interface?
> That was actually my original argument in the original thread - that
> the only new interface people might want is one that just tells how
> many of those MAX_ORDER-1 pages there are.
>
> See the thread in v33 with the subject
>
>    "[PATCH v33 1/4] mm: add a function to get free page blocks"
>
> and look for me suggesting just using
>
>      #define GFP_MINFLAGS (__GFP_NORETRY | __GFP_NOWARN |
> __GFP_THISNODE | __GFP_NOMEMALLOC)

Would it be better to remove __GFP_THISNODE? We actually want to get all 
the guest free pages (from all the nodes).

Best,
Wei

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-12  2:21             ` Wei Wang
@ 2018-07-12  2:30               ` Linus Torvalds
  2018-07-12  2:52                 ` Wei Wang
  0 siblings, 1 reply; 27+ messages in thread
From: Linus Torvalds @ 2018-07-12  2:30 UTC (permalink / raw)
  To: wei.w.wang
  Cc: Michal Hocko, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michael S. Tsirkin,
	Andrew Morton, Paolo Bonzini, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, Rik van Riel, peterx

On Wed, Jul 11, 2018 at 7:17 PM Wei Wang <wei.w.wang@intel.com> wrote:
>
> Would it be better to remove __GFP_THISNODE? We actually want to get all
> the guest free pages (from all the nodes).

Maybe. Or maybe it would be better to have the memory balloon logic be
per-node? Maybe you don't want to remove too much memory from one
node? I think it's one of those "play with it" things.

I don't think that's the big issue, actually. I think the real issue
is how to react quickly and gracefully to "oops, I'm trying to give
memory away, but now the guest wants it back" while you're in the
middle of trying to create that 2TB list of pages.

IOW, I think the real work is in whatever tuning is needed for the right
behavior. But I'm just guessing.

             Linus

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-12  2:30               ` Linus Torvalds
@ 2018-07-12  2:52                 ` Wei Wang
  2018-07-12  8:13                   ` Michal Hocko
  0 siblings, 1 reply; 27+ messages in thread
From: Wei Wang @ 2018-07-12  2:52 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Michal Hocko, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michael S. Tsirkin,
	Andrew Morton, Paolo Bonzini, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, Rik van Riel, peterx

On 07/12/2018 10:30 AM, Linus Torvalds wrote:
> On Wed, Jul 11, 2018 at 7:17 PM Wei Wang <wei.w.wang@intel.com> wrote:
>> Would it be better to remove __GFP_THISNODE? We actually want to get all
>> the guest free pages (from all the nodes).
> Maybe. Or maybe it would be better to have the memory balloon logic be
> per-node? Maybe you don't want to remove too much memory from one
> node? I think it's one of those "play with it" things.
>
> I don't think that's the big issue, actually. I think the real issue
> is how to react quickly and gracefully to "oops, I'm trying to give
> memory away, but now the guest wants it back" while you're in the
> middle of trying to create that 2TB list of pages.

OK. virtio-balloon has already registered an oom notifier 
(virtballoon_oom_notify). I plan to add some control there. If oom happens,
- stop the page allocation;
- immediately give back the allocated pages to mm.

Best,
Wei

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-12  2:52                 ` Wei Wang
@ 2018-07-12  8:13                   ` Michal Hocko
  2018-07-12 11:34                     ` Wei Wang
  0 siblings, 1 reply; 27+ messages in thread
From: Michal Hocko @ 2018-07-12  8:13 UTC (permalink / raw)
  To: Wei Wang
  Cc: Linus Torvalds, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michael S. Tsirkin,
	Andrew Morton, Paolo Bonzini, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, Rik van Riel, peterx

On Thu 12-07-18 10:52:08, Wei Wang wrote:
> On 07/12/2018 10:30 AM, Linus Torvalds wrote:
> > On Wed, Jul 11, 2018 at 7:17 PM Wei Wang <wei.w.wang@intel.com> wrote:
> > > Would it be better to remove __GFP_THISNODE? We actually want to get all
> > > the guest free pages (from all the nodes).
> > Maybe. Or maybe it would be better to have the memory balloon logic be
> > per-node? Maybe you don't want to remove too much memory from one
> > node? I think it's one of those "play with it" things.
> > 
> > I don't think that's the big issue, actually. I think the real issue
> > is how to react quickly and gracefully to "oops, I'm trying to give
> > memory away, but now the guest wants it back" while you're in the
> > middle of trying to create that 2TB list of pages.
> 
> OK. virtio-balloon has already registered an oom notifier
> (virtballoon_oom_notify). I plan to add some control there. If oom happens,
> - stop the page allocation;
> - immediately give back the allocated pages to mm.

Please don't. The oom notifier is an absolutely hideous interface which
should go away sooner or later (I would much prefer the former), so do
not build new logic on top of it. I would appreciate it much more if you
actually removed the notifier.

You can give memory back from the standard shrinker interface. If we are
reaching low reclaim priorities then we are struggling to reclaim memory
and then you can start returning pages back.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-12  8:13                   ` Michal Hocko
@ 2018-07-12 11:34                     ` Wei Wang
  2018-07-12 11:49                       ` Michal Hocko
  0 siblings, 1 reply; 27+ messages in thread
From: Wei Wang @ 2018-07-12 11:34 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Linus Torvalds, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michael S. Tsirkin,
	Andrew Morton, Paolo Bonzini, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, Rik van Riel, peterx

On 07/12/2018 04:13 PM, Michal Hocko wrote:
> On Thu 12-07-18 10:52:08, Wei Wang wrote:
>> On 07/12/2018 10:30 AM, Linus Torvalds wrote:
>>> On Wed, Jul 11, 2018 at 7:17 PM Wei Wang <wei.w.wang@intel.com> wrote:
>>>> Would it be better to remove __GFP_THISNODE? We actually want to get all
>>>> the guest free pages (from all the nodes).
>>> Maybe. Or maybe it would be better to have the memory balloon logic be
>>> per-node? Maybe you don't want to remove too much memory from one
>>> node? I think it's one of those "play with it" things.
>>>
>>> I don't think that's the big issue, actually. I think the real issue
>>> is how to react quickly and gracefully to "oops, I'm trying to give
>>> memory away, but now the guest wants it back" while you're in the
>>> middle of trying to create that 2TB list of pages.
>> OK. virtio-balloon has already registered an oom notifier
>> (virtballoon_oom_notify). I plan to add some control there. If oom happens,
>> - stop the page allocation;
>> - immediately give back the allocated pages to mm.
> Please don't. Oom notifier is an absolutely hideous interface which
> should go away sooner or later (I would much rather like the former) so
> do not build a new logic on top of it. I would appreciate if you
> actually remove the notifier much more.
>
> You can give memory back from the standard shrinker interface. If we are
> reaching low reclaim priorities then we are struggling to reclaim memory
> and then you can start returning pages back.

OK. Just curious: why is the oom notifier thought to be hideous, and is
that a consensus?

Best,
Wei

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-12 11:34                     ` Wei Wang
@ 2018-07-12 11:49                       ` Michal Hocko
  2018-07-13  0:33                         ` Wei Wang
  0 siblings, 1 reply; 27+ messages in thread
From: Michal Hocko @ 2018-07-12 11:49 UTC (permalink / raw)
  To: Wei Wang
  Cc: Linus Torvalds, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michael S. Tsirkin,
	Andrew Morton, Paolo Bonzini, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, Rik van Riel, peterx

On Thu 12-07-18 19:34:16, Wei Wang wrote:
> On 07/12/2018 04:13 PM, Michal Hocko wrote:
> > On Thu 12-07-18 10:52:08, Wei Wang wrote:
> > > On 07/12/2018 10:30 AM, Linus Torvalds wrote:
> > > > On Wed, Jul 11, 2018 at 7:17 PM Wei Wang <wei.w.wang@intel.com> wrote:
> > > > > Would it be better to remove __GFP_THISNODE? We actually want to get all
> > > > > the guest free pages (from all the nodes).
> > > > Maybe. Or maybe it would be better to have the memory balloon logic be
> > > > per-node? Maybe you don't want to remove too much memory from one
> > > > node? I think it's one of those "play with it" things.
> > > > 
> > > > I don't think that's the big issue, actually. I think the real issue
> > > > is how to react quickly and gracefully to "oops, I'm trying to give
> > > > memory away, but now the guest wants it back" while you're in the
> > > > middle of trying to create that 2TB list of pages.
> > > OK. virtio-balloon has already registered an oom notifier
> > > (virtballoon_oom_notify). I plan to add some control there. If oom happens,
> > > - stop the page allocation;
> > > - immediately give back the allocated pages to mm.
> > Please don't. Oom notifier is an absolutely hideous interface which
> > should go away sooner or later (I would much rather like the former) so
> > do not build a new logic on top of it. I would appreciate if you
> > actually remove the notifier much more.
> > 
> > You can give memory back from the standard shrinker interface. If we are
> > reaching low reclaim priorities then we are struggling to reclaim memory
> > and then you can start returning pages back.
> 
> OK. Just curious why oom notifier is thought to be hideous, and has it been
> a consensus?

Because it is a completely non-transparent callout from the OOM context,
which is really subtle on its own. It is just too easy to end up in
weird corner cases. We really have to be careful and be as swift as
possible. Any potential sleep would make the OOM situation much worse,
because nobody would be able to make forward progress, or an (in)direct
dependency on the MM subsystem could easily deadlock. Those are really
hard to track down, and the notifier is defined as blockable by design,
which just asks for bad implementations because most people simply do
not realize how subtle the oom context is.

Another thing is that it happens way too late, when we have basically
reclaimed the world and didn't get out of the memory pressure, so you can
expect any workload is suffering already. Anybody sitting on a large
amount of reclaimable memory should have released that memory by that
time, ideally proportionally to the reclaim pressure.

The notifier API is completely unaware of oom constraints. Just imagine
you are OOM in a subset of numa nodes. Callback doesn't have any idea
about that.

Moreover we do have proper reclaim mechanism that has a feedback
loop and that should be always preferable to an abrupt reclaim.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-11 16:23           ` Linus Torvalds
  2018-07-12  2:21             ` Wei Wang
@ 2018-07-12 13:12             ` Michal Hocko
  1 sibling, 0 replies; 27+ messages in thread
From: Michal Hocko @ 2018-07-12 13:12 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: wei.w.wang, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michael S. Tsirkin,
	Andrew Morton, Paolo Bonzini, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, Rik van Riel, peterx

[Hmm this one somehow got stuck in my outgoing emails]

On Wed 11-07-18 09:23:54, Linus Torvalds wrote:
[...]
> So I'm open to new interfaces. I just want those new interfaces to
> make sense, and be low latency and simple for the VM to do. I'm
> objecting to the incredibly baroque and heavy-weight one that can
> return near-infinite amounts of memory.

Mel was suggesting a bulk page allocator a year ago [1]. I can see only
the slab bulk API, so I am not sure what happened with that work. Anyway,
I think that starting with what we have right now is much more appropriate
than over-designing this thing from the very beginning.

[1] http://lkml.kernel.org/r/20170109163518.6001-5-mgorman@techsingularity.net
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v35 1/5] mm: support to get hints of free page blocks
  2018-07-12 11:49                       ` Michal Hocko
@ 2018-07-13  0:33                         ` Wei Wang
  0 siblings, 0 replies; 27+ messages in thread
From: Wei Wang @ 2018-07-13  0:33 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Linus Torvalds, virtio-dev, Linux Kernel Mailing List,
	virtualization, KVM list, linux-mm, Michael S. Tsirkin,
	Andrew Morton, Paolo Bonzini, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, Rik van Riel, peterx

On 07/12/2018 07:49 PM, Michal Hocko wrote:
> On Thu 12-07-18 19:34:16, Wei Wang wrote:
>> On 07/12/2018 04:13 PM, Michal Hocko wrote:
>>> On Thu 12-07-18 10:52:08, Wei Wang wrote:
>>>> On 07/12/2018 10:30 AM, Linus Torvalds wrote:
>>>>> On Wed, Jul 11, 2018 at 7:17 PM Wei Wang <wei.w.wang@intel.com> wrote:
>>>>>> Would it be better to remove __GFP_THISNODE? We actually want to get all
>>>>>> the guest free pages (from all the nodes).
>>>>> Maybe. Or maybe it would be better to have the memory balloon logic be
>>>>> per-node? Maybe you don't want to remove too much memory from one
>>>>> node? I think it's one of those "play with it" things.
>>>>>
>>>>> I don't think that's the big issue, actually. I think the real issue
>>>>> is how to react quickly and gracefully to "oops, I'm trying to give
>>>>> memory away, but now the guest wants it back" while you're in the
>>>>> middle of trying to create that 2TB list of pages.
>>>> OK. virtio-balloon has already registered an oom notifier
>>>> (virtballoon_oom_notify). I plan to add some control there. If oom happens,
>>>> - stop the page allocation;
>>>> - immediately give back the allocated pages to mm.
>>> Please don't. Oom notifier is an absolutely hideous interface which
>>> should go away sooner or later (I would much rather like the former) so
>>> do not build a new logic on top of it. I would appreciate if you
>>> actually remove the notifier much more.
>>>
>>> You can give memory back from the standard shrinker interface. If we are
>>> reaching low reclaim priorities then we are struggling to reclaim memory
>>> and then you can start returning pages back.
>> OK. Just curious why oom notifier is thought to be hideous, and has it been
>> a consensus?
> Because it is a completely non-transparent callout from the OOM context
> which is really subtle on its own. It is just too easy to end up in
> weird corner cases. We really have to be careful and be as swift as
> possible. Any potential sleep would make the OOM situation much worse
> because nobody would be able to make a forward progress or (in)direct
> dependency on MM subsystem can easily deadlock. Those are really hard
> to track down and defining the notifier as blockable by design which
> just asks for bad implementations because most people simply do not
> realize how subtle the oom context is.
>
> Another thing is that it happens way too late when we have basically
> reclaimed the world and didn't get out of the memory pressure so you can
> expect any workload is suffering already. Anybody sitting on a large
> amount of reclaimable memory should have released that memory by that
> time. Proportionally to the reclaim pressure ideally.
>
> The notifier API is completely unaware of oom constraints. Just imagine
> you are OOM in a subset of numa nodes. Callback doesn't have any idea
> about that.
>
> Moreover we do have proper reclaim mechanism that has a feedback
> loop and that should be always preferable to an abrupt reclaim.

Sounds very reasonable, thanks for the elaboration. I'll try with shrinker.

Best,
Wei




^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, back to index

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-10  9:31 [PATCH v35 0/5] Virtio-balloon: support free page reporting Wei Wang
2018-07-10  9:31 ` [PATCH v35 1/5] mm: support to get hints of free page blocks Wei Wang
2018-07-10 10:16   ` Wang, Wei W
2018-07-10 17:33   ` Linus Torvalds
2018-07-11  1:28     ` Wei Wang
2018-07-11  1:44       ` Linus Torvalds
2018-07-11  9:21         ` Michal Hocko
2018-07-11 10:52           ` Wei Wang
2018-07-11 11:09             ` Michal Hocko
2018-07-11 13:55               ` Wang, Wei W
2018-07-11 14:38                 ` Michal Hocko
2018-07-11 19:36               ` Michael S. Tsirkin
2018-07-11 16:23           ` Linus Torvalds
2018-07-12  2:21             ` Wei Wang
2018-07-12  2:30               ` Linus Torvalds
2018-07-12  2:52                 ` Wei Wang
2018-07-12  8:13                   ` Michal Hocko
2018-07-12 11:34                     ` Wei Wang
2018-07-12 11:49                       ` Michal Hocko
2018-07-13  0:33                         ` Wei Wang
2018-07-12 13:12             ` Michal Hocko
2018-07-11  4:00     ` Michael S. Tsirkin
2018-07-11  4:04       ` Michael S. Tsirkin
2018-07-10  9:31 ` [PATCH v35 2/5] virtio-balloon: remove BUG() in init_vqs Wei Wang
2018-07-10  9:31 ` [PATCH v35 3/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT Wei Wang
2018-07-10  9:31 ` [PATCH v35 4/5] mm/page_poison: expose page_poisoning_enabled to kernel modules Wei Wang
2018-07-10  9:31 ` [PATCH v35 5/5] virtio-balloon: VIRTIO_BALLOON_F_PAGE_POISON Wei Wang
