* [PATCH v24 0/2] Virtio-balloon: support free page reporting
From: Wei Wang @ 2018-01-24 10:42 UTC (permalink / raw)
  To: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mst,
	mhocko, akpm
  Cc: pbonzini, wei.w.wang, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, riel

This patch series is separated from the previous "Virtio-balloon
Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,
implemented by this series enables the virtio-balloon driver to report
hints of guest free pages to the host. It can be used to accelerate live
migration of VMs. Here is an introduction to this usage:

Live migration needs to transfer the VM's memory from the source machine
to the destination round by round. In the 1st round, all of the VM's memory
is transferred. From the 2nd round on, only the pieces of memory that were
written by the guest (after the 1st round) are transferred. One method
commonly used by the hypervisor to track which parts of memory are written
is to write-protect all the guest memory.

This feature enables an optimization of the 1st-round memory transfer:
the hypervisor can skip the transfer of guest free pages in the 1st round.
There is no concern about these pages being used after they are reported
to the hypervisor as free, because they will be tracked by the hypervisor
and transferred in the next round if they are used and written.
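
To make this concrete, below is a small, self-contained C model of how a
hypervisor could combine the free page hints with write-protection based
dirty tracking. It is illustrative only (not QEMU code, and all names are
made up): hinted pages are dropped from the 1st-round bitmap, and any of
them that the guest writes afterwards show up in the dirty bitmap and are
sent in a later round.

#include <stdint.h>
#include <string.h>

#define NPAGES 1024	/* toy guest size, in pages */

static uint8_t send_bitmap[NPAGES / 8];		/* pages to send in the current round */
static uint8_t dirty_bitmap[NPAGES / 8];	/* pages written since the last round */

static void bit_set(uint8_t *bm, unsigned long pfn)
{
	bm[pfn / 8] |= 1 << (pfn % 8);
}

static void bit_clear(uint8_t *bm, unsigned long pfn)
{
	bm[pfn / 8] &= ~(1 << (pfn % 8));
}

/* Round 1 starts from "send everything". */
static void start_first_round(void)
{
	memset(send_bitmap, 0xff, sizeof(send_bitmap));
	memset(dirty_bitmap, 0, sizeof(dirty_bitmap));
}

/* A free page hint from the guest: drop the range from the 1st round. */
static void apply_free_page_hint(unsigned long pfn, unsigned long num)
{
	while (num--)
		bit_clear(send_bitmap, pfn++);
}

/* Write-protection fault: the guest wrote this page after hinting it. */
static void note_guest_write(unsigned long pfn)
{
	bit_set(dirty_bitmap, pfn);
}

/*
 * Round 2 and later: transfer only the dirty pages, which include any page
 * that was hinted as free in round 1 but used by the guest afterwards.
 */
static void start_next_round(void)
{
	memcpy(send_bitmap, dirty_bitmap, sizeof(send_bitmap));
	memset(dirty_bitmap, 0, sizeof(dirty_bitmap));
}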

ChangeLog:
v23->v24:
    - change feature name VIRTIO_BALLOON_F_FREE_PAGE_VQ to
      VIRTIO_BALLOON_F_FREE_PAGE_HINT
    - kick when vq->num_free < half full, instead of "= half full"
    - replace BUG_ON with bailing out
    - check vb->balloon_wq in probe(), if null, bail out
    - add a new feature bit for page poisoning
    - solve the corner case of one cmd id being sent to the host twice
v22->v23:
    - change to kick the device when the vq is half-way full;
    - open-code batch_free_page_sg into add_one_sg;
    - change cmd_id from "uint32_t" to "__virtio32";
    - reserve one entry in the vq for the driver to send cmd_id, instead
      of busy-waiting for an available entry;
    - add a "stop_update" check before queue_work as a precaution for now;
      a separate patch will discuss this flag check later;
    - init_vqs: put some variables on the stack for a simpler
      implementation;
    - add destroy_workqueue(vb->balloon_wq);

v21->v22:
    - add_one_sg: some code and comment re-arrangement
    - send_cmd_id: handle a corner case

For the previous ChangeLog, please refer to
https://lwn.net/Articles/743660/

Wei Wang (2):
  mm: support reporting free page blocks
  virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT

 drivers/virtio/virtio_balloon.c     | 265 +++++++++++++++++++++++++++++++-----
 include/linux/mm.h                  |   6 +
 include/uapi/linux/virtio_balloon.h |   7 +
 mm/page_alloc.c                     |  91 +++++++++++++
 4 files changed, 333 insertions(+), 36 deletions(-)

-- 
2.7.4

* [PATCH v24 1/2] mm: support reporting free page blocks
From: Wei Wang @ 2018-01-24 10:42 UTC (permalink / raw)
  To: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mst,
	mhocko, akpm
  Cc: pbonzini, wei.w.wang, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, riel

This patch adds support to walk through the free page blocks in the
system and report them via a callback function. Some page blocks may
leave the free list after zone->lock is released, so it is the caller's
responsibility to either detect or prevent the use of such pages.

One example use of this patch is to accelerate live migration by skipping
the transfer of free pages reported from the guest. A method commonly used
by the hypervisor to track which parts of memory are written during live
migration is to write-protect all the guest memory. So, those pages that
are reported as free but are written after the report function returns
will be captured by the hypervisor, and they will be added to the next
round of memory transfer.
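
For illustration, a minimal sketch of how a caller might use the new
interface follows (this caller and its data structures are hypothetical,
not part of this patch). The callback runs with zone->lock held, so it only
copies the pfn range into storage preallocated by the caller and defers any
heavy lifting:

#include <linux/mm.h>

struct free_range {
	unsigned long pfn;
	unsigned long num;
};

struct free_range_log {
	struct free_range *ranges;	/* preallocated by the caller */
	unsigned int capacity;
	unsigned int used;
};

/*
 * Called with zone->lock held: must not sleep or allocate memory.
 * Returning false stops the walk early.
 */
static bool record_pfn_range(void *opaque, unsigned long pfn, unsigned long num)
{
	struct free_range_log *log = opaque;

	if (log->used >= log->capacity)
		return false;

	log->ranges[log->used].pfn = pfn;
	log->ranges[log->used].num = num;
	log->used++;

	return true;
}

/* Query only order >= 4 blocks; lower orders are too volatile to be useful. */
static void snapshot_free_memory(struct free_range_log *log)
{
	log->used = 0;
	walk_free_mem_block(log, 4, record_pfn_range);
}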

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Liang Li <liang.z.li@intel.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Michal Hocko <mhocko@kernel.org>
---
 include/linux/mm.h |  6 ++++
 mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 97 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ea818ff..b3077dd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
 		unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
 
+extern void walk_free_mem_block(void *opaque,
+				int min_order,
+				bool (*report_pfn_range)(void *opaque,
+							 unsigned long pfn,
+							 unsigned long num));
+
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
  * into the buddy system. The freed pages will be poisoned with pattern
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 76c9688..705de22 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 	show_swap_cache_info();
 }
 
+/*
+ * Walk through a free page list and report the found pfn range via the
+ * callback.
+ *
+ * Return false if the callback requests to stop reporting. Otherwise,
+ * return true.
+ */
+static bool walk_free_page_list(void *opaque,
+				struct zone *zone,
+				int order,
+				enum migratetype mt,
+				bool (*report_pfn_range)(void *,
+							 unsigned long,
+							 unsigned long))
+{
+	struct page *page;
+	struct list_head *list;
+	unsigned long pfn, flags;
+	bool ret;
+
+	spin_lock_irqsave(&zone->lock, flags);
+	list = &zone->free_area[order].free_list[mt];
+	list_for_each_entry(page, list, lru) {
+		pfn = page_to_pfn(page);
+		ret = report_pfn_range(opaque, pfn, 1 << order);
+		if (!ret)
+			break;
+	}
+	spin_unlock_irqrestore(&zone->lock, flags);
+
+	return ret;
+}
+
+/**
+ * walk_free_mem_block - Walk through the free page blocks in the system
+ * @opaque: the context passed from the caller
+ * @min_order: the minimum order of free lists to check
+ * @report_pfn_range: the callback to report the pfn range of the free pages
+ *
+ * If the callback returns false, stop iterating the list of free page blocks.
+ * Otherwise, continue to report.
+ *
+ * Please note that there are no locking guarantees for the callback and
+ * that the reported pfn range might be freed or disappear after the
+ * callback returns so the caller has to be very careful how it is used.
+ *
+ * The callback itself must not sleep or perform any operations which would
+ * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
+ * or via any lock dependency. It is generally advisable to implement
+ * the callback as simple as possible and defer any heavy lifting to a
+ * different context.
+ *
+ * There is no guarantee that each free range will be reported only once
+ * during one walk_free_mem_block invocation.
+ *
+ * pfn_to_page on the given range is strongly discouraged and if there is
+ * an absolute need for that make sure to contact MM people to discuss
+ * potential problems.
+ *
+ * The function itself might sleep so it cannot be called from atomic
+ * contexts.
+ *
+ * In general low orders tend to be very volatile and so it makes more
+ * sense to query larger ones first for various optimizations, like
+ * ballooning. This will reduce the overhead as well.
+ */
+void walk_free_mem_block(void *opaque,
+			 int min_order,
+			 bool (*report_pfn_range)(void *opaque,
+						  unsigned long pfn,
+						  unsigned long num))
+{
+	struct zone *zone;
+	int order;
+	enum migratetype mt;
+	bool ret;
+
+	for_each_populated_zone(zone) {
+		for (order = MAX_ORDER - 1; order >= min_order; order--) {
+			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
+				ret = walk_free_page_list(opaque, zone,
+							  order, mt,
+							  report_pfn_range);
+				if (!ret)
+					return;
+			}
+		}
+	}
+}
+EXPORT_SYMBOL_GPL(walk_free_mem_block);
+
 static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
 {
 	zoneref->zone = zone;
-- 
2.7.4

* [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
From: Wei Wang @ 2018-01-24 10:42 UTC (permalink / raw)
  To: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mst,
	mhocko, akpm
  Cc: pbonzini, wei.w.wang, liliang.opensource, yang.zhang.wz,
	quan.xu0, nilal, riel

Negotiation of the VIRTIO_BALLOON_F_FREE_PAGE_HINT feature indicates
support for reporting hints of guest free pages to the host via
virtio-balloon.

The host requests the guest to report free pages by sending a new cmd
id to the guest via the free_page_report_cmd_id configuration register.

When the guest starts to report, the first element added to the free page
vq is the cmd id given by the host. When the guest finishes reporting
all the free pages, VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID is added
to the vq to tell the host that the reporting is done. The host may also
request the guest to stop the reporting in advance by sending the stop
cmd id to the guest via the configuration register.
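
To sketch the other end of the protocol, the snippet below models how a
device implementation could consume the hints. It only illustrates the flow
described above; it is not QEMU code, and the struct and helper names are
invented for this example:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define FREE_PAGE_REPORT_STOP_ID	0u

/* One element received on the free page vq, as modelled here. */
struct hint_elem {
	bool is_cmd_id;		/* true: carries a cmd id; false: a free range */
	uint32_t cmd_id;	/* valid when is_cmd_id */
	uint64_t gpa;		/* guest-physical start of the reported range */
	uint64_t len;		/* length of the reported range, in bytes */
};

/*
 * Consume the hints of one reporting round. Ranges are honoured only after
 * the guest has echoed the cmd id the host asked for, and the round ends
 * when the stop id arrives (or the elements run out).
 */
static void consume_free_page_hints(const struct hint_elem *elems, size_t n,
				    uint32_t expected_cmd_id,
				    void (*skip_range)(uint64_t gpa, uint64_t len))
{
	bool active = false;
	size_t i;

	for (i = 0; i < n; i++) {
		const struct hint_elem *e = &elems[i];

		if (e->is_cmd_id) {
			if (e->cmd_id == FREE_PAGE_REPORT_STOP_ID)
				break;		/* the guest finished this round */
			active = (e->cmd_id == expected_cmd_id);
			continue;
		}
		if (active)
			skip_range(e->gpa, e->len);	/* skip in the 1st round */
	}
}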

Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Liang Li <liang.z.li@intel.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
---
 drivers/virtio/virtio_balloon.c     | 265 +++++++++++++++++++++++++++++++-----
 include/uapi/linux/virtio_balloon.h |   7 +
 2 files changed, 236 insertions(+), 36 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index a1fb52c..4440873 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -51,9 +51,21 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
 static struct vfsmount *balloon_mnt;
 #endif
 
+/* The number of virtqueues supported by virtio-balloon */
+#define VIRTIO_BALLOON_VQ_NUM		4
+#define VIRTIO_BALLOON_VQ_ID_INFLATE	0
+#define VIRTIO_BALLOON_VQ_ID_DEFLATE	1
+#define VIRTIO_BALLOON_VQ_ID_STATS	2
+#define VIRTIO_BALLOON_VQ_ID_FREE_PAGE	3
+
 struct virtio_balloon {
 	struct virtio_device *vdev;
-	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
+	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;
+
+	/* Balloon's own wq for cpu-intensive work items */
+	struct workqueue_struct *balloon_wq;
+	/* The free page reporting work item submitted to the balloon wq */
+	struct work_struct report_free_page_work;
 
 	/* The balloon servicing is delegated to a freezable workqueue. */
 	struct work_struct update_balloon_stats_work;
@@ -63,6 +75,13 @@ struct virtio_balloon {
 	spinlock_t stop_update_lock;
 	bool stop_update;
 
+	/* Start to report free pages */
+	bool report_free_page;
+	/* Stores the cmd id given by host to start the free page reporting */
+	__virtio32 start_cmd_id;
+	/* Stores STOP_ID as a sign to tell host that the reporting is done */
+	__virtio32 stop_cmd_id;
+
 	/* Waiting for host to ack the pages we released. */
 	wait_queue_head_t acked;
 
@@ -281,6 +300,53 @@ static unsigned int update_balloon_stats(struct virtio_balloon *vb)
 	return idx;
 }
 
+static int add_one_sg(struct virtqueue *vq, unsigned long pfn, uint32_t len)
+{
+	struct scatterlist sg;
+	unsigned int unused;
+	int ret = 0;
+
+	sg_init_table(&sg, 1);
+	sg_set_page(&sg, pfn_to_page(pfn), len, 0);
+
+	/* Detach all the used buffers from the vq */
+	while (virtqueue_get_buf(vq, &unused))
+		;
+
+	/*
+	 * Since this is an optimization feature, losing a couple of free
+	 * pages to report isn't important. We simply return without adding
+	 * the page if the vq is full.
+	 * We are adding one entry each time, which essentially results in no
+	 * memory allocation, so the GFP_KERNEL flag below can be ignored.
+	 * There is always one entry reserved for the cmd id to use.
+	 */
+	if (vq->num_free > 1)
+		ret = virtqueue_add_inbuf(vq, &sg, 1, vq, GFP_KERNEL);
+
+	if (vq->num_free < virtqueue_get_vring_size(vq) / 2)
+		virtqueue_kick(vq);
+
+	return ret;
+}
+
+static void send_cmd_id(struct virtio_balloon *vb, __virtio32 *cmd_id)
+{
+	struct scatterlist sg;
+	struct virtqueue *vq = vb->free_page_vq;
+
+	if (unlikely(!virtio_has_feature(vb->vdev,
+				         VIRTIO_BALLOON_F_FREE_PAGE_HINT)))
+		return;
+
+	sg_init_one(&sg, cmd_id, sizeof(*cmd_id));
+
+	if (virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL))
+		__virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT);
+
+	virtqueue_kick(vq);
+}
+
 /*
  * While most virtqueues communicate guest-initiated requests to the hypervisor,
  * the stats queue operates in reverse.  The driver initializes the virtqueue
@@ -316,17 +382,6 @@ static void stats_handle_request(struct virtio_balloon *vb)
 	virtqueue_kick(vq);
 }
 
-static void virtballoon_changed(struct virtio_device *vdev)
-{
-	struct virtio_balloon *vb = vdev->priv;
-	unsigned long flags;
-
-	spin_lock_irqsave(&vb->stop_update_lock, flags);
-	if (!vb->stop_update)
-		queue_work(system_freezable_wq, &vb->update_balloon_size_work);
-	spin_unlock_irqrestore(&vb->stop_update_lock, flags);
-}
-
 static inline s64 towards_target(struct virtio_balloon *vb)
 {
 	s64 target;
@@ -343,6 +398,49 @@ static inline s64 towards_target(struct virtio_balloon *vb)
 	return target - vb->num_pages;
 }
 
+static void virtballoon_changed(struct virtio_device *vdev)
+{
+	struct virtio_balloon *vb = vdev->priv;
+	unsigned long flags;
+	__u32 cmd_id;
+	s64 diff = towards_target(vb);
+
+	if (diff) {
+		spin_lock_irqsave(&vb->stop_update_lock, flags);
+		if (!vb->stop_update)
+			queue_work(system_freezable_wq,
+				   &vb->update_balloon_size_work);
+		spin_unlock_irqrestore(&vb->stop_update_lock, flags);
+	}
+
+	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
+		virtio_cread(vdev, struct virtio_balloon_config,
+			     free_page_report_cmd_id, &cmd_id);
+		if (cmd_id == VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID) {
+			vb->report_free_page = false;
+		} else {
+			/*
+			 * The request is queued only when the ack of the
+			 * previous request has been sent to host, which is
+			 * indicated by start_cmd_id set to
+			 * VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID. Otherwise,
+			 * simply update the start_cmd_id, and when the
+			 * previous queued work runs, the latest cmd id will
+			 * be sent to host.
+			 */
+			spin_lock_irqsave(&vb->stop_update_lock, flags);
+			if (!vb->stop_update &&
+			    virtio32_to_cpu(vdev, vb->start_cmd_id) ==
+			    VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID)
+				queue_work(vb->balloon_wq,
+					   &vb->report_free_page_work);
+			vb->report_free_page = true;
+			vb->start_cmd_id = cpu_to_virtio32(vdev, cmd_id);
+			spin_unlock_irqrestore(&vb->stop_update_lock, flags);
+		}
+	}
+}
+
 static void update_balloon_size(struct virtio_balloon *vb)
 {
 	u32 actual = vb->num_pages;
@@ -417,42 +515,108 @@ static void update_balloon_size_func(struct work_struct *work)
 
 static int init_vqs(struct virtio_balloon *vb)
 {
-	struct virtqueue *vqs[3];
-	vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
-	static const char * const names[] = { "inflate", "deflate", "stats" };
-	int err, nvqs;
+	struct virtqueue *vqs[VIRTIO_BALLOON_VQ_NUM];
+	vq_callback_t *callbacks[VIRTIO_BALLOON_VQ_NUM];
+	const char *names[VIRTIO_BALLOON_VQ_NUM];
+	struct scatterlist sg;
+	int ret;
 
 	/*
-	 * We expect two virtqueues: inflate and deflate, and
-	 * optionally stat.
+	 * Inflateq and deflateq are used unconditionally. The names[]
+	 * will be NULL if the related feature is not enabled, which will
+	 * cause no allocation for the corresponding virtqueue in find_vqs.
 	 */
-	nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
-	err = virtio_find_vqs(vb->vdev, nvqs, vqs, callbacks, names, NULL);
-	if (err)
-		return err;
+	callbacks[VIRTIO_BALLOON_VQ_ID_INFLATE] = balloon_ack;
+	names[VIRTIO_BALLOON_VQ_ID_INFLATE] = "inflate";
+	callbacks[VIRTIO_BALLOON_VQ_ID_DEFLATE] = balloon_ack;
+	names[VIRTIO_BALLOON_VQ_ID_DEFLATE] = "deflate";
+	names[VIRTIO_BALLOON_VQ_ID_STATS] = NULL;
+	names[VIRTIO_BALLOON_VQ_ID_FREE_PAGE] = NULL;
 
-	vb->inflate_vq = vqs[0];
-	vb->deflate_vq = vqs[1];
 	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
-		struct scatterlist sg;
-		unsigned int num_stats;
-		vb->stats_vq = vqs[2];
+		names[VIRTIO_BALLOON_VQ_ID_STATS] = "stats";
+		callbacks[VIRTIO_BALLOON_VQ_ID_STATS] = stats_request;
+	}
+
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
+		names[VIRTIO_BALLOON_VQ_ID_FREE_PAGE] = "free_page_vq";
+		callbacks[VIRTIO_BALLOON_VQ_ID_FREE_PAGE] = NULL;
+	}
+
+	ret = vb->vdev->config->find_vqs(vb->vdev, VIRTIO_BALLOON_VQ_NUM,
+					 vqs, callbacks, names, NULL, NULL);
+	if (ret)
+		return ret;
 
+	vb->inflate_vq = vqs[VIRTIO_BALLOON_VQ_ID_INFLATE];
+	vb->deflate_vq = vqs[VIRTIO_BALLOON_VQ_ID_DEFLATE];
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
+		vb->stats_vq = vqs[VIRTIO_BALLOON_VQ_ID_STATS];
 		/*
 		 * Prime this virtqueue with one buffer so the hypervisor can
 		 * use it to signal us later (it can't be broken yet!).
 		 */
-		num_stats = update_balloon_stats(vb);
-
-		sg_init_one(&sg, vb->stats, sizeof(vb->stats[0]) * num_stats);
-		if (virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, GFP_KERNEL)
-		    < 0)
-			BUG();
+		sg_init_one(&sg, vb->stats, sizeof(vb->stats));
+		ret = virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb,
+					   GFP_KERNEL);
+		if (ret) {
+			dev_warn(&vb->vdev->dev, "%s: add stat_vq failed\n",
+				 __func__);
+			return ret;
+		}
 		virtqueue_kick(vb->stats_vq);
 	}
+
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+		vb->free_page_vq = vqs[VIRTIO_BALLOON_VQ_ID_FREE_PAGE];
+
 	return 0;
 }
 
+static bool virtio_balloon_send_free_pages(void *opaque, unsigned long pfn,
+					   unsigned long nr_pages)
+{
+	struct virtio_balloon *vb = (struct virtio_balloon *)opaque;
+	uint32_t len = nr_pages << PAGE_SHIFT;
+	int ret;
+
+	if (!vb->report_free_page ||
+	    unlikely(!virtio_has_feature(vb->vdev,
+				         VIRTIO_BALLOON_F_FREE_PAGE_HINT)))
+		return false;
+
+	ret = add_one_sg(vb->free_page_vq, pfn, len);
+	if (unlikely(ret))
+		__virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT);
+
+	return !ret;
+}
+
+static void report_free_page_func(struct work_struct *work)
+{
+	struct virtio_balloon *vb;
+	unsigned long flags;
+
+	vb = container_of(work, struct virtio_balloon, report_free_page_work);
+
+	/* Start by sending the obtained cmd id to the host with an outbuf */
+	send_cmd_id(vb, &vb->start_cmd_id);
+
+	/*
+	 * Set start_cmd_id to VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID to
+	 * indicate a new request can be queued.
+	 */
+	spin_lock_irqsave(&vb->stop_update_lock, flags);
+	vb->start_cmd_id = cpu_to_virtio32(vb->vdev,
+				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
+	spin_unlock_irqrestore(&vb->stop_update_lock, flags);
+
+	walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
+
+	/* End by sending the stop id to the host with an outbuf */
+	send_cmd_id(vb, &vb->stop_cmd_id);
+}
+
 #ifdef CONFIG_BALLOON_COMPACTION
 /*
  * virtballoon_migratepage - perform the balloon page migration on behalf of
@@ -537,6 +701,7 @@ static struct file_system_type balloon_fs = {
 static int virtballoon_probe(struct virtio_device *vdev)
 {
 	struct virtio_balloon *vb;
+	__u32 poison_val;
 	int err;
 
 	if (!vdev->config->get) {
@@ -566,18 +731,37 @@ static int virtballoon_probe(struct virtio_device *vdev)
 	if (err)
 		goto out_free_vb;
 
+	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
+		vb->balloon_wq = alloc_workqueue("balloon-wq",
+					WQ_FREEZABLE | WQ_CPU_INTENSIVE, 0);
+		if (!vb->balloon_wq) {
+			err = -ENOMEM;
+			goto out_del_vqs;
+		}
+		INIT_WORK(&vb->report_free_page_work, report_free_page_func);
+		vb->start_cmd_id = cpu_to_virtio32(vdev,
+				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
+		vb->stop_cmd_id = cpu_to_virtio32(vdev,
+				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
+		if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON)) {
+			poison_val = PAGE_POISON;
+			virtio_cwrite(vb->vdev, struct virtio_balloon_config,
+				      poison_val, &poison_val);
+		}
+	}
+
 	vb->nb.notifier_call = virtballoon_oom_notify;
 	vb->nb.priority = VIRTBALLOON_OOM_NOTIFY_PRIORITY;
 	err = register_oom_notifier(&vb->nb);
 	if (err < 0)
-		goto out_del_vqs;
+		goto out_del_balloon_wq;
 
 #ifdef CONFIG_BALLOON_COMPACTION
 	balloon_mnt = kern_mount(&balloon_fs);
 	if (IS_ERR(balloon_mnt)) {
 		err = PTR_ERR(balloon_mnt);
 		unregister_oom_notifier(&vb->nb);
-		goto out_del_vqs;
+		goto out_del_balloon_wq;
 	}
 
 	vb->vb_dev_info.migratepage = virtballoon_migratepage;
@@ -587,7 +771,7 @@ static int virtballoon_probe(struct virtio_device *vdev)
 		kern_unmount(balloon_mnt);
 		unregister_oom_notifier(&vb->nb);
 		vb->vb_dev_info.inode = NULL;
-		goto out_del_vqs;
+		goto out_del_balloon_wq;
 	}
 	vb->vb_dev_info.inode->i_mapping->a_ops = &balloon_aops;
 #endif
@@ -598,6 +782,8 @@ static int virtballoon_probe(struct virtio_device *vdev)
 		virtballoon_changed(vdev);
 	return 0;
 
+out_del_balloon_wq:
+	destroy_workqueue(vb->balloon_wq);
 out_del_vqs:
 	vdev->config->del_vqs(vdev);
 out_free_vb:
@@ -630,6 +816,8 @@ static void virtballoon_remove(struct virtio_device *vdev)
 	spin_unlock_irq(&vb->stop_update_lock);
 	cancel_work_sync(&vb->update_balloon_size_work);
 	cancel_work_sync(&vb->update_balloon_stats_work);
+	cancel_work_sync(&vb->report_free_page_work);
+	destroy_workqueue(vb->balloon_wq);
 
 	remove_common(vb);
 #ifdef CONFIG_BALLOON_COMPACTION
@@ -674,6 +862,9 @@ static int virtballoon_restore(struct virtio_device *vdev)
 
 static int virtballoon_validate(struct virtio_device *vdev)
 {
+	if (!page_poisoning_enabled())
+		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
+
 	__virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM);
 	return 0;
 }
@@ -682,6 +873,8 @@ static unsigned int features[] = {
 	VIRTIO_BALLOON_F_MUST_TELL_HOST,
 	VIRTIO_BALLOON_F_STATS_VQ,
 	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
+	VIRTIO_BALLOON_F_FREE_PAGE_HINT,
+	VIRTIO_BALLOON_F_PAGE_POISON,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index 343d7dd..3f97067 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -34,15 +34,22 @@
 #define VIRTIO_BALLOON_F_MUST_TELL_HOST	0 /* Tell before reclaiming pages */
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
+#define VIRTIO_BALLOON_F_FREE_PAGE_HINT	3 /* VQ to report free pages */
+#define VIRTIO_BALLOON_F_PAGE_POISON	4 /* Guest is using page poisoning */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
 
+#define VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID		0
 struct virtio_balloon_config {
 	/* Number of pages host wants Guest to give up. */
 	__u32 num_pages;
 	/* Number of pages we've actually got in balloon. */
 	__u32 actual;
+	/* Free page report command id, readonly by guest */
+	__u32 free_page_report_cmd_id;
+	/* Stores PAGE_POISON if page poisoning is in use */
+	__u32 poison_val;
 };
 
 #define VIRTIO_BALLOON_S_SWAP_IN  0   /* Amount of memory swapped in */
-- 
2.7.4

 	spin_unlock_irq(&vb->stop_update_lock);
 	cancel_work_sync(&vb->update_balloon_size_work);
 	cancel_work_sync(&vb->update_balloon_stats_work);
+	cancel_work_sync(&vb->report_free_page_work);
+	destroy_workqueue(vb->balloon_wq);
 
 	remove_common(vb);
 #ifdef CONFIG_BALLOON_COMPACTION
@@ -674,6 +862,9 @@ static int virtballoon_restore(struct virtio_device *vdev)
 
 static int virtballoon_validate(struct virtio_device *vdev)
 {
+	if (!page_poisoning_enabled())
+		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
+
 	__virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM);
 	return 0;
 }
@@ -682,6 +873,8 @@ static unsigned int features[] = {
 	VIRTIO_BALLOON_F_MUST_TELL_HOST,
 	VIRTIO_BALLOON_F_STATS_VQ,
 	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
+	VIRTIO_BALLOON_F_FREE_PAGE_HINT,
+	VIRTIO_BALLOON_F_PAGE_POISON,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index 343d7dd..3f97067 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -34,15 +34,22 @@
 #define VIRTIO_BALLOON_F_MUST_TELL_HOST	0 /* Tell before reclaiming pages */
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
+#define VIRTIO_BALLOON_F_FREE_PAGE_HINT	3 /* VQ to report free pages */
+#define VIRTIO_BALLOON_F_PAGE_POISON	4 /* Guest is using page poisoning */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
 
+#define VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID		0
 struct virtio_balloon_config {
 	/* Number of pages host wants Guest to give up. */
 	__u32 num_pages;
 	/* Number of pages we've actually got in balloon. */
 	__u32 actual;
+	/* Free page report command id, readonly by guest */
+	__u32 free_page_report_cmd_id;
+	/* Stores PAGE_POISON if page poisoning is in use */
+	__u32 poison_val;
 };
 
 #define VIRTIO_BALLOON_S_SWAP_IN  0   /* Amount of memory swapped in */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
  2018-01-24 10:42   ` Wei Wang
  (?)
@ 2018-01-24 17:15     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 65+ messages in thread
From: Michael S. Tsirkin @ 2018-01-24 17:15 UTC (permalink / raw)
  To: Wei Wang
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
> Negotiation of the VIRTIO_BALLOON_F_FREE_PAGE_HINT feature indicates the
> support of reporting hints of guest free pages to host via virtio-balloon.
> 
> Host requests the guest to report free pages by sending a new cmd
> id to the guest via the free_page_report_cmd_id configuration register.
> 
> When the guest starts to report, the first element added to the free page
> vq is the cmd id given by host. When the guest finishes the reporting
> of all the free pages, VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID is added
> to the vq to tell host that the reporting is done. Host may also requests
> the guest to stop the reporting in advance by sending the stop cmd id to
> the guest via the configuration register.
> 
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> Signed-off-by: Liang Li <liang.z.li@intel.com>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> ---
>  drivers/virtio/virtio_balloon.c     | 265 +++++++++++++++++++++++++++++++-----
>  include/uapi/linux/virtio_balloon.h |   7 +
>  2 files changed, 236 insertions(+), 36 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index a1fb52c..4440873 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -51,9 +51,21 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
>  static struct vfsmount *balloon_mnt;
>  #endif
>  
> +/* The number of virtqueues supported by virtio-balloon */
> +#define VIRTIO_BALLOON_VQ_NUM		4
> +#define VIRTIO_BALLOON_VQ_ID_INFLATE	0
> +#define VIRTIO_BALLOON_VQ_ID_DEFLATE	1
> +#define VIRTIO_BALLOON_VQ_ID_STATS	2
> +#define VIRTIO_BALLOON_VQ_ID_FREE_PAGE	3
> +

Please do an enum instead of defines. VQ_ID can be just VQ
(it's not an ID, it's just the number).
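
Something like this (just a sketch, pick whatever names you like):

enum virtio_balloon_vq {
	VIRTIO_BALLOON_VQ_INFLATE,
	VIRTIO_BALLOON_VQ_DEFLATE,
	VIRTIO_BALLOON_VQ_STATS,
	VIRTIO_BALLOON_VQ_FREE_PAGE,
	VIRTIO_BALLOON_VQ_MAX,
};

VIRTIO_BALLOON_VQ_MAX then replaces VIRTIO_BALLOON_VQ_NUM as the array
size, and the compiler keeps the numbering consistent for you.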


>  struct virtio_balloon {
>  	struct virtio_device *vdev;
> -	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
> +	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;
> +
> +	/* Balloon's own wq for cpu-intensive work items */
> +	struct workqueue_struct *balloon_wq;
> +	/* The free page reporting work item submitted to the balloon wq */
> +	struct work_struct report_free_page_work;
>  
>  	/* The balloon servicing is delegated to a freezable workqueue. */
>  	struct work_struct update_balloon_stats_work;
> @@ -63,6 +75,13 @@ struct virtio_balloon {
>  	spinlock_t stop_update_lock;
>  	bool stop_update;
>  
> +	/* Start to report free pages */
> +	bool report_free_page;
> +	/* Stores the cmd id given by host to start the free page reporting */
> +	__virtio32 start_cmd_id;
> +	/* Stores STOP_ID as a sign to tell host that the reporting is done */
> +	__virtio32 stop_cmd_id;
> +
>  	/* Waiting for host to ack the pages we released. */
>  	wait_queue_head_t acked;
>  
> @@ -281,6 +300,53 @@ static unsigned int update_balloon_stats(struct virtio_balloon *vb)
>  	return idx;
>  }
>  
> +static int add_one_sg(struct virtqueue *vq, unsigned long pfn, uint32_t len)
> +{
> +	struct scatterlist sg;
> +	unsigned int unused;
> +	int ret = 0;
> +
> +	sg_init_table(&sg, 1);
> +	sg_set_page(&sg, pfn_to_page(pfn), len, 0);
> +
> +	/* Detach all the used buffers from the vq */
> +	while (virtqueue_get_buf(vq, &unused))
> +		;
> +
> +	/*
> +	 * Since this is an optimization feature, losing a couple of free
> +	 * pages to report isn't important. We simply return without adding
> +	 * the page if the vq is full.
> +	 * We are adding one entry each time, which essentially results in no
> +	 * memory allocation, so the GFP_KERNEL flag below can be ignored.
> +	 * There is always one entry reserved for the cmd id to use.
> +	 */
> +	if (vq->num_free > 1)
> +		ret = virtqueue_add_inbuf(vq, &sg, 1, vq, GFP_KERNEL);
> +
> +	if (vq->num_free < virtqueue_get_vring_size(vq) / 2)
> +		virtqueue_kick(vq);
> +
> +	return ret;
> +}
> +
> +static void send_cmd_id(struct virtio_balloon *vb, __virtio32 *cmd_id)
> +{
> +	struct scatterlist sg;
> +	struct virtqueue *vq = vb->free_page_vq;
> +
> +	if (unlikely(!virtio_has_feature(vb->vdev,
> +				         VIRTIO_BALLOON_F_FREE_PAGE_HINT)))
> +		return;
> +
> +	sg_init_one(&sg, cmd_id, sizeof(*cmd_id));
> +
> +	if (virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL))
> +		__virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT);

What is this doing? Basically handling the case where the vq is broken?
It's kind of ugly to tweak feature bits; most code assumes they never
change. Please just return an error to the caller instead and handle it
there.

You can then avoid sprinkling the check for the feature bit
all over the code.
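
E.g. (untested sketch):

static int send_cmd_id(struct virtio_balloon *vb, __virtio32 *cmd_id)
{
	struct scatterlist sg;
	struct virtqueue *vq = vb->free_page_vq;
	int err;

	sg_init_one(&sg, cmd_id, sizeof(*cmd_id));

	err = virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL);
	if (!err)
		virtqueue_kick(vq);
	return err;
}

and then report_free_page_func decides what to do on failure, instead of
flipping feature bits behind everyone's back.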

> +
> +	virtqueue_kick(vq);
> +}
> +
>  /*
>   * While most virtqueues communicate guest-initiated requests to the hypervisor,
>   * the stats queue operates in reverse.  The driver initializes the virtqueue
> @@ -316,17 +382,6 @@ static void stats_handle_request(struct virtio_balloon *vb)
>  	virtqueue_kick(vq);
>  }
>  
> -static void virtballoon_changed(struct virtio_device *vdev)
> -{
> -	struct virtio_balloon *vb = vdev->priv;
> -	unsigned long flags;
> -
> -	spin_lock_irqsave(&vb->stop_update_lock, flags);
> -	if (!vb->stop_update)
> -		queue_work(system_freezable_wq, &vb->update_balloon_size_work);
> -	spin_unlock_irqrestore(&vb->stop_update_lock, flags);
> -}
> -
>  static inline s64 towards_target(struct virtio_balloon *vb)
>  {
>  	s64 target;
> @@ -343,6 +398,49 @@ static inline s64 towards_target(struct virtio_balloon *vb)
>  	return target - vb->num_pages;
>  }
>  
> +static void virtballoon_changed(struct virtio_device *vdev)
> +{
> +	struct virtio_balloon *vb = vdev->priv;
> +	unsigned long flags;
> +	__u32 cmd_id;
> +	s64 diff = towards_target(vb);
> +
> +	if (diff) {
> +		spin_lock_irqsave(&vb->stop_update_lock, flags);
> +		if (!vb->stop_update)
> +			queue_work(system_freezable_wq,
> +				   &vb->update_balloon_size_work);
> +		spin_unlock_irqrestore(&vb->stop_update_lock, flags);
> +	}
> +
> +	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
> +		virtio_cread(vdev, struct virtio_balloon_config,
> +			     free_page_report_cmd_id, &cmd_id);
> +		if (cmd_id == VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID) {
> +			vb->report_free_page = false;
> +		} else {
> +			/*
> +			 * The request is queued only when the ack of the
> +			 * previous request has been sent to host, which is
> +			 * indicated by start_cmd_id set to
> +			 * VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID. Otherwise,
> +			 * simply update the start_cmd_id, and when the
> +			 * previous queued work runs, the latest cmd id will
> +			 * be sent to host.
> +			 */

One thing I don't like about this one is that the previous request
will still try to run to completion.

And it all seems pretty complex.

How about:
- pass the cmd id to a queued work
- the queued work stores a copy of that cmd id and uses the copy,
  re-checking periodically and stopping if the cmd id changes:
  this will replace report_free_page too since that's set to stop.

This means you do not reuse the queued cmd id also
for the buffer - which is probably for the best.
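
Roughly, something like this (untested sketch; cmd_id_received and
cmd_id_active would be new __virtio32 fields, byte order handling and
outbuf lifetime glossed over):

static void report_free_page_func(struct work_struct *work)
{
	struct virtio_balloon *vb = container_of(work, struct virtio_balloon,
						 report_free_page_work);

	/* Snapshot the id this run acts on; the walk callback re-checks. */
	vb->cmd_id_active = READ_ONCE(vb->cmd_id_received);

	/* Start by acking the cmd id we act on with an outbuf */
	send_cmd_id(vb, &vb->cmd_id_active);

	walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);

	/* End by sending the stop id to the host with an outbuf */
	send_cmd_id(vb, &vb->stop_cmd_id);
}

with virtio_balloon_send_free_pages returning false as soon as
READ_ONCE(vb->cmd_id_received) != vb->cmd_id_active, so a stale run
stops early instead of walking all of memory to completion.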


> +			spin_lock_irqsave(&vb->stop_update_lock, flags);
> +			if (!vb->stop_update &&
> +			    virtio32_to_cpu(vdev, vb->start_cmd_id) ==
> +			    VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID)
> +				queue_work(vb->balloon_wq,
> +					   &vb->report_free_page_work);
> +			vb->report_free_page = true;
> +			vb->start_cmd_id = cpu_to_virtio32(vdev, cmd_id);
> +			spin_unlock_irqrestore(&vb->stop_update_lock, flags);

While it's ok to set the cmd id here because of the lock, the code is
easier to understand if you set everything up before you queue the
work.
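
I.e. roughly (sketch only; need_queue would be a new local bool):

	spin_lock_irqsave(&vb->stop_update_lock, flags);
	need_queue = !vb->stop_update &&
		     virtio32_to_cpu(vdev, vb->start_cmd_id) ==
		     VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID;
	vb->report_free_page = true;
	vb->start_cmd_id = cpu_to_virtio32(vdev, cmd_id);
	if (need_queue)
		queue_work(vb->balloon_wq, &vb->report_free_page_work);
	spin_unlock_irqrestore(&vb->stop_update_lock, flags);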


> +		}
> +	}
> +}
> +
>  static void update_balloon_size(struct virtio_balloon *vb)
>  {
>  	u32 actual = vb->num_pages;
> @@ -417,42 +515,108 @@ static void update_balloon_size_func(struct work_struct *work)
>  
>  static int init_vqs(struct virtio_balloon *vb)
>  {
> -	struct virtqueue *vqs[3];
> -	vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
> -	static const char * const names[] = { "inflate", "deflate", "stats" };
> -	int err, nvqs;
> +	struct virtqueue *vqs[VIRTIO_BALLOON_VQ_NUM];
> +	vq_callback_t *callbacks[VIRTIO_BALLOON_VQ_NUM];
> +	const char *names[VIRTIO_BALLOON_VQ_NUM];
> +	struct scatterlist sg;
> +	int ret;
>  
>  	/*
> -	 * We expect two virtqueues: inflate and deflate, and
> -	 * optionally stat.
> +	 * Inflateq and deflateq are used unconditionally. The names[]
> +	 * will be NULL if the related feature is not enabled, which will
> +	 * cause no allocation for the corresponding virtqueue in find_vqs.
>  	 */
> -	nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
> -	err = virtio_find_vqs(vb->vdev, nvqs, vqs, callbacks, names, NULL);
> -	if (err)
> -		return err;
> +	callbacks[VIRTIO_BALLOON_VQ_ID_INFLATE] = balloon_ack;
> +	names[VIRTIO_BALLOON_VQ_ID_INFLATE] = "inflate";
> +	callbacks[VIRTIO_BALLOON_VQ_ID_DEFLATE] = balloon_ack;
> +	names[VIRTIO_BALLOON_VQ_ID_DEFLATE] = "deflate";
> +	names[VIRTIO_BALLOON_VQ_ID_STATS] = NULL;
> +	names[VIRTIO_BALLOON_VQ_ID_FREE_PAGE] = NULL;
>  
> -	vb->inflate_vq = vqs[0];
> -	vb->deflate_vq = vqs[1];
>  	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
> -		struct scatterlist sg;
> -		unsigned int num_stats;
> -		vb->stats_vq = vqs[2];
> +		names[VIRTIO_BALLOON_VQ_ID_STATS] = "stats";
> +		callbacks[VIRTIO_BALLOON_VQ_ID_STATS] = stats_request;
> +	}
> +
> +	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
> +		names[VIRTIO_BALLOON_VQ_ID_FREE_PAGE] = "free_page_vq";
> +		callbacks[VIRTIO_BALLOON_VQ_ID_FREE_PAGE] = NULL;
> +	}
> +
> +	ret = vb->vdev->config->find_vqs(vb->vdev, VIRTIO_BALLOON_VQ_NUM,
> +					 vqs, callbacks, names, NULL, NULL);
> +	if (ret)
> +		return ret;
>  
> +	vb->inflate_vq = vqs[VIRTIO_BALLOON_VQ_ID_INFLATE];
> +	vb->deflate_vq = vqs[VIRTIO_BALLOON_VQ_ID_DEFLATE];
> +	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
> +		vb->stats_vq = vqs[VIRTIO_BALLOON_VQ_ID_STATS];
>  		/*
>  		 * Prime this virtqueue with one buffer so the hypervisor can
>  		 * use it to signal us later (it can't be broken yet!).
>  		 */
> -		num_stats = update_balloon_stats(vb);
> -
> -		sg_init_one(&sg, vb->stats, sizeof(vb->stats[0]) * num_stats);
> -		if (virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, GFP_KERNEL)
> -		    < 0)
> -			BUG();
> +		sg_init_one(&sg, vb->stats, sizeof(vb->stats));
> +		ret = virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb,
> +					   GFP_KERNEL);
> +		if (ret) {
> +			dev_warn(&vb->vdev->dev, "%s: add stat_vq failed\n",
> +				 __func__);
> +			return ret;
> +		}
>  		virtqueue_kick(vb->stats_vq);
>  	}
> +
> +	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> +		vb->free_page_vq = vqs[VIRTIO_BALLOON_VQ_ID_FREE_PAGE];
> +
>  	return 0;
>  }
>  
> +static bool virtio_balloon_send_free_pages(void *opaque, unsigned long pfn,
> +					   unsigned long nr_pages)
> +{
> +	struct virtio_balloon *vb = (struct virtio_balloon *)opaque;
> +	uint32_t len = nr_pages << PAGE_SHIFT;
> +	int ret;
> +
> +	if (!vb->report_free_page ||
> +	    unlikely(!virtio_has_feature(vb->vdev,
> +				         VIRTIO_BALLOON_F_FREE_PAGE_HINT)))
> +		return false;
> +
> +	ret = add_one_sg(vb->free_page_vq, pfn, len);
> +	if (unlikely(ret))
> +		__virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT);
> +
> +	return !ret;
> +}
> +
> +static void report_free_page_func(struct work_struct *work)
> +{
> +	struct virtio_balloon *vb;
> +	unsigned long flags;
> +
> +	vb = container_of(work, struct virtio_balloon, report_free_page_work);
> +
> +	/* Start by sending the obtained cmd id to the host with an outbuf */
> +	send_cmd_id(vb, &vb->start_cmd_id);
> +
> +	/*
> +	 * Set start_cmd_id to VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID to
> +	 * indicate a new request can be queued.
> +	 */
> +	spin_lock_irqsave(&vb->stop_update_lock, flags);
> +	vb->start_cmd_id = cpu_to_virtio32(vb->vdev,
> +				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
> +	spin_unlock_irqrestore(&vb->stop_update_lock, flags);
> +
> +	walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);

Can you teach walk_free_mem_block to return the && of all the
callback return values, so the caller knows whether the walk completed?
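
Then report_free_page_func can at least tell (sketch, assuming the walk
is changed to return a bool):

	bool done;

	done = walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
	if (!done)
		dev_dbg(&vb->vdev->dev, "%s: free page walk was cut short\n",
			__func__);

	/* End by sending the stop id to the host with an outbuf */
	send_cmd_id(vb, &vb->stop_cmd_id);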


> +
> +	/* End by sending the stop id to the host with an outbuf */
> +	send_cmd_id(vb, &vb->stop_cmd_id);
> +}
> +
>  #ifdef CONFIG_BALLOON_COMPACTION
>  /*
>   * virtballoon_migratepage - perform the balloon page migration on behalf of
> @@ -537,6 +701,7 @@ static struct file_system_type balloon_fs = {
>  static int virtballoon_probe(struct virtio_device *vdev)
>  {
>  	struct virtio_balloon *vb;
> +	__u32 poison_val;
>  	int err;
>  
>  	if (!vdev->config->get) {
> @@ -566,18 +731,37 @@ static int virtballoon_probe(struct virtio_device *vdev)
>  	if (err)
>  		goto out_free_vb;
>  
> +	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
> +		vb->balloon_wq = alloc_workqueue("balloon-wq",
> +					WQ_FREEZABLE | WQ_CPU_INTENSIVE, 0);

balloon_wq is initialized conditionally here but destroyed
unconditionally below. That will crash when not initialized
I think.
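
I.e. the remove path (and the probe error path) would need something like:

	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
		cancel_work_sync(&vb->report_free_page_work);
		destroy_workqueue(vb->balloon_wq);
	}

or at least a NULL check on vb->balloon_wq before destroying it.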


> +		if (!vb->balloon_wq) {
> +			err = -ENOMEM;
> +			goto out_del_vqs;
> +		}
> +		INIT_WORK(&vb->report_free_page_work, report_free_page_func);
> +		vb->start_cmd_id = cpu_to_virtio32(vdev,
> +				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
> +		vb->stop_cmd_id = cpu_to_virtio32(vdev,
> +				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
> +		if(virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON)) {
> +			poison_val = PAGE_POISON;
> +			virtio_cwrite(vb->vdev, struct virtio_balloon_config,
> +				      poison_val, &poison_val);
> +		}
> +	}
> +
>  	vb->nb.notifier_call = virtballoon_oom_notify;
>  	vb->nb.priority = VIRTBALLOON_OOM_NOTIFY_PRIORITY;
>  	err = register_oom_notifier(&vb->nb);
>  	if (err < 0)
> -		goto out_del_vqs;
> +		goto out_del_balloon_wq;
>  
>  #ifdef CONFIG_BALLOON_COMPACTION
>  	balloon_mnt = kern_mount(&balloon_fs);
>  	if (IS_ERR(balloon_mnt)) {
>  		err = PTR_ERR(balloon_mnt);
>  		unregister_oom_notifier(&vb->nb);
> -		goto out_del_vqs;
> +		goto out_del_balloon_wq;
>  	}
>  
>  	vb->vb_dev_info.migratepage = virtballoon_migratepage;
> @@ -587,7 +771,7 @@ static int virtballoon_probe(struct virtio_device *vdev)
>  		kern_unmount(balloon_mnt);
>  		unregister_oom_notifier(&vb->nb);
>  		vb->vb_dev_info.inode = NULL;
> -		goto out_del_vqs;
> +		goto out_del_balloon_wq;
>  	}
>  	vb->vb_dev_info.inode->i_mapping->a_ops = &balloon_aops;
>  #endif
> @@ -598,6 +782,8 @@ static int virtballoon_probe(struct virtio_device *vdev)
>  		virtballoon_changed(vdev);
>  	return 0;
>  
> +out_del_balloon_wq:
> +	destroy_workqueue(vb->balloon_wq);
>  out_del_vqs:
>  	vdev->config->del_vqs(vdev);
>  out_free_vb:
> @@ -630,6 +816,8 @@ static void virtballoon_remove(struct virtio_device *vdev)
>  	spin_unlock_irq(&vb->stop_update_lock);
>  	cancel_work_sync(&vb->update_balloon_size_work);
>  	cancel_work_sync(&vb->update_balloon_stats_work);
> +	cancel_work_sync(&vb->report_free_page_work);
> +	destroy_workqueue(vb->balloon_wq);
>  
>  	remove_common(vb);
>  #ifdef CONFIG_BALLOON_COMPACTION
> @@ -674,6 +862,9 @@ static int virtballoon_restore(struct virtio_device *vdev)
>  
>  static int virtballoon_validate(struct virtio_device *vdev)
>  {
> +	if (!page_poisoning_enabled())
> +		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
> +
>  	__virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM);
>  	return 0;
>  }
> @@ -682,6 +873,8 @@ static unsigned int features[] = {
>  	VIRTIO_BALLOON_F_MUST_TELL_HOST,
>  	VIRTIO_BALLOON_F_STATS_VQ,
>  	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
> +	VIRTIO_BALLOON_F_FREE_PAGE_HINT,
> +	VIRTIO_BALLOON_F_PAGE_POISON,
>  };
>  
>  static struct virtio_driver virtio_balloon_driver = {
> diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
> index 343d7dd..3f97067 100644
> --- a/include/uapi/linux/virtio_balloon.h
> +++ b/include/uapi/linux/virtio_balloon.h
> @@ -34,15 +34,22 @@
>  #define VIRTIO_BALLOON_F_MUST_TELL_HOST	0 /* Tell before reclaiming pages */
>  #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
>  #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
> +#define VIRTIO_BALLOON_F_FREE_PAGE_HINT	3 /* VQ to report free pages */
> +#define VIRTIO_BALLOON_F_PAGE_POISON	4 /* Guest is using page poisoning */
>  
>  /* Size of a PFN in the balloon interface. */
>  #define VIRTIO_BALLOON_PFN_SHIFT 12
>  
> +#define VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID		0
>  struct virtio_balloon_config {
>  	/* Number of pages host wants Guest to give up. */
>  	__u32 num_pages;
>  	/* Number of pages we've actually got in balloon. */
>  	__u32 actual;
> +	/* Free page report command id, readonly by guest */
> +	__u32 free_page_report_cmd_id;
> +	/* Stores PAGE_POISON if page poisoning is in use */
> +	__u32 poison_val;
>  };
>  
>  #define VIRTIO_BALLOON_S_SWAP_IN  0   /* Amount of memory swapped in */
> -- 
> 2.7.4

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
@ 2018-01-24 17:15     ` Michael S. Tsirkin
  0 siblings, 0 replies; 65+ messages in thread
From: Michael S. Tsirkin @ 2018-01-24 17:15 UTC (permalink / raw)
  To: Wei Wang
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
> Negotiation of the VIRTIO_BALLOON_F_FREE_PAGE_HINT feature indicates the
> support of reporting hints of guest free pages to host via virtio-balloon.
> 
> Host requests the guest to report free pages by sending a new cmd
> id to the guest via the free_page_report_cmd_id configuration register.
> 
> When the guest starts to report, the first element added to the free page
> vq is the cmd id given by host. When the guest finishes the reporting
> of all the free pages, VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID is added
> to the vq to tell host that the reporting is done. Host may also requests
> the guest to stop the reporting in advance by sending the stop cmd id to
> the guest via the configuration register.
> 
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> Signed-off-by: Liang Li <liang.z.li@intel.com>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> ---
>  drivers/virtio/virtio_balloon.c     | 265 +++++++++++++++++++++++++++++++-----
>  include/uapi/linux/virtio_balloon.h |   7 +
>  2 files changed, 236 insertions(+), 36 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index a1fb52c..4440873 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -51,9 +51,21 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
>  static struct vfsmount *balloon_mnt;
>  #endif
>  
> +/* The number of virtqueues supported by virtio-balloon */
> +#define VIRTIO_BALLOON_VQ_NUM		4
> +#define VIRTIO_BALLOON_VQ_ID_INFLATE	0
> +#define VIRTIO_BALLOON_VQ_ID_DEFLATE	1
> +#define VIRTIO_BALLOON_VQ_ID_STATS	2
> +#define VIRTIO_BALLOON_VQ_ID_FREE_PAGE	3
> +

Please do an enum instead of defines. VQ_ID can be just VQ
(it's not an ID, it's just the number).


>  struct virtio_balloon {
>  	struct virtio_device *vdev;
> -	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
> +	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;
> +
> +	/* Balloon's own wq for cpu-intensive work items */
> +	struct workqueue_struct *balloon_wq;
> +	/* The free page reporting work item submitted to the balloon wq */
> +	struct work_struct report_free_page_work;
>  
>  	/* The balloon servicing is delegated to a freezable workqueue. */
>  	struct work_struct update_balloon_stats_work;
> @@ -63,6 +75,13 @@ struct virtio_balloon {
>  	spinlock_t stop_update_lock;
>  	bool stop_update;
>  
> +	/* Start to report free pages */
> +	bool report_free_page;
> +	/* Stores the cmd id given by host to start the free page reporting */
> +	__virtio32 start_cmd_id;
> +	/* Stores STOP_ID as a sign to tell host that the reporting is done */
> +	__virtio32 stop_cmd_id;
> +
>  	/* Waiting for host to ack the pages we released. */
>  	wait_queue_head_t acked;
>  
> @@ -281,6 +300,53 @@ static unsigned int update_balloon_stats(struct virtio_balloon *vb)
>  	return idx;
>  }
>  
> +static int add_one_sg(struct virtqueue *vq, unsigned long pfn, uint32_t len)
> +{
> +	struct scatterlist sg;
> +	unsigned int unused;
> +	int ret = 0;
> +
> +	sg_init_table(&sg, 1);
> +	sg_set_page(&sg, pfn_to_page(pfn), len, 0);
> +
> +	/* Detach all the used buffers from the vq */
> +	while (virtqueue_get_buf(vq, &unused))
> +		;
> +
> +	/*
> +	 * Since this is an optimization feature, losing a couple of free
> +	 * pages to report isn't important. We simply return without adding
> +	 * the page if the vq is full.
> +	 * We are adding one entry each time, which essentially results in no
> +	 * memory allocation, so the GFP_KERNEL flag below can be ignored.
> +	 * There is always one entry reserved for the cmd id to use.
> +	 */
> +	if (vq->num_free > 1)
> +		ret = virtqueue_add_inbuf(vq, &sg, 1, vq, GFP_KERNEL);
> +
> +	if (vq->num_free < virtqueue_get_vring_size(vq) / 2)
> +		virtqueue_kick(vq);
> +
> +	return ret;
> +}
> +
> +static void send_cmd_id(struct virtio_balloon *vb, __virtio32 *cmd_id)
> +{
> +	struct scatterlist sg;
> +	struct virtqueue *vq = vb->free_page_vq;
> +
> +	if (unlikely(!virtio_has_feature(vb->vdev,
> +				         VIRTIO_BALLOON_F_FREE_PAGE_HINT)))
> +		return;
> +
> +	sg_init_one(&sg, cmd_id, sizeof(*cmd_id));
> +
> +	if (virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL))
> +		__virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT);

What is this doing? Basically handling the case where the vq is broken?
It's kind of ugly to tweak feature bits; most code assumes they never
change.  Please just return an error to the caller instead and handle it
there.

You can then avoid sprinkling the check for the feature bit
all over the code.
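
As a rough sketch of that direction (the int return type is the change
being suggested here, not part of the patch):

static int send_cmd_id(struct virtio_balloon *vb, __virtio32 *cmd_id)
{
	struct scatterlist sg;
	struct virtqueue *vq = vb->free_page_vq;
	int err;

	sg_init_one(&sg, cmd_id, sizeof(*cmd_id));

	/* Let the caller decide what to do when the vq is broken or full */
	err = virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL);
	if (err)
		return err;

	virtqueue_kick(vq);
	return 0;
}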

> +
> +	virtqueue_kick(vq);
> +}
> +
>  /*
>   * While most virtqueues communicate guest-initiated requests to the hypervisor,
>   * the stats queue operates in reverse.  The driver initializes the virtqueue
> @@ -316,17 +382,6 @@ static void stats_handle_request(struct virtio_balloon *vb)
>  	virtqueue_kick(vq);
>  }
>  
> -static void virtballoon_changed(struct virtio_device *vdev)
> -{
> -	struct virtio_balloon *vb = vdev->priv;
> -	unsigned long flags;
> -
> -	spin_lock_irqsave(&vb->stop_update_lock, flags);
> -	if (!vb->stop_update)
> -		queue_work(system_freezable_wq, &vb->update_balloon_size_work);
> -	spin_unlock_irqrestore(&vb->stop_update_lock, flags);
> -}
> -
>  static inline s64 towards_target(struct virtio_balloon *vb)
>  {
>  	s64 target;
> @@ -343,6 +398,49 @@ static inline s64 towards_target(struct virtio_balloon *vb)
>  	return target - vb->num_pages;
>  }
>  
> +static void virtballoon_changed(struct virtio_device *vdev)
> +{
> +	struct virtio_balloon *vb = vdev->priv;
> +	unsigned long flags;
> +	__u32 cmd_id;
> +	s64 diff = towards_target(vb);
> +
> +	if (diff) {
> +		spin_lock_irqsave(&vb->stop_update_lock, flags);
> +		if (!vb->stop_update)
> +			queue_work(system_freezable_wq,
> +				   &vb->update_balloon_size_work);
> +		spin_unlock_irqrestore(&vb->stop_update_lock, flags);
> +	}
> +
> +	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
> +		virtio_cread(vdev, struct virtio_balloon_config,
> +			     free_page_report_cmd_id, &cmd_id);
> +		if (cmd_id == VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID) {
> +			vb->report_free_page = false;
> +		} else {
> +			/*
> +			 * The request is queued only when the ack of the
> +			 * previous request has been sent to host, which is
> +			 * indicated by start_cmd_id set to
> +			 * VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID. Otherwise,
> +			 * simply update the start_cmd_id, and when the
> +			 * previous queued work runs, the latest cmd id will
> +			 * be sent to host.
> +			 */

One thing I don't like about this one is that the previous request
will still try to run to completion.

And it all seems pretty complex.

How about:
- pass the cmd id to the queued work
- the queued work takes a copy of that cmd id and uses the copy,
  re-checking periodically and stopping if the cmd id changes; this
  also replaces report_free_page, since that flag is only there to
  signal a stop.

This means you do not reuse the queued cmd id also
for the buffer - which is probably for the best.
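
As a rough sketch of that flow (the cmd_id_active/cmd_id_received fields
are illustrative, not from this patch): the config-change handler only
records the latest cmd id, the queued work takes a private copy before it
starts reporting, and the page-walk callback compares the two:

static bool virtio_balloon_send_free_pages(void *opaque, unsigned long pfn,
					   unsigned long nr_pages)
{
	struct virtio_balloon *vb = opaque;

	/* Host pushed a new cmd id (or the stop id): abandon this run */
	if (vb->cmd_id_active != vb->cmd_id_received)
		return false;

	return !add_one_sg(vb->free_page_vq, pfn, nr_pages << PAGE_SHIFT);
}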


> +			spin_lock_irqsave(&vb->stop_update_lock, flags);
> +			if (!vb->stop_update &&
> +			    virtio32_to_cpu(vdev, vb->start_cmd_id) ==
> +			    VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID)
> +				queue_work(vb->balloon_wq,
> +					   &vb->report_free_page_work);
> +			vb->report_free_page = true;
> +			vb->start_cmd_id = cpu_to_virtio32(vdev, cmd_id);
> +			spin_unlock_irqrestore(&vb->stop_update_lock, flags);

While it's ok to set the cmd id here because of the lock, the code is
easier to understand if you set up everything before you queue the
work item.
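
E.g. roughly (the same statements, reordered so queue_work comes last;
sketch only, assuming a local "bool do_queue"):

	spin_lock_irqsave(&vb->stop_update_lock, flags);
	do_queue = !vb->stop_update &&
		   virtio32_to_cpu(vdev, vb->start_cmd_id) ==
		   VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID;
	vb->report_free_page = true;
	vb->start_cmd_id = cpu_to_virtio32(vdev, cmd_id);
	if (do_queue)
		queue_work(vb->balloon_wq, &vb->report_free_page_work);
	spin_unlock_irqrestore(&vb->stop_update_lock, flags);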


> +		}
> +	}
> +}
> +
>  static void update_balloon_size(struct virtio_balloon *vb)
>  {
>  	u32 actual = vb->num_pages;
> @@ -417,42 +515,108 @@ static void update_balloon_size_func(struct work_struct *work)
>  
>  static int init_vqs(struct virtio_balloon *vb)
>  {
> -	struct virtqueue *vqs[3];
> -	vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
> -	static const char * const names[] = { "inflate", "deflate", "stats" };
> -	int err, nvqs;
> +	struct virtqueue *vqs[VIRTIO_BALLOON_VQ_NUM];
> +	vq_callback_t *callbacks[VIRTIO_BALLOON_VQ_NUM];
> +	const char *names[VIRTIO_BALLOON_VQ_NUM];
> +	struct scatterlist sg;
> +	int ret;
>  
>  	/*
> -	 * We expect two virtqueues: inflate and deflate, and
> -	 * optionally stat.
> +	 * Inflateq and deflateq are used unconditionally. The names[]
> +	 * will be NULL if the related feature is not enabled, which will
> +	 * cause no allocation for the corresponding virtqueue in find_vqs.
>  	 */
> -	nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
> -	err = virtio_find_vqs(vb->vdev, nvqs, vqs, callbacks, names, NULL);
> -	if (err)
> -		return err;
> +	callbacks[VIRTIO_BALLOON_VQ_ID_INFLATE] = balloon_ack;
> +	names[VIRTIO_BALLOON_VQ_ID_INFLATE] = "inflate";
> +	callbacks[VIRTIO_BALLOON_VQ_ID_DEFLATE] = balloon_ack;
> +	names[VIRTIO_BALLOON_VQ_ID_DEFLATE] = "deflate";
> +	names[VIRTIO_BALLOON_VQ_ID_STATS] = NULL;
> +	names[VIRTIO_BALLOON_VQ_ID_FREE_PAGE] = NULL;
>  
> -	vb->inflate_vq = vqs[0];
> -	vb->deflate_vq = vqs[1];
>  	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
> -		struct scatterlist sg;
> -		unsigned int num_stats;
> -		vb->stats_vq = vqs[2];
> +		names[VIRTIO_BALLOON_VQ_ID_STATS] = "stats";
> +		callbacks[VIRTIO_BALLOON_VQ_ID_STATS] = stats_request;
> +	}
> +
> +	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
> +		names[VIRTIO_BALLOON_VQ_ID_FREE_PAGE] = "free_page_vq";
> +		callbacks[VIRTIO_BALLOON_VQ_ID_FREE_PAGE] = NULL;
> +	}
> +
> +	ret = vb->vdev->config->find_vqs(vb->vdev, VIRTIO_BALLOON_VQ_NUM,
> +					 vqs, callbacks, names, NULL, NULL);
> +	if (ret)
> +		return ret;
>  
> +	vb->inflate_vq = vqs[VIRTIO_BALLOON_VQ_ID_INFLATE];
> +	vb->deflate_vq = vqs[VIRTIO_BALLOON_VQ_ID_DEFLATE];
> +	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
> +		vb->stats_vq = vqs[VIRTIO_BALLOON_VQ_ID_STATS];
>  		/*
>  		 * Prime this virtqueue with one buffer so the hypervisor can
>  		 * use it to signal us later (it can't be broken yet!).
>  		 */
> -		num_stats = update_balloon_stats(vb);
> -
> -		sg_init_one(&sg, vb->stats, sizeof(vb->stats[0]) * num_stats);
> -		if (virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, GFP_KERNEL)
> -		    < 0)
> -			BUG();
> +		sg_init_one(&sg, vb->stats, sizeof(vb->stats));
> +		ret = virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb,
> +					   GFP_KERNEL);
> +		if (ret) {
> +			dev_warn(&vb->vdev->dev, "%s: add stat_vq failed\n",
> +				 __func__);
> +			return ret;
> +		}
>  		virtqueue_kick(vb->stats_vq);
>  	}
> +
> +	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
> +		vb->free_page_vq = vqs[VIRTIO_BALLOON_VQ_ID_FREE_PAGE];
> +
>  	return 0;
>  }
>  
> +static bool virtio_balloon_send_free_pages(void *opaque, unsigned long pfn,
> +					   unsigned long nr_pages)
> +{
> +	struct virtio_balloon *vb = (struct virtio_balloon *)opaque;
> +	uint32_t len = nr_pages << PAGE_SHIFT;
> +	int ret;
> +
> +	if (!vb->report_free_page ||
> +	    unlikely(!virtio_has_feature(vb->vdev,
> +				         VIRTIO_BALLOON_F_FREE_PAGE_HINT)))
> +		return false;
> +
> +	ret = add_one_sg(vb->free_page_vq, pfn, len);
> +	if (unlikely(ret))
> +		__virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT);
> +
> +	return !ret;
> +}
> +
> +static void report_free_page_func(struct work_struct *work)
> +{
> +	struct virtio_balloon *vb;
> +	unsigned long flags;
> +
> +	vb = container_of(work, struct virtio_balloon, report_free_page_work);
> +
> +	/* Start by sending the obtained cmd id to the host with an outbuf */
> +	send_cmd_id(vb, &vb->start_cmd_id);
> +
> +	/*
> +	 * Set start_cmd_id to VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID to
> +	 * indicate a new request can be queued.
> +	 */
> +	spin_lock_irqsave(&vb->stop_update_lock, flags);
> +	vb->start_cmd_id = cpu_to_virtio32(vb->vdev,
> +				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
> +	spin_unlock_irqrestore(&vb->stop_update_lock, flags);
> +
> +	walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);

Can you teach walk_free_mem_block to return the && of all the
callback return values, so the caller knows whether the walk completed?
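
For instance (a sketch of the signature only; the body lives in patch 1/2,
which isn't quoted here, and the parameter names are illustrative):

/*
 * Sketch: have the walk report whether it ran to completion.  Each
 * callback return value is &&-ed into the result, and the walk stops
 * early as soon as a callback returns false.
 */
bool walk_free_mem_block(void *opaque, int min_order,
			 bool (*report_pfn_range)(void *opaque,
						  unsigned long pfn,
						  unsigned long num));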


> +
> +	/* End by sending the stop id to the host with an outbuf */
> +	send_cmd_id(vb, &vb->stop_cmd_id);
> +}
> +
>  #ifdef CONFIG_BALLOON_COMPACTION
>  /*
>   * virtballoon_migratepage - perform the balloon page migration on behalf of
> @@ -537,6 +701,7 @@ static struct file_system_type balloon_fs = {
>  static int virtballoon_probe(struct virtio_device *vdev)
>  {
>  	struct virtio_balloon *vb;
> +	__u32 poison_val;
>  	int err;
>  
>  	if (!vdev->config->get) {
> @@ -566,18 +731,37 @@ static int virtballoon_probe(struct virtio_device *vdev)
>  	if (err)
>  		goto out_free_vb;
>  
> +	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
> +		vb->balloon_wq = alloc_workqueue("balloon-wq",
> +					WQ_FREEZABLE | WQ_CPU_INTENSIVE, 0);

balloon_wq is initialized conditionally here but destroyed
unconditionally below. I think that will crash when it was never
initialized.
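
A minimal guard in the error path and in remove() could look like this
(sketch only):

	/* Only tear down what was actually set up */
	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
		cancel_work_sync(&vb->report_free_page_work);
		destroy_workqueue(vb->balloon_wq);
	}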


> +		if (!vb->balloon_wq) {
> +			err = -ENOMEM;
> +			goto out_del_vqs;
> +		}
> +		INIT_WORK(&vb->report_free_page_work, report_free_page_func);
> +		vb->start_cmd_id = cpu_to_virtio32(vdev,
> +				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
> +		vb->stop_cmd_id = cpu_to_virtio32(vdev,
> +				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
> +		if(virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON)) {
> +			poison_val = PAGE_POISON;
> +			virtio_cwrite(vb->vdev, struct virtio_balloon_config,
> +				      poison_val, &poison_val);
> +		}
> +	}
> +
>  	vb->nb.notifier_call = virtballoon_oom_notify;
>  	vb->nb.priority = VIRTBALLOON_OOM_NOTIFY_PRIORITY;
>  	err = register_oom_notifier(&vb->nb);
>  	if (err < 0)
> -		goto out_del_vqs;
> +		goto out_del_balloon_wq;
>  
>  #ifdef CONFIG_BALLOON_COMPACTION
>  	balloon_mnt = kern_mount(&balloon_fs);
>  	if (IS_ERR(balloon_mnt)) {
>  		err = PTR_ERR(balloon_mnt);
>  		unregister_oom_notifier(&vb->nb);
> -		goto out_del_vqs;
> +		goto out_del_balloon_wq;
>  	}
>  
>  	vb->vb_dev_info.migratepage = virtballoon_migratepage;
> @@ -587,7 +771,7 @@ static int virtballoon_probe(struct virtio_device *vdev)
>  		kern_unmount(balloon_mnt);
>  		unregister_oom_notifier(&vb->nb);
>  		vb->vb_dev_info.inode = NULL;
> -		goto out_del_vqs;
> +		goto out_del_balloon_wq;
>  	}
>  	vb->vb_dev_info.inode->i_mapping->a_ops = &balloon_aops;
>  #endif
> @@ -598,6 +782,8 @@ static int virtballoon_probe(struct virtio_device *vdev)
>  		virtballoon_changed(vdev);
>  	return 0;
>  
> +out_del_balloon_wq:
> +	destroy_workqueue(vb->balloon_wq);
>  out_del_vqs:
>  	vdev->config->del_vqs(vdev);
>  out_free_vb:
> @@ -630,6 +816,8 @@ static void virtballoon_remove(struct virtio_device *vdev)
>  	spin_unlock_irq(&vb->stop_update_lock);
>  	cancel_work_sync(&vb->update_balloon_size_work);
>  	cancel_work_sync(&vb->update_balloon_stats_work);
> +	cancel_work_sync(&vb->report_free_page_work);
> +	destroy_workqueue(vb->balloon_wq);
>  
>  	remove_common(vb);
>  #ifdef CONFIG_BALLOON_COMPACTION
> @@ -674,6 +862,9 @@ static int virtballoon_restore(struct virtio_device *vdev)
>  
>  static int virtballoon_validate(struct virtio_device *vdev)
>  {
> +	if (!page_poisoning_enabled())
> +		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
> +
>  	__virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM);
>  	return 0;
>  }
> @@ -682,6 +873,8 @@ static unsigned int features[] = {
>  	VIRTIO_BALLOON_F_MUST_TELL_HOST,
>  	VIRTIO_BALLOON_F_STATS_VQ,
>  	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
> +	VIRTIO_BALLOON_F_FREE_PAGE_HINT,
> +	VIRTIO_BALLOON_F_PAGE_POISON,
>  };
>  
>  static struct virtio_driver virtio_balloon_driver = {
> diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
> index 343d7dd..3f97067 100644
> --- a/include/uapi/linux/virtio_balloon.h
> +++ b/include/uapi/linux/virtio_balloon.h
> @@ -34,15 +34,22 @@
>  #define VIRTIO_BALLOON_F_MUST_TELL_HOST	0 /* Tell before reclaiming pages */
>  #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
>  #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
> +#define VIRTIO_BALLOON_F_FREE_PAGE_HINT	3 /* VQ to report free pages */
> +#define VIRTIO_BALLOON_F_PAGE_POISON	4 /* Guest is using page poisoning */
>  
>  /* Size of a PFN in the balloon interface. */
>  #define VIRTIO_BALLOON_PFN_SHIFT 12
>  
> +#define VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID		0
>  struct virtio_balloon_config {
>  	/* Number of pages host wants Guest to give up. */
>  	__u32 num_pages;
>  	/* Number of pages we've actually got in balloon. */
>  	__u32 actual;
> +	/* Free page report command id, readonly by guest */
> +	__u32 free_page_report_cmd_id;
> +	/* Stores PAGE_POISON if page poisoning is in use */
> +	__u32 poison_val;
>  };
>  
>  #define VIRTIO_BALLOON_S_SWAP_IN  0   /* Amount of memory swapped in */
> -- 
> 2.7.4


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
  2018-01-24 17:15     ` Michael S. Tsirkin
  (?)
@ 2018-01-25  3:32       ` Wei Wang
  -1 siblings, 0 replies; 65+ messages in thread
From: Wei Wang @ 2018-01-25  3:32 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On 01/25/2018 01:15 AM, Michael S. Tsirkin wrote:
> On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
> +
> +static void report_free_page_func(struct work_struct *work)
> +{
> +	struct virtio_balloon *vb;
> +	unsigned long flags;
> +
> +	vb = container_of(work, struct virtio_balloon, report_free_page_work);
> +
> +	/* Start by sending the obtained cmd id to the host with an outbuf */
> +	send_cmd_id(vb, &vb->start_cmd_id);
> +
> +	/*
> +	 * Set start_cmd_id to VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID to
> +	 * indicate a new request can be queued.
> +	 */
> +	spin_lock_irqsave(&vb->stop_update_lock, flags);
> +	vb->start_cmd_id = cpu_to_virtio32(vb->vdev,
> +				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
> +	spin_unlock_irqrestore(&vb->stop_update_lock, flags);
> +
> +	walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
> Can you teach walk_free_mem_block to return the && of all
> return calls, so caller knows whether it completed?

There are two cases that can cause walk_free_mem_block to return
without completing:
1) host requests to stop in advance
2) vq->broken

How about letting walk_free_mem_block simply return the value returned 
by its callback (i.e. virtio_balloon_send_free_pages)?

For a host request to stop, it returns "1", and the above only bails out
when walk_free_mem_block returns a "< 0" value.
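
I.e. something along these lines on the caller side (sketch only; the exact
return convention is what's being discussed here):

	int ret;

	ret = walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
	if (ret < 0)
		return;		/* the vq is broken: give up quietly */

	/* ret is 0 (walk completed) or 1 (host asked to stop): ack with stop id */
	send_cmd_id(vb, &vb->stop_cmd_id);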

Best,
Wei

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
  2018-01-24 17:15     ` Michael S. Tsirkin
  (?)
@ 2018-01-25  9:45       ` Wei Wang
  -1 siblings, 0 replies; 65+ messages in thread
From: Wei Wang @ 2018-01-25  9:45 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On 01/25/2018 01:15 AM, Michael S. Tsirkin wrote:
> On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
>>   
>>
>> What is this doing? Basically handling the case where vq is broken?
>> It's kind of ugly to tweak feature bits, most code assumes they never
>> change.  Please just return an error to caller instead and handle it
>> there.
>>
>> You can then avoid sprinking the check for the feature bit
>> all over the code.
>>
>
> One thing I don't like about this one is that the previous request
> will still try to run to completion.
>
> And it all seems pretty complex.
>
> How about:
> - pass cmd id to a queued work
> - queued work gets that cmd id, stores a copy and uses that,
>    re-checking periodically - stop if cmd id changes:
>    will replace  report_free_page too since that's set to
>    stop.
>
> This means you do not reuse the queued cmd id also
> for the buffer - which is probably for the best.

Thanks for the suggestion. Please check how it is implemented in v25.
Just a reminder that the workqueue already guarantees that the same queued
work item does not run concurrently.
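
Below is a minimal sketch of the cmd-id-copy approach suggested above; the
field names (cmd_id_received, cmd_id_active) and the send_cmd_id helper are
assumptions for illustration only, not necessarily what v25 does:

	static void report_free_page_func(struct work_struct *work)
	{
		struct virtio_balloon *vb = container_of(work,
				struct virtio_balloon, report_free_page_work);

		/* Keep a private copy of the id this request was queued with. */
		vb->cmd_id_active = READ_ONCE(vb->cmd_id_received);
		send_cmd_id(vb, &vb->cmd_id_active);
		walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
	}

	static bool virtio_balloon_send_free_pages(void *opaque, unsigned long pfn,
						   unsigned long nr_pages)
	{
		struct virtio_balloon *vb = opaque;

		/* Re-check on every call; stop once the host moves to a new id. */
		if (READ_ONCE(vb->cmd_id_received) != vb->cmd_id_active)
			return false;

		/* ... add the pfn range to the free page vq as the patch does ... */
		return true;
	}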

Best,
Wei

^ permalink raw reply	[flat|nested] 65+ messages in thread


* Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
  2018-01-25  3:32       ` Wei Wang
@ 2018-01-25 11:28         ` Tetsuo Handa
  -1 siblings, 0 replies; 65+ messages in thread
From: Tetsuo Handa @ 2018-01-25 11:28 UTC (permalink / raw)
  To: Wei Wang, Michael S. Tsirkin
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On 2018/01/25 12:32, Wei Wang wrote:
> On 01/25/2018 01:15 AM, Michael S. Tsirkin wrote:
>> On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
>> +
>> +static void report_free_page_func(struct work_struct *work)
>> +{
>> +    struct virtio_balloon *vb;
>> +    unsigned long flags;
>> +
>> +    vb = container_of(work, struct virtio_balloon, report_free_page_work);
>> +
>> +    /* Start by sending the obtained cmd id to the host with an outbuf */
>> +    send_cmd_id(vb, &vb->start_cmd_id);
>> +
>> +    /*
>> +     * Set start_cmd_id to VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID to
>> +     * indicate a new request can be queued.
>> +     */
>> +    spin_lock_irqsave(&vb->stop_update_lock, flags);
>> +    vb->start_cmd_id = cpu_to_virtio32(vb->vdev,
>> +                VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
>> +    spin_unlock_irqrestore(&vb->stop_update_lock, flags);
>> +
>> +    walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
>> Can you teach walk_free_mem_block to return the && of all
>> return calls, so caller knows whether it completed?
> 
> There will be two cases that can cause walk_free_mem_block to return without completing:
> 1) host requests to stop in advance
> 2) vq->broken
> 
> How about letting walk_free_mem_block simply return the value returned by its callback (i.e. virtio_balloon_send_free_pages)?
> 
> For host requests to stop, it returns "1", and the above only bails out when walk_free_mem_block return a "< 0" value.

I feel that virtio_balloon_send_free_pages is doing too much heavy work.

It can be called many times with IRQs disabled, and the number of calls
depends on the amount of free pages (and the fragmentation state).
Generally, more free pages means more calls.

Then, why don't you allocate some pages to hold all the pfn values, call
walk_free_mem_block() only to store the pfn values, and then send the pfn
values without disabling IRQs?
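
A minimal sketch of that buffering idea (the pfn_buf structure, its sizing
and the send step are assumptions for illustration, not a concrete proposal):

	struct pfn_range {
		unsigned long pfn;
		unsigned long nr_pages;
	};

	struct pfn_buf {
		struct pfn_range *ranges;
		unsigned long used;
		unsigned long capacity;
	};

	/* Callback: only record the range; no virtqueue work under zone->lock. */
	static bool record_free_pages(void *opaque, unsigned long pfn,
				      unsigned long nr_pages)
	{
		struct pfn_buf *buf = opaque;

		if (buf->used == buf->capacity)
			return false;		/* buffer full, stop the walk */
		buf->ranges[buf->used].pfn = pfn;
		buf->ranges[buf->used].nr_pages = nr_pages;
		buf->used++;
		return true;
	}

	/* Afterwards, with IRQs enabled, send buf->ranges[0..used) to the host. */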

^ permalink raw reply	[flat|nested] 65+ messages in thread


* Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
  2018-01-25 11:28         ` Tetsuo Handa
  (?)
  (?)
@ 2018-01-25 12:55           ` Wei Wang
  -1 siblings, 0 replies; 65+ messages in thread
From: Wei Wang @ 2018-01-25 12:55 UTC (permalink / raw)
  To: Tetsuo Handa, Michael S. Tsirkin
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On 01/25/2018 07:28 PM, Tetsuo Handa wrote:
> On 2018/01/25 12:32, Wei Wang wrote:
>> On 01/25/2018 01:15 AM, Michael S. Tsirkin wrote:
>>> On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
>>> +
>>> +static void report_free_page_func(struct work_struct *work)
>>> +{
>>> +    struct virtio_balloon *vb;
>>> +    unsigned long flags;
>>> +
>>> +    vb = container_of(work, struct virtio_balloon, report_free_page_work);
>>> +
>>> +    /* Start by sending the obtained cmd id to the host with an outbuf */
>>> +    send_cmd_id(vb, &vb->start_cmd_id);
>>> +
>>> +    /*
>>> +     * Set start_cmd_id to VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID to
>>> +     * indicate a new request can be queued.
>>> +     */
>>> +    spin_lock_irqsave(&vb->stop_update_lock, flags);
>>> +    vb->start_cmd_id = cpu_to_virtio32(vb->vdev,
>>> +                VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
>>> +    spin_unlock_irqrestore(&vb->stop_update_lock, flags);
>>> +
>>> +    walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
>>> Can you teach walk_free_mem_block to return the && of all
>>> return calls, so caller knows whether it completed?
>> There will be two cases that can cause walk_free_mem_block to return without completing:
>> 1) host requests to stop in advance
>> 2) vq->broken
>>
>> How about letting walk_free_mem_block simply return the value returned by its callback (i.e. virtio_balloon_send_free_pages)?
>>
>> For host requests to stop, it returns "1", and the above only bails out when walk_free_mem_block return a "< 0" value.
> I feel that virtio_balloon_send_free_pages is doing too heavy things.
>
> It can be called for many times with IRQ disabled. Number of times
> it is called depends on amount of free pages (and fragmentation state).
> Generally, more free pages, more calls.
>
> Then, why don't you allocate some pages for holding all pfn values
> and then call walk_free_mem_block() only for storing pfn values
> and then send pfn values without disabling IRQ?

We have actually tried many approaches for this feature before, and what
you suggested is one of them; you can find the related discussion in the
earlier versions. Besides the complexity that approach runs into once you
think it through, I can share the performance (live migration time)
comparison of that approach with the one in this patch: ~405 ms vs. ~260 ms.

The concerns you raised have also been discussed. The strategy is to start
with something fundamental and improve incrementally (earlier versions also
had a variant with finer-grained locking, but we decided to leave that to a
future improvement out of prudence). If possible, please let Michael review
this patch; he already knows all of this. We will finish this feature as
soon as possible and can then discuss another one with you if you want.
Thanks.

Best,
Wei

^ permalink raw reply	[flat|nested] 65+ messages in thread


* Re: [PATCH v24 1/2] mm: support reporting free page blocks
  2018-01-24 10:42   ` Wei Wang
  (?)
@ 2018-01-25 13:41     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 65+ messages in thread
From: Michael S. Tsirkin @ 2018-01-25 13:41 UTC (permalink / raw)
  To: Wei Wang
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> This patch adds support to walk through the free page blocks in the
> system and report them via a callback function. Some page blocks may
> leave the free list after zone->lock is released, so it is the caller's
> responsibility to either detect or prevent the use of such pages.
> 
> One use example of this patch is to accelerate live migration by skipping
> the transfer of free pages reported from the guest. A popular method used
> by the hypervisor to track which part of memory is written during live
> migration is to write-protect all the guest memory. So, those pages that
> are reported as free pages but are written after the report function
> returns will be captured by the hypervisor, and they will be added to the
> next round of memory transfer.
> 
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> Signed-off-by: Liang Li <liang.z.li@intel.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Michal Hocko <mhocko@kernel.org>
> ---
>  include/linux/mm.h |  6 ++++
>  mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 97 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ea818ff..b3077dd 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
>  		unsigned long zone_start_pfn, unsigned long *zholes_size);
>  extern void free_initmem(void);
>  
> +extern void walk_free_mem_block(void *opaque,
> +				int min_order,
> +				bool (*report_pfn_range)(void *opaque,
> +							 unsigned long pfn,
> +							 unsigned long num));
> +
>  /*
>   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
>   * into the buddy system. The freed pages will be poisoned with pattern
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 76c9688..705de22 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>  	show_swap_cache_info();
>  }
>  
> +/*
> + * Walk through a free page list and report the found pfn range via the
> + * callback.
> + *
> + * Return false if the callback requests to stop reporting. Otherwise,
> + * return true.
> + */
> +static bool walk_free_page_list(void *opaque,
> +				struct zone *zone,
> +				int order,
> +				enum migratetype mt,
> +				bool (*report_pfn_range)(void *,
> +							 unsigned long,
> +							 unsigned long))
> +{
> +	struct page *page;
> +	struct list_head *list;
> +	unsigned long pfn, flags;
> +	bool ret;
> +
> +	spin_lock_irqsave(&zone->lock, flags);
> +	list = &zone->free_area[order].free_list[mt];
> +	list_for_each_entry(page, list, lru) {
> +		pfn = page_to_pfn(page);
> +		ret = report_pfn_range(opaque, pfn, 1 << order);
> +		if (!ret)
> +			break;
> +	}
> +	spin_unlock_irqrestore(&zone->lock, flags);
> +
> +	return ret;
> +}

There are two issues with this API. One is that it is not
restartable: if you return false, you start again from the
beginning, so there is no way to drop the lock, do something slow
and then proceed.

Another is that you are using it to report free page hints. Presumably
the point is to drop these pages - keeping them near the head of the list
and reusing the reported ones will just make everything slower by
invalidating the hint.

How about rotating these pages towards the end of the list?
Probably not on each call; collect the reported pages and then
move them to the tail when we exit.

Of course it's possible not all reporters want this.
So maybe change the callback to return int:
 0 - page reported, move page to end of free list
 > 0 - page skipped, proceed
 < 0 - stop processing
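
A rough sketch of how walk_free_page_list could act on that convention (the
rotation via a temporary list is an assumption based on the comment above,
not code from this series); this would still run under zone->lock:

	LIST_HEAD(reported);
	struct page *page, *next;
	int rc;

	list_for_each_entry_safe(page, next, list, lru) {
		rc = report_pfn_range(opaque, page_to_pfn(page), 1 << order);
		if (rc < 0)
			break;			/* stop processing */
		if (rc == 0)
			/* Reported: set it aside so it ends up reused last. */
			list_move(&page->lru, &reported);
		/* rc > 0: skipped, just proceed */
	}
	/* On exit, rotate all reported pages to the tail of the free list. */
	list_splice_tail(&reported, list);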


> +
> +/**
> + * walk_free_mem_block - Walk through the free page blocks in the system
> + * @opaque: the context passed from the caller
> + * @min_order: the minimum order of free lists to check
> + * @report_pfn_range: the callback to report the pfn range of the free pages
> + *
> + * If the callback returns false, stop iterating the list of free page blocks.
> + * Otherwise, continue to report.
> + *
> + * Please note that there are no locking guarantees for the callback and
> + * that the reported pfn range might be freed or disappear after the
> + * callback returns so the caller has to be very careful how it is used.
> + *
> + * The callback itself must not sleep or perform any operations which would
> + * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
> + * or via any lock dependency. It is generally advisable to implement
> + * the callback as simple as possible and defer any heavy lifting to a
> + * different context.
> + *
> + * There is no guarantee that each free range will be reported only once
> + * during one walk_free_mem_block invocation.
> + *
> + * pfn_to_page on the given range is strongly discouraged and if there is
> + * an absolute need for that make sure to contact MM people to discuss
> + * potential problems.
> + *
> + * The function itself might sleep so it cannot be called from atomic
> + * contexts.
> + *
> + * In general low orders tend to be very volatile and so it makes more
> + * sense to query larger ones first for various optimizations which like
> + * ballooning etc... This will reduce the overhead as well.
> + */
> +void walk_free_mem_block(void *opaque,
> +			 int min_order,
> +			 bool (*report_pfn_range)(void *opaque,
> +						  unsigned long pfn,
> +						  unsigned long num))
> +{
> +	struct zone *zone;
> +	int order;
> +	enum migratetype mt;
> +	bool ret;
> +
> +	for_each_populated_zone(zone) {
> +		for (order = MAX_ORDER - 1; order >= min_order; order--) {
> +			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
> +				ret = walk_free_page_list(opaque, zone,
> +							  order, mt,
> +							  report_pfn_range);
> +				if (!ret)
> +					return;
> +			}
> +		}
> +	}
> +}
> +EXPORT_SYMBOL_GPL(walk_free_mem_block);
> +

I think callers need a way to
1. distinguish between completion and exit on error
2. restart from where we stopped

So I would both accept and return the current zone
and a special value to mean "complete"
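
A rough sketch of what such a restartable interface could look like (the zone
cursor parameter and the return-value convention are assumptions based on the
comment above, not part of this series):

	/*
	 * Start from *zone (NULL means the first populated zone); on return,
	 * *zone points at the zone where the walk stopped, or is NULL when
	 * every zone has been covered.  Return 0 on completion, > 0 when the
	 * callback asked to stop, < 0 on error.
	 */
	extern int walk_free_mem_block(void *opaque,
				       int min_order,
				       struct zone **zone,
				       int (*report_pfn_range)(void *opaque,
							       unsigned long pfn,
							       unsigned long num));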

>  static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
>  {
>  	zoneref->zone = zone;
> -- 
> 2.7.4

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 1/2] mm: support reporting free page blocks
@ 2018-01-25 13:41     ` Michael S. Tsirkin
  0 siblings, 0 replies; 65+ messages in thread
From: Michael S. Tsirkin @ 2018-01-25 13:41 UTC (permalink / raw)
  To: Wei Wang
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> This patch adds support to walk through the free page blocks in the
> system and report them via a callback function. Some page blocks may
> leave the free list after zone->lock is released, so it is the caller's
> responsibility to either detect or prevent the use of such pages.
> 
> One use example of this patch is to accelerate live migration by skipping
> the transfer of free pages reported from the guest. A popular method used
> by the hypervisor to track which part of memory is written during live
> migration is to write-protect all the guest memory. So, those pages that
> are reported as free pages but are written after the report function
> returns will be captured by the hypervisor, and they will be added to the
> next round of memory transfer.
> 
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> Signed-off-by: Liang Li <liang.z.li@intel.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Michal Hocko <mhocko@kernel.org>
> ---
>  include/linux/mm.h |  6 ++++
>  mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 97 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ea818ff..b3077dd 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
>  		unsigned long zone_start_pfn, unsigned long *zholes_size);
>  extern void free_initmem(void);
>  
> +extern void walk_free_mem_block(void *opaque,
> +				int min_order,
> +				bool (*report_pfn_range)(void *opaque,
> +							 unsigned long pfn,
> +							 unsigned long num));
> +
>  /*
>   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
>   * into the buddy system. The freed pages will be poisoned with pattern
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 76c9688..705de22 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>  	show_swap_cache_info();
>  }
>  
> +/*
> + * Walk through a free page list and report the found pfn range via the
> + * callback.
> + *
> + * Return false if the callback requests to stop reporting. Otherwise,
> + * return true.
> + */
> +static bool walk_free_page_list(void *opaque,
> +				struct zone *zone,
> +				int order,
> +				enum migratetype mt,
> +				bool (*report_pfn_range)(void *,
> +							 unsigned long,
> +							 unsigned long))
> +{
> +	struct page *page;
> +	struct list_head *list;
> +	unsigned long pfn, flags;
> +	bool ret;
> +
> +	spin_lock_irqsave(&zone->lock, flags);
> +	list = &zone->free_area[order].free_list[mt];
> +	list_for_each_entry(page, list, lru) {
> +		pfn = page_to_pfn(page);
> +		ret = report_pfn_range(opaque, pfn, 1 << order);
> +		if (!ret)
> +			break;
> +	}
> +	spin_unlock_irqrestore(&zone->lock, flags);
> +
> +	return ret;
> +}

There are two issues with this API. One is that it is not
restarteable: if you return false, you start from the
beginning. So no way to drop lock, do something slow
and then proceed.

Another is that you are using it to report free page hints. Presumably
the point is to drop these pages - keeping them near head of the list
and reusing the reported ones will just make everything slower
invalidating the hint.

How about rotating these pages towards the end of the list?
Probably not on each call, callect reported pages and then
move them to tail when we exit.

Of course it's possible not all reporters want this.
So maybe change the callback to return int:
 0 - page reported, move page to end of free list
 > 0 - page skipped, proceed
 < 0 - stop processing


> +
> +/**
> + * walk_free_mem_block - Walk through the free page blocks in the system
> + * @opaque: the context passed from the caller
> + * @min_order: the minimum order of free lists to check
> + * @report_pfn_range: the callback to report the pfn range of the free pages
> + *
> + * If the callback returns false, stop iterating the list of free page blocks.
> + * Otherwise, continue to report.
> + *
> + * Please note that there are no locking guarantees for the callback and
> + * that the reported pfn range might be freed or disappear after the
> + * callback returns so the caller has to be very careful how it is used.
> + *
> + * The callback itself must not sleep or perform any operations which would
> + * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
> + * or via any lock dependency. It is generally advisable to implement
> + * the callback as simple as possible and defer any heavy lifting to a
> + * different context.
> + *
> + * There is no guarantee that each free range will be reported only once
> + * during one walk_free_mem_block invocation.
> + *
> + * pfn_to_page on the given range is strongly discouraged and if there is
> + * an absolute need for that make sure to contact MM people to discuss
> + * potential problems.
> + *
> + * The function itself might sleep so it cannot be called from atomic
> + * contexts.
> + *
> + * In general low orders tend to be very volatile and so it makes more
> + * sense to query larger ones first for various optimizations which like
> + * ballooning etc... This will reduce the overhead as well.
> + */
> +void walk_free_mem_block(void *opaque,
> +			 int min_order,
> +			 bool (*report_pfn_range)(void *opaque,
> +						  unsigned long pfn,
> +						  unsigned long num))
> +{
> +	struct zone *zone;
> +	int order;
> +	enum migratetype mt;
> +	bool ret;
> +
> +	for_each_populated_zone(zone) {
> +		for (order = MAX_ORDER - 1; order >= min_order; order--) {
> +			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
> +				ret = walk_free_page_list(opaque, zone,
> +							  order, mt,
> +							  report_pfn_range);
> +				if (!ret)
> +					return;
> +			}
> +		}
> +	}
> +}
> +EXPORT_SYMBOL_GPL(walk_free_mem_block);
> +

I think callers need a way to
1. distinguish between completion and exit on error
2. restart from where we stopped

So I would both accept and return the current zone
and a special value to mean "complete"

>  static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
>  {
>  	zoneref->zone = zone;
> -- 
> 2.7.4

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 1/2] mm: support reporting free page blocks
  2018-01-24 10:42   ` Wei Wang
                     ` (2 preceding siblings ...)
  (?)
@ 2018-01-25 13:41   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 65+ messages in thread
From: Michael S. Tsirkin @ 2018-01-25 13:41 UTC (permalink / raw)
  To: Wei Wang
  Cc: yang.zhang.wz, virtio-dev, riel, quan.xu0, kvm, nilal,
	liliang.opensource, linux-kernel, mhocko, linux-mm, pbonzini,
	akpm, virtualization

On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> This patch adds support to walk through the free page blocks in the
> system and report them via a callback function. Some page blocks may
> leave the free list after zone->lock is released, so it is the caller's
> responsibility to either detect or prevent the use of such pages.
> 
> One use example of this patch is to accelerate live migration by skipping
> the transfer of free pages reported from the guest. A popular method used
> by the hypervisor to track which part of memory is written during live
> migration is to write-protect all the guest memory. So, those pages that
> are reported as free pages but are written after the report function
> returns will be captured by the hypervisor, and they will be added to the
> next round of memory transfer.
> 
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> Signed-off-by: Liang Li <liang.z.li@intel.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Michal Hocko <mhocko@kernel.org>
> ---
>  include/linux/mm.h |  6 ++++
>  mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 97 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ea818ff..b3077dd 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
>  		unsigned long zone_start_pfn, unsigned long *zholes_size);
>  extern void free_initmem(void);
>  
> +extern void walk_free_mem_block(void *opaque,
> +				int min_order,
> +				bool (*report_pfn_range)(void *opaque,
> +							 unsigned long pfn,
> +							 unsigned long num));
> +
>  /*
>   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
>   * into the buddy system. The freed pages will be poisoned with pattern
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 76c9688..705de22 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>  	show_swap_cache_info();
>  }
>  
> +/*
> + * Walk through a free page list and report the found pfn range via the
> + * callback.
> + *
> + * Return false if the callback requests to stop reporting. Otherwise,
> + * return true.
> + */
> +static bool walk_free_page_list(void *opaque,
> +				struct zone *zone,
> +				int order,
> +				enum migratetype mt,
> +				bool (*report_pfn_range)(void *,
> +							 unsigned long,
> +							 unsigned long))
> +{
> +	struct page *page;
> +	struct list_head *list;
> +	unsigned long pfn, flags;
> +	bool ret;
> +
> +	spin_lock_irqsave(&zone->lock, flags);
> +	list = &zone->free_area[order].free_list[mt];
> +	list_for_each_entry(page, list, lru) {
> +		pfn = page_to_pfn(page);
> +		ret = report_pfn_range(opaque, pfn, 1 << order);
> +		if (!ret)
> +			break;
> +	}
> +	spin_unlock_irqrestore(&zone->lock, flags);
> +
> +	return ret;
> +}

There are two issues with this API. One is that it is not
restarteable: if you return false, you start from the
beginning. So no way to drop lock, do something slow
and then proceed.

Another is that you are using it to report free page hints. Presumably
the point is to drop these pages - keeping them near head of the list
and reusing the reported ones will just make everything slower
invalidating the hint.

How about rotating these pages towards the end of the list?
Probably not on each call, callect reported pages and then
move them to tail when we exit.

Of course it's possible not all reporters want this.
So maybe change the callback to return int:
 0 - page reported, move page to end of free list
 > 0 - page skipped, proceed
 < 0 - stop processing


> +
> +/**
> + * walk_free_mem_block - Walk through the free page blocks in the system
> + * @opaque: the context passed from the caller
> + * @min_order: the minimum order of free lists to check
> + * @report_pfn_range: the callback to report the pfn range of the free pages
> + *
> + * If the callback returns false, stop iterating the list of free page blocks.
> + * Otherwise, continue to report.
> + *
> + * Please note that there are no locking guarantees for the callback and
> + * that the reported pfn range might be freed or disappear after the
> + * callback returns so the caller has to be very careful how it is used.
> + *
> + * The callback itself must not sleep or perform any operations which would
> + * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
> + * or via any lock dependency. It is generally advisable to implement
> + * the callback as simple as possible and defer any heavy lifting to a
> + * different context.
> + *
> + * There is no guarantee that each free range will be reported only once
> + * during one walk_free_mem_block invocation.
> + *
> + * pfn_to_page on the given range is strongly discouraged and if there is
> + * an absolute need for that make sure to contact MM people to discuss
> + * potential problems.
> + *
> + * The function itself might sleep so it cannot be called from atomic
> + * contexts.
> + *
> + * In general low orders tend to be very volatile and so it makes more
> + * sense to query larger ones first for various optimizations which like
> + * ballooning etc... This will reduce the overhead as well.
> + */
> +void walk_free_mem_block(void *opaque,
> +			 int min_order,
> +			 bool (*report_pfn_range)(void *opaque,
> +						  unsigned long pfn,
> +						  unsigned long num))
> +{
> +	struct zone *zone;
> +	int order;
> +	enum migratetype mt;
> +	bool ret;
> +
> +	for_each_populated_zone(zone) {
> +		for (order = MAX_ORDER - 1; order >= min_order; order--) {
> +			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
> +				ret = walk_free_page_list(opaque, zone,
> +							  order, mt,
> +							  report_pfn_range);
> +				if (!ret)
> +					return;
> +			}
> +		}
> +	}
> +}
> +EXPORT_SYMBOL_GPL(walk_free_mem_block);
> +

I think callers need a way to
1. distinguish between completion and exit on error
2. restart from where we stopped

So I would both accept and return the current zone
and a special value to mean "complete"

>  static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
>  {
>  	zoneref->zone = zone;
> -- 
> 2.7.4

^ permalink raw reply	[flat|nested] 65+ messages in thread

* [virtio-dev] Re: [PATCH v24 1/2] mm: support reporting free page blocks
@ 2018-01-25 13:41     ` Michael S. Tsirkin
  0 siblings, 0 replies; 65+ messages in thread
From: Michael S. Tsirkin @ 2018-01-25 13:41 UTC (permalink / raw)
  To: Wei Wang
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> This patch adds support to walk through the free page blocks in the
> system and report them via a callback function. Some page blocks may
> leave the free list after zone->lock is released, so it is the caller's
> responsibility to either detect or prevent the use of such pages.
> 
> One use example of this patch is to accelerate live migration by skipping
> the transfer of free pages reported from the guest. A popular method used
> by the hypervisor to track which part of memory is written during live
> migration is to write-protect all the guest memory. So, those pages that
> are reported as free pages but are written after the report function
> returns will be captured by the hypervisor, and they will be added to the
> next round of memory transfer.
> 
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> Signed-off-by: Liang Li <liang.z.li@intel.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Michael S. Tsirkin <mst@redhat.com>
> Acked-by: Michal Hocko <mhocko@kernel.org>
> ---
>  include/linux/mm.h |  6 ++++
>  mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 97 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ea818ff..b3077dd 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
>  		unsigned long zone_start_pfn, unsigned long *zholes_size);
>  extern void free_initmem(void);
>  
> +extern void walk_free_mem_block(void *opaque,
> +				int min_order,
> +				bool (*report_pfn_range)(void *opaque,
> +							 unsigned long pfn,
> +							 unsigned long num));
> +
>  /*
>   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
>   * into the buddy system. The freed pages will be poisoned with pattern
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 76c9688..705de22 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>  	show_swap_cache_info();
>  }
>  
> +/*
> + * Walk through a free page list and report the found pfn range via the
> + * callback.
> + *
> + * Return false if the callback requests to stop reporting. Otherwise,
> + * return true.
> + */
> +static bool walk_free_page_list(void *opaque,
> +				struct zone *zone,
> +				int order,
> +				enum migratetype mt,
> +				bool (*report_pfn_range)(void *,
> +							 unsigned long,
> +							 unsigned long))
> +{
> +	struct page *page;
> +	struct list_head *list;
> +	unsigned long pfn, flags;
> +	bool ret;
> +
> +	spin_lock_irqsave(&zone->lock, flags);
> +	list = &zone->free_area[order].free_list[mt];
> +	list_for_each_entry(page, list, lru) {
> +		pfn = page_to_pfn(page);
> +		ret = report_pfn_range(opaque, pfn, 1 << order);
> +		if (!ret)
> +			break;
> +	}
> +	spin_unlock_irqrestore(&zone->lock, flags);
> +
> +	return ret;
> +}

There are two issues with this API. One is that it is not
restarteable: if you return false, you start from the
beginning. So no way to drop lock, do something slow
and then proceed.

Another is that you are using it to report free page hints. Presumably
the point is to drop these pages - keeping them near head of the list
and reusing the reported ones will just make everything slower
invalidating the hint.

How about rotating these pages towards the end of the list?
Probably not on each call, callect reported pages and then
move them to tail when we exit.

Of course it's possible not all reporters want this.
So maybe change the callback to return int:
 0 - page reported, move page to end of free list
 > 0 - page skipped, proceed
 < 0 - stop processing


> +
> +/**
> + * walk_free_mem_block - Walk through the free page blocks in the system
> + * @opaque: the context passed from the caller
> + * @min_order: the minimum order of free lists to check
> + * @report_pfn_range: the callback to report the pfn range of the free pages
> + *
> + * If the callback returns false, stop iterating the list of free page blocks.
> + * Otherwise, continue to report.
> + *
> + * Please note that there are no locking guarantees for the callback and
> + * that the reported pfn range might be freed or disappear after the
> + * callback returns so the caller has to be very careful how it is used.
> + *
> + * The callback itself must not sleep or perform any operations which would
> + * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
> + * or via any lock dependency. It is generally advisable to implement
> + * the callback as simple as possible and defer any heavy lifting to a
> + * different context.
> + *
> + * There is no guarantee that each free range will be reported only once
> + * during one walk_free_mem_block invocation.
> + *
> + * pfn_to_page on the given range is strongly discouraged and if there is
> + * an absolute need for that make sure to contact MM people to discuss
> + * potential problems.
> + *
> + * The function itself might sleep so it cannot be called from atomic
> + * contexts.
> + *
> + * In general low orders tend to be very volatile and so it makes more
> + * sense to query larger ones first for various optimizations which like
> + * ballooning etc... This will reduce the overhead as well.
> + */
> +void walk_free_mem_block(void *opaque,
> +			 int min_order,
> +			 bool (*report_pfn_range)(void *opaque,
> +						  unsigned long pfn,
> +						  unsigned long num))
> +{
> +	struct zone *zone;
> +	int order;
> +	enum migratetype mt;
> +	bool ret;
> +
> +	for_each_populated_zone(zone) {
> +		for (order = MAX_ORDER - 1; order >= min_order; order--) {
> +			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
> +				ret = walk_free_page_list(opaque, zone,
> +							  order, mt,
> +							  report_pfn_range);
> +				if (!ret)
> +					return;
> +			}
> +		}
> +	}
> +}
> +EXPORT_SYMBOL_GPL(walk_free_mem_block);
> +

I think callers need a way to
1. distinguish between completion and exit on error
2. restart from where we stopped

So I would both accept and return the current zone
and a special value to mean "complete"

>  static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
>  {
>  	zoneref->zone = zone;
> -- 
> 2.7.4


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 1/2] mm: support reporting free page blocks
  2018-01-25 13:41     ` Michael S. Tsirkin
@ 2018-01-25 14:56       ` Pankaj Gupta
  -1 siblings, 0 replies; 65+ messages in thread
From: Pankaj Gupta @ 2018-01-25 14:56 UTC (permalink / raw)
  To: Wei Wang, Michael S. Tsirkin
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang opensource, yang zhang wz, quan xu0,
	nilal, Rik van Riel, niteshnarayanlal


> 
> On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> > This patch adds support to walk through the free page blocks in the
> > system and report them via a callback function. Some page blocks may
> > leave the free list after zone->lock is released, so it is the caller's
> > responsibility to either detect or prevent the use of such pages.
> > 
> > One use example of this patch is to accelerate live migration by skipping
> > the transfer of free pages reported from the guest. A popular method used
> > by the hypervisor to track which part of memory is written during live
> > migration is to write-protect all the guest memory. So, those pages that
> > are reported as free pages but are written after the report function
> > returns will be captured by the hypervisor, and they will be added to the
> > next round of memory transfer.
> > 
> > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> > Signed-off-by: Liang Li <liang.z.li@intel.com>
> > Cc: Michal Hocko <mhocko@kernel.org>
> > Cc: Michael S. Tsirkin <mst@redhat.com>
> > Acked-by: Michal Hocko <mhocko@kernel.org>
> > ---
> >  include/linux/mm.h |  6 ++++
> >  mm/page_alloc.c    | 91
> >  ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 97 insertions(+)
> > 
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index ea818ff..b3077dd 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned
> > long * zones_size,
> >  		unsigned long zone_start_pfn, unsigned long *zholes_size);
> >  extern void free_initmem(void);
> >  
> > +extern void walk_free_mem_block(void *opaque,
> > +				int min_order,
> > +				bool (*report_pfn_range)(void *opaque,
> > +							 unsigned long pfn,
> > +							 unsigned long num));
> > +
> >  /*
> >   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
> >   * into the buddy system. The freed pages will be poisoned with pattern
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 76c9688..705de22 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t
> > *nodemask)
> >  	show_swap_cache_info();
> >  }
> >  
> > +/*
> > + * Walk through a free page list and report the found pfn range via the
> > + * callback.
> > + *
> > + * Return false if the callback requests to stop reporting. Otherwise,
> > + * return true.
> > + */
> > +static bool walk_free_page_list(void *opaque,
> > +				struct zone *zone,
> > +				int order,
> > +				enum migratetype mt,
> > +				bool (*report_pfn_range)(void *,
> > +							 unsigned long,
> > +							 unsigned long))
> > +{
> > +	struct page *page;
> > +	struct list_head *list;
> > +	unsigned long pfn, flags;
> > +	bool ret;
> > +
> > +	spin_lock_irqsave(&zone->lock, flags);
> > +	list = &zone->free_area[order].free_list[mt];
> > +	list_for_each_entry(page, list, lru) {
> > +		pfn = page_to_pfn(page);
> > +		ret = report_pfn_range(opaque, pfn, 1 << order);
> > +		if (!ret)
> > +			break;
> > +	}
> > +	spin_unlock_irqrestore(&zone->lock, flags);
> > +
> > +	return ret;
> > +}
> 
> There are two issues with this API. One is that it is not
> restarteable: if you return false, you start from the
> beginning. So no way to drop lock, do something slow
> and then proceed.
> 
> Another is that you are using it to report free page hints. Presumably
> the point is to drop these pages - keeping them near head of the list
> and reusing the reported ones will just make everything slower
> invalidating the hint.

I think that's where the patches[1] by 'Nitesh' will help: that patch set
sends free page hints transparently to the host, and the host decides to
delete such pages.

If I compare with the patch set by 'Wei', the host gets/asks for free page
hints and ignores such pages during live migration. But as already discussed,
if free pages are still in guest memory there is no point in traversing and
getting all such pages again.

[1] https://www.spinics.net/lists/kvm/msg159790.html

> 
> How about rotating these pages towards the end of the list?
> Probably not on each call, callect reported pages and then
> move them to tail when we exit.
> 
> Of course it's possible not all reporters want this.
> So maybe change the callback to return int:
>  0 - page reported, move page to end of free list
>  > 0 - page skipped, proceed
>  < 0 - stop processing
> 
> 
> > +
> > +/**
> > + * walk_free_mem_block - Walk through the free page blocks in the system
> > + * @opaque: the context passed from the caller
> > + * @min_order: the minimum order of free lists to check
> > + * @report_pfn_range: the callback to report the pfn range of the free
> > pages
> > + *
> > + * If the callback returns false, stop iterating the list of free page
> > blocks.
> > + * Otherwise, continue to report.
> > + *
> > + * Please note that there are no locking guarantees for the callback and
> > + * that the reported pfn range might be freed or disappear after the
> > + * callback returns so the caller has to be very careful how it is used.
> > + *
> > + * The callback itself must not sleep or perform any operations which
> > would
> > + * require any memory allocations directly (not even
> > GFP_NOWAIT/GFP_ATOMIC)
> > + * or via any lock dependency. It is generally advisable to implement
> > + * the callback as simple as possible and defer any heavy lifting to a
> > + * different context.
> > + *
> > + * There is no guarantee that each free range will be reported only once
> > + * during one walk_free_mem_block invocation.
> > + *
> > + * pfn_to_page on the given range is strongly discouraged and if there is
> > + * an absolute need for that make sure to contact MM people to discuss
> > + * potential problems.
> > + *
> > + * The function itself might sleep so it cannot be called from atomic
> > + * contexts.
> > + *
> > + * In general low orders tend to be very volatile and so it makes more
> > + * sense to query larger ones first for various optimizations which like
> > + * ballooning etc... This will reduce the overhead as well.
> > + */
> > +void walk_free_mem_block(void *opaque,
> > +			 int min_order,
> > +			 bool (*report_pfn_range)(void *opaque,
> > +						  unsigned long pfn,
> > +						  unsigned long num))
> > +{
> > +	struct zone *zone;
> > +	int order;
> > +	enum migratetype mt;
> > +	bool ret;
> > +
> > +	for_each_populated_zone(zone) {
> > +		for (order = MAX_ORDER - 1; order >= min_order; order--) {
> > +			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
> > +				ret = walk_free_page_list(opaque, zone,
> > +							  order, mt,
> > +							  report_pfn_range);
> > +				if (!ret)
> > +					return;
> > +			}
> > +		}
> > +	}
> > +}
> > +EXPORT_SYMBOL_GPL(walk_free_mem_block);
> > +
> 
> I think callers need a way to
> 1. distinguish between completion and exit on error
> 2. restart from where we stopped
> 
> So I would both accept and return the current zone
> and a special value to mean "complete"
> 
> >  static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
> >  {
> >  	zoneref->zone = zone;
> > --
> > 2.7.4
> 

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 1/2] mm: support reporting free page blocks
  2018-01-25 14:56       ` Pankaj Gupta
  (?)
@ 2018-01-25 17:31         ` Michael S. Tsirkin
  -1 siblings, 0 replies; 65+ messages in thread
From: Michael S. Tsirkin @ 2018-01-25 17:31 UTC (permalink / raw)
  To: Pankaj Gupta
  Cc: Wei Wang, virtio-dev, linux-kernel, virtualization, kvm,
	linux-mm, mhocko, akpm, pbonzini, liliang opensource,
	yang zhang wz, quan xu0, nilal, Rik van Riel, niteshnarayanlal

On Thu, Jan 25, 2018 at 09:56:01AM -0500, Pankaj Gupta wrote:
> 
> > 
> > On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> > > This patch adds support to walk through the free page blocks in the
> > > system and report them via a callback function. Some page blocks may
> > > leave the free list after zone->lock is released, so it is the caller's
> > > responsibility to either detect or prevent the use of such pages.
> > > 
> > > One use example of this patch is to accelerate live migration by skipping
> > > the transfer of free pages reported from the guest. A popular method used
> > > by the hypervisor to track which part of memory is written during live
> > > migration is to write-protect all the guest memory. So, those pages that
> > > are reported as free pages but are written after the report function
> > > returns will be captured by the hypervisor, and they will be added to the
> > > next round of memory transfer.
> > > 
> > > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> > > Signed-off-by: Liang Li <liang.z.li@intel.com>
> > > Cc: Michal Hocko <mhocko@kernel.org>
> > > Cc: Michael S. Tsirkin <mst@redhat.com>
> > > Acked-by: Michal Hocko <mhocko@kernel.org>
> > > ---
> > >  include/linux/mm.h |  6 ++++
> > >  mm/page_alloc.c    | 91
> > >  ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  2 files changed, 97 insertions(+)
> > > 
> > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > index ea818ff..b3077dd 100644
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned
> > > long * zones_size,
> > >  		unsigned long zone_start_pfn, unsigned long *zholes_size);
> > >  extern void free_initmem(void);
> > >  
> > > +extern void walk_free_mem_block(void *opaque,
> > > +				int min_order,
> > > +				bool (*report_pfn_range)(void *opaque,
> > > +							 unsigned long pfn,
> > > +							 unsigned long num));
> > > +
> > >  /*
> > >   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
> > >   * into the buddy system. The freed pages will be poisoned with pattern
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index 76c9688..705de22 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t
> > > *nodemask)
> > >  	show_swap_cache_info();
> > >  }
> > >  
> > > +/*
> > > + * Walk through a free page list and report the found pfn range via the
> > > + * callback.
> > > + *
> > > + * Return false if the callback requests to stop reporting. Otherwise,
> > > + * return true.
> > > + */
> > > +static bool walk_free_page_list(void *opaque,
> > > +				struct zone *zone,
> > > +				int order,
> > > +				enum migratetype mt,
> > > +				bool (*report_pfn_range)(void *,
> > > +							 unsigned long,
> > > +							 unsigned long))
> > > +{
> > > +	struct page *page;
> > > +	struct list_head *list;
> > > +	unsigned long pfn, flags;
> > > +	bool ret;
> > > +
> > > +	spin_lock_irqsave(&zone->lock, flags);
> > > +	list = &zone->free_area[order].free_list[mt];
> > > +	list_for_each_entry(page, list, lru) {
> > > +		pfn = page_to_pfn(page);
> > > +		ret = report_pfn_range(opaque, pfn, 1 << order);
> > > +		if (!ret)
> > > +			break;
> > > +	}
> > > +	spin_unlock_irqrestore(&zone->lock, flags);
> > > +
> > > +	return ret;
> > > +}
> > 
> > There are two issues with this API. One is that it is not
> > restarteable: if you return false, you start from the
> > beginning. So no way to drop lock, do something slow
> > and then proceed.
> > 
> > Another is that you are using it to report free page hints. Presumably
> > the point is to drop these pages - keeping them near head of the list
> > and reusing the reported ones will just make everything slower
> > invalidating the hint.
> 
> I think that's where patches[1] by 'Nitesh' will help: This patch-set
> will send free page hints transparently to host and host decides to delete such 
> pages.
>
> If I compare with patchset by 'Wei', host gets/asks free page hints and ignore 
> such pages during live migration. But as already discussed, if free pages are 
> still in guest memory there is no point of traversing & getting all such pages
> again.
> 
> [1] https://www.spinics.net/lists/kvm/msg159790.html

The main difference is that Nitesh's patches, unlike Wei's, add hints to the
page alloc/free path. That makes them more risky performance-wise: you need to
enable the hinting at guest boot, not just at migration time.
Maybe the overhead isn't big; unfortunately no one has posted any
numbers yet.
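
To make the contrast concrete, a toy sketch of what hooking the free path
means in principle. This is purely illustrative, not Nitesh's actual code;
the names and the buffering helper are made up.

static bool free_page_hinting_enabled;	/* toggled at boot in this toy model */

static void buffer_free_page_hint(unsigned long pfn, unsigned long nr)
{
	/* a real implementation would batch these and hand them to the
	 * host asynchronously; here it is only a placeholder */
}

static void hint_free_page(struct page *page, unsigned int order)
{
	/*
	 * Runs on every page free once enabled, which is where the
	 * boot-time, always-on cost comes from: even a cheap check is
	 * paid on the hot path, not only during migration.
	 */
	if (!free_page_hinting_enabled)
		return;
	buffer_free_page_hint(page_to_pfn(page), 1UL << order);
}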


-- 
MST

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 1/2] mm: support reporting free page blocks
  2018-01-25 13:41     ` Michael S. Tsirkin
  (?)
@ 2018-01-26  3:29       ` Wei Wang
  -1 siblings, 0 replies; 65+ messages in thread
From: Wei Wang @ 2018-01-26  3:29 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On 01/25/2018 09:41 PM, Michael S. Tsirkin wrote:
> On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
>> This patch adds support to walk through the free page blocks in the
>> system and report them via a callback function. Some page blocks may
>> leave the free list after zone->lock is released, so it is the caller's
>> responsibility to either detect or prevent the use of such pages.
>>
>> One use example of this patch is to accelerate live migration by skipping
>> the transfer of free pages reported from the guest. A popular method used
>> by the hypervisor to track which part of memory is written during live
>> migration is to write-protect all the guest memory. So, those pages that
>> are reported as free pages but are written after the report function
>> returns will be captured by the hypervisor, and they will be added to the
>> next round of memory transfer.
>>
>> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
>> Signed-off-by: Liang Li <liang.z.li@intel.com>
>> Cc: Michal Hocko <mhocko@kernel.org>
>> Cc: Michael S. Tsirkin <mst@redhat.com>
>> Acked-by: Michal Hocko <mhocko@kernel.org>
>> ---
>>   include/linux/mm.h |  6 ++++
>>   mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>   2 files changed, 97 insertions(+)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index ea818ff..b3077dd 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
>>   		unsigned long zone_start_pfn, unsigned long *zholes_size);
>>   extern void free_initmem(void);
>>   
>> +extern void walk_free_mem_block(void *opaque,
>> +				int min_order,
>> +				bool (*report_pfn_range)(void *opaque,
>> +							 unsigned long pfn,
>> +							 unsigned long num));
>> +
>>   /*
>>    * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
>>    * into the buddy system. The freed pages will be poisoned with pattern
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 76c9688..705de22 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>>   	show_swap_cache_info();
>>   }
>>   
>> +/*
>> + * Walk through a free page list and report the found pfn range via the
>> + * callback.
>> + *
>> + * Return false if the callback requests to stop reporting. Otherwise,
>> + * return true.
>> + */
>> +static bool walk_free_page_list(void *opaque,
>> +				struct zone *zone,
>> +				int order,
>> +				enum migratetype mt,
>> +				bool (*report_pfn_range)(void *,
>> +							 unsigned long,
>> +							 unsigned long))
>> +{
>> +	struct page *page;
>> +	struct list_head *list;
>> +	unsigned long pfn, flags;
>> +	bool ret;
>> +
>> +	spin_lock_irqsave(&zone->lock, flags);
>> +	list = &zone->free_area[order].free_list[mt];
>> +	list_for_each_entry(page, list, lru) {
>> +		pfn = page_to_pfn(page);
>> +		ret = report_pfn_range(opaque, pfn, 1 << order);
>> +		if (!ret)
>> +			break;
>> +	}
>> +	spin_unlock_irqrestore(&zone->lock, flags);
>> +
>> +	return ret;
>> +}
> There are two issues with this API. One is that it is not
> restarteable: if you return false, you start from the
> beginning. So no way to drop lock, do something slow
> and then proceed.
>
> Another is that you are using it to report free page hints. Presumably
> the point is to drop these pages - keeping them near head of the list
> and reusing the reported ones will just make everything slower
> invalidating the hint.
>
> How about rotating these pages towards the end of the list?
> Probably not on each call, callect reported pages and then
> move them to tail when we exit.


I'm not sure how this would help. For example, suppose we have a list of 2M free 
page blocks:
A-->B-->C-->D-->E-->F-->G-->H

After reporting A and B, putting them at the end, and exiting, when the 
caller comes back:
1) if the list remains unchanged, it will be
C-->D-->E-->F-->G-->H-->A-->B

2) or worse, all the blocks may have been split into smaller blocks and used 
by the time the caller comes back.

Where could we continue from?


The reason to think about "restart" is the worry that the virtqueue may be 
full, right? But we've agreed that losing some hints to report isn't 
important, and in practice the virtqueue won't be full, as the host side 
is faster.
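
For what it's worth, with the bool convention in this version, a minimal
sketch of how the balloon-side callback could simply give up when the
free-page vq has no room; the struct, field and function names below are
assumptions for illustration, not the actual driver code.

struct my_balloon_dev {
	unsigned int vq_entries_left;	/* rough estimate of free-page vq room */
};

static bool balloon_report_free_pages(void *opaque, unsigned long pfn,
				      unsigned long num)
{
	struct my_balloon_dev *dev = opaque;

	if (!dev->vq_entries_left)
		return false;	/* stop the walk; losing the remaining hints is fine */

	dev->vq_entries_left--;
	/* queue the (pfn, num) hint to the free page vq here */
	return true;		/* keep walking */
}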

I'm concerned that manipulating the free list may cause more controversy, 
even though it might be safe from some angles, and the debate would be hard 
to conclude. If possible, we could go with the most prudent approach for 
now, and have more discussion in future improvement patches. What do you 
think?



>
>
>> +
>> +/**
>> + * walk_free_mem_block - Walk through the free page blocks in the system
>> + * @opaque: the context passed from the caller
>> + * @min_order: the minimum order of free lists to check
>> + * @report_pfn_range: the callback to report the pfn range of the free pages
>> + *
>> + * If the callback returns false, stop iterating the list of free page blocks.
>> + * Otherwise, continue to report.
>> + *
>> + * Please note that there are no locking guarantees for the callback and
>> + * that the reported pfn range might be freed or disappear after the
>> + * callback returns so the caller has to be very careful how it is used.
>> + *
>> + * The callback itself must not sleep or perform any operations which would
>> + * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
>> + * or via any lock dependency. It is generally advisable to implement
>> + * the callback as simple as possible and defer any heavy lifting to a
>> + * different context.
>> + *
>> + * There is no guarantee that each free range will be reported only once
>> + * during one walk_free_mem_block invocation.
>> + *
>> + * pfn_to_page on the given range is strongly discouraged and if there is
>> + * an absolute need for that make sure to contact MM people to discuss
>> + * potential problems.
>> + *
>> + * The function itself might sleep so it cannot be called from atomic
>> + * contexts.
>> + *
>> + * In general low orders tend to be very volatile and so it makes more
>> + * sense to query larger ones first for various optimizations which like
>> + * ballooning etc... This will reduce the overhead as well.
>> + */
>> +void walk_free_mem_block(void *opaque,
>> +			 int min_order,
>> +			 bool (*report_pfn_range)(void *opaque,
>> +						  unsigned long pfn,
>> +						  unsigned long num))
>> +{
>> +	struct zone *zone;
>> +	int order;
>> +	enum migratetype mt;
>> +	bool ret;
>> +
>> +	for_each_populated_zone(zone) {
>> +		for (order = MAX_ORDER - 1; order >= min_order; order--) {
>> +			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
>> +				ret = walk_free_page_list(opaque, zone,
>> +							  order, mt,
>> +							  report_pfn_range);
>> +				if (!ret)
>> +					return;
>> +			}
>> +		}
>> +	}
>> +}
>> +EXPORT_SYMBOL_GPL(walk_free_mem_block);
>> +
> I think callers need a way to
> 1. distinguish between completion and exit on error

The first one has actually been addressed by v25, where 
walk_free_mem_block returns 0 on completing the reporting, or a non-zero 
value which is returned from the callback.
So the caller can detect errors by having the callback return 
something.
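
A small caller-side sketch of that v25 convention (0 on completion,
otherwise the callback's non-zero value); the callback and helper names
here are only placeholders for illustration, not the real driver functions.

	ret = walk_free_mem_block(vb, 0, report_free_page_cb);
	if (!ret)
		/* the whole free list was walked: reporting completed */
		tell_host_reporting_done(vb);
	else
		/* the callback bailed out, e.g. the device told us to stop */
		tell_host_reporting_stopped(vb, ret);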

Best,
Wei

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 1/2] mm: support reporting free page blocks
@ 2018-01-26  3:29       ` Wei Wang
  0 siblings, 0 replies; 65+ messages in thread
From: Wei Wang @ 2018-01-26  3:29 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On 01/25/2018 09:41 PM, Michael S. Tsirkin wrote:
> On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
>> This patch adds support to walk through the free page blocks in the
>> system and report them via a callback function. Some page blocks may
>> leave the free list after zone->lock is released, so it is the caller's
>> responsibility to either detect or prevent the use of such pages.
>>
>> One use example of this patch is to accelerate live migration by skipping
>> the transfer of free pages reported from the guest. A popular method used
>> by the hypervisor to track which part of memory is written during live
>> migration is to write-protect all the guest memory. So, those pages that
>> are reported as free pages but are written after the report function
>> returns will be captured by the hypervisor, and they will be added to the
>> next round of memory transfer.
>>
>> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
>> Signed-off-by: Liang Li <liang.z.li@intel.com>
>> Cc: Michal Hocko <mhocko@kernel.org>
>> Cc: Michael S. Tsirkin <mst@redhat.com>
>> Acked-by: Michal Hocko <mhocko@kernel.org>
>> ---
>>   include/linux/mm.h |  6 ++++
>>   mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>   2 files changed, 97 insertions(+)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index ea818ff..b3077dd 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
>>   		unsigned long zone_start_pfn, unsigned long *zholes_size);
>>   extern void free_initmem(void);
>>   
>> +extern void walk_free_mem_block(void *opaque,
>> +				int min_order,
>> +				bool (*report_pfn_range)(void *opaque,
>> +							 unsigned long pfn,
>> +							 unsigned long num));
>> +
>>   /*
>>    * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
>>    * into the buddy system. The freed pages will be poisoned with pattern
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 76c9688..705de22 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>>   	show_swap_cache_info();
>>   }
>>   
>> +/*
>> + * Walk through a free page list and report the found pfn range via the
>> + * callback.
>> + *
>> + * Return false if the callback requests to stop reporting. Otherwise,
>> + * return true.
>> + */
>> +static bool walk_free_page_list(void *opaque,
>> +				struct zone *zone,
>> +				int order,
>> +				enum migratetype mt,
>> +				bool (*report_pfn_range)(void *,
>> +							 unsigned long,
>> +							 unsigned long))
>> +{
>> +	struct page *page;
>> +	struct list_head *list;
>> +	unsigned long pfn, flags;
>> +	bool ret;
>> +
>> +	spin_lock_irqsave(&zone->lock, flags);
>> +	list = &zone->free_area[order].free_list[mt];
>> +	list_for_each_entry(page, list, lru) {
>> +		pfn = page_to_pfn(page);
>> +		ret = report_pfn_range(opaque, pfn, 1 << order);
>> +		if (!ret)
>> +			break;
>> +	}
>> +	spin_unlock_irqrestore(&zone->lock, flags);
>> +
>> +	return ret;
>> +}
> There are two issues with this API. One is that it is not
> restarteable: if you return false, you start from the
> beginning. So no way to drop lock, do something slow
> and then proceed.
>
> Another is that you are using it to report free page hints. Presumably
> the point is to drop these pages - keeping them near head of the list
> and reusing the reported ones will just make everything slower
> invalidating the hint.
>
> How about rotating these pages towards the end of the list?
> Probably not on each call, collect reported pages and then
> move them to tail when we exit.


I'm not sure how this would help. For example, suppose we have a list of 
2MB free page blocks:
A-->B-->C-->D-->E-->F-->G-->H

After reporting A and B, moving them to the end of the list, and exiting, 
when the caller comes back:
1) if the list remains unchanged, it will be
C-->D-->E-->F-->G-->H-->A-->B

2) in the worst case, all the blocks have been split into smaller blocks 
and used by the time the caller comes back.

Where could we continue in either case?


The reason to think about "restart" is the worry that the virtqueue may be 
full, right? But we've agreed that losing some hints isn't important, and 
in practice the virtqueue won't be full, since the host side consumes the 
hints faster than the guest produces them.
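(As background, the way the driver avoids a full vq is roughly the policy 
from the v24 changelog: kick once less than half of the descriptors remain 
free, so the host keeps draining the vq while we keep adding hints. A purely 
illustrative sketch, not the exact driver code:)

static void add_one_hint(struct virtqueue *vq, void *addr, unsigned int len,
			 void *token)
{
	struct scatterlist sg;

	sg_init_one(&sg, addr, len);
	/* If the vq happens to be full, the hint is simply dropped. */
	if (virtqueue_add_outbuf(vq, &sg, 1, token, GFP_KERNEL))
		return;

	/* Kick early so the host drains the vq in parallel. */
	if (vq->num_free < virtqueue_get_vring_size(vq) / 2)
		virtqueue_kick(vq);
}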

I'm concerned that modifying the free list may cause more controversy: even 
if it is safe from some angles, the debate would be hard to settle. If 
possible, we could go with the most prudent approach for now and have more 
discussion in future improvement patches. What do you think?



>
>
>> +
>> +/**
>> + * walk_free_mem_block - Walk through the free page blocks in the system
>> + * @opaque: the context passed from the caller
>> + * @min_order: the minimum order of free lists to check
>> + * @report_pfn_range: the callback to report the pfn range of the free pages
>> + *
>> + * If the callback returns false, stop iterating the list of free page blocks.
>> + * Otherwise, continue to report.
>> + *
>> + * Please note that there are no locking guarantees for the callback and
>> + * that the reported pfn range might be freed or disappear after the
>> + * callback returns so the caller has to be very careful how it is used.
>> + *
>> + * The callback itself must not sleep or perform any operations which would
>> + * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
>> + * or via any lock dependency. It is generally advisable to implement
>> + * the callback as simple as possible and defer any heavy lifting to a
>> + * different context.
>> + *
>> + * There is no guarantee that each free range will be reported only once
>> + * during one walk_free_mem_block invocation.
>> + *
>> + * pfn_to_page on the given range is strongly discouraged and if there is
>> + * an absolute need for that make sure to contact MM people to discuss
>> + * potential problems.
>> + *
>> + * The function itself might sleep so it cannot be called from atomic
>> + * contexts.
>> + *
>> + * In general low orders tend to be very volatile and so it makes more
>> + * sense to query larger ones first for various optimizations which like
>> + * ballooning etc... This will reduce the overhead as well.
>> + */
>> +void walk_free_mem_block(void *opaque,
>> +			 int min_order,
>> +			 bool (*report_pfn_range)(void *opaque,
>> +						  unsigned long pfn,
>> +						  unsigned long num))
>> +{
>> +	struct zone *zone;
>> +	int order;
>> +	enum migratetype mt;
>> +	bool ret;
>> +
>> +	for_each_populated_zone(zone) {
>> +		for (order = MAX_ORDER - 1; order >= min_order; order--) {
>> +			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
>> +				ret = walk_free_page_list(opaque, zone,
>> +							  order, mt,
>> +							  report_pfn_range);
>> +				if (!ret)
>> +					return;
>> +			}
>> +		}
>> +	}
>> +}
>> +EXPORT_SYMBOL_GPL(walk_free_mem_block);
>> +
> I think callers need a way to
> 1. distinguish between completion and exit on error

The first one has actually been addressed in v25, where walk_free_mem_block 
returns 0 when the reporting completes, or the non-zero value returned by 
the callback otherwise. So the caller can detect errors by having its 
callback return an error value.
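For example, a caller could use the v25 interface roughly like this (just a 
sketch; struct hint_ctx and the hint_vq_* helpers are made-up names, not 
part of the actual driver):

/* Made-up callback: stop the walk when there is no room left for hints. */
static int report_free_range(void *opaque, unsigned long pfn,
			     unsigned long num)
{
	struct hint_ctx *ctx = opaque;

	if (!hint_vq_has_room(ctx))
		return -EBUSY;		/* non-zero: abort the walk */

	hint_vq_add(ctx, pfn, num);
	return 0;			/* continue the walk */
}

static void report_free_pages(struct hint_ctx *ctx)
{
	int ret;

	/* v25: 0 means the walk completed; otherwise the callback's value. */
	ret = walk_free_mem_block(ctx, 0, report_free_range);
	if (ret)
		pr_debug("free page reporting stopped early: %d\n", ret);
}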

Best,
Wei



^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 1/2] mm: support reporting free page blocks
  2018-01-26  3:29       ` Wei Wang
  (?)
@ 2018-01-26 15:00         ` Michael S. Tsirkin
  -1 siblings, 0 replies; 65+ messages in thread
From: Michael S. Tsirkin @ 2018-01-26 15:00 UTC (permalink / raw)
  To: Wei Wang
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On Fri, Jan 26, 2018 at 11:29:15AM +0800, Wei Wang wrote:
> On 01/25/2018 09:41 PM, Michael S. Tsirkin wrote:
> > On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> > > This patch adds support to walk through the free page blocks in the
> > > system and report them via a callback function. Some page blocks may
> > > leave the free list after zone->lock is released, so it is the caller's
> > > responsibility to either detect or prevent the use of such pages.
> > > 
> > > One use example of this patch is to accelerate live migration by skipping
> > > the transfer of free pages reported from the guest. A popular method used
> > > by the hypervisor to track which part of memory is written during live
> > > migration is to write-protect all the guest memory. So, those pages that
> > > are reported as free pages but are written after the report function
> > > returns will be captured by the hypervisor, and they will be added to the
> > > next round of memory transfer.
> > > 
> > > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> > > Signed-off-by: Liang Li <liang.z.li@intel.com>
> > > Cc: Michal Hocko <mhocko@kernel.org>
> > > Cc: Michael S. Tsirkin <mst@redhat.com>
> > > Acked-by: Michal Hocko <mhocko@kernel.org>
> > > ---
> > >   include/linux/mm.h |  6 ++++
> > >   mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >   2 files changed, 97 insertions(+)
> > > 
> > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > index ea818ff..b3077dd 100644
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
> > >   		unsigned long zone_start_pfn, unsigned long *zholes_size);
> > >   extern void free_initmem(void);
> > > +extern void walk_free_mem_block(void *opaque,
> > > +				int min_order,
> > > +				bool (*report_pfn_range)(void *opaque,
> > > +							 unsigned long pfn,
> > > +							 unsigned long num));
> > > +
> > >   /*
> > >    * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
> > >    * into the buddy system. The freed pages will be poisoned with pattern
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index 76c9688..705de22 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
> > >   	show_swap_cache_info();
> > >   }
> > > +/*
> > > + * Walk through a free page list and report the found pfn range via the
> > > + * callback.
> > > + *
> > > + * Return false if the callback requests to stop reporting. Otherwise,
> > > + * return true.
> > > + */
> > > +static bool walk_free_page_list(void *opaque,
> > > +				struct zone *zone,
> > > +				int order,
> > > +				enum migratetype mt,
> > > +				bool (*report_pfn_range)(void *,
> > > +							 unsigned long,
> > > +							 unsigned long))
> > > +{
> > > +	struct page *page;
> > > +	struct list_head *list;
> > > +	unsigned long pfn, flags;
> > > +	bool ret;
> > > +
> > > +	spin_lock_irqsave(&zone->lock, flags);
> > > +	list = &zone->free_area[order].free_list[mt];
> > > +	list_for_each_entry(page, list, lru) {
> > > +		pfn = page_to_pfn(page);
> > > +		ret = report_pfn_range(opaque, pfn, 1 << order);
> > > +		if (!ret)
> > > +			break;
> > > +	}
> > > +	spin_unlock_irqrestore(&zone->lock, flags);
> > > +
> > > +	return ret;
> > > +}
> > There are two issues with this API. One is that it is not
> > restartable: if you return false, you start from the
> > beginning. So no way to drop lock, do something slow
> > and then proceed.
> > 
> > Another is that you are using it to report free page hints. Presumably
> > the point is to drop these pages - keeping them near head of the list
> > and reusing the reported ones will just make everything slower
> > invalidating the hint.
> > 
> > How about rotating these pages towards the end of the list?
> > Probably not on each call, collect reported pages and then
> > move them to tail when we exit.
> 
> 
> I'm not sure how this would help. For example, suppose we have a list of
> 2MB free page blocks:
> A-->B-->C-->D-->E-->F-->G-->H
> 
> After reporting A and B, moving them to the end of the list, and exiting,
> when the caller comes back:
> 1) if the list remains unchanged, it will be
> C-->D-->E-->F-->G-->H-->A-->B

Right. So here we can just scan until we see A, right?  The harder question
is what to do if A, and only A, has been consumed.  Ideally we don't want B
to be sent twice, though maybe that isn't a big deal if it only happens
twice. The host might know the page is already gone - how about the host
gives us a hint after using the buffer?
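(To be concrete: in virtio terms the device already signals this by marking
each hint buffer used once it has looked at it, so the driver could drain
the used ring and learn which blocks no longer need resending. A rough
sketch - mark_hint_consumed() is a made-up bookkeeping helper:)

static void drain_used_hints(struct virtqueue *vq)
{
	unsigned int len;
	void *token;

	/* Each returned token identifies a hint the device has consumed. */
	while ((token = virtqueue_get_buf(vq, &len)) != NULL)
		mark_hint_consumed(token);
}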

> 2) in the worst case, all the blocks have been split into smaller blocks
> and used by the time the caller comes back.
> 
> Where could we continue in either case?

I'm not sure. But the alternative appears to be to hold the lock and just
block whoever wants to use any pages.  Yes, we would send hints faster, but
apparently something wanted those pages, and holding the lock interferes
with that something.

> 
> The reason to think about "restart" is the worry that the virtqueue may be
> full, right? But we've agreed that losing some hints isn't important, and in
> practice the virtqueue won't be full, since the host side consumes the hints
> faster than the guest produces them.

It would be more convincing if we sent e.g. higher-order pages first. As it
is, it won't take long to stuff the ring full of 4K pages, and it seems
highly unlikely that the host won't ever be scheduled out.

Can we maybe agree on what kind of benchmark makes sense for this work? I'm
concerned that we are laser-focused on how long the migration takes, while
ignoring e.g. slowdowns after migration.

> I'm concerned that modifying the free list may cause more controversy: even
> if it is safe from some angles, the debate would be hard to settle. If
> possible, we could go with the most prudent approach for now and have more
> discussion in future improvement patches. What do you think?

Well, I'm not 100% sure about restartability. But keeping the pages freed by
the host near the head of the list looks kind of wrong.
Try to float a patch on top for the rotation and see what happens?
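To be concrete, the kind of thing I mean, as a sketch only (it assumes the
walker recorded the reported blocks in an array and re-validates each one
under zone->lock; it also ignores that the migratetype may have changed,
which a real patch would have to handle):

/* Rotate already-reported free blocks to the tail of their free list so
 * the allocator hands out unreported blocks first.
 */
static void rotate_reported_blocks(struct zone *zone, int order, int mt,
				   struct page **reported, int nr)
{
	unsigned long flags;
	int i;

	spin_lock_irqsave(&zone->lock, flags);
	for (i = 0; i < nr; i++) {
		struct page *page = reported[i];

		/* Skip blocks that were allocated or merged meanwhile. */
		if (!PageBuddy(page) || page_order(page) != order)
			continue;

		list_move_tail(&page->lru,
			       &zone->free_area[order].free_list[mt]);
	}
	spin_unlock_irqrestore(&zone->lock, flags);
}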

> 
> 
> > 
> > 
> > > +
> > > +/**
> > > + * walk_free_mem_block - Walk through the free page blocks in the system
> > > + * @opaque: the context passed from the caller
> > > + * @min_order: the minimum order of free lists to check
> > > + * @report_pfn_range: the callback to report the pfn range of the free pages
> > > + *
> > > + * If the callback returns false, stop iterating the list of free page blocks.
> > > + * Otherwise, continue to report.
> > > + *
> > > + * Please note that there are no locking guarantees for the callback and
> > > + * that the reported pfn range might be freed or disappear after the
> > > + * callback returns so the caller has to be very careful how it is used.
> > > + *
> > > + * The callback itself must not sleep or perform any operations which would
> > > + * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
> > > + * or via any lock dependency. It is generally advisable to implement
> > > + * the callback as simple as possible and defer any heavy lifting to a
> > > + * different context.
> > > + *
> > > + * There is no guarantee that each free range will be reported only once
> > > + * during one walk_free_mem_block invocation.
> > > + *
> > > + * pfn_to_page on the given range is strongly discouraged and if there is
> > > + * an absolute need for that make sure to contact MM people to discuss
> > > + * potential problems.
> > > + *
> > > + * The function itself might sleep so it cannot be called from atomic
> > > + * contexts.
> > > + *
> > > + * In general low orders tend to be very volatile and so it makes more
> > > + * sense to query larger ones first for various optimizations which like
> > > + * ballooning etc... This will reduce the overhead as well.
> > > + */
> > > +void walk_free_mem_block(void *opaque,
> > > +			 int min_order,
> > > +			 bool (*report_pfn_range)(void *opaque,
> > > +						  unsigned long pfn,
> > > +						  unsigned long num))
> > > +{
> > > +	struct zone *zone;
> > > +	int order;
> > > +	enum migratetype mt;
> > > +	bool ret;
> > > +
> > > +	for_each_populated_zone(zone) {
> > > +		for (order = MAX_ORDER - 1; order >= min_order; order--) {
> > > +			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
> > > +				ret = walk_free_page_list(opaque, zone,
> > > +							  order, mt,
> > > +							  report_pfn_range);
> > > +				if (!ret)
> > > +					return;
> > > +			}
> > > +		}
> > > +	}
> > > +}
> > > +EXPORT_SYMBOL_GPL(walk_free_mem_block);
> > > +
> > I think callers need a way to
> > 1. distinguish between completion and exit on error
> 
> The first one has actually been addressed in v25, where walk_free_mem_block
> returns 0 when the reporting completes, or the non-zero value returned by
> the callback otherwise. So the caller can detect errors by having its
> callback return an error value.
> 
> Best,
> Wei
> 
> 

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 1/2] mm: support reporting free page blocks
@ 2018-01-26 15:00         ` Michael S. Tsirkin
  0 siblings, 0 replies; 65+ messages in thread
From: Michael S. Tsirkin @ 2018-01-26 15:00 UTC (permalink / raw)
  To: Wei Wang
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On Fri, Jan 26, 2018 at 11:29:15AM +0800, Wei Wang wrote:
> On 01/25/2018 09:41 PM, Michael S. Tsirkin wrote:
> > On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> > > This patch adds support to walk through the free page blocks in the
> > > system and report them via a callback function. Some page blocks may
> > > leave the free list after zone->lock is released, so it is the caller's
> > > responsibility to either detect or prevent the use of such pages.
> > > 
> > > One use example of this patch is to accelerate live migration by skipping
> > > the transfer of free pages reported from the guest. A popular method used
> > > by the hypervisor to track which part of memory is written during live
> > > migration is to write-protect all the guest memory. So, those pages that
> > > are reported as free pages but are written after the report function
> > > returns will be captured by the hypervisor, and they will be added to the
> > > next round of memory transfer.
> > > 
> > > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> > > Signed-off-by: Liang Li <liang.z.li@intel.com>
> > > Cc: Michal Hocko <mhocko@kernel.org>
> > > Cc: Michael S. Tsirkin <mst@redhat.com>
> > > Acked-by: Michal Hocko <mhocko@kernel.org>
> > > ---
> > >   include/linux/mm.h |  6 ++++
> > >   mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >   2 files changed, 97 insertions(+)
> > > 
> > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > index ea818ff..b3077dd 100644
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
> > >   		unsigned long zone_start_pfn, unsigned long *zholes_size);
> > >   extern void free_initmem(void);
> > > +extern void walk_free_mem_block(void *opaque,
> > > +				int min_order,
> > > +				bool (*report_pfn_range)(void *opaque,
> > > +							 unsigned long pfn,
> > > +							 unsigned long num));
> > > +
> > >   /*
> > >    * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
> > >    * into the buddy system. The freed pages will be poisoned with pattern
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index 76c9688..705de22 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
> > >   	show_swap_cache_info();
> > >   }
> > > +/*
> > > + * Walk through a free page list and report the found pfn range via the
> > > + * callback.
> > > + *
> > > + * Return false if the callback requests to stop reporting. Otherwise,
> > > + * return true.
> > > + */
> > > +static bool walk_free_page_list(void *opaque,
> > > +				struct zone *zone,
> > > +				int order,
> > > +				enum migratetype mt,
> > > +				bool (*report_pfn_range)(void *,
> > > +							 unsigned long,
> > > +							 unsigned long))
> > > +{
> > > +	struct page *page;
> > > +	struct list_head *list;
> > > +	unsigned long pfn, flags;
> > > +	bool ret;
> > > +
> > > +	spin_lock_irqsave(&zone->lock, flags);
> > > +	list = &zone->free_area[order].free_list[mt];
> > > +	list_for_each_entry(page, list, lru) {
> > > +		pfn = page_to_pfn(page);
> > > +		ret = report_pfn_range(opaque, pfn, 1 << order);
> > > +		if (!ret)
> > > +			break;
> > > +	}
> > > +	spin_unlock_irqrestore(&zone->lock, flags);
> > > +
> > > +	return ret;
> > > +}
> > There are two issues with this API. One is that it is not
> > restarteable: if you return false, you start from the
> > beginning. So no way to drop lock, do something slow
> > and then proceed.
> > 
> > Another is that you are using it to report free page hints. Presumably
> > the point is to drop these pages - keeping them near head of the list
> > and reusing the reported ones will just make everything slower
> > invalidating the hint.
> > 
> > How about rotating these pages towards the end of the list?
> > Probably not on each call, callect reported pages and then
> > move them to tail when we exit.
> 
> 
> I'm not sure how this would help. For example, we have a list of 2M free
> page blocks:
> A-->B-->C-->D-->E-->F-->G--H
> 
> After reporting A and B, and put them to the end and exit, when the caller
> comes back,
> 1) if the list remains unchanged, then it will be
> C-->D-->E-->F-->G-->H-->A-->B

Right. So here we can just scan until we see A, right?  It's a harder
question what to do if A and only A has been consumed.  We don't want B
to be sent twice ideally. OTOH maybe that isn't a big deal if it's only
twice. Host might know page is already gone - how about host gives us a
hint after using the buffer?

> 2) If worse, all the blocks have been split into smaller blocks and used
> after the caller comes back.
> 
> where could we continue?

I'm not sure. But an alternative appears to be to hold a lock
and just block whoever wanted to use any pages.  Yes we are sending
hints faster but apparently something wanted these pages, and holding
the lock is interfering with this something.

> 
> The reason to think about "restart" is the worry about the virtqueue is
> full, right? But we've agreed that losing some hints to report isn't
> important, and in practice, the virtqueue won't be full as the host side is
> faster.

It would be more convincing if we sent e.g. higher order pages
first. As it is - it won't take long to stuff ring full of
4K pages and it seems highly unlikely that host won't ever
be scheduled out.

Can we maybe agree on what kind of benchmark makes sense for
this work? I'm concerned that we are laser focused on just
how long does it take to migrate ignoring e.g. slowdowns
after migration.

> I'm concerned that actions on the free list may cause more controversy
> though it might be safe to do from some aspect, and would be hard to end
> debating. If possible, we could go with the most prudent approach for now,
> and have more discussions in future improvement patches. What would you
> think?

Well I'm not 100% about restartability. But keeping pages
freed by host near head of the list looks kind of wrong.
Try to float a patch on top for the rotation and see what happens?

> 
> 
> > 
> > 
> > > +
> > > +/**
> > > + * walk_free_mem_block - Walk through the free page blocks in the system
> > > + * @opaque: the context passed from the caller
> > > + * @min_order: the minimum order of free lists to check
> > > + * @report_pfn_range: the callback to report the pfn range of the free pages
> > > + *
> > > + * If the callback returns false, stop iterating the list of free page blocks.
> > > + * Otherwise, continue to report.
> > > + *
> > > + * Please note that there are no locking guarantees for the callback and
> > > + * that the reported pfn range might be freed or disappear after the
> > > + * callback returns so the caller has to be very careful how it is used.
> > > + *
> > > + * The callback itself must not sleep or perform any operations which would
> > > + * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
> > > + * or via any lock dependency. It is generally advisable to implement
> > > + * the callback as simple as possible and defer any heavy lifting to a
> > > + * different context.
> > > + *
> > > + * There is no guarantee that each free range will be reported only once
> > > + * during one walk_free_mem_block invocation.
> > > + *
> > > + * pfn_to_page on the given range is strongly discouraged and if there is
> > > + * an absolute need for that make sure to contact MM people to discuss
> > > + * potential problems.
> > > + *
> > > + * The function itself might sleep so it cannot be called from atomic
> > > + * contexts.
> > > + *
> > > + * In general low orders tend to be very volatile and so it makes more
> > > + * sense to query larger ones first for various optimizations which like
> > > + * ballooning etc... This will reduce the overhead as well.
> > > + */
> > > +void walk_free_mem_block(void *opaque,
> > > +			 int min_order,
> > > +			 bool (*report_pfn_range)(void *opaque,
> > > +						  unsigned long pfn,
> > > +						  unsigned long num))
> > > +{
> > > +	struct zone *zone;
> > > +	int order;
> > > +	enum migratetype mt;
> > > +	bool ret;
> > > +
> > > +	for_each_populated_zone(zone) {
> > > +		for (order = MAX_ORDER - 1; order >= min_order; order--) {
> > > +			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
> > > +				ret = walk_free_page_list(opaque, zone,
> > > +							  order, mt,
> > > +							  report_pfn_range);
> > > +				if (!ret)
> > > +					return;
> > > +			}
> > > +		}
> > > +	}
> > > +}
> > > +EXPORT_SYMBOL_GPL(walk_free_mem_block);
> > > +
> > I think callers need a way to
> > 1. distinguish between completion and exit on error
> 
> The first one here has actually been achieved by v25, where
> walk_free_mem_block returns 0 on completing the reporting, or a non-zero
> value which is returned from the callback.
> So the caller will detect errors via letting the callback to return
> something.
> 
> Best,
> Wei
> 
> 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 1/2] mm: support reporting free page blocks
  2018-01-26  3:29       ` Wei Wang
  (?)
  (?)
@ 2018-01-26 15:00       ` Michael S. Tsirkin
  -1 siblings, 0 replies; 65+ messages in thread
From: Michael S. Tsirkin @ 2018-01-26 15:00 UTC (permalink / raw)
  To: Wei Wang
  Cc: yang.zhang.wz, virtio-dev, riel, quan.xu0, kvm, nilal,
	liliang.opensource, linux-kernel, mhocko, linux-mm, pbonzini,
	akpm, virtualization

On Fri, Jan 26, 2018 at 11:29:15AM +0800, Wei Wang wrote:
> On 01/25/2018 09:41 PM, Michael S. Tsirkin wrote:
> > On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> > > This patch adds support to walk through the free page blocks in the
> > > system and report them via a callback function. Some page blocks may
> > > leave the free list after zone->lock is released, so it is the caller's
> > > responsibility to either detect or prevent the use of such pages.
> > > 
> > > One use example of this patch is to accelerate live migration by skipping
> > > the transfer of free pages reported from the guest. A popular method used
> > > by the hypervisor to track which part of memory is written during live
> > > migration is to write-protect all the guest memory. So, those pages that
> > > are reported as free pages but are written after the report function
> > > returns will be captured by the hypervisor, and they will be added to the
> > > next round of memory transfer.
> > > 
> > > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> > > Signed-off-by: Liang Li <liang.z.li@intel.com>
> > > Cc: Michal Hocko <mhocko@kernel.org>
> > > Cc: Michael S. Tsirkin <mst@redhat.com>
> > > Acked-by: Michal Hocko <mhocko@kernel.org>
> > > ---
> > >   include/linux/mm.h |  6 ++++
> > >   mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >   2 files changed, 97 insertions(+)
> > > 
> > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > index ea818ff..b3077dd 100644
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
> > >   		unsigned long zone_start_pfn, unsigned long *zholes_size);
> > >   extern void free_initmem(void);
> > > +extern void walk_free_mem_block(void *opaque,
> > > +				int min_order,
> > > +				bool (*report_pfn_range)(void *opaque,
> > > +							 unsigned long pfn,
> > > +							 unsigned long num));
> > > +
> > >   /*
> > >    * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
> > >    * into the buddy system. The freed pages will be poisoned with pattern
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index 76c9688..705de22 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
> > >   	show_swap_cache_info();
> > >   }
> > > +/*
> > > + * Walk through a free page list and report the found pfn range via the
> > > + * callback.
> > > + *
> > > + * Return false if the callback requests to stop reporting. Otherwise,
> > > + * return true.
> > > + */
> > > +static bool walk_free_page_list(void *opaque,
> > > +				struct zone *zone,
> > > +				int order,
> > > +				enum migratetype mt,
> > > +				bool (*report_pfn_range)(void *,
> > > +							 unsigned long,
> > > +							 unsigned long))
> > > +{
> > > +	struct page *page;
> > > +	struct list_head *list;
> > > +	unsigned long pfn, flags;
> > > +	bool ret;
> > > +
> > > +	spin_lock_irqsave(&zone->lock, flags);
> > > +	list = &zone->free_area[order].free_list[mt];
> > > +	list_for_each_entry(page, list, lru) {
> > > +		pfn = page_to_pfn(page);
> > > +		ret = report_pfn_range(opaque, pfn, 1 << order);
> > > +		if (!ret)
> > > +			break;
> > > +	}
> > > +	spin_unlock_irqrestore(&zone->lock, flags);
> > > +
> > > +	return ret;
> > > +}
> > There are two issues with this API. One is that it is not
> > restarteable: if you return false, you start from the
> > beginning. So no way to drop lock, do something slow
> > and then proceed.
> > 
> > Another is that you are using it to report free page hints. Presumably
> > the point is to drop these pages - keeping them near head of the list
> > and reusing the reported ones will just make everything slower
> > invalidating the hint.
> > 
> > How about rotating these pages towards the end of the list?
> > Probably not on each call, callect reported pages and then
> > move them to tail when we exit.
> 
> 
> I'm not sure how this would help. For example, we have a list of 2M free
> page blocks:
> A-->B-->C-->D-->E-->F-->G--H
> 
> After reporting A and B, and put them to the end and exit, when the caller
> comes back,
> 1) if the list remains unchanged, then it will be
> C-->D-->E-->F-->G-->H-->A-->B

Right. So here we can just scan until we see A, right?  It's a harder
question what to do if A and only A has been consumed.  We don't want B
to be sent twice ideally. OTOH maybe that isn't a big deal if it's only
twice. Host might know page is already gone - how about host gives us a
hint after using the buffer?

> 2) If worse, all the blocks have been split into smaller blocks and used
> after the caller comes back.
> 
> where could we continue?

I'm not sure. But an alternative appears to be to hold a lock
and just block whoever wanted to use any pages.  Yes we are sending
hints faster but apparently something wanted these pages, and holding
the lock is interfering with this something.

> 
> The reason to think about "restart" is the worry about the virtqueue is
> full, right? But we've agreed that losing some hints to report isn't
> important, and in practice, the virtqueue won't be full as the host side is
> faster.

It would be more convincing if we sent e.g. higher order pages
first. As it is - it won't take long to stuff ring full of
4K pages and it seems highly unlikely that host won't ever
be scheduled out.

Can we maybe agree on what kind of benchmark makes sense for
this work? I'm concerned that we are laser focused on just
how long does it take to migrate ignoring e.g. slowdowns
after migration.

> I'm concerned that actions on the free list may cause more controversy
> though it might be safe to do from some aspect, and would be hard to end
> debating. If possible, we could go with the most prudent approach for now,
> and have more discussions in future improvement patches. What would you
> think?

Well I'm not 100% about restartability. But keeping pages
freed by host near head of the list looks kind of wrong.
Try to float a patch on top for the rotation and see what happens?

> 
> 
> > 
> > 
> > > +
> > > +/**
> > > + * walk_free_mem_block - Walk through the free page blocks in the system
> > > + * @opaque: the context passed from the caller
> > > + * @min_order: the minimum order of free lists to check
> > > + * @report_pfn_range: the callback to report the pfn range of the free pages
> > > + *
> > > + * If the callback returns false, stop iterating the list of free page blocks.
> > > + * Otherwise, continue to report.
> > > + *
> > > + * Please note that there are no locking guarantees for the callback and
> > > + * that the reported pfn range might be freed or disappear after the
> > > + * callback returns so the caller has to be very careful how it is used.
> > > + *
> > > + * The callback itself must not sleep or perform any operations which would
> > > + * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
> > > + * or via any lock dependency. It is generally advisable to implement
> > > + * the callback as simple as possible and defer any heavy lifting to a
> > > + * different context.
> > > + *
> > > + * There is no guarantee that each free range will be reported only once
> > > + * during one walk_free_mem_block invocation.
> > > + *
> > > + * pfn_to_page on the given range is strongly discouraged and if there is
> > > + * an absolute need for that make sure to contact MM people to discuss
> > > + * potential problems.
> > > + *
> > > + * The function itself might sleep so it cannot be called from atomic
> > > + * contexts.
> > > + *
> > > + * In general low orders tend to be very volatile and so it makes more
> > > + * sense to query larger ones first for various optimizations which like
> > > + * ballooning etc... This will reduce the overhead as well.
> > > + */
> > > +void walk_free_mem_block(void *opaque,
> > > +			 int min_order,
> > > +			 bool (*report_pfn_range)(void *opaque,
> > > +						  unsigned long pfn,
> > > +						  unsigned long num))
> > > +{
> > > +	struct zone *zone;
> > > +	int order;
> > > +	enum migratetype mt;
> > > +	bool ret;
> > > +
> > > +	for_each_populated_zone(zone) {
> > > +		for (order = MAX_ORDER - 1; order >= min_order; order--) {
> > > +			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
> > > +				ret = walk_free_page_list(opaque, zone,
> > > +							  order, mt,
> > > +							  report_pfn_range);
> > > +				if (!ret)
> > > +					return;
> > > +			}
> > > +		}
> > > +	}
> > > +}
> > > +EXPORT_SYMBOL_GPL(walk_free_mem_block);
> > > +
> > I think callers need a way to
> > 1. distinguish between completion and exit on error
> 
> The first one here has actually been achieved by v25, where
> walk_free_mem_block returns 0 on completing the reporting, or a non-zero
> value which is returned from the callback.
> So the caller will detect errors via letting the callback to return
> something.
> 
> Best,
> Wei
> 
> 

^ permalink raw reply	[flat|nested] 65+ messages in thread

* [virtio-dev] Re: [PATCH v24 1/2] mm: support reporting free page blocks
@ 2018-01-26 15:00         ` Michael S. Tsirkin
  0 siblings, 0 replies; 65+ messages in thread
From: Michael S. Tsirkin @ 2018-01-26 15:00 UTC (permalink / raw)
  To: Wei Wang
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On Fri, Jan 26, 2018 at 11:29:15AM +0800, Wei Wang wrote:
> On 01/25/2018 09:41 PM, Michael S. Tsirkin wrote:
> > On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> > > This patch adds support to walk through the free page blocks in the
> > > system and report them via a callback function. Some page blocks may
> > > leave the free list after zone->lock is released, so it is the caller's
> > > responsibility to either detect or prevent the use of such pages.
> > > 
> > > One use example of this patch is to accelerate live migration by skipping
> > > the transfer of free pages reported from the guest. A popular method used
> > > by the hypervisor to track which part of memory is written during live
> > > migration is to write-protect all the guest memory. So, those pages that
> > > are reported as free pages but are written after the report function
> > > returns will be captured by the hypervisor, and they will be added to the
> > > next round of memory transfer.
> > > 
> > > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> > > Signed-off-by: Liang Li <liang.z.li@intel.com>
> > > Cc: Michal Hocko <mhocko@kernel.org>
> > > Cc: Michael S. Tsirkin <mst@redhat.com>
> > > Acked-by: Michal Hocko <mhocko@kernel.org>
> > > ---
> > >   include/linux/mm.h |  6 ++++
> > >   mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >   2 files changed, 97 insertions(+)
> > > 
> > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > index ea818ff..b3077dd 100644
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
> > >   		unsigned long zone_start_pfn, unsigned long *zholes_size);
> > >   extern void free_initmem(void);
> > > +extern void walk_free_mem_block(void *opaque,
> > > +				int min_order,
> > > +				bool (*report_pfn_range)(void *opaque,
> > > +							 unsigned long pfn,
> > > +							 unsigned long num));
> > > +
> > >   /*
> > >    * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
> > >    * into the buddy system. The freed pages will be poisoned with pattern
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index 76c9688..705de22 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
> > >   	show_swap_cache_info();
> > >   }
> > > +/*
> > > + * Walk through a free page list and report the found pfn range via the
> > > + * callback.
> > > + *
> > > + * Return false if the callback requests to stop reporting. Otherwise,
> > > + * return true.
> > > + */
> > > +static bool walk_free_page_list(void *opaque,
> > > +				struct zone *zone,
> > > +				int order,
> > > +				enum migratetype mt,
> > > +				bool (*report_pfn_range)(void *,
> > > +							 unsigned long,
> > > +							 unsigned long))
> > > +{
> > > +	struct page *page;
> > > +	struct list_head *list;
> > > +	unsigned long pfn, flags;
> > > +	bool ret;
> > > +
> > > +	spin_lock_irqsave(&zone->lock, flags);
> > > +	list = &zone->free_area[order].free_list[mt];
> > > +	list_for_each_entry(page, list, lru) {
> > > +		pfn = page_to_pfn(page);
> > > +		ret = report_pfn_range(opaque, pfn, 1 << order);
> > > +		if (!ret)
> > > +			break;
> > > +	}
> > > +	spin_unlock_irqrestore(&zone->lock, flags);
> > > +
> > > +	return ret;
> > > +}
> > There are two issues with this API. One is that it is not
> > restarteable: if you return false, you start from the
> > beginning. So no way to drop lock, do something slow
> > and then proceed.
> > 
> > Another is that you are using it to report free page hints. Presumably
> > the point is to drop these pages - keeping them near head of the list
> > and reusing the reported ones will just make everything slower
> > invalidating the hint.
> > 
> > How about rotating these pages towards the end of the list?
> > Probably not on each call, callect reported pages and then
> > move them to tail when we exit.
> 
> 
> I'm not sure how this would help. For example, we have a list of 2M free
> page blocks:
> A-->B-->C-->D-->E-->F-->G--H
> 
> After reporting A and B, and put them to the end and exit, when the caller
> comes back,
> 1) if the list remains unchanged, then it will be
> C-->D-->E-->F-->G-->H-->A-->B

Right. So here we can just scan until we see A, right?  It's a harder
question what to do if A and only A has been consumed.  We don't want B
to be sent twice ideally. OTOH maybe that isn't a big deal if it's only
twice. Host might know page is already gone - how about host gives us a
hint after using the buffer?

> 2) If worse, all the blocks have been split into smaller blocks and used
> after the caller comes back.
> 
> where could we continue?

I'm not sure. But an alternative appears to be to hold a lock
and just block whoever wanted to use any pages.  Yes we are sending
hints faster but apparently something wanted these pages, and holding
the lock is interfering with this something.

> 
> The reason to think about "restart" is the worry about the virtqueue is
> full, right? But we've agreed that losing some hints to report isn't
> important, and in practice, the virtqueue won't be full as the host side is
> faster.

It would be more convincing if we sent e.g. higher order pages
first. As it is - it won't take long to stuff ring full of
4K pages and it seems highly unlikely that host won't ever
be scheduled out.

Can we maybe agree on what kind of benchmark makes sense for
this work? I'm concerned that we are laser focused on just
how long does it take to migrate ignoring e.g. slowdowns
after migration.

> I'm concerned that actions on the free list may cause more controversy
> though it might be safe to do from some aspect, and would be hard to end
> debating. If possible, we could go with the most prudent approach for now,
> and have more discussions in future improvement patches. What would you
> think?

Well I'm not 100% about restartability. But keeping pages
freed by host near head of the list looks kind of wrong.
Try to float a patch on top for the rotation and see what happens?

> 
> 
> > 
> > 
> > > +
> > > +/**
> > > + * walk_free_mem_block - Walk through the free page blocks in the system
> > > + * @opaque: the context passed from the caller
> > > + * @min_order: the minimum order of free lists to check
> > > + * @report_pfn_range: the callback to report the pfn range of the free pages
> > > + *
> > > + * If the callback returns false, stop iterating the list of free page blocks.
> > > + * Otherwise, continue to report.
> > > + *
> > > + * Please note that there are no locking guarantees for the callback and
> > > + * that the reported pfn range might be freed or disappear after the
> > > + * callback returns so the caller has to be very careful how it is used.
> > > + *
> > > + * The callback itself must not sleep or perform any operations which would
> > > + * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
> > > + * or via any lock dependency. It is generally advisable to implement
> > > + * the callback as simple as possible and defer any heavy lifting to a
> > > + * different context.
> > > + *
> > > + * There is no guarantee that each free range will be reported only once
> > > + * during one walk_free_mem_block invocation.
> > > + *
> > > + * pfn_to_page on the given range is strongly discouraged and if there is
> > > + * an absolute need for that make sure to contact MM people to discuss
> > > + * potential problems.
> > > + *
> > > + * The function itself might sleep so it cannot be called from atomic
> > > + * contexts.
> > > + *
> > > + * In general low orders tend to be very volatile and so it makes more
> > > + * sense to query larger ones first for various optimizations which like
> > > + * ballooning etc... This will reduce the overhead as well.
> > > + */
> > > +void walk_free_mem_block(void *opaque,
> > > +			 int min_order,
> > > +			 bool (*report_pfn_range)(void *opaque,
> > > +						  unsigned long pfn,
> > > +						  unsigned long num))
> > > +{
> > > +	struct zone *zone;
> > > +	int order;
> > > +	enum migratetype mt;
> > > +	bool ret;
> > > +
> > > +	for_each_populated_zone(zone) {
> > > +		for (order = MAX_ORDER - 1; order >= min_order; order--) {
> > > +			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
> > > +				ret = walk_free_page_list(opaque, zone,
> > > +							  order, mt,
> > > +							  report_pfn_range);
> > > +				if (!ret)
> > > +					return;
> > > +			}
> > > +		}
> > > +	}
> > > +}
> > > +EXPORT_SYMBOL_GPL(walk_free_mem_block);
> > > +
> > I think callers need a way to
> > 1. distinguish between completion and exit on error
> 
> The first one here has actually been achieved by v25, where
> walk_free_mem_block returns 0 on completing the reporting, or a non-zero
> value which is returned from the callback.
> So the caller can detect errors by letting the callback return an error
> value.
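
(For illustration, a sketch of how a caller might consume that v25-style
interface - the exact v25 prototype isn't quoted in this thread, so the int
return types and every name other than walk_free_mem_block() are
assumptions:)

/* 0 = continue the walk, non-zero = stop; walk_free_mem_block() is assumed
 * to hand this value back to its caller.
 */
static int report_free_range(void *opaque, unsigned long pfn,
			     unsigned long num)
{
	struct free_page_hinter *h = opaque;	/* hypothetical caller state */

	if (!hinter_queue_range(h, pfn, num))	/* hypothetical helper */
		return -ENOSPC;			/* e.g. the virtqueue is full */
	return 0;
}

static int report_all_free_pages(struct free_page_hinter *h)
{
	/* 0 on a complete walk, otherwise the callback's error value */
	return walk_free_mem_block(h, 0, report_free_range);
}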
> 
> Best,
> Wei
> 
> 

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 1/2] mm: support reporting free page blocks
  2018-01-26 15:00         ` Michael S. Tsirkin
  (?)
@ 2018-01-26 21:43           ` Michael S. Tsirkin
  -1 siblings, 0 replies; 65+ messages in thread
From: Michael S. Tsirkin @ 2018-01-26 21:43 UTC (permalink / raw)
  To: Wei Wang
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On Fri, Jan 26, 2018 at 05:00:09PM +0200, Michael S. Tsirkin wrote:
> On Fri, Jan 26, 2018 at 11:29:15AM +0800, Wei Wang wrote:
> > On 01/25/2018 09:41 PM, Michael S. Tsirkin wrote:
> > > On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> > > > This patch adds support to walk through the free page blocks in the
> > > > system and report them via a callback function. Some page blocks may
> > > > leave the free list after zone->lock is released, so it is the caller's
> > > > responsibility to either detect or prevent the use of such pages.
> > > > 
> > > > One use example of this patch is to accelerate live migration by skipping
> > > > the transfer of free pages reported from the guest. A popular method used
> > > > by the hypervisor to track which part of memory is written during live
> > > > migration is to write-protect all the guest memory. So, those pages that
> > > > are reported as free pages but are written after the report function
> > > > returns will be captured by the hypervisor, and they will be added to the
> > > > next round of memory transfer.
> > > > 
> > > > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> > > > Signed-off-by: Liang Li <liang.z.li@intel.com>
> > > > Cc: Michal Hocko <mhocko@kernel.org>
> > > > Cc: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Michal Hocko <mhocko@kernel.org>
> > > > ---
> > > >   include/linux/mm.h |  6 ++++
> > > >   mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >   2 files changed, 97 insertions(+)
> > > > 
> > > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > > index ea818ff..b3077dd 100644
> > > > --- a/include/linux/mm.h
> > > > +++ b/include/linux/mm.h
> > > > @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
> > > >   		unsigned long zone_start_pfn, unsigned long *zholes_size);
> > > >   extern void free_initmem(void);
> > > > +extern void walk_free_mem_block(void *opaque,
> > > > +				int min_order,
> > > > +				bool (*report_pfn_range)(void *opaque,
> > > > +							 unsigned long pfn,
> > > > +							 unsigned long num));
> > > > +
> > > >   /*
> > > >    * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
> > > >    * into the buddy system. The freed pages will be poisoned with pattern
> > > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > > index 76c9688..705de22 100644
> > > > --- a/mm/page_alloc.c
> > > > +++ b/mm/page_alloc.c
> > > > @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
> > > >   	show_swap_cache_info();
> > > >   }
> > > > +/*
> > > > + * Walk through a free page list and report the found pfn range via the
> > > > + * callback.
> > > > + *
> > > > + * Return false if the callback requests to stop reporting. Otherwise,
> > > > + * return true.
> > > > + */
> > > > +static bool walk_free_page_list(void *opaque,
> > > > +				struct zone *zone,
> > > > +				int order,
> > > > +				enum migratetype mt,
> > > > +				bool (*report_pfn_range)(void *,
> > > > +							 unsigned long,
> > > > +							 unsigned long))
> > > > +{
> > > > +	struct page *page;
> > > > +	struct list_head *list;
> > > > +	unsigned long pfn, flags;
> > > > +	bool ret;
> > > > +
> > > > +	spin_lock_irqsave(&zone->lock, flags);
> > > > +	list = &zone->free_area[order].free_list[mt];
> > > > +	list_for_each_entry(page, list, lru) {
> > > > +		pfn = page_to_pfn(page);
> > > > +		ret = report_pfn_range(opaque, pfn, 1 << order);
> > > > +		if (!ret)
> > > > +			break;
> > > > +	}
> > > > +	spin_unlock_irqrestore(&zone->lock, flags);
> > > > +
> > > > +	return ret;
> > > > +}
> > > There are two issues with this API. One is that it is not
> > > restartable: if you return false, you start from the
> > > beginning. So no way to drop lock, do something slow
> > > and then proceed.
> > > 
> > > Another is that you are using it to report free page hints. Presumably
> > > the point is to drop these pages - keeping them near head of the list
> > > and reusing the reported ones will just make everything slower
> > > invalidating the hint.
> > > 
> > > How about rotating these pages towards the end of the list?
> > > Probably not on each call, collect reported pages and then
> > > move them to tail when we exit.
> > 
> > 
> > I'm not sure how this would help. For example, we have a list of 2M free
> > page blocks:
> > A-->B-->C-->D-->E-->F-->G-->H
> > 
> > After reporting A and B, and put them to the end and exit, when the caller
> > comes back,
> > 1) if the list remains unchanged, then it will be
> > C-->D-->E-->F-->G-->H-->A-->B
> 
> Right. So here we can just scan until we see A, right?  It's a harder
> question what to do if A and only A has been consumed.  We don't want B
> to be sent twice ideally. OTOH maybe that isn't a big deal if it's only
> twice. Host might know page is already gone - how about host gives us a
> hint after using the buffer?
> 
> > 2) If worse, all the blocks have been split into smaller blocks and used
> > after the caller comes back.
> > 
> > where could we continue?
> 
> I'm not sure. But an alternative appears to be to hold a lock
> and just block whoever wanted to use any pages.  Yes we are sending
> hints faster but apparently something wanted these pages, and holding
> the lock is interfering with this something.

I've been thinking about it. How about the following scheme:
1. register balloon to get a (new) callback when free list runs empty
2. take pages off the free list, add them to the balloon specific list
3. report to host
4. readd to free list at tail
5. if callback triggers, interrupt balloon reporting to host,
   and readd to free list at tail


This needs some thought wrt what happens when there are
multiple users of this API, but looks like it will work.
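
Very roughly, and with every function, struct and field name below made up
purely for illustration (none of this exists in the kernel or in this
series), the scheme could look like:

static void balloon_report_free_pages(struct my_balloon *vb)
{
	LIST_HEAD(taken);
	struct page *page, *next;

	/* 1. get interrupted if an allocation finds a free list empty */
	register_free_list_empty_cb(balloon_stop_reporting, vb);

	/* 2. + 3. pull blocks off the free list and report each to the host */
	while (!READ_ONCE(vb->stop_reporting)) {
		page = take_free_block(MAX_ORDER - 1);
		if (!page)
			break;
		list_add(&page->lru, &taken);
		send_free_page_hint(vb, page_to_pfn(page),
				    1UL << (MAX_ORDER - 1));
	}

	/* 4. / 5. give every block back, at the tail of its free list */
	list_for_each_entry_safe(page, next, &taken, lru) {
		list_del(&page->lru);
		readd_free_block_tail(page, MAX_ORDER - 1);
	}

	unregister_free_list_empty_cb(vb);
}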

-- 
MST

^ permalink raw reply	[flat|nested] 65+ messages in thread

* RE: [PATCH v24 1/2] mm: support reporting free page blocks
  2018-01-26 21:43           ` Michael S. Tsirkin
  (?)
@ 2018-01-27 13:13             ` Wang, Wei W
  -1 siblings, 0 replies; 65+ messages in thread
From: Wang, Wei W @ 2018-01-27 13:13 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On Saturday, January 27, 2018 5:44 AM, Michael S. Tsirkin wrote:
> On Fri, Jan 26, 2018 at 05:00:09PM +0200, Michael S. Tsirkin wrote:
> > On Fri, Jan 26, 2018 at 11:29:15AM +0800, Wei Wang wrote:
> > > On 01/25/2018 09:41 PM, Michael S. Tsirkin wrote:
> > > > On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> > > 2) If worse, all the blocks have been split into smaller blocks and
> > > used after the caller comes back.
> > >
> > > where could we continue?
> >
> > I'm not sure. But an alternative appears to be to hold a lock and just
> > block whoever wanted to use any pages.  Yes we are sending hints
> > faster but apparently something wanted these pages, and holding the
> > lock is interfering with this something.
> 
> I've been thinking about it. How about the following scheme:
> 1. register balloon to get a (new) callback when free list runs empty
> 2. take pages off the free list, add them to the balloon specific list
> 3. report to host
> 4. readd to free list at tail
> 5. if callback triggers, interrupt balloon reporting to host,
>    and readd to free list at tail

So in step 2, when we walk through the free page list, take each block, and add it to the balloon-specific list, is this performed under the mm lock? If it is still under the lock, then what would be the difference compared to walking through the free list and adding each block to the virtqueue?

Best,
Wei

^ permalink raw reply	[flat|nested] 65+ messages in thread

* RE: [PATCH v24 1/2] mm: support reporting free page blocks
  2018-01-26 15:00         ` Michael S. Tsirkin
  (?)
@ 2018-01-27 14:00           ` Wang, Wei W
  -1 siblings, 0 replies; 65+ messages in thread
From: Wang, Wei W @ 2018-01-27 14:00 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On Friday, January 26, 2018 11:00 PM, Michael S. Tsirkin wrote:
> On Fri, Jan 26, 2018 at 11:29:15AM +0800, Wei Wang wrote:
> > On 01/25/2018 09:41 PM, Michael S. Tsirkin wrote:
> > > On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> > > > This patch adds support to walk through the free page blocks in
> > > > the system and report them via a callback function. Some page
> > > > blocks may leave the free list after zone->lock is released, so it
> > > > is the caller's responsibility to either detect or prevent the use of such
> pages.
> > > >
> > > > One use example of this patch is to accelerate live migration by
> > > > skipping the transfer of free pages reported from the guest. A
> > > > popular method used by the hypervisor to track which part of
> > > > memory is written during live migration is to write-protect all
> > > > the guest memory. So, those pages that are reported as free pages
> > > > but are written after the report function returns will be captured
> > > > by the hypervisor, and they will be added to the next round of memory
> transfer.
> > > >
> > > > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> > > > Signed-off-by: Liang Li <liang.z.li@intel.com>
> > > > Cc: Michal Hocko <mhocko@kernel.org>
> > > > Cc: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Michal Hocko <mhocko@kernel.org>
> > > > ---
> > > >   include/linux/mm.h |  6 ++++
> > > >   mm/page_alloc.c    | 91
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >   2 files changed, 97 insertions(+)
> > > >
> > > > diff --git a/include/linux/mm.h b/include/linux/mm.h index
> > > > ea818ff..b3077dd 100644
> > > > --- a/include/linux/mm.h
> > > > +++ b/include/linux/mm.h
> > > > @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid,
> unsigned long * zones_size,
> > > >   		unsigned long zone_start_pfn, unsigned long *zholes_size);
> > > >   extern void free_initmem(void);
> > > > +extern void walk_free_mem_block(void *opaque,
> > > > +				int min_order,
> > > > +				bool (*report_pfn_range)(void *opaque,
> > > > +							 unsigned long pfn,
> > > > +							 unsigned long num));
> > > > +
> > > >   /*
> > > >    * Free reserved pages within range [PAGE_ALIGN(start), end &
> PAGE_MASK)
> > > >    * into the buddy system. The freed pages will be poisoned with
> > > > pattern diff --git a/mm/page_alloc.c b/mm/page_alloc.c index
> > > > 76c9688..705de22 100644
> > > > --- a/mm/page_alloc.c
> > > > +++ b/mm/page_alloc.c
> > > > @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter,
> nodemask_t *nodemask)
> > > >   	show_swap_cache_info();
> > > >   }
> > > > +/*
> > > > + * Walk through a free page list and report the found pfn range
> > > > +via the
> > > > + * callback.
> > > > + *
> > > > + * Return false if the callback requests to stop reporting.
> > > > +Otherwise,
> > > > + * return true.
> > > > + */
> > > > +static bool walk_free_page_list(void *opaque,
> > > > +				struct zone *zone,
> > > > +				int order,
> > > > +				enum migratetype mt,
> > > > +				bool (*report_pfn_range)(void *,
> > > > +							 unsigned long,
> > > > +							 unsigned long))
> > > > +{
> > > > +	struct page *page;
> > > > +	struct list_head *list;
> > > > +	unsigned long pfn, flags;
> > > > +	bool ret;
> > > > +
> > > > +	spin_lock_irqsave(&zone->lock, flags);
> > > > +	list = &zone->free_area[order].free_list[mt];
> > > > +	list_for_each_entry(page, list, lru) {
> > > > +		pfn = page_to_pfn(page);
> > > > +		ret = report_pfn_range(opaque, pfn, 1 << order);
> > > > +		if (!ret)
> > > > +			break;
> > > > +	}
> > > > +	spin_unlock_irqrestore(&zone->lock, flags);
> > > > +
> > > > +	return ret;
> > > > +}
> > > There are two issues with this API. One is that it is not
> > > restartable: if you return false, you start from the beginning. So
> > > no way to drop lock, do something slow and then proceed.
> > >
> > > Another is that you are using it to report free page hints.
> > > Presumably the point is to drop these pages - keeping them near head
> > > of the list and reusing the reported ones will just make everything
> > > slower invalidating the hint.
> > >
> > > How about rotating these pages towards the end of the list?
> > > Probably not on each call, collect reported pages and then move them
> > > to tail when we exit.
> >
> >
> > I'm not sure how this would help. For example, we have a list of 2M
> > free page blocks:
> > A-->B-->C-->D-->E-->F-->G-->H
> >
> > After reporting A and B, and put them to the end and exit, when the
> > caller comes back,
> > 1) if the list remains unchanged, then it will be
> > C-->D-->E-->F-->G-->H-->A-->B
> 
> Right. So here we can just scan until we see A, right?  It's a harder question
> what to do if A and only A has been consumed.  We don't want B to be sent
> twice ideally. OTOH maybe that isn't a big deal if it's only twice. Host might
> know page is already gone - how about host gives us a hint after using the
> buffer?
> 
> > 2) If worse, all the blocks have been split into smaller blocks and
> > used after the caller comes back.
> >
> > where could we continue?
> 
> I'm not sure. But an alternative appears to be to hold a lock and just block
> whoever wanted to use any pages.  Yes we are sending hints faster but
> apparently something wanted these pages, and holding the lock is interfering
> with this something.
> 
> >
> > The reason to think about "restart" is the worry about the virtqueue
> > is full, right? But we've agreed that losing some hints to report
> > isn't important, and in practice, the virtqueue won't be full as the
> > host side is faster.
> 
> It would be more convincing if we sent e.g. higher order pages first. As it is - it
> won't take long to stuff ring full of 4K pages and it seems highly unlikely that
> host won't ever be scheduled out.

Yes, actually we already send higher-order pages first; please check this patch, we have:

for (order = MAX_ORDER - 1; order >= min_order; order--)
--> which starts from the high-order blocks.
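
(And if only the largest blocks were wanted, the caller could simply raise
min_order - an illustrative call, the callback name is made up:

	walk_free_mem_block(vb, MAX_ORDER - 1, virtio_balloon_report_range);

so nothing below order MAX_ORDER - 1 would be walked at all.)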

> 
> Can we maybe agree on what kind of benchmark makes sense for this work?
> I'm concerned that we are laser focused on just how long does it take to
> migrate ignoring e.g. slowdowns after migration.

Testing the time of a Linux compilation during migration? Or what benchmark do you have in mind?
We can compare how long it takes with legacy live migration versus live migration with this feature.

If you really worry about this, we could also provide a configuration option, e.g. under /sys/, for the guest to decide whether to enable or disable reporting free page hints to the host at any time. If disabled, the balloon driver skips calling walk_free_mem_block().
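
A sketch of one way such a knob could look (illustrative only, not part of
this series): a writable module parameter, which would show up as
/sys/module/virtio_balloon/parameters/report_free_page_hints. The work
function, struct and callback names below are made up, not the actual driver
code.

static bool report_free_page_hints = true;
module_param(report_free_page_hints, bool, 0644);
MODULE_PARM_DESC(report_free_page_hints,
		 "Report free page hints to the host (0 = disable)");

static void free_page_hint_work(struct work_struct *work)
{
	struct my_balloon *vb = container_of(work, struct my_balloon,
					     hint_work);

	if (!READ_ONCE(report_free_page_hints))
		return;		/* guest admin switched hinting off */

	walk_free_mem_block(vb, 0, my_report_pfn_range);
}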


> > I'm concerned that actions on the free list may cause more controversy
> > though it might be safe to do from some aspect, and would be hard to
> > end debating. If possible, we could go with the most prudent approach
> > for now, and have more discussions in future improvement patches. What
> > would you think?
> 
> Well I'm not 100% about restartability. But keeping pages freed by host near
> head of the list looks kind of wrong.
> Try to float a patch on top for the rotation and see what happens?

I didn't get it, "pages freed by host", what does that mean?

Best,
Wei

^ permalink raw reply	[flat|nested] 65+ messages in thread

* RE: [PATCH v24 1/2] mm: support reporting free page blocks
@ 2018-01-27 14:00           ` Wang, Wei W
  0 siblings, 0 replies; 65+ messages in thread
From: Wang, Wei W @ 2018-01-27 14:00 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: virtio-dev, linux-kernel, virtualization, kvm, linux-mm, mhocko,
	akpm, pbonzini, liliang.opensource, yang.zhang.wz, quan.xu0,
	nilal, riel

On Friday, January 26, 2018 11:00 PM, Michael S. Tsirkin wrote:
> On Fri, Jan 26, 2018 at 11:29:15AM +0800, Wei Wang wrote:
> > On 01/25/2018 09:41 PM, Michael S. Tsirkin wrote:
> > > On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> > > > This patch adds support to walk through the free page blocks in
> > > > the system and report them via a callback function. Some page
> > > > blocks may leave the free list after zone->lock is released, so it
> > > > is the caller's responsibility to either detect or prevent the use of such
> pages.
> > > >
> > > > One use example of this patch is to accelerate live migration by
> > > > skipping the transfer of free pages reported from the guest. A
> > > > popular method used by the hypervisor to track which part of
> > > > memory is written during live migration is to write-protect all
> > > > the guest memory. So, those pages that are reported as free pages
> > > > but are written after the report function returns will be captured
> > > > by the hypervisor, and they will be added to the next round of memory
> transfer.
> > > >
> > > > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> > > > Signed-off-by: Liang Li <liang.z.li@intel.com>
> > > > Cc: Michal Hocko <mhocko@kernel.org>
> > > > Cc: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Michal Hocko <mhocko@kernel.org>
> > > > ---
> > > >   include/linux/mm.h |  6 ++++
> > > >   mm/page_alloc.c    | 91
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >   2 files changed, 97 insertions(+)
> > > >
> > > > diff --git a/include/linux/mm.h b/include/linux/mm.h index
> > > > ea818ff..b3077dd 100644
> > > > --- a/include/linux/mm.h
> > > > +++ b/include/linux/mm.h
> > > > @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid,
> unsigned long * zones_size,
> > > >   		unsigned long zone_start_pfn, unsigned long *zholes_size);
> > > >   extern void free_initmem(void);
> > > > +extern void walk_free_mem_block(void *opaque,
> > > > +				int min_order,
> > > > +				bool (*report_pfn_range)(void *opaque,
> > > > +							 unsigned long pfn,
> > > > +							 unsigned long num));
> > > > +
> > > >   /*
> > > >    * Free reserved pages within range [PAGE_ALIGN(start), end &
> PAGE_MASK)
> > > >    * into the buddy system. The freed pages will be poisoned with
> > > > pattern diff --git a/mm/page_alloc.c b/mm/page_alloc.c index
> > > > 76c9688..705de22 100644
> > > > --- a/mm/page_alloc.c
> > > > +++ b/mm/page_alloc.c
> > > > @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter,
> nodemask_t *nodemask)
> > > >   	show_swap_cache_info();
> > > >   }
> > > > +/*
> > > > + * Walk through a free page list and report the found pfn range
> > > > +via the
> > > > + * callback.
> > > > + *
> > > > + * Return false if the callback requests to stop reporting.
> > > > +Otherwise,
> > > > + * return true.
> > > > + */
> > > > +static bool walk_free_page_list(void *opaque,
> > > > +				struct zone *zone,
> > > > +				int order,
> > > > +				enum migratetype mt,
> > > > +				bool (*report_pfn_range)(void *,
> > > > +							 unsigned long,
> > > > +							 unsigned long))
> > > > +{
> > > > +	struct page *page;
> > > > +	struct list_head *list;
> > > > +	unsigned long pfn, flags;
> > > > +	bool ret;
> > > > +
> > > > +	spin_lock_irqsave(&zone->lock, flags);
> > > > +	list = &zone->free_area[order].free_list[mt];
> > > > +	list_for_each_entry(page, list, lru) {
> > > > +		pfn = page_to_pfn(page);
> > > > +		ret = report_pfn_range(opaque, pfn, 1 << order);
> > > > +		if (!ret)
> > > > +			break;
> > > > +	}
> > > > +	spin_unlock_irqrestore(&zone->lock, flags);
> > > > +
> > > > +	return ret;
> > > > +}
> > > There are two issues with this API. One is that it is not
> > > restarteable: if you return false, you start from the beginning. So
> > > no way to drop lock, do something slow and then proceed.
> > >
> > > Another is that you are using it to report free page hints.
> > > Presumably the point is to drop these pages - keeping them near head
> > > of the list and reusing the reported ones will just make everything
> > > slower invalidating the hint.
> > >
> > > How about rotating these pages towards the end of the list?
> > > Probably not on each call, callect reported pages and then move them
> > > to tail when we exit.
> >
> >
> > I'm not sure how this would help. For example, we have a list of 2M
> > free page blocks:
> > A-->B-->C-->D-->E-->F-->G--H
> >
> > After reporting A and B, and put them to the end and exit, when the
> > caller comes back,
> > 1) if the list remains unchanged, then it will be
> > C-->D-->E-->F-->G-->H-->A-->B
> 
> Right. So here we can just scan until we see A, right?  It's a harder question
> what to do if A and only A has been consumed.  We don't want B to be sent
> twice ideally. OTOH maybe that isn't a big deal if it's only twice. Host might
> know page is already gone - how about host gives us a hint after using the
> buffer?
> 
> > 2) If worse, all the blocks have been split into smaller blocks and
> > used after the caller comes back.
> >
> > where could we continue?
> 
> I'm not sure. But an alternative appears to be to hold a lock and just block
> whoever wanted to use any pages.  Yes we are sending hints faster but
> apparently something wanted these pages, and holding the lock is interfering
> with this something.
> 
> >
> > The reason to think about "restart" is the worry about the virtqueue
> > is full, right? But we've agreed that losing some hints to report
> > isn't important, and in practice, the virtqueue won't be full as the
> > host side is faster.
> 
> It would be more convincing if we sent e.g. higher order pages first. As it is - it
> won't take long to stuff ring full of 4K pages and it seems highly unlikely that
> host won't ever be scheduled out.

Yes, actually we've already sent higher order pages first, please check this patch, we have:

for (order = MAX_ORDER - 1; order >= min_order; order--)
--> start from high order blocks. 

> 
> Can we maybe agree on what kind of benchmark makes sense for this work?
> I'm concerned that we are laser focused on just how long does it take to
> migrate ignoring e.g. slowdowns after migration.

Testing the time of linux compilation during migration? or what benchmark do you have in mind? 
We can compare how long it takes during legacy live migration and live migration with this feature?

If you really worry about this, we could also provide an configure option, e.g. under /sys/, for the guest to decide whether to enable or disable reporting free page hints to the host at any time. If disabled, in the balloon driver we skip the calling of walk_free_mem_block().


> > I'm concerned that actions on the free list may cause more controversy
> > though it might be safe to do from some aspect, and would be hard to
> > end debating. If possible, we could go with the most prudent approach
> > for now, and have more discussions in future improvement patches. What
> > would you think?
> 
> Well I'm not 100% about restartability. But keeping pages freed by host near
> head of the list looks kind of wrong.
> Try to float a patch on top for the rotation and see what happens?

I didn't get it, "pages freed by host", what does that mean?

Best,
Wei





--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 65+ messages in thread

* RE: [PATCH v24 1/2] mm: support reporting free page blocks
  2018-01-26 15:00         ` Michael S. Tsirkin
                           ` (4 preceding siblings ...)
  (?)
@ 2018-01-27 14:00         ` Wang, Wei W
  -1 siblings, 0 replies; 65+ messages in thread
From: Wang, Wei W @ 2018-01-27 14:00 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: yang.zhang.wz, virtio-dev, riel, quan.xu0, kvm, nilal,
	liliang.opensource, linux-kernel, mhocko, linux-mm, pbonzini,
	akpm, virtualization

On Friday, January 26, 2018 11:00 PM, Michael S. Tsirkin wrote:
> On Fri, Jan 26, 2018 at 11:29:15AM +0800, Wei Wang wrote:
> > On 01/25/2018 09:41 PM, Michael S. Tsirkin wrote:
> > > On Wed, Jan 24, 2018 at 06:42:41PM +0800, Wei Wang wrote:
> > > > This patch adds support to walk through the free page blocks in
> > > > the system and report them via a callback function. Some page
> > > > blocks may leave the free list after zone->lock is released, so it
> > > > is the caller's responsibility to either detect or prevent the use of such
> pages.
> > > >
> > > > One use example of this patch is to accelerate live migration by
> > > > skipping the transfer of free pages reported from the guest. A
> > > > popular method used by the hypervisor to track which part of
> > > > memory is written during live migration is to write-protect all
> > > > the guest memory. So, those pages that are reported as free pages
> > > > but are written after the report function returns will be captured
> > > > by the hypervisor, and they will be added to the next round of memory
> transfer.
> > > >
> > > > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> > > > Signed-off-by: Liang Li <liang.z.li@intel.com>
> > > > Cc: Michal Hocko <mhocko@kernel.org>
> > > > Cc: Michael S. Tsirkin <mst@redhat.com>
> > > > Acked-by: Michal Hocko <mhocko@kernel.org>
> > > > ---
> > > >   include/linux/mm.h |  6 ++++
> > > >   mm/page_alloc.c    | 91
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >   2 files changed, 97 insertions(+)
> > > >
> > > > diff --git a/include/linux/mm.h b/include/linux/mm.h index
> > > > ea818ff..b3077dd 100644
> > > > --- a/include/linux/mm.h
> > > > +++ b/include/linux/mm.h
> > > > @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid,
> unsigned long * zones_size,
> > > >   		unsigned long zone_start_pfn, unsigned long *zholes_size);
> > > >   extern void free_initmem(void);
> > > > +extern void walk_free_mem_block(void *opaque,
> > > > +				int min_order,
> > > > +				bool (*report_pfn_range)(void *opaque,
> > > > +							 unsigned long pfn,
> > > > +							 unsigned long num));
> > > > +
> > > >   /*
> > > >    * Free reserved pages within range [PAGE_ALIGN(start), end &
> PAGE_MASK)
> > > >    * into the buddy system. The freed pages will be poisoned with
> > > > pattern diff --git a/mm/page_alloc.c b/mm/page_alloc.c index
> > > > 76c9688..705de22 100644
> > > > --- a/mm/page_alloc.c
> > > > +++ b/mm/page_alloc.c
> > > > @@ -4899,6 +4899,97 @@ void show_free_areas(unsigned int filter,
> nodemask_t *nodemask)
> > > >   	show_swap_cache_info();
> > > >   }
> > > > +/*
> > > > + * Walk through a free page list and report the found pfn range
> > > > +via the
> > > > + * callback.
> > > > + *
> > > > + * Return false if the callback requests to stop reporting.
> > > > +Otherwise,
> > > > + * return true.
> > > > + */
> > > > +static bool walk_free_page_list(void *opaque,
> > > > +				struct zone *zone,
> > > > +				int order,
> > > > +				enum migratetype mt,
> > > > +				bool (*report_pfn_range)(void *,
> > > > +							 unsigned long,
> > > > +							 unsigned long))
> > > > +{
> > > > +	struct page *page;
> > > > +	struct list_head *list;
> > > > +	unsigned long pfn, flags;
> > > > +	bool ret;
> > > > +
> > > > +	spin_lock_irqsave(&zone->lock, flags);
> > > > +	list = &zone->free_area[order].free_list[mt];
> > > > +	list_for_each_entry(page, list, lru) {
> > > > +		pfn = page_to_pfn(page);
> > > > +		ret = report_pfn_range(opaque, pfn, 1 << order);
> > > > +		if (!ret)
> > > > +			break;
> > > > +	}
> > > > +	spin_unlock_irqrestore(&zone->lock, flags);
> > > > +
> > > > +	return ret;
> > > > +}
> > > There are two issues with this API. One is that it is not
> > > restarteable: if you return false, you start from the beginning. So
> > > no way to drop lock, do something slow and then proceed.
> > >
> > > Another is that you are using it to report free page hints.
> > > Presumably the point is to drop these pages - keeping them near head
> > > of the list and reusing the reported ones will just make everything
> > > slower invalidating the hint.
> > >
> > > How about rotating these pages towards the end of the list?
> > > Probably not on each call, callect reported pages and then move them
> > > to tail when we exit.
> >
> >
> > I'm not sure how this would help. For example, we have a list of 2M
> > free page blocks:
> > A-->B-->C-->D-->E-->F-->G--H
> >
> > After reporting A and B, and put them to the end and exit, when the
> > caller comes back,
> > 1) if the list remains unchanged, then it will be
> > C-->D-->E-->F-->G-->H-->A-->B
> 
> Right. So here we can just scan until we see A, right?  It's a harder question
> what to do if A and only A has been consumed.  We don't want B to be sent
> twice ideally. OTOH maybe that isn't a big deal if it's only twice. Host might
> know page is already gone - how about host gives us a hint after using the
> buffer?
> 
> > 2) If worse, all the blocks have been split into smaller blocks and
> > used after the caller comes back.
> >
> > where could we continue?
> 
> I'm not sure. But an alternative appears to be to hold a lock and just block
> whoever wanted to use any pages.  Yes we are sending hints faster but
> apparently something wanted these pages, and holding the lock is interfering
> with this something.
> 
> >
> > The reason to think about "restart" is the worry about the virtqueue
> > is full, right? But we've agreed that losing some hints to report
> > isn't important, and in practice, the virtqueue won't be full as the
> > host side is faster.
> 
> It would be more convincing if we sent e.g. higher order pages first. As it is, it
> won't take long to stuff the ring full of 4K pages, and it seems highly unlikely
> that the host won't ever be scheduled out.

Yes, we already send higher-order pages first. Please check this patch; it has:

for (order = MAX_ORDER - 1; order >= min_order; order--)

which starts the walk from the highest-order blocks.
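
For reference, a rough sketch of how that order-descending walk wraps walk_free_page_list() follows; it is paraphrased from my reading of the patch (the exact zone/migratetype iteration is abbreviated here), not a verbatim copy:

void walk_free_mem_block(void *opaque,
			 int min_order,
			 bool (*report_pfn_range)(void *opaque,
						  unsigned long pfn,
						  unsigned long num))
{
	struct zone *zone;
	int order, mt;

	for_each_populated_zone(zone) {
		/* Report the largest blocks first, so the most valuable
		 * hints reach the host before the virtqueue can fill up. */
		for (order = MAX_ORDER - 1; order >= min_order; order--) {
			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
				if (!walk_free_page_list(opaque, zone, order,
							 mt, report_pfn_range))
					return;
			}
		}
	}
}

So even if the ring does fill up, what gets dropped are the remaining low-order hints, which carry the least value anyway.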

> 
> Can we maybe agree on what kind of benchmark makes sense for this work?
> I'm concerned that we are laser-focused on just how long it takes to
> migrate, ignoring e.g. slowdowns after migration.

How about measuring the time of a Linux kernel compilation during migration, or what benchmark do you have in mind?
We could compare how long it takes with legacy live migration versus live migration with this feature.

If you really worry about this, we could also provide a configuration option, e.g. under /sys/, for the guest to enable or disable reporting free page hints to the host at any time. If it is disabled, the balloon driver simply skips the call to walk_free_mem_block().
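
A minimal sketch of what such a knob could look like, assuming a module parameter (it would show up under /sys/module/virtio_balloon/parameters/); the parameter name and the simplified worker body are illustrative only, not part of this patch:

static bool free_page_hinting = true;
module_param(free_page_hinting, bool, 0644);
MODULE_PARM_DESC(free_page_hinting,
		 "Report free page hints to the host during live migration");

static void report_free_page_func(struct work_struct *work)
{
	struct virtio_balloon *vb = container_of(work, struct virtio_balloon,
						 report_free_page_work);

	/* When hinting is disabled by the guest admin, skip the walk and
	 * send nothing; the cmd id handshake is omitted in this sketch. */
	if (!READ_ONCE(free_page_hinting))
		return;

	walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
}

Whether this is better done as a module parameter, a sysfs attribute of the balloon device, or a host-controlled bit in the config space is of course open for discussion.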


> > I'm concerned that acting on the free list may cause more controversy,
> > even though it might be safe from some angles, and the debate would be
> > hard to settle. If possible, we could go with the most prudent approach
> > for now, and have more discussions in future improvement patches. What
> > do you think?
> 
> Well, I'm not 100% sure about restartability. But keeping pages freed by the
> host near the head of the list looks kind of wrong.
> Try to float a patch on top for the rotation and see what happens?

I didn't get it: what does "pages freed by host" mean?
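
For the rotation itself, if what you have in mind is roughly the sketch below, we could try floating it as a patch on top. The small on-stack batch and its size are my assumptions, not code from this series:

static bool walk_free_page_list(void *opaque,
				struct zone *zone,
				int order,
				enum migratetype mt,
				bool (*report_pfn_range)(void *,
							 unsigned long,
							 unsigned long))
{
	struct list_head *list = &zone->free_area[order].free_list[mt];
	struct page *page, *reported[16];
	unsigned long pfn, flags;
	unsigned int nr = 0, i;
	bool ret = true;

	spin_lock_irqsave(&zone->lock, flags);
	list_for_each_entry(page, list, lru) {
		pfn = page_to_pfn(page);
		ret = report_pfn_range(opaque, pfn, 1 << order);
		if (!ret)
			break;
		/* Remember the blocks whose hints were just sent. */
		if (nr < ARRAY_SIZE(reported))
			reported[nr++] = page;
	}
	/* On exit, move the reported blocks to the tail, so the allocator
	 * prefers blocks whose hints have not been sent yet. */
	for (i = 0; i < nr; i++)
		list_move_tail(&reported[i]->lru, list);
	spin_unlock_irqrestore(&zone->lock, flags);

	return ret;
}

My worry is that reordering the free list from here is exactly the kind of change that may cause the controversy mentioned above, so I'd prefer to discuss it as a separate patch.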

Best,
Wei

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
  2018-01-25 11:28         ` Tetsuo Handa
  (?)
@ 2018-02-01 19:14           ` Michael S. Tsirkin
  -1 siblings, 0 replies; 65+ messages in thread
From: Michael S. Tsirkin @ 2018-02-01 19:14 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: Wei Wang, virtio-dev, linux-kernel, virtualization, kvm,
	linux-mm, mhocko, akpm, pbonzini, liliang.opensource,
	yang.zhang.wz, quan.xu0, nilal, riel

On Thu, Jan 25, 2018 at 08:28:52PM +0900, Tetsuo Handa wrote:
> On 2018/01/25 12:32, Wei Wang wrote:
> > On 01/25/2018 01:15 AM, Michael S. Tsirkin wrote:
> >> On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
> >> +
> >> +static void report_free_page_func(struct work_struct *work)
> >> +{
> >> +    struct virtio_balloon *vb;
> >> +    unsigned long flags;
> >> +
> >> +    vb = container_of(work, struct virtio_balloon, report_free_page_work);
> >> +
> >> +    /* Start by sending the obtained cmd id to the host with an outbuf */
> >> +    send_cmd_id(vb, &vb->start_cmd_id);
> >> +
> >> +    /*
> >> +     * Set start_cmd_id to VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID to
> >> +     * indicate a new request can be queued.
> >> +     */
> >> +    spin_lock_irqsave(&vb->stop_update_lock, flags);
> >> +    vb->start_cmd_id = cpu_to_virtio32(vb->vdev,
> >> +                VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
> >> +    spin_unlock_irqrestore(&vb->stop_update_lock, flags);
> >> +
> >> +    walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
> >> Can you teach walk_free_mem_block to return the && of all the
> >> callback's return values, so the caller knows whether it completed?
> > 
> > There will be two cases that can cause walk_free_mem_block to return without completing:
> > 1) host requests to stop in advance
> > 2) vq->broken
> > 
> > How about letting walk_free_mem_block simply return the value returned by its callback (i.e. virtio_balloon_send_free_pages)?
> > 
> > For a host request to stop, it returns "1", and the above only bails out when walk_free_mem_block returns a "< 0" value.
> 
> I feel that virtio_balloon_send_free_pages is doing too much heavy work.
> 
> It can be called many times with IRQs disabled. The number of times
> it is called depends on the amount of free pages (and the fragmentation state).
> Generally, more free pages means more calls.
> 
> Then, why don't you allocate some pages for holding all the pfn values,
> call walk_free_mem_block() only to store the pfn values, and then
> send the pfn values without disabling IRQs?
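
If I read that suggestion right, it is roughly the following two-phase scheme. This is a sketch only; the buffer size and helper names are assumptions, not code from this patch:

struct free_page_hint_buf {
	unsigned int nr;
	struct {
		unsigned long pfn;
		unsigned long len;
	} entry[256];
};

/* Phase 1: the walk callback only records pfn ranges. It runs with
 * zone->lock held and IRQs disabled, so it has to stay cheap. */
static bool record_free_pages(void *opaque, unsigned long pfn,
			      unsigned long num)
{
	struct free_page_hint_buf *buf = opaque;

	if (buf->nr >= ARRAY_SIZE(buf->entry))
		return false;	/* buffer full, stop the walk */
	buf->entry[buf->nr].pfn = pfn;
	buf->entry[buf->nr].len = num;
	buf->nr++;
	return true;
}

/* Phase 2: runs with IRQs enabled and pushes the recorded ranges
 * into the virtqueue. */
static void send_recorded_hints(struct virtio_balloon *vb,
				struct free_page_hint_buf *buf)
{
	unsigned int i;

	for (i = 0; i < buf->nr; i++)
		virtio_balloon_send_free_pages(vb, buf->entry[i].pfn,
					       buf->entry[i].len);
}

The obvious tradeoff is that the recorded hints can go stale between the two phases, but that is already acceptable in this design: any page that gets written afterwards is caught by the hypervisor's dirty tracking.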

So going over it, at some level we are talking about optimizations here.

I'll go over it one last time but at some level we might
be able to make progress faster if we start by enabling
the functionality in question.
If there are no other issues, I'm inclined to merge.

If nothing else, this
will make it possible for people to experiment and
report sources of overhead.

-- 
MST

^ permalink raw reply	[flat|nested] 65+ messages in thread

end of thread, other threads:[~2018-02-01 19:14 UTC | newest]

Thread overview: 65+ messages
2018-01-24 10:42 [PATCH v24 0/2] Virtio-balloon: support free page reporting Wei Wang
2018-01-24 10:42 ` [virtio-dev] " Wei Wang
2018-01-24 10:42 ` Wei Wang
2018-01-24 10:42 ` [PATCH v24 1/2] mm: support reporting free page blocks Wei Wang
2018-01-24 10:42 ` Wei Wang
2018-01-24 10:42   ` [virtio-dev] " Wei Wang
2018-01-24 10:42   ` Wei Wang
2018-01-25 13:41   ` Michael S. Tsirkin
2018-01-25 13:41     ` [virtio-dev] " Michael S. Tsirkin
2018-01-25 13:41     ` Michael S. Tsirkin
2018-01-25 14:56     ` Pankaj Gupta
2018-01-25 14:56       ` Pankaj Gupta
2018-01-25 17:31       ` Michael S. Tsirkin
2018-01-25 17:31       ` Michael S. Tsirkin
2018-01-25 17:31         ` [virtio-dev] " Michael S. Tsirkin
2018-01-25 17:31         ` Michael S. Tsirkin
2018-01-25 14:56     ` Pankaj Gupta
2018-01-26  3:29     ` Wei Wang
2018-01-26  3:29       ` [virtio-dev] " Wei Wang
2018-01-26  3:29       ` Wei Wang
2018-01-26 15:00       ` Michael S. Tsirkin
2018-01-26 15:00       ` Michael S. Tsirkin
2018-01-26 15:00         ` [virtio-dev] " Michael S. Tsirkin
2018-01-26 15:00         ` Michael S. Tsirkin
2018-01-26 21:43         ` Michael S. Tsirkin
2018-01-26 21:43         ` Michael S. Tsirkin
2018-01-26 21:43           ` [virtio-dev] " Michael S. Tsirkin
2018-01-26 21:43           ` Michael S. Tsirkin
2018-01-27 13:13           ` Wang, Wei W
2018-01-27 13:13             ` [virtio-dev] " Wang, Wei W
2018-01-27 13:13             ` Wang, Wei W
2018-01-27 13:13           ` Wang, Wei W
2018-01-27 14:00         ` Wang, Wei W
2018-01-27 14:00           ` [virtio-dev] " Wang, Wei W
2018-01-27 14:00           ` Wang, Wei W
2018-01-27 14:00         ` Wang, Wei W
2018-01-26  3:29     ` Wei Wang
2018-01-25 13:41   ` Michael S. Tsirkin
2018-01-24 10:42 ` [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT Wei Wang
2018-01-24 10:42   ` [virtio-dev] " Wei Wang
2018-01-24 10:42   ` Wei Wang
2018-01-24 17:15   ` Michael S. Tsirkin
2018-01-24 17:15     ` [virtio-dev] " Michael S. Tsirkin
2018-01-24 17:15     ` Michael S. Tsirkin
2018-01-25  3:32     ` Wei Wang
2018-01-25  3:32     ` Wei Wang
2018-01-25  3:32       ` [virtio-dev] " Wei Wang
2018-01-25  3:32       ` Wei Wang
2018-01-25 11:28       ` Tetsuo Handa
2018-01-25 11:28         ` Tetsuo Handa
2018-01-25 12:55         ` Wei Wang
2018-01-25 12:55           ` [virtio-dev] " Wei Wang
2018-01-25 12:55           ` Wei Wang
2018-01-25 12:55           ` Wei Wang
2018-02-01 19:14         ` Michael S. Tsirkin
2018-02-01 19:14           ` [virtio-dev] " Michael S. Tsirkin
2018-02-01 19:14           ` Michael S. Tsirkin
2018-02-01 19:14         ` Michael S. Tsirkin
2018-01-25 11:28       ` Tetsuo Handa
2018-01-25  9:45     ` Wei Wang
2018-01-25  9:45     ` Wei Wang
2018-01-25  9:45       ` [virtio-dev] " Wei Wang
2018-01-25  9:45       ` Wei Wang
2018-01-24 17:15   ` Michael S. Tsirkin
2018-01-24 10:42 ` Wei Wang
