* [PATCH 1/7] mm/balloon_compaction: ignore anonymous pages
@ 2014-08-20 15:04 ` Konstantin Khlebnikov
  0 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2014-08-20 15:04 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, Rafael Aquini
  Cc: Sasha Levin, Andrey Ryabinin, linux-kernel

Sasha Levin reported a KASAN splat inside isolate_migratepages_range().
The problem is in __is_movable_balloon_page(), which tests AS_BALLOON_MAP in
page->mapping->flags. This function has no protection against anonymous pages,
so for them it ends up reading "address space" flags from inside an anon_vma.
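
For reference only (not part of the patch): an anonymous page keeps a tagged
anon_vma pointer in page->mapping rather than a struct address_space pointer,
which is why dereferencing mapping->flags blows up. A minimal sketch of that
encoding, assuming the v3.16-era definitions in include/linux/mm.h (the helper
name below is only illustrative; it mirrors what PageAnon() tests):

	/* The low bit of page->mapping marks an anon_vma pointer. */
	#define PAGE_MAPPING_ANON	1

	static inline bool page_mapping_is_anon_vma(struct page *page)
	{
		/* Low bit set means page->mapping is not an address_space. */
		return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
	}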

Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Link: http://lkml.kernel.org/p/53E6CEAA.9020105@oracle.com
Cc: stable <stable@vger.kernel.org> # v3.8
---
 include/linux/balloon_compaction.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index 089743a..53d482e 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -128,7 +128,7 @@ static inline bool page_flags_cleared(struct page *page)
 static inline bool __is_movable_balloon_page(struct page *page)
 {
 	struct address_space *mapping = page->mapping;
-	return mapping_balloon(mapping);
+	return !PageAnon(page) && mapping_balloon(mapping);
 }
 
 /*


* [PATCH 2/7] mm/balloon_compaction: keep ballooned pages away from normal migration path
  2014-08-20 15:04 ` Konstantin Khlebnikov
@ 2014-08-20 15:04   ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2014-08-20 15:04 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, Rafael Aquini
  Cc: Sasha Levin, Andrey Ryabinin, linux-kernel

Proper testing shows yet another problem in balloon migration: it works only
once for each page. balloon_page_movable() checks page flags and page_count.
In __unmap_and_move() the page is locked and its reference count is elevated,
so balloon_page_movable() _always_ fails there, and migration falls through to
the normal migration path.

The balloon ->migratepage() is special: it returns MIGRATEPAGE_BALLOON_SUCCESS
instead of MIGRATEPAGE_SUCCESS. Because of that, move_to_new_page() sets the
mapping pointer of the successfully migrated page to NULL, so the page loses
its link to the balloon and can never be migrated again.

It is safe to use __is_movable_balloon_page() here: the page is isolated and
pinned.
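
The mis-routed flow described above, condensed into a sketch (simplified from
the v3.16-era mm/migrate.c and only meant to illustrate the commit message;
the surrounding cleanup code is omitted):

	/* Normal migration path, taken by mistake for the balloon page: */
	rc = mapping->a_ops->migratepage(mapping, newpage, page, mode);
		/* returns MIGRATEPAGE_BALLOON_SUCCESS, not MIGRATEPAGE_SUCCESS */
	...
	/* move_to_new_page() treats that rc as success and severs the link: */
	page->mapping = NULL;	/* page is no longer recognizable as ballooned */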

Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Cc: stable <stable@vger.kernel.org> # v3.8
---
 mm/migrate.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index f78ec9b..161d044 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -873,7 +873,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		}
 	}
 
-	if (unlikely(balloon_page_movable(page))) {
+	if (unlikely(__is_movable_balloon_page(page))) {
 		/*
 		 * A ballooned page does not need any special attention from
 		 * physical to virtual reverse mapping procedures.


* [PATCH 3/7] mm/balloon_compaction: isolate balloon pages without lru_lock
  2014-08-20 15:04 ` Konstantin Khlebnikov
@ 2014-08-20 15:04   ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2014-08-20 15:04 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, Rafael Aquini
  Cc: Sasha Levin, Andrey Ryabinin, linux-kernel

The LRU lock is not required for balloon page isolation. The check for it
makes migration of some ballooned pages mostly impossible, because
isolate_migratepages_range() drops the LRU lock periodically.

Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Cc: stable <stable@vger.kernel.org> # v3.8
---
 mm/compaction.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 21bf292..0653f5f 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -597,7 +597,7 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
 		 */
 		if (!PageLRU(page)) {
 			if (unlikely(balloon_page_movable(page))) {
-				if (locked && balloon_page_isolate(page)) {
+				if (balloon_page_isolate(page)) {
 					/* Successfully isolated */
 					goto isolate_success;
 				}


* [PATCH 4/7] selftests/vm/transhuge-stress: stress test for memory compaction
  2014-08-20 15:04 ` Konstantin Khlebnikov
@ 2014-08-20 15:04   ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2014-08-20 15:04 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, Rafael Aquini
  Cc: Sasha Levin, Andrey Ryabinin, linux-kernel

From: Konstantin Khlebnikov <koct9i@gmail.com>

This tool induces memory fragmentation by sequentially allocating transparent
huge pages and splitting off everything except their last sub-pages.

Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
---
 tools/testing/selftests/vm/Makefile           |    1 
 tools/testing/selftests/vm/transhuge-stress.c |  144 +++++++++++++++++++++++++
 2 files changed, 145 insertions(+)
 create mode 100644 tools/testing/selftests/vm/transhuge-stress.c

diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
index 3f94e1a..4c4b1f6 100644
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -3,6 +3,7 @@
 CC = $(CROSS_COMPILE)gcc
 CFLAGS = -Wall
 BINARIES = hugepage-mmap hugepage-shm map_hugetlb thuge-gen hugetlbfstest
+BINARIES += transhuge-stress
 
 all: $(BINARIES)
 %: %.c
diff --git a/tools/testing/selftests/vm/transhuge-stress.c b/tools/testing/selftests/vm/transhuge-stress.c
new file mode 100644
index 0000000..fd7f1b4
--- /dev/null
+++ b/tools/testing/selftests/vm/transhuge-stress.c
@@ -0,0 +1,144 @@
+/*
+ * Stress test for transparent huge pages, memory compaction and migration.
+ *
+ * Authors: Konstantin Khlebnikov <koct9i@gmail.com>
+ *
+ * This is free and unencumbered software released into the public domain.
+ */
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdint.h>
+#include <err.h>
+#include <time.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <string.h>
+#include <sys/mman.h>
+
+#define PAGE_SHIFT 12
+#define HPAGE_SHIFT 21
+
+#define PAGE_SIZE (1 << PAGE_SHIFT)
+#define HPAGE_SIZE (1 << HPAGE_SHIFT)
+
+#define PAGEMAP_PRESENT(ent)	(((ent) & (1ull << 63)) != 0)
+#define PAGEMAP_PFN(ent)	((ent) & ((1ull << 55) - 1))
+
+int pagemap_fd;
+
+int64_t allocate_transhuge(void *ptr)
+{
+	uint64_t ent[2];
+
+	/* drop pmd */
+	if (mmap(ptr, HPAGE_SIZE, PROT_READ | PROT_WRITE,
+				MAP_FIXED | MAP_ANONYMOUS |
+				MAP_NORESERVE | MAP_PRIVATE, -1, 0) != ptr)
+		errx(2, "mmap transhuge");
+
+	if (madvise(ptr, HPAGE_SIZE, MADV_HUGEPAGE))
+		err(2, "MADV_HUGEPAGE");
+
+	/* allocate transparent huge page */
+	*(volatile void **)ptr = ptr;
+
+	if (pread(pagemap_fd, ent, sizeof(ent),
+			(uintptr_t)ptr >> (PAGE_SHIFT - 3)) != sizeof(ent))
+		err(2, "read pagemap");
+
+	if (PAGEMAP_PRESENT(ent[0]) && PAGEMAP_PRESENT(ent[1]) &&
+	    PAGEMAP_PFN(ent[0]) + 1 == PAGEMAP_PFN(ent[1]) &&
+	    !(PAGEMAP_PFN(ent[0]) & ((1 << (HPAGE_SHIFT - PAGE_SHIFT)) - 1)))
+		return PAGEMAP_PFN(ent[0]);
+
+	return -1;
+}
+
+int main(int argc, char **argv)
+{
+	size_t ram, len;
+	void *ptr, *p;
+	struct timespec a, b;
+	double s;
+	uint8_t *map;
+	size_t map_len;
+
+	ram = sysconf(_SC_PHYS_PAGES);
+	if (ram > SIZE_MAX / sysconf(_SC_PAGESIZE) / 4)
+		ram = SIZE_MAX / 4;
+	else
+		ram *= sysconf(_SC_PAGESIZE);
+
+	if (argc == 1)
+		len = ram;
+	else if (!strcmp(argv[1], "-h"))
+		errx(1, "usage: %s [size in MiB]", argv[0]);
+	else
+		len = atoll(argv[1]) << 20;
+
+	warnx("allocate %zd transhuge pages, using %zd MiB virtual memory"
+	      " and %zd MiB of ram", len >> HPAGE_SHIFT, len >> 20,
+	      len >> (20 + HPAGE_SHIFT - PAGE_SHIFT - 1));
+
+	pagemap_fd = open("/proc/self/pagemap", O_RDONLY);
+	if (pagemap_fd < 0)
+		err(2, "open pagemap");
+
+	len -= len % HPAGE_SIZE;
+	ptr = mmap(NULL, len + HPAGE_SIZE, PROT_READ | PROT_WRITE,
+			MAP_ANONYMOUS | MAP_NORESERVE | MAP_PRIVATE, -1, 0);
+	if (ptr == MAP_FAILED)
+		err(2, "initial mmap");
+	ptr += HPAGE_SIZE - (uintptr_t)ptr % HPAGE_SIZE;
+
+	if (madvise(ptr, len, MADV_HUGEPAGE))
+		err(2, "MADV_HUGEPAGE");
+
+	map_len = ram >> (HPAGE_SHIFT - 1);
+	map = malloc(map_len);
+	if (!map)
+		errx(2, "map malloc");
+
+	while (1) {
+		int nr_succeed = 0, nr_failed = 0, nr_pages = 0;
+
+		memset(map, 0, map_len);
+
+		clock_gettime(CLOCK_MONOTONIC, &a);
+		for (p = ptr; p < ptr + len; p += HPAGE_SIZE) {
+			int64_t pfn;
+
+			pfn = allocate_transhuge(p);
+
+			if (pfn < 0) {
+				nr_failed++;
+			} else {
+				size_t idx = pfn >> (HPAGE_SHIFT - PAGE_SHIFT);
+
+				nr_succeed++;
+				if (idx >= map_len) {
+					map = realloc(map, idx + 1);
+					if (!map)
+						errx(2, "map realloc");
+					memset(map + map_len, 0, idx + 1 - map_len);
+					map_len = idx + 1;
+				}
+				if (!map[idx])
+					nr_pages++;
+				map[idx] = 1;
+			}
+
+			/* split transhuge page, keep last page */
+			if (madvise(p, HPAGE_SIZE - PAGE_SIZE, MADV_DONTNEED))
+				err(2, "MADV_DONTNEED");
+		}
+		clock_gettime(CLOCK_MONOTONIC, &b);
+		s = b.tv_sec - a.tv_sec + (b.tv_nsec - a.tv_nsec) / 1000000000.;
+
+		warnx("%.3f s/loop, %.3f ms/page, %10.3f MiB/s\t"
+		      "%4d succeed, %4d failed, %4d different pages",
+		      s, s * 1000 / (len >> HPAGE_SHIFT), len / s / (1 << 20),
+		      nr_succeed, nr_failed, nr_pages);
+	}
+}
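
For reference, the pagemap arithmetic used by allocate_transhuge() above works
out as follows (this only restates what the test already does:
/proc/self/pagemap holds one 64-bit entry per virtual page, bit 63 is the
"present" bit and the low bits hold the PFN, which is where the two PAGEMAP_*
macros come from):

	offset = (vaddr / PAGE_SIZE) * sizeof(uint64_t)
	       = (vaddr >> PAGE_SHIFT) * 8
	       = vaddr >> (PAGE_SHIFT - 3)	/* the pread() offset */

An allocation is counted as a transparent huge page when both entries are
present, their PFNs are consecutive, and the first PFN is aligned to
1 << (HPAGE_SHIFT - PAGE_SHIFT) = 512 small pages, i.e. 2 MiB.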


* [PATCH 5/7] mm: introduce common page state for ballooned memory
  2014-08-20 15:04 ` Konstantin Khlebnikov
@ 2014-08-20 15:04   ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2014-08-20 15:04 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, Rafael Aquini
  Cc: Sasha Levin, Andrey Ryabinin, linux-kernel

This patch adds the page state PageBalloon() and the helpers __SetPageBalloon()
and __ClearPageBalloon(). Like PageBuddy(), PageBalloon() looks like a page
flag, but it is actually a special state of the page->_mapcount counter. There
is no conflict because ballooned pages cannot be mapped and cannot be in the
buddy allocator.

Ballooned pages are counted in the vmstat counter NR_BALLOON_PAGES and shown
in /proc/meminfo and in the per-node meminfo. This patch also exports
PageBalloon() to userspace via /proc/kpageflags as KPF_BALLOON.

All of this code, including mm/balloon_compaction.o, is under
CONFIG_MEMORY_BALLOON, which should be selected by any ballooning driver that
wants to use this feature.
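
For reference, the _mapcount encoding that makes this work, next to the
existing buddy marker (the balloon value is the one added by this patch; the
buddy value is the one already present in include/linux/mm.h at that time):

	/* page->_mapcount is -1 for a normal unmapped page; special negative
	 * values mark pages that can never be mapped: */
	#define PAGE_BUDDY_MAPCOUNT_VALUE	(-128)	/* free page in the buddy allocator */
	#define PAGE_BALLOON_MAPCOUNT_VALUE	(-256)	/* page held by a memory balloon */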

Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
---
 Documentation/filesystems/proc.txt     |    2 ++
 drivers/base/node.c                    |   16 ++++++++++------
 drivers/virtio/Kconfig                 |    1 +
 fs/proc/meminfo.c                      |    6 ++++++
 fs/proc/page.c                         |    3 +++
 include/linux/mm.h                     |   10 ++++++++++
 include/linux/mmzone.h                 |    3 +++
 include/uapi/linux/kernel-page-flags.h |    1 +
 mm/Kconfig                             |    5 +++++
 mm/Makefile                            |    3 ++-
 mm/balloon_compaction.c                |   14 ++++++++++++++
 mm/vmstat.c                            |    8 +++++++-
 tools/vm/page-types.c                  |    1 +
 13 files changed, 65 insertions(+), 8 deletions(-)

diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
index eb8a10e..154a345 100644
--- a/Documentation/filesystems/proc.txt
+++ b/Documentation/filesystems/proc.txt
@@ -796,6 +796,7 @@ VmallocTotal:   112216 kB
 VmallocUsed:       428 kB
 VmallocChunk:   111088 kB
 AnonHugePages:   49152 kB
+BalloonPages:        0 kB
 
     MemTotal: Total usable ram (i.e. physical ram minus a few reserved
               bits and the kernel binary code)
@@ -838,6 +839,7 @@ MemAvailable: An estimate of how much memory is available for starting new
    Writeback: Memory which is actively being written back to the disk
    AnonPages: Non-file backed pages mapped into userspace page tables
 AnonHugePages: Non-file backed huge pages mapped into userspace page tables
+BalloonPages: Memory which was ballooned, not included into MemTotal
       Mapped: files which have been mmaped, such as libraries
         Slab: in-kernel data structures cache
 SReclaimable: Part of Slab, that might be reclaimed, such as caches
diff --git a/drivers/base/node.c b/drivers/base/node.c
index c6d3ae0..59e565c 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -120,6 +120,9 @@ static ssize_t node_read_meminfo(struct device *dev,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 		       "Node %d AnonHugePages:  %8lu kB\n"
 #endif
+#ifdef CONFIG_MEMORY_BALLOON
+		       "Node %d BalloonPages:   %8lu kB\n"
+#endif
 			,
 		       nid, K(node_page_state(nid, NR_FILE_DIRTY)),
 		       nid, K(node_page_state(nid, NR_WRITEBACK)),
@@ -136,14 +139,15 @@ static ssize_t node_read_meminfo(struct device *dev,
 		       nid, K(node_page_state(nid, NR_SLAB_RECLAIMABLE) +
 				node_page_state(nid, NR_SLAB_UNRECLAIMABLE)),
 		       nid, K(node_page_state(nid, NR_SLAB_RECLAIMABLE)),
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 		       nid, K(node_page_state(nid, NR_SLAB_UNRECLAIMABLE))
-			, nid,
-			K(node_page_state(nid, NR_ANON_TRANSPARENT_HUGEPAGES) *
-			HPAGE_PMD_NR));
-#else
-		       nid, K(node_page_state(nid, NR_SLAB_UNRECLAIMABLE)));
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+		       ,nid, K(node_page_state(nid,
+				NR_ANON_TRANSPARENT_HUGEPAGES) * HPAGE_PMD_NR)
+#endif
+#ifdef CONFIG_MEMORY_BALLOON
+		       ,nid, K(node_page_state(nid, NR_BALLOON_PAGES))
 #endif
+		       );
 	n += hugetlb_report_node_meminfo(nid, buf + n);
 	return n;
 }
diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index c6683f2..00b2286 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -25,6 +25,7 @@ config VIRTIO_PCI
 config VIRTIO_BALLOON
 	tristate "Virtio balloon driver"
 	depends on VIRTIO
+	select MEMORY_BALLOON
 	---help---
 	 This driver supports increasing and decreasing the amount
 	 of memory within a KVM guest.
diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index aa1eee0..f897fbf 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -138,6 +138,9 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 		"AnonHugePages:  %8lu kB\n"
 #endif
+#ifdef CONFIG_MEMORY_BALLOON
+		"BalloonPages:   %8lu kB\n"
+#endif
 		,
 		K(i.totalram),
 		K(i.freeram),
@@ -193,6 +196,9 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 		,K(global_page_state(NR_ANON_TRANSPARENT_HUGEPAGES) *
 		   HPAGE_PMD_NR)
 #endif
+#ifdef CONFIG_MEMORY_BALLOON
+		,K(global_page_state(NR_BALLOON_PAGES))
+#endif
 		);
 
 	hugetlb_report_meminfo(m);
diff --git a/fs/proc/page.c b/fs/proc/page.c
index e647c55..1e3187d 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -133,6 +133,9 @@ u64 stable_page_flags(struct page *page)
 	if (PageBuddy(page))
 		u |= 1 << KPF_BUDDY;
 
+	if (PageBalloon(page))
+		u |= 1 << KPF_BALLOON;
+
 	u |= kpf_copy_bit(k, KPF_LOCKED,	PG_locked);
 
 	u |= kpf_copy_bit(k, KPF_SLAB,		PG_slab);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8981cc8..d2dd497 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -553,6 +553,16 @@ static inline void __ClearPageBuddy(struct page *page)
 	atomic_set(&page->_mapcount, -1);
 }
 
+#define PAGE_BALLOON_MAPCOUNT_VALUE (-256)
+
+static inline int PageBalloon(struct page *page)
+{
+	return IS_ENABLED(CONFIG_MEMORY_BALLOON) &&
+		atomic_read(&page->_mapcount) == PAGE_BALLOON_MAPCOUNT_VALUE;
+}
+void __SetPageBalloon(struct page *page);
+void __ClearPageBalloon(struct page *page);
+
 void put_page(struct page *page);
 void put_pages_list(struct list_head *pages);
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 318df70..d88fd01 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -157,6 +157,9 @@ enum zone_stat_item {
 	WORKINGSET_NODERECLAIM,
 	NR_ANON_TRANSPARENT_HUGEPAGES,
 	NR_FREE_CMA_PAGES,
+#ifdef CONFIG_MEMORY_BALLOON
+	NR_BALLOON_PAGES,
+#endif
 	NR_VM_ZONE_STAT_ITEMS };
 
 /*
diff --git a/include/uapi/linux/kernel-page-flags.h b/include/uapi/linux/kernel-page-flags.h
index 5116a0e..2f96d23 100644
--- a/include/uapi/linux/kernel-page-flags.h
+++ b/include/uapi/linux/kernel-page-flags.h
@@ -31,6 +31,7 @@
 
 #define KPF_KSM			21
 #define KPF_THP			22
+#define KPF_BALLOON		23
 
 
 #endif /* _UAPILINUX_KERNEL_PAGE_FLAGS_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index 886db21..72e0db0 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -228,6 +228,11 @@ config ARCH_ENABLE_SPLIT_PMD_PTLOCK
 	boolean
 
 #
+# support for memory ballooning
+config MEMORY_BALLOON
+	boolean
+
+#
 # support for memory balloon compaction
 config BALLOON_COMPACTION
 	bool "Allow for balloon memory compaction/migration"
diff --git a/mm/Makefile b/mm/Makefile
index 632ae77..2d33d7f 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -16,7 +16,7 @@ obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
 			   readahead.o swap.o truncate.o vmscan.o shmem.o \
 			   util.o mmzone.o vmstat.o backing-dev.o \
 			   mm_init.o mmu_context.o percpu.o slab_common.o \
-			   compaction.o balloon_compaction.o vmacache.o \
+			   compaction.o vmacache.o \
 			   interval_tree.o list_lru.o workingset.o \
 			   iov_iter.o $(mmu-y)
 
@@ -64,3 +64,4 @@ obj-$(CONFIG_ZBUD)	+= zbud.o
 obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
 obj-$(CONFIG_CMA)	+= cma.o
+obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index 6e45a50..533c567 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -10,6 +10,20 @@
 #include <linux/export.h>
 #include <linux/balloon_compaction.h>
 
+void __SetPageBalloon(struct page *page)
+{
+	VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
+	atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
+	inc_zone_page_state(page, NR_BALLOON_PAGES);
+}
+
+void __ClearPageBalloon(struct page *page)
+{
+	VM_BUG_ON_PAGE(!PageBalloon(page), page);
+	atomic_set(&page->_mapcount, -1);
+	dec_zone_page_state(page, NR_BALLOON_PAGES);
+}
+
 /*
  * balloon_devinfo_alloc - allocates a balloon device information descriptor.
  * @balloon_dev_descriptor: pointer to reference the balloon device which
diff --git a/mm/vmstat.c b/mm/vmstat.c
index e9ab104..6e704cc 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -735,7 +735,7 @@ static void walk_zones_in_node(struct seq_file *m, pg_data_t *pgdat,
 					TEXT_FOR_HIGHMEM(xx) xx "_movable",
 
 const char * const vmstat_text[] = {
-	/* Zoned VM counters */
+	/* enum zone_stat_item countes */
 	"nr_free_pages",
 	"nr_alloc_batch",
 	"nr_inactive_anon",
@@ -778,10 +778,16 @@ const char * const vmstat_text[] = {
 	"workingset_nodereclaim",
 	"nr_anon_transparent_hugepages",
 	"nr_free_cma",
+#ifdef CONFIG_MEMORY_BALLOON
+	"nr_balloon_pages",
+#endif
+
+	/* enum writeback_stat_item counters */
 	"nr_dirty_threshold",
 	"nr_dirty_background_threshold",
 
 #ifdef CONFIG_VM_EVENT_COUNTERS
+	/* enum vm_event_item counters */
 	"pgpgin",
 	"pgpgout",
 	"pswpin",
diff --git a/tools/vm/page-types.c b/tools/vm/page-types.c
index c4d6d2e..264fbc2 100644
--- a/tools/vm/page-types.c
+++ b/tools/vm/page-types.c
@@ -132,6 +132,7 @@ static const char * const page_flag_names[] = {
 	[KPF_NOPAGE]		= "n:nopage",
 	[KPF_KSM]		= "x:ksm",
 	[KPF_THP]		= "t:thp",
+	[KPF_BALLOON]		= "o:balloon",
 
 	[KPF_RESERVED]		= "r:reserved",
 	[KPF_MLOCKED]		= "m:mlocked",


* [PATCH 6/7] mm/balloon_compaction: use common page ballooning
  2014-08-20 15:04 ` Konstantin Khlebnikov
@ 2014-08-20 15:05   ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2014-08-20 15:05 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, Rafael Aquini
  Cc: Sasha Levin, Andrey Ryabinin, linux-kernel

This patch replaces the AS_BALLOON_MAP check in page->mapping->flags with
PageBalloon(), whose state is stored directly in the struct page.

Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
---
 include/linux/balloon_compaction.h |   85 ++----------------------------------
 mm/Kconfig                         |    2 -
 mm/balloon_compaction.c            |    7 +--
 mm/compaction.c                    |    9 ++--
 mm/migrate.c                       |    4 +-
 mm/vmscan.c                        |    2 -
 6 files changed, 15 insertions(+), 94 deletions(-)

diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index 53d482e..f5fda8b 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -108,77 +108,6 @@ static inline void balloon_mapping_free(struct address_space *balloon_mapping)
 }
 
 /*
- * page_flags_cleared - helper to perform balloon @page ->flags tests.
- *
- * As balloon pages are obtained from buddy and we do not play with page->flags
- * at driver level (exception made when we get the page lock for compaction),
- * we can safely identify a ballooned page by checking if the
- * PAGE_FLAGS_CHECK_AT_PREP page->flags are all cleared.  This approach also
- * helps us skip ballooned pages that are locked for compaction or release, thus
- * mitigating their racy check at balloon_page_movable()
- */
-static inline bool page_flags_cleared(struct page *page)
-{
-	return !(page->flags & PAGE_FLAGS_CHECK_AT_PREP);
-}
-
-/*
- * __is_movable_balloon_page - helper to perform @page mapping->flags tests
- */
-static inline bool __is_movable_balloon_page(struct page *page)
-{
-	struct address_space *mapping = page->mapping;
-	return !PageAnon(page) && mapping_balloon(mapping);
-}
-
-/*
- * balloon_page_movable - test page->mapping->flags to identify balloon pages
- *			  that can be moved by compaction/migration.
- *
- * This function is used at core compaction's page isolation scheme, therefore
- * most pages exposed to it are not enlisted as balloon pages and so, to avoid
- * undesired side effects like racing against __free_pages(), we cannot afford
- * holding the page locked while testing page->mapping->flags here.
- *
- * As we might return false positives in the case of a balloon page being just
- * released under us, the page->mapping->flags need to be re-tested later,
- * under the proper page lock, at the functions that will be coping with the
- * balloon page case.
- */
-static inline bool balloon_page_movable(struct page *page)
-{
-	/*
-	 * Before dereferencing and testing mapping->flags, let's make sure
-	 * this is not a page that uses ->mapping in a different way
-	 */
-	if (page_flags_cleared(page) && !page_mapped(page) &&
-	    page_count(page) == 1)
-		return __is_movable_balloon_page(page);
-
-	return false;
-}
-
-/*
- * isolated_balloon_page - identify an isolated balloon page on private
- *			   compaction/migration page lists.
- *
- * After a compaction thread isolates a balloon page for migration, it raises
- * the page refcount to prevent concurrent compaction threads from re-isolating
- * the same page. For that reason putback_movable_pages(), or other routines
- * that need to identify isolated balloon pages on private pagelists, cannot
- * rely on balloon_page_movable() to accomplish the task.
- */
-static inline bool isolated_balloon_page(struct page *page)
-{
-	/* Already isolated balloon pages, by default, have a raised refcount */
-	if (page_flags_cleared(page) && !page_mapped(page) &&
-	    page_count(page) >= 2)
-		return __is_movable_balloon_page(page);
-
-	return false;
-}
-
-/*
  * balloon_page_insert - insert a page into the balloon's page list and make
  *		         the page->mapping assignment accordingly.
  * @page    : page to be assigned as a 'balloon page'
@@ -192,6 +121,7 @@ static inline void balloon_page_insert(struct page *page,
 				       struct address_space *mapping,
 				       struct list_head *head)
 {
+	__SetPageBalloon(page);
 	page->mapping = mapping;
 	list_add(&page->lru, head);
 }
@@ -206,6 +136,7 @@ static inline void balloon_page_insert(struct page *page,
  */
 static inline void balloon_page_delete(struct page *page)
 {
+	__ClearPageBalloon(page);
 	page->mapping = NULL;
 	list_del(&page->lru);
 }
@@ -250,24 +181,16 @@ static inline void balloon_page_insert(struct page *page,
 				       struct address_space *mapping,
 				       struct list_head *head)
 {
+	__SetPageBalloon(page);
 	list_add(&page->lru, head);
 }
 
 static inline void balloon_page_delete(struct page *page)
 {
+	__ClearPageBalloon(page);
 	list_del(&page->lru);
 }
 
-static inline bool balloon_page_movable(struct page *page)
-{
-	return false;
-}
-
-static inline bool isolated_balloon_page(struct page *page)
-{
-	return false;
-}
-
 static inline bool balloon_page_isolate(struct page *page)
 {
 	return false;
diff --git a/mm/Kconfig b/mm/Kconfig
index 72e0db0..e09cf0a 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -237,7 +237,7 @@ config MEMORY_BALLOON
 config BALLOON_COMPACTION
 	bool "Allow for balloon memory compaction/migration"
 	def_bool y
-	depends on COMPACTION && VIRTIO_BALLOON
+	depends on COMPACTION && MEMORY_BALLOON
 	help
 	  Memory fragmentation introduced by ballooning might reduce
 	  significantly the number of 2MB contiguous memory blocks that can be
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index 533c567..22c8e03 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -253,8 +253,7 @@ bool balloon_page_isolate(struct page *page)
 			 * Prevent concurrent compaction threads from isolating
 			 * an already isolated balloon page by refcount check.
 			 */
-			if (__is_movable_balloon_page(page) &&
-			    page_count(page) == 2) {
+			if (PageBalloon(page) && page_count(page) == 2) {
 				__isolate_balloon_page(page);
 				unlock_page(page);
 				return true;
@@ -275,7 +274,7 @@ void balloon_page_putback(struct page *page)
 	 */
 	lock_page(page);
 
-	if (__is_movable_balloon_page(page)) {
+	if (PageBalloon(page)) {
 		__putback_balloon_page(page);
 		/* drop the extra ref count taken for page isolation */
 		put_page(page);
@@ -300,7 +299,7 @@ int balloon_page_migrate(struct page *newpage,
 	 */
 	BUG_ON(!trylock_page(newpage));
 
-	if (WARN_ON(!__is_movable_balloon_page(page))) {
+	if (WARN_ON(!PageBalloon(page))) {
 		dump_page(page, "not movable balloon page");
 		unlock_page(newpage);
 		return rc;
diff --git a/mm/compaction.c b/mm/compaction.c
index 0653f5f..e9aeed2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -596,11 +596,10 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
 		 * Skip any other type of page
 		 */
 		if (!PageLRU(page)) {
-			if (unlikely(balloon_page_movable(page))) {
-				if (balloon_page_isolate(page)) {
-					/* Successfully isolated */
-					goto isolate_success;
-				}
+			if (unlikely(PageBalloon(page)) &&
+					balloon_page_isolate(page)) {
+				/* Successfully isolated */
+				goto isolate_success;
 			}
 			continue;
 		}
diff --git a/mm/migrate.c b/mm/migrate.c
index 161d044..c35e6f2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -92,7 +92,7 @@ void putback_movable_pages(struct list_head *l)
 		list_del(&page->lru);
 		dec_zone_page_state(page, NR_ISOLATED_ANON +
 				page_is_file_cache(page));
-		if (unlikely(isolated_balloon_page(page)))
+		if (unlikely(PageBalloon(page)))
 			balloon_page_putback(page);
 		else
 			putback_lru_page(page);
@@ -873,7 +873,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		}
 	}
 
-	if (unlikely(__is_movable_balloon_page(page))) {
+	if (unlikely(PageBalloon(page))) {
 		/*
 		 * A ballooned page does not need any special attention from
 		 * physical to virtual reverse mapping procedures.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2836b53..f90f93e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1160,7 +1160,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
 
 	list_for_each_entry_safe(page, next, page_list, lru) {
 		if (page_is_file_cache(page) && !PageDirty(page) &&
-		    !isolated_balloon_page(page)) {
+		    !PageBalloon(page)) {
 			ClearPageActive(page);
 			list_move(&page->lru, &clean_pages);
 		}


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 6/7] mm/balloon_compaction: use common page ballooning
@ 2014-08-20 15:05   ` Konstantin Khlebnikov
  0 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2014-08-20 15:05 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, Rafael Aquini
  Cc: Sasha Levin, Andrey Ryabinin, linux-kernel

This patch replaces checking AS_BALLOON_MAP in page->mapping->flags
with PageBalloon which is stored directly in the struct page.

Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
---
 include/linux/balloon_compaction.h |   85 ++----------------------------------
 mm/Kconfig                         |    2 -
 mm/balloon_compaction.c            |    7 +--
 mm/compaction.c                    |    9 ++--
 mm/migrate.c                       |    4 +-
 mm/vmscan.c                        |    2 -
 6 files changed, 15 insertions(+), 94 deletions(-)

diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index 53d482e..f5fda8b 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -108,77 +108,6 @@ static inline void balloon_mapping_free(struct address_space *balloon_mapping)
 }
 
 /*
- * page_flags_cleared - helper to perform balloon @page ->flags tests.
- *
- * As balloon pages are obtained from buddy and we do not play with page->flags
- * at driver level (exception made when we get the page lock for compaction),
- * we can safely identify a ballooned page by checking if the
- * PAGE_FLAGS_CHECK_AT_PREP page->flags are all cleared.  This approach also
- * helps us skip ballooned pages that are locked for compaction or release, thus
- * mitigating their racy check at balloon_page_movable()
- */
-static inline bool page_flags_cleared(struct page *page)
-{
-	return !(page->flags & PAGE_FLAGS_CHECK_AT_PREP);
-}
-
-/*
- * __is_movable_balloon_page - helper to perform @page mapping->flags tests
- */
-static inline bool __is_movable_balloon_page(struct page *page)
-{
-	struct address_space *mapping = page->mapping;
-	return !PageAnon(page) && mapping_balloon(mapping);
-}
-
-/*
- * balloon_page_movable - test page->mapping->flags to identify balloon pages
- *			  that can be moved by compaction/migration.
- *
- * This function is used at core compaction's page isolation scheme, therefore
- * most pages exposed to it are not enlisted as balloon pages and so, to avoid
- * undesired side effects like racing against __free_pages(), we cannot afford
- * holding the page locked while testing page->mapping->flags here.
- *
- * As we might return false positives in the case of a balloon page being just
- * released under us, the page->mapping->flags need to be re-tested later,
- * under the proper page lock, at the functions that will be coping with the
- * balloon page case.
- */
-static inline bool balloon_page_movable(struct page *page)
-{
-	/*
-	 * Before dereferencing and testing mapping->flags, let's make sure
-	 * this is not a page that uses ->mapping in a different way
-	 */
-	if (page_flags_cleared(page) && !page_mapped(page) &&
-	    page_count(page) == 1)
-		return __is_movable_balloon_page(page);
-
-	return false;
-}
-
-/*
- * isolated_balloon_page - identify an isolated balloon page on private
- *			   compaction/migration page lists.
- *
- * After a compaction thread isolates a balloon page for migration, it raises
- * the page refcount to prevent concurrent compaction threads from re-isolating
- * the same page. For that reason putback_movable_pages(), or other routines
- * that need to identify isolated balloon pages on private pagelists, cannot
- * rely on balloon_page_movable() to accomplish the task.
- */
-static inline bool isolated_balloon_page(struct page *page)
-{
-	/* Already isolated balloon pages, by default, have a raised refcount */
-	if (page_flags_cleared(page) && !page_mapped(page) &&
-	    page_count(page) >= 2)
-		return __is_movable_balloon_page(page);
-
-	return false;
-}
-
-/*
  * balloon_page_insert - insert a page into the balloon's page list and make
  *		         the page->mapping assignment accordingly.
  * @page    : page to be assigned as a 'balloon page'
@@ -192,6 +121,7 @@ static inline void balloon_page_insert(struct page *page,
 				       struct address_space *mapping,
 				       struct list_head *head)
 {
+	__SetPageBalloon(page);
 	page->mapping = mapping;
 	list_add(&page->lru, head);
 }
@@ -206,6 +136,7 @@ static inline void balloon_page_insert(struct page *page,
  */
 static inline void balloon_page_delete(struct page *page)
 {
+	__ClearPageBalloon(page);
 	page->mapping = NULL;
 	list_del(&page->lru);
 }
@@ -250,24 +181,16 @@ static inline void balloon_page_insert(struct page *page,
 				       struct address_space *mapping,
 				       struct list_head *head)
 {
+	__SetPageBalloon(page);
 	list_add(&page->lru, head);
 }
 
 static inline void balloon_page_delete(struct page *page)
 {
+	__ClearPageBalloon(page);
 	list_del(&page->lru);
 }
 
-static inline bool balloon_page_movable(struct page *page)
-{
-	return false;
-}
-
-static inline bool isolated_balloon_page(struct page *page)
-{
-	return false;
-}
-
 static inline bool balloon_page_isolate(struct page *page)
 {
 	return false;
diff --git a/mm/Kconfig b/mm/Kconfig
index 72e0db0..e09cf0a 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -237,7 +237,7 @@ config MEMORY_BALLOON
 config BALLOON_COMPACTION
 	bool "Allow for balloon memory compaction/migration"
 	def_bool y
-	depends on COMPACTION && VIRTIO_BALLOON
+	depends on COMPACTION && MEMORY_BALLOON
 	help
 	  Memory fragmentation introduced by ballooning might reduce
 	  significantly the number of 2MB contiguous memory blocks that can be
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index 533c567..22c8e03 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -253,8 +253,7 @@ bool balloon_page_isolate(struct page *page)
 			 * Prevent concurrent compaction threads from isolating
 			 * an already isolated balloon page by refcount check.
 			 */
-			if (__is_movable_balloon_page(page) &&
-			    page_count(page) == 2) {
+			if (PageBalloon(page) && page_count(page) == 2) {
 				__isolate_balloon_page(page);
 				unlock_page(page);
 				return true;
@@ -275,7 +274,7 @@ void balloon_page_putback(struct page *page)
 	 */
 	lock_page(page);
 
-	if (__is_movable_balloon_page(page)) {
+	if (PageBalloon(page)) {
 		__putback_balloon_page(page);
 		/* drop the extra ref count taken for page isolation */
 		put_page(page);
@@ -300,7 +299,7 @@ int balloon_page_migrate(struct page *newpage,
 	 */
 	BUG_ON(!trylock_page(newpage));
 
-	if (WARN_ON(!__is_movable_balloon_page(page))) {
+	if (WARN_ON(!PageBalloon(page))) {
 		dump_page(page, "not movable balloon page");
 		unlock_page(newpage);
 		return rc;
diff --git a/mm/compaction.c b/mm/compaction.c
index 0653f5f..e9aeed2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -596,11 +596,10 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
 		 * Skip any other type of page
 		 */
 		if (!PageLRU(page)) {
-			if (unlikely(balloon_page_movable(page))) {
-				if (balloon_page_isolate(page)) {
-					/* Successfully isolated */
-					goto isolate_success;
-				}
+			if (unlikely(PageBalloon(page)) &&
+					balloon_page_isolate(page)) {
+				/* Successfully isolated */
+				goto isolate_success;
 			}
 			continue;
 		}
diff --git a/mm/migrate.c b/mm/migrate.c
index 161d044..c35e6f2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -92,7 +92,7 @@ void putback_movable_pages(struct list_head *l)
 		list_del(&page->lru);
 		dec_zone_page_state(page, NR_ISOLATED_ANON +
 				page_is_file_cache(page));
-		if (unlikely(isolated_balloon_page(page)))
+		if (unlikely(PageBalloon(page)))
 			balloon_page_putback(page);
 		else
 			putback_lru_page(page);
@@ -873,7 +873,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		}
 	}
 
-	if (unlikely(__is_movable_balloon_page(page))) {
+	if (unlikely(PageBalloon(page))) {
 		/*
 		 * A ballooned page does not need any special attention from
 		 * physical to virtual reverse mapping procedures.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2836b53..f90f93e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1160,7 +1160,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
 
 	list_for_each_entry_safe(page, next, page_list, lru) {
 		if (page_is_file_cache(page) && !PageDirty(page) &&
-		    !isolated_balloon_page(page)) {
+		    !PageBalloon(page)) {
 			ClearPageActive(page);
 			list_move(&page->lru, &clean_pages);
 		}


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 7/7] mm/balloon_compaction: general cleanup
  2014-08-20 15:04 ` Konstantin Khlebnikov
@ 2014-08-20 15:05   ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2014-08-20 15:05 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, Rafael Aquini
  Cc: Sasha Levin, Andrey Ryabinin, linux-kernel

* move special branch for balloon migration into migrate_pages
* remove special mapping for balloon and its flag AS_BALLOON_MAP
* embed struct balloon_dev_info into struct virtio_balloon
* cleanup balloon_page_dequeue, kill balloon_page_free
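
A minimal usage sketch of the reworked API (not part of this patch): a driver
now embeds struct balloon_dev_info and sets the migrate_page callback instead
of allocating a special page->mapping. The "my_balloon" driver below is
hypothetical; only the balloon_* helpers introduced in this series plus
standard get_page()/put_page() are assumed.

#include <linux/balloon_compaction.h>

struct my_balloon {
	struct balloon_dev_info b_dev_info;	/* embedded, no separate kmalloc */
	/* backend-specific state goes here */
};

#ifdef CONFIG_BALLOON_COMPACTION
/* Called under the old page's lock from balloon_page_migrate(). */
static int my_balloon_migratepage(struct balloon_dev_info *b_dev_info,
		struct page *newpage, struct page *page, enum migrate_mode mode)
{
	unsigned long flags;

	get_page(newpage);			/* balloon reference on newpage */

	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
	balloon_page_insert(b_dev_info, newpage);
	b_dev_info->isolated_pages--;
	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);

	/* ... tell the backend that newpage now backs the balloon ... */

	/* old page is on the isolation list, so skip list_del() */
	balloon_page_delete(page, true);
	put_page(page);				/* drop balloon reference */

	return MIGRATEPAGE_SUCCESS;
}
#endif

static void my_balloon_init(struct my_balloon *b)
{
	balloon_devinfo_init(&b->b_dev_info);
#ifdef CONFIG_BALLOON_COMPACTION
	b->b_dev_info.migrate_page = my_balloon_migratepage;
#endif
}

/* inflate by one page */
static int my_balloon_inflate(struct my_balloon *b)
{
	struct page *page = balloon_page_enqueue(&b->b_dev_info);

	if (!page)
		return -ENOMEM;
	/* ... tell the backend that this page is now ballooned ... */
	return 0;
}

/* deflate by one page; balloon_page_free() is gone, plain put_page() will do */
static void my_balloon_deflate(struct my_balloon *b)
{
	struct page *page = balloon_page_dequeue(&b->b_dev_info);

	if (!page)
		return;
	/* ... tell the backend that this page is being returned ... */
	put_page(page);
}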

Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
---
 drivers/virtio/virtio_balloon.c    |   77 ++++---------
 include/linux/balloon_compaction.h |  107 ++++++------------
 include/linux/migrate.h            |   11 --
 include/linux/pagemap.h            |   18 ---
 mm/balloon_compaction.c            |  214 ++++++++++++------------------------
 mm/migrate.c                       |   27 +----
 6 files changed, 130 insertions(+), 324 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 25ebe8e..bdc6a7e 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -59,7 +59,7 @@ struct virtio_balloon
 	 * Each page on this list adds VIRTIO_BALLOON_PAGES_PER_PAGE
 	 * to num_pages above.
 	 */
-	struct balloon_dev_info *vb_dev_info;
+	struct balloon_dev_info vb_dev_info;
 
 	/* Synchronize access/update to this struct virtio_balloon elements */
 	struct mutex balloon_lock;
@@ -127,7 +127,7 @@ static void set_page_pfns(u32 pfns[], struct page *page)
 
 static void fill_balloon(struct virtio_balloon *vb, size_t num)
 {
-	struct balloon_dev_info *vb_dev_info = vb->vb_dev_info;
+	struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
 
 	/* We can only do one array worth at a time. */
 	num = min(num, ARRAY_SIZE(vb->pfns));
@@ -163,15 +163,15 @@ static void release_pages_by_pfn(const u32 pfns[], unsigned int num)
 	/* Find pfns pointing at start of each page, get pages and free them. */
 	for (i = 0; i < num; i += VIRTIO_BALLOON_PAGES_PER_PAGE) {
 		struct page *page = balloon_pfn_to_page(pfns[i]);
-		balloon_page_free(page);
 		adjust_managed_page_count(page, 1);
+		put_page(page);
 	}
 }
 
 static void leak_balloon(struct virtio_balloon *vb, size_t num)
 {
 	struct page *page;
-	struct balloon_dev_info *vb_dev_info = vb->vb_dev_info;
+	struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
 
 	/* We can only do one array worth at a time. */
 	num = min(num, ARRAY_SIZE(vb->pfns));
@@ -353,12 +353,11 @@ static int init_vqs(struct virtio_balloon *vb)
 	return 0;
 }
 
-static const struct address_space_operations virtio_balloon_aops;
 #ifdef CONFIG_BALLOON_COMPACTION
 /*
  * virtballoon_migratepage - perform the balloon page migration on behalf of
  *			     a compation thread.     (called under page lock)
- * @mapping: the page->mapping which will be assigned to the new migrated page.
+ * @vb_dev_info: the balloon device
  * @newpage: page that will replace the isolated page after migration finishes.
  * @page   : the isolated (old) page that is about to be migrated to newpage.
  * @mode   : compaction mode -- not used for balloon page migration.
@@ -373,17 +372,13 @@ static const struct address_space_operations virtio_balloon_aops;
  * This function preforms the balloon page migration task.
  * Called through balloon_mapping->a_ops->migratepage
  */
-static int virtballoon_migratepage(struct address_space *mapping,
+static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
 		struct page *newpage, struct page *page, enum migrate_mode mode)
 {
-	struct balloon_dev_info *vb_dev_info = balloon_page_device(page);
-	struct virtio_balloon *vb;
+	struct virtio_balloon *vb = container_of(vb_dev_info,
+			struct virtio_balloon, vb_dev_info);
 	unsigned long flags;
 
-	BUG_ON(!vb_dev_info);
-
-	vb = vb_dev_info->balloon_device;
-
 	/*
 	 * In order to avoid lock contention while migrating pages concurrently
 	 * to leak_balloon() or fill_balloon() we just give up the balloon_lock
@@ -395,42 +390,34 @@ static int virtballoon_migratepage(struct address_space *mapping,
 	if (!mutex_trylock(&vb->balloon_lock))
 		return -EAGAIN;
 
+	get_page(newpage); /* balloon reference */
+
 	/* balloon's page migration 1st step  -- inflate "newpage" */
 	spin_lock_irqsave(&vb_dev_info->pages_lock, flags);
-	balloon_page_insert(newpage, mapping, &vb_dev_info->pages);
+	balloon_page_insert(vb_dev_info, newpage);
 	vb_dev_info->isolated_pages--;
 	spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags);
 	vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
 	set_page_pfns(vb->pfns, newpage);
 	tell_host(vb, vb->inflate_vq);
 
-	/*
-	 * balloon's page migration 2nd step -- deflate "page"
-	 *
-	 * It's safe to delete page->lru here because this page is at
-	 * an isolated migration list, and this step is expected to happen here
-	 */
-	balloon_page_delete(page);
+	/* balloon's page migration 2nd step -- deflate "page" */
+	balloon_page_delete(page, true);
 	vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
 	set_page_pfns(vb->pfns, page);
 	tell_host(vb, vb->deflate_vq);
 
 	mutex_unlock(&vb->balloon_lock);
 
-	return MIGRATEPAGE_BALLOON_SUCCESS;
-}
+	put_page(page); /* balloon reference */
 
-/* define the balloon_mapping->a_ops callback to allow balloon page migration */
-static const struct address_space_operations virtio_balloon_aops = {
-			.migratepage = virtballoon_migratepage,
-};
+	return MIGRATEPAGE_SUCCESS;
+}
 #endif /* CONFIG_BALLOON_COMPACTION */
 
 static int virtballoon_probe(struct virtio_device *vdev)
 {
 	struct virtio_balloon *vb;
-	struct address_space *vb_mapping;
-	struct balloon_dev_info *vb_devinfo;
 	int err;
 
 	vdev->priv = vb = kmalloc(sizeof(*vb), GFP_KERNEL);
@@ -446,30 +433,14 @@ static int virtballoon_probe(struct virtio_device *vdev)
 	vb->vdev = vdev;
 	vb->need_stats_update = 0;
 
-	vb_devinfo = balloon_devinfo_alloc(vb);
-	if (IS_ERR(vb_devinfo)) {
-		err = PTR_ERR(vb_devinfo);
-		goto out_free_vb;
-	}
-
-	vb_mapping = balloon_mapping_alloc(vb_devinfo,
-					   (balloon_compaction_check()) ?
-					   &virtio_balloon_aops : NULL);
-	if (IS_ERR(vb_mapping)) {
-		/*
-		 * IS_ERR(vb_mapping) && PTR_ERR(vb_mapping) == -EOPNOTSUPP
-		 * This means !CONFIG_BALLOON_COMPACTION, otherwise we get off.
-		 */
-		err = PTR_ERR(vb_mapping);
-		if (err != -EOPNOTSUPP)
-			goto out_free_vb_devinfo;
-	}
-
-	vb->vb_dev_info = vb_devinfo;
+	balloon_devinfo_init(&vb->vb_dev_info);
+#ifdef CONFIG_BALLOON_COMPACTION
+	vb->vb_dev_info.migrate_page = virtballoon_migratepage;
+#endif
 
 	err = init_vqs(vb);
 	if (err)
-		goto out_free_vb_mapping;
+		goto out_free_vb;
 
 	vb->thread = kthread_run(balloon, vb, "vballoon");
 	if (IS_ERR(vb->thread)) {
@@ -481,10 +452,6 @@ static int virtballoon_probe(struct virtio_device *vdev)
 
 out_del_vqs:
 	vdev->config->del_vqs(vdev);
-out_free_vb_mapping:
-	balloon_mapping_free(vb_mapping);
-out_free_vb_devinfo:
-	balloon_devinfo_free(vb_devinfo);
 out_free_vb:
 	kfree(vb);
 out:
@@ -510,8 +477,6 @@ static void virtballoon_remove(struct virtio_device *vdev)
 
 	kthread_stop(vb->thread);
 	remove_common(vb);
-	balloon_mapping_free(vb->vb_dev_info->mapping);
-	balloon_devinfo_free(vb->vb_dev_info);
 	kfree(vb);
 }
 
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index f5fda8b..dc7073b 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -54,58 +54,27 @@
  * balloon driver as a page book-keeper for its registered balloon devices.
  */
 struct balloon_dev_info {
-	void *balloon_device;		/* balloon device descriptor */
-	struct address_space *mapping;	/* balloon special page->mapping */
 	unsigned long isolated_pages;	/* # of isolated pages for migration */
 	spinlock_t pages_lock;		/* Protection to pages list */
 	struct list_head pages;		/* Pages enqueued & handled to Host */
+	int (* migrate_page)(struct balloon_dev_info *, struct page *newpage,
+			struct page *page, enum migrate_mode mode);
 };
 
-extern struct page *balloon_page_enqueue(struct balloon_dev_info *b_dev_info);
-extern struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info);
-extern struct balloon_dev_info *balloon_devinfo_alloc(
-						void *balloon_dev_descriptor);
-
-static inline void balloon_devinfo_free(struct balloon_dev_info *b_dev_info)
+static inline void balloon_devinfo_init(struct balloon_dev_info *b_dev_info)
 {
-	kfree(b_dev_info);
+	b_dev_info->isolated_pages = 0;
+	spin_lock_init(&b_dev_info->pages_lock);
+	INIT_LIST_HEAD(&b_dev_info->pages);
+	b_dev_info->migrate_page = NULL;
 }
 
-/*
- * balloon_page_free - release a balloon page back to the page free lists
- * @page: ballooned page to be set free
- *
- * This function must be used to properly set free an isolated/dequeued balloon
- * page at the end of a sucessful page migration, or at the balloon driver's
- * page release procedure.
- */
-static inline void balloon_page_free(struct page *page)
-{
-	/*
-	 * Balloon pages always get an extra refcount before being isolated
-	 * and before being dequeued to help on sorting out fortuite colisions
-	 * between a thread attempting to isolate and another thread attempting
-	 * to release the very same balloon page.
-	 *
-	 * Before we handle the page back to Buddy, lets drop its extra refcnt.
-	 */
-	put_page(page);
-	__free_page(page);
-}
+extern struct page *balloon_page_enqueue(struct balloon_dev_info *b_dev_info);
+extern struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info);
 
 #ifdef CONFIG_BALLOON_COMPACTION
 extern bool balloon_page_isolate(struct page *page);
 extern void balloon_page_putback(struct page *page);
-extern int balloon_page_migrate(struct page *newpage,
-				struct page *page, enum migrate_mode mode);
-extern struct address_space
-*balloon_mapping_alloc(struct balloon_dev_info *b_dev_info,
-			const struct address_space_operations *a_ops);
-
-static inline void balloon_mapping_free(struct address_space *balloon_mapping)
-{
-	kfree(balloon_mapping);
-}
 
 /*
  * balloon_page_insert - insert a page into the balloon's page list and make
@@ -117,13 +86,12 @@ static inline void balloon_mapping_free(struct address_space *balloon_mapping)
  * Caller must ensure the page is locked and the spin_lock protecting balloon
  * pages list is held before inserting a page into the balloon device.
  */
-static inline void balloon_page_insert(struct page *page,
-				       struct address_space *mapping,
-				       struct list_head *head)
+static inline void
+balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
 {
 	__SetPageBalloon(page);
-	page->mapping = mapping;
-	list_add(&page->lru, head);
+	set_page_private(page, (unsigned long)balloon);
+	list_add(&page->lru, &balloon->pages);
 }
 
 /*
@@ -134,24 +102,25 @@ static inline void balloon_page_insert(struct page *page,
  * Caller must ensure the page is locked and the spin_lock protecting balloon
  * pages list is held before deleting a page from the balloon device.
  */
-static inline void balloon_page_delete(struct page *page)
+static inline void balloon_page_delete(struct page *page, bool isolated)
 {
 	__ClearPageBalloon(page);
-	page->mapping = NULL;
-	list_del(&page->lru);
+	set_page_private(page, 0);
+	if (!isolated)
+		list_del(&page->lru);
 }
 
+int balloon_page_migrate(new_page_t get_new_page, free_page_t put_new_page,
+		unsigned long private, struct page *page,
+		int force, enum migrate_mode mode);
+
 /*
  * balloon_page_device - get the b_dev_info descriptor for the balloon device
  *			 that enqueues the given page.
  */
 static inline struct balloon_dev_info *balloon_page_device(struct page *page)
 {
-	struct address_space *mapping = page->mapping;
-	if (likely(mapping))
-		return mapping->private_data;
-
-	return NULL;
+	return (struct balloon_dev_info *)page_private(page);
 }
 
 static inline gfp_t balloon_mapping_gfp_mask(void)
@@ -159,11 +128,6 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
 	return GFP_HIGHUSER_MOVABLE;
 }
 
-static inline bool balloon_compaction_check(void)
-{
-	return true;
-}
-
 #else /* !CONFIG_BALLOON_COMPACTION */
 
 static inline void *balloon_mapping_alloc(void *balloon_device,
@@ -177,18 +141,25 @@ static inline void balloon_mapping_free(struct address_space *balloon_mapping)
 	return;
 }
 
-static inline void balloon_page_insert(struct page *page,
-				       struct address_space *mapping,
-				       struct list_head *head)
+static inline void
+balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
 {
 	__SetPageBalloon(page);
 	list_add(&page->lru, head);
 }
 
-static inline void balloon_page_delete(struct page *page)
+static inline void balloon_page_delete(struct page *page, bool isolated)
 {
 	__ClearPageBalloon(page);
-	list_del(&page->lru);
+	if (!isolated)
+		list_del(&page->lru);
+}
+
+static inline int balloon_page_migrate(new_page_t get_new_page,
+		free_page_t put_new_page, unsigned long private,
+		struct page *page, int force, enum migrate_mode mode)
+{
+	return -EAGAIN;
 }
 
 static inline bool balloon_page_isolate(struct page *page)
@@ -201,20 +172,10 @@ static inline void balloon_page_putback(struct page *page)
 	return;
 }
 
-static inline int balloon_page_migrate(struct page *newpage,
-				struct page *page, enum migrate_mode mode)
-{
-	return 0;
-}
-
 static inline gfp_t balloon_mapping_gfp_mask(void)
 {
 	return GFP_HIGHUSER;
 }
 
-static inline bool balloon_compaction_check(void)
-{
-	return false;
-}
 #endif /* CONFIG_BALLOON_COMPACTION */
 #endif /* _LINUX_BALLOON_COMPACTION_H */
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index a2901c4..b33347f 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -13,18 +13,9 @@ typedef void free_page_t(struct page *page, unsigned long private);
  * Return values from addresss_space_operations.migratepage():
  * - negative errno on page migration failure;
  * - zero on page migration success;
- *
- * The balloon page migration introduces this special case where a 'distinct'
- * return code is used to flag a successful page migration to unmap_and_move().
- * This approach is necessary because page migration can race against balloon
- * deflation procedure, and for such case we could introduce a nasty page leak
- * if a successfully migrated balloon page gets released concurrently with
- * migration's unmap_and_move() wrap-up steps.
  */
 #define MIGRATEPAGE_SUCCESS		0
-#define MIGRATEPAGE_BALLOON_SUCCESS	1 /* special ret code for balloon page
-					   * sucessful migration case.
-					   */
+
 enum migrate_reason {
 	MR_COMPACTION,
 	MR_MEMORY_FAILURE,
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 3df8c7d..b517c34 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -24,8 +24,7 @@ enum mapping_flags {
 	AS_ENOSPC	= __GFP_BITS_SHIFT + 1,	/* ENOSPC on async write */
 	AS_MM_ALL_LOCKS	= __GFP_BITS_SHIFT + 2,	/* under mm_take_all_locks() */
 	AS_UNEVICTABLE	= __GFP_BITS_SHIFT + 3,	/* e.g., ramdisk, SHM_LOCK */
-	AS_BALLOON_MAP  = __GFP_BITS_SHIFT + 4, /* balloon page special map */
-	AS_EXITING	= __GFP_BITS_SHIFT + 5, /* final truncate in progress */
+	AS_EXITING	= __GFP_BITS_SHIFT + 4, /* final truncate in progress */
 };
 
 static inline void mapping_set_error(struct address_space *mapping, int error)
@@ -55,21 +54,6 @@ static inline int mapping_unevictable(struct address_space *mapping)
 	return !!mapping;
 }
 
-static inline void mapping_set_balloon(struct address_space *mapping)
-{
-	set_bit(AS_BALLOON_MAP, &mapping->flags);
-}
-
-static inline void mapping_clear_balloon(struct address_space *mapping)
-{
-	clear_bit(AS_BALLOON_MAP, &mapping->flags);
-}
-
-static inline int mapping_balloon(struct address_space *mapping)
-{
-	return mapping && test_bit(AS_BALLOON_MAP, &mapping->flags);
-}
-
 static inline void mapping_set_exiting(struct address_space *mapping)
 {
 	set_bit(AS_EXITING, &mapping->flags);
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index 22c8e03..fc49a8a 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -25,32 +25,6 @@ void __ClearPageBalloon(struct page *page)
 }
 
 /*
- * balloon_devinfo_alloc - allocates a balloon device information descriptor.
- * @balloon_dev_descriptor: pointer to reference the balloon device which
- *                          this struct balloon_dev_info will be servicing.
- *
- * Driver must call it to properly allocate and initialize an instance of
- * struct balloon_dev_info which will be used to reference a balloon device
- * as well as to keep track of the balloon device page list.
- */
-struct balloon_dev_info *balloon_devinfo_alloc(void *balloon_dev_descriptor)
-{
-	struct balloon_dev_info *b_dev_info;
-	b_dev_info = kmalloc(sizeof(*b_dev_info), GFP_KERNEL);
-	if (!b_dev_info)
-		return ERR_PTR(-ENOMEM);
-
-	b_dev_info->balloon_device = balloon_dev_descriptor;
-	b_dev_info->mapping = NULL;
-	b_dev_info->isolated_pages = 0;
-	spin_lock_init(&b_dev_info->pages_lock);
-	INIT_LIST_HEAD(&b_dev_info->pages);
-
-	return b_dev_info;
-}
-EXPORT_SYMBOL_GPL(balloon_devinfo_alloc);
-
-/*
  * balloon_page_enqueue - allocates a new page and inserts it into the balloon
  *			  page list.
  * @b_dev_info: balloon device decriptor where we will insert a new page to
@@ -75,7 +49,7 @@ struct page *balloon_page_enqueue(struct balloon_dev_info *b_dev_info)
 	 */
 	BUG_ON(!trylock_page(page));
 	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
-	balloon_page_insert(page, b_dev_info->mapping, &b_dev_info->pages);
+	balloon_page_insert(b_dev_info, page);
 	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
 	unlock_page(page);
 	return page;
@@ -95,12 +69,10 @@ EXPORT_SYMBOL_GPL(balloon_page_enqueue);
  */
 struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info)
 {
-	struct page *page, *tmp;
+	struct page *page;
 	unsigned long flags;
-	bool dequeued_page;
 
-	dequeued_page = false;
-	list_for_each_entry_safe(page, tmp, &b_dev_info->pages, lru) {
+	list_for_each_entry(page, &b_dev_info->pages, lru) {
 		/*
 		 * Block others from accessing the 'page' while we get around
 		 * establishing additional references and preparing the 'page'
@@ -108,98 +80,32 @@ struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info)
 		 */
 		if (trylock_page(page)) {
 			spin_lock_irqsave(&b_dev_info->pages_lock, flags);
-			/*
-			 * Raise the page refcount here to prevent any wrong
-			 * attempt to isolate this page, in case of coliding
-			 * with balloon_page_isolate() just after we release
-			 * the page lock.
-			 *
-			 * balloon_page_free() will take care of dropping
-			 * this extra refcount later.
-			 */
-			get_page(page);
-			balloon_page_delete(page);
+			balloon_page_delete(page, false);
 			spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
 			unlock_page(page);
-			dequeued_page = true;
-			break;
+			return page;
 		}
 	}
 
-	if (!dequeued_page) {
-		/*
-		 * If we are unable to dequeue a balloon page because the page
-		 * list is empty and there is no isolated pages, then something
-		 * went out of track and some balloon pages are lost.
-		 * BUG() here, otherwise the balloon driver may get stuck into
-		 * an infinite loop while attempting to release all its pages.
-		 */
-		spin_lock_irqsave(&b_dev_info->pages_lock, flags);
-		if (unlikely(list_empty(&b_dev_info->pages) &&
-			     !b_dev_info->isolated_pages))
-			BUG();
-		spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
-		page = NULL;
-	}
-	return page;
+	/*
+	 * If we are unable to dequeue a balloon page because the page
+	 * list is empty and there is no isolated pages, then something
+	 * went out of track and some balloon pages are lost.
+	 * BUG() here, otherwise the balloon driver may get stuck into
+	 * an infinite loop while attempting to release all its pages.
+	 */
+	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
+	BUG_ON(list_empty(&b_dev_info->pages) && !b_dev_info->isolated_pages);
+	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
+	return NULL;
 }
 EXPORT_SYMBOL_GPL(balloon_page_dequeue);
 
 #ifdef CONFIG_BALLOON_COMPACTION
-/*
- * balloon_mapping_alloc - allocates a special ->mapping for ballooned pages.
- * @b_dev_info: holds the balloon device information descriptor.
- * @a_ops: balloon_mapping address_space_operations descriptor.
- *
- * Driver must call it to properly allocate and initialize an instance of
- * struct address_space which will be used as the special page->mapping for
- * balloon device enlisted page instances.
- */
-struct address_space *balloon_mapping_alloc(struct balloon_dev_info *b_dev_info,
-				const struct address_space_operations *a_ops)
-{
-	struct address_space *mapping;
-
-	mapping = kmalloc(sizeof(*mapping), GFP_KERNEL);
-	if (!mapping)
-		return ERR_PTR(-ENOMEM);
-
-	/*
-	 * Give a clean 'zeroed' status to all elements of this special
-	 * balloon page->mapping struct address_space instance.
-	 */
-	address_space_init_once(mapping);
-
-	/*
-	 * Set mapping->flags appropriately, to allow balloon pages
-	 * ->mapping identification.
-	 */
-	mapping_set_balloon(mapping);
-	mapping_set_gfp_mask(mapping, balloon_mapping_gfp_mask());
-
-	/* balloon's page->mapping->a_ops callback descriptor */
-	mapping->a_ops = a_ops;
-
-	/*
-	 * Establish a pointer reference back to the balloon device descriptor
-	 * this particular page->mapping will be servicing.
-	 * This is used by compaction / migration procedures to identify and
-	 * access the balloon device pageset while isolating / migrating pages.
-	 *
-	 * As some balloon drivers can register multiple balloon devices
-	 * for a single guest, this also helps compaction / migration to
-	 * properly deal with multiple balloon pagesets, when required.
-	 */
-	mapping->private_data = b_dev_info;
-	b_dev_info->mapping = mapping;
-
-	return mapping;
-}
-EXPORT_SYMBOL_GPL(balloon_mapping_alloc);
 
 static inline void __isolate_balloon_page(struct page *page)
 {
-	struct balloon_dev_info *b_dev_info = page->mapping->private_data;
+	struct balloon_dev_info *b_dev_info = balloon_page_device(page);
 	unsigned long flags;
 	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
 	list_del(&page->lru);
@@ -209,7 +115,7 @@ static inline void __isolate_balloon_page(struct page *page)
 
 static inline void __putback_balloon_page(struct page *page)
 {
-	struct balloon_dev_info *b_dev_info = page->mapping->private_data;
+	struct balloon_dev_info *b_dev_info = balloon_page_device(page);
 	unsigned long flags;
 	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
 	list_add(&page->lru, &b_dev_info->pages);
@@ -217,12 +123,6 @@ static inline void __putback_balloon_page(struct page *page)
 	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
 }
 
-static inline int __migrate_balloon_page(struct address_space *mapping,
-		struct page *newpage, struct page *page, enum migrate_mode mode)
-{
-	return page->mapping->a_ops->migratepage(mapping, newpage, page, mode);
-}
-
 /* __isolate_lru_page() counterpart for a ballooned page */
 bool balloon_page_isolate(struct page *page)
 {
@@ -265,6 +165,57 @@ bool balloon_page_isolate(struct page *page)
 	return false;
 }
 
+int balloon_page_migrate(new_page_t get_new_page, free_page_t put_new_page,
+			 unsigned long private, struct page *page,
+			 int force, enum migrate_mode mode)
+{
+	struct balloon_dev_info *balloon = balloon_page_device(page);
+	struct page *newpage;
+	int *result = NULL;
+	int rc = -EAGAIN;
+
+	if (!balloon || !balloon->migrate_page)
+		return -EAGAIN;
+
+	newpage = get_new_page(page, private, &result);
+	if (!newpage)
+		return -ENOMEM;
+
+	if (!trylock_page(newpage))
+		BUG();
+
+	if (!trylock_page(page)) {
+		if (!force || mode != MIGRATE_SYNC)
+			goto out;
+		lock_page(page);
+	}
+
+	rc = balloon->migrate_page(balloon, newpage, page, mode);
+
+	unlock_page(page);
+out:
+	unlock_page(newpage);
+
+	if (rc != -EAGAIN) {
+		dec_zone_page_state(page, NR_ISOLATED_FILE);
+		list_del(&page->lru);
+		put_page(page);
+	}
+
+	if (rc != MIGRATEPAGE_SUCCESS && put_new_page)
+		put_new_page(newpage, private);
+	else
+		put_page(newpage);
+
+	if (result) {
+		if (rc)
+			*result = rc;
+		else
+			*result = page_to_nid(newpage);
+	}
+	return rc;
+}
+
 /* putback_lru_page() counterpart for a ballooned page */
 void balloon_page_putback(struct page *page)
 {
@@ -285,31 +236,4 @@ void balloon_page_putback(struct page *page)
 	unlock_page(page);
 }
 
-/* move_to_new_page() counterpart for a ballooned page */
-int balloon_page_migrate(struct page *newpage,
-			 struct page *page, enum migrate_mode mode)
-{
-	struct address_space *mapping;
-	int rc = -EAGAIN;
-
-	/*
-	 * Block others from accessing the 'newpage' when we get around to
-	 * establishing additional references. We should be the only one
-	 * holding a reference to the 'newpage' at this point.
-	 */
-	BUG_ON(!trylock_page(newpage));
-
-	if (WARN_ON(!PageBalloon(page))) {
-		dump_page(page, "not movable balloon page");
-		unlock_page(newpage);
-		return rc;
-	}
-
-	mapping = page->mapping;
-	if (mapping)
-		rc = __migrate_balloon_page(mapping, newpage, page, mode);
-
-	unlock_page(newpage);
-	return rc;
-}
 #endif /* CONFIG_BALLOON_COMPACTION */
diff --git a/mm/migrate.c b/mm/migrate.c
index c35e6f2..09d489c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -873,18 +873,6 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		}
 	}
 
-	if (unlikely(PageBalloon(page))) {
-		/*
-		 * A ballooned page does not need any special attention from
-		 * physical to virtual reverse mapping procedures.
-		 * Skip any attempt to unmap PTEs or to remap swap cache,
-		 * in order to avoid burning cycles at rmap level, and perform
-		 * the page migration right away (proteced by page lock).
-		 */
-		rc = balloon_page_migrate(newpage, page, mode);
-		goto out_unlock;
-	}
-
 	/*
 	 * Corner case handling:
 	 * 1. When a new swap-cache page is read into, it is added to the LRU
@@ -952,17 +940,6 @@ static int unmap_and_move(new_page_t get_new_page, free_page_t put_new_page,
 
 	rc = __unmap_and_move(page, newpage, force, mode);
 
-	if (unlikely(rc == MIGRATEPAGE_BALLOON_SUCCESS)) {
-		/*
-		 * A ballooned page has been migrated already.
-		 * Now, it's the time to wrap-up counters,
-		 * handle the page back to Buddy and return.
-		 */
-		dec_zone_page_state(page, NR_ISOLATED_ANON +
-				    page_is_file_cache(page));
-		balloon_page_free(page);
-		return MIGRATEPAGE_SUCCESS;
-	}
 out:
 	if (rc != -EAGAIN) {
 		/*
@@ -1137,6 +1114,10 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 				rc = unmap_and_move_huge_page(get_new_page,
 						put_new_page, private, page,
 						pass > 2, mode);
+			else if (PageBalloon(page))
+				rc = balloon_page_migrate(get_new_page,
+						put_new_page, private,
+						page, pass > 2, mode);
 			else
 				rc = unmap_and_move(get_new_page, put_new_page,
 						private, page, pass > 2, mode);


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/7] mm/balloon_compaction: ignore anonymous pages
  2014-08-20 15:04 ` Konstantin Khlebnikov
@ 2014-08-20 23:32   ` Rafael Aquini
  -1 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-20 23:32 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, Sasha Levin, Andrey Ryabinin, linux-kernel

On Wed, Aug 20, 2014 at 07:04:35PM +0400, Konstantin Khlebnikov wrote:
> Sasha Levin reported KASAN splash inside isolate_migratepages_range().
> Problem is in function __is_movable_balloon_page() which tests AS_BALLOON_MAP
> in page->mapping->flags. This function has no protection against anonymous
> pages. As result it tried to check address space flags in inside anon-vma.
> 
> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
> Reported-by: Sasha Levin <sasha.levin@oracle.com>
> Link: http://lkml.kernel.org/p/53E6CEAA.9020105@oracle.com
> Cc: stable <stable@vger.kernel.org> # v3.8
> ---
>  include/linux/balloon_compaction.h |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> index 089743a..53d482e 100644
> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -128,7 +128,7 @@ static inline bool page_flags_cleared(struct page *page)
>  static inline bool __is_movable_balloon_page(struct page *page)
>  {
>  	struct address_space *mapping = page->mapping;
> -	return mapping_balloon(mapping);
> +	return !PageAnon(page) && mapping_balloon(mapping);
>  }
>  
>  /*
> 
Acked-by: Rafael Aquini <aquini@redhat.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread
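
Aside, for readers following the thread: the !PageAnon() guard works because
anonymous pages overload page->mapping to carry an anon_vma pointer tagged
with the low PAGE_MAPPING_ANON bit, so treating that value as a struct
address_space and testing its ->flags reads into unrelated memory. A rough
sketch of the relevant definitions (simplified from include/linux/mm.h of
this era, not verbatim):

	#define PAGE_MAPPING_ANON	1

	static inline int PageAnon(struct page *page)
	{
		/* low bit set => page->mapping is really an anon_vma pointer */
		return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
	}

With the guard in place, mapping_balloon() is only reached for a
page->mapping that can legitimately be dereferenced as an address_space.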

* Re: [PATCH 1/7] mm/balloon_compaction: ignore anonymous pages
@ 2014-08-20 23:32   ` Rafael Aquini
  0 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-20 23:32 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, Sasha Levin, Andrey Ryabinin, linux-kernel

On Wed, Aug 20, 2014 at 07:04:35PM +0400, Konstantin Khlebnikov wrote:
> Sasha Levin reported KASAN splash inside isolate_migratepages_range().
> Problem is in function __is_movable_balloon_page() which tests AS_BALLOON_MAP
> in page->mapping->flags. This function has no protection against anonymous
> pages. As result it tried to check address space flags in inside anon-vma.
> 
> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
> Reported-by: Sasha Levin <sasha.levin@oracle.com>
> Link: http://lkml.kernel.org/p/53E6CEAA.9020105@oracle.com
> Cc: stable <stable@vger.kernel.org> # v3.8
> ---
>  include/linux/balloon_compaction.h |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> index 089743a..53d482e 100644
> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -128,7 +128,7 @@ static inline bool page_flags_cleared(struct page *page)
>  static inline bool __is_movable_balloon_page(struct page *page)
>  {
>  	struct address_space *mapping = page->mapping;
> -	return mapping_balloon(mapping);
> +	return !PageAnon(page) && mapping_balloon(mapping);
>  }
>  
>  /*
> 
Acked-by: Rafael Aquini <aquini@redhat.com>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 2/7] mm/balloon_compaction: keep ballooned pages away from normal migration path
  2014-08-20 15:04   ` Konstantin Khlebnikov
@ 2014-08-20 23:33     ` Rafael Aquini
  -1 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-20 23:33 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, Sasha Levin, Andrey Ryabinin, linux-kernel

On Wed, Aug 20, 2014 at 07:04:40PM +0400, Konstantin Khlebnikov wrote:
> Proper testing shows yet another problem in balloon migration: it works only
> once for each page. balloon_page_movable() checks page flags and page_count.
> In __unmap_and_move() the page is locked and its reference counter is
> elevated, so balloon_page_movable() _always_ fails there. As a result,
> __unmap_and_move() sends the page down the normal migration path.
> 
> The balloon ->migratepage() is special: it returns MIGRATEPAGE_BALLOON_SUCCESS
> instead of MIGRATEPAGE_SUCCESS. After that, move_to_new_page() sets the
> successfully migrated page's mapping pointer to NULL, so the page loses its
> connection to the balloon and the ability to be migrated again.
> 
> It's safe to use __is_movable_balloon_page() here: the page is isolated and
> pinned.
> 
> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
> Cc: stable <stable@vger.kernel.org> # v3.8
> ---
>  mm/migrate.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index f78ec9b..161d044 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -873,7 +873,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
>  		}
>  	}
>  
> -	if (unlikely(balloon_page_movable(page))) {
> +	if (unlikely(__is_movable_balloon_page(page))) {
>  		/*
>  		 * A ballooned page does not need any special attention from
>  		 * physical to virtual reverse mapping procedures.
> 
Acked-by: Rafael Aquini <aquini@redhat.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread
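
Aside: a minimal sketch of why the old check can never succeed by the time
__unmap_and_move() runs (editorial illustration, not code from the patch;
helper names as in the pre-cleanup balloon_compaction.h):

	static bool old_check_at_unmap_and_move(struct page *page)
	{
		/*
		 * Compaction has already run balloon_page_isolate(): the page
		 * holds an extra isolation reference, and __unmap_and_move()
		 * itself has taken the page lock, so PG_locked is set.
		 */
		return page_flags_cleared(page) &&	/* false: PG_locked is set */
		       !page_mapped(page) &&
		       page_count(page) == 1 &&		/* false: count is >= 2    */
		       __is_movable_balloon_page(page);
	}

__is_movable_balloon_page() only looks at page->mapping, which is still the
balloon mapping for an isolated page, so it keeps working at this point.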

* Re: [PATCH 2/7] mm/balloon_compaction: keep ballooned pages away from normal migration path
@ 2014-08-20 23:33     ` Rafael Aquini
  0 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-20 23:33 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, Sasha Levin, Andrey Ryabinin, linux-kernel

On Wed, Aug 20, 2014 at 07:04:40PM +0400, Konstantin Khlebnikov wrote:
> Proper testing shows yet another problem in balloon migration: it works only
> once for each page. balloon_page_movable() checks page flags and page_count.
> In __unmap_and_move() the page is locked and its reference counter is
> elevated, so balloon_page_movable() _always_ fails there. As a result,
> __unmap_and_move() sends the page down the normal migration path.
> 
> The balloon ->migratepage() is special: it returns MIGRATEPAGE_BALLOON_SUCCESS
> instead of MIGRATEPAGE_SUCCESS. After that, move_to_new_page() sets the
> successfully migrated page's mapping pointer to NULL, so the page loses its
> connection to the balloon and the ability to be migrated again.
> 
> It's safe to use __is_movable_balloon_page() here: the page is isolated and
> pinned.
> 
> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
> Cc: stable <stable@vger.kernel.org> # v3.8
> ---
>  mm/migrate.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index f78ec9b..161d044 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -873,7 +873,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
>  		}
>  	}
>  
> -	if (unlikely(balloon_page_movable(page))) {
> +	if (unlikely(__is_movable_balloon_page(page))) {
>  		/*
>  		 * A ballooned page does not need any special attention from
>  		 * physical to virtual reverse mapping procedures.
> 
Acked-by: Rafael Aquini <aquini@redhat.com>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 3/7] mm/balloon_compaction: isolate balloon pages without lru_lock
  2014-08-20 15:04   ` Konstantin Khlebnikov
@ 2014-08-20 23:35     ` Rafael Aquini
  -1 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-20 23:35 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, Sasha Levin, Andrey Ryabinin, linux-kernel

On Wed, Aug 20, 2014 at 07:04:46PM +0400, Konstantin Khlebnikov wrote:
> The LRU lock isn't required for balloon page isolation. Requiring it makes
> migration of some ballooned pages mostly impossible because
> isolate_migratepages_range() drops the LRU lock periodically.
>
just for historical/explanatory purposes: https://lkml.org/lkml/2013/12/6/183 

> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
> Cc: stable <stable@vger.kernel.org> # v3.8
> ---
>  mm/compaction.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 21bf292..0653f5f 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -597,7 +597,7 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
>  		 */
>  		if (!PageLRU(page)) {
>  			if (unlikely(balloon_page_movable(page))) {
> -				if (locked && balloon_page_isolate(page)) {
> +				if (balloon_page_isolate(page)) {
>  					/* Successfully isolated */
>  					goto isolate_success;
>  				}
> 
Acked-by: Rafael Aquini <aquini@redhat.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread
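
Aside: a condensed sketch of the loop shape this patch is reacting to
(illustrative only, not verbatim mm/compaction.c; pfn/pageblock bookkeeping
omitted):

	for (; low_pfn < end_pfn; low_pfn++) {
		/* periodically release the lock to keep irq-off time bounded */
		if (locked && !(low_pfn % SWAP_CLUSTER_MAX) &&
		    spin_is_contended(&zone->lru_lock)) {
			spin_unlock_irqrestore(&zone->lru_lock, flags);
			locked = false;
		}

		/* ... pfn_valid and pageblock checks omitted ... */

		if (!PageLRU(page)) {
			if (unlikely(balloon_page_movable(page))) {
				/*
				 * Before this patch the isolation attempt was
				 * also gated on 'locked', although
				 * balloon_page_isolate() relies on the page
				 * lock and refcount, not on zone->lru_lock.
				 */
				if (balloon_page_isolate(page))
					goto isolate_success;
			}
			continue;
		}
		/* ... LRU isolation path omitted ... */
	}

Since 'locked' is frequently false by the time a balloon page is encountered,
such pages were skipped most of the time.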

* Re: [PATCH 3/7] mm/balloon_compaction: isolate balloon pages without lru_lock
@ 2014-08-20 23:35     ` Rafael Aquini
  0 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-20 23:35 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, Sasha Levin, Andrey Ryabinin, linux-kernel

On Wed, Aug 20, 2014 at 07:04:46PM +0400, Konstantin Khlebnikov wrote:
> The LRU lock isn't required for balloon page isolation. Requiring it makes
> migration of some ballooned pages mostly impossible because
> isolate_migratepages_range() drops the LRU lock periodically.
>
just for historical/explanatory purposes: https://lkml.org/lkml/2013/12/6/183 

> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
> Cc: stable <stable@vger.kernel.org> # v3.8
> ---
>  mm/compaction.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 21bf292..0653f5f 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -597,7 +597,7 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
>  		 */
>  		if (!PageLRU(page)) {
>  			if (unlikely(balloon_page_movable(page))) {
> -				if (locked && balloon_page_isolate(page)) {
> +				if (balloon_page_isolate(page)) {
>  					/* Successfully isolated */
>  					goto isolate_success;
>  				}
> 
Acked-by: Rafael Aquini <aquini@redhat.com>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 5/7] mm: introduce common page state for ballooned memory
  2014-08-20 15:04   ` Konstantin Khlebnikov
@ 2014-08-20 23:46     ` Rafael Aquini
  -1 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-20 23:46 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, Sasha Levin, Andrey Ryabinin, linux-kernel

On Wed, Aug 20, 2014 at 07:04:58PM +0400, Konstantin Khlebnikov wrote:
> This patch adds the page state PageBalloon() and the functions
> __SetPageBalloon()/__ClearPageBalloon(). Like PageBuddy(), PageBalloon()
> looks like a page flag, but it is actually a special state of the
> page->_mapcount counter. There is no conflict because ballooned pages cannot
> be mapped and cannot be in the buddy allocator.
> 
> Ballooned pages are counted in the vmstat counter NR_BALLOON_PAGES and shown
> in /proc/vmstat, /proc/meminfo and the per-node meminfo. This patch also
> exports PageBalloon() to userspace via /proc/kpageflags as KPF_BALLOON.
> 
> All of this code, including mm/balloon_compaction.o, is under
> CONFIG_MEMORY_BALLOON, which should be selected by ballooning drivers that
> want to use this feature.
> 

Very nice overhaul, Konstantin!
Please consider the nits I have below:


> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
> ---
>  Documentation/filesystems/proc.txt     |    2 ++
>  drivers/base/node.c                    |   16 ++++++++++------
>  drivers/virtio/Kconfig                 |    1 +
>  fs/proc/meminfo.c                      |    6 ++++++
>  fs/proc/page.c                         |    3 +++
>  include/linux/mm.h                     |   10 ++++++++++
>  include/linux/mmzone.h                 |    3 +++
>  include/uapi/linux/kernel-page-flags.h |    1 +
>  mm/Kconfig                             |    5 +++++
>  mm/Makefile                            |    3 ++-
>  mm/balloon_compaction.c                |   14 ++++++++++++++
>  mm/vmstat.c                            |    8 +++++++-
>  tools/vm/page-types.c                  |    1 +
>  13 files changed, 65 insertions(+), 8 deletions(-)
> 
> diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
> index eb8a10e..154a345 100644
> --- a/Documentation/filesystems/proc.txt
> +++ b/Documentation/filesystems/proc.txt
> @@ -796,6 +796,7 @@ VmallocTotal:   112216 kB
>  VmallocUsed:       428 kB
>  VmallocChunk:   111088 kB
>  AnonHugePages:   49152 kB
> +BalloonPages:        0 kB
>  
>      MemTotal: Total usable ram (i.e. physical ram minus a few reserved
>                bits and the kernel binary code)
> @@ -838,6 +839,7 @@ MemAvailable: An estimate of how much memory is available for starting new
>     Writeback: Memory which is actively being written back to the disk
>     AnonPages: Non-file backed pages mapped into userspace page tables
>  AnonHugePages: Non-file backed huge pages mapped into userspace page tables
> +BalloonPages: Memory which was ballooned, not included into MemTotal
>        Mapped: files which have been mmaped, such as libraries
>          Slab: in-kernel data structures cache
>  SReclaimable: Part of Slab, that might be reclaimed, such as caches
> diff --git a/drivers/base/node.c b/drivers/base/node.c
> index c6d3ae0..59e565c 100644
> --- a/drivers/base/node.c
> +++ b/drivers/base/node.c
> @@ -120,6 +120,9 @@ static ssize_t node_read_meminfo(struct device *dev,
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  		       "Node %d AnonHugePages:  %8lu kB\n"
>  #endif
> +#ifdef CONFIG_MEMORY_BALLOON
> +		       "Node %d BalloonPages:   %8lu kB\n"
> +#endif
>  			,
>  		       nid, K(node_page_state(nid, NR_FILE_DIRTY)),
>  		       nid, K(node_page_state(nid, NR_WRITEBACK)),
> @@ -136,14 +139,15 @@ static ssize_t node_read_meminfo(struct device *dev,
>  		       nid, K(node_page_state(nid, NR_SLAB_RECLAIMABLE) +
>  				node_page_state(nid, NR_SLAB_UNRECLAIMABLE)),
>  		       nid, K(node_page_state(nid, NR_SLAB_RECLAIMABLE)),
> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  		       nid, K(node_page_state(nid, NR_SLAB_UNRECLAIMABLE))
> -			, nid,
> -			K(node_page_state(nid, NR_ANON_TRANSPARENT_HUGEPAGES) *
> -			HPAGE_PMD_NR));
> -#else
> -		       nid, K(node_page_state(nid, NR_SLAB_UNRECLAIMABLE)));
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +		       ,nid, K(node_page_state(nid,
> +				NR_ANON_TRANSPARENT_HUGEPAGES) * HPAGE_PMD_NR)
> +#endif
> +#ifdef CONFIG_MEMORY_BALLOON
> +		       ,nid, K(node_page_state(nid, NR_BALLOON_PAGES))
>  #endif
> +		       );
>  	n += hugetlb_report_node_meminfo(nid, buf + n);
>  	return n;
>  }
> diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> index c6683f2..00b2286 100644
> --- a/drivers/virtio/Kconfig
> +++ b/drivers/virtio/Kconfig
> @@ -25,6 +25,7 @@ config VIRTIO_PCI
>  config VIRTIO_BALLOON
>  	tristate "Virtio balloon driver"
>  	depends on VIRTIO
> +	select MEMORY_BALLOON
>  	---help---
>  	 This driver supports increasing and decreasing the amount
>  	 of memory within a KVM guest.
> diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> index aa1eee0..f897fbf 100644
> --- a/fs/proc/meminfo.c
> +++ b/fs/proc/meminfo.c
> @@ -138,6 +138,9 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  		"AnonHugePages:  %8lu kB\n"
>  #endif
> +#ifdef CONFIG_MEMORY_BALLOON
> +		"BalloonPages:   %8lu kB\n"
> +#endif
>  		,
>  		K(i.totalram),
>  		K(i.freeram),
> @@ -193,6 +196,9 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
>  		,K(global_page_state(NR_ANON_TRANSPARENT_HUGEPAGES) *
>  		   HPAGE_PMD_NR)
>  #endif
> +#ifdef CONFIG_MEMORY_BALLOON
> +		,K(global_page_state(NR_BALLOON_PAGES))
> +#endif
>  		);
>  
>  	hugetlb_report_meminfo(m);
> diff --git a/fs/proc/page.c b/fs/proc/page.c
> index e647c55..1e3187d 100644
> --- a/fs/proc/page.c
> +++ b/fs/proc/page.c
> @@ -133,6 +133,9 @@ u64 stable_page_flags(struct page *page)
>  	if (PageBuddy(page))
>  		u |= 1 << KPF_BUDDY;
>  
> +	if (PageBalloon(page))
> +		u |= 1 << KPF_BALLOON;
> +
>  	u |= kpf_copy_bit(k, KPF_LOCKED,	PG_locked);
>  
>  	u |= kpf_copy_bit(k, KPF_SLAB,		PG_slab);
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 8981cc8..d2dd497 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -553,6 +553,16 @@ static inline void __ClearPageBuddy(struct page *page)
>  	atomic_set(&page->_mapcount, -1);
>  }
>  
> +#define PAGE_BALLOON_MAPCOUNT_VALUE (-256)
> +
> +static inline int PageBalloon(struct page *page)
> +{
> +	return IS_ENABLED(CONFIG_MEMORY_BALLOON) &&
> +		atomic_read(&page->_mapcount) == PAGE_BALLOON_MAPCOUNT_VALUE;
> +}
> +void __SetPageBalloon(struct page *page);
> +void __ClearPageBalloon(struct page *page);
> +

1) I think you should consider the following here:

-void __SetPageBalloon(struct page *page);
-void __ClearPageBalloon(struct page *page);
+
+static inline void __SetPageBalloon(struct page *page)
+{
+#ifdef CONFIG_MEMORY_BALLOON
+        VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
+        atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
+#endif
+}
+
+static inline void __ClearPageBalloon(struct page *page)
+{
+#ifdef CONFIG_MEMORY_BALLOON
+        VM_BUG_ON_PAGE(!PageBalloon(page), page);
+        atomic_set(&page->_mapcount, -1);
+#endif
+}




>  void put_page(struct page *page);
>  void put_pages_list(struct list_head *pages);
>  
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 318df70..d88fd01 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -157,6 +157,9 @@ enum zone_stat_item {
>  	WORKINGSET_NODERECLAIM,
>  	NR_ANON_TRANSPARENT_HUGEPAGES,
>  	NR_FREE_CMA_PAGES,
> +#ifdef CONFIG_MEMORY_BALLOON
> +	NR_BALLOON_PAGES,
> +#endif
>  	NR_VM_ZONE_STAT_ITEMS };
>  
>  /*
> diff --git a/include/uapi/linux/kernel-page-flags.h b/include/uapi/linux/kernel-page-flags.h
> index 5116a0e..2f96d23 100644
> --- a/include/uapi/linux/kernel-page-flags.h
> +++ b/include/uapi/linux/kernel-page-flags.h
> @@ -31,6 +31,7 @@
>  
>  #define KPF_KSM			21
>  #define KPF_THP			22
> +#define KPF_BALLOON		23
>  
>  
>  #endif /* _UAPILINUX_KERNEL_PAGE_FLAGS_H */
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 886db21..72e0db0 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -228,6 +228,11 @@ config ARCH_ENABLE_SPLIT_PMD_PTLOCK
>  	boolean
>  
>  #
> +# support for memory ballooning
> +config MEMORY_BALLOON
> +	boolean
> +
> +#
>  # support for memory balloon compaction
>  config BALLOON_COMPACTION
>  	bool "Allow for balloon memory compaction/migration"
> diff --git a/mm/Makefile b/mm/Makefile
> index 632ae77..2d33d7f 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -16,7 +16,7 @@ obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
>  			   readahead.o swap.o truncate.o vmscan.o shmem.o \
>  			   util.o mmzone.o vmstat.o backing-dev.o \
>  			   mm_init.o mmu_context.o percpu.o slab_common.o \
> -			   compaction.o balloon_compaction.o vmacache.o \
> +			   compaction.o vmacache.o \
>  			   interval_tree.o list_lru.o workingset.o \
>  			   iov_iter.o $(mmu-y)
>  
> @@ -64,3 +64,4 @@ obj-$(CONFIG_ZBUD)	+= zbud.o
>  obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
>  obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
>  obj-$(CONFIG_CMA)	+= cma.o
> +obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
> diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
> index 6e45a50..533c567 100644
> --- a/mm/balloon_compaction.c
> +++ b/mm/balloon_compaction.c
> @@ -10,6 +10,20 @@
>  #include <linux/export.h>
>  #include <linux/balloon_compaction.h>
>  
> +void __SetPageBalloon(struct page *page)
> +{
> +	VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
> +	atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
> +	inc_zone_page_state(page, NR_BALLOON_PAGES);
> +}
> +
> +void __ClearPageBalloon(struct page *page)
> +{
> +	VM_BUG_ON_PAGE(!PageBalloon(page), page);
> +	atomic_set(&page->_mapcount, -1);
> +	dec_zone_page_state(page, NR_BALLOON_PAGES);
> +}
> +

and if you go with (1), here:
-void __SetPageBalloon(struct page *page)
-{
-       VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
-       atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
-       inc_zone_page_state(page, NR_BALLOON_PAGES);
-}
-
-void __ClearPageBalloon(struct page *page)
-{
-       VM_BUG_ON_PAGE(!PageBalloon(page), page);
-       atomic_set(&page->_mapcount, -1);
-       dec_zone_page_state(page, NR_BALLOON_PAGES);
-}


>  /*
>   * balloon_devinfo_alloc - allocates a balloon device information descriptor.
>   * @balloon_dev_descriptor: pointer to reference the balloon device which
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index e9ab104..6e704cc 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -735,7 +735,7 @@ static void walk_zones_in_node(struct seq_file *m, pg_data_t *pgdat,
>  					TEXT_FOR_HIGHMEM(xx) xx "_movable",
>  
>  const char * const vmstat_text[] = {
> -	/* Zoned VM counters */
> +	/* enum zone_stat_item countes */
>  	"nr_free_pages",
>  	"nr_alloc_batch",
>  	"nr_inactive_anon",
> @@ -778,10 +778,16 @@ const char * const vmstat_text[] = {
>  	"workingset_nodereclaim",
>  	"nr_anon_transparent_hugepages",
>  	"nr_free_cma",
> +#ifdef CONFIG_MEMORY_BALLOON
> +	"nr_balloon_pages",
> +#endif
> +
> +	/* enum writeback_stat_item counters */
>  	"nr_dirty_threshold",
>  	"nr_dirty_background_threshold",
>  
>  #ifdef CONFIG_VM_EVENT_COUNTERS
> +	/* enum vm_event_item counters */
>  	"pgpgin",
>  	"pgpgout",
>  	"pswpin",
> diff --git a/tools/vm/page-types.c b/tools/vm/page-types.c
> index c4d6d2e..264fbc2 100644
> --- a/tools/vm/page-types.c
> +++ b/tools/vm/page-types.c
> @@ -132,6 +132,7 @@ static const char * const page_flag_names[] = {
>  	[KPF_NOPAGE]		= "n:nopage",
>  	[KPF_KSM]		= "x:ksm",
>  	[KPF_THP]		= "t:thp",
> +	[KPF_BALLOON]		= "o:balloon",
>  
>  	[KPF_RESERVED]		= "r:reserved",
>  	[KPF_MLOCKED]		= "m:mlocked",
> 

^ permalink raw reply	[flat|nested] 42+ messages in thread
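
Aside: once KPF_BALLOON (bit 23) is exported, ballooned page frames become
visible from userspace. tools/vm/page-types grows an "o:balloon" flag in this
patch; the same information can also be read directly, e.g. with this small
stand-alone program (editorial example, needs root; assumes the bit number
added by this patch):

	#include <stdio.h>
	#include <stdint.h>
	#include <fcntl.h>
	#include <unistd.h>

	#define KPF_BALLOON 23	/* include/uapi/linux/kernel-page-flags.h */

	int main(void)
	{
		int fd = open("/proc/kpageflags", O_RDONLY);
		uint64_t flags;
		unsigned long pfn = 0, ballooned = 0;

		if (fd < 0) {
			perror("open /proc/kpageflags");
			return 1;
		}
		/* one 64-bit flags word per page frame, indexed by pfn */
		while (read(fd, &flags, sizeof(flags)) == sizeof(flags)) {
			if (flags & (1ULL << KPF_BALLOON))
				ballooned++;
			pfn++;
		}
		close(fd);
		printf("%lu of %lu page frames are ballooned\n", ballooned, pfn);
		return 0;
	}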

* Re: [PATCH 5/7] mm: introduce common page state for ballooned memory
@ 2014-08-20 23:46     ` Rafael Aquini
  0 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-20 23:46 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, Sasha Levin, Andrey Ryabinin, linux-kernel

On Wed, Aug 20, 2014 at 07:04:58PM +0400, Konstantin Khlebnikov wrote:
> This patch adds the page state PageBalloon() and the functions
> __SetPageBalloon()/__ClearPageBalloon(). Like PageBuddy(), PageBalloon()
> looks like a page flag, but it is actually a special state of the
> page->_mapcount counter. There is no conflict because ballooned pages cannot
> be mapped and cannot be in the buddy allocator.
> 
> Ballooned pages are counted in the vmstat counter NR_BALLOON_PAGES and shown
> in /proc/vmstat, /proc/meminfo and the per-node meminfo. This patch also
> exports PageBalloon() to userspace via /proc/kpageflags as KPF_BALLOON.
> 
> All of this code, including mm/balloon_compaction.o, is under
> CONFIG_MEMORY_BALLOON, which should be selected by ballooning drivers that
> want to use this feature.
> 

Very nice overhaul, Konstantin!
Please consider the nits I have below:


> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
> ---
>  Documentation/filesystems/proc.txt     |    2 ++
>  drivers/base/node.c                    |   16 ++++++++++------
>  drivers/virtio/Kconfig                 |    1 +
>  fs/proc/meminfo.c                      |    6 ++++++
>  fs/proc/page.c                         |    3 +++
>  include/linux/mm.h                     |   10 ++++++++++
>  include/linux/mmzone.h                 |    3 +++
>  include/uapi/linux/kernel-page-flags.h |    1 +
>  mm/Kconfig                             |    5 +++++
>  mm/Makefile                            |    3 ++-
>  mm/balloon_compaction.c                |   14 ++++++++++++++
>  mm/vmstat.c                            |    8 +++++++-
>  tools/vm/page-types.c                  |    1 +
>  13 files changed, 65 insertions(+), 8 deletions(-)
> 
> diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
> index eb8a10e..154a345 100644
> --- a/Documentation/filesystems/proc.txt
> +++ b/Documentation/filesystems/proc.txt
> @@ -796,6 +796,7 @@ VmallocTotal:   112216 kB
>  VmallocUsed:       428 kB
>  VmallocChunk:   111088 kB
>  AnonHugePages:   49152 kB
> +BalloonPages:        0 kB
>  
>      MemTotal: Total usable ram (i.e. physical ram minus a few reserved
>                bits and the kernel binary code)
> @@ -838,6 +839,7 @@ MemAvailable: An estimate of how much memory is available for starting new
>     Writeback: Memory which is actively being written back to the disk
>     AnonPages: Non-file backed pages mapped into userspace page tables
>  AnonHugePages: Non-file backed huge pages mapped into userspace page tables
> +BalloonPages: Memory which was ballooned, not included into MemTotal
>        Mapped: files which have been mmaped, such as libraries
>          Slab: in-kernel data structures cache
>  SReclaimable: Part of Slab, that might be reclaimed, such as caches
> diff --git a/drivers/base/node.c b/drivers/base/node.c
> index c6d3ae0..59e565c 100644
> --- a/drivers/base/node.c
> +++ b/drivers/base/node.c
> @@ -120,6 +120,9 @@ static ssize_t node_read_meminfo(struct device *dev,
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  		       "Node %d AnonHugePages:  %8lu kB\n"
>  #endif
> +#ifdef CONFIG_MEMORY_BALLOON
> +		       "Node %d BalloonPages:   %8lu kB\n"
> +#endif
>  			,
>  		       nid, K(node_page_state(nid, NR_FILE_DIRTY)),
>  		       nid, K(node_page_state(nid, NR_WRITEBACK)),
> @@ -136,14 +139,15 @@ static ssize_t node_read_meminfo(struct device *dev,
>  		       nid, K(node_page_state(nid, NR_SLAB_RECLAIMABLE) +
>  				node_page_state(nid, NR_SLAB_UNRECLAIMABLE)),
>  		       nid, K(node_page_state(nid, NR_SLAB_RECLAIMABLE)),
> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  		       nid, K(node_page_state(nid, NR_SLAB_UNRECLAIMABLE))
> -			, nid,
> -			K(node_page_state(nid, NR_ANON_TRANSPARENT_HUGEPAGES) *
> -			HPAGE_PMD_NR));
> -#else
> -		       nid, K(node_page_state(nid, NR_SLAB_UNRECLAIMABLE)));
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +		       ,nid, K(node_page_state(nid,
> +				NR_ANON_TRANSPARENT_HUGEPAGES) * HPAGE_PMD_NR)
> +#endif
> +#ifdef CONFIG_MEMORY_BALLOON
> +		       ,nid, K(node_page_state(nid, NR_BALLOON_PAGES))
>  #endif
> +		       );
>  	n += hugetlb_report_node_meminfo(nid, buf + n);
>  	return n;
>  }
> diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> index c6683f2..00b2286 100644
> --- a/drivers/virtio/Kconfig
> +++ b/drivers/virtio/Kconfig
> @@ -25,6 +25,7 @@ config VIRTIO_PCI
>  config VIRTIO_BALLOON
>  	tristate "Virtio balloon driver"
>  	depends on VIRTIO
> +	select MEMORY_BALLOON
>  	---help---
>  	 This driver supports increasing and decreasing the amount
>  	 of memory within a KVM guest.
> diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
> index aa1eee0..f897fbf 100644
> --- a/fs/proc/meminfo.c
> +++ b/fs/proc/meminfo.c
> @@ -138,6 +138,9 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  		"AnonHugePages:  %8lu kB\n"
>  #endif
> +#ifdef CONFIG_MEMORY_BALLOON
> +		"BalloonPages:   %8lu kB\n"
> +#endif
>  		,
>  		K(i.totalram),
>  		K(i.freeram),
> @@ -193,6 +196,9 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
>  		,K(global_page_state(NR_ANON_TRANSPARENT_HUGEPAGES) *
>  		   HPAGE_PMD_NR)
>  #endif
> +#ifdef CONFIG_MEMORY_BALLOON
> +		,K(global_page_state(NR_BALLOON_PAGES))
> +#endif
>  		);
>  
>  	hugetlb_report_meminfo(m);
> diff --git a/fs/proc/page.c b/fs/proc/page.c
> index e647c55..1e3187d 100644
> --- a/fs/proc/page.c
> +++ b/fs/proc/page.c
> @@ -133,6 +133,9 @@ u64 stable_page_flags(struct page *page)
>  	if (PageBuddy(page))
>  		u |= 1 << KPF_BUDDY;
>  
> +	if (PageBalloon(page))
> +		u |= 1 << KPF_BALLOON;
> +
>  	u |= kpf_copy_bit(k, KPF_LOCKED,	PG_locked);
>  
>  	u |= kpf_copy_bit(k, KPF_SLAB,		PG_slab);
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 8981cc8..d2dd497 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -553,6 +553,16 @@ static inline void __ClearPageBuddy(struct page *page)
>  	atomic_set(&page->_mapcount, -1);
>  }
>  
> +#define PAGE_BALLOON_MAPCOUNT_VALUE (-256)
> +
> +static inline int PageBalloon(struct page *page)
> +{
> +	return IS_ENABLED(CONFIG_MEMORY_BALLOON) &&
> +		atomic_read(&page->_mapcount) == PAGE_BALLOON_MAPCOUNT_VALUE;
> +}
> +void __SetPageBalloon(struct page *page);
> +void __ClearPageBalloon(struct page *page);
> +

1) I think you should consider the following here:

-void __SetPageBalloon(struct page *page);
-void __ClearPageBalloon(struct page *page);
+
+static inline void __SetPageBalloon(struct page *page)
+{
+#ifdef CONFIG_MEMORY_BALLOON
+        VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
+        atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
+#endif
+}
+
+static inline void __ClearPageBalloon(struct page *page)
+{
+#ifdef CONFIG_MEMORY_BALLOON
+        VM_BUG_ON_PAGE(!PageBalloon(page), page);
+        atomic_set(&page->_mapcount, -1);
+#endif
+}




>  void put_page(struct page *page);
>  void put_pages_list(struct list_head *pages);
>  
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 318df70..d88fd01 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -157,6 +157,9 @@ enum zone_stat_item {
>  	WORKINGSET_NODERECLAIM,
>  	NR_ANON_TRANSPARENT_HUGEPAGES,
>  	NR_FREE_CMA_PAGES,
> +#ifdef CONFIG_MEMORY_BALLOON
> +	NR_BALLOON_PAGES,
> +#endif
>  	NR_VM_ZONE_STAT_ITEMS };
>  
>  /*
> diff --git a/include/uapi/linux/kernel-page-flags.h b/include/uapi/linux/kernel-page-flags.h
> index 5116a0e..2f96d23 100644
> --- a/include/uapi/linux/kernel-page-flags.h
> +++ b/include/uapi/linux/kernel-page-flags.h
> @@ -31,6 +31,7 @@
>  
>  #define KPF_KSM			21
>  #define KPF_THP			22
> +#define KPF_BALLOON		23
>  
>  
>  #endif /* _UAPILINUX_KERNEL_PAGE_FLAGS_H */
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 886db21..72e0db0 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -228,6 +228,11 @@ config ARCH_ENABLE_SPLIT_PMD_PTLOCK
>  	boolean
>  
>  #
> +# support for memory ballooning
> +config MEMORY_BALLOON
> +	boolean
> +
> +#
>  # support for memory balloon compaction
>  config BALLOON_COMPACTION
>  	bool "Allow for balloon memory compaction/migration"
> diff --git a/mm/Makefile b/mm/Makefile
> index 632ae77..2d33d7f 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -16,7 +16,7 @@ obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
>  			   readahead.o swap.o truncate.o vmscan.o shmem.o \
>  			   util.o mmzone.o vmstat.o backing-dev.o \
>  			   mm_init.o mmu_context.o percpu.o slab_common.o \
> -			   compaction.o balloon_compaction.o vmacache.o \
> +			   compaction.o vmacache.o \
>  			   interval_tree.o list_lru.o workingset.o \
>  			   iov_iter.o $(mmu-y)
>  
> @@ -64,3 +64,4 @@ obj-$(CONFIG_ZBUD)	+= zbud.o
>  obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
>  obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
>  obj-$(CONFIG_CMA)	+= cma.o
> +obj-$(CONFIG_MEMORY_BALLOON) += balloon_compaction.o
> diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
> index 6e45a50..533c567 100644
> --- a/mm/balloon_compaction.c
> +++ b/mm/balloon_compaction.c
> @@ -10,6 +10,20 @@
>  #include <linux/export.h>
>  #include <linux/balloon_compaction.h>
>  
> +void __SetPageBalloon(struct page *page)
> +{
> +	VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
> +	atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
> +	inc_zone_page_state(page, NR_BALLOON_PAGES);
> +}
> +
> +void __ClearPageBalloon(struct page *page)
> +{
> +	VM_BUG_ON_PAGE(!PageBalloon(page), page);
> +	atomic_set(&page->_mapcount, -1);
> +	dec_zone_page_state(page, NR_BALLOON_PAGES);
> +}
> +

and if you go with (1), here:
-void __SetPageBalloon(struct page *page)
-{
-       VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
-       atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
-       inc_zone_page_state(page, NR_BALLOON_PAGES);
-}
-
-void __ClearPageBalloon(struct page *page)
-{
-       VM_BUG_ON_PAGE(!PageBalloon(page), page);
-       atomic_set(&page->_mapcount, -1);
-       dec_zone_page_state(page, NR_BALLOON_PAGES);
-}


>  /*
>   * balloon_devinfo_alloc - allocates a balloon device information descriptor.
>   * @balloon_dev_descriptor: pointer to reference the balloon device which
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index e9ab104..6e704cc 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -735,7 +735,7 @@ static void walk_zones_in_node(struct seq_file *m, pg_data_t *pgdat,
>  					TEXT_FOR_HIGHMEM(xx) xx "_movable",
>  
>  const char * const vmstat_text[] = {
> -	/* Zoned VM counters */
> +	/* enum zone_stat_item countes */
>  	"nr_free_pages",
>  	"nr_alloc_batch",
>  	"nr_inactive_anon",
> @@ -778,10 +778,16 @@ const char * const vmstat_text[] = {
>  	"workingset_nodereclaim",
>  	"nr_anon_transparent_hugepages",
>  	"nr_free_cma",
> +#ifdef CONFIG_MEMORY_BALLOON
> +	"nr_balloon_pages",
> +#endif
> +
> +	/* enum writeback_stat_item counters */
>  	"nr_dirty_threshold",
>  	"nr_dirty_background_threshold",
>  
>  #ifdef CONFIG_VM_EVENT_COUNTERS
> +	/* enum vm_event_item counters */
>  	"pgpgin",
>  	"pgpgout",
>  	"pswpin",
> diff --git a/tools/vm/page-types.c b/tools/vm/page-types.c
> index c4d6d2e..264fbc2 100644
> --- a/tools/vm/page-types.c
> +++ b/tools/vm/page-types.c
> @@ -132,6 +132,7 @@ static const char * const page_flag_names[] = {
>  	[KPF_NOPAGE]		= "n:nopage",
>  	[KPF_KSM]		= "x:ksm",
>  	[KPF_THP]		= "t:thp",
> +	[KPF_BALLOON]		= "o:balloon",
>  
>  	[KPF_RESERVED]		= "r:reserved",
>  	[KPF_MLOCKED]		= "m:mlocked",
> 


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 6/7] mm/balloon_compaction: use common page ballooning
  2014-08-20 15:05   ` Konstantin Khlebnikov
@ 2014-08-20 23:48     ` Rafael Aquini
  -1 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-20 23:48 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, Sasha Levin, Andrey Ryabinin, linux-kernel

On Wed, Aug 20, 2014 at 07:05:04PM +0400, Konstantin Khlebnikov wrote:
> This patch replaces the check for AS_BALLOON_MAP in page->mapping->flags
> with PageBalloon(), whose state is stored directly in the struct page.
> 
> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
> ---
>  include/linux/balloon_compaction.h |   85 ++----------------------------------
>  mm/Kconfig                         |    2 -
>  mm/balloon_compaction.c            |    7 +--
>  mm/compaction.c                    |    9 ++--
>  mm/migrate.c                       |    4 +-
>  mm/vmscan.c                        |    2 -
>  6 files changed, 15 insertions(+), 94 deletions(-)
> 
> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> index 53d482e..f5fda8b 100644
> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -108,77 +108,6 @@ static inline void balloon_mapping_free(struct address_space *balloon_mapping)
>  }
>  
>  /*
> - * page_flags_cleared - helper to perform balloon @page ->flags tests.
> - *
> - * As balloon pages are obtained from buddy and we do not play with page->flags
> - * at driver level (exception made when we get the page lock for compaction),
> - * we can safely identify a ballooned page by checking if the
> - * PAGE_FLAGS_CHECK_AT_PREP page->flags are all cleared.  This approach also
> - * helps us skip ballooned pages that are locked for compaction or release, thus
> - * mitigating their racy check at balloon_page_movable()
> - */
> -static inline bool page_flags_cleared(struct page *page)
> -{
> -	return !(page->flags & PAGE_FLAGS_CHECK_AT_PREP);
> -}
> -
> -/*
> - * __is_movable_balloon_page - helper to perform @page mapping->flags tests
> - */
> -static inline bool __is_movable_balloon_page(struct page *page)
> -{
> -	struct address_space *mapping = page->mapping;
> -	return !PageAnon(page) && mapping_balloon(mapping);
> -}
> -
> -/*
> - * balloon_page_movable - test page->mapping->flags to identify balloon pages
> - *			  that can be moved by compaction/migration.
> - *
> - * This function is used at core compaction's page isolation scheme, therefore
> - * most pages exposed to it are not enlisted as balloon pages and so, to avoid
> - * undesired side effects like racing against __free_pages(), we cannot afford
> - * holding the page locked while testing page->mapping->flags here.
> - *
> - * As we might return false positives in the case of a balloon page being just
> - * released under us, the page->mapping->flags need to be re-tested later,
> - * under the proper page lock, at the functions that will be coping with the
> - * balloon page case.
> - */
> -static inline bool balloon_page_movable(struct page *page)
> -{
> -	/*
> -	 * Before dereferencing and testing mapping->flags, let's make sure
> -	 * this is not a page that uses ->mapping in a different way
> -	 */
> -	if (page_flags_cleared(page) && !page_mapped(page) &&
> -	    page_count(page) == 1)
> -		return __is_movable_balloon_page(page);
> -
> -	return false;
> -}
> -
> -/*
> - * isolated_balloon_page - identify an isolated balloon page on private
> - *			   compaction/migration page lists.
> - *
> - * After a compaction thread isolates a balloon page for migration, it raises
> - * the page refcount to prevent concurrent compaction threads from re-isolating
> - * the same page. For that reason putback_movable_pages(), or other routines
> - * that need to identify isolated balloon pages on private pagelists, cannot
> - * rely on balloon_page_movable() to accomplish the task.
> - */
> -static inline bool isolated_balloon_page(struct page *page)
> -{
> -	/* Already isolated balloon pages, by default, have a raised refcount */
> -	if (page_flags_cleared(page) && !page_mapped(page) &&
> -	    page_count(page) >= 2)
> -		return __is_movable_balloon_page(page);
> -
> -	return false;
> -}
> -
> -/*
>   * balloon_page_insert - insert a page into the balloon's page list and make
>   *		         the page->mapping assignment accordingly.
>   * @page    : page to be assigned as a 'balloon page'
> @@ -192,6 +121,7 @@ static inline void balloon_page_insert(struct page *page,
>  				       struct address_space *mapping,
>  				       struct list_head *head)
>  {
> +	__SetPageBalloon(page);
>  	page->mapping = mapping;
>  	list_add(&page->lru, head);
>  }
> @@ -206,6 +136,7 @@ static inline void balloon_page_insert(struct page *page,
>   */
>  static inline void balloon_page_delete(struct page *page)
>  {
> +	__ClearPageBalloon(page);
>  	page->mapping = NULL;
>  	list_del(&page->lru);
>  }
> @@ -250,24 +181,16 @@ static inline void balloon_page_insert(struct page *page,
>  				       struct address_space *mapping,
>  				       struct list_head *head)
>  {
> +	__SetPageBalloon(page);
>  	list_add(&page->lru, head);
>  }
>  
>  static inline void balloon_page_delete(struct page *page)
>  {
> +	__ClearPageBalloon(page);
>  	list_del(&page->lru);
>  }
>  
> -static inline bool balloon_page_movable(struct page *page)
> -{
> -	return false;
> -}
> -
> -static inline bool isolated_balloon_page(struct page *page)
> -{
> -	return false;
> -}
> -
>  static inline bool balloon_page_isolate(struct page *page)
>  {
>  	return false;
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 72e0db0..e09cf0a 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -237,7 +237,7 @@ config MEMORY_BALLOON
>  config BALLOON_COMPACTION
>  	bool "Allow for balloon memory compaction/migration"
>  	def_bool y
> -	depends on COMPACTION && VIRTIO_BALLOON
> +	depends on COMPACTION && MEMORY_BALLOON
>  	help
>  	  Memory fragmentation introduced by ballooning might reduce
>  	  significantly the number of 2MB contiguous memory blocks that can be
> diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
> index 533c567..22c8e03 100644
> --- a/mm/balloon_compaction.c
> +++ b/mm/balloon_compaction.c
> @@ -253,8 +253,7 @@ bool balloon_page_isolate(struct page *page)
>  			 * Prevent concurrent compaction threads from isolating
>  			 * an already isolated balloon page by refcount check.
>  			 */
> -			if (__is_movable_balloon_page(page) &&
> -			    page_count(page) == 2) {
> +			if (PageBalloon(page) && page_count(page) == 2) {
>  				__isolate_balloon_page(page);
>  				unlock_page(page);
>  				return true;
> @@ -275,7 +274,7 @@ void balloon_page_putback(struct page *page)
>  	 */
>  	lock_page(page);
>  
> -	if (__is_movable_balloon_page(page)) {
> +	if (PageBalloon(page)) {
>  		__putback_balloon_page(page);
>  		/* drop the extra ref count taken for page isolation */
>  		put_page(page);
> @@ -300,7 +299,7 @@ int balloon_page_migrate(struct page *newpage,
>  	 */
>  	BUG_ON(!trylock_page(newpage));
>  
> -	if (WARN_ON(!__is_movable_balloon_page(page))) {
> +	if (WARN_ON(!PageBalloon(page))) {
>  		dump_page(page, "not movable balloon page");
>  		unlock_page(newpage);
>  		return rc;
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 0653f5f..e9aeed2 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -596,11 +596,10 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
>  		 * Skip any other type of page
>  		 */
>  		if (!PageLRU(page)) {
> -			if (unlikely(balloon_page_movable(page))) {
> -				if (balloon_page_isolate(page)) {
> -					/* Successfully isolated */
> -					goto isolate_success;
> -				}
> +			if (unlikely(PageBalloon(page)) &&
> +					balloon_page_isolate(page)) {
> +				/* Successfully isolated */
> +				goto isolate_success;
>  			}
>  			continue;
>  		}
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 161d044..c35e6f2 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -92,7 +92,7 @@ void putback_movable_pages(struct list_head *l)
>  		list_del(&page->lru);
>  		dec_zone_page_state(page, NR_ISOLATED_ANON +
>  				page_is_file_cache(page));
> -		if (unlikely(isolated_balloon_page(page)))
> +		if (unlikely(PageBalloon(page)))
>  			balloon_page_putback(page);
>  		else
>  			putback_lru_page(page);
> @@ -873,7 +873,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
>  		}
>  	}
>  
> -	if (unlikely(__is_movable_balloon_page(page))) {
> +	if (unlikely(PageBalloon(page))) {
>  		/*
>  		 * A ballooned page does not need any special attention from
>  		 * physical to virtual reverse mapping procedures.
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 2836b53..f90f93e 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1160,7 +1160,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
>  
>  	list_for_each_entry_safe(page, next, page_list, lru) {
>  		if (page_is_file_cache(page) && !PageDirty(page) &&
> -		    !isolated_balloon_page(page)) {
> +		    !PageBalloon(page)) {
>  			ClearPageActive(page);
>  			list_move(&page->lru, &clean_pages);
>  		}
> 
Acked-by: Rafael Aquini <aquini@redhat.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 6/7] mm/balloon_compaction: use common page ballooning
@ 2014-08-20 23:48     ` Rafael Aquini
  0 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-20 23:48 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, Sasha Levin, Andrey Ryabinin, linux-kernel

On Wed, Aug 20, 2014 at 07:05:04PM +0400, Konstantin Khlebnikov wrote:
> This patch replaces the check for AS_BALLOON_MAP in page->mapping->flags
> with PageBalloon(), whose state is stored directly in the struct page.
> 
> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
> ---
>  include/linux/balloon_compaction.h |   85 ++----------------------------------
>  mm/Kconfig                         |    2 -
>  mm/balloon_compaction.c            |    7 +--
>  mm/compaction.c                    |    9 ++--
>  mm/migrate.c                       |    4 +-
>  mm/vmscan.c                        |    2 -
>  6 files changed, 15 insertions(+), 94 deletions(-)
> 
> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> index 53d482e..f5fda8b 100644
> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -108,77 +108,6 @@ static inline void balloon_mapping_free(struct address_space *balloon_mapping)
>  }
>  
>  /*
> - * page_flags_cleared - helper to perform balloon @page ->flags tests.
> - *
> - * As balloon pages are obtained from buddy and we do not play with page->flags
> - * at driver level (exception made when we get the page lock for compaction),
> - * we can safely identify a ballooned page by checking if the
> - * PAGE_FLAGS_CHECK_AT_PREP page->flags are all cleared.  This approach also
> - * helps us skip ballooned pages that are locked for compaction or release, thus
> - * mitigating their racy check at balloon_page_movable()
> - */
> -static inline bool page_flags_cleared(struct page *page)
> -{
> -	return !(page->flags & PAGE_FLAGS_CHECK_AT_PREP);
> -}
> -
> -/*
> - * __is_movable_balloon_page - helper to perform @page mapping->flags tests
> - */
> -static inline bool __is_movable_balloon_page(struct page *page)
> -{
> -	struct address_space *mapping = page->mapping;
> -	return !PageAnon(page) && mapping_balloon(mapping);
> -}
> -
> -/*
> - * balloon_page_movable - test page->mapping->flags to identify balloon pages
> - *			  that can be moved by compaction/migration.
> - *
> - * This function is used at core compaction's page isolation scheme, therefore
> - * most pages exposed to it are not enlisted as balloon pages and so, to avoid
> - * undesired side effects like racing against __free_pages(), we cannot afford
> - * holding the page locked while testing page->mapping->flags here.
> - *
> - * As we might return false positives in the case of a balloon page being just
> - * released under us, the page->mapping->flags need to be re-tested later,
> - * under the proper page lock, at the functions that will be coping with the
> - * balloon page case.
> - */
> -static inline bool balloon_page_movable(struct page *page)
> -{
> -	/*
> -	 * Before dereferencing and testing mapping->flags, let's make sure
> -	 * this is not a page that uses ->mapping in a different way
> -	 */
> -	if (page_flags_cleared(page) && !page_mapped(page) &&
> -	    page_count(page) == 1)
> -		return __is_movable_balloon_page(page);
> -
> -	return false;
> -}
> -
> -/*
> - * isolated_balloon_page - identify an isolated balloon page on private
> - *			   compaction/migration page lists.
> - *
> - * After a compaction thread isolates a balloon page for migration, it raises
> - * the page refcount to prevent concurrent compaction threads from re-isolating
> - * the same page. For that reason putback_movable_pages(), or other routines
> - * that need to identify isolated balloon pages on private pagelists, cannot
> - * rely on balloon_page_movable() to accomplish the task.
> - */
> -static inline bool isolated_balloon_page(struct page *page)
> -{
> -	/* Already isolated balloon pages, by default, have a raised refcount */
> -	if (page_flags_cleared(page) && !page_mapped(page) &&
> -	    page_count(page) >= 2)
> -		return __is_movable_balloon_page(page);
> -
> -	return false;
> -}
> -
> -/*
>   * balloon_page_insert - insert a page into the balloon's page list and make
>   *		         the page->mapping assignment accordingly.
>   * @page    : page to be assigned as a 'balloon page'
> @@ -192,6 +121,7 @@ static inline void balloon_page_insert(struct page *page,
>  				       struct address_space *mapping,
>  				       struct list_head *head)
>  {
> +	__SetPageBalloon(page);
>  	page->mapping = mapping;
>  	list_add(&page->lru, head);
>  }
> @@ -206,6 +136,7 @@ static inline void balloon_page_insert(struct page *page,
>   */
>  static inline void balloon_page_delete(struct page *page)
>  {
> +	__ClearPageBalloon(page);
>  	page->mapping = NULL;
>  	list_del(&page->lru);
>  }
> @@ -250,24 +181,16 @@ static inline void balloon_page_insert(struct page *page,
>  				       struct address_space *mapping,
>  				       struct list_head *head)
>  {
> +	__SetPageBalloon(page);
>  	list_add(&page->lru, head);
>  }
>  
>  static inline void balloon_page_delete(struct page *page)
>  {
> +	__ClearPageBalloon(page);
>  	list_del(&page->lru);
>  }
>  
> -static inline bool balloon_page_movable(struct page *page)
> -{
> -	return false;
> -}
> -
> -static inline bool isolated_balloon_page(struct page *page)
> -{
> -	return false;
> -}
> -
>  static inline bool balloon_page_isolate(struct page *page)
>  {
>  	return false;
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 72e0db0..e09cf0a 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -237,7 +237,7 @@ config MEMORY_BALLOON
>  config BALLOON_COMPACTION
>  	bool "Allow for balloon memory compaction/migration"
>  	def_bool y
> -	depends on COMPACTION && VIRTIO_BALLOON
> +	depends on COMPACTION && MEMORY_BALLOON
>  	help
>  	  Memory fragmentation introduced by ballooning might reduce
>  	  significantly the number of 2MB contiguous memory blocks that can be
> diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
> index 533c567..22c8e03 100644
> --- a/mm/balloon_compaction.c
> +++ b/mm/balloon_compaction.c
> @@ -253,8 +253,7 @@ bool balloon_page_isolate(struct page *page)
>  			 * Prevent concurrent compaction threads from isolating
>  			 * an already isolated balloon page by refcount check.
>  			 */
> -			if (__is_movable_balloon_page(page) &&
> -			    page_count(page) == 2) {
> +			if (PageBalloon(page) && page_count(page) == 2) {
>  				__isolate_balloon_page(page);
>  				unlock_page(page);
>  				return true;
> @@ -275,7 +274,7 @@ void balloon_page_putback(struct page *page)
>  	 */
>  	lock_page(page);
>  
> -	if (__is_movable_balloon_page(page)) {
> +	if (PageBalloon(page)) {
>  		__putback_balloon_page(page);
>  		/* drop the extra ref count taken for page isolation */
>  		put_page(page);
> @@ -300,7 +299,7 @@ int balloon_page_migrate(struct page *newpage,
>  	 */
>  	BUG_ON(!trylock_page(newpage));
>  
> -	if (WARN_ON(!__is_movable_balloon_page(page))) {
> +	if (WARN_ON(!PageBalloon(page))) {
>  		dump_page(page, "not movable balloon page");
>  		unlock_page(newpage);
>  		return rc;
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 0653f5f..e9aeed2 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -596,11 +596,10 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
>  		 * Skip any other type of page
>  		 */
>  		if (!PageLRU(page)) {
> -			if (unlikely(balloon_page_movable(page))) {
> -				if (balloon_page_isolate(page)) {
> -					/* Successfully isolated */
> -					goto isolate_success;
> -				}
> +			if (unlikely(PageBalloon(page)) &&
> +					balloon_page_isolate(page)) {
> +				/* Successfully isolated */
> +				goto isolate_success;
>  			}
>  			continue;
>  		}
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 161d044..c35e6f2 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -92,7 +92,7 @@ void putback_movable_pages(struct list_head *l)
>  		list_del(&page->lru);
>  		dec_zone_page_state(page, NR_ISOLATED_ANON +
>  				page_is_file_cache(page));
> -		if (unlikely(isolated_balloon_page(page)))
> +		if (unlikely(PageBalloon(page)))
>  			balloon_page_putback(page);
>  		else
>  			putback_lru_page(page);
> @@ -873,7 +873,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
>  		}
>  	}
>  
> -	if (unlikely(__is_movable_balloon_page(page))) {
> +	if (unlikely(PageBalloon(page))) {
>  		/*
>  		 * A ballooned page does not need any special attention from
>  		 * physical to virtual reverse mapping procedures.
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 2836b53..f90f93e 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1160,7 +1160,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
>  
>  	list_for_each_entry_safe(page, next, page_list, lru) {
>  		if (page_is_file_cache(page) && !PageDirty(page) &&
> -		    !isolated_balloon_page(page)) {
> +		    !PageBalloon(page)) {
>  			ClearPageActive(page);
>  			list_move(&page->lru, &clean_pages);
>  		}
> 
Acked-by: Rafael Aquini <aquini@redhat.com>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 7/7] mm/balloon_compaction: general cleanup
       [not found]   ` <5ad4664811559496e563ead974f10e8ee6b4ed47.1408576903.git.aquini@redhat.com>
@ 2014-08-20 23:58       ` Rafael Aquini
  0 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-20 23:58 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, linux-kernel, Andrew Morton, Sasha Levin, Andrey Ryabinin

On Wed, Aug 20, 2014 at 07:05:09PM +0400, Konstantin Khlebnikov wrote:
> * move the special branch for balloon migration into migrate_pages()
> * remove the special balloon mapping and its flag AS_BALLOON_MAP
> * embed struct balloon_dev_info into struct virtio_balloon
> * clean up balloon_page_dequeue(), kill balloon_page_free()
> 
> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
> ---
>  drivers/virtio/virtio_balloon.c    |   77 ++++---------
>  include/linux/balloon_compaction.h |  107 ++++++------------
>  include/linux/migrate.h            |   11 --
>  include/linux/pagemap.h            |   18 ---
>  mm/balloon_compaction.c            |  214 ++++++++++++------------------------
>  mm/migrate.c                       |   27 +----
>  6 files changed, 130 insertions(+), 324 deletions(-)
> 
Very nice clean-up, just like all the other patches in this set.
Please consider folding the following changes into this patch of yours.

Rafael
---

diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index dc7073b..569cf96 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -75,41 +75,6 @@ extern struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info);
 #ifdef CONFIG_BALLOON_COMPACTION
 extern bool balloon_page_isolate(struct page *page);
 extern void balloon_page_putback(struct page *page);
-
-/*
- * balloon_page_insert - insert a page into the balloon's page list and make
- *		         the page->mapping assignment accordingly.
- * @page    : page to be assigned as a 'balloon page'
- * @mapping : allocated special 'balloon_mapping'
- * @head    : balloon's device page list head
- *
- * Caller must ensure the page is locked and the spin_lock protecting balloon
- * pages list is held before inserting a page into the balloon device.
- */
-static inline void
-balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
-{
-	__SetPageBalloon(page);
-	set_page_private(page, (unsigned long)balloon);
-	list_add(&page->lru, &balloon->pages);
-}
-
-/*
- * balloon_page_delete - delete a page from balloon's page list and clear
- *			 the page->mapping assignement accordingly.
- * @page    : page to be released from balloon's page list
- *
- * Caller must ensure the page is locked and the spin_lock protecting balloon
- * pages list is held before deleting a page from the balloon device.
- */
-static inline void balloon_page_delete(struct page *page, bool isolated)
-{
-	__ClearPageBalloon(page);
-	set_page_private(page, 0);
-	if (!isolated)
-		list_del(&page->lru);
-}
-
 int balloon_page_migrate(new_page_t get_new_page, free_page_t put_new_page,
 		unsigned long private, struct page *page,
 		int force, enum migrate_mode mode);
@@ -130,31 +95,6 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
 
 #else /* !CONFIG_BALLOON_COMPACTION */
 
-static inline void *balloon_mapping_alloc(void *balloon_device,
-				const struct address_space_operations *a_ops)
-{
-	return ERR_PTR(-EOPNOTSUPP);
-}
-
-static inline void balloon_mapping_free(struct address_space *balloon_mapping)
-{
-	return;
-}
-
-static inline void
-balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
-{
-	__SetPageBalloon(page);
-	list_add(&page->lru, head);
-}
-
-static inline void balloon_page_delete(struct page *page, bool isolated)
-{
-	__ClearPageBalloon(page);
-	if (!isolated)
-		list_del(&page->lru);
-}
-
 static inline int balloon_page_migrate(new_page_t get_new_page,
 		free_page_t put_new_page, unsigned long private,
 		struct page *page, int force, enum migrate_mode mode)
@@ -176,6 +116,46 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
 {
 	return GFP_HIGHUSER;
 }
-
 #endif /* CONFIG_BALLOON_COMPACTION */
+
+/*
+ * balloon_page_insert - insert a page into the balloon's page list and make
+ *		         the page->mapping assignment accordingly.
+ * @page    : page to be assigned as a 'balloon page'
+ * @mapping : allocated special 'balloon_mapping'
+ * @head    : balloon's device page list head
+ *
+ * Caller must ensure the page is locked and the spin_lock protecting balloon
+ * pages list is held before inserting a page into the balloon device.
+ */
+static inline void
+balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
+{
+#ifdef CONFIG_MEMORY_BALLOON
+	__SetPageBalloon(page);
+	set_page_private(page, (unsigned long)balloon);
+	list_add(&page->lru, &balloon->pages);
+	inc_zone_page_state(page, NR_BALLOON_PAGES);
+#endif
+}
+
+/*
+ * balloon_page_delete - delete a page from balloon's page list and clear
+ *			 the page->mapping assignement accordingly.
+ * @page    : page to be released from balloon's page list
+ *
+ * Caller must ensure the page is locked and the spin_lock protecting balloon
+ * pages list is held before deleting a page from the balloon device.
+ */
+static inline void balloon_page_delete(struct page *page, bool isolated)
+{
+#ifdef CONFIG_MEMORY_BALLOON
+	__ClearPageBalloon(page);
+	set_page_private(page, 0);
+	if (!isolated)
+		list_del(&page->lru);
+	dec_zone_page_state(page, NR_BALLOON_PAGES);
+#endif
+}
+
 #endif /* _LINUX_BALLOON_COMPACTION_H */
-- 
1.9.3

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH 7/7] mm/balloon_compaction: general cleanup
  2014-08-20 23:58       ` Rafael Aquini
@ 2014-08-21  7:30         ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2014-08-21  7:30 UTC (permalink / raw)
  To: Rafael Aquini
  Cc: Konstantin Khlebnikov, linux-mm, Linux Kernel Mailing List,
	Andrew Morton, Sasha Levin, Andrey Ryabinin

On Thu, Aug 21, 2014 at 3:58 AM, Rafael Aquini <aquini@redhat.com> wrote:
> On Wed, Aug 20, 2014 at 07:05:09PM +0400, Konstantin Khlebnikov wrote:
>> * move special branch for balloon migraion into migrate_pages
>> * remove special mapping for balloon and its flag AS_BALLOON_MAP
>> * embed struct balloon_dev_info into struct virtio_balloon
>> * cleanup balloon_page_dequeue, kill balloon_page_free
>>
>> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
>> ---
>>  drivers/virtio/virtio_balloon.c    |   77 ++++---------
>>  include/linux/balloon_compaction.h |  107 ++++++------------
>>  include/linux/migrate.h            |   11 --
>>  include/linux/pagemap.h            |   18 ---
>>  mm/balloon_compaction.c            |  214 ++++++++++++------------------------
>>  mm/migrate.c                       |   27 +----
>>  6 files changed, 130 insertions(+), 324 deletions(-)
>>
> Very nice clean-up, just as all other patches in this set.
> Please, just consider amending the following changes to this patch of yours

Well, it's probably better to hide __Set/Clear inside mm/balloon_compaction.c:
it's very unlikely that they will ever be used by anybody else, and mm.h
already contains too many obscure static inlines and other barely used stuff.

And it's worth renaming balloon_compaction.c/h to just balloon.c/h or
memory_balloon, because they provide a generic balloon without compaction
too. Any objections?
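
A rough sketch of what that could look like (illustrative only, assuming the
same _mapcount marker trick that PageBuddy uses; this is not a patch I'm
posting here):

	/* include/linux/mm.h: only the test stays visible to generic mm code */
	#define PAGE_BALLOON_MAPCOUNT_VALUE	(-256)

	static inline int PageBalloon(struct page *page)
	{
		return atomic_read(&page->_mapcount) == PAGE_BALLOON_MAPCOUNT_VALUE;
	}

	/* mm/balloon_compaction.c: the only place that marks/unmarks pages */
	static void __SetPageBalloon(struct page *page)
	{
		VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
		atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
	}

	static void __ClearPageBalloon(struct page *page)
	{
		VM_BUG_ON_PAGE(!PageBalloon(page), page);
		atomic_set(&page->_mapcount, -1);
	}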

>
> Rafael
> ---
>
> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> index dc7073b..569cf96 100644
> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -75,41 +75,6 @@ extern struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info);
>  #ifdef CONFIG_BALLOON_COMPACTION
>  extern bool balloon_page_isolate(struct page *page);
>  extern void balloon_page_putback(struct page *page);
> -
> -/*
> - * balloon_page_insert - insert a page into the balloon's page list and make
> - *                      the page->mapping assignment accordingly.
> - * @page    : page to be assigned as a 'balloon page'
> - * @mapping : allocated special 'balloon_mapping'
> - * @head    : balloon's device page list head
> - *
> - * Caller must ensure the page is locked and the spin_lock protecting balloon
> - * pages list is held before inserting a page into the balloon device.
> - */
> -static inline void
> -balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
> -{
> -       __SetPageBalloon(page);
> -       set_page_private(page, (unsigned long)balloon);
> -       list_add(&page->lru, &balloon->pages);
> -}
> -
> -/*
> - * balloon_page_delete - delete a page from balloon's page list and clear
> - *                      the page->mapping assignement accordingly.
> - * @page    : page to be released from balloon's page list
> - *
> - * Caller must ensure the page is locked and the spin_lock protecting balloon
> - * pages list is held before deleting a page from the balloon device.
> - */
> -static inline void balloon_page_delete(struct page *page, bool isolated)
> -{
> -       __ClearPageBalloon(page);
> -       set_page_private(page, 0);
> -       if (!isolated)
> -               list_del(&page->lru);
> -}
> -
>  int balloon_page_migrate(new_page_t get_new_page, free_page_t put_new_page,
>                 unsigned long private, struct page *page,
>                 int force, enum migrate_mode mode);
> @@ -130,31 +95,6 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
>
>  #else /* !CONFIG_BALLOON_COMPACTION */
>
> -static inline void *balloon_mapping_alloc(void *balloon_device,
> -                               const struct address_space_operations *a_ops)
> -{
> -       return ERR_PTR(-EOPNOTSUPP);
> -}
> -
> -static inline void balloon_mapping_free(struct address_space *balloon_mapping)
> -{
> -       return;
> -}
> -
> -static inline void
> -balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
> -{
> -       __SetPageBalloon(page);
> -       list_add(&page->lru, head);
> -}
> -
> -static inline void balloon_page_delete(struct page *page, bool isolated)
> -{
> -       __ClearPageBalloon(page);
> -       if (!isolated)
> -               list_del(&page->lru);
> -}
> -
>  static inline int balloon_page_migrate(new_page_t get_new_page,
>                 free_page_t put_new_page, unsigned long private,
>                 struct page *page, int force, enum migrate_mode mode)
> @@ -176,6 +116,46 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
>  {
>         return GFP_HIGHUSER;
>  }
> -
>  #endif /* CONFIG_BALLOON_COMPACTION */
> +
> +/*
> + * balloon_page_insert - insert a page into the balloon's page list and make
> + *                      the page->mapping assignment accordingly.
> + * @page    : page to be assigned as a 'balloon page'
> + * @mapping : allocated special 'balloon_mapping'
> + * @head    : balloon's device page list head
> + *
> + * Caller must ensure the page is locked and the spin_lock protecting balloon
> + * pages list is held before inserting a page into the balloon device.
> + */
> +static inline void
> +balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
> +{
> +#ifdef CONFIG_MEMORY_BALLOON
> +       __SetPageBalloon(page);
> +       set_page_private(page, (unsigned long)balloon);
> +       list_add(&page->lru, &balloon->pages);
> +       inc_zone_page_state(page, NR_BALLOON_PAGES);
> +#endif
> +}
> +
> +/*
> + * balloon_page_delete - delete a page from balloon's page list and clear
> + *                      the page->mapping assignement accordingly.
> + * @page    : page to be released from balloon's page list
> + *
> + * Caller must ensure the page is locked and the spin_lock protecting balloon
> + * pages list is held before deleting a page from the balloon device.
> + */
> +static inline void balloon_page_delete(struct page *page, bool isolated)
> +{
> +#ifdef CONFIG_MEMORY_BALLOON
> +       __ClearPageBalloon(page);
> +       set_page_private(page, 0);
> +       if (!isolated)
> +               list_del(&page->lru);
> +       dec_zone_page_state(page, NR_BALLOON_PAGES);
> +#endif
> +}
> +
>  #endif /* _LINUX_BALLOON_COMPACTION_H */
> --
> 1.9.3
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 7/7] mm/balloon_compaction: general cleanup
  2014-08-21  7:30         ` Konstantin Khlebnikov
@ 2014-08-21 12:31           ` Rafael Aquini
  -1 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-21 12:31 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: Konstantin Khlebnikov, linux-mm, Linux Kernel Mailing List,
	Andrew Morton, Sasha Levin, Andrey Ryabinin

On Thu, Aug 21, 2014 at 11:30:59AM +0400, Konstantin Khlebnikov wrote:
> On Thu, Aug 21, 2014 at 3:58 AM, Rafael Aquini <aquini@redhat.com> wrote:
> > On Wed, Aug 20, 2014 at 07:05:09PM +0400, Konstantin Khlebnikov wrote:
> >> * move special branch for balloon migraion into migrate_pages
> >> * remove special mapping for balloon and its flag AS_BALLOON_MAP
> >> * embed struct balloon_dev_info into struct virtio_balloon
> >> * cleanup balloon_page_dequeue, kill balloon_page_free
> >>
> >> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
> >> ---
> >>  drivers/virtio/virtio_balloon.c    |   77 ++++---------
> >>  include/linux/balloon_compaction.h |  107 ++++++------------
> >>  include/linux/migrate.h            |   11 --
> >>  include/linux/pagemap.h            |   18 ---
> >>  mm/balloon_compaction.c            |  214 ++++++++++++------------------------
> >>  mm/migrate.c                       |   27 +----
> >>  6 files changed, 130 insertions(+), 324 deletions(-)
> >>
> > Very nice clean-up, just as all other patches in this set.
> > Please, just consider amending the following changes to this patch of yours
> 
> Well. Probably it's better to hide __Set/Clear inside mm/balloon_compaction.c
> it very unlikely that they might  be used by somebody else.
> mm.h contains too many obscure static inlines and other barely used stuff.
>
Although I agree that very few code sites will actually resort to them,
I see no argument for hiding __Set/Clear if PageBalloon() itself stays there.
Take a look at how many code sites call __{Set,Clear}PageBuddy -- as with
their Balloon counterparts, just one code site each.

For the sake of consistency and ease of maintainability, either leave
PageBalloon() and friends in mm.h or move them all out, hiding them in
balloon_compaction.h.

> And it's worth to rename balloon_compaction.c/h into just balloon.c or
> memory_balloon because
> it provides generic balloon wtihout compaction too. Any objections?
>
No objections here; I was actually thinking about renaming them too.

--
Rafael
 
> >
> > Rafael
> > ---
> >
> > diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> > index dc7073b..569cf96 100644
> > --- a/include/linux/balloon_compaction.h
> > +++ b/include/linux/balloon_compaction.h
> > @@ -75,41 +75,6 @@ extern struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info);
> >  #ifdef CONFIG_BALLOON_COMPACTION
> >  extern bool balloon_page_isolate(struct page *page);
> >  extern void balloon_page_putback(struct page *page);
> > -
> > -/*
> > - * balloon_page_insert - insert a page into the balloon's page list and make
> > - *                      the page->mapping assignment accordingly.
> > - * @page    : page to be assigned as a 'balloon page'
> > - * @mapping : allocated special 'balloon_mapping'
> > - * @head    : balloon's device page list head
> > - *
> > - * Caller must ensure the page is locked and the spin_lock protecting balloon
> > - * pages list is held before inserting a page into the balloon device.
> > - */
> > -static inline void
> > -balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
> > -{
> > -       __SetPageBalloon(page);
> > -       set_page_private(page, (unsigned long)balloon);
> > -       list_add(&page->lru, &balloon->pages);
> > -}
> > -
> > -/*
> > - * balloon_page_delete - delete a page from balloon's page list and clear
> > - *                      the page->mapping assignement accordingly.
> > - * @page    : page to be released from balloon's page list
> > - *
> > - * Caller must ensure the page is locked and the spin_lock protecting balloon
> > - * pages list is held before deleting a page from the balloon device.
> > - */
> > -static inline void balloon_page_delete(struct page *page, bool isolated)
> > -{
> > -       __ClearPageBalloon(page);
> > -       set_page_private(page, 0);
> > -       if (!isolated)
> > -               list_del(&page->lru);
> > -}
> > -
> >  int balloon_page_migrate(new_page_t get_new_page, free_page_t put_new_page,
> >                 unsigned long private, struct page *page,
> >                 int force, enum migrate_mode mode);
> > @@ -130,31 +95,6 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
> >
> >  #else /* !CONFIG_BALLOON_COMPACTION */
> >
> > -static inline void *balloon_mapping_alloc(void *balloon_device,
> > -                               const struct address_space_operations *a_ops)
> > -{
> > -       return ERR_PTR(-EOPNOTSUPP);
> > -}
> > -
> > -static inline void balloon_mapping_free(struct address_space *balloon_mapping)
> > -{
> > -       return;
> > -}
> > -
> > -static inline void
> > -balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
> > -{
> > -       __SetPageBalloon(page);
> > -       list_add(&page->lru, head);
> > -}
> > -
> > -static inline void balloon_page_delete(struct page *page, bool isolated)
> > -{
> > -       __ClearPageBalloon(page);
> > -       if (!isolated)
> > -               list_del(&page->lru);
> > -}
> > -
> >  static inline int balloon_page_migrate(new_page_t get_new_page,
> >                 free_page_t put_new_page, unsigned long private,
> >                 struct page *page, int force, enum migrate_mode mode)
> > @@ -176,6 +116,46 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
> >  {
> >         return GFP_HIGHUSER;
> >  }
> > -
> >  #endif /* CONFIG_BALLOON_COMPACTION */
> > +
> > +/*
> > + * balloon_page_insert - insert a page into the balloon's page list and make
> > + *                      the page->mapping assignment accordingly.
> > + * @page    : page to be assigned as a 'balloon page'
> > + * @mapping : allocated special 'balloon_mapping'
> > + * @head    : balloon's device page list head
> > + *
> > + * Caller must ensure the page is locked and the spin_lock protecting balloon
> > + * pages list is held before inserting a page into the balloon device.
> > + */
> > +static inline void
> > +balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
> > +{
> > +#ifdef CONFIG_MEMORY_BALLOON
> > +       __SetPageBalloon(page);
> > +       set_page_private(page, (unsigned long)balloon);
> > +       list_add(&page->lru, &balloon->pages);
> > +       inc_zone_page_state(page, NR_BALLOON_PAGES);
> > +#endif
> > +}
> > +
> > +/*
> > + * balloon_page_delete - delete a page from balloon's page list and clear
> > + *                      the page->mapping assignement accordingly.
> > + * @page    : page to be released from balloon's page list
> > + *
> > + * Caller must ensure the page is locked and the spin_lock protecting balloon
> > + * pages list is held before deleting a page from the balloon device.
> > + */
> > +static inline void balloon_page_delete(struct page *page, bool isolated)
> > +{
> > +#ifdef CONFIG_MEMORY_BALLOON
> > +       __ClearPageBalloon(page);
> > +       set_page_private(page, 0);
> > +       if (!isolated)
> > +               list_del(&page->lru);
> > +       dec_zone_page_state(page, NR_BALLOON_PAGES);
> > +#endif
> > +}
> > +
> >  #endif /* _LINUX_BALLOON_COMPACTION_H */
> > --
> > 1.9.3
> >
> > --
> > To unsubscribe, send a message with 'unsubscribe linux-mm' in
> > the body to majordomo@kvack.org.  For more info on Linux MM,
> > see: http://www.linux-mm.org/ .
> > Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 7/7] mm/balloon_compaction: general cleanup
  2014-08-20 15:05   ` Konstantin Khlebnikov
@ 2014-08-29 21:05     ` Andrew Morton
  -1 siblings, 0 replies; 42+ messages in thread
From: Andrew Morton @ 2014-08-29 21:05 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Rafael Aquini, Sasha Levin, Andrey Ryabinin, linux-kernel

On Wed, 20 Aug 2014 19:05:09 +0400 Konstantin Khlebnikov <k.khlebnikov@samsung.com> wrote:

> * move special branch for balloon migraion into migrate_pages
> * remove special mapping for balloon and its flag AS_BALLOON_MAP
> * embed struct balloon_dev_info into struct virtio_balloon
> * cleanup balloon_page_dequeue, kill balloon_page_free
> 

grump.

diff -puN include/linux/balloon_compaction.h~mm-balloon_compaction-general-cleanup-fix include/linux/balloon_compaction.h
--- a/include/linux/balloon_compaction.h~mm-balloon_compaction-general-cleanup-fix
+++ a/include/linux/balloon_compaction.h
@@ -145,7 +145,7 @@ static inline void
 balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
 {
 	__SetPageBalloon(page);
-	list_add(&page->lru, head);
+	list_add(&page->lru, &balloon->pages);
 }
 
 static inline void balloon_page_delete(struct page *page, bool isolated)


This obviously wasn't tested with CONFIG_BALLOON_COMPACTION=n.  Please
complete the testing of this patchset and let us know the result?

Thanks.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 7/7] mm/balloon_compaction: general cleanup
  2014-08-29 21:05     ` Andrew Morton
@ 2014-08-29 21:09       ` Rafael Aquini
  -1 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-29 21:09 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Konstantin Khlebnikov, linux-mm, Sasha Levin, Andrey Ryabinin,
	linux-kernel

On Fri, Aug 29, 2014 at 02:05:21PM -0700, Andrew Morton wrote:
> On Wed, 20 Aug 2014 19:05:09 +0400 Konstantin Khlebnikov <k.khlebnikov@samsung.com> wrote:
> 
> > * move special branch for balloon migraion into migrate_pages
> > * remove special mapping for balloon and its flag AS_BALLOON_MAP
> > * embed struct balloon_dev_info into struct virtio_balloon
> > * cleanup balloon_page_dequeue, kill balloon_page_free
> > 
> 
> grump.
> 
> diff -puN include/linux/balloon_compaction.h~mm-balloon_compaction-general-cleanup-fix include/linux/balloon_compaction.h
> --- a/include/linux/balloon_compaction.h~mm-balloon_compaction-general-cleanup-fix
> +++ a/include/linux/balloon_compaction.h
> @@ -145,7 +145,7 @@ static inline void
>  balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
>  {
>  	__SetPageBalloon(page);
> -	list_add(&page->lru, head);
> +	list_add(&page->lru, &balloon->pages);
>  }
>  
>  static inline void balloon_page_delete(struct page *page, bool isolated)
> 
> 
> This obviously wasn't tested with CONFIG_BALLOON_COMPACTION=n.  Please
> complete the testing of this patchset and let us know the result?
>

That also reminds me why I suggested moving those static inlines into mm.h
instead of hiding them in mm/balloon_compaction.c.

Cheers,
-- Rafael

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 7/7] mm/balloon_compaction: general cleanup
  2014-08-29 21:09       ` Rafael Aquini
@ 2014-08-29 21:26         ` Rafael Aquini
  -1 siblings, 0 replies; 42+ messages in thread
From: Rafael Aquini @ 2014-08-29 21:26 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Konstantin Khlebnikov, linux-mm, Sasha Levin, Andrey Ryabinin,
	linux-kernel

On Fri, Aug 29, 2014 at 05:09:55PM -0400, Rafael Aquini wrote:
> On Fri, Aug 29, 2014 at 02:05:21PM -0700, Andrew Morton wrote:
> > On Wed, 20 Aug 2014 19:05:09 +0400 Konstantin Khlebnikov <k.khlebnikov@samsung.com> wrote:
> > 
> > > * move special branch for balloon migraion into migrate_pages
> > > * remove special mapping for balloon and its flag AS_BALLOON_MAP
> > > * embed struct balloon_dev_info into struct virtio_balloon
> > > * cleanup balloon_page_dequeue, kill balloon_page_free
> > > 
> > 
> > grump.
> > 
> > diff -puN include/linux/balloon_compaction.h~mm-balloon_compaction-general-cleanup-fix include/linux/balloon_compaction.h
> > --- a/include/linux/balloon_compaction.h~mm-balloon_compaction-general-cleanup-fix
> > +++ a/include/linux/balloon_compaction.h
> > @@ -145,7 +145,7 @@ static inline void
> >  balloon_page_insert(struct balloon_dev_info *balloon, struct page *page)
> >  {
> >  	__SetPageBalloon(page);
> > -	list_add(&page->lru, head);
> > +	list_add(&page->lru, &balloon->pages);
> >  }
> >  
> >  static inline void balloon_page_delete(struct page *page, bool isolated)
> > 
> > 
> > This obviously wasn't tested with CONFIG_BALLOON_COMPACTION=n.  Please
> > complete the testing of this patchset and let us know the result?
> >
>

Btw, I'll do a mea culpa here. Although this build failure was addressed
by my extra-cleanup suggestion, I never made that clear in my
original message:

http://permalink.gmane.org/gmane.linux.kernel.mm/121788

Sorry,
-- Rafael
 
> That also reminds me why I suggested moving those as static inlines into mm.h, 
> instead of getting them hidden in mm/balloon_compaction.c
> 
> Cheers,
> -- Rafael

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 7/7] mm/balloon_compaction: general cleanup
  2014-08-20 15:05   ` Konstantin Khlebnikov
@ 2014-08-29 21:38     ` Andrew Morton
  -1 siblings, 0 replies; 42+ messages in thread
From: Andrew Morton @ 2014-08-29 21:38 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Rafael Aquini, Sasha Levin, Andrey Ryabinin, linux-kernel

On Wed, 20 Aug 2014 19:05:09 +0400 Konstantin Khlebnikov <k.khlebnikov@samsung.com> wrote:

> * move special branch for balloon migraion into migrate_pages
> * remove special mapping for balloon and its flag AS_BALLOON_MAP
> * embed struct balloon_dev_info into struct virtio_balloon
> * cleanup balloon_page_dequeue, kill balloon_page_free

Another testing failure.  Guys, allnoconfig is really fast.

> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -54,58 +54,27 @@
>   * balloon driver as a page book-keeper for its registered balloon devices.
>   */
>  struct balloon_dev_info {
> -	void *balloon_device;		/* balloon device descriptor */
> -	struct address_space *mapping;	/* balloon special page->mapping */
>  	unsigned long isolated_pages;	/* # of isolated pages for migration */
>  	spinlock_t pages_lock;		/* Protection to pages list */
>  	struct list_head pages;		/* Pages enqueued & handled to Host */
> +	int (* migrate_page)(struct balloon_dev_info *, struct page *newpage,
> +			struct page *page, enum migrate_mode mode);
>  };

If CONFIG_MIGRATION=n, the migrate_page member name gets macro-expanded into
"NULL" (migrate.h #defines migrate_page to NULL in that case) and chaos
ensues.  I think I'll just nuke that #define:

--- a/include/linux/migrate.h~include-linux-migrateh-remove-migratepage-define
+++ a/include/linux/migrate.h
@@ -82,9 +82,6 @@ static inline int migrate_huge_page_move
 	return -ENOSYS;
 }
 
-/* Possible settings for the migrate_page() method in address_operations */
-#define migrate_page NULL
-
 #endif /* CONFIG_MIGRATION */
 
 #ifdef CONFIG_NUMA_BALANCING
--- a/mm/swap_state.c~include-linux-migrateh-remove-migratepage-define
+++ a/mm/swap_state.c
@@ -28,7 +28,9 @@
 static const struct address_space_operations swap_aops = {
 	.writepage	= swap_writepage,
 	.set_page_dirty	= swap_set_page_dirty,
+#ifdef CONFIG_MIGRATION
 	.migratepage	= migrate_page,
+#endif
 };
 
 static struct backing_dev_info swap_backing_dev_info = {
--- a/mm/shmem.c~include-linux-migrateh-remove-migratepage-define
+++ a/mm/shmem.c
@@ -3075,7 +3075,9 @@ static const struct address_space_operat
 	.write_begin	= shmem_write_begin,
 	.write_end	= shmem_write_end,
 #endif
+#ifdef CONFIG_MIGRATION
 	.migratepage	= migrate_page,
+#endif
 	.error_remove_page = generic_error_remove_page,
 };
 

Our mixture of "migratepage" and "migrate_page" is maddening.
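
To spell the clash out (an illustrative snippet, not taken from the patches
themselves): with the old define in place, the preprocessor rewrites the
identifier everywhere it later appears, including the new callback member:

	/* include/linux/migrate.h, !CONFIG_MIGRATION, before the change above */
	#define migrate_page NULL

	/* include/linux/balloon_compaction.h: after preprocessing, the member
	 * below becomes "int (*NULL)(...)", which no longer compiles */
	struct balloon_dev_info {
		unsigned long isolated_pages;
		spinlock_t pages_lock;
		struct list_head pages;
		int (*migrate_page)(struct balloon_dev_info *, struct page *newpage,
				struct page *page, enum migrate_mode mode);
	};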


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 7/7] mm/balloon_compaction: general cleanup
  2014-08-29 21:38     ` Andrew Morton
@ 2014-08-30  6:44       ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2014-08-30  6:44 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Konstantin Khlebnikov, linux-mm, Rafael Aquini, Sasha Levin,
	Andrey Ryabinin, Linux Kernel Mailing List

On Sat, Aug 30, 2014 at 1:38 AM, Andrew Morton
<akpm@linux-foundation.org> wrote:
> On Wed, 20 Aug 2014 19:05:09 +0400 Konstantin Khlebnikov <k.khlebnikov@samsung.com> wrote:
>
>> * move special branch for balloon migraion into migrate_pages
>> * remove special mapping for balloon and its flag AS_BALLOON_MAP
>> * embed struct balloon_dev_info into struct virtio_balloon
>> * cleanup balloon_page_dequeue, kill balloon_page_free
>
> Another testing failure.  Guys, allnoconfig is really fast.

Heh, mea culpa too. I missed the messages about my patches being picked up,
except the one with the stress-test; they are probably stuck somewhere in my
corporate email. So I thought you had picked up only one patch.

Rafael had several suggestions, so I postponed them until a v2 patchset,
which was never sent.

>
>> --- a/include/linux/balloon_compaction.h
>> +++ b/include/linux/balloon_compaction.h
>> @@ -54,58 +54,27 @@
>>   * balloon driver as a page book-keeper for its registered balloon devices.
>>   */
>>  struct balloon_dev_info {
>> -     void *balloon_device;           /* balloon device descriptor */
>> -     struct address_space *mapping;  /* balloon special page->mapping */
>>       unsigned long isolated_pages;   /* # of isolated pages for migration */
>>       spinlock_t pages_lock;          /* Protection to pages list */
>>       struct list_head pages;         /* Pages enqueued & handled to Host */
>> +     int (* migrate_page)(struct balloon_dev_info *, struct page *newpage,
>> +                     struct page *page, enum migrate_mode mode);
>>  };
>
> If CONFIG_MIGRATION=n this gets turned into "NULL" and chaos ensues.  I
> think I'll just nuke that #define:

Hmm, I think it's better to rename migrate_page() to something less generic,
for example generic_migrate_page() or generic_migratepage().
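
Something along these lines, say (just a sketch with a hypothetical name, not
a patch):

	/* mm/migrate.c: the generic fallback keeps a real, unambiguous symbol */
	int generic_migratepage(struct address_space *mapping,
				struct page *newpage, struct page *page,
				enum migrate_mode mode);

	/* mm/swap_state.c: the aops initializer then needs no NULL define */
	static const struct address_space_operations swap_aops = {
		.writepage	= swap_writepage,
		.set_page_dirty	= swap_set_page_dirty,
	#ifdef CONFIG_MIGRATION
		.migratepage	= generic_migratepage,
	#endif
	};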

>
> --- a/include/linux/migrate.h~include-linux-migrateh-remove-migratepage-define
> +++ a/include/linux/migrate.h
> @@ -82,9 +82,6 @@ static inline int migrate_huge_page_move
>         return -ENOSYS;
>  }
>
> -/* Possible settings for the migrate_page() method in address_operations */
> -#define migrate_page NULL
> -
>  #endif /* CONFIG_MIGRATION */
>
>  #ifdef CONFIG_NUMA_BALANCING
> --- a/mm/swap_state.c~include-linux-migrateh-remove-migratepage-define
> +++ a/mm/swap_state.c
> @@ -28,7 +28,9 @@
>  static const struct address_space_operations swap_aops = {
>         .writepage      = swap_writepage,
>         .set_page_dirty = swap_set_page_dirty,
> +#ifdef CONFIG_MIGRATION
>         .migratepage    = migrate_page,
> +#endif
>  };
>
>  static struct backing_dev_info swap_backing_dev_info = {
> --- a/mm/shmem.c~include-linux-migrateh-remove-migratepage-define
> +++ a/mm/shmem.c
> @@ -3075,7 +3075,9 @@ static const struct address_space_operat
>         .write_begin    = shmem_write_begin,
>         .write_end      = shmem_write_end,
>  #endif
> +#ifdef CONFIG_MIGRATION
>         .migratepage    = migrate_page,
> +#endif
>         .error_remove_page = generic_error_remove_page,
>  };
>
>
> Our mixture of "migratepage" and "migrate_page" is maddening.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH] mm: rename "migrate_page" to "generic_migrate_page"
  2014-08-30  6:44       ` Konstantin Khlebnikov
@ 2014-08-30 16:36         ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2014-08-30 16:36 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Konstantin Khlebnikov, Rafael Aquini, Linux Kernel Mailing List,
	linux-mm, Andrey Ryabinin, Sasha Levin

If CONFIG_MIGRATION=n, "migrate_page" turns into NULL. This avoids an ifdef-endif
mess inside the definitions of address space operations, but the macro affects
everything with this name, and "migrate_page" is too short and too generic.

This patch renames it to generic_migrate_page. Fortunately it is used in only a few
places. Also included is a minor documentation update: the a_ops method is called
"migratepage", without the underscore, apparently to keep it out of the macro's way.

Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
---
 Documentation/filesystems/vfs.txt |   13 ++++++++-----
 fs/btrfs/disk-io.c                |    2 +-
 fs/nfs/write.c                    |    2 +-
 include/linux/migrate.h           |    6 +++---
 mm/migrate.c                      |   10 +++++-----
 mm/shmem.c                        |    2 +-
 mm/swap_state.c                   |    2 +-
 7 files changed, 20 insertions(+), 17 deletions(-)

diff --git a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt
index 02a766c..a633fa7 100644
--- a/Documentation/filesystems/vfs.txt
+++ b/Documentation/filesystems/vfs.txt
@@ -746,12 +746,15 @@ struct address_space_operations {
 	Filesystems that want to use execute-in-place (XIP) need to implement
 	it.  An example implementation can be found in fs/ext2/xip.c.
 
-  migrate_page:  This is used to compact the physical memory usage.
-        If the VM wants to relocate a page (maybe off a memory card
-        that is signalling imminent failure) it will pass a new page
-	and an old page to this function.  migrate_page should
+  migratepage:  This is used to compact the physical memory usage.
+	If the VM wants to relocate a page (maybe off a memory card
+	that is signalling imminent failure) it will pass a new page
+	and an old page to this function.  migratepage should
 	transfer any private data across and update any references
-        that it has to the page.
+	that it has to the page.
+
+	Filesystems may use generic_migrate_page here if pages have no
+	private data, or buffer_migrate_page for pages with buffers.
 
   launder_page: Called before freeing a page - it writes back the dirty page. To
   	prevent redirtying the page, it is kept locked during the whole
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index a1d36e6..af1a274 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -973,7 +973,7 @@ static int btree_migratepage(struct address_space *mapping,
 	if (page_has_private(page) &&
 	    !try_to_release_page(page, GFP_KERNEL))
 		return -EAGAIN;
-	return migrate_page(mapping, newpage, page, mode);
+	return generic_migrate_page(mapping, newpage, page, mode);
 }
 #endif
 
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index 175d5d0..7101a6d 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -1898,7 +1898,7 @@ int nfs_migrate_page(struct address_space *mapping, struct page *newpage,
 	if (!nfs_fscache_release_page(page, GFP_KERNEL))
 		return -EBUSY;
 
-	return migrate_page(mapping, newpage, page, mode);
+	return generic_migrate_page(mapping, newpage, page, mode);
 }
 #endif
 
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index a2901c4..0a4604a 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -38,7 +38,7 @@ enum migrate_reason {
 #ifdef CONFIG_MIGRATION
 
 extern void putback_movable_pages(struct list_head *l);
-extern int migrate_page(struct address_space *,
+extern int generic_migrate_page(struct address_space *,
 			struct page *, struct page *, enum migrate_mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
 		unsigned long private, enum migrate_mode mode, int reason);
@@ -82,8 +82,8 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
 	return -ENOSYS;
 }
 
-/* Possible settings for the migrate_page() method in address_operations */
-#define migrate_page NULL
+/* Possible settings for the migratepage() method in address_operations */
+#define generic_migrate_page	NULL
 
 #endif /* CONFIG_MIGRATION */
 
diff --git a/mm/migrate.c b/mm/migrate.c
index f78ec9b..905b1aa 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -588,7 +588,7 @@ void migrate_page_copy(struct page *newpage, struct page *page)
  *
  * Pages are locked upon entry and exit.
  */
-int migrate_page(struct address_space *mapping,
+int generic_migrate_page(struct address_space *mapping,
 		struct page *newpage, struct page *page,
 		enum migrate_mode mode)
 {
@@ -604,7 +604,7 @@ int migrate_page(struct address_space *mapping,
 	migrate_page_copy(newpage, page);
 	return MIGRATEPAGE_SUCCESS;
 }
-EXPORT_SYMBOL(migrate_page);
+EXPORT_SYMBOL(generic_migrate_page);
 
 #ifdef CONFIG_BLOCK
 /*
@@ -619,7 +619,7 @@ int buffer_migrate_page(struct address_space *mapping,
 	int rc;
 
 	if (!page_has_buffers(page))
-		return migrate_page(mapping, newpage, page, mode);
+		return generic_migrate_page(mapping, newpage, page, mode);
 
 	head = page_buffers(page);
 
@@ -728,7 +728,7 @@ static int fallback_migrate_page(struct address_space *mapping,
 	    !try_to_release_page(page, GFP_KERNEL))
 		return -EAGAIN;
 
-	return migrate_page(mapping, newpage, page, mode);
+	return generic_migrate_page(mapping, newpage, page, mode);
 }
 
 /*
@@ -764,7 +764,7 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 
 	mapping = page_mapping(page);
 	if (!mapping)
-		rc = migrate_page(mapping, newpage, page, mode);
+		rc = generic_migrate_page(mapping, newpage, page, mode);
 	else if (mapping->a_ops->migratepage)
 		/*
 		 * Most pages have a mapping and most filesystems provide a
diff --git a/mm/shmem.c b/mm/shmem.c
index 0e5fb22..2e0058e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -3075,7 +3075,7 @@ static const struct address_space_operations shmem_aops = {
 	.write_begin	= shmem_write_begin,
 	.write_end	= shmem_write_end,
 #endif
-	.migratepage	= migrate_page,
+	.migratepage	= generic_migrate_page,
 	.error_remove_page = generic_error_remove_page,
 };
 
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 3e0ec83..0ac57c4 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -28,7 +28,7 @@
 static const struct address_space_operations swap_aops = {
 	.writepage	= swap_writepage,
 	.set_page_dirty	= swap_set_page_dirty,
-	.migratepage	= migrate_page,
+	.migratepage	= generic_migrate_page,
 };
 
 static struct backing_dev_info swap_backing_dev_info = {


^ permalink raw reply related	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2014-08-30 16:36 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-08-20 15:04 [PATCH 1/7] mm/balloon_compaction: ignore anonymous pages Konstantin Khlebnikov
2014-08-20 15:04 ` Konstantin Khlebnikov
2014-08-20 15:04 ` [PATCH 2/7] mm/balloon_compaction: keep ballooned pages away from normal migration path Konstantin Khlebnikov
2014-08-20 15:04   ` Konstantin Khlebnikov
2014-08-20 23:33   ` Rafael Aquini
2014-08-20 23:33     ` Rafael Aquini
2014-08-20 15:04 ` [PATCH 3/7] mm/balloon_compaction: isolate balloon pages without lru_lock Konstantin Khlebnikov
2014-08-20 15:04   ` Konstantin Khlebnikov
2014-08-20 23:35   ` Rafael Aquini
2014-08-20 23:35     ` Rafael Aquini
2014-08-20 15:04 ` [PATCH 4/7] selftests/vm/transhuge-stress: stress test for memory compaction Konstantin Khlebnikov
2014-08-20 15:04   ` Konstantin Khlebnikov
2014-08-20 15:04 ` [PATCH 5/7] mm: introduce common page state for ballooned memory Konstantin Khlebnikov
2014-08-20 15:04   ` Konstantin Khlebnikov
2014-08-20 23:46   ` Rafael Aquini
2014-08-20 23:46     ` Rafael Aquini
2014-08-20 15:05 ` [PATCH 6/7] mm/balloon_compaction: use common page ballooning Konstantin Khlebnikov
2014-08-20 15:05   ` Konstantin Khlebnikov
2014-08-20 23:48   ` Rafael Aquini
2014-08-20 23:48     ` Rafael Aquini
2014-08-20 15:05 ` [PATCH 7/7] mm/balloon_compaction: general cleanup Konstantin Khlebnikov
2014-08-20 15:05   ` Konstantin Khlebnikov
     [not found]   ` <5ad4664811559496e563ead974f10e8ee6b4ed47.1408576903.git.aquini@redhat.com>
2014-08-20 23:58     ` Rafael Aquini
2014-08-20 23:58       ` Rafael Aquini
2014-08-21  7:30       ` Konstantin Khlebnikov
2014-08-21  7:30         ` Konstantin Khlebnikov
2014-08-21 12:31         ` Rafael Aquini
2014-08-21 12:31           ` Rafael Aquini
2014-08-29 21:05   ` Andrew Morton
2014-08-29 21:05     ` Andrew Morton
2014-08-29 21:09     ` Rafael Aquini
2014-08-29 21:09       ` Rafael Aquini
2014-08-29 21:26       ` Rafael Aquini
2014-08-29 21:26         ` Rafael Aquini
2014-08-29 21:38   ` Andrew Morton
2014-08-29 21:38     ` Andrew Morton
2014-08-30  6:44     ` Konstantin Khlebnikov
2014-08-30  6:44       ` Konstantin Khlebnikov
2014-08-30 16:36       ` [PATCH] mm: rename "migrate_page" to "generic_migrate_page" Konstantin Khlebnikov
2014-08-30 16:36         ` Konstantin Khlebnikov
2014-08-20 23:32 ` [PATCH 1/7] mm/balloon_compaction: ignore anonymous pages Rafael Aquini
2014-08-20 23:32   ` Rafael Aquini
