linux-kernel.vger.kernel.org archive mirror
* [PATCH v1 00/19] Support non-lru page migration
@ 2016-03-11  7:30 Minchan Kim
  2016-03-11  7:30 ` [PATCH v1 01/19] mm: use put_page to free page instead of putback_lru_page Minchan Kim
                   ` (18 more replies)
  0 siblings, 19 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

Recently, I have received many reports about performance degradation
in embedded systems (Android mobile phones, webOS TVs and so on) and
about fork failing easily.

The problem is fragmentation caused by zram and GPU driver pages.
Those pages cannot be migrated, so compaction cannot work well either,
and the reclaimer ends up shrinking all of the working-set pages. That
makes the system very slow and even causes fork to fail.

Another pain point is that such pages cannot work with CMA.
Most of the CMA memory space could be idle (i.e., it could be used
for movable pages while the driver is not using it), but if a driver
(e.g., zram) cannot migrate its pages, that memory space is wasted.
In our product, which has a large CMA area, zones are reclaimed too
excessively although there is lots of free space in CMA, so the
system easily becomes very slow.

To solve these problems, this patchset adds a facility to migrate
non-lru pages by introducing new companion functions of migratepage
in address_space_operations as well as new page flags.

	(isolate_page, putback_page)
	(PG_movable, PG_isolated)

For details, please read description in
"mm/compaction: support non-lru movable page migration".

Originally, Gioh Kim tried to support this feature but he moved on,
so I took over the work. I took much of the code from his work and
changed it a little.
Thanks, Gioh!

I should also mention Konstantin Khlebnikov. He really helped Gioh
at that time, so he deserves much of the credit, too.
Thanks, Konstantin!

This patchset consists of three parts

1. add non-lru page migration feature

  mm: use put_page to free page instead of putback_lru_page
  fs/anon_inodes: new interface to create new inode
  mm/compaction: support non-lru movable page migration

2. rework KVM memory-ballooning

  mm/balloon: use general movable page feature into balloon

3. rework zsmalloc

  zsmalloc: use first_page rather than page
  zsmalloc: clean up many BUG_ON
  zsmalloc: reordering function parameter
  zsmalloc: remove unused pool param in obj_free
  zsmalloc: keep max_object in size_class
  zsmalloc: squeeze inuse into page->mapping
  zsmalloc: squeeze freelist into page->mapping
  zsmalloc: move struct zs_meta from mapping to freelist
  zsmalloc: factor page chain functionality out
  zsmalloc: separate free_zspage from putback_zspage
  zsmalloc: zs_compact refactoring
  zsmalloc: migrate head page of zspage
  zsmalloc: use single linked list for page chain
  zsmalloc: migrate tail pages in zspage
  zram: use __GFP_MOVABLE for memory allocation

Gioh Kim (1):
  fs/anon_inodes: new interface to create new inode

Minchan Kim (18):
  mm: use put_page to free page instead of putback_lru_page
  mm/compaction: support non-lru movable page migration
  mm/balloon: use general movable page feature into balloon
  zsmalloc: use first_page rather than page
  zsmalloc: clean up many BUG_ON
  zsmalloc: reordering function parameter
  zsmalloc: remove unused pool param in obj_free
  zsmalloc: keep max_object in size_class
  zsmalloc: squeeze inuse into page->mapping
  zsmalloc: squeeze freelist into page->mapping
  zsmalloc: move struct zs_meta from mapping to freelist
  zsmalloc: factor page chain functionality out
  zsmalloc: separate free_zspage from putback_zspage
  zsmalloc: zs_compact refactoring
  zsmalloc: migrate head page of zspage
  zsmalloc: use single linked list for page chain
  zsmalloc: migrate tail pages in zspage
  zram: use __GFP_MOVABLE for memory allocation

 Documentation/filesystems/Locking      |    4 +
 Documentation/filesystems/vfs.txt      |    5 +
 drivers/block/zram/zram_drv.c          |    3 +-
 drivers/virtio/virtio_balloon.c        |    4 +
 fs/anon_inodes.c                       |    6 +
 fs/proc/page.c                         |    3 +
 include/linux/anon_inodes.h            |    1 +
 include/linux/balloon_compaction.h     |   47 +-
 include/linux/compaction.h             |    8 +
 include/linux/fs.h                     |    2 +
 include/linux/migrate.h                |    2 +
 include/linux/page-flags.h             |   42 +-
 include/uapi/linux/kernel-page-flags.h |    1 +
 mm/balloon_compaction.c                |  101 +--
 mm/compaction.c                        |   15 +-
 mm/migrate.c                           |  197 +++--
 mm/vmscan.c                            |    2 +-
 mm/zsmalloc.c                          | 1295 +++++++++++++++++++++++---------
 18 files changed, 1219 insertions(+), 519 deletions(-)

-- 
1.9.1

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH v1 01/19] mm: use put_page to free page instead of putback_lru_page
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-14  8:48   ` Vlastimil Babka
  2016-03-11  7:30 ` [PATCH v1 02/19] mm/compaction: support non-lru movable page migration Minchan Kim
                   ` (17 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim, Naoya Horiguchi

The procedure of page migration is as follows:

First of all, it isolates a page from the LRU and tries to migrate
the page. If that is successful, it releases the page for freeing.
Otherwise, it puts the page back on the LRU list.

For LRU pages, we have used putback_lru_page for both freeing and
putting back to the LRU list. That works because put_page is aware of
the LRU list, so if it releases the last refcount of the page, it
removes the page from the LRU list. However, it performs unnecessary
operations (e.g., lru_cache_add, pagevec and flag operations; not
significant, but not worth doing) and makes it harder to support new
non-lru page migration because put_page isn't aware of a non-lru
page's data structure.

To solve the problem, we could add a new hook in put_page with a
PageMovable flag check, but that would increase overhead in a hot
path and would need a new locking scheme to stabilize the flag check
against put_page.

So, this patch cleans it up by dividing the two semantics (i.e., put
and putback). If migration is successful, use put_page instead of
putback_lru_page, and use putback_lru_page only on failure. That makes
the code more readable and doesn't add overhead to put_page.
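
In other words, the core flow after this patch looks roughly like the
following (a simplified sketch of the hunks below, not the literal
code):

	rc = __unmap_and_move(page, newpage, force, mode);

	if (rc == MIGRATEPAGE_SUCCESS) {
		/* drop the reference grabbed during isolation */
		put_page(page);
	} else if (rc != -EAGAIN) {
		/* real failure: restore the page to the LRU list */
		putback_lru_page(page);
	}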

Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/migrate.c | 49 ++++++++++++++++++++++++++++++-------------------
 1 file changed, 30 insertions(+), 19 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 3ad0fea5c438..bf31ea9ffaf8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -907,6 +907,14 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		put_anon_vma(anon_vma);
 	unlock_page(page);
 out:
+	/* If migration is successful, move newpage to the right list */
+	if (rc == MIGRATEPAGE_SUCCESS) {
+		if (unlikely(__is_movable_balloon_page(newpage)))
+			put_page(newpage);
+		else
+			putback_lru_page(newpage);
+	}
+
 	return rc;
 }
 
@@ -940,6 +948,12 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 
 	if (page_count(page) == 1) {
 		/* page was freed from under us. So we are done. */
+		ClearPageActive(page);
+		ClearPageUnevictable(page);
+		if (put_new_page)
+			put_new_page(newpage, private);
+		else
+			put_page(newpage);
 		goto out;
 	}
 
@@ -952,9 +966,6 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 	}
 
 	rc = __unmap_and_move(page, newpage, force, mode);
-	if (rc == MIGRATEPAGE_SUCCESS)
-		put_new_page = NULL;
-
 out:
 	if (rc != -EAGAIN) {
 		/*
@@ -966,28 +977,28 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 		list_del(&page->lru);
 		dec_zone_page_state(page, NR_ISOLATED_ANON +
 				page_is_file_cache(page));
-		/* Soft-offlined page shouldn't go through lru cache list */
+	}
+
+	/*
+	 * If migration is successful, drop the reference grabbed during
+	 * isolation. Otherwise, restore the page to LRU list unless we
+	 * want to retry.
+	 */
+	if (rc == MIGRATEPAGE_SUCCESS) {
+		put_page(page);
 		if (reason == MR_MEMORY_FAILURE) {
-			put_page(page);
 			if (!test_set_page_hwpoison(page))
 				num_poisoned_pages_inc();
-		} else
+		}
+	} else {
+		if (rc != -EAGAIN)
 			putback_lru_page(page);
+		if (put_new_page)
+			put_new_page(newpage, private);
+		else
+			put_page(newpage);
 	}
 
-	/*
-	 * If migration was not successful and there's a freeing callback, use
-	 * it.  Otherwise, putback_lru_page() will drop the reference grabbed
-	 * during isolation.
-	 */
-	if (put_new_page)
-		put_new_page(newpage, private);
-	else if (unlikely(__is_movable_balloon_page(newpage))) {
-		/* drop our reference, page already in the balloon */
-		put_page(newpage);
-	} else
-		putback_lru_page(newpage);
-
 	if (result) {
 		if (rc)
 			*result = rc;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 02/19] mm/compaction: support non-lru movable page migration
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
  2016-03-11  7:30 ` [PATCH v1 01/19] mm: use put_page to free page instead of putback_lru_page Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-11  8:11   ` kbuild test robot
  2016-03-11  7:30 ` [PATCH v1 03/19] fs/anon_inodes: new interface to create new inode Minchan Kim
                   ` (16 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim, dri-devel

Until now we have allowed migration only for LRU pages, and that was
enough to build high-order pages. But recently, embedded systems (e.g.,
webOS, Android) use lots of non-movable pages (e.g., zram, GPU memory),
so we have seen several reports about trouble with small high-order
allocations. There have been several efforts to fix the problem (e.g.,
enhancing the compaction algorithm, SLUB fallback to 0-order pages,
reserved memory, vmalloc and so on), but if there are lots of
non-movable pages in the system, those solutions are void in the long
run.

So, this patch adds a facility to turn non-movable pages into movable
ones. For the feature, this patch introduces migration-related
functions in address_space_operations as well as some page flags.

Basically, this patch supports two page flags and two functions
related to page migration. The flags and page->mapping stability are
protected by the page lock (PG_locked).

	PG_movable
	PG_isolated

	bool (*isolate_page) (struct page *, isolate_mode_t);
	void (*putback_page) (struct page *);

The duties of a subsystem that wants to make its pages migratable are
as follows (a rough sketch of the resulting callbacks follows the
list):

1. It should register an address_space to page->mapping, then mark
the page as PG_movable via __SetPageMovable.

2. It should mark the page as PG_isolated via SetPageIsolated if
isolation is successful, and return true.

3. If migration is successful, it should clear PG_isolated and
PG_movable of the page to prepare it for freeing, then release its
reference to the page so it can be freed.

4. If migration fails, the subsystem's putback function should
clear PG_isolated via ClearPageIsolated.
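
Put together, a subsystem's callbacks would follow the protocol
roughly like the sketch below. It is illustrative only; my_detach(),
my_attach() and my_copy() stand for driver-specific bookkeeping and
are not real functions:

	static bool my_isolate_page(struct page *page, isolate_mode_t mode)
	{
		if (!my_detach(page))		/* unlink from driver lists */
			return false;
		SetPageIsolated(page);		/* duty 2 */
		return true;
	}

	static int my_migratepage(struct address_space *mapping,
			struct page *newpage, struct page *page,
			enum migrate_mode mode)
	{
		my_copy(newpage, page);		/* move contents/metadata */
		ClearPageIsolated(page);	/* duty 3: prepare old page */
		__ClearPageMovable(page);
		put_page(page);			/* drop the driver's own reference */
		return MIGRATEPAGE_SUCCESS;
	}

	static void my_putback_page(struct page *page)
	{
		my_attach(page);		/* relink to driver lists */
		ClearPageIsolated(page);	/* duty 4 */
	}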

Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: dri-devel@lists.freedesktop.org
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: Gioh Kim <gurugio@hanmail.net>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 Documentation/filesystems/Locking      |   4 +
 Documentation/filesystems/vfs.txt      |   5 ++
 fs/proc/page.c                         |   3 +
 include/linux/compaction.h             |   8 ++
 include/linux/fs.h                     |   2 +
 include/linux/migrate.h                |   2 +
 include/linux/page-flags.h             |  29 ++++++++
 include/uapi/linux/kernel-page-flags.h |   1 +
 mm/compaction.c                        |  14 +++-
 mm/migrate.c                           | 132 +++++++++++++++++++++++++++++----
 10 files changed, 185 insertions(+), 15 deletions(-)

diff --git a/Documentation/filesystems/Locking b/Documentation/filesystems/Locking
index 619af9bfdcb3..0bb79560abb3 100644
--- a/Documentation/filesystems/Locking
+++ b/Documentation/filesystems/Locking
@@ -195,7 +195,9 @@ unlocks and drops the reference.
 	int (*releasepage) (struct page *, int);
 	void (*freepage)(struct page *);
 	int (*direct_IO)(struct kiocb *, struct iov_iter *iter, loff_t offset);
+	bool (*isolate_page) (struct page *, isolate_mode_t);
 	int (*migratepage)(struct address_space *, struct page *, struct page *);
+	void (*putback_page) (struct page *);
 	int (*launder_page)(struct page *);
 	int (*is_partially_uptodate)(struct page *, unsigned long, unsigned long);
 	int (*error_remove_page)(struct address_space *, struct page *);
@@ -219,7 +221,9 @@ invalidatepage:		yes
 releasepage:		yes
 freepage:		yes
 direct_IO:
+isolate_page:		yes
 migratepage:		yes (both)
+putback_page:		yes
 launder_page:		yes
 is_partially_uptodate:	yes
 error_remove_page:	yes
diff --git a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt
index b02a7d598258..4c1b6c3b4bc8 100644
--- a/Documentation/filesystems/vfs.txt
+++ b/Documentation/filesystems/vfs.txt
@@ -592,9 +592,14 @@ struct address_space_operations {
 	int (*releasepage) (struct page *, int);
 	void (*freepage)(struct page *);
 	ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter, loff_t offset);
+	/* isolate a page for migration */
+	bool (*isolate_page) (struct page *, isolate_mode_t);
 	/* migrate the contents of a page to the specified target */
 	int (*migratepage) (struct page *, struct page *);
+	/* put the page back to right list */
+	void (*putback_page) (struct page *);
 	int (*launder_page) (struct page *);
+
 	int (*is_partially_uptodate) (struct page *, unsigned long,
 					unsigned long);
 	void (*is_dirty_writeback) (struct page *, bool *, bool *);
diff --git a/fs/proc/page.c b/fs/proc/page.c
index b2855eea5405..b2bab774adea 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -155,6 +155,9 @@ u64 stable_page_flags(struct page *page)
 	if (page_is_idle(page))
 		u |= 1 << KPF_IDLE;
 
+	if (PageMovable(page))
+		u |= 1 << KPF_MOVABLE;
+
 	u |= kpf_copy_bit(k, KPF_LOCKED,	PG_locked);
 
 	u |= kpf_copy_bit(k, KPF_SLAB,		PG_slab);
diff --git a/include/linux/compaction.h b/include/linux/compaction.h
index 4cd4ddf64cc7..6f040ad379ce 100644
--- a/include/linux/compaction.h
+++ b/include/linux/compaction.h
@@ -84,6 +84,14 @@ static inline bool compaction_deferred(struct zone *zone, int order)
 	return true;
 }
 
+static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+{
+	return false;
+}
+
+static inline void putback_movable_page(struct page *page)
+{
+}
 #endif /* CONFIG_COMPACTION */
 
 #if defined(CONFIG_COMPACTION) && defined(CONFIG_SYSFS) && defined(CONFIG_NUMA)
diff --git a/include/linux/fs.h b/include/linux/fs.h
index ae681002100a..6cd3810a6a27 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -398,6 +398,8 @@ struct address_space_operations {
 	 */
 	int (*migratepage) (struct address_space *,
 			struct page *, struct page *, enum migrate_mode);
+	bool (*isolate_page)(struct page *, isolate_mode_t);
+	void (*putback_page)(struct page *);
 	int (*launder_page) (struct page *);
 	int (*is_partially_uptodate) (struct page *, unsigned long,
 					unsigned long);
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index cac1c0904d5f..f10fd92860ac 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -33,6 +33,8 @@ extern int migrate_page(struct address_space *,
 			struct page *, struct page *, enum migrate_mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
 		unsigned long private, enum migrate_mode mode, int reason);
+extern bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+extern void putback_movable_page(struct page *page);
 
 extern int migrate_prep(void);
 extern int migrate_prep_local(void);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 19724e6ebd26..cdf07c3f3a6f 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -129,6 +129,10 @@ enum pageflags {
 
 	/* Compound pages. Stored in first tail page's flags */
 	PG_double_map = PG_private_2,
+
+	/* non-lru movable pages */
+	PG_movable = PG_reclaim,
+	PG_isolated = PG_owner_priv_1,
 };
 
 #ifndef __GENERATING_BOUNDS_H
@@ -612,6 +616,31 @@ static inline void __ClearPageBalloon(struct page *page)
 	atomic_set(&page->_mapcount, -1);
 }
 
+#define PAGE_MOVABLE_MAPCOUNT_VALUE (-255)
+
+static inline int PageMovable(struct page *page)
+{
+	return ((test_bit(PG_movable, &(page)->flags) &&
+		atomic_read(&page->_mapcount) == PAGE_MOVABLE_MAPCOUNT_VALUE)
+		|| PageBalloon(page));
+}
+
+static inline void __SetPageMovable(struct page *page)
+{
+	WARN_ON(!page->mapping);
+
+	__set_bit(PG_movable, &page->flags);
+	atomic_set(&page->_mapcount, PAGE_MOVABLE_MAPCOUNT_VALUE);
+}
+
+static inline void __ClearPageMovable(struct page *page)
+{
+	atomic_set(&page->_mapcount, -1);
+	__clear_bit(PG_movable, &(page)->flags);
+}
+
+PAGEFLAG(Isolated, isolated, PF_ANY);
+
 /*
  * If network-based swap is enabled, sl*b must keep track of whether pages
  * were allocated from pfmemalloc reserves.
diff --git a/include/uapi/linux/kernel-page-flags.h b/include/uapi/linux/kernel-page-flags.h
index 5da5f8751ce7..a184fd2434fa 100644
--- a/include/uapi/linux/kernel-page-flags.h
+++ b/include/uapi/linux/kernel-page-flags.h
@@ -34,6 +34,7 @@
 #define KPF_BALLOON		23
 #define KPF_ZERO_PAGE		24
 #define KPF_IDLE		25
+#define KPF_MOVABLE		26
 
 
 #endif /* _UAPILINUX_KERNEL_PAGE_FLAGS_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index 585de54dbe8c..99f791bf2ba6 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -736,7 +736,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 		/*
 		 * Check may be lockless but that's ok as we recheck later.
-		 * It's possible to migrate LRU pages and balloon pages
+		 * It's possible to migrate LRU and movable kernel pages.
 		 * Skip any other type of page
 		 */
 		is_lru = PageLRU(page);
@@ -747,6 +747,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 					goto isolate_success;
 				}
 			}
+
+			if (unlikely(PageMovable(page)) &&
+					!PageIsolated(page)) {
+				if (locked) {
+					spin_unlock_irqrestore(&zone->lru_lock,
+									flags);
+					locked = false;
+				}
+
+				if (isolate_movable_page(page, isolate_mode))
+					goto isolate_success;
+			}
 		}
 
 		/*
diff --git a/mm/migrate.c b/mm/migrate.c
index bf31ea9ffaf8..b7b2a60f57c4 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -72,6 +72,75 @@ int migrate_prep_local(void)
 	return 0;
 }
 
+bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+{
+	bool ret = false;
+
+	/*
+	 * Avoid burning cycles with pages that are yet under __free_pages(),
+	 * or just got freed under us.
+	 *
+	 * In case we 'win' a race for a movable page being freed under us and
+	 * raise its refcount, preventing __free_pages() from doing its job,
+	 * the put_page() at the end of this block will take care of
+	 * releasing this page, thus avoiding a nasty leakage.
+	 */
+	if (unlikely(!get_page_unless_zero(page)))
+		goto out;
+
+	/*
+	 * As movable pages are not isolated from LRU lists, concurrent
+	 * compaction threads can race against page migration functions
+	 * as well as race against the page being released.
+	 *
+	 * In order to avoid having an already isolated movable page
+	 * being (wrongly) re-isolated while it is under migration,
+	 * or to avoid attempting to isolate pages being released,
+	 * lets be sure we have the page lock
+	 * before proceeding with the movable page isolation steps.
+	 */
+	if (unlikely(!trylock_page(page)))
+		goto out_putpage;
+
+	if (!PageMovable(page) || PageIsolated(page))
+		goto out_no_isolated;
+
+	ret = page->mapping->a_ops->isolate_page(page, mode);
+	if (!ret)
+		goto out_no_isolated;
+
+	WARN_ON_ONCE(!PageIsolated(page));
+	unlock_page(page);
+	return ret;
+
+out_no_isolated:
+	unlock_page(page);
+out_putpage:
+	put_page(page);
+out:
+	return ret;
+}
+
+void putback_movable_page(struct page *page)
+{
+	struct address_space *mapping;
+
+	/*
+	 * 'lock_page()' stabilizes the page and prevents races against
+	 * concurrent isolation threads attempting to re-isolate it.
+	 */
+	lock_page(page);
+	mapping = page_mapping(page);
+	if (mapping) {
+		mapping->a_ops->putback_page(page);
+		WARN_ON_ONCE(PageIsolated(page));
+	}
+	unlock_page(page);
+	/* drop the extra ref count taken for movable page isolation */
+	put_page(page);
+}
+
+
 /*
  * Put previously isolated pages back onto the appropriate lists
  * from where they were once taken off for compaction/migration.
@@ -95,6 +164,8 @@ void putback_movable_pages(struct list_head *l)
 				page_is_file_cache(page));
 		if (unlikely(isolated_balloon_page(page)))
 			balloon_page_putback(page);
+		else if (unlikely(PageIsolated(page)))
+			putback_movable_page(page);
 		else
 			putback_lru_page(page);
 	}
@@ -585,7 +656,7 @@ void migrate_page_copy(struct page *newpage, struct page *page)
  ***********************************************************/
 
 /*
- * Common logic to directly migrate a single page suitable for
+ * Common logic to directly migrate a single LRU page suitable for
  * pages that do not use PagePrivate/PagePrivate2.
  *
  * Pages are locked upon entry and exit.
@@ -748,24 +819,53 @@ static int move_to_new_page(struct page *newpage, struct page *page,
 				enum migrate_mode mode)
 {
 	struct address_space *mapping;
-	int rc;
+	int rc = -EAGAIN;
+	bool isolated_lru_page;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
 
 	mapping = page_mapping(page);
-	if (!mapping)
-		rc = migrate_page(mapping, newpage, page, mode);
-	else if (mapping->a_ops->migratepage)
+	/*
+	 * In case of non-lru page, it could be released after
+	 * isolation step. In that case, we shouldn't try
+	 * fallback migration which was designed for LRU pages.
+	 *
+	 * To identify such pages, we cannot use PageMovable
+	 * because the owner of the page can reset it. So instead,
+	 * use the PG_isolated bit.
+	 */
+	isolated_lru_page = !PageIsolated(page);
+
+	if (likely(isolated_lru_page)) {
+		if (!mapping)
+			rc = migrate_page(mapping, newpage, page, mode);
+		else if (mapping->a_ops->migratepage)
+			/*
+			 * Most pages have a mapping and most filesystems
+			 * provide a migratepage callback. Anonymous pages
+			 * are part of swap space which also has its own
+			 * migratepage callback. This is the most common path
+			 * for page migration.
+			 */
+			rc = mapping->a_ops->migratepage(mapping, newpage,
+							page, mode);
+		else
+			rc = fallback_migrate_page(mapping, newpage,
+							page, mode);
+	} else {
 		/*
-		 * Most pages have a mapping and most filesystems provide a
-		 * migratepage callback. Anonymous pages are part of swap
-		 * space which also has its own migratepage callback. This
-		 * is the most common path for page migration.
+		 * If mapping is NULL, it returns -EAGAIN so a retry
+		 * of migration will see the refcount as 1 and finally
+		 * free the page.
 		 */
-		rc = mapping->a_ops->migratepage(mapping, newpage, page, mode);
-	else
-		rc = fallback_migrate_page(mapping, newpage, page, mode);
+		if (mapping) {
+			rc = mapping->a_ops->migratepage(mapping, newpage,
+							page, mode);
+			WARN_ON_ONCE(rc == MIGRATEPAGE_SUCCESS &&
+				PageIsolated(page));
+		}
+	}
 
 	/*
 	 * When successful, old pagecache page->mapping must be cleared before
@@ -991,8 +1091,12 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 				num_poisoned_pages_inc();
 		}
 	} else {
-		if (rc != -EAGAIN)
-			putback_lru_page(page);
+		if (rc != -EAGAIN) {
+			if (likely(!PageIsolated(page)))
+				putback_lru_page(page);
+			else
+				putback_movable_page(page);
+		}
 		if (put_new_page)
 			put_new_page(newpage, private);
 		else
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 03/19] fs/anon_inodes: new interface to create new inode
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
  2016-03-11  7:30 ` [PATCH v1 01/19] mm: use put_page to free page instead of putback_lru_page Minchan Kim
  2016-03-11  7:30 ` [PATCH v1 02/19] mm/compaction: support non-lru movable page migration Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-11  8:05   ` Al Viro
  2016-03-11  7:30 ` [PATCH v1 04/19] mm/balloon: use general movable page feature into balloon Minchan Kim
                   ` (15 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

From: Gioh Kim <gurugio@hanmail.net>

anon_inodes already has complete interfaces to create and manage many
anonymous inodes, but it doesn't have an interface to get a new inode.
With this, other sub-modules can create anonymous inodes without
creating and mounting their own pseudo filesystems.
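
For example, a driver could do something like this in its init path
(a sketch only; "my_aops" stands for the driver's own
address_space_operations, and the ERR_PTR handling follows
alloc_anon_inode's convention):

	struct inode *inode = anon_inode_new();

	if (IS_ERR(inode))
		return PTR_ERR(inode);
	inode->i_mapping->a_ops = &my_aops;
	/* later, per page: page->mapping = inode->i_mapping; */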

Acked-by: Rafael Aquini <aquini@redhat.com>
Signed-off-by: Gioh Kim <gurugio@hanmail.net>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 fs/anon_inodes.c            | 6 ++++++
 include/linux/anon_inodes.h | 1 +
 2 files changed, 7 insertions(+)

diff --git a/fs/anon_inodes.c b/fs/anon_inodes.c
index 80ef38c73e5a..1d51f96acdd9 100644
--- a/fs/anon_inodes.c
+++ b/fs/anon_inodes.c
@@ -162,6 +162,12 @@ int anon_inode_getfd(const char *name, const struct file_operations *fops,
 }
 EXPORT_SYMBOL_GPL(anon_inode_getfd);
 
+struct inode *anon_inode_new(void)
+{
+	return alloc_anon_inode(anon_inode_mnt->mnt_sb);
+}
+EXPORT_SYMBOL_GPL(anon_inode_new);
+
 static int __init anon_inode_init(void)
 {
 	anon_inode_mnt = kern_mount(&anon_inode_fs_type);
diff --git a/include/linux/anon_inodes.h b/include/linux/anon_inodes.h
index 8013a45242fe..ddbd67f8a73f 100644
--- a/include/linux/anon_inodes.h
+++ b/include/linux/anon_inodes.h
@@ -15,6 +15,7 @@ struct file *anon_inode_getfile(const char *name,
 				void *priv, int flags);
 int anon_inode_getfd(const char *name, const struct file_operations *fops,
 		     void *priv, int flags);
+struct inode *anon_inode_new(void);
 
 #endif /* _LINUX_ANON_INODES_H */
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 04/19] mm/balloon: use general movable page feature into balloon
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (2 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 03/19] fs/anon_inodes: new interface to create new inode Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-11  7:30 ` [PATCH v1 05/19] zsmalloc: use first_page rather than page Minchan Kim
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

Now that the VM has a feature to migrate non-lru movable pages, the
balloon doesn't need custom migration hooks in migrate.c and
compaction.c. Instead, this patch implements the
page->mapping->{isolate|migrate|putback} functions.

With that, we can remove the ballooning hooks from the general
migration functions and make balloon compaction simple.
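
Conceptually, the wiring this patch ends up with (summarized from the
diff below, not additional code) is:

	const struct address_space_operations balloon_aops = {
		.migratepage	= balloon_page_migrate,
		.isolate_page	= balloon_page_isolate,
		.putback_page	= balloon_page_putback,
	};

	/* at device init (virtballoon_probe): */
	vb->vb_dev_info.inode = anon_inode_new();
	vb->vb_dev_info.inode->i_mapping->a_ops = &balloon_aops;

	/* when a page enters the balloon (balloon_page_insert): */
	page->mapping = balloon->inode->i_mapping;
	__SetPageBalloon(page);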

Cc: virtualization@lists.linux-foundation.org
Cc: Rafael Aquini <aquini@redhat.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Gioh Kim <gurugio@hanmail.net>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 drivers/virtio/virtio_balloon.c    |   4 ++
 include/linux/balloon_compaction.h |  47 ++++-------------
 include/linux/page-flags.h         |  53 +++++++++++--------
 mm/balloon_compaction.c            | 101 ++++++++-----------------------------
 mm/compaction.c                    |   7 ---
 mm/migrate.c                       |  22 ++------
 mm/vmscan.c                        |   2 +-
 7 files changed, 73 insertions(+), 163 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 0c3691f46575..30a1ea31bef4 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -30,6 +30,7 @@
 #include <linux/balloon_compaction.h>
 #include <linux/oom.h>
 #include <linux/wait.h>
+#include <linux/anon_inodes.h>
 
 /*
  * Balloon device works in 4K page units.  So each page is pointed to by
@@ -476,6 +477,7 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
 
 	mutex_unlock(&vb->balloon_lock);
 
+	ClearPageIsolated(page);
 	put_page(page); /* balloon reference */
 
 	return MIGRATEPAGE_SUCCESS;
@@ -509,6 +511,8 @@ static int virtballoon_probe(struct virtio_device *vdev)
 	balloon_devinfo_init(&vb->vb_dev_info);
 #ifdef CONFIG_BALLOON_COMPACTION
 	vb->vb_dev_info.migratepage = virtballoon_migratepage;
+	vb->vb_dev_info.inode = anon_inode_new();
+	vb->vb_dev_info.inode->i_mapping->a_ops = &balloon_aops;
 #endif
 
 	err = init_vqs(vb);
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index 9b0a15d06a4f..43a858545844 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -48,6 +48,7 @@
 #include <linux/migrate.h>
 #include <linux/gfp.h>
 #include <linux/err.h>
+#include <linux/fs.h>
 
 /*
  * Balloon device information descriptor.
@@ -62,6 +63,7 @@ struct balloon_dev_info {
 	struct list_head pages;		/* Pages enqueued & handled to Host */
 	int (*migratepage)(struct balloon_dev_info *, struct page *newpage,
 			struct page *page, enum migrate_mode mode);
+	struct inode *inode;
 };
 
 extern struct page *balloon_page_enqueue(struct balloon_dev_info *b_dev_info);
@@ -73,45 +75,19 @@ static inline void balloon_devinfo_init(struct balloon_dev_info *balloon)
 	spin_lock_init(&balloon->pages_lock);
 	INIT_LIST_HEAD(&balloon->pages);
 	balloon->migratepage = NULL;
+	balloon->inode = NULL;
 }
 
 #ifdef CONFIG_BALLOON_COMPACTION
-extern bool balloon_page_isolate(struct page *page);
+extern const struct address_space_operations balloon_aops;
+extern bool balloon_page_isolate(struct page *page,
+				isolate_mode_t mode);
 extern void balloon_page_putback(struct page *page);
-extern int balloon_page_migrate(struct page *newpage,
+extern int balloon_page_migrate(struct address_space *mapping,
+				struct page *newpage,
 				struct page *page, enum migrate_mode mode);
 
 /*
- * __is_movable_balloon_page - helper to perform @page PageBalloon tests
- */
-static inline bool __is_movable_balloon_page(struct page *page)
-{
-	return PageBalloon(page);
-}
-
-/*
- * balloon_page_movable - test PageBalloon to identify balloon pages
- *			  and PagePrivate to check that the page is not
- *			  isolated and can be moved by compaction/migration.
- *
- * As we might return false positives in the case of a balloon page being just
- * released under us, this need to be re-tested later, under the page lock.
- */
-static inline bool balloon_page_movable(struct page *page)
-{
-	return PageBalloon(page) && PagePrivate(page);
-}
-
-/*
- * isolated_balloon_page - identify an isolated balloon page on private
- *			   compaction/migration page lists.
- */
-static inline bool isolated_balloon_page(struct page *page)
-{
-	return PageBalloon(page);
-}
-
-/*
  * balloon_page_insert - insert a page into the balloon's page list and make
  *			 the page->private assignment accordingly.
  * @balloon : pointer to balloon device
@@ -123,8 +99,8 @@ static inline bool isolated_balloon_page(struct page *page)
 static inline void balloon_page_insert(struct balloon_dev_info *balloon,
 				       struct page *page)
 {
+	page->mapping = balloon->inode->i_mapping;
 	__SetPageBalloon(page);
-	SetPagePrivate(page);
 	set_page_private(page, (unsigned long)balloon);
 	list_add(&page->lru, &balloon->pages);
 }
@@ -140,11 +116,10 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
 static inline void balloon_page_delete(struct page *page)
 {
 	__ClearPageBalloon(page);
+	page->mapping = NULL;
 	set_page_private(page, 0);
-	if (PagePrivate(page)) {
-		ClearPagePrivate(page);
+	if (!PageIsolated(page))
 		list_del(&page->lru);
-	}
 }
 
 /*
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index cdf07c3f3a6f..94d46d947490 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -597,50 +597,59 @@ static inline void __ClearPageBuddy(struct page *page)
 	atomic_set(&page->_mapcount, -1);
 }
 
-#define PAGE_BALLOON_MAPCOUNT_VALUE (-256)
+#define PAGE_MOVABLE_MAPCOUNT_VALUE (-256)
+#define PAGE_BALLOON_MAPCOUNT_VALUE PAGE_MOVABLE_MAPCOUNT_VALUE
 
-static inline int PageBalloon(struct page *page)
+static inline int PageMovable(struct page *page)
 {
-	return atomic_read(&page->_mapcount) == PAGE_BALLOON_MAPCOUNT_VALUE;
+	return (test_bit(PG_movable, &(page)->flags) &&
+		atomic_read(&page->_mapcount) == PAGE_MOVABLE_MAPCOUNT_VALUE);
 }
 
-static inline void __SetPageBalloon(struct page *page)
+static inline void __SetPageMovable(struct page *page)
 {
-	VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
-	atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
+	WARN_ON(!page->mapping);
+
+	__set_bit(PG_movable, &page->flags);
+	atomic_set(&page->_mapcount, PAGE_MOVABLE_MAPCOUNT_VALUE);
 }
 
-static inline void __ClearPageBalloon(struct page *page)
+static inline void __ClearPageMovable(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageBalloon(page), page);
 	atomic_set(&page->_mapcount, -1);
+	__clear_bit(PG_movable, &(page)->flags);
 }
 
-#define PAGE_MOVABLE_MAPCOUNT_VALUE (-255)
+PAGEFLAG(Isolated, isolated, PF_ANY);
 
-static inline int PageMovable(struct page *page)
+static inline int PageBalloon(struct page *page)
 {
-	return ((test_bit(PG_movable, &(page)->flags) &&
-		atomic_read(&page->_mapcount) == PAGE_MOVABLE_MAPCOUNT_VALUE)
-		|| PageBalloon(page));
+	return atomic_read(&page->_mapcount) == PAGE_BALLOON_MAPCOUNT_VALUE
+		&& PagePrivate2(page);
 }
 
-static inline void __SetPageMovable(struct page *page)
+static inline void __SetPageBalloon(struct page *page)
 {
-	WARN_ON(!page->mapping);
-
-	__set_bit(PG_movable, &page->flags);
-	atomic_set(&page->_mapcount, PAGE_MOVABLE_MAPCOUNT_VALUE);
+	VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
+#ifdef CONFIG_BALLOON_COMPACTION
+	__SetPageMovable(page);
+#else
+	atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
+#endif
+	SetPagePrivate2(page);
 }
 
-static inline void __ClearPageMovable(struct page *page)
+static inline void __ClearPageBalloon(struct page *page)
 {
+	VM_BUG_ON_PAGE(!PageBalloon(page), page);
+#ifdef CONFIG_BALLOON_COMPACTION
+	__ClearPageMovable(page);
+#else
 	atomic_set(&page->_mapcount, -1);
-	__clear_bit(PG_movable, &(page)->flags);
+#endif
+	ClearPagePrivate2(page);
 }
 
-PAGEFLAG(Isolated, isolated, PF_ANY);
-
 /*
  * If network-based swap is enabled, sl*b must keep track of whether pages
  * were allocated from pfmemalloc reserves.
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index 300117f1a08f..2c091bf5e22b 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -70,7 +70,7 @@ struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info)
 		 */
 		if (trylock_page(page)) {
 #ifdef CONFIG_BALLOON_COMPACTION
-			if (!PagePrivate(page)) {
+			if (PageIsolated(page)) {
 				/* raced with isolation */
 				unlock_page(page);
 				continue;
@@ -106,110 +106,53 @@ EXPORT_SYMBOL_GPL(balloon_page_dequeue);
 
 #ifdef CONFIG_BALLOON_COMPACTION
 
-static inline void __isolate_balloon_page(struct page *page)
+/* __isolate_lru_page() counterpart for a ballooned page */
+bool balloon_page_isolate(struct page *page, isolate_mode_t mode)
 {
 	struct balloon_dev_info *b_dev_info = balloon_page_device(page);
 	unsigned long flags;
 
 	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
-	ClearPagePrivate(page);
 	list_del(&page->lru);
 	b_dev_info->isolated_pages++;
 	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
+	SetPageIsolated(page);
+
+	return true;
 }
 
-static inline void __putback_balloon_page(struct page *page)
+/* putback_lru_page() counterpart for a ballooned page */
+void balloon_page_putback(struct page *page)
 {
 	struct balloon_dev_info *b_dev_info = balloon_page_device(page);
 	unsigned long flags;
 
+	ClearPageIsolated(page);
 	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
-	SetPagePrivate(page);
 	list_add(&page->lru, &b_dev_info->pages);
 	b_dev_info->isolated_pages--;
 	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
 }
 
-/* __isolate_lru_page() counterpart for a ballooned page */
-bool balloon_page_isolate(struct page *page)
-{
-	/*
-	 * Avoid burning cycles with pages that are yet under __free_pages(),
-	 * or just got freed under us.
-	 *
-	 * In case we 'win' a race for a balloon page being freed under us and
-	 * raise its refcount preventing __free_pages() from doing its job
-	 * the put_page() at the end of this block will take care of
-	 * release this page, thus avoiding a nasty leakage.
-	 */
-	if (likely(get_page_unless_zero(page))) {
-		/*
-		 * As balloon pages are not isolated from LRU lists, concurrent
-		 * compaction threads can race against page migration functions
-		 * as well as race against the balloon driver releasing a page.
-		 *
-		 * In order to avoid having an already isolated balloon page
-		 * being (wrongly) re-isolated while it is under migration,
-		 * or to avoid attempting to isolate pages being released by
-		 * the balloon driver, lets be sure we have the page lock
-		 * before proceeding with the balloon page isolation steps.
-		 */
-		if (likely(trylock_page(page))) {
-			/*
-			 * A ballooned page, by default, has PagePrivate set.
-			 * Prevent concurrent compaction threads from isolating
-			 * an already isolated balloon page by clearing it.
-			 */
-			if (balloon_page_movable(page)) {
-				__isolate_balloon_page(page);
-				unlock_page(page);
-				return true;
-			}
-			unlock_page(page);
-		}
-		put_page(page);
-	}
-	return false;
-}
-
-/* putback_lru_page() counterpart for a ballooned page */
-void balloon_page_putback(struct page *page)
-{
-	/*
-	 * 'lock_page()' stabilizes the page and prevents races against
-	 * concurrent isolation threads attempting to re-isolate it.
-	 */
-	lock_page(page);
-
-	if (__is_movable_balloon_page(page)) {
-		__putback_balloon_page(page);
-		/* drop the extra ref count taken for page isolation */
-		put_page(page);
-	} else {
-		WARN_ON(1);
-		dump_page(page, "not movable balloon page");
-	}
-	unlock_page(page);
-}
-
 /* move_to_new_page() counterpart for a ballooned page */
-int balloon_page_migrate(struct page *newpage,
-			 struct page *page, enum migrate_mode mode)
+int balloon_page_migrate(struct address_space *mapping,
+		struct page *newpage, struct page *page,
+		enum migrate_mode mode)
 {
 	struct balloon_dev_info *balloon = balloon_page_device(page);
-	int rc = -EAGAIN;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
+	VM_BUG_ON_PAGE(!PageMovable(page), page);
+	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
-	if (WARN_ON(!__is_movable_balloon_page(page))) {
-		dump_page(page, "not movable balloon page");
-		return rc;
-	}
-
-	if (balloon && balloon->migratepage)
-		rc = balloon->migratepage(balloon, newpage, page, mode);
-
-	return rc;
+	return balloon->migratepage(balloon, newpage, page, mode);
 }
+
+const struct address_space_operations balloon_aops = {
+	.migratepage = balloon_page_migrate,
+	.isolate_page = balloon_page_isolate,
+	.putback_page = balloon_page_putback,
+};
+EXPORT_SYMBOL_GPL(balloon_aops);
 #endif /* CONFIG_BALLOON_COMPACTION */
diff --git a/mm/compaction.c b/mm/compaction.c
index 99f791bf2ba6..e322307ac8de 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -741,13 +741,6 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 */
 		is_lru = PageLRU(page);
 		if (!is_lru) {
-			if (unlikely(balloon_page_movable(page))) {
-				if (balloon_page_isolate(page)) {
-					/* Successfully isolated */
-					goto isolate_success;
-				}
-			}
-
 			if (unlikely(PageMovable(page)) &&
 					!PageIsolated(page)) {
 				if (locked) {
diff --git a/mm/migrate.c b/mm/migrate.c
index b7b2a60f57c4..98b5e7f07548 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -146,8 +146,8 @@ void putback_movable_page(struct page *page)
  * from where they were once taken off for compaction/migration.
  *
  * This function shall be used whenever the isolated pageset has been
- * built from lru, balloon, hugetlbfs page. See isolate_migratepages_range()
- * and isolate_huge_page().
+ * built from lru, movable, hugetlbfs page.
+ * See isolate_migratepages_range() and isolate_huge_page().
  */
 void putback_movable_pages(struct list_head *l)
 {
@@ -162,9 +162,7 @@ void putback_movable_pages(struct list_head *l)
 		list_del(&page->lru);
 		dec_zone_page_state(page, NR_ISOLATED_ANON +
 				page_is_file_cache(page));
-		if (unlikely(isolated_balloon_page(page)))
-			balloon_page_putback(page);
-		else if (unlikely(PageIsolated(page)))
+		if (unlikely(PageIsolated(page)))
 			putback_movable_page(page);
 		else
 			putback_lru_page(page);
@@ -953,18 +951,6 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	if (unlikely(!trylock_page(newpage)))
 		goto out_unlock;
 
-	if (unlikely(isolated_balloon_page(page))) {
-		/*
-		 * A ballooned page does not need any special attention from
-		 * physical to virtual reverse mapping procedures.
-		 * Skip any attempt to unmap PTEs or to remap swap cache,
-		 * in order to avoid burning cycles at rmap level, and perform
-		 * the page migration right away (proteced by page lock).
-		 */
-		rc = balloon_page_migrate(newpage, page, mode);
-		goto out_unlock_both;
-	}
-
 	/*
 	 * Corner case handling:
 	 * 1. When a new swap-cache page is read into, it is added to the LRU
@@ -1009,7 +995,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 out:
 	/* If migration is successful, move newpage to the right list */
 	if (rc == MIGRATEPAGE_SUCCESS) {
-		if (unlikely(__is_movable_balloon_page(newpage)))
+		if (unlikely(PageMovable(newpage)))
 			put_page(newpage);
 		else
 			putback_lru_page(newpage);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 71b1c29948db..ca49b4f53c81 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1262,7 +1262,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
 
 	list_for_each_entry_safe(page, next, page_list, lru) {
 		if (page_is_file_cache(page) && !PageDirty(page) &&
-		    !isolated_balloon_page(page)) {
+		    !PageIsolated(page)) {
 			ClearPageActive(page);
 			list_move(&page->lru, &clean_pages);
 		}
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 05/19] zsmalloc: use first_page rather than page
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (3 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 04/19] mm/balloon: use general movable page feature into balloon Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-15  6:19   ` Sergey Senozhatsky
  2016-03-11  7:30 ` [PATCH v1 06/19] zsmalloc: clean up many BUG_ON Minchan Kim
                   ` (13 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

This patch cleans up the "struct page" function parameter.
Many zsmalloc functions expect that the page parameter is "first_page",
so use "first_page" rather than "page" for code readability.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 62 ++++++++++++++++++++++++++++++-----------------------------
 1 file changed, 32 insertions(+), 30 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 2d7c4c11fc63..bb29203ec6b3 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -414,26 +414,28 @@ static int is_last_page(struct page *page)
 	return PagePrivate2(page);
 }
 
-static void get_zspage_mapping(struct page *page, unsigned int *class_idx,
+static void get_zspage_mapping(struct page *first_page,
+				unsigned int *class_idx,
 				enum fullness_group *fullness)
 {
 	unsigned long m;
-	BUG_ON(!is_first_page(page));
+	BUG_ON(!is_first_page(first_page));
 
-	m = (unsigned long)page->mapping;
+	m = (unsigned long)first_page->mapping;
 	*fullness = m & FULLNESS_MASK;
 	*class_idx = (m >> FULLNESS_BITS) & CLASS_IDX_MASK;
 }
 
-static void set_zspage_mapping(struct page *page, unsigned int class_idx,
+static void set_zspage_mapping(struct page *first_page,
+				unsigned int class_idx,
 				enum fullness_group fullness)
 {
 	unsigned long m;
-	BUG_ON(!is_first_page(page));
+	BUG_ON(!is_first_page(first_page));
 
 	m = ((class_idx & CLASS_IDX_MASK) << FULLNESS_BITS) |
 			(fullness & FULLNESS_MASK);
-	page->mapping = (struct address_space *)m;
+	first_page->mapping = (struct address_space *)m;
 }
 
 /*
@@ -620,14 +622,14 @@ static inline void zs_pool_stat_destroy(struct zs_pool *pool)
  * the pool (not yet implemented). This function returns fullness
  * status of the given page.
  */
-static enum fullness_group get_fullness_group(struct page *page)
+static enum fullness_group get_fullness_group(struct page *first_page)
 {
 	int inuse, max_objects;
 	enum fullness_group fg;
-	BUG_ON(!is_first_page(page));
+	BUG_ON(!is_first_page(first_page));
 
-	inuse = page->inuse;
-	max_objects = page->objects;
+	inuse = first_page->inuse;
+	max_objects = first_page->objects;
 
 	if (inuse == 0)
 		fg = ZS_EMPTY;
@@ -647,12 +649,12 @@ static enum fullness_group get_fullness_group(struct page *page)
  * have. This functions inserts the given zspage into the freelist
  * identified by <class, fullness_group>.
  */
-static void insert_zspage(struct page *page, struct size_class *class,
+static void insert_zspage(struct page *first_page, struct size_class *class,
 				enum fullness_group fullness)
 {
 	struct page **head;
 
-	BUG_ON(!is_first_page(page));
+	BUG_ON(!is_first_page(first_page));
 
 	if (fullness >= _ZS_NR_FULLNESS_GROUPS)
 		return;
@@ -662,7 +664,7 @@ static void insert_zspage(struct page *page, struct size_class *class,
 
 	head = &class->fullness_list[fullness];
 	if (!*head) {
-		*head = page;
+		*head = first_page;
 		return;
 	}
 
@@ -670,21 +672,21 @@ static void insert_zspage(struct page *page, struct size_class *class,
 	 * We want to see more ZS_FULL pages and less almost
 	 * empty/full. Put pages with higher ->inuse first.
 	 */
-	list_add_tail(&page->lru, &(*head)->lru);
-	if (page->inuse >= (*head)->inuse)
-		*head = page;
+	list_add_tail(&first_page->lru, &(*head)->lru);
+	if (first_page->inuse >= (*head)->inuse)
+		*head = first_page;
 }
 
 /*
  * This function removes the given zspage from the freelist identified
  * by <class, fullness_group>.
  */
-static void remove_zspage(struct page *page, struct size_class *class,
+static void remove_zspage(struct page *first_page, struct size_class *class,
 				enum fullness_group fullness)
 {
 	struct page **head;
 
-	BUG_ON(!is_first_page(page));
+	BUG_ON(!is_first_page(first_page));
 
 	if (fullness >= _ZS_NR_FULLNESS_GROUPS)
 		return;
@@ -693,11 +695,11 @@ static void remove_zspage(struct page *page, struct size_class *class,
 	BUG_ON(!*head);
 	if (list_empty(&(*head)->lru))
 		*head = NULL;
-	else if (*head == page)
+	else if (*head == first_page)
 		*head = (struct page *)list_entry((*head)->lru.next,
 					struct page, lru);
 
-	list_del_init(&page->lru);
+	list_del_init(&first_page->lru);
 	zs_stat_dec(class, fullness == ZS_ALMOST_EMPTY ?
 			CLASS_ALMOST_EMPTY : CLASS_ALMOST_FULL, 1);
 }
@@ -712,21 +714,21 @@ static void remove_zspage(struct page *page, struct size_class *class,
  * fullness group.
  */
 static enum fullness_group fix_fullness_group(struct size_class *class,
-						struct page *page)
+						struct page *first_page)
 {
 	int class_idx;
 	enum fullness_group currfg, newfg;
 
-	BUG_ON(!is_first_page(page));
+	BUG_ON(!is_first_page(first_page));
 
-	get_zspage_mapping(page, &class_idx, &currfg);
-	newfg = get_fullness_group(page);
+	get_zspage_mapping(first_page, &class_idx, &currfg);
+	newfg = get_fullness_group(first_page);
 	if (newfg == currfg)
 		goto out;
 
-	remove_zspage(page, class, currfg);
-	insert_zspage(page, class, newfg);
-	set_zspage_mapping(page, class_idx, newfg);
+	remove_zspage(first_page, class, currfg);
+	insert_zspage(first_page, class, newfg);
+	set_zspage_mapping(first_page, class_idx, newfg);
 
 out:
 	return newfg;
@@ -1231,11 +1233,11 @@ static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
 	return true;
 }
 
-static bool zspage_full(struct page *page)
+static bool zspage_full(struct page *first_page)
 {
-	BUG_ON(!is_first_page(page));
+	BUG_ON(!is_first_page(first_page));
 
-	return page->inuse == page->objects;
+	return first_page->inuse == first_page->objects;
 }
 
 unsigned long zs_get_total_pages(struct zs_pool *pool)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 06/19] zsmalloc: clean up many BUG_ON
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (4 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 05/19] zsmalloc: use first_page rather than page Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-15  6:19   ` Sergey Senozhatsky
  2016-03-11  7:30 ` [PATCH v1 07/19] zsmalloc: reordering function parameter Minchan Kim
                   ` (12 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

There are many BUG_ONs in zsmalloc.c, which is not recommended, so
change them to alternatives (see the example after the list below).

The general rules are as follows:

1. Avoid BUG_ON if possible. Instead, use VM_BUG_ON or VM_BUG_ON_PAGE.
2. Use VM_BUG_ON_PAGE if we need to see struct page's fields.
3. Put those assertions in primitive functions so that higher-level
functions can rely on the assertion in the primitive function.
4. Don't use an assertion if the following instruction would trigger
an Oops anyway.
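
For example, an illustrative before/after of the kind of changes in
the diff below:

	/* before */
	BUG_ON(!is_first_page(first_page));

	/* after, per rules 1 and 2 */
	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);

	/*
	 * per rule 4: no BUG_ON(!s_page) is needed here because the
	 * kmap_atomic() below would Oops on a NULL page anyway
	 */
	s_addr = kmap_atomic(s_page);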

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 42 +++++++++++++++---------------------------
 1 file changed, 15 insertions(+), 27 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index bb29203ec6b3..3c82011cc405 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -419,7 +419,7 @@ static void get_zspage_mapping(struct page *first_page,
 				enum fullness_group *fullness)
 {
 	unsigned long m;
-	BUG_ON(!is_first_page(first_page));
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
 	m = (unsigned long)first_page->mapping;
 	*fullness = m & FULLNESS_MASK;
@@ -431,7 +431,7 @@ static void set_zspage_mapping(struct page *first_page,
 				enum fullness_group fullness)
 {
 	unsigned long m;
-	BUG_ON(!is_first_page(first_page));
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
 	m = ((class_idx & CLASS_IDX_MASK) << FULLNESS_BITS) |
 			(fullness & FULLNESS_MASK);
@@ -626,7 +626,8 @@ static enum fullness_group get_fullness_group(struct page *first_page)
 {
 	int inuse, max_objects;
 	enum fullness_group fg;
-	BUG_ON(!is_first_page(first_page));
+
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
 	inuse = first_page->inuse;
 	max_objects = first_page->objects;
@@ -654,7 +655,7 @@ static void insert_zspage(struct page *first_page, struct size_class *class,
 {
 	struct page **head;
 
-	BUG_ON(!is_first_page(first_page));
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
 	if (fullness >= _ZS_NR_FULLNESS_GROUPS)
 		return;
@@ -686,13 +687,13 @@ static void remove_zspage(struct page *first_page, struct size_class *class,
 {
 	struct page **head;
 
-	BUG_ON(!is_first_page(first_page));
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
 	if (fullness >= _ZS_NR_FULLNESS_GROUPS)
 		return;
 
 	head = &class->fullness_list[fullness];
-	BUG_ON(!*head);
+	VM_BUG_ON_PAGE(!*head, first_page);
 	if (list_empty(&(*head)->lru))
 		*head = NULL;
 	else if (*head == first_page)
@@ -719,8 +720,6 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
 	int class_idx;
 	enum fullness_group currfg, newfg;
 
-	BUG_ON(!is_first_page(first_page));
-
 	get_zspage_mapping(first_page, &class_idx, &currfg);
 	newfg = get_fullness_group(first_page);
 	if (newfg == currfg)
@@ -806,7 +805,7 @@ static void *location_to_obj(struct page *page, unsigned long obj_idx)
 	unsigned long obj;
 
 	if (!page) {
-		BUG_ON(obj_idx);
+		VM_BUG_ON(obj_idx);
 		return NULL;
 	}
 
@@ -839,7 +838,7 @@ static unsigned long obj_to_head(struct size_class *class, struct page *page,
 			void *obj)
 {
 	if (class->huge) {
-		VM_BUG_ON(!is_first_page(page));
+		VM_BUG_ON_PAGE(!is_first_page(page), page);
 		return page_private(page);
 	} else
 		return *(unsigned long *)obj;
@@ -889,8 +888,8 @@ static void free_zspage(struct page *first_page)
 {
 	struct page *nextp, *tmp, *head_extra;
 
-	BUG_ON(!is_first_page(first_page));
-	BUG_ON(first_page->inuse);
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+	VM_BUG_ON_PAGE(first_page->inuse, first_page);
 
 	head_extra = (struct page *)page_private(first_page);
 
@@ -916,7 +915,8 @@ static void init_zspage(struct page *first_page, struct size_class *class)
 	unsigned long off = 0;
 	struct page *page = first_page;
 
-	BUG_ON(!is_first_page(first_page));
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+
 	while (page) {
 		struct page *next_page;
 		struct link_free *link;
@@ -1235,7 +1235,7 @@ static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
 
 static bool zspage_full(struct page *first_page)
 {
-	BUG_ON(!is_first_page(first_page));
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
 	return first_page->inuse == first_page->objects;
 }
@@ -1273,14 +1273,12 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	struct page *pages[2];
 	void *ret;
 
-	BUG_ON(!handle);
-
 	/*
 	 * Because we use per-cpu mapping areas shared among the
 	 * pools/users, we can't allow mapping in interrupt context
 	 * because it can corrupt another users mappings.
 	 */
-	BUG_ON(in_interrupt());
+	WARN_ON_ONCE(in_interrupt());
 
 	/* From now on, migration cannot move the object */
 	pin_tag(handle);
@@ -1324,8 +1322,6 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	struct size_class *class;
 	struct mapping_area *area;
 
-	BUG_ON(!handle);
-
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &page, &obj_idx);
 	get_zspage_mapping(get_first_page(page), &class_idx, &fg);
@@ -1445,8 +1441,6 @@ static void obj_free(struct zs_pool *pool, struct size_class *class,
 	unsigned long f_objidx, f_offset;
 	void *vaddr;
 
-	BUG_ON(!obj);
-
 	obj &= ~OBJ_ALLOCATED_TAG;
 	obj_to_location(obj, &f_page, &f_objidx);
 	first_page = get_first_page(f_page);
@@ -1546,7 +1540,6 @@ static void zs_object_copy(unsigned long dst, unsigned long src,
 			kunmap_atomic(d_addr);
 			kunmap_atomic(s_addr);
 			s_page = get_next_page(s_page);
-			BUG_ON(!s_page);
 			s_addr = kmap_atomic(s_page);
 			d_addr = kmap_atomic(d_page);
 			s_size = class->size - written;
@@ -1556,7 +1549,6 @@ static void zs_object_copy(unsigned long dst, unsigned long src,
 		if (d_off >= PAGE_SIZE) {
 			kunmap_atomic(d_addr);
 			d_page = get_next_page(d_page);
-			BUG_ON(!d_page);
 			d_addr = kmap_atomic(d_page);
 			d_size = class->size - written;
 			d_off = 0;
@@ -1691,8 +1683,6 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
 {
 	enum fullness_group fullness;
 
-	BUG_ON(!is_first_page(first_page));
-
 	fullness = get_fullness_group(first_page);
 	insert_zspage(first_page, class, fullness);
 	set_zspage_mapping(first_page, class->index, fullness);
@@ -1753,8 +1743,6 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 	spin_lock(&class->lock);
 	while ((src_page = isolate_source_page(class))) {
 
-		BUG_ON(!is_first_page(src_page));
-
 		if (!zs_can_compact(class))
 			break;
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 07/19] zsmalloc: reordering function parameter
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (5 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 06/19] zsmalloc: clean up many BUG_ON Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-15  6:20   ` Sergey Senozhatsky
  2016-03-11  7:30 ` [PATCH v1 08/19] zsmalloc: remove unused pool param in obj_free Minchan Kim
                   ` (11 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

This patch cleans up function parameter ordering so that the
higher-level data structure (pool or size_class) comes first.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 50 ++++++++++++++++++++++++++------------------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3c82011cc405..156edf909046 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -564,7 +564,7 @@ static const struct file_operations zs_stat_size_ops = {
 	.release        = single_release,
 };
 
-static int zs_pool_stat_create(const char *name, struct zs_pool *pool)
+static int zs_pool_stat_create(struct zs_pool *pool, const char *name)
 {
 	struct dentry *entry;
 
@@ -604,7 +604,7 @@ static void __exit zs_stat_exit(void)
 {
 }
 
-static inline int zs_pool_stat_create(const char *name, struct zs_pool *pool)
+static inline int zs_pool_stat_create(struct zs_pool *pool, const char *name)
 {
 	return 0;
 }
@@ -650,8 +650,9 @@ static enum fullness_group get_fullness_group(struct page *first_page)
  * have. This functions inserts the given zspage into the freelist
  * identified by <class, fullness_group>.
  */
-static void insert_zspage(struct page *first_page, struct size_class *class,
-				enum fullness_group fullness)
+static void insert_zspage(struct size_class *class,
+				enum fullness_group fullness,
+				struct page *first_page)
 {
 	struct page **head;
 
@@ -682,8 +683,9 @@ static void insert_zspage(struct page *first_page, struct size_class *class,
  * This function removes the given zspage from the freelist identified
  * by <class, fullness_group>.
  */
-static void remove_zspage(struct page *first_page, struct size_class *class,
-				enum fullness_group fullness)
+static void remove_zspage(struct size_class *class,
+				enum fullness_group fullness,
+				struct page *first_page)
 {
 	struct page **head;
 
@@ -725,8 +727,8 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
 	if (newfg == currfg)
 		goto out;
 
-	remove_zspage(first_page, class, currfg);
-	insert_zspage(first_page, class, newfg);
+	remove_zspage(class, currfg, first_page);
+	insert_zspage(class, newfg, first_page);
 	set_zspage_mapping(first_page, class_idx, newfg);
 
 out:
@@ -910,7 +912,7 @@ static void free_zspage(struct page *first_page)
 }
 
 /* Initialize a newly allocated zspage */
-static void init_zspage(struct page *first_page, struct size_class *class)
+static void init_zspage(struct size_class *class, struct page *first_page)
 {
 	unsigned long off = 0;
 	struct page *page = first_page;
@@ -998,7 +1000,7 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
 		prev_page = page;
 	}
 
-	init_zspage(first_page, class);
+	init_zspage(class, first_page);
 
 	first_page->freelist = location_to_obj(first_page, 0);
 	/* Maximum number of objects we can store in this zspage */
@@ -1345,8 +1347,8 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
-static unsigned long obj_malloc(struct page *first_page,
-		struct size_class *class, unsigned long handle)
+static unsigned long obj_malloc(struct size_class *class,
+				struct page *first_page, unsigned long handle)
 {
 	unsigned long obj;
 	struct link_free *link;
@@ -1423,7 +1425,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size)
 				class->size, class->pages_per_zspage));
 	}
 
-	obj = obj_malloc(first_page, class, handle);
+	obj = obj_malloc(class, first_page, handle);
 	/* Now move the zspage to another fullness group, if required */
 	fix_fullness_group(class, first_page);
 	record_obj(handle, obj);
@@ -1496,8 +1498,8 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 }
 EXPORT_SYMBOL_GPL(zs_free);
 
-static void zs_object_copy(unsigned long dst, unsigned long src,
-				struct size_class *class)
+static void zs_object_copy(struct size_class *class, unsigned long dst,
+				unsigned long src)
 {
 	struct page *s_page, *d_page;
 	unsigned long s_objidx, d_objidx;
@@ -1563,8 +1565,8 @@ static void zs_object_copy(unsigned long dst, unsigned long src,
  * Find alloced object in zspage from index object and
  * return handle.
  */
-static unsigned long find_alloced_obj(struct page *page, int index,
-					struct size_class *class)
+static unsigned long find_alloced_obj(struct size_class *class,
+					struct page *page, int index)
 {
 	unsigned long head;
 	int offset = 0;
@@ -1614,7 +1616,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 	int ret = 0;
 
 	while (1) {
-		handle = find_alloced_obj(s_page, index, class);
+		handle = find_alloced_obj(class, s_page, index);
 		if (!handle) {
 			s_page = get_next_page(s_page);
 			if (!s_page)
@@ -1631,8 +1633,8 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		}
 
 		used_obj = handle_to_obj(handle);
-		free_obj = obj_malloc(d_page, class, handle);
-		zs_object_copy(free_obj, used_obj, class);
+		free_obj = obj_malloc(class, d_page, handle);
+		zs_object_copy(class, free_obj, used_obj);
 		index++;
 		/*
 		 * record_obj updates handle's value to free_obj and it will
@@ -1661,7 +1663,7 @@ static struct page *isolate_target_page(struct size_class *class)
 	for (i = 0; i < _ZS_NR_FULLNESS_GROUPS; i++) {
 		page = class->fullness_list[i];
 		if (page) {
-			remove_zspage(page, class, i);
+			remove_zspage(class, i, page);
 			break;
 		}
 	}
@@ -1684,7 +1686,7 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
 	enum fullness_group fullness;
 
 	fullness = get_fullness_group(first_page);
-	insert_zspage(first_page, class, fullness);
+	insert_zspage(class, fullness, first_page);
 	set_zspage_mapping(first_page, class->index, fullness);
 
 	if (fullness == ZS_EMPTY) {
@@ -1709,7 +1711,7 @@ static struct page *isolate_source_page(struct size_class *class)
 		if (!page)
 			continue;
 
-		remove_zspage(page, class, i);
+		remove_zspage(class, i, page);
 		break;
 	}
 
@@ -1943,7 +1945,7 @@ struct zs_pool *zs_create_pool(const char *name, gfp_t flags)
 
 	pool->flags = flags;
 
-	if (zs_pool_stat_create(name, pool))
+	if (zs_pool_stat_create(pool, name))
 		goto err;
 
 	/*
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 08/19] zsmalloc: remove unused pool param in obj_free
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (6 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 07/19] zsmalloc: reordering function parameter Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-15  6:21   ` Sergey Senozhatsky
  2016-03-11  7:30 ` [PATCH v1 09/19] zsmalloc: keep max_object in size_class Minchan Kim
                   ` (10 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

Let's remove the unused pool parameter from obj_free.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 156edf909046..b4fb11831acb 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1435,8 +1435,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size)
 }
 EXPORT_SYMBOL_GPL(zs_malloc);
 
-static void obj_free(struct zs_pool *pool, struct size_class *class,
-			unsigned long obj)
+static void obj_free(struct size_class *class, unsigned long obj)
 {
 	struct link_free *link;
 	struct page *first_page, *f_page;
@@ -1482,7 +1481,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	class = pool->size_class[class_idx];
 
 	spin_lock(&class->lock);
-	obj_free(pool, class, obj);
+	obj_free(class, obj);
 	fullness = fix_fullness_group(class, first_page);
 	if (fullness == ZS_EMPTY) {
 		zs_stat_dec(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
@@ -1645,7 +1644,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		free_obj |= BIT(HANDLE_PIN_BIT);
 		record_obj(handle, free_obj);
 		unpin_tag(handle);
-		obj_free(pool, class, used_obj);
+		obj_free(class, used_obj);
 	}
 
 	/* Remember last position in this iteration */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 09/19] zsmalloc: keep max_object in size_class
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (7 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 08/19] zsmalloc: remove unused pool param in obj_free Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-12  1:44   ` xuyiping
  2016-03-15  6:28   ` Sergey Senozhatsky
  2016-03-11  7:30 ` [PATCH v1 10/19] zsmalloc: squeeze inuse into page->mapping Minchan Kim
                   ` (9 subsequent siblings)
  18 siblings, 2 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

Every zspage in a size_class holds the same maximum number of
objects, so we can move that count into the size_class.
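
A minimal user-space sketch (not part of this patch; PAGE_SIZE and the
class sizes are assumed example values) of the per-class count the diff
below computes once at pool creation time:

	#include <stdio.h>

	#define PAGE_SIZE	4096	/* assumed for the example */

	/* every zspage of a given class holds the same number of objects */
	static int objs_per_zspage(int pages_per_zspage, int class_size)
	{
		return pages_per_zspage * PAGE_SIZE / class_size;
	}

	int main(void)
	{
		printf("%d\n", objs_per_zspage(1, 32));		/* 128 */
		printf("%d\n", objs_per_zspage(1, 208));	/* 19 */
		return 0;
	}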

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b4fb11831acb..ca663c82c1fc 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -32,8 +32,6 @@
  *	page->freelist: points to the first free object in zspage.
  *		Free objects are linked together using in-place
  *		metadata.
- *	page->objects: maximum number of objects we can store in this
- *		zspage (class->zspage_order * PAGE_SIZE / class->size)
  *	page->lru: links together first pages of various zspages.
  *		Basically forming list of zspages in a fullness group.
  *	page->mapping: class index and fullness group of the zspage
@@ -211,6 +209,7 @@ struct size_class {
 	 * of ZS_ALIGN.
 	 */
 	int size;
+	int objs_per_zspage;
 	unsigned int index;
 
 	struct zs_size_stat stats;
@@ -622,21 +621,22 @@ static inline void zs_pool_stat_destroy(struct zs_pool *pool)
  * the pool (not yet implemented). This function returns fullness
  * status of the given page.
  */
-static enum fullness_group get_fullness_group(struct page *first_page)
+static enum fullness_group get_fullness_group(struct size_class *class,
+						struct page *first_page)
 {
-	int inuse, max_objects;
+	int inuse, objs_per_zspage;
 	enum fullness_group fg;
 
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
 	inuse = first_page->inuse;
-	max_objects = first_page->objects;
+	objs_per_zspage = class->objs_per_zspage;
 
 	if (inuse == 0)
 		fg = ZS_EMPTY;
-	else if (inuse == max_objects)
+	else if (inuse == objs_per_zspage)
 		fg = ZS_FULL;
-	else if (inuse <= 3 * max_objects / fullness_threshold_frac)
+	else if (inuse <= 3 * objs_per_zspage / fullness_threshold_frac)
 		fg = ZS_ALMOST_EMPTY;
 	else
 		fg = ZS_ALMOST_FULL;
@@ -723,7 +723,7 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
 	enum fullness_group currfg, newfg;
 
 	get_zspage_mapping(first_page, &class_idx, &currfg);
-	newfg = get_fullness_group(first_page);
+	newfg = get_fullness_group(class, first_page);
 	if (newfg == currfg)
 		goto out;
 
@@ -1003,9 +1003,6 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
 	init_zspage(class, first_page);
 
 	first_page->freelist = location_to_obj(first_page, 0);
-	/* Maximum number of objects we can store in this zspage */
-	first_page->objects = class->pages_per_zspage * PAGE_SIZE / class->size;
-
 	error = 0; /* Success */
 
 cleanup:
@@ -1235,11 +1232,11 @@ static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
 	return true;
 }
 
-static bool zspage_full(struct page *first_page)
+static bool zspage_full(struct size_class *class, struct page *first_page)
 {
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
-	return first_page->inuse == first_page->objects;
+	return first_page->inuse == class->objs_per_zspage;
 }
 
 unsigned long zs_get_total_pages(struct zs_pool *pool)
@@ -1625,7 +1622,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		}
 
 		/* Stop if there is no more space */
-		if (zspage_full(d_page)) {
+		if (zspage_full(class, d_page)) {
 			unpin_tag(handle);
 			ret = -ENOMEM;
 			break;
@@ -1684,7 +1681,7 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
 {
 	enum fullness_group fullness;
 
-	fullness = get_fullness_group(first_page);
+	fullness = get_fullness_group(class, first_page);
 	insert_zspage(class, fullness, first_page);
 	set_zspage_mapping(first_page, class->index, fullness);
 
@@ -1933,6 +1930,8 @@ struct zs_pool *zs_create_pool(const char *name, gfp_t flags)
 		class->size = size;
 		class->index = i;
 		class->pages_per_zspage = pages_per_zspage;
+		class->objs_per_zspage = class->pages_per_zspage *
+						PAGE_SIZE / class->size;
 		if (pages_per_zspage == 1 &&
 			get_maxobj_per_zspage(size, pages_per_zspage) == 1)
 			class->huge = true;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 10/19] zsmalloc: squeeze inuse into page->mapping
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (8 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 09/19] zsmalloc: keep max_object in size_class Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-11  7:30 ` [PATCH v1 11/19] zsmalloc: squeeze freelist " Minchan Kim
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

Currently, we store class:fullness in page->mapping.
The number of classes we can support is 255 and there are 4
fullness groups, so 8 + 2 = 10 bits are enough to represent them.
Meanwhile, 11 bits are enough to store the number of in-use
objects in a zspage.

For example, if we assume a 64K PAGE_SIZE and the worst-case
class_size of 32, class->pages_per_zspage becomes 1, so the number
of objects in a zspage is 2048 and 11 bits are enough. The next
class is 32 + 256 (i.e., ZS_SIZE_CLASS_DELTA). With the worst case
of ZS_MAX_PAGES_PER_ZSPAGE, 64K * 4 / (32 + 256) = 910, so 11 bits
are still enough.

So, we can squeeze the in-use object count into page->mapping.
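
As a cross-check of the arithmetic above, here is a standalone
user-space sketch (assumed code, not part of this patch) that verifies
the three fields fit into a single unsigned long, which is what
page->mapping provides:

	#include <assert.h>
	#include <limits.h>
	#include <stdio.h>

	#define CLASS_BITS	8	/* up to 255 size classes */
	#define FULLNESS_BITS	2	/* 4 fullness groups */
	#define INUSE_BITS	11	/* in-use object count */

	int main(void)
	{
		/* worst case from the description: 64K pages, class_size 32 */
		unsigned long objs = (64UL * 1024) / 32;

		printf("max objects per zspage: %lu\n", objs);	/* 2048 */
		printf("bits needed: %d of %zu\n",
		       CLASS_BITS + FULLNESS_BITS + INUSE_BITS,
		       sizeof(unsigned long) * CHAR_BIT);

		/* all three fields must fit in one word */
		assert(CLASS_BITS + FULLNESS_BITS + INUSE_BITS <=
		       (int)(sizeof(unsigned long) * CHAR_BIT));
		return 0;
	}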

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 103 ++++++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 71 insertions(+), 32 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index ca663c82c1fc..954e8758a78d 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -34,8 +34,7 @@
  *		metadata.
  *	page->lru: links together first pages of various zspages.
  *		Basically forming list of zspages in a fullness group.
- *	page->mapping: class index and fullness group of the zspage
- *	page->inuse: the number of objects that are used in this zspage
+ *	page->mapping: override by struct zs_meta
  *
  * Usage of struct page flags:
  *	PG_private: identifies the first component page
@@ -132,6 +131,13 @@
 /* each chunk includes extra space to keep handle */
 #define ZS_MAX_ALLOC_SIZE	PAGE_SIZE
 
+#define CLASS_BITS	8
+#define CLASS_MASK	((1 << CLASS_BITS) - 1)
+#define FULLNESS_BITS	2
+#define FULLNESS_MASK	((1 << FULLNESS_BITS) - 1)
+#define INUSE_BITS	11
+#define INUSE_MASK	((1 << INUSE_BITS) - 1)
+
 /*
  * On systems with 4K page size, this gives 255 size classes! There is a
  * trader-off here:
@@ -145,7 +151,7 @@
  *  ZS_MIN_ALLOC_SIZE and ZS_SIZE_CLASS_DELTA must be multiple of ZS_ALIGN
  *  (reason above)
  */
-#define ZS_SIZE_CLASS_DELTA	(PAGE_SIZE >> 8)
+#define ZS_SIZE_CLASS_DELTA	(PAGE_SIZE >> CLASS_BITS)
 
 /*
  * We do not maintain any list for completely empty or full pages
@@ -155,7 +161,7 @@ enum fullness_group {
 	ZS_ALMOST_EMPTY,
 	_ZS_NR_FULLNESS_GROUPS,
 
-	ZS_EMPTY,
+	ZS_EMPTY = _ZS_NR_FULLNESS_GROUPS,
 	ZS_FULL
 };
 
@@ -263,14 +269,11 @@ struct zs_pool {
 #endif
 };
 
-/*
- * A zspage's class index and fullness group
- * are encoded in its (first)page->mapping
- */
-#define CLASS_IDX_BITS	28
-#define FULLNESS_BITS	4
-#define CLASS_IDX_MASK	((1 << CLASS_IDX_BITS) - 1)
-#define FULLNESS_MASK	((1 << FULLNESS_BITS) - 1)
+struct zs_meta {
+	unsigned long class:CLASS_BITS;
+	unsigned long fullness:FULLNESS_BITS;
+	unsigned long inuse:INUSE_BITS;
+};
 
 struct mapping_area {
 #ifdef CONFIG_PGTABLE_MAPPING
@@ -413,28 +416,61 @@ static int is_last_page(struct page *page)
 	return PagePrivate2(page);
 }
 
+static int get_zspage_inuse(struct page *first_page)
+{
+	struct zs_meta *m;
+
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+
+	m = (struct zs_meta *)&first_page->mapping;
+
+	return m->inuse;
+}
+
+static void set_zspage_inuse(struct page *first_page, int val)
+{
+	struct zs_meta *m;
+
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+
+	m = (struct zs_meta *)&first_page->mapping;
+	m->inuse = val;
+}
+
+static void mod_zspage_inuse(struct page *first_page, int val)
+{
+	struct zs_meta *m;
+
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+
+	m = (struct zs_meta *)&first_page->mapping;
+	m->inuse += val;
+}
+
 static void get_zspage_mapping(struct page *first_page,
 				unsigned int *class_idx,
 				enum fullness_group *fullness)
 {
-	unsigned long m;
+	struct zs_meta *m;
+
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
-	m = (unsigned long)first_page->mapping;
-	*fullness = m & FULLNESS_MASK;
-	*class_idx = (m >> FULLNESS_BITS) & CLASS_IDX_MASK;
+	m = (struct zs_meta *)&first_page->mapping;
+	*fullness = m->fullness;
+	*class_idx = m->class;
 }
 
 static void set_zspage_mapping(struct page *first_page,
 				unsigned int class_idx,
 				enum fullness_group fullness)
 {
-	unsigned long m;
+	struct zs_meta *m;
+
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
-	m = ((class_idx & CLASS_IDX_MASK) << FULLNESS_BITS) |
-			(fullness & FULLNESS_MASK);
-	first_page->mapping = (struct address_space *)m;
+	m = (struct zs_meta *)&first_page->mapping;
+	m->fullness = fullness;
+	m->class = class_idx;
 }
 
 /*
@@ -627,9 +663,7 @@ static enum fullness_group get_fullness_group(struct size_class *class,
 	int inuse, objs_per_zspage;
 	enum fullness_group fg;
 
-	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
-
-	inuse = first_page->inuse;
+	inuse = get_zspage_inuse(first_page);
 	objs_per_zspage = class->objs_per_zspage;
 
 	if (inuse == 0)
@@ -672,10 +706,10 @@ static void insert_zspage(struct size_class *class,
 
 	/*
 	 * We want to see more ZS_FULL pages and less almost
-	 * empty/full. Put pages with higher ->inuse first.
+	 * empty/full. Put pages with higher inuse first.
 	 */
 	list_add_tail(&first_page->lru, &(*head)->lru);
-	if (first_page->inuse >= (*head)->inuse)
+	if (get_zspage_inuse(first_page) >= get_zspage_inuse(*head))
 		*head = first_page;
 }
 
@@ -891,7 +925,7 @@ static void free_zspage(struct page *first_page)
 	struct page *nextp, *tmp, *head_extra;
 
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
-	VM_BUG_ON_PAGE(first_page->inuse, first_page);
+	VM_BUG_ON_PAGE(get_zspage_inuse(first_page), first_page);
 
 	head_extra = (struct page *)page_private(first_page);
 
@@ -987,7 +1021,7 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
 			SetPagePrivate(page);
 			set_page_private(page, 0);
 			first_page = page;
-			first_page->inuse = 0;
+			set_zspage_inuse(page, 0);
 		}
 		if (i == 1)
 			set_page_private(first_page, (unsigned long)page);
@@ -1234,9 +1268,7 @@ static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
 
 static bool zspage_full(struct size_class *class, struct page *first_page)
 {
-	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
-
-	return first_page->inuse == class->objs_per_zspage;
+	return get_zspage_inuse(first_page) == class->objs_per_zspage;
 }
 
 unsigned long zs_get_total_pages(struct zs_pool *pool)
@@ -1369,7 +1401,7 @@ static unsigned long obj_malloc(struct size_class *class,
 		/* record handle in first_page->private */
 		set_page_private(first_page, handle);
 	kunmap_atomic(vaddr);
-	first_page->inuse++;
+	mod_zspage_inuse(first_page, 1);
 	zs_stat_inc(class, OBJ_USED, 1);
 
 	return obj;
@@ -1454,7 +1486,7 @@ static void obj_free(struct size_class *class, unsigned long obj)
 		set_page_private(first_page, 0);
 	kunmap_atomic(vaddr);
 	first_page->freelist = (void *)obj;
-	first_page->inuse--;
+	mod_zspage_inuse(first_page, -1);
 	zs_stat_dec(class, OBJ_USED, 1);
 }
 
@@ -2000,6 +2032,13 @@ static int __init zs_init(void)
 	if (ret)
 		goto notifier_fail;
 
+	/*
+	 * A zspage's class index, fullness group, inuse object count are
+	 * encoded in its (first)page->mapping so sizeof(struct zs_meta)
+	 * should be less than sizeof(page->mapping(i.e., unsigned long)).
+	 */
+	BUILD_BUG_ON(sizeof(struct zs_meta) > sizeof(unsigned long));
+
 	init_zs_size_classes();
 
 #ifdef CONFIG_ZPOOL
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 11/19] zsmalloc: squeeze freelist into page->mapping
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (9 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 10/19] zsmalloc: squeeze inuse into page->mapping Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-15  6:40   ` Sergey Senozhatsky
  2016-03-11  7:30 ` [PATCH v1 12/19] zsmalloc: move struct zs_meta from mapping to freelist Minchan Kim
                   ` (7 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

Zsmalloc stores the position of the first free object of each
zspage in first_page->freelist. If we store it as an object index
from first_page instead of an encoded location, we can squeeze it
into page->mapping, because the number of bits needed for that
index is at most 11.
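
A simplified user-space sketch (assumed code, not the kernel helper
added below, which also has to walk the sub-page chain) of how a stored
object index is turned back into a sub-page number and an offset:

	#include <stdio.h>

	#define PAGE_SIZE	4096UL	/* assumed for the example */

	static void objidx_to_page_and_ofs(unsigned long class_size,
					   unsigned long obj_idx,
					   unsigned long *nr_page,
					   unsigned long *ofs_in_page)
	{
		unsigned long ofs = obj_idx * class_size;

		*nr_page = ofs / PAGE_SIZE;	/* which sub-page of the zspage */
		*ofs_in_page = ofs % PAGE_SIZE;	/* offset inside that sub-page */
	}

	int main(void)
	{
		unsigned long nr_page, ofs;

		/* e.g. object 30 of a 208-byte class lands in sub-page 1 */
		objidx_to_page_and_ofs(208, 30, &nr_page, &ofs);
		printf("sub-page %lu, offset %lu\n", nr_page, ofs);
		return 0;
	}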

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 159 +++++++++++++++++++++++++++++++++++-----------------------
 1 file changed, 96 insertions(+), 63 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 954e8758a78d..e23cd3b2dd71 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -18,9 +18,7 @@
  * Usage of struct page fields:
  *	page->private: points to the first component (0-order) page
  *	page->index (union with page->freelist): offset of the first object
- *		starting in this page. For the first page, this is
- *		always 0, so we use this field (aka freelist) to point
- *		to the first free object in zspage.
+ *		starting in this page.
  *	page->lru: links together all component pages (except the first page)
  *		of a zspage
  *
@@ -29,9 +27,6 @@
  *	page->private: refers to the component page after the first page
  *		If the page is first_page for huge object, it stores handle.
  *		Look at size_class->huge.
- *	page->freelist: points to the first free object in zspage.
- *		Free objects are linked together using in-place
- *		metadata.
  *	page->lru: links together first pages of various zspages.
  *		Basically forming list of zspages in a fullness group.
  *	page->mapping: override by struct zs_meta
@@ -131,6 +126,7 @@
 /* each chunk includes extra space to keep handle */
 #define ZS_MAX_ALLOC_SIZE	PAGE_SIZE
 
+#define FREEOBJ_BITS 11
 #define CLASS_BITS	8
 #define CLASS_MASK	((1 << CLASS_BITS) - 1)
 #define FULLNESS_BITS	2
@@ -228,17 +224,17 @@ struct size_class {
 
 /*
  * Placed within free objects to form a singly linked list.
- * For every zspage, first_page->freelist gives head of this list.
+ * For every zspage, first_page->freeobj gives head of this list.
  *
  * This must be power of 2 and less than or equal to ZS_ALIGN
  */
 struct link_free {
 	union {
 		/*
-		 * Position of next free chunk (encodes <PFN, obj_idx>)
+		 * free object list
 		 * It's valid for non-allocated object
 		 */
-		void *next;
+		unsigned long next;
 		/*
 		 * Handle of allocated object.
 		 */
@@ -270,6 +266,7 @@ struct zs_pool {
 };
 
 struct zs_meta {
+	unsigned long freeobj:FREEOBJ_BITS;
 	unsigned long class:CLASS_BITS;
 	unsigned long fullness:FULLNESS_BITS;
 	unsigned long inuse:INUSE_BITS;
@@ -447,6 +444,26 @@ static void mod_zspage_inuse(struct page *first_page, int val)
 	m->inuse += val;
 }
 
+static void set_freeobj(struct page *first_page, int idx)
+{
+	struct zs_meta *m;
+
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+
+	m = (struct zs_meta *)&first_page->mapping;
+	m->freeobj = idx;
+}
+
+static unsigned long get_freeobj(struct page *first_page)
+{
+	struct zs_meta *m;
+
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+
+	m = (struct zs_meta *)&first_page->mapping;
+	return m->freeobj;
+}
+
 static void get_zspage_mapping(struct page *first_page,
 				unsigned int *class_idx,
 				enum fullness_group *fullness)
@@ -832,30 +849,33 @@ static struct page *get_next_page(struct page *page)
 	return next;
 }
 
-/*
- * Encode <page, obj_idx> as a single handle value.
- * We use the least bit of handle for tagging.
- */
-static void *location_to_obj(struct page *page, unsigned long obj_idx)
+static void objidx_to_page_and_ofs(struct size_class *class,
+				struct page *first_page,
+				unsigned long obj_idx,
+				struct page **obj_page,
+				unsigned long *ofs_in_page)
 {
-	unsigned long obj;
+	int i;
+	unsigned long ofs;
+	struct page *cursor;
+	int nr_page;
 
-	if (!page) {
-		VM_BUG_ON(obj_idx);
-		return NULL;
-	}
+	ofs = obj_idx * class->size;
+	cursor = first_page;
+	nr_page = ofs >> PAGE_SHIFT;
 
-	obj = page_to_pfn(page) << OBJ_INDEX_BITS;
-	obj |= ((obj_idx) & OBJ_INDEX_MASK);
-	obj <<= OBJ_TAG_BITS;
+	*ofs_in_page = ofs & ~PAGE_MASK;
+
+	for (i = 0; i < nr_page; i++)
+		cursor = get_next_page(cursor);
 
-	return (void *)obj;
+	*obj_page = cursor;
 }
 
-/*
- * Decode <page, obj_idx> pair from the given object handle. We adjust the
- * decoded obj_idx back to its original value since it was adjusted in
- * location_to_obj().
+/**
+ * obj_to_location - get (<page>, <obj_idx>) from encoded object value
+ * @page: page object resides in zspage
+ * @obj_idx: object index
  */
 static void obj_to_location(unsigned long obj, struct page **page,
 				unsigned long *obj_idx)
@@ -865,6 +885,23 @@ static void obj_to_location(unsigned long obj, struct page **page,
 	*obj_idx = (obj & OBJ_INDEX_MASK);
 }
 
+/**
+ * location_to_obj - get obj value encoded from (<page>, <obj_idx>)
+ * @page: page object resides in zspage
+ * @obj_idx: object index
+ */
+static unsigned long location_to_obj(struct page *page,
+				unsigned long obj_idx)
+{
+	unsigned long obj;
+
+	obj = page_to_pfn(page) << OBJ_INDEX_BITS;
+	obj |= obj_idx & OBJ_INDEX_MASK;
+	obj <<= OBJ_TAG_BITS;
+
+	return obj;
+}
+
 static unsigned long handle_to_obj(unsigned long handle)
 {
 	return *(unsigned long *)handle;
@@ -880,17 +917,6 @@ static unsigned long obj_to_head(struct size_class *class, struct page *page,
 		return *(unsigned long *)obj;
 }
 
-static unsigned long obj_idx_to_offset(struct page *page,
-				unsigned long obj_idx, int class_size)
-{
-	unsigned long off = 0;
-
-	if (!is_first_page(page))
-		off = page->index;
-
-	return off + obj_idx * class_size;
-}
-
 static inline int trypin_tag(unsigned long handle)
 {
 	unsigned long *ptr = (unsigned long *)handle;
@@ -916,7 +942,6 @@ static void reset_page(struct page *page)
 	clear_bit(PG_private_2, &page->flags);
 	set_page_private(page, 0);
 	page->mapping = NULL;
-	page->freelist = NULL;
 	page_mapcount_reset(page);
 }
 
@@ -948,6 +973,7 @@ static void free_zspage(struct page *first_page)
 /* Initialize a newly allocated zspage */
 static void init_zspage(struct size_class *class, struct page *first_page)
 {
+	int freeobj = 1;
 	unsigned long off = 0;
 	struct page *page = first_page;
 
@@ -956,14 +982,11 @@ static void init_zspage(struct size_class *class, struct page *first_page)
 	while (page) {
 		struct page *next_page;
 		struct link_free *link;
-		unsigned int i = 1;
 		void *vaddr;
 
 		/*
 		 * page->index stores offset of first object starting
-		 * in the page. For the first page, this is always 0,
-		 * so we use first_page->index (aka ->freelist) to store
-		 * head of corresponding zspage's freelist.
+		 * in the page.
 		 */
 		if (page != first_page)
 			page->index = off;
@@ -972,7 +995,7 @@ static void init_zspage(struct size_class *class, struct page *first_page)
 		link = (struct link_free *)vaddr + off / sizeof(*link);
 
 		while ((off += class->size) < PAGE_SIZE) {
-			link->next = location_to_obj(page, i++);
+			link->next = freeobj++ << OBJ_ALLOCATED_TAG;
 			link += class->size / sizeof(*link);
 		}
 
@@ -982,11 +1005,21 @@ static void init_zspage(struct size_class *class, struct page *first_page)
 		 * page (if present)
 		 */
 		next_page = get_next_page(page);
-		link->next = location_to_obj(next_page, 0);
+		if (next_page) {
+			link->next = freeobj++ << OBJ_ALLOCATED_TAG;
+		} else {
+			/*
+			 * Reset OBJ_ALLOCATED_TAG bit to last link for
+			 * migration to know it is allocated object or not.
+			 */
+			link->next = -1 << OBJ_ALLOCATED_TAG;
+		}
 		kunmap_atomic(vaddr);
 		page = next_page;
 		off %= PAGE_SIZE;
 	}
+
+	set_freeobj(first_page, 0);
 }
 
 /*
@@ -1036,7 +1069,6 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
 
 	init_zspage(class, first_page);
 
-	first_page->freelist = location_to_obj(first_page, 0);
 	error = 0; /* Success */
 
 cleanup:
@@ -1318,7 +1350,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	obj_to_location(obj, &page, &obj_idx);
 	get_zspage_mapping(get_first_page(page), &class_idx, &fg);
 	class = pool->size_class[class_idx];
-	off = obj_idx_to_offset(page, obj_idx, class->size);
+	off = (class->size * obj_idx) & ~PAGE_MASK;
 
 	area = &get_cpu_var(zs_map_area);
 	area->vm_mm = mm;
@@ -1357,7 +1389,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	obj_to_location(obj, &page, &obj_idx);
 	get_zspage_mapping(get_first_page(page), &class_idx, &fg);
 	class = pool->size_class[class_idx];
-	off = obj_idx_to_offset(page, obj_idx, class->size);
+	off = (class->size * obj_idx) & ~PAGE_MASK;
 
 	area = this_cpu_ptr(&zs_map_area);
 	if (off + class->size <= PAGE_SIZE)
@@ -1383,17 +1415,17 @@ static unsigned long obj_malloc(struct size_class *class,
 	struct link_free *link;
 
 	struct page *m_page;
-	unsigned long m_objidx, m_offset;
+	unsigned long m_offset;
 	void *vaddr;
 
 	handle |= OBJ_ALLOCATED_TAG;
-	obj = (unsigned long)first_page->freelist;
-	obj_to_location(obj, &m_page, &m_objidx);
-	m_offset = obj_idx_to_offset(m_page, m_objidx, class->size);
+	obj = get_freeobj(first_page);
+	objidx_to_page_and_ofs(class, first_page, obj,
+				&m_page, &m_offset);
 
 	vaddr = kmap_atomic(m_page);
 	link = (struct link_free *)vaddr + m_offset / sizeof(*link);
-	first_page->freelist = link->next;
+	set_freeobj(first_page, link->next >> OBJ_ALLOCATED_TAG);
 	if (!class->huge)
 		/* record handle in the header of allocated chunk */
 		link->handle = handle;
@@ -1404,6 +1436,8 @@ static unsigned long obj_malloc(struct size_class *class,
 	mod_zspage_inuse(first_page, 1);
 	zs_stat_inc(class, OBJ_USED, 1);
 
+	obj = location_to_obj(m_page, obj);
+
 	return obj;
 }
 
@@ -1473,19 +1507,17 @@ static void obj_free(struct size_class *class, unsigned long obj)
 
 	obj &= ~OBJ_ALLOCATED_TAG;
 	obj_to_location(obj, &f_page, &f_objidx);
+	f_offset = (class->size * f_objidx) & ~PAGE_MASK;
 	first_page = get_first_page(f_page);
-
-	f_offset = obj_idx_to_offset(f_page, f_objidx, class->size);
-
 	vaddr = kmap_atomic(f_page);
 
 	/* Insert this object in containing zspage's freelist */
 	link = (struct link_free *)(vaddr + f_offset);
-	link->next = first_page->freelist;
+	link->next = get_freeobj(first_page) << OBJ_ALLOCATED_TAG;
 	if (class->huge)
 		set_page_private(first_page, 0);
 	kunmap_atomic(vaddr);
-	first_page->freelist = (void *)obj;
+	set_freeobj(first_page, f_objidx);
 	mod_zspage_inuse(first_page, -1);
 	zs_stat_dec(class, OBJ_USED, 1);
 }
@@ -1541,8 +1573,8 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
 	obj_to_location(src, &s_page, &s_objidx);
 	obj_to_location(dst, &d_page, &d_objidx);
 
-	s_off = obj_idx_to_offset(s_page, s_objidx, class->size);
-	d_off = obj_idx_to_offset(d_page, d_objidx, class->size);
+	s_off = (class->size * s_objidx) & ~PAGE_MASK;
+	d_off = (class->size * d_objidx) & ~PAGE_MASK;
 
 	if (s_off + class->size > PAGE_SIZE)
 		s_size = PAGE_SIZE - s_off;
@@ -2033,9 +2065,10 @@ static int __init zs_init(void)
 		goto notifier_fail;
 
 	/*
-	 * A zspage's class index, fullness group, inuse object count are
-	 * encoded in its (first)page->mapping so sizeof(struct zs_meta)
-	 * should be less than sizeof(page->mapping(i.e., unsigned long)).
+	 * A zspage's a free object index, class index, fullness group,
+	 * inuse object count are encoded in its (first)page->mapping
+	 * so sizeof(struct zs_meta) should be less than
+	 * sizeof(page->mapping(i.e., unsigned long)).
 	 */
 	BUILD_BUG_ON(sizeof(struct zs_meta) > sizeof(unsigned long));
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 12/19] zsmalloc: move struct zs_meta from mapping to freelist
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (10 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 11/19] zsmalloc: squeeze freelist " Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-11  7:30 ` [PATCH v1 13/19] zsmalloc: factor page chain functionality out Minchan Kim
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

To support migration by the VM, we need a real address_space in
every page's ->mapping, so zsmalloc shouldn't use that field. So,
this patch moves zs_meta from page->mapping to page->freelist.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index e23cd3b2dd71..bfc6a048afac 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -29,7 +29,7 @@
  *		Look at size_class->huge.
  *	page->lru: links together first pages of various zspages.
  *		Basically forming list of zspages in a fullness group.
- *	page->mapping: override by struct zs_meta
+ *	page->freelist: override by struct zs_meta
  *
  * Usage of struct page flags:
  *	PG_private: identifies the first component page
@@ -419,7 +419,7 @@ static int get_zspage_inuse(struct page *first_page)
 
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
-	m = (struct zs_meta *)&first_page->mapping;
+	m = (struct zs_meta *)&first_page->freelist;
 
 	return m->inuse;
 }
@@ -430,7 +430,7 @@ static void set_zspage_inuse(struct page *first_page, int val)
 
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
-	m = (struct zs_meta *)&first_page->mapping;
+	m = (struct zs_meta *)&first_page->freelist;
 	m->inuse = val;
 }
 
@@ -440,7 +440,7 @@ static void mod_zspage_inuse(struct page *first_page, int val)
 
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
-	m = (struct zs_meta *)&first_page->mapping;
+	m = (struct zs_meta *)&first_page->freelist;
 	m->inuse += val;
 }
 
@@ -450,7 +450,7 @@ static void set_freeobj(struct page *first_page, int idx)
 
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
-	m = (struct zs_meta *)&first_page->mapping;
+	m = (struct zs_meta *)&first_page->freelist;
 	m->freeobj = idx;
 }
 
@@ -460,7 +460,7 @@ static unsigned long get_freeobj(struct page *first_page)
 
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
-	m = (struct zs_meta *)&first_page->mapping;
+	m = (struct zs_meta *)&first_page->freelist;
 	return m->freeobj;
 }
 
@@ -472,7 +472,7 @@ static void get_zspage_mapping(struct page *first_page,
 
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
-	m = (struct zs_meta *)&first_page->mapping;
+	m = (struct zs_meta *)&first_page->freelist;
 	*fullness = m->fullness;
 	*class_idx = m->class;
 }
@@ -485,7 +485,7 @@ static void set_zspage_mapping(struct page *first_page,
 
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 
-	m = (struct zs_meta *)&first_page->mapping;
+	m = (struct zs_meta *)&first_page->freelist;
 	m->fullness = fullness;
 	m->class = class_idx;
 }
@@ -941,7 +941,7 @@ static void reset_page(struct page *page)
 	clear_bit(PG_private, &page->flags);
 	clear_bit(PG_private_2, &page->flags);
 	set_page_private(page, 0);
-	page->mapping = NULL;
+	page->freelist = NULL;
 	page_mapcount_reset(page);
 }
 
@@ -1051,6 +1051,7 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
 
 		INIT_LIST_HEAD(&page->lru);
 		if (i == 0) {	/* first page */
+			page->freelist = NULL;
 			SetPagePrivate(page);
 			set_page_private(page, 0);
 			first_page = page;
@@ -2066,9 +2067,9 @@ static int __init zs_init(void)
 
 	/*
 	 * A zspage's a free object index, class index, fullness group,
-	 * inuse object count are encoded in its (first)page->mapping
+	 * inuse object count are encoded in its (first)page->freelist
 	 * so sizeof(struct zs_meta) should be less than
-	 * sizeof(page->mapping(i.e., unsigned long)).
+	 * sizeof(page->freelist(i.e., void *)).
 	 */
 	BUILD_BUG_ON(sizeof(struct zs_meta) > sizeof(unsigned long));
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 13/19] zsmalloc: factor page chain functionality out
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (11 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 12/19] zsmalloc: move struct zs_meta from mapping to freelist Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-12  3:09   ` xuyiping
  2016-03-11  7:30 ` [PATCH v1 14/19] zsmalloc: separate free_zspage from putback_zspage Minchan Kim
                   ` (5 subsequent siblings)
  18 siblings, 1 reply; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

For migration, we need to create the sub-page chain of a zspage
dynamically, so this patch factors that code out of alloc_zspage.

As a minor refactoring, it also makes the OBJ_ALLOCATED_TAG
assignment clearer in obj_malloc (it could be a separate patch,
but it is trivial, so it is folded into this one).

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 78 ++++++++++++++++++++++++++++++++++-------------------------
 1 file changed, 45 insertions(+), 33 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index bfc6a048afac..f86f8aaeb902 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -977,7 +977,9 @@ static void init_zspage(struct size_class *class, struct page *first_page)
 	unsigned long off = 0;
 	struct page *page = first_page;
 
-	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+	first_page->freelist = NULL;
+	INIT_LIST_HEAD(&first_page->lru);
+	set_zspage_inuse(first_page, 0);
 
 	while (page) {
 		struct page *next_page;
@@ -1022,13 +1024,44 @@ static void init_zspage(struct size_class *class, struct page *first_page)
 	set_freeobj(first_page, 0);
 }
 
+static void create_page_chain(struct page *pages[], int nr_pages)
+{
+	int i;
+	struct page *page;
+	struct page *prev_page = NULL;
+	struct page *first_page = NULL;
+
+	for (i = 0; i < nr_pages; i++) {
+		page = pages[i];
+
+		INIT_LIST_HEAD(&page->lru);
+		if (i == 0) {
+			SetPagePrivate(page);
+			set_page_private(page, 0);
+			first_page = page;
+		}
+
+		if (i == 1)
+			set_page_private(first_page, (unsigned long)page);
+		if (i >= 1)
+			set_page_private(page, (unsigned long)first_page);
+		if (i >= 2)
+			list_add(&page->lru, &prev_page->lru);
+		if (i == nr_pages - 1)
+			SetPagePrivate2(page);
+
+		prev_page = page;
+	}
+}
+
 /*
  * Allocate a zspage for the given size class
  */
 static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
 {
-	int i, error;
+	int i;
 	struct page *first_page = NULL, *uninitialized_var(prev_page);
+	struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE];
 
 	/*
 	 * Allocate individual pages and link them together as:
@@ -1041,43 +1074,23 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
 	 * (i.e. no other sub-page has this flag set) and PG_private_2 to
 	 * identify the last page.
 	 */
-	error = -ENOMEM;
 	for (i = 0; i < class->pages_per_zspage; i++) {
 		struct page *page;
 
 		page = alloc_page(flags);
-		if (!page)
-			goto cleanup;
-
-		INIT_LIST_HEAD(&page->lru);
-		if (i == 0) {	/* first page */
-			page->freelist = NULL;
-			SetPagePrivate(page);
-			set_page_private(page, 0);
-			first_page = page;
-			set_zspage_inuse(page, 0);
+		if (!page) {
+			while (--i >= 0)
+				__free_page(pages[i]);
+			return NULL;
 		}
-		if (i == 1)
-			set_page_private(first_page, (unsigned long)page);
-		if (i >= 1)
-			set_page_private(page, (unsigned long)first_page);
-		if (i >= 2)
-			list_add(&page->lru, &prev_page->lru);
-		if (i == class->pages_per_zspage - 1)	/* last page */
-			SetPagePrivate2(page);
-		prev_page = page;
+
+		pages[i] = page;
 	}
 
+	create_page_chain(pages, class->pages_per_zspage);
+	first_page = pages[0];
 	init_zspage(class, first_page);
 
-	error = 0; /* Success */
-
-cleanup:
-	if (unlikely(error) && first_page) {
-		free_zspage(first_page);
-		first_page = NULL;
-	}
-
 	return first_page;
 }
 
@@ -1419,7 +1432,6 @@ static unsigned long obj_malloc(struct size_class *class,
 	unsigned long m_offset;
 	void *vaddr;
 
-	handle |= OBJ_ALLOCATED_TAG;
 	obj = get_freeobj(first_page);
 	objidx_to_page_and_ofs(class, first_page, obj,
 				&m_page, &m_offset);
@@ -1429,10 +1441,10 @@ static unsigned long obj_malloc(struct size_class *class,
 	set_freeobj(first_page, link->next >> OBJ_ALLOCATED_TAG);
 	if (!class->huge)
 		/* record handle in the header of allocated chunk */
-		link->handle = handle;
+		link->handle = handle | OBJ_ALLOCATED_TAG;
 	else
 		/* record handle in first_page->private */
-		set_page_private(first_page, handle);
+		set_page_private(first_page, handle | OBJ_ALLOCATED_TAG);
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(first_page, 1);
 	zs_stat_inc(class, OBJ_USED, 1);
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 14/19] zsmalloc: separate free_zspage from putback_zspage
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (12 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 13/19] zsmalloc: factor page chain functionality out Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-11  7:30 ` [PATCH v1 15/19] zsmalloc: zs_compact refactoring Minchan Kim
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

Currently, putback_zspage frees the zspage under class->lock when
its fullness becomes ZS_EMPTY, but that makes it hard to implement
the locking scheme needed for the new zspage migration. So, this
patch separates free_zspage from putback_zspage and frees the
zspage outside class->lock, in preparation for zspage migration.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 46 +++++++++++++++++++++++-----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f86f8aaeb902..49ae6531b7ad 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -945,7 +945,8 @@ static void reset_page(struct page *page)
 	page_mapcount_reset(page);
 }
 
-static void free_zspage(struct page *first_page)
+static void free_zspage(struct zs_pool *pool, struct size_class *class,
+			struct page *first_page)
 {
 	struct page *nextp, *tmp, *head_extra;
 
@@ -968,6 +969,11 @@ static void free_zspage(struct page *first_page)
 	}
 	reset_page(head_extra);
 	__free_page(head_extra);
+
+	zs_stat_dec(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
+			class->size, class->pages_per_zspage));
+	atomic_long_sub(class->pages_per_zspage,
+				&pool->pages_allocated);
 }
 
 /* Initialize a newly allocated zspage */
@@ -1557,13 +1563,8 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	spin_lock(&class->lock);
 	obj_free(class, obj);
 	fullness = fix_fullness_group(class, first_page);
-	if (fullness == ZS_EMPTY) {
-		zs_stat_dec(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
-				class->size, class->pages_per_zspage));
-		atomic_long_sub(class->pages_per_zspage,
-				&pool->pages_allocated);
-		free_zspage(first_page);
-	}
+	if (fullness == ZS_EMPTY)
+		free_zspage(pool, class, first_page);
 	spin_unlock(&class->lock);
 	unpin_tag(handle);
 
@@ -1750,7 +1751,7 @@ static struct page *isolate_target_page(struct size_class *class)
  * @class: destination class
  * @first_page: target page
  *
- * Return @fist_page's fullness_group
+ * Return @first_page's updated fullness_group
  */
 static enum fullness_group putback_zspage(struct zs_pool *pool,
 			struct size_class *class,
@@ -1762,15 +1763,6 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
 	insert_zspage(class, fullness, first_page);
 	set_zspage_mapping(first_page, class->index, fullness);
 
-	if (fullness == ZS_EMPTY) {
-		zs_stat_dec(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
-			class->size, class->pages_per_zspage));
-		atomic_long_sub(class->pages_per_zspage,
-				&pool->pages_allocated);
-
-		free_zspage(first_page);
-	}
-
 	return fullness;
 }
 
@@ -1833,23 +1825,31 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 			if (!migrate_zspage(pool, class, &cc))
 				break;
 
-			putback_zspage(pool, class, dst_page);
+			VM_BUG_ON_PAGE(putback_zspage(pool, class,
+				dst_page) == ZS_EMPTY, dst_page);
 		}
 
 		/* Stop if we couldn't find slot */
 		if (dst_page == NULL)
 			break;
 
-		putback_zspage(pool, class, dst_page);
-		if (putback_zspage(pool, class, src_page) == ZS_EMPTY)
+		VM_BUG_ON_PAGE(putback_zspage(pool, class,
+				dst_page) == ZS_EMPTY, dst_page);
+		if (putback_zspage(pool, class, src_page) == ZS_EMPTY) {
 			pool->stats.pages_compacted += class->pages_per_zspage;
-		spin_unlock(&class->lock);
+			spin_unlock(&class->lock);
+			free_zspage(pool, class, src_page);
+		} else {
+			spin_unlock(&class->lock);
+		}
+
 		cond_resched();
 		spin_lock(&class->lock);
 	}
 
 	if (src_page)
-		putback_zspage(pool, class, src_page);
+		VM_BUG_ON_PAGE(putback_zspage(pool, class,
+				src_page) == ZS_EMPTY, src_page);
 
 	spin_unlock(&class->lock);
 }
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 15/19] zsmalloc: zs_compact refactoring
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (13 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 14/19] zsmalloc: separate free_zspage from putback_zspage Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-11  7:30 ` [PATCH v1 16/19] zsmalloc: migrate head page of zspage Minchan Kim
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

Currently, we rely on class->lock to prevent zspage destruction.
That was okay until now because the critical section is short, but
with run-time migration it could become long, so class->lock is no
longer a good approach.

So, this patch introduces the [un]freeze_zspage functions, which
freeze the allocated objects in a zspage by pinning their tags so
a user cannot free an in-use object. With those functions, this
patch redesigns compaction.

Those functions will also be used to implement zspage runtime
migration.
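
A rough user-space analogue (assumed code, heavily simplified compared
to the kernel's pin_tag helpers) of the pin-tag idea freeze_zspage
relies on: one bit of the handle word acts as a per-object lock, so a
frozen object cannot be freed concurrently:

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	#define HANDLE_PIN_BIT	0

	static bool trypin(atomic_ulong *handle)
	{
		unsigned long old;

		old = atomic_fetch_or(handle, 1UL << HANDLE_PIN_BIT);
		return !(old & (1UL << HANDLE_PIN_BIT));	/* got the bit? */
	}

	static void unpin(atomic_ulong *handle)
	{
		atomic_fetch_and(handle, ~(1UL << HANDLE_PIN_BIT));
	}

	int main(void)
	{
		atomic_ulong handle = 0x1000;	/* fake handle value */

		printf("pin:         %d\n", trypin(&handle));	/* 1: frozen */
		printf("pin again:   %d\n", trypin(&handle));	/* 0: already pinned */
		unpin(&handle);
		printf("after unpin: %d\n", trypin(&handle));	/* 1 again */
		return 0;
	}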

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 393 ++++++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 257 insertions(+), 136 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 49ae6531b7ad..43ab16affa68 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -917,6 +917,13 @@ static unsigned long obj_to_head(struct size_class *class, struct page *page,
 		return *(unsigned long *)obj;
 }
 
+static inline int testpin_tag(unsigned long handle)
+{
+	unsigned long *ptr = (unsigned long *)handle;
+
+	return test_bit(HANDLE_PIN_BIT, ptr);
+}
+
 static inline int trypin_tag(unsigned long handle)
 {
 	unsigned long *ptr = (unsigned long *)handle;
@@ -945,8 +952,7 @@ static void reset_page(struct page *page)
 	page_mapcount_reset(page);
 }
 
-static void free_zspage(struct zs_pool *pool, struct size_class *class,
-			struct page *first_page)
+static void free_zspage(struct zs_pool *pool, struct page *first_page)
 {
 	struct page *nextp, *tmp, *head_extra;
 
@@ -969,11 +975,6 @@ static void free_zspage(struct zs_pool *pool, struct size_class *class,
 	}
 	reset_page(head_extra);
 	__free_page(head_extra);
-
-	zs_stat_dec(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
-			class->size, class->pages_per_zspage));
-	atomic_long_sub(class->pages_per_zspage,
-				&pool->pages_allocated);
 }
 
 /* Initialize a newly allocated zspage */
@@ -1323,6 +1324,11 @@ static bool zspage_full(struct size_class *class, struct page *first_page)
 	return get_zspage_inuse(first_page) == class->objs_per_zspage;
 }
 
+static bool zspage_empty(struct size_class *class, struct page *first_page)
+{
+	return get_zspage_inuse(first_page) == 0;
+}
+
 unsigned long zs_get_total_pages(struct zs_pool *pool)
 {
 	return atomic_long_read(&pool->pages_allocated);
@@ -1453,7 +1459,6 @@ static unsigned long obj_malloc(struct size_class *class,
 		set_page_private(first_page, handle | OBJ_ALLOCATED_TAG);
 	kunmap_atomic(vaddr);
 	mod_zspage_inuse(first_page, 1);
-	zs_stat_inc(class, OBJ_USED, 1);
 
 	obj = location_to_obj(m_page, obj);
 
@@ -1508,6 +1513,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size)
 	}
 
 	obj = obj_malloc(class, first_page, handle);
+	zs_stat_inc(class, OBJ_USED, 1);
 	/* Now move the zspage to another fullness group, if required */
 	fix_fullness_group(class, first_page);
 	record_obj(handle, obj);
@@ -1538,7 +1544,6 @@ static void obj_free(struct size_class *class, unsigned long obj)
 	kunmap_atomic(vaddr);
 	set_freeobj(first_page, f_objidx);
 	mod_zspage_inuse(first_page, -1);
-	zs_stat_dec(class, OBJ_USED, 1);
 }
 
 void zs_free(struct zs_pool *pool, unsigned long handle)
@@ -1562,10 +1567,19 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 
 	spin_lock(&class->lock);
 	obj_free(class, obj);
+	zs_stat_dec(class, OBJ_USED, 1);
 	fullness = fix_fullness_group(class, first_page);
-	if (fullness == ZS_EMPTY)
-		free_zspage(pool, class, first_page);
+	if (fullness == ZS_EMPTY) {
+		zs_stat_dec(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
+				class->size, class->pages_per_zspage));
+		spin_unlock(&class->lock);
+		atomic_long_sub(class->pages_per_zspage,
+					&pool->pages_allocated);
+		free_zspage(pool, first_page);
+		goto out;
+	}
 	spin_unlock(&class->lock);
+out:
 	unpin_tag(handle);
 
 	free_handle(pool, handle);
@@ -1635,127 +1649,66 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
 	kunmap_atomic(s_addr);
 }
 
-/*
- * Find alloced object in zspage from index object and
- * return handle.
- */
-static unsigned long find_alloced_obj(struct size_class *class,
-					struct page *page, int index)
+static unsigned long handle_from_obj(struct size_class *class,
+				struct page *first_page, int obj_idx)
 {
-	unsigned long head;
-	int offset = 0;
-	unsigned long handle = 0;
-	void *addr = kmap_atomic(page);
-
-	if (!is_first_page(page))
-		offset = page->index;
-	offset += class->size * index;
-
-	while (offset < PAGE_SIZE) {
-		head = obj_to_head(class, page, addr + offset);
-		if (head & OBJ_ALLOCATED_TAG) {
-			handle = head & ~OBJ_ALLOCATED_TAG;
-			if (trypin_tag(handle))
-				break;
-			handle = 0;
-		}
+	struct page *page;
+	unsigned long ofs_in_page;
+	void *addr;
+	unsigned long head, handle = 0;
 
-		offset += class->size;
-		index++;
-	}
+	objidx_to_page_and_ofs(class, first_page, obj_idx,
+			&page, &ofs_in_page);
 
+	addr = kmap_atomic(page);
+	head = obj_to_head(class, page, addr + ofs_in_page);
+	if (head & OBJ_ALLOCATED_TAG)
+		handle = head & ~OBJ_ALLOCATED_TAG;
 	kunmap_atomic(addr);
+
 	return handle;
 }
 
-struct zs_compact_control {
-	/* Source page for migration which could be a subpage of zspage. */
-	struct page *s_page;
-	/* Destination page for migration which should be a first page
-	 * of zspage. */
-	struct page *d_page;
-	 /* Starting object index within @s_page which used for live object
-	  * in the subpage. */
-	int index;
-};
-
-static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
-				struct zs_compact_control *cc)
+static int migrate_zspage(struct size_class *class, struct page *dst_page,
+				struct page *src_page)
 {
-	unsigned long used_obj, free_obj;
 	unsigned long handle;
-	struct page *s_page = cc->s_page;
-	struct page *d_page = cc->d_page;
-	unsigned long index = cc->index;
-	int ret = 0;
+	unsigned long old_obj, new_obj;
+	int i;
+	int nr_migrated = 0;
 
-	while (1) {
-		handle = find_alloced_obj(class, s_page, index);
-		if (!handle) {
-			s_page = get_next_page(s_page);
-			if (!s_page)
-				break;
-			index = 0;
+	for (i = 0; i < class->objs_per_zspage; i++) {
+		handle = handle_from_obj(class, src_page, i);
+		if (!handle)
 			continue;
-		}
-
-		/* Stop if there is no more space */
-		if (zspage_full(class, d_page)) {
-			unpin_tag(handle);
-			ret = -ENOMEM;
+		if (zspage_full(class, dst_page))
 			break;
-		}
-
-		used_obj = handle_to_obj(handle);
-		free_obj = obj_malloc(class, d_page, handle);
-		zs_object_copy(class, free_obj, used_obj);
-		index++;
+		old_obj = handle_to_obj(handle);
+		new_obj = obj_malloc(class, dst_page, handle);
+		zs_object_copy(class, new_obj, old_obj);
+		nr_migrated++;
 		/*
 		 * record_obj updates handle's value to free_obj and it will
 		 * invalidate lock bit(ie, HANDLE_PIN_BIT) of handle, which
 		 * breaks synchronization using pin_tag(e,g, zs_free) so
 		 * let's keep the lock bit.
 		 */
-		free_obj |= BIT(HANDLE_PIN_BIT);
-		record_obj(handle, free_obj);
-		unpin_tag(handle);
-		obj_free(class, used_obj);
+		new_obj |= BIT(HANDLE_PIN_BIT);
+		record_obj(handle, new_obj);
+		obj_free(class, old_obj);
 	}
-
-	/* Remember last position in this iteration */
-	cc->s_page = s_page;
-	cc->index = index;
-
-	return ret;
-}
-
-static struct page *isolate_target_page(struct size_class *class)
-{
-	int i;
-	struct page *page;
-
-	for (i = 0; i < _ZS_NR_FULLNESS_GROUPS; i++) {
-		page = class->fullness_list[i];
-		if (page) {
-			remove_zspage(class, i, page);
-			break;
-		}
-	}
-
-	return page;
+	return nr_migrated;
 }
 
 /*
  * putback_zspage - add @first_page into right class's fullness list
- * @pool: target pool
  * @class: destination class
  * @first_page: target page
  *
  * Return @first_page's updated fullness_group
  */
-static enum fullness_group putback_zspage(struct zs_pool *pool,
-			struct size_class *class,
-			struct page *first_page)
+static enum fullness_group putback_zspage(struct size_class *class,
+					struct page *first_page)
 {
 	enum fullness_group fullness;
 
@@ -1766,17 +1719,155 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
 	return fullness;
 }
 
+/*
+ * freeze_zspage - freeze all objects in a zspage
+ * @class: size class of the page
+ * @first_page: first page of zspage
+ *
+ * Freeze all allocated objects in a zspage so objects couldn't be
+ * freed until unfreeze objects. It should be called under class->lock.
+ *
+ * RETURNS:
+ * the number of pinned objects
+ */
+static int freeze_zspage(struct size_class *class, struct page *first_page)
+{
+	unsigned long obj_idx;
+	struct page *obj_page;
+	unsigned long ofs;
+	void *addr;
+	int nr_freeze = 0;
+
+	for (obj_idx = 0; obj_idx < class->objs_per_zspage; obj_idx++) {
+		unsigned long head;
+
+		objidx_to_page_and_ofs(class, first_page, obj_idx,
+					&obj_page, &ofs);
+		addr = kmap_atomic(obj_page);
+		head = obj_to_head(class, obj_page, addr + ofs);
+		if (head & OBJ_ALLOCATED_TAG) {
+			unsigned long handle = head & ~OBJ_ALLOCATED_TAG;
+
+			if (!trypin_tag(handle)) {
+				kunmap_atomic(addr);
+				break;
+			}
+			nr_freeze++;
+		}
+		kunmap_atomic(addr);
+	}
+
+	return nr_freeze;
+}
+
+/*
+ * unfreeze_page - unfreeze objects freezed by freeze_zspage in a zspage
+ * @class: size class of the page
+ * @first_page: freezed zspage to unfreeze
+ * @nr_obj: the number of objects to unfreeze
+ *
+ * unfreeze objects in a zspage.
+ */
+static void unfreeze_zspage(struct size_class *class, struct page *first_page,
+			int nr_obj)
+{
+	unsigned long obj_idx;
+	struct page *obj_page;
+	unsigned long ofs;
+	void *addr;
+	int nr_unfreeze = 0;
+
+	for (obj_idx = 0; obj_idx < class->objs_per_zspage &&
+			nr_unfreeze < nr_obj; obj_idx++) {
+		unsigned long head;
+
+		objidx_to_page_and_ofs(class, first_page, obj_idx,
+					&obj_page, &ofs);
+		addr = kmap_atomic(obj_page);
+		head = obj_to_head(class, obj_page, addr + ofs);
+		if (head & OBJ_ALLOCATED_TAG) {
+			unsigned long handle = head & ~OBJ_ALLOCATED_TAG;
+
+			VM_BUG_ON(!testpin_tag(handle));
+			unpin_tag(handle);
+			nr_unfreeze++;
+		}
+		kunmap_atomic(addr);
+	}
+}
+
+/*
+ * isolate_source_page - isolate a zspage for migration source
+ * @class: size class of zspage for isolation
+ *
+ * Returns a zspage which are isolated from list so anyone can
+ * allocate a object from that page. As well, freeze all objects
+ * allocated in the zspage so anyone cannot access that objects
+ * (e.g., zs_map_object, zs_free).
+ */
 static struct page *isolate_source_page(struct size_class *class)
 {
 	int i;
 	struct page *page = NULL;
 
 	for (i = ZS_ALMOST_EMPTY; i >= ZS_ALMOST_FULL; i--) {
+		int inuse, freezed;
+
 		page = class->fullness_list[i];
 		if (!page)
 			continue;
 
 		remove_zspage(class, i, page);
+
+		inuse = get_zspage_inuse(page);
+		freezed = freeze_zspage(class, page);
+
+		if (inuse != freezed) {
+			unfreeze_zspage(class, page, freezed);
+			putback_zspage(class, page);
+			page = NULL;
+			continue;
+		}
+
+		break;
+	}
+
+	return page;
+}
+
+/*
+ * isolate_target_page - isolate a zspage for migration target
+ * @class: size class of zspage for isolation
+ *
+ * Returns a zspage which are isolated from list so anyone can
+ * allocate a object from that page. As well, freeze all objects
+ * allocated in the zspage so anyone cannot access that objects
+ * (e.g., zs_map_object, zs_free).
+ */
+static struct page *isolate_target_page(struct size_class *class)
+{
+	int i;
+	struct page *page;
+
+	for (i = 0; i < _ZS_NR_FULLNESS_GROUPS; i++) {
+		int inuse, freezed;
+
+		page = class->fullness_list[i];
+		if (!page)
+			continue;
+
+		remove_zspage(class, i, page);
+
+		inuse = get_zspage_inuse(page);
+		freezed = freeze_zspage(class, page);
+
+		if (inuse != freezed) {
+			unfreeze_zspage(class, page, freezed);
+			putback_zspage(class, page);
+			page = NULL;
+			continue;
+		}
+
 		break;
 	}
 
@@ -1791,9 +1882,11 @@ static struct page *isolate_source_page(struct size_class *class)
 static unsigned long zs_can_compact(struct size_class *class)
 {
 	unsigned long obj_wasted;
+	unsigned long obj_allocated, obj_used;
 
-	obj_wasted = zs_stat_get(class, OBJ_ALLOCATED) -
-		zs_stat_get(class, OBJ_USED);
+	obj_allocated = zs_stat_get(class, OBJ_ALLOCATED);
+	obj_used = zs_stat_get(class, OBJ_USED);
+	obj_wasted = obj_allocated - obj_used;
 
 	obj_wasted /= get_maxobj_per_zspage(class->size,
 			class->pages_per_zspage);
@@ -1803,53 +1896,81 @@ static unsigned long zs_can_compact(struct size_class *class)
 
 static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 {
-	struct zs_compact_control cc;
-	struct page *src_page;
+	struct page *src_page = NULL;
 	struct page *dst_page = NULL;
 
-	spin_lock(&class->lock);
-	while ((src_page = isolate_source_page(class))) {
+	while (1) {
+		int nr_migrated;
 
-		if (!zs_can_compact(class))
+		spin_lock(&class->lock);
+		if (!zs_can_compact(class)) {
+			spin_unlock(&class->lock);
 			break;
+		}
 
-		cc.index = 0;
-		cc.s_page = src_page;
+		/*
+		 * Isolate source page and freeze all objects in a zspage
+		 * to prevent zspage destroying.
+		 */
+		if (!src_page) {
+			src_page = isolate_source_page(class);
+			if (!src_page) {
+				spin_unlock(&class->lock);
+				break;
+			}
+		}
 
-		while ((dst_page = isolate_target_page(class))) {
-			cc.d_page = dst_page;
-			/*
-			 * If there is no more space in dst_page, resched
-			 * and see if anyone had allocated another zspage.
-			 */
-			if (!migrate_zspage(pool, class, &cc))
+		/* Isolate target page and freeze all objects in the zspage */
+		if (!dst_page) {
+			dst_page = isolate_target_page(class);
+			if (!dst_page) {
+				spin_unlock(&class->lock);
 				break;
+			}
+		}
+		spin_unlock(&class->lock);
+
+		nr_migrated = migrate_zspage(class, dst_page, src_page);
 
-			VM_BUG_ON_PAGE(putback_zspage(pool, class,
-				dst_page) == ZS_EMPTY, dst_page);
+		if (zspage_full(class, dst_page)) {
+			spin_lock(&class->lock);
+			putback_zspage(class, dst_page);
+			unfreeze_zspage(class, dst_page,
+				class->objs_per_zspage);
+			spin_unlock(&class->lock);
+			dst_page = NULL;
 		}
 
-		/* Stop if we couldn't find slot */
-		if (dst_page == NULL)
-			break;
+		if (zspage_empty(class, src_page)) {
+			free_zspage(pool, src_page);
+			spin_lock(&class->lock);
+			zs_stat_dec(class, OBJ_ALLOCATED,
+				get_maxobj_per_zspage(
+				class->size, class->pages_per_zspage));
+			atomic_long_sub(class->pages_per_zspage,
+					&pool->pages_allocated);
 
-		VM_BUG_ON_PAGE(putback_zspage(pool, class,
-				dst_page) == ZS_EMPTY, dst_page);
-		if (putback_zspage(pool, class, src_page) == ZS_EMPTY) {
 			pool->stats.pages_compacted += class->pages_per_zspage;
 			spin_unlock(&class->lock);
-			free_zspage(pool, class, src_page);
-		} else {
-			spin_unlock(&class->lock);
+			src_page = NULL;
 		}
+	}
 
-		cond_resched();
-		spin_lock(&class->lock);
+	if (!src_page && !dst_page)
+		return;
+
+	spin_lock(&class->lock);
+	if (src_page) {
+		putback_zspage(class, src_page);
+		unfreeze_zspage(class, src_page,
+				class->objs_per_zspage);
 	}
 
-	if (src_page)
-		VM_BUG_ON_PAGE(putback_zspage(pool, class,
-				src_page) == ZS_EMPTY, src_page);
+	if (dst_page) {
+		putback_zspage(class, dst_page);
+		unfreeze_zspage(class, dst_page,
+				class->objs_per_zspage);
+	}
 
 	spin_unlock(&class->lock);
 }
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 16/19] zsmalloc: migrate head page of zspage
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (14 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 15/19] zsmalloc: zs_compact refactoring Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-11  7:30 ` [PATCH v1 17/19] zsmalloc: use single linked list for page chain Minchan Kim
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

This patch introduces a run-time migration feature for zspage.
To begin with, it supports only head page migration to keep the
review easy (later patches will add tail page migration).

For migration, it provides three functions:

* zs_page_isolate

It isolates the zspage that contains the subpage the VM wants to
migrate and removes it from its size class, so nobody can allocate
a new object from that zspage. IOW, allocation is frozen.

* zs_page_migrate

First of all, it freezes the zspage to prevent zspage destruction,
so nobody can free an object. Then it copies the contents of the
old page to the new page and creates a new page chain with the new
page. If that succeeds, it drops the refcount of the old page to
free it and puts the new zspage back into the right zsmalloc data
structure. Lastly, it unfreezes the zspage so object
allocation/free is allowed again.

* zs_page_putback

It returns the isolated zspage to the right fullness_group list
if migrating a page fails.
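
(For reference, a condensed sketch of how the three callbacks are
wired up, taken from the diff below: zsmalloc registers them via the
anonymous inode's address_space_operations and marks zspage pages as
movable.)

const struct address_space_operations zsmalloc_aops = {
	.isolate_page	= zs_page_isolate,
	.migratepage	= zs_page_migrate,
	.putback_page	= zs_page_putback,
};

/* at pool creation */
pool->inode = anon_inode_new();
pool->inode->i_mapping->a_ops = &zsmalloc_aops;
pool->inode->i_mapping->private_data = pool;

/* for the first page of a newly allocated zspage
 * (a later patch in this series extends this to every page)
 */
first_page->mapping = pool->inode->i_mapping;
__SetPageMovable(first_page);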

NOTE: A hurdle for supporting migration is that a zspage can be
destroyed while migration is going on. Once a zspage is isolated,
nobody can allocate an object from it but objects can still be
freed, so the zspage could be destroyed before all of its objects
are frozen to prevent deallocation. The problem is the large window
between zs_page_isolate and freeze_zspage in zs_page_migrate during
which the zspage could be destroyed.

An easy approach to solve the problem is to freeze the objects in
zs_page_isolate, but it has the drawback that no object can be
deallocated from the moment of isolation until migration completes
or fails. Since there is a large time gap between isolation and
migration, any object freeing on another CPU would spin on pin_tag,
which would cause big latency. So this patch introduces
lock_zspage, which takes PG_lock of all pages in a zspage right
before freeing the zspage. The VM also locks the page right before
calling ->migratepage, so the race doesn't exist any more.
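
(A condensed sketch of the ordering that closes this race, based on
the code in the diff below; not a complete call path:)

/* freeing side (zs_free -> free_zspage): */
	lock_zspage(first_page);   /* trylock_page() on every page of the chain */
	/* ... reset_page() + unlock_page() + __free_page() each page ... */

/* migrating side (->migratepage == zs_page_migrate), with the VM
 * already holding PG_lock of @page:
 */
	freeze_zspage(class, first_page);   /* pin every allocated object */
	/* ... copy contents, remap objects, replace_sub_page() ... */
	unfreeze_zspage(class, first_page, freezed);

Because free_zspage cannot take the page lock while the VM holds it
for migration, the zspage cannot be torn down in the middle of
zs_page_migrate.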

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 291 +++++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 280 insertions(+), 11 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 43ab16affa68..8eb785000069 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -56,6 +56,8 @@
 #include <linux/debugfs.h>
 #include <linux/zsmalloc.h>
 #include <linux/zpool.h>
+#include <linux/anon_inodes.h>
+#include <linux/migrate.h>
 
 /*
  * This must be power of 2 and greater than of equal to sizeof(link_free).
@@ -263,6 +265,7 @@ struct zs_pool {
 #ifdef CONFIG_ZSMALLOC_STAT
 	struct dentry *stat_dentry;
 #endif
+	struct inode *inode;
 };
 
 struct zs_meta {
@@ -413,6 +416,29 @@ static int is_last_page(struct page *page)
 	return PagePrivate2(page);
 }
 
+/*
+ * Indicate that whether zspage is isolated for page migration.
+ * Protected by size_class lock
+ */
+static void SetZsPageIsolate(struct page *first_page)
+{
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+	SetPageUptodate(first_page);
+}
+
+static int ZsPageIsolate(struct page *first_page)
+{
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+
+	return PageUptodate(first_page);
+}
+
+static void ClearZsPageIsolate(struct page *first_page)
+{
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+	ClearPageUptodate(first_page);
+}
+
 static int get_zspage_inuse(struct page *first_page)
 {
 	struct zs_meta *m;
@@ -778,8 +804,11 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
 	if (newfg == currfg)
 		goto out;
 
-	remove_zspage(class, currfg, first_page);
-	insert_zspage(class, newfg, first_page);
+	/* Later, putback will insert page to right list */
+	if (!ZsPageIsolate(first_page)) {
+		remove_zspage(class, currfg, first_page);
+		insert_zspage(class, newfg, first_page);
+	}
 	set_zspage_mapping(first_page, class_idx, newfg);
 
 out:
@@ -945,13 +974,31 @@ static void unpin_tag(unsigned long handle)
 
 static void reset_page(struct page *page)
 {
+	__ClearPageMovable(page);
 	clear_bit(PG_private, &page->flags);
 	clear_bit(PG_private_2, &page->flags);
 	set_page_private(page, 0);
 	page->freelist = NULL;
+	page->mapping = NULL;
 	page_mapcount_reset(page);
 }
 
+/**
+ * lock_zspage - lock all pages in the zspage
+ * @first_page: head page of the zspage
+ *
+ * To prevent destroy during migration, zspage freeing should
+ * hold locks of all pages in a zspage
+ */
+void lock_zspage(struct page *first_page)
+{
+	struct page *cursor = first_page;
+
+	do {
+		while (!trylock_page(cursor));
+	} while ((cursor = get_next_page(cursor)) != NULL);
+}
+
 static void free_zspage(struct zs_pool *pool, struct page *first_page)
 {
 	struct page *nextp, *tmp, *head_extra;
@@ -959,26 +1006,31 @@ static void free_zspage(struct zs_pool *pool, struct page *first_page)
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 	VM_BUG_ON_PAGE(get_zspage_inuse(first_page), first_page);
 
+	lock_zspage(first_page);
 	head_extra = (struct page *)page_private(first_page);
 
-	reset_page(first_page);
-	__free_page(first_page);
-
 	/* zspage with only 1 system page */
 	if (!head_extra)
-		return;
+		goto out;
 
 	list_for_each_entry_safe(nextp, tmp, &head_extra->lru, lru) {
 		list_del(&nextp->lru);
 		reset_page(nextp);
+		unlock_page(nextp);
 		__free_page(nextp);
 	}
 	reset_page(head_extra);
+	unlock_page(head_extra);
 	__free_page(head_extra);
+out:
+	reset_page(first_page);
+	unlock_page(first_page);
+	__free_page(first_page);
 }
 
 /* Initialize a newly allocated zspage */
-static void init_zspage(struct size_class *class, struct page *first_page)
+static void init_zspage(struct size_class *class, struct page *first_page,
+			struct address_space *mapping)
 {
 	int freeobj = 1;
 	unsigned long off = 0;
@@ -987,6 +1039,10 @@ static void init_zspage(struct size_class *class, struct page *first_page)
 	first_page->freelist = NULL;
 	INIT_LIST_HEAD(&first_page->lru);
 	set_zspage_inuse(first_page, 0);
+	BUG_ON(!trylock_page(first_page));
+	first_page->mapping = mapping;
+	__SetPageMovable(first_page);
+	unlock_page(first_page);
 
 	while (page) {
 		struct page *next_page;
@@ -1061,10 +1117,46 @@ static void create_page_chain(struct page *pages[], int nr_pages)
 	}
 }
 
+static void replace_sub_page(struct size_class *class, struct page *first_page,
+		struct page *newpage, struct page *oldpage)
+{
+	struct page *page;
+	struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE] = {NULL,};
+	int idx = 0;
+
+	page = first_page;
+	do {
+		if (page == oldpage)
+			pages[idx] = newpage;
+		else
+			pages[idx] = page;
+		idx++;
+	} while ((page = get_next_page(page)) != NULL);
+
+	create_page_chain(pages, class->pages_per_zspage);
+
+	if (is_first_page(oldpage)) {
+		enum fullness_group fg;
+		int class_idx;
+
+		SetZsPageIsolate(newpage);
+		get_zspage_mapping(oldpage, &class_idx, &fg);
+		set_zspage_mapping(newpage, class_idx, fg);
+		set_freeobj(newpage, get_freeobj(oldpage));
+		set_zspage_inuse(newpage, get_zspage_inuse(oldpage));
+		if (class->huge)
+			set_page_private(newpage,  page_private(oldpage));
+	}
+
+	newpage->mapping = oldpage->mapping;
+	__SetPageMovable(newpage);
+}
+
 /*
  * Allocate a zspage for the given size class
  */
-static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
+static struct page *alloc_zspage(struct zs_pool *pool,
+				struct size_class *class)
 {
 	int i;
 	struct page *first_page = NULL, *uninitialized_var(prev_page);
@@ -1084,7 +1176,7 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
 	for (i = 0; i < class->pages_per_zspage; i++) {
 		struct page *page;
 
-		page = alloc_page(flags);
+		page = alloc_page(pool->flags);
 		if (!page) {
 			while (--i >= 0)
 				__free_page(pages[i]);
@@ -1096,7 +1188,7 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
 
 	create_page_chain(pages, class->pages_per_zspage);
 	first_page = pages[0];
-	init_zspage(class, first_page);
+	init_zspage(class, first_page, pool->inode->i_mapping);
 
 	return first_page;
 }
@@ -1497,7 +1589,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size)
 
 	if (!first_page) {
 		spin_unlock(&class->lock);
-		first_page = alloc_zspage(class, pool->flags);
+		first_page = alloc_zspage(pool, class);
 		if (unlikely(!first_page)) {
 			free_handle(pool, handle);
 			return 0;
@@ -1557,6 +1649,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	if (unlikely(!handle))
 		return;
 
+	/* Once handle is pinned, page|object migration cannot work */
 	pin_tag(handle);
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &f_page, &f_objidx);
@@ -1712,6 +1805,9 @@ static enum fullness_group putback_zspage(struct size_class *class,
 {
 	enum fullness_group fullness;
 
+	VM_BUG_ON_PAGE(!list_empty(&first_page->lru), first_page);
+	VM_BUG_ON_PAGE(ZsPageIsolate(first_page), first_page);
+
 	fullness = get_fullness_group(class, first_page);
 	insert_zspage(class, fullness, first_page);
 	set_zspage_mapping(first_page, class->index, fullness);
@@ -2057,6 +2153,173 @@ static int zs_register_shrinker(struct zs_pool *pool)
 	return register_shrinker(&pool->shrinker);
 }
 
+bool zs_page_isolate(struct page *page, isolate_mode_t mode)
+{
+	struct zs_pool *pool;
+	struct size_class *class;
+	int class_idx;
+	enum fullness_group fullness;
+	struct page *first_page;
+
+	/*
+	 * The page is locked so it couldn't be destroyed.
+	 * For detail, look at lock_zspage in free_zspage.
+	 */
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_PAGE(PageIsolated(page), page);
+	/*
+	 * In this implementation, it allows only first page migration.
+	 */
+	VM_BUG_ON_PAGE(!is_first_page(page), page);
+	first_page = page;
+
+	/*
+	 * Without class lock, fullness is meaningless while constant
+	 * class_idx is okay. We will get it under class lock at below,
+	 * again.
+	 */
+	get_zspage_mapping(first_page, &class_idx, &fullness);
+	pool = page->mapping->private_data;
+	class = pool->size_class[class_idx];
+
+	if (!spin_trylock(&class->lock))
+		return false;
+
+	get_zspage_mapping(first_page, &class_idx, &fullness);
+	remove_zspage(class, fullness, first_page);
+	SetZsPageIsolate(first_page);
+	SetPageIsolated(page);
+	spin_unlock(&class->lock);
+
+	return true;
+}
+
+int zs_page_migrate(struct address_space *mapping, struct page *newpage,
+		struct page *page, enum migrate_mode mode)
+{
+	struct zs_pool *pool;
+	struct size_class *class;
+	int class_idx;
+	enum fullness_group fullness;
+	struct page *first_page;
+	void *s_addr, *d_addr, *addr;
+	int ret = -EBUSY;
+	int offset = 0;
+	int freezed = 0;
+
+	VM_BUG_ON_PAGE(!PageMovable(page), page);
+	VM_BUG_ON_PAGE(!PageIsolated(page), page);
+
+	first_page = page;
+	get_zspage_mapping(first_page, &class_idx, &fullness);
+	pool = page->mapping->private_data;
+	class = pool->size_class[class_idx];
+
+	/*
+	 * Get stable fullness under class->lock
+	 */
+	if (!spin_trylock(&class->lock))
+		return ret;
+
+	get_zspage_mapping(first_page, &class_idx, &fullness);
+	if (get_zspage_inuse(first_page) == 0)
+		goto out_class_unlock;
+
+	freezed = freeze_zspage(class, first_page);
+	if (freezed != get_zspage_inuse(first_page))
+		goto out_unfreeze;
+
+	/* copy contents from page to newpage */
+	s_addr = kmap_atomic(page);
+	d_addr = kmap_atomic(newpage);
+	memcpy(d_addr, s_addr, PAGE_SIZE);
+	kunmap_atomic(d_addr);
+	kunmap_atomic(s_addr);
+
+	if (!is_first_page(page))
+		offset = page->index;
+
+	addr = kmap_atomic(page);
+	do {
+		unsigned long handle;
+		unsigned long head;
+		unsigned long new_obj, old_obj;
+		unsigned long obj_idx;
+		struct page *dummy;
+
+		head = obj_to_head(class, page, addr + offset);
+		if (head & OBJ_ALLOCATED_TAG) {
+			handle = head & ~OBJ_ALLOCATED_TAG;
+			if (!testpin_tag(handle))
+				BUG();
+
+			old_obj = handle_to_obj(handle);
+			obj_to_location(old_obj, &dummy, &obj_idx);
+			new_obj = location_to_obj(newpage, obj_idx);
+			new_obj |= BIT(HANDLE_PIN_BIT);
+			record_obj(handle, new_obj);
+		}
+		offset += class->size;
+	} while (offset < PAGE_SIZE);
+	kunmap_atomic(addr);
+
+	replace_sub_page(class, first_page, newpage, page);
+	first_page = newpage;
+	get_page(newpage);
+	VM_BUG_ON_PAGE(get_fullness_group(class, first_page) ==
+			ZS_EMPTY, first_page);
+	ClearZsPageIsolate(first_page);
+	putback_zspage(class, first_page);
+
+	/* Migration complete. Free old page */
+	reset_page(page);
+	ClearPageIsolated(page);
+	put_page(page);
+	ret = MIGRATEPAGE_SUCCESS;
+
+out_unfreeze:
+	unfreeze_zspage(class, first_page, freezed);
+out_class_unlock:
+	spin_unlock(&class->lock);
+
+	return ret;
+}
+
+void zs_page_putback(struct page *page)
+{
+	struct zs_pool *pool;
+	struct size_class *class;
+	int class_idx;
+	enum fullness_group fullness;
+	struct page *first_page;
+
+	VM_BUG_ON_PAGE(!PageMovable(page), page);
+	VM_BUG_ON_PAGE(!PageIsolated(page), page);
+
+	first_page = page;
+	get_zspage_mapping(first_page, &class_idx, &fullness);
+	pool = page->mapping->private_data;
+	class = pool->size_class[class_idx];
+
+	/*
+	 * If there is race betwwen zs_free and here, free_zspage
+	 * in zs_free will wait the page lock of @page without
+	 * destroying of zspage.
+	 */
+	INIT_LIST_HEAD(&first_page->lru);
+	spin_lock(&class->lock);
+	ClearPageIsolated(page);
+	ClearZsPageIsolate(first_page);
+	putback_zspage(class, first_page);
+	spin_unlock(&class->lock);
+}
+
+const struct address_space_operations zsmalloc_aops = {
+	.isolate_page = zs_page_isolate,
+	.migratepage = zs_page_migrate,
+	.putback_page = zs_page_putback,
+};
+
 /**
  * zs_create_pool - Creates an allocation pool to work from.
  * @flags: allocation flags used to allocate pool metadata
@@ -2144,6 +2407,12 @@ struct zs_pool *zs_create_pool(const char *name, gfp_t flags)
 	if (zs_pool_stat_create(pool, name))
 		goto err;
 
+	pool->inode = anon_inode_new();
+	if (IS_ERR(pool->inode))
+		goto err;
+	pool->inode->i_mapping->a_ops = &zsmalloc_aops;
+	pool->inode->i_mapping->private_data = pool;
+
 	/*
 	 * Not critical, we still can use the pool
 	 * and user can trigger compaction manually.
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 17/19] zsmalloc: use single linked list for page chain
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (15 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 16/19] zsmalloc: migrate head page of zspage Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-11  7:30 ` [PATCH v1 18/19] zsmalloc: migrate tail pages in zspage Minchan Kim
  2016-03-11  7:30 ` [PATCH v1 19/19] zram: use __GFP_MOVABLE for memory allocation Minchan Kim
  18 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

For tail page migration, we should not use page->lru for page
chaining because the VM will use it for its own purposes, so we
need another field for chaining. A singly linked list is enough
for chaining, and page->index of a tail page, which points to the
first object offset in the page, can be replaced by a run-time
calculation.

So this patch switches the page chain from page->lru to a singly
linked list squeezed into page->freelist and introduces
get_first_obj_ofs to get the first object offset in a page.

With that, zsmalloc can maintain the page chain without using
page->lru.
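
(A worked example of the get_first_obj_ofs arithmetic introduced
below, with illustrative numbers only: PAGE_SIZE = 4096, class->size
= 3072, pages_per_zspage = 3, hence objs_per_zspage = 4.)

/* first object offset in page_idx = 1, i.e. the second page */
pos = (((4 * 3072) * 1 / 3) / 3072) * 3072;	/* = 3072 */
ofs = (pos + 3072) % 4096;			/* = 2048 */

/*
 * object 0 occupies bytes    0..3071 (page 0)
 * object 1 occupies bytes 3072..6143 (its tail spills into page 1)
 * object 2 occupies bytes 6144..9215, so it is the first object
 * that starts in page 1, at in-page offset 2048
 */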

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 119 ++++++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 78 insertions(+), 41 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 8eb785000069..24d8dd1fc749 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -17,10 +17,7 @@
  *
  * Usage of struct page fields:
  *	page->private: points to the first component (0-order) page
- *	page->index (union with page->freelist): offset of the first object
- *		starting in this page.
- *	page->lru: links together all component pages (except the first page)
- *		of a zspage
+ *	page->index (union with page->freelist): override by struct zs_meta
  *
  *	For _first_ page only:
  *
@@ -269,10 +266,19 @@ struct zs_pool {
 };
 
 struct zs_meta {
-	unsigned long freeobj:FREEOBJ_BITS;
-	unsigned long class:CLASS_BITS;
-	unsigned long fullness:FULLNESS_BITS;
-	unsigned long inuse:INUSE_BITS;
+	union {
+		/* first page */
+		struct {
+			unsigned long freeobj:FREEOBJ_BITS;
+			unsigned long class:CLASS_BITS;
+			unsigned long fullness:FULLNESS_BITS;
+			unsigned long inuse:INUSE_BITS;
+		};
+		/* tail pages */
+		struct {
+			struct page *next;
+		};
+	};
 };
 
 struct mapping_area {
@@ -490,6 +496,34 @@ static unsigned long get_freeobj(struct page *first_page)
 	return m->freeobj;
 }
 
+static void set_next_page(struct page *page, struct page *next)
+{
+	struct zs_meta *m;
+
+	VM_BUG_ON_PAGE(is_first_page(page), page);
+
+	m = (struct zs_meta *)&page->index;
+	m->next = next;
+}
+
+static struct page *get_next_page(struct page *page)
+{
+	struct page *next;
+
+	if (is_last_page(page))
+		next = NULL;
+	else if (is_first_page(page))
+		next = (struct page *)page_private(page);
+	else {
+		struct zs_meta *m = (struct zs_meta *)&page->index;
+
+		VM_BUG_ON(!m->next);
+		next = m->next;
+	}
+
+	return next;
+}
+
 static void get_zspage_mapping(struct page *first_page,
 				unsigned int *class_idx,
 				enum fullness_group *fullness)
@@ -864,18 +898,30 @@ static struct page *get_first_page(struct page *page)
 		return (struct page *)page_private(page);
 }
 
-static struct page *get_next_page(struct page *page)
+int get_first_obj_ofs(struct size_class *class, struct page *first_page,
+			struct page *page)
 {
-	struct page *next;
+	int pos, bound;
+	int page_idx = 0;
+	int ofs = 0;
+	struct page *cursor = first_page;
 
-	if (is_last_page(page))
-		next = NULL;
-	else if (is_first_page(page))
-		next = (struct page *)page_private(page);
-	else
-		next = list_entry(page->lru.next, struct page, lru);
+	if (first_page == page)
+		goto out;
 
-	return next;
+	while (page != cursor) {
+		page_idx++;
+		cursor = get_next_page(cursor);
+	}
+
+	bound = PAGE_SIZE * page_idx;
+	pos = (((class->objs_per_zspage * class->size) *
+		page_idx / class->pages_per_zspage) / class->size
+		) * class->size;
+
+	ofs = (pos + class->size) % PAGE_SIZE;
+out:
+	return ofs;
 }
 
 static void objidx_to_page_and_ofs(struct size_class *class,
@@ -1001,27 +1047,25 @@ void lock_zspage(struct page *first_page)
 
 static void free_zspage(struct zs_pool *pool, struct page *first_page)
 {
-	struct page *nextp, *tmp, *head_extra;
+	struct page *nextp, *tmp;
 
 	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
 	VM_BUG_ON_PAGE(get_zspage_inuse(first_page), first_page);
 
 	lock_zspage(first_page);
-	head_extra = (struct page *)page_private(first_page);
+	nextp = (struct page *)page_private(first_page);
 
 	/* zspage with only 1 system page */
-	if (!head_extra)
+	if (!nextp)
 		goto out;
 
-	list_for_each_entry_safe(nextp, tmp, &head_extra->lru, lru) {
-		list_del(&nextp->lru);
-		reset_page(nextp);
-		unlock_page(nextp);
-		__free_page(nextp);
-	}
-	reset_page(head_extra);
-	unlock_page(head_extra);
-	__free_page(head_extra);
+	do {
+		tmp = nextp;
+		nextp = get_next_page(nextp);
+		reset_page(tmp);
+		unlock_page(tmp);
+		__free_page(tmp);
+	} while (nextp);
 out:
 	reset_page(first_page);
 	unlock_page(first_page);
@@ -1049,13 +1093,6 @@ static void init_zspage(struct size_class *class, struct page *first_page,
 		struct link_free *link;
 		void *vaddr;
 
-		/*
-		 * page->index stores offset of first object starting
-		 * in the page.
-		 */
-		if (page != first_page)
-			page->index = off;
-
 		vaddr = kmap_atomic(page);
 		link = (struct link_free *)vaddr + off / sizeof(*link);
 
@@ -1097,7 +1134,6 @@ static void create_page_chain(struct page *pages[], int nr_pages)
 	for (i = 0; i < nr_pages; i++) {
 		page = pages[i];
 
-		INIT_LIST_HEAD(&page->lru);
 		if (i == 0) {
 			SetPagePrivate(page);
 			set_page_private(page, 0);
@@ -1106,10 +1142,12 @@ static void create_page_chain(struct page *pages[], int nr_pages)
 
 		if (i == 1)
 			set_page_private(first_page, (unsigned long)page);
-		if (i >= 1)
+		if (i >= 1) {
+			set_next_page(page, NULL);
 			set_page_private(page, (unsigned long)first_page);
+		}
 		if (i >= 2)
-			list_add(&page->lru, &prev_page->lru);
+			set_next_page(prev_page, page);
 		if (i == nr_pages - 1)
 			SetPagePrivate2(page);
 
@@ -2236,8 +2274,7 @@ int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	kunmap_atomic(d_addr);
 	kunmap_atomic(s_addr);
 
-	if (!is_first_page(page))
-		offset = page->index;
+	offset = get_first_obj_ofs(class, first_page, page);
 
 	addr = kmap_atomic(page);
 	do {
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 18/19] zsmalloc: migrate tail pages in zspage
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (16 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 17/19] zsmalloc: use single linked list for page chain Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-11  7:30 ` [PATCH v1 19/19] zram: use __GFP_MOVABLE for memory allocation Minchan Kim
  18 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

This patch enables tail page migration of zspage.

At this point, I tested for zsmalloc regressions with a
micro-benchmark which does zs_malloc/map/unmap/zs_free for every
size class on every CPU (my system has 12) for 20 seconds.

It shows a 1% regression, which is really small when we consider
the benefit of this feature and the real-workload overhead (i.e.,
most overhead comes from compression).
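
(The benchmark itself is not part of the series; below is a rough
sketch of the per-object cycle it exercises, using the public
zsmalloc API. Pool setup, the per-CPU threads and the 20-second
timing loop are omitted, and the function name is made up for
illustration.)

#include <linux/string.h>
#include <linux/zsmalloc.h>

static void zs_stress_one(struct zs_pool *pool, size_t size)
{
	unsigned long handle;
	void *obj;

	handle = zs_malloc(pool, size);
	if (!handle)
		return;

	/* map, touch and unmap the object, then free it */
	obj = zs_map_object(pool, handle, ZS_MM_RW);
	memset(obj, 0xa5, size);
	zs_unmap_object(pool, handle);

	zs_free(pool, handle);
}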

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 131 +++++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 115 insertions(+), 16 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 24d8dd1fc749..b9ff698115a1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -550,6 +550,19 @@ static void set_zspage_mapping(struct page *first_page,
 	m->class = class_idx;
 }
 
+static bool check_isolated_page(struct page *first_page)
+{
+	struct page *cursor;
+
+	for (cursor = first_page; cursor != NULL; cursor =
+					get_next_page(cursor)) {
+		if (PageIsolated(cursor))
+			return true;
+	}
+
+	return false;
+}
+
 /*
  * zsmalloc divides the pool into various size classes where each
  * class maintains a list of zspages where each zspage is divided
@@ -1045,6 +1058,44 @@ void lock_zspage(struct page *first_page)
 	} while ((cursor = get_next_page(cursor)) != NULL);
 }
 
+int trylock_zspage(struct page *first_page, struct page *locked_page)
+{
+	struct page *cursor, *fail;
+
+	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+
+	for (cursor = first_page; cursor != NULL; cursor =
+			get_next_page(cursor)) {
+		if (cursor != locked_page) {
+			if (!trylock_page(cursor)) {
+				fail = cursor;
+				goto unlock;
+			}
+		}
+	}
+
+	return 1;
+unlock:
+	for (cursor = first_page; cursor != fail; cursor =
+			get_next_page(cursor)) {
+		if (cursor != locked_page)
+			unlock_page(cursor);
+	}
+
+	return 0;
+}
+
+void unlock_zspage(struct page *first_page, struct page *locked_page)
+{
+	struct page *cursor = first_page;
+
+	for (; cursor != NULL; cursor = get_next_page(cursor)) {
+		VM_BUG_ON_PAGE(!PageLocked(cursor), cursor);
+		if (cursor != locked_page)
+			unlock_page(cursor);
+	};
+}
+
 static void free_zspage(struct zs_pool *pool, struct page *first_page)
 {
 	struct page *nextp, *tmp;
@@ -1083,16 +1134,17 @@ static void init_zspage(struct size_class *class, struct page *first_page,
 	first_page->freelist = NULL;
 	INIT_LIST_HEAD(&first_page->lru);
 	set_zspage_inuse(first_page, 0);
-	BUG_ON(!trylock_page(first_page));
-	first_page->mapping = mapping;
-	__SetPageMovable(first_page);
-	unlock_page(first_page);
 
 	while (page) {
 		struct page *next_page;
 		struct link_free *link;
 		void *vaddr;
 
+		BUG_ON(!trylock_page(page));
+		page->mapping = mapping;
+		__SetPageMovable(page);
+		unlock_page(page);
+
 		vaddr = kmap_atomic(page);
 		link = (struct link_free *)vaddr + off / sizeof(*link);
 
@@ -1845,6 +1897,7 @@ static enum fullness_group putback_zspage(struct size_class *class,
 
 	VM_BUG_ON_PAGE(!list_empty(&first_page->lru), first_page);
 	VM_BUG_ON_PAGE(ZsPageIsolate(first_page), first_page);
+	VM_BUG_ON_PAGE(check_isolated_page(first_page), first_page);
 
 	fullness = get_fullness_group(class, first_page);
 	insert_zspage(class, fullness, first_page);
@@ -1951,6 +2004,12 @@ static struct page *isolate_source_page(struct size_class *class)
 		if (!page)
 			continue;
 
+		/* To prevent race between object and page migration */
+		if (!trylock_zspage(page, NULL)) {
+			page = NULL;
+			continue;
+		}
+
 		remove_zspage(class, i, page);
 
 		inuse = get_zspage_inuse(page);
@@ -1959,6 +2018,7 @@ static struct page *isolate_source_page(struct size_class *class)
 		if (inuse != freezed) {
 			unfreeze_zspage(class, page, freezed);
 			putback_zspage(class, page);
+			unlock_zspage(page, NULL);
 			page = NULL;
 			continue;
 		}
@@ -1990,6 +2050,12 @@ static struct page *isolate_target_page(struct size_class *class)
 		if (!page)
 			continue;
 
+		/* To prevent race between object and page migration */
+		if (!trylock_zspage(page, NULL)) {
+			page = NULL;
+			continue;
+		}
+
 		remove_zspage(class, i, page);
 
 		inuse = get_zspage_inuse(page);
@@ -1998,6 +2064,7 @@ static struct page *isolate_target_page(struct size_class *class)
 		if (inuse != freezed) {
 			unfreeze_zspage(class, page, freezed);
 			putback_zspage(class, page);
+			unlock_zspage(page, NULL);
 			page = NULL;
 			continue;
 		}
@@ -2071,11 +2138,13 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 			putback_zspage(class, dst_page);
 			unfreeze_zspage(class, dst_page,
 				class->objs_per_zspage);
+			unlock_zspage(dst_page, NULL);
 			spin_unlock(&class->lock);
 			dst_page = NULL;
 		}
 
 		if (zspage_empty(class, src_page)) {
+			unlock_zspage(src_page, NULL);
 			free_zspage(pool, src_page);
 			spin_lock(&class->lock);
 			zs_stat_dec(class, OBJ_ALLOCATED,
@@ -2098,12 +2167,14 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 		putback_zspage(class, src_page);
 		unfreeze_zspage(class, src_page,
 				class->objs_per_zspage);
+		unlock_zspage(src_page, NULL);
 	}
 
 	if (dst_page) {
 		putback_zspage(class, dst_page);
 		unfreeze_zspage(class, dst_page,
 				class->objs_per_zspage);
+		unlock_zspage(dst_page, NULL);
 	}
 
 	spin_unlock(&class->lock);
@@ -2206,10 +2277,11 @@ bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageIsolated(page), page);
 	/*
-	 * In this implementation, it allows only first page migration.
+	 * first_page will not be destroyed by PG_lock of @page but it could
+	 * be migrated out. For prohibiting it, zs_page_migrate calls
+	 * trylock_zspage so it closes the race.
 	 */
-	VM_BUG_ON_PAGE(!is_first_page(page), page);
-	first_page = page;
+	first_page = get_first_page(page);
 
 	/*
 	 * Without class lock, fullness is meaningless while constant
@@ -2223,9 +2295,18 @@ bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 	if (!spin_trylock(&class->lock))
 		return false;
 
+	if (check_isolated_page(first_page))
+		goto skip_isolate;
+
+	/*
+	 * If this is first time isolation for zspage, isolate zspage from
+	 * size_class to prevent further allocations from the zspage.
+	 */
 	get_zspage_mapping(first_page, &class_idx, &fullness);
 	remove_zspage(class, fullness, first_page);
 	SetZsPageIsolate(first_page);
+
+skip_isolate:
 	SetPageIsolated(page);
 	spin_unlock(&class->lock);
 
@@ -2248,7 +2329,7 @@ int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	VM_BUG_ON_PAGE(!PageMovable(page), page);
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
-	first_page = page;
+	first_page = get_first_page(page);
 	get_zspage_mapping(first_page, &class_idx, &fullness);
 	pool = page->mapping->private_data;
 	class = pool->size_class[class_idx];
@@ -2263,6 +2344,13 @@ int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	if (get_zspage_inuse(first_page) == 0)
 		goto out_class_unlock;
 
+	/*
+	 * It prevents first_page migration during tail page opeartion for
+	 * get_first_page's stability.
+	 */
+	if (!trylock_zspage(first_page, page))
+		goto out_class_unlock;
+
 	freezed = freeze_zspage(class, first_page);
 	if (freezed != get_zspage_inuse(first_page))
 		goto out_unfreeze;
@@ -2301,21 +2389,26 @@ int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 	kunmap_atomic(addr);
 
 	replace_sub_page(class, first_page, newpage, page);
-	first_page = newpage;
+	first_page = get_first_page(newpage);
 	get_page(newpage);
 	VM_BUG_ON_PAGE(get_fullness_group(class, first_page) ==
 			ZS_EMPTY, first_page);
-	ClearZsPageIsolate(first_page);
-	putback_zspage(class, first_page);
+	if (!check_isolated_page(first_page)) {
+		INIT_LIST_HEAD(&first_page->lru);
+		ClearZsPageIsolate(first_page);
+		putback_zspage(class, first_page);
+	}
+
 
 	/* Migration complete. Free old page */
 	reset_page(page);
 	ClearPageIsolated(page);
 	put_page(page);
 	ret = MIGRATEPAGE_SUCCESS;
-
+	page = newpage;
 out_unfreeze:
 	unfreeze_zspage(class, first_page, freezed);
+	unlock_zspage(first_page, page);
 out_class_unlock:
 	spin_unlock(&class->lock);
 
@@ -2333,7 +2426,7 @@ void zs_page_putback(struct page *page)
 	VM_BUG_ON_PAGE(!PageMovable(page), page);
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
-	first_page = page;
+	first_page = get_first_page(page);
 	get_zspage_mapping(first_page, &class_idx, &fullness);
 	pool = page->mapping->private_data;
 	class = pool->size_class[class_idx];
@@ -2343,11 +2436,17 @@ void zs_page_putback(struct page *page)
 	 * in zs_free will wait the page lock of @page without
 	 * destroying of zspage.
 	 */
-	INIT_LIST_HEAD(&first_page->lru);
 	spin_lock(&class->lock);
 	ClearPageIsolated(page);
-	ClearZsPageIsolate(first_page);
-	putback_zspage(class, first_page);
+	/*
+	 * putback zspage to right list if this is last isolated page
+	 * putback in the zspage.
+	 */
+	if (!check_isolated_page(first_page)) {
+		INIT_LIST_HEAD(&first_page->lru);
+		ClearZsPageIsolate(first_page);
+		putback_zspage(class, first_page);
+	}
 	spin_unlock(&class->lock);
 }
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v1 19/19] zram: use __GFP_MOVABLE for memory allocation
  2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
                   ` (17 preceding siblings ...)
  2016-03-11  7:30 ` [PATCH v1 18/19] zsmalloc: migrate tail pages in zspage Minchan Kim
@ 2016-03-11  7:30 ` Minchan Kim
  2016-03-15  6:56   ` Sergey Senozhatsky
  18 siblings, 1 reply; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Minchan Kim

Zsmalloc is ready for page migration so zram can use __GFP_MOVABLE
from now on.

I ran a test to see how it helps to make higher-order pages.
The test scenario is as follows.

KVM guest, 1G memory, ext4-formatted zram block device:

for i in `seq 1 8`;
do
        dd if=/dev/vda1 of=mnt/test$i.txt bs=128M count=1 &
done

wait `pidof dd`

for i in `seq 1 2 8`;
do
        rm -rf mnt/test$i.txt
done
fstrim -v mnt

echo "init"
cat /proc/buddyinfo

echo "compaction"
echo 1 > /proc/sys/vm/compact_memory
cat /proc/buddyinfo

old:

init
Node 0, zone      DMA    208    120     51     41     11      0      0      0      0      0      0
Node 0, zone    DMA32  16380  13777   9184   3805    789     54      3      0      0      0      0
compaction
Node 0, zone      DMA    132     82     40     39     16      2      1      0      0      0      0
Node 0, zone    DMA32   5219   5526   4969   3455   1831    677    139     15      0      0      0

new:

init
Node 0, zone      DMA    379    115     97     19      2      0      0      0      0      0      0
Node 0, zone    DMA32  18891  16774  10862   3947    637     21      0      0      0      0      0
compaction  1
Node 0, zone      DMA    214     66     87     29     10      3      0      0      0      0      0
Node 0, zone    DMA32   1612   3139   3154   2469   1745    990    384     94      7      0      0

As you can see (each /proc/buddyinfo column is the count of free
blocks of a given order, lowest order on the left), compaction
created many more high-order pages. Yay!

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 drivers/block/zram/zram_drv.c | 3 ++-
 mm/zsmalloc.c                 | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 370c2f76016d..10f6ff1cf6a0 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -514,7 +514,8 @@ static struct zram_meta *zram_meta_alloc(char *pool_name, u64 disksize)
 		goto out_error;
 	}
 
-	meta->mem_pool = zs_create_pool(pool_name, GFP_NOIO | __GFP_HIGHMEM);
+	meta->mem_pool = zs_create_pool(pool_name, GFP_NOIO|__GFP_HIGHMEM
+						|__GFP_MOVABLE);
 	if (!meta->mem_pool) {
 		pr_err("Error creating memory pool\n");
 		goto out_error;
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b9ff698115a1..a1f47a7b3a99 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -307,7 +307,7 @@ static void destroy_handle_cache(struct zs_pool *pool)
 static unsigned long alloc_handle(struct zs_pool *pool)
 {
 	return (unsigned long)kmem_cache_alloc(pool->handle_cachep,
-		pool->flags & ~__GFP_HIGHMEM);
+		pool->flags & ~(__GFP_HIGHMEM|__GFP_MOVABLE));
 }
 
 static void free_handle(struct zs_pool *pool, unsigned long handle)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 03/19] fs/anon_inodes: new interface to create new inode
  2016-03-11  7:30 ` [PATCH v1 03/19] fs/anon_inodes: new interface to create new inode Minchan Kim
@ 2016-03-11  8:05   ` Al Viro
  2016-03-11 14:24     ` Gioh Kim
  0 siblings, 1 reply; 42+ messages in thread
From: Al Viro @ 2016-03-11  8:05 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim

On Fri, Mar 11, 2016 at 04:30:07PM +0900, Minchan Kim wrote:
> From: Gioh Kim <gurugio@hanmail.net>
> 
> The anon_inodes has already complete interfaces to create manage
> many anonymous inodes but don't have interface to get
> new inode. Other sub-modules can create anonymous inode
> without creating and mounting it's own pseudo filesystem.

IMO that's a bad idea.  In case of aio "creating and mounting" takes this:
static struct dentry *aio_mount(struct file_system_type *fs_type,
                                int flags, const char *dev_name, void *data)  
{
        static const struct dentry_operations ops = {
                .d_dname        = simple_dname,
        };
        return mount_pseudo(fs_type, "aio:", NULL, &ops, AIO_RING_MAGIC);
}
and
        static struct file_system_type aio_fs = {
                .name           = "aio",
                .mount          = aio_mount,
                .kill_sb        = kill_anon_super,
        };
        aio_mnt = kern_mount(&aio_fs);

All of 12 lines.  Your export is not much shorter.  To quote old mail on
the same topic:

> Note that anon_inodes.c reason to exist was "it's for situations where
> all context lives on struct file and we don't need separate inode for
> them".  Going from that to "it happens to contain a handy function for inode
> allocation"...

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 02/19] mm/compaction: support non-lru movable page migration
  2016-03-11  7:30 ` [PATCH v1 02/19] mm/compaction: support non-lru movable page migration Minchan Kim
@ 2016-03-11  8:11   ` kbuild test robot
  2016-03-11  8:35     ` Minchan Kim
  0 siblings, 1 reply; 42+ messages in thread
From: kbuild test robot @ 2016-03-11  8:11 UTC (permalink / raw)
  To: Minchan Kim
  Cc: kbuild-all, Andrew Morton, linux-mm, linux-kernel, jlayton,
	bfields, Vlastimil Babka, Joonsoo Kim, koct9i, aquini,
	virtualization, Mel Gorman, Hugh Dickins, Sergey Senozhatsky,
	rknize, Rik van Riel, Gioh Kim, Minchan Kim, dri-devel

[-- Attachment #1: Type: text/plain, Size: 2066 bytes --]

Hi Minchan,

[auto build test ERROR on v4.5-rc7]
[cannot apply to next-20160310]
[if your patch is applied to the wrong git tree, please drop us a note to help improving the system]

url:    https://github.com/0day-ci/linux/commits/Minchan-Kim/Support-non-lru-page-migration/20160311-153649
config: x86_64-nfsroot (attached as .config)
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All errors (new ones prefixed by >>):

   In file included from mm/compaction.c:12:0:
>> include/linux/compaction.h:87:20: error: static declaration of 'isolate_movable_page' follows non-static declaration
    static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
                       ^
   In file included from mm/compaction.c:11:0:
   include/linux/migrate.h:36:13: note: previous declaration of 'isolate_movable_page' was here
    extern bool isolate_movable_page(struct page *page, isolate_mode_t mode);
                ^
   In file included from mm/compaction.c:12:0:
>> include/linux/compaction.h:92:20: error: static declaration of 'putback_movable_page' follows non-static declaration
    static inline void putback_movable_page(struct page *page)
                       ^
   In file included from mm/compaction.c:11:0:
   include/linux/migrate.h:37:13: note: previous declaration of 'putback_movable_page' was here
    extern void putback_movable_page(struct page *page);
                ^

vim +/isolate_movable_page +87 include/linux/compaction.h

    81	
    82	static inline bool compaction_deferred(struct zone *zone, int order)
    83	{
    84		return true;
    85	}
    86	
  > 87	static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
    88	{
    89		return false;
    90	}
    91	
  > 92	static inline void putback_movable_page(struct page *page)
    93	{
    94	}
    95	#endif /* CONFIG_COMPACTION */

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 25444 bytes --]

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 02/19] mm/compaction: support non-lru movable page migration
  2016-03-11  8:11   ` kbuild test robot
@ 2016-03-11  8:35     ` Minchan Kim
  0 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-11  8:35 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, Andrew Morton, linux-mm, linux-kernel, jlayton,
	bfields, Vlastimil Babka, Joonsoo Kim, koct9i, aquini,
	virtualization, Mel Gorman, Hugh Dickins, Sergey Senozhatsky,
	rknize, Rik van Riel, Gioh Kim, dri-devel

Hi kbuild,

On Fri, Mar 11, 2016 at 04:11:19PM +0800, kbuild test robot wrote:
> Hi Minchan,
> 
> [auto build test ERROR on v4.5-rc7]
> [cannot apply to next-20160310]
> [if your patch is applied to the wrong git tree, please drop us a note to help improving the system]
> 
> url:    https://github.com/0day-ci/linux/commits/Minchan-Kim/Support-non-lru-page-migration/20160311-153649
> config: x86_64-nfsroot (attached as .config)
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=x86_64 
> 
> All errors (new ones prefixed by >>):
> 
>    In file included from mm/compaction.c:12:0:
> >> include/linux/compaction.h:87:20: error: static declaration of 'isolate_movable_page' follows non-static declaration
>     static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>                        ^
>    In file included from mm/compaction.c:11:0:
>    include/linux/migrate.h:36:13: note: previous declaration of 'isolate_movable_page' was here
>     extern bool isolate_movable_page(struct page *page, isolate_mode_t mode);
>                 ^
>    In file included from mm/compaction.c:12:0:
> >> include/linux/compaction.h:92:20: error: static declaration of 'putback_movable_page' follows non-static declaration
>     static inline void putback_movable_page(struct page *page)
>                        ^
>    In file included from mm/compaction.c:11:0:
>    include/linux/migrate.h:37:13: note: previous declaration of 'putback_movable_page' was here
>     extern void putback_movable_page(struct page *page);
>                 ^
> 
> vim +/isolate_movable_page +87 include/linux/compaction.h
> 
>     81	
>     82	static inline bool compaction_deferred(struct zone *zone, int order)
>     83	{
>     84		return true;
>     85	}
>     86	
>   > 87	static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>     88	{
>     89		return false;
>     90	}
>     91	
>   > 92	static inline void putback_movable_page(struct page *page)
>     93	{
>     94	}
>     95	#endif /* CONFIG_COMPACTION */
> 
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

Actually, I made the patchset based on v4.5-rc6, but the problem you
found is still a problem in v4.5-rc6, too. Thanks for catching it so
fast.

I should apply the following patch to fix the problem.

diff --git a/include/linux/compaction.h b/include/linux/compaction.h
index 6f040ad379ce..4cd4ddf64cc7 100644
--- a/include/linux/compaction.h
+++ b/include/linux/compaction.h
@@ -84,14 +84,6 @@ static inline bool compaction_deferred(struct zone *zone, int order)
 	return true;
 }
 
-static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
-{
-	return false;
-}
-
-static inline void putback_movable_page(struct page *page)
-{
-}
 #endif /* CONFIG_COMPACTION */
 
 #if defined(CONFIG_COMPACTION) && defined(CONFIG_SYSFS) && defined(CONFIG_NUMA)

Thanks.

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 03/19] fs/anon_inodes: new interface to create new inode
  2016-03-11  8:05   ` Al Viro
@ 2016-03-11 14:24     ` Gioh Kim
  0 siblings, 0 replies; 42+ messages in thread
From: Gioh Kim @ 2016-03-11 14:24 UTC (permalink / raw)
  To: Al Viro, Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim



On 11.03.2016 09:05, Al Viro wrote:
> On Fri, Mar 11, 2016 at 04:30:07PM +0900, Minchan Kim wrote:
>> From: Gioh Kim <gurugio@hanmail.net>
>>
>> The anon_inodes has already complete interfaces to create manage
>> many anonymous inodes but don't have interface to get
>> new inode. Other sub-modules can create anonymous inode
>> without creating and mounting it's own pseudo filesystem.
> IMO that's a bad idea.  In case of aio "creating and mounting" takes this:
> static struct dentry *aio_mount(struct file_system_type *fs_type,
>                                  int flags, const char *dev_name, void *data)
> {
>          static const struct dentry_operations ops = {
>                  .d_dname        = simple_dname,
>          };
>          return mount_pseudo(fs_type, "aio:", NULL, &ops, AIO_RING_MAGIC);
> }
> and
>          static struct file_system_type aio_fs = {
>                  .name           = "aio",
>                  .mount          = aio_mount,
>                  .kill_sb        = kill_anon_super,
>          };
>          aio_mnt = kern_mount(&aio_fs);
>
> All of 12 lines.  Your export is not much shorter.  To quote old mail on
> the same topic:
I know what aio_setup() does. It can be a solution.
But I thought creating anon_inode_new() would be simpler than having
several drivers each create their own pseudo filesystem.
Creating a filesystem requires memory allocation and locking some
lists even though it is only a pseudo filesystem.

Could you tell me whether there is a reason we should avoid creating
an anonymous inode this way?
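
(For comparison, the anon_inode_new() usage in this series -- see
the zsmalloc pool creation in patch 16 -- boils down to:

	pool->inode = anon_inode_new();
	if (IS_ERR(pool->inode))
		goto err;
	pool->inode->i_mapping->a_ops = &zsmalloc_aops;
	pool->inode->i_mapping->private_data = pool;

versus the aio-style file_system_type + kern_mount() boilerplate
quoted above.)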

>
>> Note that anon_inodes.c reason to exist was "it's for situations where
>> all context lives on struct file and we don't need separate inode for
>> them".  Going from that to "it happens to contain a handy function for inode
>> allocation"...


-- 
Best regards,
Gioh Kim

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 09/19] zsmalloc: keep max_object in size_class
  2016-03-11  7:30 ` [PATCH v1 09/19] zsmalloc: keep max_object in size_class Minchan Kim
@ 2016-03-12  1:44   ` xuyiping
  2016-03-14  4:55     ` Minchan Kim
  2016-03-15  6:28   ` Sergey Senozhatsky
  1 sibling, 1 reply; 42+ messages in thread
From: xuyiping @ 2016-03-12  1:44 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim



On 2016/3/11 15:30, Minchan Kim wrote:
> Every zspage in a size_class has same number of max objects so
> we could move it to a size_class.
>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>   mm/zsmalloc.c | 29 ++++++++++++++---------------
>   1 file changed, 14 insertions(+), 15 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index b4fb11831acb..ca663c82c1fc 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -32,8 +32,6 @@
>    *	page->freelist: points to the first free object in zspage.
>    *		Free objects are linked together using in-place
>    *		metadata.
> - *	page->objects: maximum number of objects we can store in this
> - *		zspage (class->zspage_order * PAGE_SIZE / class->size)
>    *	page->lru: links together first pages of various zspages.
>    *		Basically forming list of zspages in a fullness group.
>    *	page->mapping: class index and fullness group of the zspage
> @@ -211,6 +209,7 @@ struct size_class {
>   	 * of ZS_ALIGN.
>   	 */
>   	int size;
> +	int objs_per_zspage;
>   	unsigned int index;
>
>   	struct zs_size_stat stats;
> @@ -622,21 +621,22 @@ static inline void zs_pool_stat_destroy(struct zs_pool *pool)
>    * the pool (not yet implemented). This function returns fullness
>    * status of the given page.
>    */
> -static enum fullness_group get_fullness_group(struct page *first_page)
> +static enum fullness_group get_fullness_group(struct size_class *class,
> +						struct page *first_page)
>   {
> -	int inuse, max_objects;
> +	int inuse, objs_per_zspage;
>   	enum fullness_group fg;
>
>   	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
>
>   	inuse = first_page->inuse;
> -	max_objects = first_page->objects;
> +	objs_per_zspage = class->objs_per_zspage;
>
>   	if (inuse == 0)
>   		fg = ZS_EMPTY;
> -	else if (inuse == max_objects)
> +	else if (inuse == objs_per_zspage)
>   		fg = ZS_FULL;
> -	else if (inuse <= 3 * max_objects / fullness_threshold_frac)
> +	else if (inuse <= 3 * objs_per_zspage / fullness_threshold_frac)
>   		fg = ZS_ALMOST_EMPTY;
>   	else
>   		fg = ZS_ALMOST_FULL;
> @@ -723,7 +723,7 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
>   	enum fullness_group currfg, newfg;
>
>   	get_zspage_mapping(first_page, &class_idx, &currfg);
> -	newfg = get_fullness_group(first_page);
> +	newfg = get_fullness_group(class, first_page);
>   	if (newfg == currfg)
>   		goto out;
>
> @@ -1003,9 +1003,6 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
>   	init_zspage(class, first_page);
>
>   	first_page->freelist = location_to_obj(first_page, 0);
> -	/* Maximum number of objects we can store in this zspage */
> -	first_page->objects = class->pages_per_zspage * PAGE_SIZE / class->size;
> -
>   	error = 0; /* Success */
>
>   cleanup:
> @@ -1235,11 +1232,11 @@ static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
>   	return true;
>   }
>
> -static bool zspage_full(struct page *first_page)
> +static bool zspage_full(struct size_class *class, struct page *first_page)
>   {
>   	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
>
> -	return first_page->inuse == first_page->objects;
> +	return first_page->inuse == class->objs_per_zspage;
>   }
>
>   unsigned long zs_get_total_pages(struct zs_pool *pool)
> @@ -1625,7 +1622,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
>   		}
>
>   		/* Stop if there is no more space */
> -		if (zspage_full(d_page)) {
> +		if (zspage_full(class, d_page)) {
>   			unpin_tag(handle);
>   			ret = -ENOMEM;
>   			break;
> @@ -1684,7 +1681,7 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
>   {
>   	enum fullness_group fullness;
>
> -	fullness = get_fullness_group(first_page);
> +	fullness = get_fullness_group(class, first_page);
>   	insert_zspage(class, fullness, first_page);
>   	set_zspage_mapping(first_page, class->index, fullness);
>
> @@ -1933,6 +1930,8 @@ struct zs_pool *zs_create_pool(const char *name, gfp_t flags)
>   		class->size = size;
>   		class->index = i;
>   		class->pages_per_zspage = pages_per_zspage;
> +		class->objs_per_zspage = class->pages_per_zspage *
> +						PAGE_SIZE / class->size;
>   		if (pages_per_zspage == 1 &&
>   			get_maxobj_per_zspage(size, pages_per_zspage) == 1)
>   			class->huge = true;

		computes the "objs_per_zspage" twice here.

		class->objs_per_zspage = get_maxobj_per_zspage(size,
						pages_per_zspage);
		if (pages_per_zspage == 1 && class->objs_per_zspage ==1)
			class->huge = true;

>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 13/19] zsmalloc: factor page chain functionality out
  2016-03-11  7:30 ` [PATCH v1 13/19] zsmalloc: factor page chain functionality out Minchan Kim
@ 2016-03-12  3:09   ` xuyiping
  2016-03-14  4:58     ` Minchan Kim
  0 siblings, 1 reply; 42+ messages in thread
From: xuyiping @ 2016-03-12  3:09 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Vlastimil Babka,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim



On 2016/3/11 15:30, Minchan Kim wrote:
> For migration, we need to create sub-page chain of zspage
> dynamically so this patch factors it out from alloc_zspage.
>
> As a minor refactoring, it makes OBJ_ALLOCATED_TAG assign
> more clear in obj_malloc(it could be another patch but it's
> trivial so I want to put together in this patch).
>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>   mm/zsmalloc.c | 78 ++++++++++++++++++++++++++++++++++-------------------------
>   1 file changed, 45 insertions(+), 33 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index bfc6a048afac..f86f8aaeb902 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -977,7 +977,9 @@ static void init_zspage(struct size_class *class, struct page *first_page)
>   	unsigned long off = 0;
>   	struct page *page = first_page;
>
> -	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
> +	first_page->freelist = NULL;
> +	INIT_LIST_HEAD(&first_page->lru);
> +	set_zspage_inuse(first_page, 0);
>
>   	while (page) {
>   		struct page *next_page;
> @@ -1022,13 +1024,44 @@ static void init_zspage(struct size_class *class, struct page *first_page)
>   	set_freeobj(first_page, 0);
>   }
>
> +static void create_page_chain(struct page *pages[], int nr_pages)
> +{
> +	int i;
> +	struct page *page;
> +	struct page *prev_page = NULL;
> +	struct page *first_page = NULL;
> +
> +	for (i = 0; i < nr_pages; i++) {
> +		page = pages[i];
> +
> +		INIT_LIST_HEAD(&page->lru);
> +		if (i == 0) {
> +			SetPagePrivate(page);
> +			set_page_private(page, 0);
> +			first_page = page;
> +		}
> +
> +		if (i == 1)
> +			set_page_private(first_page, (unsigned long)page);
> +		if (i >= 1)
> +			set_page_private(page, (unsigned long)first_page);
> +		if (i >= 2)
> +			list_add(&page->lru, &prev_page->lru);
> +		if (i == nr_pages - 1)
> +			SetPagePrivate2(page);
> +
> +		prev_page = page;
> +	}
> +}
> +
>   /*
>    * Allocate a zspage for the given size class
>    */
>   static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
>   {
> -	int i, error;
> +	int i;
>   	struct page *first_page = NULL, *uninitialized_var(prev_page);
> +	struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE];
>
>   	/*
>   	 * Allocate individual pages and link them together as:
> @@ -1041,43 +1074,23 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)

	The prev_page declared with uninitialized_var() in alloc_zspage is
no longer used.

>   	 * (i.e. no other sub-page has this flag set) and PG_private_2 to
>   	 * identify the last page.
>   	 */
> -	error = -ENOMEM;
>   	for (i = 0; i < class->pages_per_zspage; i++) {
>   		struct page *page;
>
>   		page = alloc_page(flags);
> -		if (!page)
> -			goto cleanup;
> -
> -		INIT_LIST_HEAD(&page->lru);
> -		if (i == 0) {	/* first page */
> -			page->freelist = NULL;
> -			SetPagePrivate(page);
> -			set_page_private(page, 0);
> -			first_page = page;
> -			set_zspage_inuse(page, 0);
> +		if (!page) {
> +			while (--i >= 0)
> +				__free_page(pages[i]);
> +			return NULL;
>   		}
> -		if (i == 1)
> -			set_page_private(first_page, (unsigned long)page);
> -		if (i >= 1)
> -			set_page_private(page, (unsigned long)first_page);
> -		if (i >= 2)
> -			list_add(&page->lru, &prev_page->lru);
> -		if (i == class->pages_per_zspage - 1)	/* last page */
> -			SetPagePrivate2(page);
> -		prev_page = page;
> +
> +		pages[i] = page;
>   	}
>
> +	create_page_chain(pages, class->pages_per_zspage);
> +	first_page = pages[0];
>   	init_zspage(class, first_page);
>
> -	error = 0; /* Success */
> -
> -cleanup:
> -	if (unlikely(error) && first_page) {
> -		free_zspage(first_page);
> -		first_page = NULL;
> -	}
> -
>   	return first_page;
>   }
>
> @@ -1419,7 +1432,6 @@ static unsigned long obj_malloc(struct size_class *class,
>   	unsigned long m_offset;
>   	void *vaddr;
>
> -	handle |= OBJ_ALLOCATED_TAG;
>   	obj = get_freeobj(first_page);
>   	objidx_to_page_and_ofs(class, first_page, obj,
>   				&m_page, &m_offset);
> @@ -1429,10 +1441,10 @@ static unsigned long obj_malloc(struct size_class *class,
>   	set_freeobj(first_page, link->next >> OBJ_ALLOCATED_TAG);
>   	if (!class->huge)
>   		/* record handle in the header of allocated chunk */
> -		link->handle = handle;
> +		link->handle = handle | OBJ_ALLOCATED_TAG;
>   	else
>   		/* record handle in first_page->private */
> -		set_page_private(first_page, handle);
> +		set_page_private(first_page, handle | OBJ_ALLOCATED_TAG);
>   	kunmap_atomic(vaddr);
>   	mod_zspage_inuse(first_page, 1);
>   	zs_stat_inc(class, OBJ_USED, 1);
>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 09/19] zsmalloc: keep max_object in size_class
  2016-03-12  1:44   ` xuyiping
@ 2016-03-14  4:55     ` Minchan Kim
  0 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-14  4:55 UTC (permalink / raw)
  To: xuyiping
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim

On Sat, Mar 12, 2016 at 09:44:48AM +0800, xuyiping wrote:
> 
> 
> On 2016/3/11 15:30, Minchan Kim wrote:
> >Every zspage in a size_class has same number of max objects so
> >we could move it to a size_class.
> >
> >Signed-off-by: Minchan Kim <minchan@kernel.org>
> >---
> >  mm/zsmalloc.c | 29 ++++++++++++++---------------
> >  1 file changed, 14 insertions(+), 15 deletions(-)
> >
> >diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> >index b4fb11831acb..ca663c82c1fc 100644
> >--- a/mm/zsmalloc.c
> >+++ b/mm/zsmalloc.c
> >@@ -32,8 +32,6 @@
> >   *	page->freelist: points to the first free object in zspage.
> >   *		Free objects are linked together using in-place
> >   *		metadata.
> >- *	page->objects: maximum number of objects we can store in this
> >- *		zspage (class->zspage_order * PAGE_SIZE / class->size)
> >   *	page->lru: links together first pages of various zspages.
> >   *		Basically forming list of zspages in a fullness group.
> >   *	page->mapping: class index and fullness group of the zspage
> >@@ -211,6 +209,7 @@ struct size_class {
> >  	 * of ZS_ALIGN.
> >  	 */
> >  	int size;
> >+	int objs_per_zspage;
> >  	unsigned int index;
> >
> >  	struct zs_size_stat stats;
> >@@ -622,21 +621,22 @@ static inline void zs_pool_stat_destroy(struct zs_pool *pool)
> >   * the pool (not yet implemented). This function returns fullness
> >   * status of the given page.
> >   */
> >-static enum fullness_group get_fullness_group(struct page *first_page)
> >+static enum fullness_group get_fullness_group(struct size_class *class,
> >+						struct page *first_page)
> >  {
> >-	int inuse, max_objects;
> >+	int inuse, objs_per_zspage;
> >  	enum fullness_group fg;
> >
> >  	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
> >
> >  	inuse = first_page->inuse;
> >-	max_objects = first_page->objects;
> >+	objs_per_zspage = class->objs_per_zspage;
> >
> >  	if (inuse == 0)
> >  		fg = ZS_EMPTY;
> >-	else if (inuse == max_objects)
> >+	else if (inuse == objs_per_zspage)
> >  		fg = ZS_FULL;
> >-	else if (inuse <= 3 * max_objects / fullness_threshold_frac)
> >+	else if (inuse <= 3 * objs_per_zspage / fullness_threshold_frac)
> >  		fg = ZS_ALMOST_EMPTY;
> >  	else
> >  		fg = ZS_ALMOST_FULL;
> >@@ -723,7 +723,7 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
> >  	enum fullness_group currfg, newfg;
> >
> >  	get_zspage_mapping(first_page, &class_idx, &currfg);
> >-	newfg = get_fullness_group(first_page);
> >+	newfg = get_fullness_group(class, first_page);
> >  	if (newfg == currfg)
> >  		goto out;
> >
> >@@ -1003,9 +1003,6 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
> >  	init_zspage(class, first_page);
> >
> >  	first_page->freelist = location_to_obj(first_page, 0);
> >-	/* Maximum number of objects we can store in this zspage */
> >-	first_page->objects = class->pages_per_zspage * PAGE_SIZE / class->size;
> >-
> >  	error = 0; /* Success */
> >
> >  cleanup:
> >@@ -1235,11 +1232,11 @@ static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
> >  	return true;
> >  }
> >
> >-static bool zspage_full(struct page *first_page)
> >+static bool zspage_full(struct size_class *class, struct page *first_page)
> >  {
> >  	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
> >
> >-	return first_page->inuse == first_page->objects;
> >+	return first_page->inuse == class->objs_per_zspage;
> >  }
> >
> >  unsigned long zs_get_total_pages(struct zs_pool *pool)
> >@@ -1625,7 +1622,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
> >  		}
> >
> >  		/* Stop if there is no more space */
> >-		if (zspage_full(d_page)) {
> >+		if (zspage_full(class, d_page)) {
> >  			unpin_tag(handle);
> >  			ret = -ENOMEM;
> >  			break;
> >@@ -1684,7 +1681,7 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
> >  {
> >  	enum fullness_group fullness;
> >
> >-	fullness = get_fullness_group(first_page);
> >+	fullness = get_fullness_group(class, first_page);
> >  	insert_zspage(class, fullness, first_page);
> >  	set_zspage_mapping(first_page, class->index, fullness);
> >
> >@@ -1933,6 +1930,8 @@ struct zs_pool *zs_create_pool(const char *name, gfp_t flags)
> >  		class->size = size;
> >  		class->index = i;
> >  		class->pages_per_zspage = pages_per_zspage;
> >+		class->objs_per_zspage = class->pages_per_zspage *
> >+						PAGE_SIZE / class->size;
> >  		if (pages_per_zspage == 1 &&
> >  			get_maxobj_per_zspage(size, pages_per_zspage) == 1)
> >  			class->huge = true;
> 
> 		computes the "objs_per_zspage" twice here.
> 
> 		class->objs_per_zspage = get_maxobj_per_zspage(size,
> 						pages_per_zspage);
> 		if (pages_per_zspage == 1 && class->objs_per_zspage ==1)
> 			class->huge = true;

Yep, I will do that.
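
For reference, a minimal sketch of how that zs_create_pool() hunk might
look with the suggested change folded in (assuming get_maxobj_per_zspage()
keeps its current signature; the actual respin may differ):

		class->size = size;
		class->index = i;
		class->pages_per_zspage = pages_per_zspage;
		class->objs_per_zspage = get_maxobj_per_zspage(size,
						pages_per_zspage);
		if (pages_per_zspage == 1 && class->objs_per_zspage == 1)
			class->huge = true;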

Thanks.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 13/19] zsmalloc: factor page chain functionality out
  2016-03-12  3:09   ` xuyiping
@ 2016-03-14  4:58     ` Minchan Kim
  0 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-14  4:58 UTC (permalink / raw)
  To: xuyiping
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim

On Sat, Mar 12, 2016 at 11:09:36AM +0800, xuyiping wrote:
> 
> 
> On 2016/3/11 15:30, Minchan Kim wrote:
> >For migration, we need to create sub-page chain of zspage
> >dynamically so this patch factors it out from alloc_zspage.
> >
> >As a minor refactoring, it makes OBJ_ALLOCATED_TAG assign
> >more clear in obj_malloc(it could be another patch but it's
> >trivial so I want to put together in this patch).
> >
> >Signed-off-by: Minchan Kim <minchan@kernel.org>
> >---
> >  mm/zsmalloc.c | 78 ++++++++++++++++++++++++++++++++++-------------------------
> >  1 file changed, 45 insertions(+), 33 deletions(-)
> >
> >diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> >index bfc6a048afac..f86f8aaeb902 100644
> >--- a/mm/zsmalloc.c
> >+++ b/mm/zsmalloc.c
> >@@ -977,7 +977,9 @@ static void init_zspage(struct size_class *class, struct page *first_page)
> >  	unsigned long off = 0;
> >  	struct page *page = first_page;
> >
> >-	VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
> >+	first_page->freelist = NULL;
> >+	INIT_LIST_HEAD(&first_page->lru);
> >+	set_zspage_inuse(first_page, 0);
> >
> >  	while (page) {
> >  		struct page *next_page;
> >@@ -1022,13 +1024,44 @@ static void init_zspage(struct size_class *class, struct page *first_page)
> >  	set_freeobj(first_page, 0);
> >  }
> >
> >+static void create_page_chain(struct page *pages[], int nr_pages)
> >+{
> >+	int i;
> >+	struct page *page;
> >+	struct page *prev_page = NULL;
> >+	struct page *first_page = NULL;
> >+
> >+	for (i = 0; i < nr_pages; i++) {
> >+		page = pages[i];
> >+
> >+		INIT_LIST_HEAD(&page->lru);
> >+		if (i == 0) {
> >+			SetPagePrivate(page);
> >+			set_page_private(page, 0);
> >+			first_page = page;
> >+		}
> >+
> >+		if (i == 1)
> >+			set_page_private(first_page, (unsigned long)page);
> >+		if (i >= 1)
> >+			set_page_private(page, (unsigned long)first_page);
> >+		if (i >= 2)
> >+			list_add(&page->lru, &prev_page->lru);
> >+		if (i == nr_pages - 1)
> >+			SetPagePrivate2(page);
> >+
> >+		prev_page = page;
> >+	}
> >+}
> >+
> >  /*
> >   * Allocate a zspage for the given size class
> >   */
> >  static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
> >  {
> >-	int i, error;
> >+	int i;
> >  	struct page *first_page = NULL, *uninitialized_var(prev_page);
> >+	struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE];
> >
> >  	/*
> >  	 * Allocate individual pages and link them together as:
> >@@ -1041,43 +1074,23 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
> 
> 	*uninitialized_var(prev_page) in alloc_zspage is not in use more.

True.
It shows why we should avoid uninitialized_var() if possible.
If we hadn't used uninitialized_var(), the compiler would have warned
about the now-unused variable during my build test.
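
As a minimal sketch, with the leftover prev_page dropped, the
declarations in alloc_zspage() would shrink to something like this
(assuming nothing else in the function still touches prev_page):

	static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
	{
		int i;
		struct page *first_page = NULL;
		struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE];
		...
	}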

Thanks.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 01/19] mm: use put_page to free page instead of putback_lru_page
  2016-03-11  7:30 ` [PATCH v1 01/19] mm: use put_page to free page instead of putback_lru_page Minchan Kim
@ 2016-03-14  8:48   ` Vlastimil Babka
  2016-03-15  1:16     ` Minchan Kim
  0 siblings, 1 reply; 42+ messages in thread
From: Vlastimil Babka @ 2016-03-14  8:48 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: linux-mm, linux-kernel, jlayton, bfields, Joonsoo Kim, koct9i,
	aquini, virtualization, Mel Gorman, Hugh Dickins,
	Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Naoya Horiguchi

On 03/11/2016 08:30 AM, Minchan Kim wrote:
> Procedure of page migration is as follows:
>
> First of all, it should isolate a page from LRU and try to
> migrate the page. If it is successful, it releases the page
> for freeing. Otherwise, it should put the page back to LRU
> list.
>
> For LRU pages, we have used putback_lru_page for both freeing
> and putback to LRU list. It's okay because put_page is aware of
> LRU list so if it releases last refcount of the page, it removes
> the page from LRU list. However, It makes unnecessary operations
> (e.g., lru_cache_add, pagevec and flags operations.

Yeah, and compaction (perhaps also other migration users) has to drain 
the lru pvec... Getting rid of this stuff is worthwhile by itself.

> It would be
> not significant but no worth to do) and harder to support new
> non-lru page migration because put_page isn't aware of non-lru
> page's data structure.
>
> To solve the problem, we can add new hook in put_page with
> PageMovable flags check but it can increase overhead in
> hot path and needs new locking scheme to stabilize the flag check
> with put_page.
>
> So, this patch cleans it up to divide two semantic(ie, put and putback).
> If migration is successful, use put_page instead of putback_lru_page and
> use putback_lru_page only on failure. That makes code more readable
> and doesn't add overhead in put_page.

I had an idea of checking for count==1 in putback_lru_page() which would 
take the put_page() shortcut from there. But maybe it can't be done 
nicely without races.

> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Note in -next/after 4.6-rc1 this will need some rebasing though.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 01/19] mm: use put_page to free page instead of putback_lru_page
  2016-03-14  8:48   ` Vlastimil Babka
@ 2016-03-15  1:16     ` Minchan Kim
  2016-03-15 19:06       ` Vlastimil Babka
  0 siblings, 1 reply; 42+ messages in thread
From: Minchan Kim @ 2016-03-15  1:16 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Naoya Horiguchi

On Mon, Mar 14, 2016 at 09:48:33AM +0100, Vlastimil Babka wrote:
> On 03/11/2016 08:30 AM, Minchan Kim wrote:
> >Procedure of page migration is as follows:
> >
> >First of all, it should isolate a page from LRU and try to
> >migrate the page. If it is successful, it releases the page
> >for freeing. Otherwise, it should put the page back to LRU
> >list.
> >
> >For LRU pages, we have used putback_lru_page for both freeing
> >and putback to LRU list. It's okay because put_page is aware of
> >LRU list so if it releases last refcount of the page, it removes
> >the page from LRU list. However, It makes unnecessary operations
> >(e.g., lru_cache_add, pagevec and flags operations.
> 
> Yeah, and compaction (perhaps also other migration users) has to
> drain the lru pvec... Getting rid of this stuff is worth even by
> itself.

Good point. Although we cannot remove LRU pvec draining completely,
this patch at least removes one case where we would have to drain the
pvec just to return a freed page to the buddy allocator.

Thanks for the notice.

> 
> >It would be
> >not significant but no worth to do) and harder to support new
> >non-lru page migration because put_page isn't aware of non-lru
> >page's data structure.
> >
> >To solve the problem, we can add new hook in put_page with
> >PageMovable flags check but it can increase overhead in
> >hot path and needs new locking scheme to stabilize the flag check
> >with put_page.
> >
> >So, this patch cleans it up to divide two semantic(ie, put and putback).
> >If migration is successful, use put_page instead of putback_lru_page and
> >use putback_lru_page only on failure. That makes code more readable
> >and doesn't add overhead in put_page.
> 
> I had an idea of checking for count==1 in putback_lru_page() which
> would take the put_page() shortcut from there. But maybe it can't be
> done nicely without races.

I thought about it and we might do it via page_freeze_refs, but what
I want at this moment is to separate the two semantics, put and putback.
;-)
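
Just to illustrate the idea being discussed (this is not part of the
series), a count==1 shortcut in putback_lru_page() might look roughly
like the sketch below; as noted, the bare refcount check is racy and
would need something like page_freeze_refs() to be made safe:

	void putback_lru_page(struct page *page)
	{
		/* racy: assumes the caller holds the only remaining reference */
		if (page_count(page) == 1) {
			/* dropping the last reference frees the page directly */
			put_page(page);
			return;
		}
		/* ... existing putback-to-LRU path ... */
	}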

> 
> >Cc: Vlastimil Babka <vbabka@suse.cz>
> >Cc: Mel Gorman <mgorman@suse.de>
> >Cc: Hugh Dickins <hughd@google.com>
> >Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> >Signed-off-by: Minchan Kim <minchan@kernel.org>
> 
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
> 
> Note in -next/after 4.6-rc1 this will need some rebasing though.

Thanks for the review!

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 05/19] zsmalloc: use first_page rather than page
  2016-03-11  7:30 ` [PATCH v1 05/19] zsmalloc: use first_page rather than page Minchan Kim
@ 2016-03-15  6:19   ` Sergey Senozhatsky
  0 siblings, 0 replies; 42+ messages in thread
From: Sergey Senozhatsky @ 2016-03-15  6:19 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim

On (03/11/16 16:30), Minchan Kim wrote:
> This patch cleans up function parameter "struct page".
> Many functions of zsmalloc expects that page paramter is "first_page"
> so use "first_page" rather than "page" for code readability.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>

	-ss

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 06/19] zsmalloc: clean up many BUG_ON
  2016-03-11  7:30 ` [PATCH v1 06/19] zsmalloc: clean up many BUG_ON Minchan Kim
@ 2016-03-15  6:19   ` Sergey Senozhatsky
  0 siblings, 0 replies; 42+ messages in thread
From: Sergey Senozhatsky @ 2016-03-15  6:19 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim

On (03/11/16 16:30), Minchan Kim wrote:
> There are many BUG_ON in zsmalloc.c which is not recommened so
> change them as alternatives.
> 
> Normal rule is as follows:
> 
> 1. avoid BUG_ON if possible. Instead, use VM_BUG_ON or VM_BUG_ON_PAGE
> 2. use VM_BUG_ON_PAGE if we need to see struct page's fields
> 3. use those assertion in primitive functions so higher functions
> can rely on the assertion in the primitive function.
> 4. Don't use assertion if following instruction can trigger Oops
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>

	-ss

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 07/19] zsmalloc: reordering function parameter
  2016-03-11  7:30 ` [PATCH v1 07/19] zsmalloc: reordering function parameter Minchan Kim
@ 2016-03-15  6:20   ` Sergey Senozhatsky
  0 siblings, 0 replies; 42+ messages in thread
From: Sergey Senozhatsky @ 2016-03-15  6:20 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim

On (03/11/16 16:30), Minchan Kim wrote:
> This patch cleans up function parameter ordering to order
> higher data structure first.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>

	-ss

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 08/19] zsmalloc: remove unused pool param in obj_free
  2016-03-11  7:30 ` [PATCH v1 08/19] zsmalloc: remove unused pool param in obj_free Minchan Kim
@ 2016-03-15  6:21   ` Sergey Senozhatsky
  0 siblings, 0 replies; 42+ messages in thread
From: Sergey Senozhatsky @ 2016-03-15  6:21 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim

On (03/11/16 16:30), Minchan Kim wrote:
> Let's remove unused pool param in obj_free
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>

	-ss

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 09/19] zsmalloc: keep max_object in size_class
  2016-03-11  7:30 ` [PATCH v1 09/19] zsmalloc: keep max_object in size_class Minchan Kim
  2016-03-12  1:44   ` xuyiping
@ 2016-03-15  6:28   ` Sergey Senozhatsky
  2016-03-15  6:41     ` Minchan Kim
  1 sibling, 1 reply; 42+ messages in thread
From: Sergey Senozhatsky @ 2016-03-15  6:28 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim

On (03/11/16 16:30), Minchan Kim wrote:
> Every zspage in a size_class has same number of max objects so
> we could move it to a size_class.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  mm/zsmalloc.c | 29 ++++++++++++++---------------
>  1 file changed, 14 insertions(+), 15 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index b4fb11831acb..ca663c82c1fc 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -32,8 +32,6 @@
>   *	page->freelist: points to the first free object in zspage.
>   *		Free objects are linked together using in-place
>   *		metadata.
> - *	page->objects: maximum number of objects we can store in this
> - *		zspage (class->zspage_order * PAGE_SIZE / class->size)
>   *	page->lru: links together first pages of various zspages.
>   *		Basically forming list of zspages in a fullness group.
>   *	page->mapping: class index and fullness group of the zspage
> @@ -211,6 +209,7 @@ struct size_class {
>  	 * of ZS_ALIGN.
>  	 */
>  	int size;
> +	int objs_per_zspage;
>  	unsigned int index;

struct page->objects "comes for free". Now we don't use it, and instead
every size_class grows by 4 bytes? Is there any reason for this?

	-ss

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 11/19] zsmalloc: squeeze freelist into page->mapping
  2016-03-11  7:30 ` [PATCH v1 11/19] zsmalloc: squeeze freelist " Minchan Kim
@ 2016-03-15  6:40   ` Sergey Senozhatsky
  2016-03-15  6:51     ` Minchan Kim
  0 siblings, 1 reply; 42+ messages in thread
From: Sergey Senozhatsky @ 2016-03-15  6:40 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim

On (03/11/16 16:30), Minchan Kim wrote:
> -static void *location_to_obj(struct page *page, unsigned long obj_idx)
> +static void objidx_to_page_and_ofs(struct size_class *class,
> +				struct page *first_page,
> +				unsigned long obj_idx,
> +				struct page **obj_page,
> +				unsigned long *ofs_in_page)

this looks big; 5 params, function "returning" both page and offset...
any chance to split it in two steps, perhaps?

besides, it is more intuitive (at least to me) when 'offset'
shortened to 'offt', not 'ofs'.

	-ss

>  {
> -	unsigned long obj;
> +	int i;
> +	unsigned long ofs;
> +	struct page *cursor;
> +	int nr_page;
>  
> -	if (!page) {
> -		VM_BUG_ON(obj_idx);
> -		return NULL;
> -	}
> +	ofs = obj_idx * class->size;
> +	cursor = first_page;
> +	nr_page = ofs >> PAGE_SHIFT;
>  
> -	obj = page_to_pfn(page) << OBJ_INDEX_BITS;
> -	obj |= ((obj_idx) & OBJ_INDEX_MASK);
> -	obj <<= OBJ_TAG_BITS;
> +	*ofs_in_page = ofs & ~PAGE_MASK;
> +
> +	for (i = 0; i < nr_page; i++)
> +		cursor = get_next_page(cursor);
>  
> -	return (void *)obj;
> +	*obj_page = cursor;
>  }

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 09/19] zsmalloc: keep max_object in size_class
  2016-03-15  6:28   ` Sergey Senozhatsky
@ 2016-03-15  6:41     ` Minchan Kim
  0 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-15  6:41 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim

On Tue, Mar 15, 2016 at 03:28:24PM +0900, Sergey Senozhatsky wrote:
> On (03/11/16 16:30), Minchan Kim wrote:
> > Every zspage in a size_class has same number of max objects so
> > we could move it to a size_class.
> > 
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> >  mm/zsmalloc.c | 29 ++++++++++++++---------------
> >  1 file changed, 14 insertions(+), 15 deletions(-)
> > 
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index b4fb11831acb..ca663c82c1fc 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -32,8 +32,6 @@
> >   *	page->freelist: points to the first free object in zspage.
> >   *		Free objects are linked together using in-place
> >   *		metadata.
> > - *	page->objects: maximum number of objects we can store in this
> > - *		zspage (class->zspage_order * PAGE_SIZE / class->size)
> >   *	page->lru: links together first pages of various zspages.
> >   *		Basically forming list of zspages in a fullness group.
> >   *	page->mapping: class index and fullness group of the zspage
> > @@ -211,6 +209,7 @@ struct size_class {
> >  	 * of ZS_ALIGN.
> >  	 */
> >  	int size;
> > +	int objs_per_zspage;
> >  	unsigned int index;
> 
> struct page ->objects "comes for free". now we don't use it, instead
> every size_class grows by 4 bytes? is there any reason for this?

It is in a union with _mapcount, which this patchset uses for the
non-lru movable page checks.
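
Roughly, the layout in question looks like this (a simplified,
from-memory excerpt of the struct page union in this era's
include/linux/mm_types.h, shown only to illustrate the conflict):

	union {
		atomic_t _mapcount;	/* wanted by the non-lru movable checks */
		struct {		/* SLUB */
			unsigned inuse:16;
			unsigned objects:15;	/* what zsmalloc has used so far */
			unsigned frozen:1;
		};
		int units;		/* SLOB */
	};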
> 
> 	-ss

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 11/19] zsmalloc: squeeze freelist into page->mapping
  2016-03-15  6:40   ` Sergey Senozhatsky
@ 2016-03-15  6:51     ` Minchan Kim
  2016-03-17 12:09       ` YiPing Xu
  0 siblings, 1 reply; 42+ messages in thread
From: Minchan Kim @ 2016-03-15  6:51 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim

On Tue, Mar 15, 2016 at 03:40:53PM +0900, Sergey Senozhatsky wrote:
> On (03/11/16 16:30), Minchan Kim wrote:
> > -static void *location_to_obj(struct page *page, unsigned long obj_idx)
> > +static void objidx_to_page_and_ofs(struct size_class *class,
> > +				struct page *first_page,
> > +				unsigned long obj_idx,
> > +				struct page **obj_page,
> > +				unsigned long *ofs_in_page)
> 
> this looks big; 5 params, function "returning" both page and offset...
> any chance to split it in two steps, perhaps?

Yes, it's rather ugly but I don't have a good idea.
Feel free to suggest if you have a better idea.

> 
> besides, it is more intuitive (at least to me) when 'offset'
> shortened to 'offt', not 'ofs'.

Indeed. I will rename it to get_page_and_offset instead of using the
abbreviation if we cannot refactor it further.
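
For comparison, splitting the lookup in two steps as suggested might
look roughly like this (a sketch only; the helper names are invented
here, not taken from the patch):

	static struct page *obj_idx_to_page(struct size_class *class,
					struct page *first_page,
					unsigned long obj_idx)
	{
		int i, nr_page = (obj_idx * class->size) >> PAGE_SHIFT;
		struct page *cursor = first_page;

		for (i = 0; i < nr_page; i++)
			cursor = get_next_page(cursor);

		return cursor;
	}

	static unsigned long obj_idx_to_offset(struct size_class *class,
					unsigned long obj_idx)
	{
		return (obj_idx * class->size) & ~PAGE_MASK;
	}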

> 
> 	-ss
> 
> >  {
> > -	unsigned long obj;
> > +	int i;
> > +	unsigned long ofs;
> > +	struct page *cursor;
> > +	int nr_page;
> >  
> > -	if (!page) {
> > -		VM_BUG_ON(obj_idx);
> > -		return NULL;
> > -	}
> > +	ofs = obj_idx * class->size;
> > +	cursor = first_page;
> > +	nr_page = ofs >> PAGE_SHIFT;
> >  
> > -	obj = page_to_pfn(page) << OBJ_INDEX_BITS;
> > -	obj |= ((obj_idx) & OBJ_INDEX_MASK);
> > -	obj <<= OBJ_TAG_BITS;
> > +	*ofs_in_page = ofs & ~PAGE_MASK;
> > +
> > +	for (i = 0; i < nr_page; i++)
> > +		cursor = get_next_page(cursor);
> >  
> > -	return (void *)obj;
> > +	*obj_page = cursor;
> >  }

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 19/19] zram: use __GFP_MOVABLE for memory allocation
  2016-03-11  7:30 ` [PATCH v1 19/19] zram: use __GFP_MOVABLE for memory allocation Minchan Kim
@ 2016-03-15  6:56   ` Sergey Senozhatsky
  0 siblings, 0 replies; 42+ messages in thread
From: Sergey Senozhatsky @ 2016-03-15  6:56 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim

On (03/11/16 16:30), Minchan Kim wrote:
[..]
> init
> Node 0, zone      DMA    208    120     51     41     11      0      0      0      0      0      0
> Node 0, zone    DMA32  16380  13777   9184   3805    789     54      3      0      0      0      0
> compaction
> Node 0, zone      DMA    132     82     40     39     16      2      1      0      0      0      0
> Node 0, zone    DMA32   5219   5526   4969   3455   1831    677    139     15      0      0      0
> 
> new:
> 
> init
> Node 0, zone      DMA    379    115     97     19      2      0      0      0      0      0      0
> Node 0, zone    DMA32  18891  16774  10862   3947    637     21      0      0      0      0      0
> compaction  1
> Node 0, zone      DMA    214     66     87     29     10      3      0      0      0      0      0
> Node 0, zone    DMA32   1612   3139   3154   2469   1745    990    384     94      7      0      0
> 
> As you can see, compaction made so many high-order pages. Yay!
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>

	-ss

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 01/19] mm: use put_page to free page instead of putback_lru_page
  2016-03-15  1:16     ` Minchan Kim
@ 2016-03-15 19:06       ` Vlastimil Babka
  0 siblings, 0 replies; 42+ messages in thread
From: Vlastimil Babka @ 2016-03-15 19:06 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Joonsoo Kim, koct9i, aquini, virtualization, Mel Gorman,
	Hugh Dickins, Sergey Senozhatsky, rknize, Rik van Riel, Gioh Kim,
	Naoya Horiguchi

On 15.3.2016 2:16, Minchan Kim wrote:
> On Mon, Mar 14, 2016 at 09:48:33AM +0100, Vlastimil Babka wrote:
>> On 03/11/2016 08:30 AM, Minchan Kim wrote:
>>
>> Yeah, and compaction (perhaps also other migration users) has to
>> drain the lru pvec... Getting rid of this stuff is worth even by
>> itself.
> 
> Good note. Although we cannot remove lru pvec draining completely,
> at least, this patch removes a case which should drain pvec for
> returning freed page to buddy.

And this is in fact the only interesting case, right. The migrated page (at its
new target) doesn't concern compaction that much, that can go to lru pvec just
fine. But we do want the freed buddy pages to merge ASAP. I guess that's the
same for CMA, page isolation...

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 11/19] zsmalloc: squeeze freelist into page->mapping
  2016-03-15  6:51     ` Minchan Kim
@ 2016-03-17 12:09       ` YiPing Xu
  2016-03-17 22:17         ` Minchan Kim
  0 siblings, 1 reply; 42+ messages in thread
From: YiPing Xu @ 2016-03-17 12:09 UTC (permalink / raw)
  To: Minchan Kim, Sergey Senozhatsky
  Cc: Andrew Morton, linux-mm, linux-kernel, jlayton, bfields,
	Vlastimil Babka, Joonsoo Kim, koct9i, aquini, virtualization,
	Mel Gorman, Hugh Dickins, Sergey Senozhatsky, rknize,
	Rik van Riel, Gioh Kim



On 2016/3/15 14:51, Minchan Kim wrote:
> On Tue, Mar 15, 2016 at 03:40:53PM +0900, Sergey Senozhatsky wrote:
>> On (03/11/16 16:30), Minchan Kim wrote:
>>> -static void *location_to_obj(struct page *page, unsigned long obj_idx)
>>> +static void objidx_to_page_and_ofs(struct size_class *class,
>>> +				struct page *first_page,
>>> +				unsigned long obj_idx,
>>> +				struct page **obj_page,
>>> +				unsigned long *ofs_in_page)
>>
>> this looks big; 5 params, function "returning" both page and offset...
>> any chance to split it in two steps, perhaps?
>
> Yes, it's rather ugly but I don't have a good idea.
> Feel free to suggest if you have a better idea.
 >
>>
>> besides, it is more intuitive (at least to me) when 'offset'
>> shortened to 'offt', not 'ofs'.

	The purpose of getting 'obj_page' and 'ofs_in_page' is to map the page
and get the metadata pointer inside it, so we could finish this in a
single function.

	Something like this, though maybe we could find a better function name:

static unsigned long *map_handle(struct size_class *class,
	struct page *first_page, unsigned long obj_idx)
{
	struct page *cursor = first_page;
	unsigned long offset = obj_idx * class->size;
	int nr_page = offset >> PAGE_SHIFT;
	unsigned long offset_in_page = offset & ~PAGE_MASK;
	void *addr;
	int i;

	if (class->huge) {
		/* huge classes keep the handle in page_private() of the first page */
		VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
		return &page_private(first_page);
	}

	for (i = 0; i < nr_page; i++)
		cursor = get_next_page(cursor);

	addr = kmap_atomic(cursor);
	
	return addr + offset_in_page;
}

static void unmap_handle(struct size_class *class, unsigned long *addr)
{
	/* the huge case was never kmapped, so there is nothing to undo */
	if (class->huge)
		return;

	/* mask off the in-page offset to recover the address kmap_atomic returned */
	kunmap_atomic((void *)((unsigned long)addr & PAGE_MASK));
}

	All functions that currently call "objidx_to_page_and_ofs" could use
it like this, for example:

static unsigned long handle_from_obj(struct size_class *class,
				struct page *first_page, int obj_idx)
{
	unsigned long handle = 0;
	unsigned long *head = map_handle(class, first_page, obj_idx);

	if (*head & OBJ_ALLOCATED_TAG)
		handle = *head & ~OBJ_ALLOCATED_TAG;

	unmap_handle(class, head);

	return handle;
}

	'freeze_zspage' and 'unfreeze_zspage' would use it in the same way.

	But in 'obj_malloc', we still have to get the page to build obj:

	obj = location_to_obj(m_page, obj);


> Indeed. I will change it to get_page_and_offset instead of
> abbreviation if we cannot refactor it more.
>
>>
>> 	-ss
>>
>>>   {
>>> -	unsigned long obj;
>>> +	int i;
>>> +	unsigned long ofs;
>>> +	struct page *cursor;
>>> +	int nr_page;
>>>
>>> -	if (!page) {
>>> -		VM_BUG_ON(obj_idx);
>>> -		return NULL;
>>> -	}
>>> +	ofs = obj_idx * class->size;
>>> +	cursor = first_page;
>>> +	nr_page = ofs >> PAGE_SHIFT;
>>>
>>> -	obj = page_to_pfn(page) << OBJ_INDEX_BITS;
>>> -	obj |= ((obj_idx) & OBJ_INDEX_MASK);
>>> -	obj <<= OBJ_TAG_BITS;
>>> +	*ofs_in_page = ofs & ~PAGE_MASK;
>>> +
>>> +	for (i = 0; i < nr_page; i++)
>>> +		cursor = get_next_page(cursor);
>>>
>>> -	return (void *)obj;
>>> +	*obj_page = cursor;
>>>   }
>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v1 11/19] zsmalloc: squeeze freelist into page->mapping
  2016-03-17 12:09       ` YiPing Xu
@ 2016-03-17 22:17         ` Minchan Kim
  0 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2016-03-17 22:17 UTC (permalink / raw)
  To: YiPing Xu
  Cc: Sergey Senozhatsky, Andrew Morton, linux-mm, linux-kernel,
	jlayton, bfields, Vlastimil Babka, Joonsoo Kim, koct9i, aquini,
	virtualization, Mel Gorman, Hugh Dickins, Sergey Senozhatsky,
	rknize, Rik van Riel, Gioh Kim

On Thu, Mar 17, 2016 at 08:09:50PM +0800, YiPing Xu wrote:
> 
> 
> On 2016/3/15 14:51, Minchan Kim wrote:
> >On Tue, Mar 15, 2016 at 03:40:53PM +0900, Sergey Senozhatsky wrote:
> >>On (03/11/16 16:30), Minchan Kim wrote:
> >>>-static void *location_to_obj(struct page *page, unsigned long obj_idx)
> >>>+static void objidx_to_page_and_ofs(struct size_class *class,
> >>>+				struct page *first_page,
> >>>+				unsigned long obj_idx,
> >>>+				struct page **obj_page,
> >>>+				unsigned long *ofs_in_page)
> >>
> >>this looks big; 5 params, function "returning" both page and offset...
> >>any chance to split it in two steps, perhaps?
> >
> >Yes, it's rather ugly but I don't have a good idea.
> >Feel free to suggest if you have a better idea.
> >
> >>
> >>besides, it is more intuitive (at least to me) when 'offset'
> >>shortened to 'offt', not 'ofs'.
> 
> 	the purpose to get 'obj_page' and 'ofs_in_page' is to map the page
> and get the meta-data pointer in the page, so, we can finish this in
> a single function.
> 
> 	just like this, and maybe we could have a better function name
> 
> static unsigned long *map_handle(struct size_class *class,
> 	struct page *first_page, unsigned long obj_idx)
> {
> 	struct page *cursor = first_page;
> 	unsigned long offset = obj_idx * class->size;
> 	int nr_page = offset >> PAGE_SHIFT;
> 	unsigned long offset_in_page = offset & ~PAGE_MASK;
> 	void *addr;
> 	int i;
> 
> 	if (class->huge) {
> 		VM_BUG_ON_PAGE(!is_first_page(page), page);
> 		return &page_private(page);
> 	}
> 
> 	for (i = 0; i < nr_page; i++)
> 		cursor = get_next_page(cursor);
> 
> 	addr = kmap_atomic(cursor);
> 	
> 	return addr + offset_in_page;
> }
> 
> static void unmap_handle(unsigned long *addr)
> {
> 	if (class->huge) {
> 		return;
> 	}
> 
> 	kunmap_atomic(addr & ~PAGE_MASK);
> }
> 
> 	all functions called "objidx_to_page_and_ofs" could use it like
> this, for example:
> 
> static unsigned long handle_from_obj(struct size_class *class,
> 				struct page *first_page, int obj_idx)
> {
> 	unsigned long *head = map_handle(class, first_page, obj_idx);
> 
> 	if (*head & OBJ_ALLOCATED_TAG)
> 		handle = *head & ~OBJ_ALLOCATED_TAG;
> 
> 	unmap_handle(*head);
> 
> 	return handle;
> }
> 
> 	'freeze_zspage', u'nfreeze_zspage' use it in the same way.
> 
> 	but in 'obj_malloc', we still have to get the page to get obj.
> 
> 	obj = location_to_obj(m_page, obj);

Yes, that's why I didn't use such a pattern. I didn't want to
add unnecessary overhead in that hot path.

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2016-03-17 22:16 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-03-11  7:30 [PATCH v1 00/19] Support non-lru page migration Minchan Kim
2016-03-11  7:30 ` [PATCH v1 01/19] mm: use put_page to free page instead of putback_lru_page Minchan Kim
2016-03-14  8:48   ` Vlastimil Babka
2016-03-15  1:16     ` Minchan Kim
2016-03-15 19:06       ` Vlastimil Babka
2016-03-11  7:30 ` [PATCH v1 02/19] mm/compaction: support non-lru movable page migration Minchan Kim
2016-03-11  8:11   ` kbuild test robot
2016-03-11  8:35     ` Minchan Kim
2016-03-11  7:30 ` [PATCH v1 03/19] fs/anon_inodes: new interface to create new inode Minchan Kim
2016-03-11  8:05   ` Al Viro
2016-03-11 14:24     ` Gioh Kim
2016-03-11  7:30 ` [PATCH v1 04/19] mm/balloon: use general movable page feature into balloon Minchan Kim
2016-03-11  7:30 ` [PATCH v1 05/19] zsmalloc: use first_page rather than page Minchan Kim
2016-03-15  6:19   ` Sergey Senozhatsky
2016-03-11  7:30 ` [PATCH v1 06/19] zsmalloc: clean up many BUG_ON Minchan Kim
2016-03-15  6:19   ` Sergey Senozhatsky
2016-03-11  7:30 ` [PATCH v1 07/19] zsmalloc: reordering function parameter Minchan Kim
2016-03-15  6:20   ` Sergey Senozhatsky
2016-03-11  7:30 ` [PATCH v1 08/19] zsmalloc: remove unused pool param in obj_free Minchan Kim
2016-03-15  6:21   ` Sergey Senozhatsky
2016-03-11  7:30 ` [PATCH v1 09/19] zsmalloc: keep max_object in size_class Minchan Kim
2016-03-12  1:44   ` xuyiping
2016-03-14  4:55     ` Minchan Kim
2016-03-15  6:28   ` Sergey Senozhatsky
2016-03-15  6:41     ` Minchan Kim
2016-03-11  7:30 ` [PATCH v1 10/19] zsmalloc: squeeze inuse into page->mapping Minchan Kim
2016-03-11  7:30 ` [PATCH v1 11/19] zsmalloc: squeeze freelist " Minchan Kim
2016-03-15  6:40   ` Sergey Senozhatsky
2016-03-15  6:51     ` Minchan Kim
2016-03-17 12:09       ` YiPing Xu
2016-03-17 22:17         ` Minchan Kim
2016-03-11  7:30 ` [PATCH v1 12/19] zsmalloc: move struct zs_meta from mapping to freelist Minchan Kim
2016-03-11  7:30 ` [PATCH v1 13/19] zsmalloc: factor page chain functionality out Minchan Kim
2016-03-12  3:09   ` xuyiping
2016-03-14  4:58     ` Minchan Kim
2016-03-11  7:30 ` [PATCH v1 14/19] zsmalloc: separate free_zspage from putback_zspage Minchan Kim
2016-03-11  7:30 ` [PATCH v1 15/19] zsmalloc: zs_compact refactoring Minchan Kim
2016-03-11  7:30 ` [PATCH v1 16/19] zsmalloc: migrate head page of zspage Minchan Kim
2016-03-11  7:30 ` [PATCH v1 17/19] zsmalloc: use single linked list for page chain Minchan Kim
2016-03-11  7:30 ` [PATCH v1 18/19] zsmalloc: migrate tail pages in zspage Minchan Kim
2016-03-11  7:30 ` [PATCH v1 19/19] zram: use __GFP_MOVABLE for memory allocation Minchan Kim
2016-03-15  6:56   ` Sergey Senozhatsky

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).