linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/4] z3fold: support page migration
@ 2019-04-11 15:32 Vitaly Wool
  2019-04-11 15:34 ` [PATCH 1/4] z3fold: introduce helper functions Vitaly Wool
                   ` (4 more replies)
  0 siblings, 5 replies; 7+ messages in thread
From: Vitaly Wool @ 2019-04-11 15:32 UTC (permalink / raw)
  To: Linux-MM, linux-kernel; +Cc: Andrew Morton, Oleksiy.Avramchenko, Dan Streetman

This patchset implements page migration support and slightly better
buddy search. To support page migration, z3fold has to move away from
the current handle encoding scheme, i.e. stop encoding the page address
in handles. Instead, a small per-page structure is created which
contains the actual addresses of the z3fold objects, while pointers to
fields of that structure are used as handles.

Thus, it will be possible to change the underlying addresses to reflect
page migration.
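
For illustration, a minimal sketch of that indirection (the structure
and field names follow patch 3/4; object_address and idx are
placeholders here):

	struct z3fold_buddy_slots {
		unsigned long slot[BUDDY_MASK + 1];	/* object addresses, one per buddy */
		unsigned long pool;			/* back link to the pool + flags */
	};

	/* encoding: store the object address in a slot, hand out the slot's address */
	slots->slot[idx] = object_address;
	handle = (unsigned long)&slots->slot[idx];

	/* decoding: one extra dereference compared to the old scheme */
	object_address = *(unsigned long *)handle;

	/* migration: rewrite the slot; all outstanding handles stay valid */
	slots->slot[idx] = new_object_address;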

To support migration itself, 3 callbacks will be implemented (see the
sketch after this list for how they are wired up):
	1: isolation callback, z3fold_page_isolate(): try to isolate
the page by removing it from all lists. Pages scheduled for some
activity and mapped pages will not be isolated. Returns true if
isolation was successful, false otherwise.
	2: migration callback, z3fold_page_migrate(): re-check critical
conditions and migrate the page contents to the new page provided by
the system. Returns 0 on success or a negative error code otherwise.
	3: putback callback, z3fold_page_putback(): put back the page
if z3fold_page_migrate() for it failed permanently (i.e. not with the
-EAGAIN code).
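
These are hooked into the page-migration machinery through the pool
inode's struct address_space_operations, as patch 4/4 does:

	static const struct address_space_operations z3fold_aops = {
		.isolate_page	= z3fold_page_isolate,
		.migratepage	= z3fold_page_migrate,
		.putback_page	= z3fold_page_putback,
	};

	/* non-headless pages are then advertised as movable at allocation time */
	__SetPageMovable(page, pool->inode->i_mapping);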

To make sure an isolated page doesn't get freed, its kref is incremented
in z3fold_page_isolate() and decremented during post-migration
compaction if migration was successful, or by z3fold_page_putback()
otherwise.
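
Condensed from z3fold_page_isolate() in patch 4/4 (page and header
locking and the stale/compacting checks elided), the isolation side of
that looks roughly like:

	if (zhdr->mapped_count == 0) {
		kref_get(&zhdr->refcount);		/* pin the page while isolated */
		if (!list_empty(&zhdr->buddy))
			list_del_init(&zhdr->buddy);	/* drop from the unbuddied list */
		if (!list_empty(&page->lru))
			list_del(&page->lru);		/* drop from the pool LRU */
		return true;
	}
	return false;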

Since the new handle encoding scheme implies a slight increase in memory
consumption, better buddy search (which decreases memory consumption) is
included in this patchset.

Vitaly Wool (4):
   z3fold: introduce helper functions
   z3fold: improve compression by extending search
   z3fold: add structure for buddy handles
   z3fold: support page migration

  mm/z3fold.c |  638 ++++++++++++++++++++++++++++++++++++++++++++++++++-------------
  1 file changed, 508 insertions(+), 130 deletions(-)

   




* [PATCH 1/4] z3fold: introduce helper functions
  2019-04-11 15:32 [PATCH 0/4] z3fold: support page migration Vitaly Wool
@ 2019-04-11 15:34 ` Vitaly Wool
  2019-04-11 15:35 ` [PATCH 2/4] z3fold: improve compression by extending search Vitaly Wool
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Vitaly Wool @ 2019-04-11 15:34 UTC (permalink / raw)
  To: Linux-MM, linux-kernel; +Cc: Andrew Morton, Oleksiy.Avramchenko, Dan Streetman

This patch introduces a separate helper function for object
allocation, as well as two smaller helpers: one to add a z3fold header
to the appropriate unbuddied list and one to get a pointer to the pool
from the z3fold header. No functional changes here.

Signed-off-by: Vitaly Wool <vitaly.vul@sony.com>
---
  mm/z3fold.c | 184 ++++++++++++++++++++++++++++------------------------
  1 file changed, 100 insertions(+), 84 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index aee9b0b8d907..7a59875d880c 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -255,10 +255,15 @@ static enum buddy handle_to_buddy(unsigned long handle)
  	return (handle - zhdr->first_num) & BUDDY_MASK;
  }
  
+static inline struct z3fold_pool *zhdr_to_pool(struct z3fold_header *zhdr)
+{
+	return zhdr->pool;
+}
+
  static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
  {
  	struct page *page = virt_to_page(zhdr);
-	struct z3fold_pool *pool = zhdr->pool;
+	struct z3fold_pool *pool = zhdr_to_pool(zhdr);
  
  	WARN_ON(!list_empty(&zhdr->buddy));
  	set_bit(PAGE_STALE, &page->private);
@@ -295,9 +300,10 @@ static void release_z3fold_page_locked_list(struct kref *ref)
  {
  	struct z3fold_header *zhdr = container_of(ref, struct z3fold_header,
  					       refcount);
-	spin_lock(&zhdr->pool->lock);
+	struct z3fold_pool *pool = zhdr_to_pool(zhdr);
+	spin_lock(&pool->lock);
  	list_del_init(&zhdr->buddy);
-	spin_unlock(&zhdr->pool->lock);
+	spin_unlock(&pool->lock);
  
  	WARN_ON(z3fold_page_trylock(zhdr));
  	__release_z3fold_page(zhdr, true);
@@ -349,6 +355,23 @@ static int num_free_chunks(struct z3fold_header *zhdr)
  	return nfree;
  }
  
+/* Add to the appropriate unbuddied list */
+static inline void add_to_unbuddied(struct z3fold_pool *pool,
+				struct z3fold_header *zhdr)
+{
+	if (zhdr->first_chunks == 0 || zhdr->last_chunks == 0 ||
+			zhdr->middle_chunks == 0) {
+		struct list_head *unbuddied = get_cpu_ptr(pool->unbuddied);
+
+		int freechunks = num_free_chunks(zhdr);
+		spin_lock(&pool->lock);
+		list_add(&zhdr->buddy, &unbuddied[freechunks]);
+		spin_unlock(&pool->lock);
+		zhdr->cpu = smp_processor_id();
+		put_cpu_ptr(pool->unbuddied);
+	}
+}
+
  static inline void *mchunk_memmove(struct z3fold_header *zhdr,
  				unsigned short dst_chunk)
  {
@@ -406,10 +429,8 @@ static int z3fold_compact_page(struct z3fold_header *zhdr)
  
  static void do_compact_page(struct z3fold_header *zhdr, bool locked)
  {
-	struct z3fold_pool *pool = zhdr->pool;
+	struct z3fold_pool *pool = zhdr_to_pool(zhdr);
  	struct page *page;
-	struct list_head *unbuddied;
-	int fchunks;
  
  	page = virt_to_page(zhdr);
  	if (locked)
@@ -430,18 +451,7 @@ static void do_compact_page(struct z3fold_header *zhdr, bool locked)
  	}
  
  	z3fold_compact_page(zhdr);
-	unbuddied = get_cpu_ptr(pool->unbuddied);
-	fchunks = num_free_chunks(zhdr);
-	if (fchunks < NCHUNKS &&
-	    (!zhdr->first_chunks || !zhdr->middle_chunks ||
-			!zhdr->last_chunks)) {
-		/* the page's not completely free and it's unbuddied */
-		spin_lock(&pool->lock);
-		list_add(&zhdr->buddy, &unbuddied[fchunks]);
-		spin_unlock(&pool->lock);
-		zhdr->cpu = smp_processor_id();
-	}
-	put_cpu_ptr(pool->unbuddied);
+	add_to_unbuddied(pool, zhdr);
  	z3fold_page_unlock(zhdr);
  }
  
@@ -453,6 +463,67 @@ static void compact_page_work(struct work_struct *w)
  	do_compact_page(zhdr, false);
  }
  
+/* returns _locked_ z3fold page header or NULL */
+static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
+						size_t size, bool can_sleep)
+{
+	struct z3fold_header *zhdr = NULL;
+	struct page *page;
+	struct list_head *unbuddied;
+	int chunks = size_to_chunks(size), i;
+
+lookup:
+	/* First, try to find an unbuddied z3fold page. */
+	unbuddied = get_cpu_ptr(pool->unbuddied);
+	for_each_unbuddied_list(i, chunks) {
+		struct list_head *l = &unbuddied[i];
+
+		zhdr = list_first_entry_or_null(READ_ONCE(l),
+					struct z3fold_header, buddy);
+
+		if (!zhdr)
+			continue;
+
+		/* Re-check under lock. */
+		spin_lock(&pool->lock);
+		l = &unbuddied[i];
+		if (unlikely(zhdr != list_first_entry(READ_ONCE(l),
+						struct z3fold_header, buddy)) ||
+		    !z3fold_page_trylock(zhdr)) {
+			spin_unlock(&pool->lock);
+			zhdr = NULL;
+			put_cpu_ptr(pool->unbuddied);
+			if (can_sleep)
+				cond_resched();
+			goto lookup;
+		}
+		list_del_init(&zhdr->buddy);
+		zhdr->cpu = -1;
+		spin_unlock(&pool->lock);
+
+		page = virt_to_page(zhdr);
+		if (test_bit(NEEDS_COMPACTING, &page->private)) {
+			z3fold_page_unlock(zhdr);
+			zhdr = NULL;
+			put_cpu_ptr(pool->unbuddied);
+			if (can_sleep)
+				cond_resched();
+			goto lookup;
+		}
+
+		/*
+		 * this page could not be removed from its unbuddied
+		 * list while pool lock was held, and then we've taken
+		 * page lock so kref_put could not be called before
+		 * we got here, so it's safe to just call kref_get()
+		 */
+		kref_get(&zhdr->refcount);
+		break;
+	}
+	put_cpu_ptr(pool->unbuddied);
+
+	return zhdr;
+}
  
  /*
   * API Functions
@@ -546,7 +617,7 @@ static void z3fold_destroy_pool(struct z3fold_pool *pool)
  static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
  			unsigned long *handle)
  {
-	int chunks = 0, i, freechunks;
+	int chunks = size_to_chunks(size);
  	struct z3fold_header *zhdr = NULL;
  	struct page *page = NULL;
  	enum buddy bud;
@@ -561,56 +632,8 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
  	if (size > PAGE_SIZE - ZHDR_SIZE_ALIGNED - CHUNK_SIZE)
  		bud = HEADLESS;
  	else {
-		struct list_head *unbuddied;
-		chunks = size_to_chunks(size);
-
-lookup:
-		/* First, try to find an unbuddied z3fold page. */
-		unbuddied = get_cpu_ptr(pool->unbuddied);
-		for_each_unbuddied_list(i, chunks) {
-			struct list_head *l = &unbuddied[i];
-
-			zhdr = list_first_entry_or_null(READ_ONCE(l),
-						struct z3fold_header, buddy);
-
-			if (!zhdr)
-				continue;
-
-			/* Re-check under lock. */
-			spin_lock(&pool->lock);
-			l = &unbuddied[i];
-			if (unlikely(zhdr != list_first_entry(READ_ONCE(l),
-					struct z3fold_header, buddy)) ||
-			    !z3fold_page_trylock(zhdr)) {
-				spin_unlock(&pool->lock);
-				put_cpu_ptr(pool->unbuddied);
-				goto lookup;
-			}
-			list_del_init(&zhdr->buddy);
-			zhdr->cpu = -1;
-			spin_unlock(&pool->lock);
-
-			page = virt_to_page(zhdr);
-			if (test_bit(NEEDS_COMPACTING, &page->private)) {
-				z3fold_page_unlock(zhdr);
-				zhdr = NULL;
-				put_cpu_ptr(pool->unbuddied);
-				if (can_sleep)
-					cond_resched();
-				goto lookup;
-			}
-
-			/*
-			 * this page could not be removed from its unbuddied
-			 * list while pool lock was held, and then we've taken
-			 * page lock so kref_put could not be called before
-			 * we got here, so it's safe to just call kref_get()
-			 */
-			kref_get(&zhdr->refcount);
-			break;
-		}
-		put_cpu_ptr(pool->unbuddied);
-
+retry:
+		zhdr = __z3fold_alloc(pool, size, can_sleep);
  		if (zhdr) {
  			if (zhdr->first_chunks == 0) {
  				if (zhdr->middle_chunks != 0 &&
@@ -630,8 +653,9 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
  					z3fold_page_unlock(zhdr);
  				pr_err("No free chunks in unbuddied\n");
  				WARN_ON(1);
-				goto lookup;
+				goto retry;
  			}
+			page = virt_to_page(zhdr);
  			goto found;
  		}
  		bud = FIRST;
@@ -662,8 +686,12 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
  	if (!page)
  		return -ENOMEM;
  
-	atomic64_inc(&pool->pages_nr);
  	zhdr = init_z3fold_page(page, pool);
+	if (!zhdr) {
+		__free_page(page);
+		return -ENOMEM;
+	}
+	atomic64_inc(&pool->pages_nr);
  
  	if (bud == HEADLESS) {
  		set_bit(PAGE_HEADLESS, &page->private);
@@ -680,19 +708,7 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
  		zhdr->middle_chunks = chunks;
  		zhdr->start_middle = zhdr->first_chunks + ZHDR_CHUNKS;
  	}
-
-	if (zhdr->first_chunks == 0 || zhdr->last_chunks == 0 ||
-			zhdr->middle_chunks == 0) {
-		struct list_head *unbuddied = get_cpu_ptr(pool->unbuddied);
-
-		/* Add to unbuddied list */
-		freechunks = num_free_chunks(zhdr);
-		spin_lock(&pool->lock);
-		list_add(&zhdr->buddy, &unbuddied[freechunks]);
-		spin_unlock(&pool->lock);
-		zhdr->cpu = smp_processor_id();
-		put_cpu_ptr(pool->unbuddied);
-	}
+	add_to_unbuddied(pool, zhdr);
  
  headless:
  	spin_lock(&pool->lock);
-- 
2.17.1



* [PATCH 2/4] z3fold: improve compression by extending search
  2019-04-11 15:32 [PATCH 0/4] z3fold: support page migration Vitaly Wool
  2019-04-11 15:34 ` [PATCH 1/4] z3fold: introduce helper functions Vitaly Wool
@ 2019-04-11 15:35 ` Vitaly Wool
  2019-04-11 15:37 ` [PATCH 3/4] z3fold: add structure for buddy handles Vitaly Wool
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Vitaly Wool @ 2019-04-11 15:35 UTC (permalink / raw)
  To: Linux-MM, linux-kernel; +Cc: Andrew Morton, Oleksiy.Avramchenko, Dan Streetman

The current z3fold implementation only searches this CPU's page
lists for a fitting page to put a new object into. This patch adds
a quick search for very well fitting pages (i.e. those having
exactly the required amount of free space) on other CPUs' lists too,
before allocating a new page for that object.

Signed-off-by: Vitaly Wool <vitaly.vul@sony.com>
---
  mm/z3fold.c | 36 ++++++++++++++++++++++++++++++++++++
  1 file changed, 36 insertions(+)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 7a59875d880c..29a4f1249bef 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -522,6 +522,42 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
  	}
  	put_cpu_ptr(pool->unbuddied);
  
+	if (!zhdr) {
+		int cpu;
+
+		/* look for _exact_ match on other cpus' lists */
+		for_each_online_cpu(cpu) {
+			struct list_head *l;
+
+			unbuddied = per_cpu_ptr(pool->unbuddied, cpu);
+			spin_lock(&pool->lock);
+			l = &unbuddied[chunks];
+
+			zhdr = list_first_entry_or_null(READ_ONCE(l),
+						struct z3fold_header, buddy);
+
+			if (!zhdr || !z3fold_page_trylock(zhdr)) {
+				spin_unlock(&pool->lock);
+				zhdr = NULL;
+				continue;
+			}
+			list_del_init(&zhdr->buddy);
+			zhdr->cpu = -1;
+			spin_unlock(&pool->lock);
+
+			page = virt_to_page(zhdr);
+			if (test_bit(NEEDS_COMPACTING, &page->private)) {
+				z3fold_page_unlock(zhdr);
+				zhdr = NULL;
+				if (can_sleep)
+					cond_resched();
+				continue;
+			}
+			kref_get(&zhdr->refcount);
+			break;
+		}
+	}
+
  	return zhdr;
  }
  
-- 
2.17.1



* [PATCH 3/4] z3fold: add structure for buddy handles
  2019-04-11 15:32 [PATCH 0/4] z3fold: support page migration Vitaly Wool
  2019-04-11 15:34 ` [PATCH 1/4] z3fold: introduce helper functions Vitaly Wool
  2019-04-11 15:35 ` [PATCH 2/4] z3fold: improve compression by extending search Vitaly Wool
@ 2019-04-11 15:37 ` Vitaly Wool
  2019-04-11 15:38 ` [PATCH 4/4] z3fold: support page migration Vitaly Wool
  2019-04-16 23:18 ` [PATCH 0/4] " Andrew Morton
  4 siblings, 0 replies; 7+ messages in thread
From: Vitaly Wool @ 2019-04-11 15:37 UTC (permalink / raw)
  To: Linux-MM, linux-kernel; +Cc: Andrew Morton, Oleksiy.Avramchenko, Dan Streetman

For z3fold to be able to move its pages at the request of the memory
subsystem, it should not use direct object addresses in handles.
Instead, it will create abstract handles (3 per page) which will
contain pointers to z3fold objects. Thus, it will be possible to
change these pointers when a z3fold page is moved.

Signed-off-by: Vitaly Wool <vitaly.vul@sony.com>
---
  mm/z3fold.c | 185 ++++++++++++++++++++++++++++++++++++++++------------
  1 file changed, 145 insertions(+), 40 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 29a4f1249bef..bebc10083f1c 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -34,6 +34,29 @@
  #include <linux/spinlock.h>
  #include <linux/zpool.h>
  
+/*
+ * NCHUNKS_ORDER determines the internal allocation granularity, effectively
+ * adjusting internal fragmentation.  It also determines the number of
+ * freelists maintained in each pool. NCHUNKS_ORDER of 6 means that the
+ * allocation granularity will be in chunks of size PAGE_SIZE/64. Some chunks
+ * in the beginning of an allocated page are occupied by z3fold header, so
+ * NCHUNKS will be calculated to 63 (or 62 in case CONFIG_DEBUG_SPINLOCK=y),
+ * which shows the max number of free chunks in z3fold page, also there will
+ * be 63, or 62, respectively, freelists per pool.
+ */
+#define NCHUNKS_ORDER	6
+
+#define CHUNK_SHIFT	(PAGE_SHIFT - NCHUNKS_ORDER)
+#define CHUNK_SIZE	(1 << CHUNK_SHIFT)
+#define ZHDR_SIZE_ALIGNED round_up(sizeof(struct z3fold_header), CHUNK_SIZE)
+#define ZHDR_CHUNKS	(ZHDR_SIZE_ALIGNED >> CHUNK_SHIFT)
+#define TOTAL_CHUNKS	(PAGE_SIZE >> CHUNK_SHIFT)
+#define NCHUNKS		((PAGE_SIZE - ZHDR_SIZE_ALIGNED) >> CHUNK_SHIFT)
+
+#define BUDDY_MASK	(0x3)
+#define BUDDY_SHIFT	2
+#define SLOTS_ALIGN	(0x40)
+
  /*****************
   * Structures
  *****************/
@@ -47,9 +70,19 @@ enum buddy {
  	FIRST,
  	MIDDLE,
  	LAST,
-	BUDDIES_MAX
+	BUDDIES_MAX = LAST
  };
  
+struct z3fold_buddy_slots {
+	/*
+	 * we are using BUDDY_MASK in handle_to_buddy etc. so there should
+	 * be enough slots to hold all possible variants
+	 */
+	unsigned long slot[BUDDY_MASK + 1];
+	unsigned long pool; /* back link + flags */
+};
+#define HANDLE_FLAG_MASK	(0x03)
+
  /*
   * struct z3fold_header - z3fold page metadata occupying first chunks of each
   *			z3fold page, except for HEADLESS pages
@@ -58,7 +91,7 @@ enum buddy {
   * @page_lock:		per-page lock
   * @refcount:		reference count for the z3fold page
   * @work:		work_struct for page layout optimization
- * @pool:		pointer to the pool which this page belongs to
+ * @slots:		pointer to the structure holding buddy slots
   * @cpu:		CPU which this page "belongs" to
   * @first_chunks:	the size of the first buddy in chunks, 0 if free
   * @middle_chunks:	the size of the middle buddy in chunks, 0 if free
@@ -70,7 +103,7 @@ struct z3fold_header {
  	spinlock_t page_lock;
  	struct kref refcount;
  	struct work_struct work;
-	struct z3fold_pool *pool;
+	struct z3fold_buddy_slots *slots;
  	short cpu;
  	unsigned short first_chunks;
  	unsigned short middle_chunks;
@@ -79,28 +112,6 @@ struct z3fold_header {
  	unsigned short first_num:2;
  };
  
-/*
- * NCHUNKS_ORDER determines the internal allocation granularity, effectively
- * adjusting internal fragmentation.  It also determines the number of
- * freelists maintained in each pool. NCHUNKS_ORDER of 6 means that the
- * allocation granularity will be in chunks of size PAGE_SIZE/64. Some chunks
- * in the beginning of an allocated page are occupied by z3fold header, so
- * NCHUNKS will be calculated to 63 (or 62 in case CONFIG_DEBUG_SPINLOCK=y),
- * which shows the max number of free chunks in z3fold page, also there will
- * be 63, or 62, respectively, freelists per pool.
- */
-#define NCHUNKS_ORDER	6
-
-#define CHUNK_SHIFT	(PAGE_SHIFT - NCHUNKS_ORDER)
-#define CHUNK_SIZE	(1 << CHUNK_SHIFT)
-#define ZHDR_SIZE_ALIGNED round_up(sizeof(struct z3fold_header), CHUNK_SIZE)
-#define ZHDR_CHUNKS	(ZHDR_SIZE_ALIGNED >> CHUNK_SHIFT)
-#define TOTAL_CHUNKS	(PAGE_SIZE >> CHUNK_SHIFT)
-#define NCHUNKS		((PAGE_SIZE - ZHDR_SIZE_ALIGNED) >> CHUNK_SHIFT)
-
-#define BUDDY_MASK	(0x3)
-#define BUDDY_SHIFT	2
-
  /**
   * struct z3fold_pool - stores metadata for each z3fold pool
   * @name:	pool name
@@ -113,6 +124,7 @@ struct z3fold_header {
   *		added buddy.
   * @stale:	list of pages marked for freeing
   * @pages_nr:	number of z3fold pages in the pool.
+ * @c_handle:	cache for z3fold_buddy_slots allocation
   * @ops:	pointer to a structure of user defined operations specified at
   *		pool creation time.
   * @compact_wq:	workqueue for page layout background optimization
@@ -130,6 +142,7 @@ struct z3fold_pool {
  	struct list_head lru;
  	struct list_head stale;
  	atomic64_t pages_nr;
+	struct kmem_cache *c_handle;
  	const struct z3fold_ops *ops;
  	struct zpool *zpool;
  	const struct zpool_ops *zpool_ops;
@@ -164,11 +177,65 @@ static int size_to_chunks(size_t size)
  
  static void compact_page_work(struct work_struct *w);
  
+static inline struct z3fold_buddy_slots *alloc_slots(struct z3fold_pool *pool)
+{
+	struct z3fold_buddy_slots *slots = kmem_cache_alloc(pool->c_handle,
+							GFP_KERNEL);
+
+	if (slots) {
+		memset(slots->slot, 0, sizeof(slots->slot));
+		slots->pool = (unsigned long)pool;
+	}
+
+	return slots;
+}
+
+static inline struct z3fold_pool *slots_to_pool(struct z3fold_buddy_slots *s)
+{
+	return (struct z3fold_pool *)(s->pool & ~HANDLE_FLAG_MASK);
+}
+
+static inline struct z3fold_buddy_slots *handle_to_slots(unsigned long handle)
+{
+	return (struct z3fold_buddy_slots *)(handle & ~(SLOTS_ALIGN - 1));
+}
+
+static inline void free_handle(unsigned long handle)
+{
+	struct z3fold_buddy_slots *slots;
+	int i;
+	bool is_free;
+
+	if (handle & (1 << PAGE_HEADLESS))
+		return;
+
+	WARN_ON(*(unsigned long *)handle == 0);
+	*(unsigned long *)handle = 0;
+	slots = handle_to_slots(handle);
+	is_free = true;
+	for (i = 0; i <= BUDDY_MASK; i++) {
+		if (slots->slot[i]) {
+			is_free = false;
+			break;
+		}
+	}
+
+	if (is_free) {
+		struct z3fold_pool *pool = slots_to_pool(slots);
+
+		kmem_cache_free(pool->c_handle, slots);
+	}
+}
+
  /* Initializes the z3fold header of a newly allocated z3fold page */
  static struct z3fold_header *init_z3fold_page(struct page *page,
  					struct z3fold_pool *pool)
  {
  	struct z3fold_header *zhdr = page_address(page);
+	struct z3fold_buddy_slots *slots = alloc_slots(pool);
+
+	if (!slots)
+		return NULL;
  
  	INIT_LIST_HEAD(&page->lru);
  	clear_bit(PAGE_HEADLESS, &page->private);
@@ -185,7 +252,7 @@ static struct z3fold_header *init_z3fold_page(struct page *page,
  	zhdr->first_num = 0;
  	zhdr->start_middle = 0;
  	zhdr->cpu = -1;
-	zhdr->pool = pool;
+	zhdr->slots = slots;
  	INIT_LIST_HEAD(&zhdr->buddy);
  	INIT_WORK(&zhdr->work, compact_page_work);
  	return zhdr;
@@ -215,33 +282,57 @@ static inline void z3fold_page_unlock(struct z3fold_header *zhdr)
  	spin_unlock(&zhdr->page_lock);
  }
  
+/* Helper function to build the index */
+static inline int __idx(struct z3fold_header *zhdr, enum buddy bud)
+{
+	return (bud + zhdr->first_num) & BUDDY_MASK;
+}
+
  /*
   * Encodes the handle of a particular buddy within a z3fold page
   * Pool lock should be held as this function accesses first_num
   */
  static unsigned long encode_handle(struct z3fold_header *zhdr, enum buddy bud)
  {
-	unsigned long handle;
+	struct z3fold_buddy_slots *slots;
+	unsigned long h = (unsigned long)zhdr;
+	int idx = 0;
  
-	handle = (unsigned long)zhdr;
-	if (bud != HEADLESS) {
-		handle |= (bud + zhdr->first_num) & BUDDY_MASK;
-		if (bud == LAST)
-			handle |= (zhdr->last_chunks << BUDDY_SHIFT);
-	}
-	return handle;
+	/*
+	 * For a headless page, its handle is its pointer with the extra
+	 * PAGE_HEADLESS bit set
+	 */
+	if (bud == HEADLESS)
+		return h | (1 << PAGE_HEADLESS);
+
+	/* otherwise, return pointer to encoded handle */
+	idx = __idx(zhdr, bud);
+	h += idx;
+	if (bud == LAST)
+		h |= (zhdr->last_chunks << BUDDY_SHIFT);
+
+	slots = zhdr->slots;
+	slots->slot[idx] = h;
+	return (unsigned long)&slots->slot[idx];
  }
  
  /* Returns the z3fold page where a given handle is stored */
-static struct z3fold_header *handle_to_z3fold_header(unsigned long handle)
+static inline struct z3fold_header *handle_to_z3fold_header(unsigned long handle)
  {
-	return (struct z3fold_header *)(handle & PAGE_MASK);
+	unsigned long addr = handle;
+
+	if (!(addr & (1 << PAGE_HEADLESS)))
+		addr = *(unsigned long *)handle;
+
+	return (struct z3fold_header *)(addr & PAGE_MASK);
  }
  
  /* only for LAST bud, returns zero otherwise */
  static unsigned short handle_to_chunks(unsigned long handle)
  {
-	return (handle & ~PAGE_MASK) >> BUDDY_SHIFT;
+	unsigned long addr = *(unsigned long *)handle;
+
+	return (addr & ~PAGE_MASK) >> BUDDY_SHIFT;
  }
  
  /*
@@ -251,13 +342,18 @@ static unsigned short handle_to_chunks(unsigned long handle)
   */
  static enum buddy handle_to_buddy(unsigned long handle)
  {
-	struct z3fold_header *zhdr = handle_to_z3fold_header(handle);
-	return (handle - zhdr->first_num) & BUDDY_MASK;
+	struct z3fold_header *zhdr;
+	unsigned long addr;
+
+	WARN_ON(handle & (1 << PAGE_HEADLESS));
+	addr = *(unsigned long *)handle;
+	zhdr = (struct z3fold_header *)(addr & PAGE_MASK);
+	return (addr - zhdr->first_num) & BUDDY_MASK;
  }
  
  static inline struct z3fold_pool *zhdr_to_pool(struct z3fold_header *zhdr)
  {
-	return zhdr->pool;
+	return slots_to_pool(zhdr->slots);
  }
  
  static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
@@ -583,6 +679,11 @@ static struct z3fold_pool *z3fold_create_pool(const char *name, gfp_t gfp,
  	pool = kzalloc(sizeof(struct z3fold_pool), gfp);
  	if (!pool)
  		goto out;
+	pool->c_handle = kmem_cache_create("z3fold_handle",
+				sizeof(struct z3fold_buddy_slots),
+				SLOTS_ALIGN, 0, NULL);
+	if (!pool->c_handle)
+		goto out_c;
  	spin_lock_init(&pool->lock);
  	spin_lock_init(&pool->stale_lock);
  	pool->unbuddied = __alloc_percpu(sizeof(struct list_head)*NCHUNKS, 2);
@@ -613,6 +714,8 @@ static struct z3fold_pool *z3fold_create_pool(const char *name, gfp_t gfp,
  out_unbuddied:
  	free_percpu(pool->unbuddied);
  out_pool:
+	kmem_cache_destroy(pool->c_handle);
+out_c:
  	kfree(pool);
  out:
  	return NULL;
@@ -626,6 +729,7 @@ static struct z3fold_pool *z3fold_create_pool(const char *name, gfp_t gfp,
   */
  static void z3fold_destroy_pool(struct z3fold_pool *pool)
  {
+	kmem_cache_destroy(pool->c_handle);
  	destroy_workqueue(pool->release_wq);
  	destroy_workqueue(pool->compact_wq);
  	kfree(pool);
@@ -818,6 +922,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
  		return;
  	}
  
+	free_handle(handle);
  	if (kref_put(&zhdr->refcount, release_z3fold_page_locked_list)) {
  		atomic64_dec(&pool->pages_nr);
  		return;
-- 
2.17.1



* [PATCH 4/4] z3fold: support page migration
  2019-04-11 15:32 [PATCH 0/4] z3fold: support page migration Vitaly Wool
                   ` (2 preceding siblings ...)
  2019-04-11 15:37 ` [PATCH 3/4] z3fold: add structure for buddy handles Vitaly Wool
@ 2019-04-11 15:38 ` Vitaly Wool
  2019-04-16 23:18 ` [PATCH 0/4] " Andrew Morton
  4 siblings, 0 replies; 7+ messages in thread
From: Vitaly Wool @ 2019-04-11 15:38 UTC (permalink / raw)
  To: Linux-MM, linux-kernel; +Cc: Andrew Morton, Oleksiy.Avramchenko, Dan Streetman

Now that we are not using page addresses in handles directly, we
can make z3fold pages movable to decrease the memory fragmentation
z3fold may create over time.

This patch starts advertising non-headless z3fold pages as movable
and uses the existing kernel infrastructure to implement moving of
such pages at the memory management subsystem's request. It thus
implements the 3 required callbacks for page migration:

* isolation callback, z3fold_page_isolate(): try to isolate the
page by removing it from all lists. Pages scheduled for some activity
and mapped pages will not be isolated. Returns true if isolation was
successful, false otherwise.
* migration callback, z3fold_page_migrate(): re-check critical
conditions and migrate the page contents to the new page provided by
the memory subsystem. Returns 0 on success or a negative error code
otherwise.
* putback callback, z3fold_page_putback(): put back the page if
z3fold_page_migrate() for it failed permanently (i.e. not with the
-EAGAIN code).

Signed-off-by: Vitaly Wool <vitaly.vul@sony.com>
---
  mm/z3fold.c | 241 +++++++++++++++++++++++++++++++++++++++++++++++++---
  1 file changed, 231 insertions(+), 10 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index bebc10083f1c..d9eabfdad0fe 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -24,10 +24,18 @@
  
  #include <linux/atomic.h>
  #include <linux/sched.h>
+#include <linux/cpumask.h>
+#include <linux/dcache.h>
  #include <linux/list.h>
  #include <linux/mm.h>
  #include <linux/module.h>
+#include <linux/page-flags.h>
+#include <linux/migrate.h>
+#include <linux/node.h>
+#include <linux/compaction.h>
  #include <linux/percpu.h>
+#include <linux/mount.h>
+#include <linux/fs.h>
  #include <linux/preempt.h>
  #include <linux/workqueue.h>
  #include <linux/slab.h>
@@ -97,6 +105,7 @@ struct z3fold_buddy_slots {
   * @middle_chunks:	the size of the middle buddy in chunks, 0 if free
   * @last_chunks:	the size of the last buddy in chunks, 0 if free
   * @first_num:		the starting number (for the first handle)
+ * @mapped_count:	the number of objects currently mapped
   */
  struct z3fold_header {
  	struct list_head buddy;
@@ -110,6 +119,7 @@ struct z3fold_header {
  	unsigned short last_chunks;
  	unsigned short start_middle;
  	unsigned short first_num:2;
+	unsigned short mapped_count:2;
  };
  
  /**
@@ -130,6 +140,7 @@ struct z3fold_header {
   * @compact_wq:	workqueue for page layout background optimization
   * @release_wq:	workqueue for safe page release
   * @work:	work_struct for safe page release
+ * @inode:	inode for z3fold pseudo filesystem
   *
   * This structure is allocated at pool creation time and maintains metadata
   * pertaining to a particular z3fold pool.
@@ -149,6 +160,7 @@ struct z3fold_pool {
  	struct workqueue_struct *compact_wq;
  	struct workqueue_struct *release_wq;
  	struct work_struct work;
+	struct inode *inode;
  };
  
  /*
@@ -227,6 +239,59 @@ static inline void free_handle(unsigned long handle)
  	}
  }
  
+static struct dentry *z3fold_do_mount(struct file_system_type *fs_type,
+				int flags, const char *dev_name, void *data)
+{
+	static const struct dentry_operations ops = {
+		.d_dname = simple_dname,
+	};
+
+	return mount_pseudo(fs_type, "z3fold:", NULL, &ops, 0x33);
+}
+
+static struct file_system_type z3fold_fs = {
+	.name		= "z3fold",
+	.mount		= z3fold_do_mount,
+	.kill_sb	= kill_anon_super,
+};
+
+static struct vfsmount *z3fold_mnt;
+static int z3fold_mount(void)
+{
+	int ret = 0;
+
+	z3fold_mnt = kern_mount(&z3fold_fs);
+	if (IS_ERR(z3fold_mnt))
+		ret = PTR_ERR(z3fold_mnt);
+
+	return ret;
+}
+
+static void z3fold_unmount(void)
+{
+	kern_unmount(z3fold_mnt);
+}
+
+static const struct address_space_operations z3fold_aops;
+static int z3fold_register_migration(struct z3fold_pool *pool)
+{
+	pool->inode = alloc_anon_inode(z3fold_mnt->mnt_sb);
+	if (IS_ERR(pool->inode)) {
+		pool->inode = NULL;
+		return 1;
+	}
+
+	pool->inode->i_mapping->private_data = pool;
+	pool->inode->i_mapping->a_ops = &z3fold_aops;
+	return 0;
+}
+
+static void z3fold_unregister_migration(struct z3fold_pool *pool)
+{
+	if (pool->inode)
+		iput(pool->inode);
+ }
+
  /* Initializes the z3fold header of a newly allocated z3fold page */
  static struct z3fold_header *init_z3fold_page(struct page *page,
  					struct z3fold_pool *pool)
@@ -259,8 +324,14 @@ static struct z3fold_header *init_z3fold_page(struct page *page,
  }
  
  /* Resets the struct page fields and frees the page */
-static void free_z3fold_page(struct page *page)
+static void free_z3fold_page(struct page *page, bool headless)
  {
+	if (!headless) {
+		lock_page(page);
+		__ClearPageMovable(page);
+		unlock_page(page);
+	}
+	ClearPagePrivate(page);
  	__free_page(page);
  }
  
@@ -317,12 +388,12 @@ static unsigned long encode_handle(struct z3fold_header *zhdr, enum buddy bud)
  }
  
  /* Returns the z3fold page where a given handle is stored */
-static inline struct z3fold_header *handle_to_z3fold_header(unsigned long handle)
+static inline struct z3fold_header *handle_to_z3fold_header(unsigned long h)
  {
-	unsigned long addr = handle;
+	unsigned long addr = h;
  
  	if (!(addr & (1 << PAGE_HEADLESS)))
-		addr = *(unsigned long *)handle;
+		addr = *(unsigned long *)h;
  
  	return (struct z3fold_header *)(addr & PAGE_MASK);
  }
@@ -366,7 +437,7 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
  	clear_bit(NEEDS_COMPACTING, &page->private);
  	spin_lock(&pool->lock);
  	if (!list_empty(&page->lru))
-		list_del(&page->lru);
+		list_del_init(&page->lru);
  	spin_unlock(&pool->lock);
  	if (locked)
  		z3fold_page_unlock(zhdr);
@@ -420,7 +491,7 @@ static void free_pages_work(struct work_struct *w)
  			continue;
  		spin_unlock(&pool->stale_lock);
  		cancel_work_sync(&zhdr->work);
-		free_z3fold_page(page);
+		free_z3fold_page(page, false);
  		cond_resched();
  		spin_lock(&pool->stale_lock);
  	}
@@ -486,6 +557,9 @@ static int z3fold_compact_page(struct z3fold_header *zhdr)
  	if (test_bit(MIDDLE_CHUNK_MAPPED, &page->private))
  		return 0; /* can't move middle chunk, it's used */
  
+	if (unlikely(PageIsolated(page)))
+		return 0;
+
  	if (zhdr->middle_chunks == 0)
  		return 0; /* nothing to compact */
  
@@ -546,6 +620,12 @@ static void do_compact_page(struct z3fold_header *zhdr, bool locked)
  		return;
  	}
  
+	if (unlikely(PageIsolated(page) ||
+		     test_bit(PAGE_STALE, &page->private))) {
+		z3fold_page_unlock(zhdr);
+		return;
+	}
+
  	z3fold_compact_page(zhdr);
  	add_to_unbuddied(pool, zhdr);
  	z3fold_page_unlock(zhdr);
@@ -705,10 +785,14 @@ static struct z3fold_pool *z3fold_create_pool(const char *name, gfp_t gfp,
  	pool->release_wq = create_singlethread_workqueue(pool->name);
  	if (!pool->release_wq)
  		goto out_wq;
+	if (z3fold_register_migration(pool))
+		goto out_rwq;
  	INIT_WORK(&pool->work, free_pages_work);
  	pool->ops = ops;
  	return pool;
  
+out_rwq:
+	destroy_workqueue(pool->release_wq);
  out_wq:
  	destroy_workqueue(pool->compact_wq);
  out_unbuddied:
@@ -730,6 +814,7 @@ static struct z3fold_pool *z3fold_create_pool(const char *name, gfp_t gfp,
  static void z3fold_destroy_pool(struct z3fold_pool *pool)
  {
  	kmem_cache_destroy(pool->c_handle);
+	z3fold_unregister_migration(pool);
  	destroy_workqueue(pool->release_wq);
  	destroy_workqueue(pool->compact_wq);
  	kfree(pool);
@@ -837,6 +922,7 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
  		set_bit(PAGE_HEADLESS, &page->private);
  		goto headless;
  	}
+	__SetPageMovable(page, pool->inode->i_mapping);
  	z3fold_page_lock(zhdr);
  
  found:
@@ -895,7 +981,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
  			spin_lock(&pool->lock);
  			list_del(&page->lru);
  			spin_unlock(&pool->lock);
-			free_z3fold_page(page);
+			free_z3fold_page(page, true);
  			atomic64_dec(&pool->pages_nr);
  		}
  		return;
@@ -931,7 +1017,8 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
  		z3fold_page_unlock(zhdr);
  		return;
  	}
-	if (test_and_set_bit(NEEDS_COMPACTING, &page->private)) {
+	if (unlikely(PageIsolated(page)) ||
+	    test_and_set_bit(NEEDS_COMPACTING, &page->private)) {
  		z3fold_page_unlock(zhdr);
  		return;
  	}
@@ -1012,10 +1099,12 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
  			if (test_and_set_bit(PAGE_CLAIMED, &page->private))
  				continue;
  
-			zhdr = page_address(page);
+			if (unlikely(PageIsolated(page)))
+				continue;
  			if (test_bit(PAGE_HEADLESS, &page->private))
  				break;
  
+			zhdr = page_address(page);
  			if (!z3fold_page_trylock(zhdr)) {
  				zhdr = NULL;
  				continue; /* can't evict at this point */
@@ -1076,7 +1165,7 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
  next:
  		if (test_bit(PAGE_HEADLESS, &page->private)) {
  			if (ret == 0) {
-				free_z3fold_page(page);
+				free_z3fold_page(page, true);
  				atomic64_dec(&pool->pages_nr);
  				return 0;
  			}
@@ -1153,6 +1242,8 @@ static void *z3fold_map(struct z3fold_pool *pool, unsigned long handle)
  		break;
  	}
  
+	if (addr)
+		zhdr->mapped_count++;
  	z3fold_page_unlock(zhdr);
  out:
  	return addr;
@@ -1179,6 +1270,7 @@ static void z3fold_unmap(struct z3fold_pool *pool, unsigned long handle)
  	buddy = handle_to_buddy(handle);
  	if (buddy == MIDDLE)
  		clear_bit(MIDDLE_CHUNK_MAPPED, &page->private);
+	zhdr->mapped_count--;
  	z3fold_page_unlock(zhdr);
  }
  
@@ -1193,6 +1285,128 @@ static u64 z3fold_get_pool_size(struct z3fold_pool *pool)
  	return atomic64_read(&pool->pages_nr);
  }
  
+bool z3fold_page_isolate(struct page *page, isolate_mode_t mode)
+{
+	struct z3fold_header *zhdr;
+	struct z3fold_pool *pool;
+
+	VM_BUG_ON_PAGE(!PageMovable(page), page);
+	VM_BUG_ON_PAGE(PageIsolated(page), page);
+
+	if (test_bit(PAGE_HEADLESS, &page->private))
+		return false;
+
+	zhdr = page_address(page);
+	z3fold_page_lock(zhdr);
+	if (test_bit(NEEDS_COMPACTING, &page->private) ||
+	    test_bit(PAGE_STALE, &page->private))
+		goto out;
+
+	pool = zhdr_to_pool(zhdr);
+
+	if (zhdr->mapped_count == 0) {
+		kref_get(&zhdr->refcount);
+		if (!list_empty(&zhdr->buddy))
+			list_del_init(&zhdr->buddy);
+		spin_lock(&pool->lock);
+		if (!list_empty(&page->lru))
+			list_del(&page->lru);
+		spin_unlock(&pool->lock);
+		z3fold_page_unlock(zhdr);
+		return true;
+	}
+out:
+	z3fold_page_unlock(zhdr);
+	return false;
+}
+
+int z3fold_page_migrate(struct address_space *mapping, struct page *newpage,
+			struct page *page, enum migrate_mode mode)
+{
+	struct z3fold_header *zhdr, *new_zhdr;
+	struct z3fold_pool *pool;
+	struct address_space *new_mapping;
+
+	VM_BUG_ON_PAGE(!PageMovable(page), page);
+	VM_BUG_ON_PAGE(!PageIsolated(page), page);
+
+	zhdr = page_address(page);
+	pool = zhdr_to_pool(zhdr);
+
+	if (!trylock_page(page))
+		return -EAGAIN;
+
+	if (!z3fold_page_trylock(zhdr)) {
+		unlock_page(page);
+		return -EAGAIN;
+	}
+	if (zhdr->mapped_count != 0) {
+		z3fold_page_unlock(zhdr);
+		unlock_page(page);
+		return -EBUSY;
+	}
+	new_zhdr = page_address(newpage);
+	memcpy(new_zhdr, zhdr, PAGE_SIZE);
+	newpage->private = page->private;
+	page->private = 0;
+	z3fold_page_unlock(zhdr);
+	spin_lock_init(&new_zhdr->page_lock);
+	new_mapping = page_mapping(page);
+	__ClearPageMovable(page);
+	ClearPagePrivate(page);
+
+	get_page(newpage);
+	z3fold_page_lock(new_zhdr);
+	if (new_zhdr->first_chunks)
+		encode_handle(new_zhdr, FIRST);
+	if (new_zhdr->last_chunks)
+		encode_handle(new_zhdr, LAST);
+	if (new_zhdr->middle_chunks)
+		encode_handle(new_zhdr, MIDDLE);
+	set_bit(NEEDS_COMPACTING, &newpage->private);
+	new_zhdr->cpu = smp_processor_id();
+	spin_lock(&pool->lock);
+	list_add(&newpage->lru, &pool->lru);
+	spin_unlock(&pool->lock);
+	__SetPageMovable(newpage, new_mapping);
+	z3fold_page_unlock(new_zhdr);
+
+	queue_work_on(new_zhdr->cpu, pool->compact_wq, &new_zhdr->work);
+
+	page_mapcount_reset(page);
+	unlock_page(page);
+	put_page(page);
+	return 0;
+}
+
+void z3fold_page_putback(struct page *page)
+{
+	struct z3fold_header *zhdr;
+	struct z3fold_pool *pool;
+
+	zhdr = page_address(page);
+	pool = zhdr_to_pool(zhdr);
+
+	z3fold_page_lock(zhdr);
+	if (!list_empty(&zhdr->buddy))
+		list_del_init(&zhdr->buddy);
+	INIT_LIST_HEAD(&page->lru);
+	if (kref_put(&zhdr->refcount, release_z3fold_page_locked)) {
+		atomic64_dec(&pool->pages_nr);
+		return;
+	}
+	spin_lock(&pool->lock);
+	list_add(&page->lru, &pool->lru);
+	spin_unlock(&pool->lock);
+	z3fold_page_unlock(zhdr);
+}
+
+static const struct address_space_operations z3fold_aops = {
+	.isolate_page = z3fold_page_isolate,
+	.migratepage = z3fold_page_migrate,
+	.putback_page = z3fold_page_putback,
+};
+
  /*****************
   * zpool
   ****************/
@@ -1290,8 +1504,14 @@ MODULE_ALIAS("zpool-z3fold");
  
  static int __init init_z3fold(void)
  {
+	int ret;
+
  	/* Make sure the z3fold header is not larger than the page size */
  	BUILD_BUG_ON(ZHDR_SIZE_ALIGNED > PAGE_SIZE);
+	ret = z3fold_mount();
+	if (ret)
+		return ret;
+
  	zpool_register_driver(&z3fold_zpool_driver);
  
  	return 0;
@@ -1299,6 +1519,7 @@ static int __init init_z3fold(void)
  
  static void __exit exit_z3fold(void)
  {
+	z3fold_unmount();
  	zpool_unregister_driver(&z3fold_zpool_driver);
  }
  
-- 
2.17.1



* Re: [PATCH 0/4] z3fold: support page migration
  2019-04-11 15:32 [PATCH 0/4] z3fold: support page migration Vitaly Wool
                   ` (3 preceding siblings ...)
  2019-04-11 15:38 ` [PATCH 4/4] z3fold: support page migration Vitaly Wool
@ 2019-04-16 23:18 ` Andrew Morton
  2019-04-17  7:41   ` Vitaly Wool
  4 siblings, 1 reply; 7+ messages in thread
From: Andrew Morton @ 2019-04-16 23:18 UTC (permalink / raw)
  To: Vitaly Wool; +Cc: Linux-MM, linux-kernel, Oleksiy.Avramchenko, Dan Streetman

On Thu, 11 Apr 2019 17:32:12 +0200 Vitaly Wool <vitalywool@gmail.com> wrote:

> This patchset implements page migration support and slightly better
> buddy search. To implement page migration support, z3fold has to move
> away from the current scheme of handle encoding. i. e. stop encoding
> page address in handles. Instead, a small per-page structure is created
> which will contain actual addresses for z3fold objects, while pointers
> to fields of that structure will be used as handles.

Can you please help find a reviewer for this work?

For some reason I'm seeing a massive number of rejects when trying to
apply these.  It looks like your mail client performed some sort of
selective space-stuffing.  I suggest you email a patch to yourself,
check that the result applies properly.





* Re: [PATCH 0/4] z3fold: support page migration
  2019-04-16 23:18 ` [PATCH 0/4] " Andrew Morton
@ 2019-04-17  7:41   ` Vitaly Wool
  0 siblings, 0 replies; 7+ messages in thread
From: Vitaly Wool @ 2019-04-17  7:41 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Linux-MM, LKML, Oleksiy.Avramchenko, Dan Streetman

Den ons 17 apr. 2019 kl 01:18 skrev Andrew Morton <akpm@linux-foundation.org>:
>
> On Thu, 11 Apr 2019 17:32:12 +0200 Vitaly Wool <vitalywool@gmail.com> wrote:
>
> > This patchset implements page migration support and slightly better
> > buddy search. To implement page migration support, z3fold has to move
> > away from the current scheme of handle encoding. i. e. stop encoding
> > page address in handles. Instead, a small per-page structure is created
> > which will contain actual addresses for z3fold objects, while pointers
> > to fields of that structure will be used as handles.
>
> Can you please help find a reviewer for this work?
>
> For some reason I'm seeing a massive number of rejects when trying to
> apply these.  It looks like your mail client performed some sort of
> selective space-stuffing.  I suggest you email a patch to yourself,
> check that the result applies properly.


Sorry about that. You can never be sure when you work with
Thunderbird. I checked that the tabs were not converted to spaces, but
Thunderbird managed to add an extra space at the beginning of each
unchanged line of the patch.

I'll just do a v2 patchset today.

