From: Vlastimil Babka <vbabka@suse.cz>
To: Minchan Kim <minchan@kernel.org>, Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Subject: Re: [PATCH v7 11/12] zsmalloc: page migration support
Date: Wed, 1 Jun 2016 16:09:26 +0200
Message-ID: <574EEC96.8050805@suse.cz>
In-Reply-To: <1464736881-24886-12-git-send-email-minchan@kernel.org>

On 06/01/2016 01:21 AM, Minchan Kim wrote:
[...]
>
> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>

I'm not that familiar with zsmalloc, so this is not a full review. I was
just curious how it handles the movable migration API, and stumbled upon
some things pointed out below.

> @@ -252,16 +276,23 @@ struct zs_pool {
>   */
>  #define FULLNESS_BITS	2
>  #define CLASS_BITS	8
> +#define ISOLATED_BITS	3
> +#define MAGIC_VAL_BITS	8
>
>  struct zspage {
>  	struct {
>  		unsigned int fullness:FULLNESS_BITS;
>  		unsigned int class:CLASS_BITS;
> +		unsigned int isolated:ISOLATED_BITS;
> +		unsigned int magic:MAGIC_VAL_BITS;

This magic seems to be tested only via VM_BUG_ON, so its presence should
also be guarded by #ifdef CONFIG_DEBUG_VM, no?

> @@ -999,6 +1141,8 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
>  		return NULL;
>
>  	memset(zspage, 0, sizeof(struct zspage));
> +	zspage->magic = ZSPAGE_MAGIC;

Same here.
> +int zs_page_migrate(struct address_space *mapping, struct page *newpage,
> +		struct page *page, enum migrate_mode mode)
> +{
> +	struct zs_pool *pool;
> +	struct size_class *class;
> +	int class_idx;
> +	enum fullness_group fullness;
> +	struct zspage *zspage;
> +	struct page *dummy;
> +	void *s_addr, *d_addr, *addr;
> +	int offset, pos;
> +	unsigned long handle, head;
> +	unsigned long old_obj, new_obj;
> +	unsigned int obj_idx;
> +	int ret = -EAGAIN;
> +
> +	VM_BUG_ON_PAGE(!PageMovable(page), page);
> +	VM_BUG_ON_PAGE(!PageIsolated(page), page);
> +
> +	zspage = get_zspage(page);
> +
> +	/* Concurrent compactor cannot migrate any subpage in zspage */
> +	migrate_write_lock(zspage);
> +	get_zspage_mapping(zspage, &class_idx, &fullness);
> +	pool = mapping->private_data;
> +	class = pool->size_class[class_idx];
> +	offset = get_first_obj_offset(class, get_first_page(zspage), page);
> +
> +	spin_lock(&class->lock);
> +	if (!get_zspage_inuse(zspage)) {
> +		ret = -EBUSY;
> +		goto unlock_class;
> +	}
> +
> +	pos = offset;
> +	s_addr = kmap_atomic(page);
> +	while (pos < PAGE_SIZE) {
> +		head = obj_to_head(page, s_addr + pos);
> +		if (head & OBJ_ALLOCATED_TAG) {
> +			handle = head & ~OBJ_ALLOCATED_TAG;
> +			if (!trypin_tag(handle))
> +				goto unpin_objects;
> +		}
> +		pos += class->size;
> +	}
> +
> +	/*
> +	 * Here, any user cannot access all objects in the zspage so let's move.
> +	 */
> +	d_addr = kmap_atomic(newpage);
> +	memcpy(d_addr, s_addr, PAGE_SIZE);
> +	kunmap_atomic(d_addr);
> +
> +	for (addr = s_addr + offset; addr < s_addr + pos;
> +					addr += class->size) {
> +		head = obj_to_head(page, addr);
> +		if (head & OBJ_ALLOCATED_TAG) {
> +			handle = head & ~OBJ_ALLOCATED_TAG;
> +			if (!testpin_tag(handle))
> +				BUG();
> +
> +			old_obj = handle_to_obj(handle);
> +			obj_to_location(old_obj, &dummy, &obj_idx);
> +			new_obj = (unsigned long)location_to_obj(newpage,
> +								obj_idx);
> +			new_obj |= BIT(HANDLE_PIN_BIT);
> +			record_obj(handle, new_obj);
> +		}
> +	}
> +
> +	replace_sub_page(class, zspage, newpage, page);
> +	get_page(newpage);
> +
> +	dec_zspage_isolation(zspage);
> +
> +	/*
> +	 * Page migration is done so let's putback isolated zspage to
> +	 * the list if @page is final isolated subpage in the zspage.
> +	 */
> +	if (!is_zspage_isolated(zspage))
> +		putback_zspage(class, zspage);
> +
> +	reset_page(page);
> +	put_page(page);
> +	page = newpage;
> +
> +	ret = 0;
> +unpin_objects:
> +	for (addr = s_addr + offset; addr < s_addr + pos;
> +					addr += class->size) {
> +		head = obj_to_head(page, addr);
> +		if (head & OBJ_ALLOCATED_TAG) {
> +			handle = head & ~OBJ_ALLOCATED_TAG;
> +			if (!testpin_tag(handle))
> +				BUG();
> +			unpin_tag(handle);
> +		}
> +	}
> +	kunmap_atomic(s_addr);

The above seems suspicious to me. In the success case, page points to
newpage, but s_addr is still the original one?

Vlastimil