* [PATCH 0/9] A few fixup patches for z3fold
@ 2022-04-29  6:40 Miaohe Lin
  2022-04-29  6:40 ` [PATCH 1/9] mm/z3fold: fix scheduling while atomic Miaohe Lin
                   ` (9 more replies)
  0 siblings, 10 replies; 27+ messages in thread
From: Miaohe Lin @ 2022-04-29  6:40 UTC (permalink / raw)
  To: akpm, vitaly.wool; +Cc: linux-mm, linux-kernel, linmiaohe

Hi everyone,
This series contains a few fixup patches to fix scheduling while atomic,
a possible NULL pointer dereference, various race conditions, and so on.
More details can be found in the respective changelogs. Thanks!

Miaohe Lin (9):
  mm/z3fold: fix scheduling while atomic
  mm/z3fold: fix possible null pointer dereferencing
  mm/z3fold: remove buggy use of stale list for allocation
  mm/z3fold: throw warning on failure of trylock_page in z3fold_alloc
  revert "mm/z3fold.c: allow __GFP_HIGHMEM in z3fold_alloc"
  mm/z3fold: put z3fold page back into unbuddied list when reclaim or
    migration fails
  mm/z3fold: always clear PAGE_CLAIMED under z3fold page lock
  mm/z3fold: fix z3fold_reclaim_page races with z3fold_free
  mm/z3fold: fix z3fold_page_migrate races with z3fold_map

 mm/z3fold.c | 97 ++++++++++++++++++++++-------------------------------
 1 file changed, 41 insertions(+), 56 deletions(-)

-- 
2.23.0



* [PATCH 1/9] mm/z3fold: fix scheduling while atomic
  2022-04-29  6:40 [PATCH 0/9] A few fixup patches for z3fold Miaohe Lin
@ 2022-04-29  6:40 ` Miaohe Lin
  2022-05-19  7:00   ` Vitaly Wool
  2022-04-29  6:40 ` [PATCH 2/9] mm/z3fold: fix possible null pointer dereferencing Miaohe Lin
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 27+ messages in thread
From: Miaohe Lin @ 2022-04-29  6:40 UTC (permalink / raw)
  To: akpm, vitaly.wool; +Cc: linux-mm, linux-kernel, linmiaohe

z3fold's page_lock is always held when calling alloc_slots, so gfp should
be GFP_ATOMIC to avoid a "scheduling while atomic" bug.
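
A minimal sketch of the hazardous context, assuming only what the diff
below shows (z3fold_page_lock wraps a spinlock in mm/z3fold.c, and
sketch_caller is a hypothetical illustration, not real kernel code):

static void sketch_caller(struct z3fold_pool *pool,
                          struct z3fold_header *zhdr)
{
        z3fold_page_lock(zhdr);         /* takes a spinlock */
        /*
         * GFP_NOIO may enter direct reclaim and sleep; sleeping with a
         * spinlock held is a "scheduling while atomic" bug.  GFP_ATOMIC
         * never sleeps, so it is the only safe choice in this context.
         */
        zhdr->slots = alloc_slots(pool, GFP_ATOMIC);
        z3fold_page_unlock(zhdr);
}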

Fixes: fc5488651c7d ("z3fold: simplify freeing slots")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/z3fold.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 83b5a3514427..c2260f5a5885 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -941,8 +941,7 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 	}
 
 	if (zhdr && !zhdr->slots)
-		zhdr->slots = alloc_slots(pool,
-					can_sleep ? GFP_NOIO : GFP_ATOMIC);
+		zhdr->slots = alloc_slots(pool, GFP_ATOMIC);
 	return zhdr;
 }
 
-- 
2.23.0



* [PATCH 2/9] mm/z3fold: fix possible null pointer dereferencing
  2022-04-29  6:40 [PATCH 0/9] A few fixup patches for z3fold Miaohe Lin
  2022-04-29  6:40 ` [PATCH 1/9] mm/z3fold: fix scheduling while atomic Miaohe Lin
@ 2022-04-29  6:40 ` Miaohe Lin
  2022-05-19  7:04   ` Vitaly Wool
  2022-04-29  6:40 ` [PATCH 3/9] mm/z3fold: remove buggy use of stale list for allocation Miaohe Lin
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 27+ messages in thread
From: Miaohe Lin @ 2022-04-29  6:40 UTC (permalink / raw)
  To: akpm, vitaly.wool; +Cc: linux-mm, linux-kernel, linmiaohe

alloc_slots can fail to allocate memory under heavy memory pressure, so
we should check zhdr->slots against NULL to avoid a null pointer
dereference later on.
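
Without the check, the next handle encoding would dereference the NULL
pointer. A simplified model of that access (hypothetical names, not the
exact mm/z3fold.c code):

static unsigned long sketch_encode_handle(struct z3fold_header *zhdr,
                                          enum buddy bud)
{
        struct z3fold_buddy_slots *slots = zhdr->slots;

        /* slots is NULL if alloc_slots() failed under memory pressure */
        slots->slot[bud] = (unsigned long)zhdr | bud;   /* NULL deref */
        return (unsigned long)&slots->slot[bud];
}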

Fixes: fc5488651c7d ("z3fold: simplify freeing slots")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/z3fold.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index c2260f5a5885..5d8c21f2bc59 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -940,9 +940,19 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
 		}
 	}
 
-	if (zhdr && !zhdr->slots)
+	if (zhdr && !zhdr->slots) {
 		zhdr->slots = alloc_slots(pool, GFP_ATOMIC);
+		if (!zhdr->slots)
+			goto out_fail;
+	}
 	return zhdr;
+
+out_fail:
+	if (!kref_put(&zhdr->refcount, release_z3fold_page_locked)) {
+		add_to_unbuddied(pool, zhdr);
+		z3fold_page_unlock(zhdr);
+	}
+	return NULL;
 }
 
 /*
-- 
2.23.0



* [PATCH 3/9] mm/z3fold: remove buggy use of stale list for allocation
  2022-04-29  6:40 [PATCH 0/9] A few fixup patches for z3fold Miaohe Lin
  2022-04-29  6:40 ` [PATCH 1/9] mm/z3fold: fix scheduling while atomic Miaohe Lin
  2022-04-29  6:40 ` [PATCH 2/9] mm/z3fold: fix possible null pointer dereferencing Miaohe Lin
@ 2022-04-29  6:40 ` Miaohe Lin
  2022-05-19  7:06   ` Vitaly Wool
  2022-04-29  6:40 ` [PATCH 4/9] mm/z3fold: throw warning on failure of trylock_page in z3fold_alloc Miaohe Lin
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 27+ messages in thread
From: Miaohe Lin @ 2022-04-29  6:40 UTC (permalink / raw)
  To: akpm, vitaly.wool; +Cc: linux-mm, linux-kernel, linmiaohe

Currently, if z3fold cannot find an unbuddied page it first tries to pull
a page off the stale list. But this approach is problematic: if
init_z3fold_page fails later, the page should be freed via
free_z3fold_page to clean up the relevant resources, instead of using
__free_page directly. And if the page is successfully reused, it will
BUG_ON later in __SetPageMovable because it is already a non-LRU movable
page, i.e. PAGE_MAPPING_MOVABLE is already set in page->mapping. To fix
all of these issues, simply remove the buggy use of the stale list for
allocation: can_sleep should always be false here, so we never really hit
the reuse code path anyway.
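
For reference, free_z3fold_page is the cleanup path that the stale-list
reuse bypassed (reconstructed from the pre-patch mm/z3fold.c quoted later
in this thread):

/* Resets the struct page fields and frees the page */
static void free_z3fold_page(struct page *page, bool headless)
{
        if (!headless) {
                lock_page(page);
                __ClearPageMovable(page);       /* skipped by bare __free_page */
                unlock_page(page);
        }
        __free_page(page);
}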

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/z3fold.c | 23 +----------------------
 1 file changed, 1 insertion(+), 22 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 5d8c21f2bc59..4e6814c5694f 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -1102,28 +1102,7 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
 		bud = FIRST;
 	}
 
-	page = NULL;
-	if (can_sleep) {
-		spin_lock(&pool->stale_lock);
-		zhdr = list_first_entry_or_null(&pool->stale,
-						struct z3fold_header, buddy);
-		/*
-		 * Before allocating a page, let's see if we can take one from
-		 * the stale pages list. cancel_work_sync() can sleep so we
-		 * limit this case to the contexts where we can sleep
-		 */
-		if (zhdr) {
-			list_del(&zhdr->buddy);
-			spin_unlock(&pool->stale_lock);
-			cancel_work_sync(&zhdr->work);
-			page = virt_to_page(zhdr);
-		} else {
-			spin_unlock(&pool->stale_lock);
-		}
-	}
-	if (!page)
-		page = alloc_page(gfp);
-
+	page = alloc_page(gfp);
 	if (!page)
 		return -ENOMEM;
 
-- 
2.23.0



* [PATCH 4/9] mm/z3fold: throw warning on failure of trylock_page in z3fold_alloc
  2022-04-29  6:40 [PATCH 0/9] A few fixup patches for z3fold Miaohe Lin
                   ` (2 preceding siblings ...)
  2022-04-29  6:40 ` [PATCH 3/9] mm/z3fold: remove buggy use of stale list for allocation Miaohe Lin
@ 2022-04-29  6:40 ` Miaohe Lin
  2022-05-19  7:10   ` Vitaly Wool
  2022-04-29  6:40 ` [PATCH 5/9] revert "mm/z3fold.c: allow __GFP_HIGHMEM in z3fold_alloc" Miaohe Lin
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 27+ messages in thread
From: Miaohe Lin @ 2022-04-29  6:40 UTC (permalink / raw)
  To: akpm, vitaly.wool; +Cc: linux-mm, linux-kernel, linmiaohe

If trylock_page fails, the page won't be marked as a non-LRU movable
page. When this page is later freed via free_z3fold_page, it will trigger
a BUG on the PageMovable check in __ClearPageMovable. Throw a warning on
failure of trylock_page to guard against such a rare case, just as
zsmalloc does.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/z3fold.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 4e6814c5694f..b3b4e65c107f 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -1122,10 +1122,9 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
 		__SetPageMovable(page, pool->inode->i_mapping);
 		unlock_page(page);
 	} else {
-		if (trylock_page(page)) {
-			__SetPageMovable(page, pool->inode->i_mapping);
-			unlock_page(page);
-		}
+		WARN_ON(!trylock_page(page));
+		__SetPageMovable(page, pool->inode->i_mapping);
+		unlock_page(page);
 	}
 	z3fold_page_lock(zhdr);
 
-- 
2.23.0



* [PATCH 5/9] revert "mm/z3fold.c: allow __GFP_HIGHMEM in z3fold_alloc"
  2022-04-29  6:40 [PATCH 0/9] A few fixup patches for z3fold Miaohe Lin
                   ` (3 preceding siblings ...)
  2022-04-29  6:40 ` [PATCH 4/9] mm/z3fold: throw warning on failure of trylock_page in z3fold_alloc Miaohe Lin
@ 2022-04-29  6:40 ` Miaohe Lin
  2022-05-19  7:12   ` Vitaly Wool
  2022-04-29  6:40 ` [PATCH 6/9] mm/z3fold: put z3fold page back into unbuddied list when reclaim or migration fails Miaohe Lin
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 27+ messages in thread
From: Miaohe Lin @ 2022-04-29  6:40 UTC (permalink / raw)
  To: akpm, vitaly.wool; +Cc: linux-mm, linux-kernel, linmiaohe

Revert commit f1549cb5ab2b ("mm/z3fold.c: allow __GFP_HIGHMEM in
z3fold_alloc").

z3fold can't support GFP_HIGHMEM pages now: page_address is used
directly in all places, and the z3fold_header sits on a per-CPU
unbuddied list that can be accessed at any time. So we should drop
support for __GFP_HIGHMEM allocations in z3fold.
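
A hedged sketch of the underlying constraint, using standard kernel APIs
(the GFP flags are illustrative):

        struct page *page = alloc_page(GFP_HIGHUSER);   /* may be highmem */
        /*
         * With CONFIG_HIGHMEM, page_address() is only valid for lowmem
         * pages or pages currently mapped via kmap(); for an unmapped
         * highmem page it returns NULL, yet z3fold dereferences the
         * result unconditionally:
         */
        struct z3fold_header *zhdr = page_address(page); /* NULL if highmem */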

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/z3fold.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index b3b4e65c107f..5f5d5f1556be 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -212,10 +212,8 @@ static int size_to_chunks(size_t size)
 static inline struct z3fold_buddy_slots *alloc_slots(struct z3fold_pool *pool,
 							gfp_t gfp)
 {
-	struct z3fold_buddy_slots *slots;
-
-	slots = kmem_cache_zalloc(pool->c_handle,
-				 (gfp & ~(__GFP_HIGHMEM | __GFP_MOVABLE)));
+	struct z3fold_buddy_slots *slots = kmem_cache_zalloc(pool->c_handle,
+							     gfp);
 
 	if (slots) {
 		/* It will be freed separately in free_handle(). */
@@ -1075,7 +1073,7 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
 	enum buddy bud;
 	bool can_sleep = gfpflags_allow_blocking(gfp);
 
-	if (!size)
+	if (!size || (gfp & __GFP_HIGHMEM))
 		return -EINVAL;
 
 	if (size > PAGE_SIZE)
-- 
2.23.0



* [PATCH 6/9] mm/z3fold: put z3fold page back into unbuddied list when reclaim or migration fails
  2022-04-29  6:40 [PATCH 0/9] A few fixup patches for z3fold Miaohe Lin
                   ` (4 preceding siblings ...)
  2022-04-29  6:40 ` [PATCH 5/9] revert "mm/z3fold.c: allow __GFP_HIGHMEM in z3fold_alloc" Miaohe Lin
@ 2022-04-29  6:40 ` Miaohe Lin
  2022-05-19  7:13   ` Vitaly Wool
  2022-04-29  6:40 ` [PATCH 7/9] mm/z3fold: always clear PAGE_CLAIMED under z3fold page lock Miaohe Lin
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 27+ messages in thread
From: Miaohe Lin @ 2022-04-29  6:40 UTC (permalink / raw)
  To: akpm, vitaly.wool; +Cc: linux-mm, linux-kernel, linmiaohe

When doing z3fold page reclaim or migration, the page is removed from the
unbuddied list. If reclaim or migration succeeds, that's fine, as the page
is released. But if it fails, the page is not put back onto the unbuddied
list, so it is effectively leaked until the next compaction work, reclaim,
or migration is done.

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/z3fold.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 5f5d5f1556be..a1c150fc8def 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -1422,6 +1422,8 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
 			spin_lock(&pool->lock);
 			list_add(&page->lru, &pool->lru);
 			spin_unlock(&pool->lock);
+			if (list_empty(&zhdr->buddy))
+				add_to_unbuddied(pool, zhdr);
 			z3fold_page_unlock(zhdr);
 			clear_bit(PAGE_CLAIMED, &page->private);
 		}
@@ -1638,6 +1640,8 @@ static void z3fold_page_putback(struct page *page)
 	spin_lock(&pool->lock);
 	list_add(&page->lru, &pool->lru);
 	spin_unlock(&pool->lock);
+	if (list_empty(&zhdr->buddy))
+		add_to_unbuddied(pool, zhdr);
 	clear_bit(PAGE_CLAIMED, &page->private);
 	z3fold_page_unlock(zhdr);
 }
-- 
2.23.0



* [PATCH 7/9] mm/z3fold: always clear PAGE_CLAIMED under z3fold page lock
  2022-04-29  6:40 [PATCH 0/9] A few fixup patches for z3fold Miaohe Lin
                   ` (5 preceding siblings ...)
  2022-04-29  6:40 ` [PATCH 6/9] mm/z3fold: put z3fold page back into unbuddied list when reclaim or migration fails Miaohe Lin
@ 2022-04-29  6:40 ` Miaohe Lin
  2022-05-19  7:14   ` Vitaly Wool
  2022-04-29  6:40 ` [PATCH 8/9] mm/z3fold: fix z3fold_reclaim_page races with z3fold_free Miaohe Lin
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 27+ messages in thread
From: Miaohe Lin @ 2022-04-29  6:40 UTC (permalink / raw)
  To: akpm, vitaly.wool; +Cc: linux-mm, linux-kernel, linmiaohe

Think about the below race window:

CPU1				CPU2
z3fold_reclaim_page		z3fold_free
 test_and_set_bit PAGE_CLAIMED
 failed to reclaim page
 z3fold_page_lock(zhdr);
 add back to the lru list;
 z3fold_page_unlock(zhdr);
				 get_z3fold_header
				 page_claimed=test_and_set_bit PAGE_CLAIMED

 clear_bit(PAGE_CLAIMED, &page->private);

				 if (!page_claimed) /* spuriously true */
				  free_handle is not called

free_handle won't be called in this case, so the z3fold_buddy_slots will
leak. Fix it by always clearing PAGE_CLAIMED under the z3fold page lock.
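
In short, the invariant the diff below enforces everywhere is that the
bit is cleared before the lock is dropped:

        z3fold_page_lock(zhdr);
        /* ... decide the page can't be freed/reclaimed/migrated ... */
        clear_bit(PAGE_CLAIMED, &page->private);  /* still under page lock */
        z3fold_page_unlock(zhdr);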

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/z3fold.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index a1c150fc8def..4a3cd2ff15b0 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -1221,8 +1221,8 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
 		return;
 	}
 	if (test_and_set_bit(NEEDS_COMPACTING, &page->private)) {
-		put_z3fold_header(zhdr);
 		clear_bit(PAGE_CLAIMED, &page->private);
+		put_z3fold_header(zhdr);
 		return;
 	}
 	if (zhdr->cpu < 0 || !cpu_online(zhdr->cpu)) {
@@ -1424,8 +1424,8 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
 			spin_unlock(&pool->lock);
 			if (list_empty(&zhdr->buddy))
 				add_to_unbuddied(pool, zhdr);
-			z3fold_page_unlock(zhdr);
 			clear_bit(PAGE_CLAIMED, &page->private);
+			z3fold_page_unlock(zhdr);
 		}
 
 		/* We started off locked to we need to lock the pool back */
@@ -1577,8 +1577,8 @@ static int z3fold_page_migrate(struct address_space *mapping, struct page *newpa
 	if (!z3fold_page_trylock(zhdr))
 		return -EAGAIN;
 	if (zhdr->mapped_count != 0 || zhdr->foreign_handles != 0) {
-		z3fold_page_unlock(zhdr);
 		clear_bit(PAGE_CLAIMED, &page->private);
+		z3fold_page_unlock(zhdr);
 		return -EBUSY;
 	}
 	if (work_pending(&zhdr->work)) {
-- 
2.23.0



* [PATCH 8/9] mm/z3fold: fix z3fold_reclaim_page races with z3fold_free
  2022-04-29  6:40 [PATCH 0/9] A few fixup patches for z3fold Miaohe Lin
                   ` (6 preceding siblings ...)
  2022-04-29  6:40 ` [PATCH 7/9] mm/z3fold: always clear PAGE_CLAIMED under z3fold page lock Miaohe Lin
@ 2022-04-29  6:40 ` Miaohe Lin
  2022-05-19  7:24   ` Vitaly Wool
  2022-04-29  6:40 ` [PATCH 9/9] mm/z3fold: fix z3fold_page_migrate races with z3fold_map Miaohe Lin
  2022-05-17 23:45 ` [PATCH 0/9] A few fixup patches for z3fold Andrew Morton
  9 siblings, 1 reply; 27+ messages in thread
From: Miaohe Lin @ 2022-04-29  6:40 UTC (permalink / raw)
  To: akpm, vitaly.wool; +Cc: linux-mm, linux-kernel, linmiaohe

Think about the below scene:

CPU1				CPU2
z3fold_reclaim_page		z3fold_free
 spin_lock(&pool->lock)		 get_z3fold_header -- hold page_lock
 kref_get_unless_zero
				 kref_put--zhdr->refcount can be 1 now
 !z3fold_page_trylock
  kref_put -- zhdr->refcount is 0 now
   release_z3fold_page
    WARN_ON(!list_empty(&zhdr->buddy)); -- we're on buddy now!
    spin_lock(&pool->lock); -- deadlock here!

z3fold_reclaim_page can race with z3fold_free, leading to a pool lock
deadlock and a zhdr buddy non-empty warning. To fix this, defer getting
the refcount until page_lock is held, just as __z3fold_alloc does. Note
this has the side effect that we won't break out of the reclaim loop when
we meet a soon-to-be-released z3fold page now.
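
The resulting pattern, abridged from the diff below, takes the reference
only once the page lock is held and the page is claimed:

        if (!z3fold_page_trylock(zhdr)) {
                zhdr = NULL;
                continue;       /* can't evict at this point */
        }
        /* ... claim the page via PAGE_CLAIMED under the lock ... */
        list_del_init(&zhdr->buddy);
        zhdr->cpu = -1;
        /* See comment in __z3fold_alloc. */
        kref_get(&zhdr->refcount);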

Fixes: dcf5aedb24f8 ("z3fold: stricter locking and more careful reclaim")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/z3fold.c | 18 +++---------------
 1 file changed, 3 insertions(+), 15 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index 4a3cd2ff15b0..a7769befd74e 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -519,13 +519,6 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
 	atomic64_dec(&pool->pages_nr);
 }
 
-static void release_z3fold_page(struct kref *ref)
-{
-	struct z3fold_header *zhdr = container_of(ref, struct z3fold_header,
-						refcount);
-	__release_z3fold_page(zhdr, false);
-}
-
 static void release_z3fold_page_locked(struct kref *ref)
 {
 	struct z3fold_header *zhdr = container_of(ref, struct z3fold_header,
@@ -1317,12 +1310,7 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
 				break;
 			}
 
-			if (kref_get_unless_zero(&zhdr->refcount) == 0) {
-				zhdr = NULL;
-				break;
-			}
 			if (!z3fold_page_trylock(zhdr)) {
-				kref_put(&zhdr->refcount, release_z3fold_page);
 				zhdr = NULL;
 				continue; /* can't evict at this point */
 			}
@@ -1333,14 +1321,14 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
 			 */
 			if (zhdr->foreign_handles ||
 			    test_and_set_bit(PAGE_CLAIMED, &page->private)) {
-				if (!kref_put(&zhdr->refcount,
-						release_z3fold_page_locked))
-					z3fold_page_unlock(zhdr);
+				z3fold_page_unlock(zhdr);
 				zhdr = NULL;
 				continue; /* can't evict such page */
 			}
 			list_del_init(&zhdr->buddy);
 			zhdr->cpu = -1;
+			/* See comment in __z3fold_alloc. */
+			kref_get(&zhdr->refcount);
 			break;
 		}
 
-- 
2.23.0



* [PATCH 9/9] mm/z3fold: fix z3fold_page_migrate races with z3fold_map
  2022-04-29  6:40 [PATCH 0/9] A few fixup patches for z3fold Miaohe Lin
                   ` (7 preceding siblings ...)
  2022-04-29  6:40 ` [PATCH 8/9] mm/z3fold: fix z3fold_reclaim_page races with z3fold_free Miaohe Lin
@ 2022-04-29  6:40 ` Miaohe Lin
  2022-05-19  7:28   ` Vitaly Wool
  2022-05-17 23:45 ` [PATCH 0/9] A few fixup patches for z3fold Andrew Morton
  9 siblings, 1 reply; 27+ messages in thread
From: Miaohe Lin @ 2022-04-29  6:40 UTC (permalink / raw)
  To: akpm, vitaly.wool; +Cc: linux-mm, linux-kernel, linmiaohe

Think about the below scene:
CPU1				CPU2
 z3fold_page_migrate		z3fold_map
  z3fold_page_trylock
  ...
  z3fold_page_unlock
  /* slots still points to old zhdr*/
				 get_z3fold_header
				  get slots from handle
				  get old zhdr from slots
				  z3fold_page_trylock
				  return *old* zhdr
  encode_handle(new_zhdr, FIRST|LAST|MIDDLE)
  put_page(page) /* zhdr is freed! */
				 but zhdr is still used by caller!

z3fold_map can thus map a freed z3fold page, leading to a use-after-free
bug. To fix it, add PAGE_MIGRATED to indicate that the z3fold page is
migrated and soon to be released, so that get_z3fold_header won't return
such a page.

Fixes: 1f862989b04a ("mm/z3fold.c: support page migration")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/z3fold.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index a7769befd74e..f41f8b0d9e9a 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -181,6 +181,7 @@ enum z3fold_page_flags {
 	NEEDS_COMPACTING,
 	PAGE_STALE,
 	PAGE_CLAIMED, /* by either reclaim or free */
+	PAGE_MIGRATED, /* page is migrated and soon to be released */
 };
 
 /*
@@ -270,8 +271,13 @@ static inline struct z3fold_header *get_z3fold_header(unsigned long handle)
 			zhdr = (struct z3fold_header *)(addr & PAGE_MASK);
 			locked = z3fold_page_trylock(zhdr);
 			read_unlock(&slots->lock);
-			if (locked)
-				break;
+			if (locked) {
+				struct page *page = virt_to_page(zhdr);
+
+				if (!test_bit(PAGE_MIGRATED, &page->private))
+					break;
+				z3fold_page_unlock(zhdr);
+			}
 			cpu_relax();
 		} while (true);
 	} else {
@@ -389,6 +395,7 @@ static struct z3fold_header *init_z3fold_page(struct page *page, bool headless,
 	clear_bit(NEEDS_COMPACTING, &page->private);
 	clear_bit(PAGE_STALE, &page->private);
 	clear_bit(PAGE_CLAIMED, &page->private);
+	clear_bit(PAGE_MIGRATED, &page->private);
 	if (headless)
 		return zhdr;
 
@@ -1576,7 +1583,7 @@ static int z3fold_page_migrate(struct address_space *mapping, struct page *newpa
 	new_zhdr = page_address(newpage);
 	memcpy(new_zhdr, zhdr, PAGE_SIZE);
 	newpage->private = page->private;
-	page->private = 0;
+	set_bit(PAGE_MIGRATED, &page->private);
 	z3fold_page_unlock(zhdr);
 	spin_lock_init(&new_zhdr->page_lock);
 	INIT_WORK(&new_zhdr->work, compact_page_work);
@@ -1606,7 +1613,8 @@ static int z3fold_page_migrate(struct address_space *mapping, struct page *newpa
 
 	queue_work_on(new_zhdr->cpu, pool->compact_wq, &new_zhdr->work);
 
-	clear_bit(PAGE_CLAIMED, &page->private);
+	/* PAGE_CLAIMED and PAGE_MIGRATED are cleared now. */
+	page->private = 0;
 	put_page(page);
 	return 0;
 }
-- 
2.23.0



* Re: [PATCH 0/9] A few fixup patches for z3fold
  2022-04-29  6:40 [PATCH 0/9] A few fixup patches for z3fold Miaohe Lin
                   ` (8 preceding siblings ...)
  2022-04-29  6:40 ` [PATCH 9/9] mm/z3fold: fix z3fold_page_migrate races with z3fold_map Miaohe Lin
@ 2022-05-17 23:45 ` Andrew Morton
  2022-05-18  2:01   ` Miaohe Lin
  9 siblings, 1 reply; 27+ messages in thread
From: Andrew Morton @ 2022-05-17 23:45 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: vitaly.wool, linux-mm, linux-kernel

On Fri, 29 Apr 2022 14:40:42 +0800 Miaohe Lin <linmiaohe@huawei.com> wrote:

> This series contains a few fixup patches to fix scheduling while atomic,
> a possible NULL pointer dereference, various race conditions, and so on.
> More details can be found in the respective changelogs.

We haven't heard from Vitaly but this series has been in mm-unstable
and linux-next for three weeks, so I plan to move it into mm-stable
later this week.


* Re: [PATCH 0/9] A few fixup patches for z3fold
  2022-05-17 23:45 ` [PATCH 0/9] A few fixup patches for z3fold Andrew Morton
@ 2022-05-18  2:01   ` Miaohe Lin
  2022-05-18 10:39     ` Vitaly Wool
  0 siblings, 1 reply; 27+ messages in thread
From: Miaohe Lin @ 2022-05-18  2:01 UTC (permalink / raw)
  To: Andrew Morton; +Cc: vitaly.wool, linux-mm, linux-kernel

On 2022/5/18 7:45, Andrew Morton wrote:
> On Fri, 29 Apr 2022 14:40:42 +0800 Miaohe Lin <linmiaohe@huawei.com> wrote:
> 
>> This series contains a few fixup patches to fix scheduling while atomic,
>> a possible NULL pointer dereference, various race conditions, and so on.
>> More details can be found in the respective changelogs.
> 
> We haven't heard from Vitaly but this series has been in mm-unstable

I will be really grateful if Vitaly has the time to review. :)

> and linux-next for three weeks, so I plan to move it into mm-stable
> later this week.

Thanks!

> .
> 



* Re: [PATCH 0/9] A few fixup patches for z3fold
  2022-05-18  2:01   ` Miaohe Lin
@ 2022-05-18 10:39     ` Vitaly Wool
  2022-05-19  1:54       ` Miaohe Lin
  0 siblings, 1 reply; 27+ messages in thread
From: Vitaly Wool @ 2022-05-18 10:39 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Andrew Morton, Linux-MM, LKML

On Wed, May 18, 2022 at 4:02 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> On 2022/5/18 7:45, Andrew Morton wrote:
> > On Fri, 29 Apr 2022 14:40:42 +0800 Miaohe Lin <linmiaohe@huawei.com> wrote:
> >
> >> This series contains a few fixup patches to fix scheduling while atomic,
> >> a possible NULL pointer dereference, various race conditions, and so on.
> >> More details can be found in the respective changelogs.
> >
> > We haven't heard from Vitaly but this series has been in mm-unstable
>
> I will be really grateful if Vitaly has the time to review. :)

I absolutely will, sorry for the delay.

~Vitaly

>
>
> > and linux-next for three weeks, so I plan to move it into mm-stable
> > later this week.
>
> Thanks!
>
> > .
> >
>


* Re: [PATCH 0/9] A few fixup patches for z3fold
  2022-05-18 10:39     ` Vitaly Wool
@ 2022-05-19  1:54       ` Miaohe Lin
  0 siblings, 0 replies; 27+ messages in thread
From: Miaohe Lin @ 2022-05-19  1:54 UTC (permalink / raw)
  To: Vitaly Wool; +Cc: Andrew Morton, Linux-MM, LKML

On 2022/5/18 18:39, Vitaly Wool wrote:
> On Wed, May 18, 2022 at 4:02 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>
>> On 2022/5/18 7:45, Andrew Morton wrote:
>>> On Fri, 29 Apr 2022 14:40:42 +0800 Miaohe Lin <linmiaohe@huawei.com> wrote:
>>>
>>>> This series contains a few fixup patches to fix scheduling while atomic,
>>>> a possible NULL pointer dereference, various race conditions, and so on.
>>>> More details can be found in the respective changelogs.
>>>
>>> We haven't heard from Vitaly but this series has been in mm-unstable
>>
>> I will be really grateful if Vitaly has the time to review. :)
> 
> I absolutely will, sorry for the delay.

That will be really helpful. Thanks!

> 
> ~Vitaly
> 
>>
>>
>>> and linux-next for three weeks, so I plan to move it into mm-stable
>>> later this week.
>>
>> Thanks!
>>
>>> .
>>>
>>
> .
> 



* Re: [PATCH 1/9] mm/z3fold: fix scheduling while atomic
  2022-04-29  6:40 ` [PATCH 1/9] mm/z3fold: fix scheduling while atomic Miaohe Lin
@ 2022-05-19  7:00   ` Vitaly Wool
  0 siblings, 0 replies; 27+ messages in thread
From: Vitaly Wool @ 2022-05-19  7:00 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Andrew Morton, Linux-MM, LKML

On Fri, Apr 29, 2022 at 8:40 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> z3fold's page_lock is always held when calling alloc_slots, so gfp should
> be GFP_ATOMIC to avoid a "scheduling while atomic" bug.
>
> Fixes: fc5488651c7d ("z3fold: simplify freeing slots")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/z3fold.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 83b5a3514427..c2260f5a5885 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -941,8 +941,7 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
>         }
>
>         if (zhdr && !zhdr->slots)
> -               zhdr->slots = alloc_slots(pool,
> -                                       can_sleep ? GFP_NOIO : GFP_ATOMIC);
> +               zhdr->slots = alloc_slots(pool, GFP_ATOMIC);
>         return zhdr;
>  }

Reviewed-by: Vitaly Wool <vitaly.wool@konsulko.com>
> --
> 2.23.0
>


* Re: [PATCH 2/9] mm/z3fold: fix possible null pointer dereferencing
  2022-04-29  6:40 ` [PATCH 2/9] mm/z3fold: fix possible null pointer dereferencing Miaohe Lin
@ 2022-05-19  7:04   ` Vitaly Wool
  0 siblings, 0 replies; 27+ messages in thread
From: Vitaly Wool @ 2022-05-19  7:04 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Andrew Morton, Linux-MM, LKML

On Fri, Apr 29, 2022 at 8:40 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> alloc_slots can fail to allocate memory under heavy memory pressure, so
> we should check zhdr->slots against NULL to avoid a null pointer
> dereference later on.
>
> Fixes: fc5488651c7d ("z3fold: simplify freeing slots")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/z3fold.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index c2260f5a5885..5d8c21f2bc59 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -940,9 +940,19 @@ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool,
>                 }
>         }
>
> -       if (zhdr && !zhdr->slots)
> +       if (zhdr && !zhdr->slots) {
>                 zhdr->slots = alloc_slots(pool, GFP_ATOMIC);
> +               if (!zhdr->slots)
> +                       goto out_fail;
> +       }
>         return zhdr;
> +
> +out_fail:
> +       if (!kref_put(&zhdr->refcount, release_z3fold_page_locked)) {
> +               add_to_unbuddied(pool, zhdr);
> +               z3fold_page_unlock(zhdr);
> +       }
> +       return NULL;
>  }

Reviewed-by: Vitaly Wool <vitaly.wool@konsulko.com>
>  /*
> --
> 2.23.0
>


* Re: [PATCH 3/9] mm/z3fold: remove buggy use of stale list for allocation
  2022-04-29  6:40 ` [PATCH 3/9] mm/z3fold: remove buggy use of stale list for allocation Miaohe Lin
@ 2022-05-19  7:06   ` Vitaly Wool
  0 siblings, 0 replies; 27+ messages in thread
From: Vitaly Wool @ 2022-05-19  7:06 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Andrew Morton, Linux-MM, LKML

On Fri, Apr 29, 2022 at 8:41 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> Currently, if z3fold cannot find an unbuddied page it first tries to pull
> a page off the stale list. But this approach is problematic: if
> init_z3fold_page fails later, the page should be freed via
> free_z3fold_page to clean up the relevant resources, instead of using
> __free_page directly. And if the page is successfully reused, it will
> BUG_ON later in __SetPageMovable because it is already a non-LRU movable
> page, i.e. PAGE_MAPPING_MOVABLE is already set in page->mapping. To fix
> all of these issues, simply remove the buggy use of the stale list for
> allocation: can_sleep should always be false here, so we never really hit
> the reuse code path anyway.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/z3fold.c | 23 +----------------------
>  1 file changed, 1 insertion(+), 22 deletions(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 5d8c21f2bc59..4e6814c5694f 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -1102,28 +1102,7 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
>                 bud = FIRST;
>         }
>
> -       page = NULL;
> -       if (can_sleep) {
> -               spin_lock(&pool->stale_lock);
> -               zhdr = list_first_entry_or_null(&pool->stale,
> -                                               struct z3fold_header, buddy);
> -               /*
> -                * Before allocating a page, let's see if we can take one from
> -                * the stale pages list. cancel_work_sync() can sleep so we
> -                * limit this case to the contexts where we can sleep
> -                */
> -               if (zhdr) {
> -                       list_del(&zhdr->buddy);
> -                       spin_unlock(&pool->stale_lock);
> -                       cancel_work_sync(&zhdr->work);
> -                       page = virt_to_page(zhdr);
> -               } else {
> -                       spin_unlock(&pool->stale_lock);
> -               }
> -       }
> -       if (!page)
> -               page = alloc_page(gfp);
> -
> +       page = alloc_page(gfp);
>         if (!page)
>                 return -ENOMEM;

Reviewed-by: Vitaly Wool <vitaly.wool@konsulko.com>
> --
> 2.23.0
>


* Re: [PATCH 4/9] mm/z3fold: throw warning on failure of trylock_page in z3fold_alloc
  2022-04-29  6:40 ` [PATCH 4/9] mm/z3fold: throw warning on failure of trylock_page in z3fold_alloc Miaohe Lin
@ 2022-05-19  7:10   ` Vitaly Wool
  2022-05-19 11:10     ` Miaohe Lin
  0 siblings, 1 reply; 27+ messages in thread
From: Vitaly Wool @ 2022-05-19  7:10 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Andrew Morton, Linux-MM, LKML

On Fri, Apr 29, 2022 at 8:40 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> If trylock_page fails, the page won't be marked as a non-LRU movable
> page. When this page is later freed via free_z3fold_page, it will trigger
> a BUG on the PageMovable check in __ClearPageMovable. Throw a warning on
> failure of trylock_page to guard against such a rare case, just as
> zsmalloc does.

I don't see how this is better than what we currently have. We can
check if a page is movable before calling __ClearPageMovable instead.

~Vitaly

> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/z3fold.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 4e6814c5694f..b3b4e65c107f 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -1122,10 +1122,9 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
>                 __SetPageMovable(page, pool->inode->i_mapping);
>                 unlock_page(page);
>         } else {
> -               if (trylock_page(page)) {
> -                       __SetPageMovable(page, pool->inode->i_mapping);
> -                       unlock_page(page);
> -               }
> +               WARN_ON(!trylock_page(page));
> +               __SetPageMovable(page, pool->inode->i_mapping);
> +               unlock_page(page);
>         }
>         z3fold_page_lock(zhdr);
>
> --
> 2.23.0
>


* Re: [PATCH 5/9] revert "mm/z3fold.c: allow __GFP_HIGHMEM in z3fold_alloc"
  2022-04-29  6:40 ` [PATCH 5/9] revert "mm/z3fold.c: allow __GFP_HIGHMEM in z3fold_alloc" Miaohe Lin
@ 2022-05-19  7:12   ` Vitaly Wool
  2022-05-19 11:34     ` Miaohe Lin
  0 siblings, 1 reply; 27+ messages in thread
From: Vitaly Wool @ 2022-05-19  7:12 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Andrew Morton, Linux-MM, LKML

On Fri, Apr 29, 2022 at 8:41 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> Revert commit f1549cb5ab2b ("mm/z3fold.c: allow __GFP_HIGHMEM in
> z3fold_alloc").
>
> z3fold can't support GFP_HIGHMEM pages now: page_address is used
> directly in all places, and the z3fold_header sits on a per-CPU
> unbuddied list that can be accessed at any time. So we should drop
> support for __GFP_HIGHMEM allocations in z3fold.

Could you please clarify how kmem_cache is affected here?

Thanks,
Vitaly

> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/z3fold.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index b3b4e65c107f..5f5d5f1556be 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -212,10 +212,8 @@ static int size_to_chunks(size_t size)
>  static inline struct z3fold_buddy_slots *alloc_slots(struct z3fold_pool *pool,
>                                                         gfp_t gfp)
>  {
> -       struct z3fold_buddy_slots *slots;
> -
> -       slots = kmem_cache_zalloc(pool->c_handle,
> -                                (gfp & ~(__GFP_HIGHMEM | __GFP_MOVABLE)));
> +       struct z3fold_buddy_slots *slots = kmem_cache_zalloc(pool->c_handle,
> +                                                            gfp);
>
>         if (slots) {
>                 /* It will be freed separately in free_handle(). */
> @@ -1075,7 +1073,7 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
>         enum buddy bud;
>         bool can_sleep = gfpflags_allow_blocking(gfp);
>
> -       if (!size)
> +       if (!size || (gfp & __GFP_HIGHMEM))
>                 return -EINVAL;
>
>         if (size > PAGE_SIZE)
> --
> 2.23.0
>


* Re: [PATCH 6/9] mm/z3fold: put z3fold page back into unbuddied list when reclaim or migration fails
  2022-04-29  6:40 ` [PATCH 6/9] mm/z3fold: put z3fold page back into unbuddied list when reclaim or migration fails Miaohe Lin
@ 2022-05-19  7:13   ` Vitaly Wool
  0 siblings, 0 replies; 27+ messages in thread
From: Vitaly Wool @ 2022-05-19  7:13 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Andrew Morton, Linux-MM, LKML

On Fri, Apr 29, 2022 at 8:40 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> When doing z3fold page reclaim or migration, the page is removed from the
> unbuddied list. If reclaim or migration succeeds, that's fine, as the page
> is released. But if it fails, the page is not put back onto the unbuddied
> list, so it is effectively leaked until the next compaction work, reclaim,
> or migration is done.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/z3fold.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 5f5d5f1556be..a1c150fc8def 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -1422,6 +1422,8 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
>                         spin_lock(&pool->lock);
>                         list_add(&page->lru, &pool->lru);
>                         spin_unlock(&pool->lock);
> +                       if (list_empty(&zhdr->buddy))
> +                               add_to_unbuddied(pool, zhdr);
>                         z3fold_page_unlock(zhdr);
>                         clear_bit(PAGE_CLAIMED, &page->private);
>                 }
> @@ -1638,6 +1640,8 @@ static void z3fold_page_putback(struct page *page)
>         spin_lock(&pool->lock);
>         list_add(&page->lru, &pool->lru);
>         spin_unlock(&pool->lock);
> +       if (list_empty(&zhdr->buddy))
> +               add_to_unbuddied(pool, zhdr);
>         clear_bit(PAGE_CLAIMED, &page->private);
>         z3fold_page_unlock(zhdr);
>  }

Reviewed-by: Vitaly Wool <vitaly.wool@konsulko.com>
> --
> 2.23.0
>


* Re: [PATCH 7/9] mm/z3fold: always clear PAGE_CLAIMED under z3fold page lock
  2022-04-29  6:40 ` [PATCH 7/9] mm/z3fold: always clear PAGE_CLAIMED under z3fold page lock Miaohe Lin
@ 2022-05-19  7:14   ` Vitaly Wool
  0 siblings, 0 replies; 27+ messages in thread
From: Vitaly Wool @ 2022-05-19  7:14 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Andrew Morton, Linux-MM, LKML

On Fri, Apr 29, 2022 at 8:40 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> Think about the below race window:
>
> CPU1                            CPU2
> z3fold_reclaim_page             z3fold_free
>  test_and_set_bit PAGE_CLAIMED
>  failed to reclaim page
>  z3fold_page_lock(zhdr);
>  add back to the lru list;
>  z3fold_page_unlock(zhdr);
>                                  get_z3fold_header
>                                  page_claimed=test_and_set_bit PAGE_CLAIMED
>
>  clear_bit(PAGE_CLAIMED, &page->private);
>
>                                  if (!page_claimed) /* spuriously true */
>                                   free_handle is not called
>
> free_handle won't be called in this case, so the z3fold_buddy_slots will
> leak. Fix it by always clearing PAGE_CLAIMED under the z3fold page lock.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/z3fold.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index a1c150fc8def..4a3cd2ff15b0 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -1221,8 +1221,8 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
>                 return;
>         }
>         if (test_and_set_bit(NEEDS_COMPACTING, &page->private)) {
> -               put_z3fold_header(zhdr);
>                 clear_bit(PAGE_CLAIMED, &page->private);
> +               put_z3fold_header(zhdr);
>                 return;
>         }
>         if (zhdr->cpu < 0 || !cpu_online(zhdr->cpu)) {
> @@ -1424,8 +1424,8 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
>                         spin_unlock(&pool->lock);
>                         if (list_empty(&zhdr->buddy))
>                                 add_to_unbuddied(pool, zhdr);
> -                       z3fold_page_unlock(zhdr);
>                         clear_bit(PAGE_CLAIMED, &page->private);
> +                       z3fold_page_unlock(zhdr);
>                 }
>
>                 /* We started off locked to we need to lock the pool back */
> @@ -1577,8 +1577,8 @@ static int z3fold_page_migrate(struct address_space *mapping, struct page *newpa
>         if (!z3fold_page_trylock(zhdr))
>                 return -EAGAIN;
>         if (zhdr->mapped_count != 0 || zhdr->foreign_handles != 0) {
> -               z3fold_page_unlock(zhdr);
>                 clear_bit(PAGE_CLAIMED, &page->private);
> +               z3fold_page_unlock(zhdr);
>                 return -EBUSY;
>         }
>         if (work_pending(&zhdr->work)) {

Reviewed-by: Vitaly Wool <vitaly.wool@konsulko.com>
> --
> 2.23.0
>


* Re: [PATCH 8/9] mm/z3fold: fix z3fold_reclaim_page races with z3fold_free
  2022-04-29  6:40 ` [PATCH 8/9] mm/z3fold: fix z3fold_reclaim_page races with z3fold_free Miaohe Lin
@ 2022-05-19  7:24   ` Vitaly Wool
  0 siblings, 0 replies; 27+ messages in thread
From: Vitaly Wool @ 2022-05-19  7:24 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Andrew Morton, Linux-MM, LKML

On Fri, Apr 29, 2022 at 8:40 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> Think about the below scene:
>
> CPU1                            CPU2
> z3fold_reclaim_page             z3fold_free
>  spin_lock(&pool->lock)          get_z3fold_header -- hold page_lock
>  kref_get_unless_zero
>                                  kref_put--zhdr->refcount can be 1 now
>  !z3fold_page_trylock
>   kref_put -- zhdr->refcount is 0 now
>    release_z3fold_page
>     WARN_ON(!list_empty(&zhdr->buddy)); -- we're on buddy now!
>     spin_lock(&pool->lock); -- deadlock here!
>
> z3fold_reclaim_page can race with z3fold_free, leading to a pool lock
> deadlock and a zhdr buddy non-empty warning. To fix this, defer getting
> the refcount until page_lock is held, just as __z3fold_alloc does. Note
> this has the side effect that we won't break out of the reclaim loop when
> we meet a soon-to-be-released z3fold page now.
>
> Fixes: dcf5aedb24f8 ("z3fold: stricter locking and more careful reclaim")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/z3fold.c | 18 +++---------------
>  1 file changed, 3 insertions(+), 15 deletions(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 4a3cd2ff15b0..a7769befd74e 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -519,13 +519,6 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
>         atomic64_dec(&pool->pages_nr);
>  }
>
> -static void release_z3fold_page(struct kref *ref)
> -{
> -       struct z3fold_header *zhdr = container_of(ref, struct z3fold_header,
> -                                               refcount);
> -       __release_z3fold_page(zhdr, false);
> -}
> -
>  static void release_z3fold_page_locked(struct kref *ref)
>  {
>         struct z3fold_header *zhdr = container_of(ref, struct z3fold_header,
> @@ -1317,12 +1310,7 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
>                                 break;
>                         }
>
> -                       if (kref_get_unless_zero(&zhdr->refcount) == 0) {
> -                               zhdr = NULL;
> -                               break;
> -                       }
>                         if (!z3fold_page_trylock(zhdr)) {
> -                               kref_put(&zhdr->refcount, release_z3fold_page);
>                                 zhdr = NULL;
>                                 continue; /* can't evict at this point */
>                         }
> @@ -1333,14 +1321,14 @@ static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
>                          */
>                         if (zhdr->foreign_handles ||
>                             test_and_set_bit(PAGE_CLAIMED, &page->private)) {
> -                               if (!kref_put(&zhdr->refcount,
> -                                               release_z3fold_page_locked))
> -                                       z3fold_page_unlock(zhdr);
> +                               z3fold_page_unlock(zhdr);
>                                 zhdr = NULL;
>                                 continue; /* can't evict such page */
>                         }
>                         list_del_init(&zhdr->buddy);
>                         zhdr->cpu = -1;
> +                       /* See comment in __z3fold_alloc. */
> +                       kref_get(&zhdr->refcount);
>                         break;
>                 }
>

Reviewed-by: Vitaly Wool <vitaly.wool@konsulko.com>
> --
> 2.23.0
>


* Re: [PATCH 9/9] mm/z3fold: fix z3fold_page_migrate races with z3fold_map
  2022-04-29  6:40 ` [PATCH 9/9] mm/z3fold: fix z3fold_page_migrate races with z3fold_map Miaohe Lin
@ 2022-05-19  7:28   ` Vitaly Wool
  0 siblings, 0 replies; 27+ messages in thread
From: Vitaly Wool @ 2022-05-19  7:28 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Andrew Morton, Linux-MM, LKML

On Fri, Apr 29, 2022 at 8:40 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>
> Think about the below scene:
> CPU1                            CPU2
>  z3fold_page_migrate            z3fold_map
>   z3fold_page_trylock
>   ...
>   z3fold_page_unlock
>   /* slots still points to old zhdr*/
>                                  get_z3fold_header
>                                   get slots from handle
>                                   get old zhdr from slots
>                                   z3fold_page_trylock
>                                   return *old* zhdr
>   encode_handle(new_zhdr, FIRST|LAST|MIDDLE)
>   put_page(page) /* zhdr is freed! */
>                                  but zhdr is still used by caller!
>
> z3fold_map can thus map a freed z3fold page, leading to a use-after-free
> bug. To fix it, add PAGE_MIGRATED to indicate that the z3fold page is
> migrated and soon to be released, so that get_z3fold_header won't return
> such a page.
>
> Fixes: 1f862989b04a ("mm/z3fold.c: support page migration")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/z3fold.c | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index a7769befd74e..f41f8b0d9e9a 100644
> --- a/mm/z3fold.c
> +++ b/mm/z3fold.c
> @@ -181,6 +181,7 @@ enum z3fold_page_flags {
>         NEEDS_COMPACTING,
>         PAGE_STALE,
>         PAGE_CLAIMED, /* by either reclaim or free */
> +       PAGE_MIGRATED, /* page is migrated and soon to be released */
>  };
>
>  /*
> @@ -270,8 +271,13 @@ static inline struct z3fold_header *get_z3fold_header(unsigned long handle)
>                         zhdr = (struct z3fold_header *)(addr & PAGE_MASK);
>                         locked = z3fold_page_trylock(zhdr);
>                         read_unlock(&slots->lock);
> -                       if (locked)
> -                               break;
> +                       if (locked) {
> +                               struct page *page = virt_to_page(zhdr);
> +
> +                               if (!test_bit(PAGE_MIGRATED, &page->private))
> +                                       break;
> +                               z3fold_page_unlock(zhdr);
> +                       }
>                         cpu_relax();
>                 } while (true);
>         } else {
> @@ -389,6 +395,7 @@ static struct z3fold_header *init_z3fold_page(struct page *page, bool headless,
>         clear_bit(NEEDS_COMPACTING, &page->private);
>         clear_bit(PAGE_STALE, &page->private);
>         clear_bit(PAGE_CLAIMED, &page->private);
> +       clear_bit(PAGE_MIGRATED, &page->private);
>         if (headless)
>                 return zhdr;
>
> @@ -1576,7 +1583,7 @@ static int z3fold_page_migrate(struct address_space *mapping, struct page *newpa
>         new_zhdr = page_address(newpage);
>         memcpy(new_zhdr, zhdr, PAGE_SIZE);
>         newpage->private = page->private;
> -       page->private = 0;
> +       set_bit(PAGE_MIGRATED, &page->private);
>         z3fold_page_unlock(zhdr);
>         spin_lock_init(&new_zhdr->page_lock);
>         INIT_WORK(&new_zhdr->work, compact_page_work);
> @@ -1606,7 +1613,8 @@ static int z3fold_page_migrate(struct address_space *mapping, struct page *newpa
>
>         queue_work_on(new_zhdr->cpu, pool->compact_wq, &new_zhdr->work);
>
> -       clear_bit(PAGE_CLAIMED, &page->private);
> +       /* PAGE_CLAIMED and PAGE_MIGRATED are cleared now. */
> +       page->private = 0;
>         put_page(page);
>         return 0;
>  }

Reviewed-by: Vitaly Wool <vitaly.wool@konsulko.com>
> --
> 2.23.0
>


* Re: [PATCH 4/9] mm/z3fold: throw warning on failure of trylock_page in z3fold_alloc
  2022-05-19  7:10   ` Vitaly Wool
@ 2022-05-19 11:10     ` Miaohe Lin
  0 siblings, 0 replies; 27+ messages in thread
From: Miaohe Lin @ 2022-05-19 11:10 UTC (permalink / raw)
  To: Vitaly Wool; +Cc: Andrew Morton, Linux-MM, LKML

On 2022/5/19 15:10, Vitaly Wool wrote:
> On Fri, Apr 29, 2022 at 8:40 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>
>> If trylock_page fails, the page won't be marked as a non-LRU movable
>> page. When this page is later freed via free_z3fold_page, it will trigger
>> a BUG on the PageMovable check in __ClearPageMovable. Throw a warning on
>> failure of trylock_page to guard against such a rare case, just as
>> zsmalloc does.
> 
> I don't see how this is better than what we currently have. We can
> check if a page is movable before calling __ClearPageMovable instead.

Currently the z3fold page (unless it's a headless page) is assumed to be
non-LRU movable. We always do __ClearPageMovable in free_z3fold_page, which
will trigger the BUG_ON(!PageMovable) now. Do you mean we should do
something like the below?

diff --git a/mm/z3fold.c b/mm/z3fold.c
index f41f8b0d9e9a..a244bb5dcb34 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -417,9 +417,10 @@ static struct z3fold_header *init_z3fold_page(struct page *page, bool headless,
 /* Resets the struct page fields and frees the page */
 static void free_z3fold_page(struct page *page, bool headless)
 {
-       if (!headless) {
+       if (likely(__PageMovable(page))) {
                lock_page(page);
-               __ClearPageMovable(page);
+               if (PageMovable(page))
+                       __ClearPageMovable(page);
                unlock_page(page);
        }
        __free_page(page);

Thanks!

> 
> ~Vitaly
> 
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/z3fold.c | 7 +++----
>>  1 file changed, 3 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/z3fold.c b/mm/z3fold.c
>> index 4e6814c5694f..b3b4e65c107f 100644
>> --- a/mm/z3fold.c
>> +++ b/mm/z3fold.c
>> @@ -1122,10 +1122,9 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
>>                 __SetPageMovable(page, pool->inode->i_mapping);
>>                 unlock_page(page);
>>         } else {
>> -               if (trylock_page(page)) {
>> -                       __SetPageMovable(page, pool->inode->i_mapping);
>> -                       unlock_page(page);
>> -               }
>> +               WARN_ON(!trylock_page(page));
>> +               __SetPageMovable(page, pool->inode->i_mapping);
>> +               unlock_page(page);
>>         }
>>         z3fold_page_lock(zhdr);
>>
>> --
>> 2.23.0
>>
> .
> 



* Re: [PATCH 5/9] revert "mm/z3fold.c: allow __GFP_HIGHMEM in z3fold_alloc"
  2022-05-19  7:12   ` Vitaly Wool
@ 2022-05-19 11:34     ` Miaohe Lin
  2022-05-19 18:31       ` Andrew Morton
  0 siblings, 1 reply; 27+ messages in thread
From: Miaohe Lin @ 2022-05-19 11:34 UTC (permalink / raw)
  To: Vitaly Wool; +Cc: Andrew Morton, Linux-MM, LKML

On 2022/5/19 15:12, Vitaly Wool wrote:
> On Fri, Apr 29, 2022 at 8:41 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>
>> Revert commit f1549cb5ab2b ("mm/z3fold.c: allow __GFP_HIGHMEM in
>> z3fold_alloc").
>>
>> z3fold can't support GFP_HIGHMEM pages now: page_address is used
>> directly in all places, and the z3fold_header sits on a per-CPU
>> unbuddied list that can be accessed at any time. So we should drop
>> support for __GFP_HIGHMEM allocations in z3fold.
> 
> Could you please clarify how kmem_cache is affected here?

With this code change, kmem_cache should be unaffected. HIGHMEM is still
not supported for kmem_cache, just like before, but the caller now ensures
__GFP_HIGHMEM is not passed in. The issue I want to fix here is that if a
z3fold page is allocated from highmem, page_address can't be used directly.
Did I answer your question, or am I missing your point?

Thanks!

> 
> Thanks,
> Vitaly

Many thanks for your time! :)

> 
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/z3fold.c | 8 +++-----
>>  1 file changed, 3 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/z3fold.c b/mm/z3fold.c
>> index b3b4e65c107f..5f5d5f1556be 100644
>> --- a/mm/z3fold.c
>> +++ b/mm/z3fold.c
>> @@ -212,10 +212,8 @@ static int size_to_chunks(size_t size)
>>  static inline struct z3fold_buddy_slots *alloc_slots(struct z3fold_pool *pool,
>>                                                         gfp_t gfp)
>>  {
>> -       struct z3fold_buddy_slots *slots;
>> -
>> -       slots = kmem_cache_zalloc(pool->c_handle,
>> -                                (gfp & ~(__GFP_HIGHMEM | __GFP_MOVABLE)));
>> +       struct z3fold_buddy_slots *slots = kmem_cache_zalloc(pool->c_handle,
>> +                                                            gfp);
>>
>>         if (slots) {
>>                 /* It will be freed separately in free_handle(). */
>> @@ -1075,7 +1073,7 @@ static int z3fold_alloc(struct z3fold_pool *pool, size_t size, gfp_t gfp,
>>         enum buddy bud;
>>         bool can_sleep = gfpflags_allow_blocking(gfp);
>>
>> -       if (!size)
>> +       if (!size || (gfp & __GFP_HIGHMEM))
>>                 return -EINVAL;
>>
>>         if (size > PAGE_SIZE)
>> --
>> 2.23.0
>>
> .
> 



* Re: [PATCH 5/9] revert "mm/z3fold.c: allow __GFP_HIGHMEM in z3fold_alloc"
  2022-05-19 11:34     ` Miaohe Lin
@ 2022-05-19 18:31       ` Andrew Morton
  2022-05-20  2:30         ` Miaohe Lin
  0 siblings, 1 reply; 27+ messages in thread
From: Andrew Morton @ 2022-05-19 18:31 UTC (permalink / raw)
  To: Miaohe Lin; +Cc: Vitaly Wool, Linux-MM, LKML

On Thu, 19 May 2022 19:34:01 +0800 Miaohe Lin <linmiaohe@huawei.com> wrote:

> On 2022/5/19 15:12, Vitaly Wool wrote:
> > On Fri, Apr 29, 2022 at 8:41 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
> >>
> >> Revert commit f1549cb5ab2b ("mm/z3fold.c: allow __GFP_HIGHMEM in
> >> z3fold_alloc").
> >>
> >> z3fold can't support GFP_HIGHMEM pages now: page_address is used
> >> directly in all places, and the z3fold_header sits on a per-CPU
> >> unbuddied list that can be accessed at any time. So we should drop
> >> support for __GFP_HIGHMEM allocations in z3fold.
> > 
> > Could you please clarify how kmem_cache is affected here?
> 
> With this code change, kmem_cache should be unaffected. HIGHMEM is still
> not supported for kmem_cache, just like before, but the caller now ensures
> __GFP_HIGHMEM is not passed in. The issue I want to fix here is that if a
> z3fold page is allocated from highmem, page_address can't be used directly.
> Did I answer your question, or am I missing your point?
> 

Yes, page_address() against a highmem page only works if that page has
been mapped into pagetables with kmap() or kmap_atomic(), and z3fold
doesn't appear to do that.
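
For reference, a hedged sketch of the pattern z3fold would need around
every z3fold_header access to support highmem (standard kmap API; as the
follow-up below notes, the per-CPU unbuddied lists make this impractical):

	struct z3fold_header *zhdr;

	zhdr = kmap(page);	/* temporarily map the highmem page */
	/* ... access zhdr fields ... */
	kunmap(page);		/* would have to happen before the header
				 * is parked back on an unbuddied list */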

Given that other zpool_driver implementations do appear to support
highmem pages, I expect that z3fold should be taught likewise.


I didn't look very hard, but this particular patch is a bit worrisome. 
As I understand it, zswap can enable highmem:

	if (zpool_malloc_support_movable(entry->pool->zpool))
		gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;

and z3fold will silently ignore the __GFP_HIGHMEM, which is OK.  But
with this patch, z3fold will now return -EINVAL, so existing setups
will start failing?



* Re: [PATCH 5/9] revert "mm/z3fold.c: allow __GFP_HIGHMEM in z3fold_alloc"
  2022-05-19 18:31       ` Andrew Morton
@ 2022-05-20  2:30         ` Miaohe Lin
  0 siblings, 0 replies; 27+ messages in thread
From: Miaohe Lin @ 2022-05-20  2:30 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Vitaly Wool, Linux-MM, LKML

On 2022/5/20 2:31, Andrew Morton wrote:
> On Thu, 19 May 2022 19:34:01 +0800 Miaohe Lin <linmiaohe@huawei.com> wrote:
> 
>> On 2022/5/19 15:12, Vitaly Wool wrote:
>>> On Fri, Apr 29, 2022 at 8:41 AM Miaohe Lin <linmiaohe@huawei.com> wrote:
>>>>
>>>> Revert commit f1549cb5ab2b ("mm/z3fold.c: allow __GFP_HIGHMEM in
>>>> z3fold_alloc").
>>>>
>>>> z3fold can't support GFP_HIGHMEM pages now: page_address is used
>>>> directly in all places, and the z3fold_header sits on a per-CPU
>>>> unbuddied list that can be accessed at any time. So we should drop
>>>> support for __GFP_HIGHMEM allocations in z3fold.
>>>
>>> Could you please clarify how kmem_cache is affected here?
>>
>> With this code change, kmem_cache should be unaffected. HIGHMEM is still
>> not supported for kmem_cache, just like before, but the caller now ensures
>> __GFP_HIGHMEM is not passed in. The issue I want to fix here is that if a
>> z3fold page is allocated from highmem, page_address can't be used directly.
>> Did I answer your question, or am I missing your point?
>>
> 
> Yes, page_address() against a highmem page only works if that page has
> been mapped into pagetables with kmap() or kmap_atomic(), and z3fold
> doesn't appear to do that.

What's more, a z3fold page usually sits on the per-CPU unbuddied list and
thus can be accessed directly at any time. So we couldn't do kunmap or
kunmap_atomic until the z3fold page is taken off the list.

> 
> Given that other zpool_driver implementations do appear to support
> highmem pages, I expect that z3fold should be taught likewise.
> 

IMHO, it might be too cumbersome to support highmem pages for the above reason.

> 
> I didn't look very hard, but this particular patch is a bit worrisome. 
> As I understand it, zswap can enable highmem:
> 
> 	if (zpool_malloc_support_movable(entry->pool->zpool))
> 		gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
> 
> and z3fold will silently ignore the __GFP_HIGHMEM, which is OK.  But
> with this patch, z3fold will now return -EINVAL, so existing setups
> will start failing?

IIUC, malloc_support_movable is never set for z3fold_zpool_driver, so
zpool_malloc_support_movable will always return false, i.e. __GFP_HIGHMEM |
__GFP_MOVABLE is never set for a z3fold page today (otherwise page_address
would return NULL for highmem pages and null-pointer dereferences would have
been seen already). Therefore existing setups will remain unaffected. Or am
I missing something?

Thanks a lot!

> 
> .
> 

