* [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration
@ 2022-05-09  2:47 Sultan Alsawaf
  2022-05-10  0:06 ` Andrew Morton
  2022-05-11 18:01 ` Minchan Kim
  0 siblings, 2 replies; 10+ messages in thread
From: Sultan Alsawaf @ 2022-05-09  2:47 UTC (permalink / raw)
  Cc: Sultan Alsawaf, stable, Minchan Kim, Nitin Gupta,
	Sergey Senozhatsky, Andrew Morton, linux-mm, linux-kernel

From: Sultan Alsawaf <sultan@kerneltoast.com>

The asynchronous zspage free worker tries to lock a zspage's entire page
list without defending against page migration. Since pages which haven't
yet been locked can concurrently migrate off the zspage page list while
lock_zspage() churns away, lock_zspage() can suffer from a few different
lethal races. It can lock a page which no longer belongs to the zspage and
unsafely dereference page_private(), it can unsafely dereference a torn
pointer to the next page (since there's a data race), and it can observe a
spurious NULL pointer to the next page and thus not lock all of the
zspage's pages (since a single page migration will reconstruct the entire
page list, and create_page_chain() unconditionally zeroes out each list
pointer in the process).
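
For example, one losing interleaving looks roughly like this (an
illustrative sketch; exact call sites elided):

    CPU0 (async free worker)          CPU1 (page migration)
    lock_zspage()
      page = get_first_page(zspage)
                                      lock_page(page)
                                      replace_sub_page()
                                        create_page_chain()
                                        /* page list rebuilt; next-page
                                           links zeroed, then rewritten */
                                      unlock_page(page)
      lock_page(page)
      /* page may no longer belong to the zspage:
         page_private() is stale and the next-page
         link can be torn or spuriously NULL */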

Fix the races by using migrate_read_lock() in lock_zspage() to synchronize
with page migration.

Cc: stable@vger.kernel.org
Fixes: 48b4800a1c6a ("zsmalloc: page migration support")
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
---
 mm/zsmalloc.c | 37 +++++++++++++++++++++++++++++++++----
 1 file changed, 33 insertions(+), 4 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9152fbde33b5..5d5fc04385b8 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1718,11 +1718,40 @@ static enum fullness_group putback_zspage(struct size_class *class,
  */
 static void lock_zspage(struct zspage *zspage)
 {
-	struct page *page = get_first_page(zspage);
+	struct page *curr_page, *page;
 
-	do {
-		lock_page(page);
-	} while ((page = get_next_page(page)) != NULL);
+	/*
+	 * Pages we haven't locked yet can be migrated off the list while we're
+	 * trying to lock them, so we need to be careful and only attempt to
+	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
+	 * may no longer belong to the zspage. This means that we may wait for
+	 * the wrong page to unlock, so we must take a reference to the page
+	 * prior to waiting for it to unlock outside migrate_read_lock().
+	 */
+	while (1) {
+		migrate_read_lock(zspage);
+		page = get_first_page(zspage);
+		if (trylock_page(page))
+			break;
+		get_page(page);
+		migrate_read_unlock(zspage);
+		wait_on_page_locked(page);
+		put_page(page);
+	}
+
+	curr_page = page;
+	while ((page = get_next_page(curr_page))) {
+		if (trylock_page(page)) {
+			curr_page = page;
+		} else {
+			get_page(page);
+			migrate_read_unlock(zspage);
+			wait_on_page_locked(page);
+			put_page(page);
+			migrate_read_lock(zspage);
+		}
+	}
+	migrate_read_unlock(zspage);
 }
 
 static int zs_init_fs_context(struct fs_context *fc)
-- 
2.36.0



* Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration
  2022-05-09  2:47 [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration Sultan Alsawaf
@ 2022-05-10  0:06 ` Andrew Morton
  2022-05-10  1:22   ` Sultan Alsawaf
  2022-05-11 18:01 ` Minchan Kim
  1 sibling, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2022-05-10  0:06 UTC (permalink / raw)
  To: Sultan Alsawaf
  Cc: stable, Minchan Kim, Nitin Gupta, Sergey Senozhatsky, linux-mm,
	linux-kernel

On Sun,  8 May 2022 19:47:02 -0700 Sultan Alsawaf <sultan@kerneltoast.com> wrote:

> From: Sultan Alsawaf <sultan@kerneltoast.com>
> 
> The asynchronous zspage free worker tries to lock a zspage's entire page
> list without defending against page migration. Since pages which haven't
> yet been locked can concurrently migrate off the zspage page list while
> lock_zspage() churns away, lock_zspage() can suffer from a few different
> lethal races. It can lock a page which no longer belongs to the zspage and
> unsafely dereference page_private(), it can unsafely dereference a torn
> pointer to the next page (since there's a data race), and it can observe a
> spurious NULL pointer to the next page and thus not lock all of the
> zspage's pages (since a single page migration will reconstruct the entire
> page list, and create_page_chain() unconditionally zeroes out each list
> pointer in the process).
> 
> Fix the races by using migrate_read_lock() in lock_zspage() to synchronize
> with page migration.
> 
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1718,11 +1718,40 @@ static enum fullness_group putback_zspage(struct size_class *class,
>   */
>  static void lock_zspage(struct zspage *zspage)
>  {
> -	struct page *page = get_first_page(zspage);
> +	struct page *curr_page, *page;
>  
> -	do {
> -		lock_page(page);
> -	} while ((page = get_next_page(page)) != NULL);
> +	/*
> +	 * Pages we haven't locked yet can be migrated off the list while we're
> +	 * trying to lock them, so we need to be careful and only attempt to
> +	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
> +	 * may no longer belong to the zspage. This means that we may wait for
> +	 * the wrong page to unlock, so we must take a reference to the page
> +	 * prior to waiting for it to unlock outside migrate_read_lock().
> +	 */
> +	while (1) {
> +		migrate_read_lock(zspage);
> +		page = get_first_page(zspage);
> +		if (trylock_page(page))
> +			break;
> +		get_page(page);
> +		migrate_read_unlock(zspage);
> +		wait_on_page_locked(page);

Why not simply lock_page() here?  The get_page() alone won't protect
from all the dire consequences which you have identified?

> +		put_page(page);
> +	}
> +
> +	curr_page = page;
> +	while ((page = get_next_page(curr_page))) {
> +		if (trylock_page(page)) {
> +			curr_page = page;
> +		} else {
> +			get_page(page);
> +			migrate_read_unlock(zspage);
> +			wait_on_page_locked(page);

ditto.

> +			put_page(page);
> +			migrate_read_lock(zspage);
> +		}
> +	}
> +	migrate_read_unlock(zspage);
>  }
>  
>  static int zs_init_fs_context(struct fs_context *fc)



* Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration
  2022-05-10  0:06 ` Andrew Morton
@ 2022-05-10  1:22   ` Sultan Alsawaf
  0 siblings, 0 replies; 10+ messages in thread
From: Sultan Alsawaf @ 2022-05-10  1:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: stable, Minchan Kim, Nitin Gupta, Sergey Senozhatsky, linux-mm,
	linux-kernel

On Mon, May 09, 2022 at 05:06:32PM -0700, Andrew Morton wrote:
> Why not simply lock_page() here?  The get_page() alone won't protect
> from all the dire consequences which you have identified?

Hi,

My reasoning is that if the page has migrated, then we hold the last
reference to it anyway and there's no point in locking it. But more
importantly, because of data races stemming from replace_sub_page(), we'd
still need to take migrate_read_lock() again to verify that the page
hasn't migrated, so I don't think there's much to gain by using
lock_page(). When any page in the zspage migrates, the entire page list
is reconstructed and every page's private storage is rewritten. I had
drafted another change that fixes the data races by trimming out all of
that redundant work done in replace_sub_page(), but I wanted to keep this
patch small to make it easier to review and easier to backport.
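
For illustration, a lock_page()-based retry would look roughly like this
(hypothetical sketch, not what the posted patch does):

	} else {
		get_page(page);
		migrate_read_unlock(zspage);
		lock_page(page);	/* instead of wait_on_page_locked() */
		unlock_page(page);
		put_page(page);
		migrate_read_lock(zspage);
		/*
		 * The page may have migrated off the zspage while it was
		 * unlocked, so we must still restart get_next_page() under
		 * the migrate lock; lock_page() bought us nothing.
		 */
	}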

Sultan


* Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration
  2022-05-09  2:47 [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration Sultan Alsawaf
  2022-05-10  0:06 ` Andrew Morton
@ 2022-05-11 18:01 ` Minchan Kim
  2022-05-11 19:50   ` Sultan Alsawaf
  1 sibling, 1 reply; 10+ messages in thread
From: Minchan Kim @ 2022-05-11 18:01 UTC (permalink / raw)
  To: Sultan Alsawaf
  Cc: stable, Nitin Gupta, Sergey Senozhatsky, Andrew Morton, linux-mm,
	linux-kernel

On Sun, May 08, 2022 at 07:47:02PM -0700, Sultan Alsawaf wrote:
> From: Sultan Alsawaf <sultan@kerneltoast.com>
> 
> The asynchronous zspage free worker tries to lock a zspage's entire page
> list without defending against page migration. Since pages which haven't
> yet been locked can concurrently migrate off the zspage page list while
> lock_zspage() churns away, lock_zspage() can suffer from a few different
> lethal races. It can lock a page which no longer belongs to the zspage and
> unsafely dereference page_private(), it can unsafely dereference a torn
> pointer to the next page (since there's a data race), and it can observe a
> spurious NULL pointer to the next page and thus not lock all of the
> zspage's pages (since a single page migration will reconstruct the entire
> page list, and create_page_chain() unconditionally zeroes out each list
> pointer in the process).
> 
> Fix the races by using migrate_read_lock() in lock_zspage() to synchronize
> with page migration.
> 
> Cc: stable@vger.kernel.org
> Fixes: 48b4800a1c6a ("zsmalloc: page migration support")

Shouldn't the fix be Fixes: 77ff465799c6 ("zsmalloc: zs_page_migrate: skip
unnecessary loops but not return -EBUSY if zspage is not inuse")?
Because we didn't migrate ZS_EMPTY pages before.

> Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
> ---
>  mm/zsmalloc.c | 37 +++++++++++++++++++++++++++++++++----
>  1 file changed, 33 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 9152fbde33b5..5d5fc04385b8 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1718,11 +1718,40 @@ static enum fullness_group putback_zspage(struct size_class *class,
>   */
>  static void lock_zspage(struct zspage *zspage)
>  {
> -	struct page *page = get_first_page(zspage);
> +	struct page *curr_page, *page;
>  
> -	do {
> -		lock_page(page);
> -	} while ((page = get_next_page(page)) != NULL);
> +	/*
> +	 * Pages we haven't locked yet can be migrated off the list while we're
> +	 * trying to lock them, so we need to be careful and only attempt to
> +	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
> +	 * may no longer belong to the zspage. This means that we may wait for
> +	 * the wrong page to unlock, so we must take a reference to the page
> +	 * prior to waiting for it to unlock outside migrate_read_lock().

I couldn't get the point here. Why couldn't we simply lock zspage migration?

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9152fbde33b5..05ff2315b7b1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1987,7 +1987,10 @@ static void async_free_zspage(struct work_struct *work)
 
        list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
                list_del(&zspage->list);
+
+               migrate_read_lock(zspage);
                lock_zspage(zspage);
+               migrate_read_unlock(zspage);
 
                get_zspage_mapping(zspage, &class_idx, &fullness);
                VM_BUG_ON(fullness != ZS_EMPTY);


> +	 */
> +	while (1) {
> +		migrate_read_lock(zspage);
> +		page = get_first_page(zspage);
> +		if (trylock_page(page))
> +			break;
> +		get_page(page);
> +		migrate_read_unlock(zspage);
> +		wait_on_page_locked(page);
> +		put_page(page);
> +	}
> +
> +	curr_page = page;
> +	while ((page = get_next_page(curr_page))) {
> +		if (trylock_page(page)) {
> +			curr_page = page;
> +		} else {
> +			get_page(page);
> +			migrate_read_unlock(zspage);
> +			wait_on_page_locked(page);
> +			put_page(page);
> +			migrate_read_lock(zspage);
> +		}
> +	}
> +	migrate_read_unlock(zspage);
>  }
>  
>  static int zs_init_fs_context(struct fs_context *fc)
> -- 
> 2.36.0
> 


* Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration
  2022-05-11 18:01 ` Minchan Kim
@ 2022-05-11 19:50   ` Sultan Alsawaf
  2022-05-11 20:43     ` Andrew Morton
  2022-05-11 21:07     ` Minchan Kim
  0 siblings, 2 replies; 10+ messages in thread
From: Sultan Alsawaf @ 2022-05-11 19:50 UTC (permalink / raw)
  To: Minchan Kim
  Cc: stable, Nitin Gupta, Sergey Senozhatsky, Andrew Morton, linux-mm,
	linux-kernel

On Wed, May 11, 2022 at 11:01:01AM -0700, Minchan Kim wrote:
> On Sun, May 08, 2022 at 07:47:02PM -0700, Sultan Alsawaf wrote:
> > Cc: stable@vger.kernel.org
> > Fixes: 48b4800a1c6a ("zsmalloc: page migration support")
> 
> Shouldn't the fix be Fixes: 77ff465799c6 ("zsmalloc: zs_page_migrate: skip
> unnecessary loops but not return -EBUSY if zspage is not inuse")?
> Because we didn't migrate ZS_EMPTY pages before.

Hi,

Yeah, 77ff465799c6 indeed seems like the commit that introduced the bug.

> I couldn't get the point here. Why couldn't we simply lock zspage migration?
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 9152fbde33b5..05ff2315b7b1 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1987,7 +1987,10 @@ static void async_free_zspage(struct work_struct *work)
>  
>         list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
>                 list_del(&zspage->list);
> +
> +               migrate_read_lock(zspage);
>                 lock_zspage(zspage);
> +               migrate_read_unlock(zspage);
>  
>                 get_zspage_mapping(zspage, &class_idx, &fullness);
>                 VM_BUG_ON(fullness != ZS_EMPTY);

There are two problems with this:
1. migrate_read_lock() takes a rwlock, and lock_page() can sleep; we can't
   hold a rwlock across lock_zspage()'s page locking.
2. This will cause a deadlock because it inverts the lock ordering in
   zs_page_migrate(), which takes migrate_write_lock() under the page lock
   (see the sketch below).
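
A minimal sketch of that inversion, assuming the mainline ordering:

    async_free_zspage()                  zs_page_migrate()
    migrate_read_lock(zspage);           lock_page(page);
    lock_zspage(zspage);
      lock_page(page);   /* sleeps;      migrate_write_lock(zspage);
        page held by the migrator */       /* spins; a reader is present */

Each side waits on the lock the other holds, and the worker sleeps while
holding a rwlock.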

Sultan


* Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration
  2022-05-11 19:50   ` Sultan Alsawaf
@ 2022-05-11 20:43     ` Andrew Morton
  2022-05-11 23:12       ` Minchan Kim
  2022-05-11 21:07     ` Minchan Kim
  1 sibling, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2022-05-11 20:43 UTC (permalink / raw)
  To: Sultan Alsawaf
  Cc: Minchan Kim, stable, Nitin Gupta, Sergey Senozhatsky, linux-mm,
	linux-kernel

On Wed, 11 May 2022 12:50:20 -0700 Sultan Alsawaf <sultan@kerneltoast.com> wrote:

> > Shouldn't the fix be Fixes: 77ff465799c6 ("zsmalloc: zs_page_migrate: skip
> > unnecessary loops but not return -EBUSY if zspage is not inuse")?
> > Because we didn't migrate ZS_EMPTY pages before.
> 
> Hi,
> 
> Yeah, 77ff465799c6 indeed seems like the commit that introduced the bug.

I updated the changelog, thanks.


* Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration
  2022-05-11 19:50   ` Sultan Alsawaf
  2022-05-11 20:43     ` Andrew Morton
@ 2022-05-11 21:07     ` Minchan Kim
  2022-05-11 21:45       ` Sultan Alsawaf
  1 sibling, 1 reply; 10+ messages in thread
From: Minchan Kim @ 2022-05-11 21:07 UTC (permalink / raw)
  To: Sultan Alsawaf
  Cc: stable, Nitin Gupta, Sergey Senozhatsky, Andrew Morton, linux-mm,
	linux-kernel

On Wed, May 11, 2022 at 12:50:20PM -0700, Sultan Alsawaf wrote:
> On Wed, May 11, 2022 at 11:01:01AM -0700, Minchan Kim wrote:
> > On Sun, May 08, 2022 at 07:47:02PM -0700, Sultan Alsawaf wrote:
> > > Cc: stable@vger.kernel.org
> > > Fixes: 48b4800a1c6a ("zsmalloc: page migration support")
> > 
> > Shouldn't the fix be Fixes: 77ff465799c6 ("zsmalloc: zs_page_migrate: skip
> > unnecessary loops but not return -EBUSY if zspage is not inuse")?
> > Because we didn't migrate ZS_EMPTY pages before.
> 
> Hi,
> 
> Yeah, 77ff465799c6 indeed seems like the commit that introduced the bug.
> 
> > I couldn't get the point here. Why couldn't we simply lock zspage migration?
> > 
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 9152fbde33b5..05ff2315b7b1 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1987,7 +1987,10 @@ static void async_free_zspage(struct work_struct *work)
> >  
> >         list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
> >                 list_del(&zspage->list);
> > +
> > +               migrate_read_lock(zspage);
> >                 lock_zspage(zspage);
> > +               migrate_read_unlock(zspage);
> >  
> >                 get_zspage_mapping(zspage, &class_idx, &fullness);
> >                 VM_BUG_ON(fullness != ZS_EMPTY);
> 
> There are two problems with this:
> 1. migrate_read_lock() takes a rwlock, and lock_page() can sleep; we can't
>    hold a rwlock across lock_zspage()'s page locking.
> 2. This will cause a deadlock because it inverts the lock ordering in
>    zs_page_migrate(), which takes migrate_write_lock() under the page lock.
> 

That's true. Thanks!

Then, how about this?

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 9152fbde33b5..2f205c18aee4 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1716,12 +1716,31 @@ static enum fullness_group putback_zspage(struct size_class *class,
  * To prevent zspage destroy during migration, zspage freeing should
  * hold locks of all pages in the zspage.
  */
-static void lock_zspage(struct zspage *zspage)
+static void lock_zspage(struct zs_pool *pool, struct zspage *zspage)
 {
-       struct page *page = get_first_page(zspage);
-
+       struct page *page;
+       int nr_locked;
+       struct page *locked_pages[ZS_MAX_PAGES_PER_ZSPAGE];
+       struct address_space *mapping;
+retry:
+       nr_locked = 0;
+       memset(locked_pages, 0, sizeof(struct page) * ARRAY_SIZE(locked_pages));
+       page = get_first_page(zspage);
        do {
                lock_page(page);
+               locked_pages[nr_locked++] = page;
+               /*
+                * subpage in the zspage could be migrated under us so
+                * verify it. Once it happens, retry the lock sequence.
+                */
+               mapping = page_mapping(page)
+               if (mapping != pool->inode->i_mapping ||
+                   page_private(page) != (unsigned long)zspage) {
+                       do {
+                               unlock_page(locked_pages[--nr_locked]);
+                       } while (nr_locked > 0)
+                       goto retry;
+               }
        } while ((page = get_next_page(page)) != NULL);
 }

@@ -1987,7 +2006,7 @@ static void async_free_zspage(struct work_struct *work)

        list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
                list_del(&zspage->list);
-               lock_zspage(zspage);
+               lock_zspage(pool, zspage);

                get_zspage_mapping(zspage, &class_idx, &fullness);
                VM_BUG_ON(fullness != ZS_EMPTY);




* Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration
  2022-05-11 21:07     ` Minchan Kim
@ 2022-05-11 21:45       ` Sultan Alsawaf
  2022-05-11 23:11         ` Minchan Kim
  0 siblings, 1 reply; 10+ messages in thread
From: Sultan Alsawaf @ 2022-05-11 21:45 UTC (permalink / raw)
  To: Minchan Kim
  Cc: stable, Nitin Gupta, Sergey Senozhatsky, Andrew Morton, linux-mm,
	linux-kernel

On Wed, May 11, 2022 at 02:07:19PM -0700, Minchan Kim wrote:
> Then, how about this?

Your proposal is still completely wrong. My original patch is fine; we can
stick with that.

> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 9152fbde33b5..2f205c18aee4 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1716,12 +1716,31 @@ static enum fullness_group putback_zspage(struct size_class *class,
>   * To prevent zspage destroy during migration, zspage freeing should
>   * hold locks of all pages in the zspage.
>   */
> -static void lock_zspage(struct zspage *zspage)
> +static void lock_zspage(struct zs_pool *pool, struct zspage *zspage)
>  {
> -       struct page *page = get_first_page(zspage);
> -
> +       struct page *page;
> +       int nr_locked;
> +       struct page *locked_pages[ZS_MAX_PAGES_PER_ZSPAGE];
> +       struct address_space *mapping;
> +retry:
> +       nr_locked = 0;
> +       memset(locked_pages, 0, sizeof(struct page) * ARRAY_SIZE(locked_pages));

This memset() zeroes out memory past the end of the array because it is an array
of pointers, not an array of page structs; the sizeof() is incorrect.
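
For reference, a correct form would size the pointer array itself:

	memset(locked_pages, 0, sizeof(locked_pages));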

> +       page = get_first_page(zspage);

You can't use get_first_page() outside of the migrate lock.

>         do {
>                 lock_page(page);

You can't lock a page that you don't own.

> +               locked_pages[nr_locked++] = page;
> +               /*
> +                * subpage in the zspage could be migrated under us so
> +                * verify it. Once it happens, retry the lock sequence.
> +                */
> +               mapping = page_mapping(page)

This doesn't compile.

> +               if (mapping != pool->inode->i_mapping ||
> +                   page_private(page) != (unsigned long)zspage) {
> +                       do {
> +                               unlock_page(locked_pages[--nr_locked]);
> +                       } while (nr_locked > 0)

This doesn't compile.

> +                       goto retry;
> +               }

There's no need to unlock all of the pages that were locked so far because once
a page is locked, it cannot be migrated.
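
Concretely, relative to the posted patch's loop (sketch):

	if (trylock_page(page)) {
		/*
		 * zs_page_migrate() takes this page's lock first, so a
		 * subpage we already hold locked can't be replaced out
		 * from under us; the locked prefix of the list is stable
		 * and never needs to be unwound.
		 */
		curr_page = page;
	}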

>         } while ((page = get_next_page(page)) != NULL);
>  }

You can't use get_next_page() outside of the migrate lock.

> 
> @@ -1987,7 +2006,7 @@ static void async_free_zspage(struct work_struct *work)
> 
>         list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
>                 list_del(&zspage->list);
> -               lock_zspage(zspage);
> +               lock_zspage(pool, zspage);
> 
>                 get_zspage_mapping(zspage, &class_idx, &fullness);
>                 VM_BUG_ON(fullness != ZS_EMPTY);

Sultan


* Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration
  2022-05-11 21:45       ` Sultan Alsawaf
@ 2022-05-11 23:11         ` Minchan Kim
  0 siblings, 0 replies; 10+ messages in thread
From: Minchan Kim @ 2022-05-11 23:11 UTC (permalink / raw)
  To: Sultan Alsawaf
  Cc: stable, Nitin Gupta, Sergey Senozhatsky, Andrew Morton, linux-mm,
	linux-kernel

On Wed, May 11, 2022 at 02:45:30PM -0700, Sultan Alsawaf wrote:
> On Wed, May 11, 2022 at 02:07:19PM -0700, Minchan Kim wrote:
> > Then, how about this?
> 
> Your proposal is completely wrong still. My original patch is fine; we can stick
> with that.
> 
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 9152fbde33b5..2f205c18aee4 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1716,12 +1716,31 @@ static enum fullness_group putback_zspage(struct size_class *class,
> >   * To prevent zspage destroy during migration, zspage freeing should
> >   * hold locks of all pages in the zspage.
> >   */
> > -static void lock_zspage(struct zspage *zspage)
> > +static void lock_zspage(struct zs_pool *pool, struct zspage *zspage)
> >  {
> > -       struct page *page = get_first_page(zspage);
> > -
> > +       struct page *page;
> > +       int nr_locked;
> > +       struct page *locked_pages[ZS_MAX_PAGES_PER_ZSPAGE];
> > +       struct address_space *mapping;
> > +retry:
> > +       nr_locked = 0;
> > +       memset(locked_pages, 0, sizeof(struct page) * ARRAY_SIZE(locked_pages));
> 
> This memset() zeroes out memory past the end of the array because it is an array
> of pointers, not an array of page structs; the sizeof() is incorrect.
> 
> > +       page = get_first_page(zspage);
> 
> You can't use get_first_page() outside of the migrate lock.
> 
> >         do {
> >                 lock_page(page);
> 
> You can't lock a page that you don't own.

That's the key point where my idea was wrong.

Thanks for the correction!


* Re: [PATCH] zsmalloc: Fix races between asynchronous zspage free and page migration
  2022-05-11 20:43     ` Andrew Morton
@ 2022-05-11 23:12       ` Minchan Kim
  0 siblings, 0 replies; 10+ messages in thread
From: Minchan Kim @ 2022-05-11 23:12 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Sultan Alsawaf, stable, Nitin Gupta, Sergey Senozhatsky,
	linux-mm, linux-kernel

On Wed, May 11, 2022 at 01:43:22PM -0700, Andrew Morton wrote:
> On Wed, 11 May 2022 12:50:20 -0700 Sultan Alsawaf <sultan@kerneltoast.com> wrote:
> 
> > > Shouldn't the fix be Fixes: 77ff465799c6 ("zsmalloc: zs_page_migrate: skip
> > > unnecessary loops but not return -EBUSY if zspage is not inuse")?
> > > Because we didn't migrate ZS_EMPTY pages before.
> > 
> > Hi,
> > 
> > Yeah, 77ff465799c6 indeed seems like the commit that introduced the bug.
> 
> I updated the changelog, thanks.

Thanks, Andrew.

Feel free to include my

Acked-by: Minchan Kim <minchan@kernel.org>

