* + mm-zswap-use-workqueue-to-destroy-pool.patch added to -mm tree
@ 2016-04-26 23:52 akpm
  2016-04-27  5:40 ` Sergey Senozhatsky
  0 siblings, 1 reply; 3+ messages in thread
From: akpm @ 2016-04-26 23:52 UTC (permalink / raw)
  To: ddstreet, dan.streetman, yuzhao, mm-commits


The patch titled
     Subject: mm/zswap: use workqueue to destroy pool
has been added to the -mm tree.  Its filename is
     mm-zswap-use-workqueue-to-destroy-pool.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-zswap-use-workqueue-to-destroy-pool.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-zswap-use-workqueue-to-destroy-pool.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included in linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Dan Streetman <ddstreet@ieee.org>
Subject: mm/zswap: use workqueue to destroy pool

Add a work_struct to struct zswap_pool, and change __zswap_pool_empty to
use the workqueue instead of using call_rcu().

When zswap destroys a pool that is no longer in use, it uses call_rcu() to
perform the destruction/freeing.  Since the RCU callback executes in
softirq context, it must not sleep.  However, actually destroying the pool
involves freeing the per-cpu compressors (which requires taking the
cpu_add_remove_lock mutex) and freeing the zpool, and the implementation of
either step may sleep (e.g. zsmalloc calls kmem_cache_destroy(), which
takes slab_mutex).  So if either mutex is currently held, or any other
part of the compressor or zpool destruction path sleeps, the result is a
"scheduling while atomic" BUG().

It's not easy to reproduce this when changing zswap's params normally.  In
testing with a loaded system, this does not fail:

$ cd /sys/module/zswap/parameters
$ echo lz4 > compressor ; echo zsmalloc > zpool

nor does this:

$ while true ; do
> echo lzo > compressor ; echo zbud > zpool
> sleep 1
> echo lz4 > compressor ; echo zsmalloc > zpool
> sleep 1
> done

although it's still possible either of those might fail, depending on
whether anything else besides zswap has locked the mutexes.

However, changing a parameter with no delay between changes immediately
triggers the "scheduling while atomic" BUG:

$ while true ; do
> echo lzo > compressor ; echo lz4 > compressor
> done

This is essentially the same as Yu Zhao's proposed patch to zsmalloc,
but moved to zswap, to cover compressor and zpool freeing.

Fixes: f1c54846ee45 ("zswap: dynamic pool creation")
Signed-off-by: Dan Streetman <ddstreet@ieee.org>
Reported-by: Yu Zhao <yuzhao@google.com>
Cc: Dan Streetman <dan.streetman@canonical.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/zswap.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff -puN mm/zswap.c~mm-zswap-use-workqueue-to-destroy-pool mm/zswap.c
--- a/mm/zswap.c~mm-zswap-use-workqueue-to-destroy-pool
+++ a/mm/zswap.c
@@ -117,7 +117,7 @@ struct zswap_pool {
 	struct crypto_comp * __percpu *tfm;
 	struct kref kref;
 	struct list_head list;
-	struct rcu_head rcu_head;
+	struct work_struct work;
 	struct notifier_block notifier;
 	char tfm_name[CRYPTO_MAX_ALG_NAME];
 };
@@ -652,9 +652,11 @@ static int __must_check zswap_pool_get(s
 	return kref_get_unless_zero(&pool->kref);
 }
 
-static void __zswap_pool_release(struct rcu_head *head)
+static void __zswap_pool_release(struct work_struct *work)
 {
-	struct zswap_pool *pool = container_of(head, typeof(*pool), rcu_head);
+	struct zswap_pool *pool = container_of(work, typeof(*pool), work);
+
+	synchronize_rcu();
 
 	/* nobody should have been able to get a kref... */
 	WARN_ON(kref_get_unless_zero(&pool->kref));
@@ -674,7 +676,9 @@ static void __zswap_pool_empty(struct kr
 	WARN_ON(pool == zswap_pool_current());
 
 	list_del_rcu(&pool->list);
-	call_rcu(&pool->rcu_head, __zswap_pool_release);
+
+	INIT_WORK(&pool->work, __zswap_pool_release);
+	schedule_work(&pool->work);
 
 	spin_unlock(&zswap_pools_lock);
 }
_

Patches currently in -mm which might be from ddstreet@ieee.org are

mm-zpool-use-workqueue-for-zpool_destroy.patch
mm-zswap-use-workqueue-to-destroy-pool.patch



* Re: + mm-zswap-use-workqueue-to-destroy-pool.patch added to -mm tree
  2016-04-26 23:52 + mm-zswap-use-workqueue-to-destroy-pool.patch added to -mm tree akpm
@ 2016-04-27  5:40 ` Sergey Senozhatsky
  2016-04-27  8:15   ` Dan Streetman
  0 siblings, 1 reply; 3+ messages in thread
From: Sergey Senozhatsky @ 2016-04-27  5:40 UTC (permalink / raw)
  To: akpm; +Cc: ddstreet, dan.streetman, yuzhao, mm-commits, linux-kernel

Hello,

On (04/26/16 16:52), akpm@linux-foundation.org wrote:
[..]
> -static void __zswap_pool_release(struct rcu_head *head)
> +static void __zswap_pool_release(struct work_struct *work)
>  {
> -	struct zswap_pool *pool = container_of(head, typeof(*pool), rcu_head);
> +	struct zswap_pool *pool = container_of(work, typeof(*pool), work);
> +
> +	synchronize_rcu();
>  
>  	/* nobody should have been able to get a kref... */
>  	WARN_ON(kref_get_unless_zero(&pool->kref));
> @@ -674,7 +676,9 @@ static void __zswap_pool_empty(struct kr
>  	WARN_ON(pool == zswap_pool_current());
>  
>  	list_del_rcu(&pool->list);
> -	call_rcu(&pool->rcu_head, __zswap_pool_release);
> +
> +	INIT_WORK(&pool->work, __zswap_pool_release);
> +	schedule_work(&pool->work);
>  
>  	spin_unlock(&zswap_pools_lock);
>  }
> _
> 
> Patches currently in -mm which might be from ddstreet@ieee.org are
> 
> mm-zpool-use-workqueue-for-zpool_destroy.patch
> mm-zswap-use-workqueue-to-destroy-pool.patch

I think only mm-zswap-use-workqueue-to-destroy-pool.patch is
needed.

	-ss


* Re: + mm-zswap-use-workqueue-to-destroy-pool.patch added to -mm tree
  2016-04-27  5:40 ` Sergey Senozhatsky
@ 2016-04-27  8:15   ` Dan Streetman
  0 siblings, 0 replies; 3+ messages in thread
From: Dan Streetman @ 2016-04-27  8:15 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, Dan Streetman, Yu Zhao, mm-commits, linux-kernel

On Wed, Apr 27, 2016 at 1:40 AM, Sergey Senozhatsky
<sergey.senozhatsky.work@gmail.com> wrote:
> Hello,
>
> On (04/26/16 16:52), akpm@linux-foundation.org wrote:
> [..]
>> -static void __zswap_pool_release(struct rcu_head *head)
>> +static void __zswap_pool_release(struct work_struct *work)
>>  {
>> -     struct zswap_pool *pool = container_of(head, typeof(*pool), rcu_head);
>> +     struct zswap_pool *pool = container_of(work, typeof(*pool), work);
>> +
>> +     synchronize_rcu();
>>
>>       /* nobody should have been able to get a kref... */
>>       WARN_ON(kref_get_unless_zero(&pool->kref));
>> @@ -674,7 +676,9 @@ static void __zswap_pool_empty(struct kr
>>       WARN_ON(pool == zswap_pool_current());
>>
>>       list_del_rcu(&pool->list);
>> -     call_rcu(&pool->rcu_head, __zswap_pool_release);
>> +
>> +     INIT_WORK(&pool->work, __zswap_pool_release);
>> +     schedule_work(&pool->work);
>>
>>       spin_unlock(&zswap_pools_lock);
>>  }
>> _
>>
>> Patches currently in -mm which might be from ddstreet@ieee.org are
>>
>> mm-zpool-use-workqueue-for-zpool_destroy.patch
>> mm-zswap-use-workqueue-to-destroy-pool.patch
>
> I think only mm-zswap-use-workqueue-to-destroy-pool.patch is
> needed.

yep, please drop mm-zpool-use-workqueue-for-zpool_destroy.patch

thanks!

>
>         -ss

