* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate [not found] ` <YW6OptglA6UykZg/@T590> @ 2021-10-20 6:43 ` Miroslav Benes 2021-10-20 7:49 ` Ming Lei 0 siblings, 1 reply; 20+ messages in thread From: Miroslav Benes @ 2021-10-20 6:43 UTC (permalink / raw) To: Ming Lei Cc: Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching On Tue, 19 Oct 2021, Ming Lei wrote: > On Tue, Oct 19, 2021 at 08:23:51AM +0200, Miroslav Benes wrote: > > > > By you only addressing the deadlock as a requirement on approach a) you are > > > > forgetting that there *may* already be present drivers which *do* implement > > > > such patterns in the kernel. I worked on addressing the deadlock because > > > > I was informed livepatching *did* have that issue as well and so very > > > > likely a generic solution to the deadlock could be beneficial to other > > > > random drivers. > > > > > > In-tree zram doesn't have such deadlock, if livepatching has such AA deadlock, > > > just fixed it, and seems it has been fixed by 3ec24776bfd0. > > > > I would not call it a fix. It is a kind of ugly workaround because the > > generic infrastructure lacked (lacks) the proper support in my opinion. > > Luis is trying to fix that. > > What is the proper support of the generic infrastructure? I am not > familiar with livepatching's model(especially with module unload), you mean > livepatching have to do the following way from sysfs: > > 1) during module exit: > > mutex_lock(lp_lock); > kobject_put(lp_kobj); > mutex_unlock(lp_lock); > > 2) show()/store() method of attributes of lp_kobj > > mutex_lock(lp_lock) > ... > mutex_unlock(lp_lock) Yes, this was exactly the case. 
We then reworked it a lot (see 958ef1e39d24 ("livepatch: Simplify API by
removing registration step")), so now the call sequence is different.
kobject_put() is basically offloaded to a workqueue scheduled right from
the store() method. Meaning that Luis's work would probably not help us
currently, but on the other hand the issues with the AA deadlock were one
of the main drivers of the redesign (if I remember correctly). There were
other reasons too, as the changelog of the commit describes.

So, from my perspective, if there was a way to easily synchronize between
a data cleanup from the module_exit callback and sysfs/kernfs operations,
it could spare people many headaches.

> IMO, the above usage simply caused AA deadlock. Even in Luis's patch
> 'zram: fix crashes with cpu hotplug multistate', new/same AA deadlock
> (hot_remove_store() vs. disksize_store() or reset_store()) is added
> because hot_remove_store() isn't called from module_exit().
>
> Luis tries to delay unloading module until all show()/store() are done. But
> that can be obtained by the following way simply during module_exit():
>
> 	kobject_del(lp_kobj);	//all pending store()/show() from lp_kobj are done,
> 				//no new store()/show() can come after
> 				//kobject_del() returns
> 	mutex_lock(lp_lock);
> 	kobject_put(lp_kobj);
> 	mutex_unlock(lp_lock);

kobject_del() already calls kobject_put(). Did you mean __kobject_del()?
That one is internal though.

> Or can you explain your requirement on kobject/module unload in a bit
> details?

Does the above make sense?

Thanks

Miroslav

^ permalink raw reply	[flat|nested] 20+ messages in thread
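The two orderings discussed above can be contrasted in a kernel-style sketch (illustrative only, not compiled here; `lp_kobj` and `lp_lock` are the placeholder names used in the mail, not real livepatch identifiers):

```c
/* Deadlock-prone: module_exit() holds lp_lock while dropping the last
 * reference.  If kobject_put() ends up waiting for an in-flight store()
 * that is itself blocked on lp_lock, both sides wait forever (the AA
 * deadlock discussed in the thread). */
static void __exit lp_exit_bad(void)
{
	mutex_lock(&lp_lock);
	kobject_put(lp_kobj);		/* may wait on a blocked store() */
	mutex_unlock(&lp_lock);
}

/* Ming's suggested ordering: delete the sysfs entries first, without
 * the lock.  kobject_del() returns only after pending show()/store()
 * callbacks finish and it prevents new ones, so taking the lock for
 * the final put is then safe. */
static void __exit lp_exit_ok(void)
{
	kobject_del(lp_kobj);		/* drains and blocks show()/store() */
	mutex_lock(&lp_lock);
	kobject_put(lp_kobj);		/* now only drops the reference */
	mutex_unlock(&lp_lock);
}
```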
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-10-20 6:43 ` [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate Miroslav Benes @ 2021-10-20 7:49 ` Ming Lei 2021-10-20 8:19 ` Miroslav Benes 0 siblings, 1 reply; 20+ messages in thread From: Ming Lei @ 2021-10-20 7:49 UTC (permalink / raw) To: Miroslav Benes Cc: Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching, ming.lei On Wed, Oct 20, 2021 at 08:43:37AM +0200, Miroslav Benes wrote: > On Tue, 19 Oct 2021, Ming Lei wrote: > > > On Tue, Oct 19, 2021 at 08:23:51AM +0200, Miroslav Benes wrote: > > > > > By you only addressing the deadlock as a requirement on approach a) you are > > > > > forgetting that there *may* already be present drivers which *do* implement > > > > > such patterns in the kernel. I worked on addressing the deadlock because > > > > > I was informed livepatching *did* have that issue as well and so very > > > > > likely a generic solution to the deadlock could be beneficial to other > > > > > random drivers. > > > > > > > > In-tree zram doesn't have such deadlock, if livepatching has such AA deadlock, > > > > just fixed it, and seems it has been fixed by 3ec24776bfd0. > > > > > > I would not call it a fix. It is a kind of ugly workaround because the > > > generic infrastructure lacked (lacks) the proper support in my opinion. > > > Luis is trying to fix that. > > > > What is the proper support of the generic infrastructure? 
I am not
> > familiar with livepatching's model (especially with module unload), you mean
> > livepatching have to do the following way from sysfs:
> >
> > 1) during module exit:
> >
> > 	mutex_lock(lp_lock);
> > 	kobject_put(lp_kobj);
> > 	mutex_unlock(lp_lock);
> >
> > 2) show()/store() method of attributes of lp_kobj
> >
> > 	mutex_lock(lp_lock)
> > 	...
> > 	mutex_unlock(lp_lock)
>
> Yes, this was exactly the case. We then reworked it a lot (see
> 958ef1e39d24 ("livepatch: Simplify API by removing registration step")), so
> now the call sequence is different. kobject_put() is basically offloaded
> to a workqueue scheduled right from the store() method. Meaning that
> Luis's work would probably not help us currently, but on the other hand
> the issues with AA deadlock were one of the main drivers of the redesign
> (if I remember correctly). There were other reasons too as the changelog
> of the commit describes.
>
> So, from my perspective, if there was a way to easily synchronize between
> a data cleanup from module_exit callback and sysfs/kernfs operations, it
> could spare people many headaches.

kobject_del() is supposed to do so, but you can't hold a shared lock
which is required in the show()/store() methods. Once kobject_del()
returns, there are no pending show()/store() calls any more.

The question is why a shared lock is required for livepatching to
delete the kobject. What are you protecting when you delete one kobject?

> > IMO, the above usage simply caused AA deadlock. Even in Luis's patch
> > 'zram: fix crashes with cpu hotplug multistate', new/same AA deadlock
> > (hot_remove_store() vs. disksize_store() or reset_store()) is added
> > because hot_remove_store() isn't called from module_exit().
> >
> > Luis tries to delay unloading module until all show()/store() are done. But
> > that can be obtained by the following way simply during module_exit():
> >
> > 	kobject_del(lp_kobj);	//all pending store()/show() from lp_kobj are done,
> > 				//no new store()/show() can come after
> > 				//kobject_del() returns
> > 	mutex_lock(lp_lock);
> > 	kobject_put(lp_kobj);
> > 	mutex_unlock(lp_lock);
>
> kobject_del() already calls kobject_put(). Did you mean __kobject_del()?
> That one is internal though.

kobject_del() is the counterpart of kobject_add(), and kobject_put() will
call kobject_del() automatically if the kobject isn't deleted yet, but
usually kobject_put() is for releasing the object only. It is more common
to release a kobject by calling kobject_del() and then kobject_put().

> > Or can you explain your requirement on kobject/module unload in a bit
> > details?
>
> Does the above make sense?

I think the focus now is the shared lock between kobject_del() and the
show()/store() methods of the kobject's attributes.

Thanks,
Ming

^ permalink raw reply	[flat|nested] 20+ messages in thread
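The pairing of primitives Ming describes can be summarized in a kernel-style sketch (illustrative, not compiled here; `teardown()` and `kobj` are generic names, not from any particular driver):

```c
/* Lifetime pairing of the kobject primitives as described above:
 *
 *   kobject_init() / kobject_add()  --  set up the refcount / sysfs entry
 *   kobject_del()                   --  counterpart of kobject_add():
 *                                       removes the sysfs entry and waits
 *                                       for in-flight show()/store()
 *   kobject_put()                   --  drops a reference; the ktype's
 *                                       release() runs at refcount zero
 *
 * kobject_put() calls kobject_del() itself if the kobject was added but
 * never deleted, but the usual explicit teardown is del first, then put: */
static void teardown(struct kobject *kobj)
{
	kobject_del(kobj);	/* no show()/store() after this returns */
	kobject_put(kobj);	/* drop the initial reference */
}
```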
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-10-20 7:49 ` Ming Lei @ 2021-10-20 8:19 ` Miroslav Benes 2021-10-20 8:28 ` Greg KH 2021-10-20 10:09 ` Ming Lei 0 siblings, 2 replies; 20+ messages in thread From: Miroslav Benes @ 2021-10-20 8:19 UTC (permalink / raw) To: Ming Lei Cc: Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching On Wed, 20 Oct 2021, Ming Lei wrote: > On Wed, Oct 20, 2021 at 08:43:37AM +0200, Miroslav Benes wrote: > > On Tue, 19 Oct 2021, Ming Lei wrote: > > > > > On Tue, Oct 19, 2021 at 08:23:51AM +0200, Miroslav Benes wrote: > > > > > > By you only addressing the deadlock as a requirement on approach a) you are > > > > > > forgetting that there *may* already be present drivers which *do* implement > > > > > > such patterns in the kernel. I worked on addressing the deadlock because > > > > > > I was informed livepatching *did* have that issue as well and so very > > > > > > likely a generic solution to the deadlock could be beneficial to other > > > > > > random drivers. > > > > > > > > > > In-tree zram doesn't have such deadlock, if livepatching has such AA deadlock, > > > > > just fixed it, and seems it has been fixed by 3ec24776bfd0. > > > > > > > > I would not call it a fix. It is a kind of ugly workaround because the > > > > generic infrastructure lacked (lacks) the proper support in my opinion. > > > > Luis is trying to fix that. > > > > > > What is the proper support of the generic infrastructure? 
I am not > > > familiar with livepatching's model(especially with module unload), you mean > > > livepatching have to do the following way from sysfs: > > > > > > 1) during module exit: > > > > > > mutex_lock(lp_lock); > > > kobject_put(lp_kobj); > > > mutex_unlock(lp_lock); > > > > > > 2) show()/store() method of attributes of lp_kobj > > > > > > mutex_lock(lp_lock) > > > ... > > > mutex_unlock(lp_lock) > > > > Yes, this was exactly the case. We then reworked it a lot (see > > 958ef1e39d24 ("livepatch: Simplify API by removing registration step"), so > > now the call sequence is different. kobject_put() is basically offloaded > > to a workqueue scheduled right from the store() method. Meaning that > > Luis's work would probably not help us currently, but on the other hand > > the issues with AA deadlock were one of the main drivers of the redesign > > (if I remember correctly). There were other reasons too as the changelog > > of the commit describes. > > > > So, from my perspective, if there was a way to easily synchronize between > > a data cleanup from module_exit callback and sysfs/kernfs operations, it > > could spare people many headaches. > > kobject_del() is supposed to do so, but you can't hold a shared lock > which is required in show()/store() method. Once kobject_del() returns, > no pending show()/store() any more. > > The question is that why one shared lock is required for livepatching to > delete the kobject. What are you protecting when you delete one kobject? I think it boils down to the fact that we embed kobject statically to structures which livepatch uses to maintain data. That is discouraged generally, but all the attempts to implement it correctly were utter failures. Miroslav ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-10-20 8:19 ` Miroslav Benes @ 2021-10-20 8:28 ` Greg KH 2021-10-25 9:58 ` Miroslav Benes 2021-10-20 10:09 ` Ming Lei 1 sibling, 1 reply; 20+ messages in thread From: Greg KH @ 2021-10-20 8:28 UTC (permalink / raw) To: Miroslav Benes Cc: Ming Lei, Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching On Wed, Oct 20, 2021 at 10:19:27AM +0200, Miroslav Benes wrote: > On Wed, 20 Oct 2021, Ming Lei wrote: > > > On Wed, Oct 20, 2021 at 08:43:37AM +0200, Miroslav Benes wrote: > > > On Tue, 19 Oct 2021, Ming Lei wrote: > > > > > > > On Tue, Oct 19, 2021 at 08:23:51AM +0200, Miroslav Benes wrote: > > > > > > > By you only addressing the deadlock as a requirement on approach a) you are > > > > > > > forgetting that there *may* already be present drivers which *do* implement > > > > > > > such patterns in the kernel. I worked on addressing the deadlock because > > > > > > > I was informed livepatching *did* have that issue as well and so very > > > > > > > likely a generic solution to the deadlock could be beneficial to other > > > > > > > random drivers. > > > > > > > > > > > > In-tree zram doesn't have such deadlock, if livepatching has such AA deadlock, > > > > > > just fixed it, and seems it has been fixed by 3ec24776bfd0. > > > > > > > > > > I would not call it a fix. It is a kind of ugly workaround because the > > > > > generic infrastructure lacked (lacks) the proper support in my opinion. > > > > > Luis is trying to fix that. > > > > > > > > What is the proper support of the generic infrastructure? 
I am not > > > > familiar with livepatching's model(especially with module unload), you mean > > > > livepatching have to do the following way from sysfs: > > > > > > > > 1) during module exit: > > > > > > > > mutex_lock(lp_lock); > > > > kobject_put(lp_kobj); > > > > mutex_unlock(lp_lock); > > > > > > > > 2) show()/store() method of attributes of lp_kobj > > > > > > > > mutex_lock(lp_lock) > > > > ... > > > > mutex_unlock(lp_lock) > > > > > > Yes, this was exactly the case. We then reworked it a lot (see > > > 958ef1e39d24 ("livepatch: Simplify API by removing registration step"), so > > > now the call sequence is different. kobject_put() is basically offloaded > > > to a workqueue scheduled right from the store() method. Meaning that > > > Luis's work would probably not help us currently, but on the other hand > > > the issues with AA deadlock were one of the main drivers of the redesign > > > (if I remember correctly). There were other reasons too as the changelog > > > of the commit describes. > > > > > > So, from my perspective, if there was a way to easily synchronize between > > > a data cleanup from module_exit callback and sysfs/kernfs operations, it > > > could spare people many headaches. > > > > kobject_del() is supposed to do so, but you can't hold a shared lock > > which is required in show()/store() method. Once kobject_del() returns, > > no pending show()/store() any more. > > > > The question is that why one shared lock is required for livepatching to > > delete the kobject. What are you protecting when you delete one kobject? > > I think it boils down to the fact that we embed kobject statically to > structures which livepatch uses to maintain data. That is discouraged > generally, but all the attempts to implement it correctly were utter > failures. Sounds like this is the real problem that needs to be fixed. kobjects should always control the lifespan of the structure they are embedded in. 
If not, then that is a design flaw of the user of the kobject :( Where in the kernel is this happening? And where have been the attempts to fix this up? thanks, greg k-h ^ permalink raw reply [flat|nested] 20+ messages in thread
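For reference, the lifetime rule Greg states — the kobject controlling the lifespan of the structure it is embedded in — corresponds to the standard dynamic-allocation pattern (a generic sketch; `struct foo` and its helpers are made-up names, not livepatch code):

```c
/* The kobject is embedded in a dynamically allocated structure, and the
 * structure is freed only from the ktype's release() callback, i.e. when
 * the last reference is dropped.  Nothing else ever frees it, so sysfs
 * callbacks can never touch freed memory. */
struct foo {
	struct kobject kobj;
	int data;
};

static void foo_release(struct kobject *kobj)
{
	struct foo *foo = container_of(kobj, struct foo, kobj);

	kfree(foo);			/* lifetime ends with the last ref */
}

static struct kobj_type foo_ktype = {
	.release   = foo_release,
	.sysfs_ops = &kobj_sysfs_ops,
};

static struct foo *foo_create(struct kobject *parent)
{
	struct foo *foo = kzalloc(sizeof(*foo), GFP_KERNEL);

	if (!foo)
		return NULL;
	if (kobject_init_and_add(&foo->kobj, &foo_ktype, parent, "foo")) {
		kobject_put(&foo->kobj);	/* release() frees foo */
		return NULL;
	}
	return foo;
}
```

A statically defined kobject cannot follow this pattern, which is exactly the tension in the livepatch discussion: there is nothing for release() to free, and the memory belongs to the module image rather than to the refcount.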
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-10-20 8:28 ` Greg KH @ 2021-10-25 9:58 ` Miroslav Benes 0 siblings, 0 replies; 20+ messages in thread From: Miroslav Benes @ 2021-10-25 9:58 UTC (permalink / raw) To: Greg KH Cc: Ming Lei, Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching, pmladek On Wed, 20 Oct 2021, Greg KH wrote: > On Wed, Oct 20, 2021 at 10:19:27AM +0200, Miroslav Benes wrote: > > On Wed, 20 Oct 2021, Ming Lei wrote: > > > > > On Wed, Oct 20, 2021 at 08:43:37AM +0200, Miroslav Benes wrote: > > > > On Tue, 19 Oct 2021, Ming Lei wrote: > > > > > > > > > On Tue, Oct 19, 2021 at 08:23:51AM +0200, Miroslav Benes wrote: > > > > > > > > By you only addressing the deadlock as a requirement on approach a) you are > > > > > > > > forgetting that there *may* already be present drivers which *do* implement > > > > > > > > such patterns in the kernel. I worked on addressing the deadlock because > > > > > > > > I was informed livepatching *did* have that issue as well and so very > > > > > > > > likely a generic solution to the deadlock could be beneficial to other > > > > > > > > random drivers. > > > > > > > > > > > > > > In-tree zram doesn't have such deadlock, if livepatching has such AA deadlock, > > > > > > > just fixed it, and seems it has been fixed by 3ec24776bfd0. > > > > > > > > > > > > I would not call it a fix. It is a kind of ugly workaround because the > > > > > > generic infrastructure lacked (lacks) the proper support in my opinion. > > > > > > Luis is trying to fix that. > > > > > > > > > > What is the proper support of the generic infrastructure? 
I am not > > > > > familiar with livepatching's model(especially with module unload), you mean > > > > > livepatching have to do the following way from sysfs: > > > > > > > > > > 1) during module exit: > > > > > > > > > > mutex_lock(lp_lock); > > > > > kobject_put(lp_kobj); > > > > > mutex_unlock(lp_lock); > > > > > > > > > > 2) show()/store() method of attributes of lp_kobj > > > > > > > > > > mutex_lock(lp_lock) > > > > > ... > > > > > mutex_unlock(lp_lock) > > > > > > > > Yes, this was exactly the case. We then reworked it a lot (see > > > > 958ef1e39d24 ("livepatch: Simplify API by removing registration step"), so > > > > now the call sequence is different. kobject_put() is basically offloaded > > > > to a workqueue scheduled right from the store() method. Meaning that > > > > Luis's work would probably not help us currently, but on the other hand > > > > the issues with AA deadlock were one of the main drivers of the redesign > > > > (if I remember correctly). There were other reasons too as the changelog > > > > of the commit describes. > > > > > > > > So, from my perspective, if there was a way to easily synchronize between > > > > a data cleanup from module_exit callback and sysfs/kernfs operations, it > > > > could spare people many headaches. > > > > > > kobject_del() is supposed to do so, but you can't hold a shared lock > > > which is required in show()/store() method. Once kobject_del() returns, > > > no pending show()/store() any more. > > > > > > The question is that why one shared lock is required for livepatching to > > > delete the kobject. What are you protecting when you delete one kobject? > > > > I think it boils down to the fact that we embed kobject statically to > > structures which livepatch uses to maintain data. That is discouraged > > generally, but all the attempts to implement it correctly were utter > > failures. > > Sounds like this is the real problem that needs to be fixed. 
kobjects > should always control the lifespan of the structure they are embedded > in. If not, then that is a design flaw of the user of the kobject :( Right, and you've already told us. A couple of times. For example here https://lore.kernel.org/all/20190502074230.GA27847@kroah.com/ :) > Where in the kernel is this happening? And where have been the attempts > to fix this up? include/linux/livepatch.h and kernel/livepatch/core.c. See klp_{patch,object,func}. It took some archeology, but I think https://lore.kernel.org/all/1464018848-4303-1-git-send-email-pmladek@suse.com/ is it. Petr might correct me. It was long before we added some important features to the code, so it might be even more difficult today. It resurfaced later when Tobin tried to fix some of kobject call sites in the kernel... https://lore.kernel.org/all/20190430001534.26246-1-tobin@kernel.org/ https://lore.kernel.org/all/20190430233803.GB10777@eros.localdomain/ https://lore.kernel.org/all/20190502023142.20139-6-tobin@kernel.org/ There are probably more references. Anyway, the current code works fine (well, one could argue about that). If someone wants to take a (another) stab at this, then why not, but it seemed like a rabbit hole without a substantial gain in the past. On the other hand, we currently misuse the API to some extent. /me scratches head Miroslav ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-10-20 8:19 ` Miroslav Benes 2021-10-20 8:28 ` Greg KH @ 2021-10-20 10:09 ` Ming Lei 2021-10-26 8:48 ` Petr Mladek 1 sibling, 1 reply; 20+ messages in thread From: Ming Lei @ 2021-10-20 10:09 UTC (permalink / raw) To: Miroslav Benes Cc: Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching On Wed, Oct 20, 2021 at 10:19:27AM +0200, Miroslav Benes wrote: > On Wed, 20 Oct 2021, Ming Lei wrote: > > > On Wed, Oct 20, 2021 at 08:43:37AM +0200, Miroslav Benes wrote: > > > On Tue, 19 Oct 2021, Ming Lei wrote: > > > > > > > On Tue, Oct 19, 2021 at 08:23:51AM +0200, Miroslav Benes wrote: > > > > > > > By you only addressing the deadlock as a requirement on approach a) you are > > > > > > > forgetting that there *may* already be present drivers which *do* implement > > > > > > > such patterns in the kernel. I worked on addressing the deadlock because > > > > > > > I was informed livepatching *did* have that issue as well and so very > > > > > > > likely a generic solution to the deadlock could be beneficial to other > > > > > > > random drivers. > > > > > > > > > > > > In-tree zram doesn't have such deadlock, if livepatching has such AA deadlock, > > > > > > just fixed it, and seems it has been fixed by 3ec24776bfd0. > > > > > > > > > > I would not call it a fix. It is a kind of ugly workaround because the > > > > > generic infrastructure lacked (lacks) the proper support in my opinion. > > > > > Luis is trying to fix that. > > > > > > > > What is the proper support of the generic infrastructure? 
I am not > > > > familiar with livepatching's model(especially with module unload), you mean > > > > livepatching have to do the following way from sysfs: > > > > > > > > 1) during module exit: > > > > > > > > mutex_lock(lp_lock); > > > > kobject_put(lp_kobj); > > > > mutex_unlock(lp_lock); > > > > > > > > 2) show()/store() method of attributes of lp_kobj > > > > > > > > mutex_lock(lp_lock) > > > > ... > > > > mutex_unlock(lp_lock) > > > > > > Yes, this was exactly the case. We then reworked it a lot (see > > > 958ef1e39d24 ("livepatch: Simplify API by removing registration step"), so > > > now the call sequence is different. kobject_put() is basically offloaded > > > to a workqueue scheduled right from the store() method. Meaning that > > > Luis's work would probably not help us currently, but on the other hand > > > the issues with AA deadlock were one of the main drivers of the redesign > > > (if I remember correctly). There were other reasons too as the changelog > > > of the commit describes. > > > > > > So, from my perspective, if there was a way to easily synchronize between > > > a data cleanup from module_exit callback and sysfs/kernfs operations, it > > > could spare people many headaches. > > > > kobject_del() is supposed to do so, but you can't hold a shared lock > > which is required in show()/store() method. Once kobject_del() returns, > > no pending show()/store() any more. > > > > The question is that why one shared lock is required for livepatching to > > delete the kobject. What are you protecting when you delete one kobject? > > I think it boils down to the fact that we embed kobject statically to > structures which livepatch uses to maintain data. That is discouraged > generally, but all the attempts to implement it correctly were utter > failures. OK, then it isn't one common usage, in which kobject covers the release of the external object. What is the exact kobject in livepatching? 
But kobject_del() won't release the kobject, so you shouldn't need the
lock to delete the kobject first. After the kobject is deleted, no
show()/store() can run any more; isn't that the synchronization [1] you
expected?

Thanks,
Ming

^ permalink raw reply	[flat|nested] 20+ messages in thread
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-10-20 10:09 ` Ming Lei @ 2021-10-26 8:48 ` Petr Mladek 2021-10-26 15:37 ` Ming Lei 0 siblings, 1 reply; 20+ messages in thread From: Petr Mladek @ 2021-10-26 8:48 UTC (permalink / raw) To: Ming Lei Cc: Miroslav Benes, Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching On Wed 2021-10-20 18:09:51, Ming Lei wrote: > On Wed, Oct 20, 2021 at 10:19:27AM +0200, Miroslav Benes wrote: > > On Wed, 20 Oct 2021, Ming Lei wrote: > > > > > On Wed, Oct 20, 2021 at 08:43:37AM +0200, Miroslav Benes wrote: > > > > On Tue, 19 Oct 2021, Ming Lei wrote: > > > > > > > > > On Tue, Oct 19, 2021 at 08:23:51AM +0200, Miroslav Benes wrote: > > > > > > > > By you only addressing the deadlock as a requirement on approach a) you are > > > > > > > > forgetting that there *may* already be present drivers which *do* implement > > > > > > > > such patterns in the kernel. I worked on addressing the deadlock because > > > > > > > > I was informed livepatching *did* have that issue as well and so very > > > > > > > > likely a generic solution to the deadlock could be beneficial to other > > > > > > > > random drivers. > > > > > > > > > > > > > > In-tree zram doesn't have such deadlock, if livepatching has such AA deadlock, > > > > > > > just fixed it, and seems it has been fixed by 3ec24776bfd0. > > > > > > > > > > > > I would not call it a fix. It is a kind of ugly workaround because the > > > > > > generic infrastructure lacked (lacks) the proper support in my opinion. > > > > > > Luis is trying to fix that. > > > > > > > > > > What is the proper support of the generic infrastructure? 
I am not > > > > > familiar with livepatching's model(especially with module unload), you mean > > > > > livepatching have to do the following way from sysfs: > > > > > > > > > > 1) during module exit: > > > > > > > > > > mutex_lock(lp_lock); > > > > > kobject_put(lp_kobj); > > > > > mutex_unlock(lp_lock); > > > > > > > > > > 2) show()/store() method of attributes of lp_kobj > > > > > > > > > > mutex_lock(lp_lock) > > > > > ... > > > > > mutex_unlock(lp_lock) > > > > > > > > Yes, this was exactly the case. We then reworked it a lot (see > > > > 958ef1e39d24 ("livepatch: Simplify API by removing registration step"), so > > > > now the call sequence is different. kobject_put() is basically offloaded > > > > to a workqueue scheduled right from the store() method. Meaning that > > > > Luis's work would probably not help us currently, but on the other hand > > > > the issues with AA deadlock were one of the main drivers of the redesign > > > > (if I remember correctly). There were other reasons too as the changelog > > > > of the commit describes. > > > > > > > > So, from my perspective, if there was a way to easily synchronize between > > > > a data cleanup from module_exit callback and sysfs/kernfs operations, it > > > > could spare people many headaches. > > > > > > kobject_del() is supposed to do so, but you can't hold a shared lock > > > which is required in show()/store() method. Once kobject_del() returns, > > > no pending show()/store() any more. > > > > > > The question is that why one shared lock is required for livepatching to > > > delete the kobject. What are you protecting when you delete one kobject? > > > > I think it boils down to the fact that we embed kobject statically to > > structures which livepatch uses to maintain data. That is discouraged > > generally, but all the attempts to implement it correctly were utter > > failures. > > OK, then it isn't one common usage, in which kobject covers the release > of the external object. 
> > What is the exact kobject in livepatching?

Below are more details about the livepatch code. I hope that it will
help you to see if zram has similar problems or not.

We have a kobject in three structures: klp_func, klp_object, and
klp_patch, see include/linux/livepatch.h.

These structures have to be statically defined in the module sources
because they define what is livepatched, see
samples/livepatch/livepatch-sample.c

The kobject is used there to show information about the patch, patched
objects, and patched functions, in sysfs. And most importantly,
the sysfs interface can be used to disable the livepatch.

The problem with static structures is that the module must stay
in memory as long as the sysfs interface exists. It could be
solved in the module_exit() callback, which could wait until the sysfs
interface is destroyed.

The kobject API does not support this scenario. The release()
callbacks are called asynchronously. The API expects that the kobject
is bundled in a dynamically allocated structure. As a result, the sysfs
interface can be removed even after the module removal.

The livepatching code might create the dynamic structures by duplicating
the structures defined in the module statically. It might save us
some headaches with the kobject release. But it would also need extra
code that would need to be maintained. The structures contain strings
that need to be duplicated and later freed...

> But kobject_del() won't release the kobject, so you shouldn't need the
> lock to delete the kobject first. After the kobject is deleted, no
> show()/store() can run any more; isn't that the synchronization [1] you
> expected?

The livepatch code never called kobject_del() under a lock. It would
cause the obvious deadlock.

The historic code only waited in the module_exit() callback until the
sysfs interface was removed. That changed in the commit
958ef1e39d24d6cb8bf2a740 ("livepatch: Simplify API by removing
registration step"). Now a livepatch can never get enabled again after
it has been disabled. The sysfs interface is removed when the livepatch
gets disabled. The module can be removed only after the sysfs interface
is destroyed, see the module_put() in klp_free_patch_finish().

The livepatch code uses a workqueue because the livepatch can be disabled
via the sysfs interface. It obviously could not wait until the sysfs
interface is removed from within the sysfs write() callback that
triggered the removal.

HTH,
Petr

^ permalink raw reply	[flat|nested] 20+ messages in thread
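Petr's description of the deferred teardown can be sketched roughly as follows (hypothetical identifiers; the real code in kernel/livepatch/core.c differs in detail, so treat this only as an illustration of the shape of the flow):

```c
/* Sketch of the flow described above: the sysfs store() that disables
 * the patch cannot wait for its own sysfs entry to be removed, so the
 * final kobject_put()/module_put() are deferred to a workqueue. */
static void lp_free_work_fn(struct work_struct *work)
{
	struct lp_patch *patch = container_of(work, struct lp_patch,
					      free_work);

	kobject_put(&patch->kobj);	/* tears down sysfs; may sleep */
	module_put(patch->mod);		/* only now may the module go */
}

static ssize_t enabled_store(struct kobject *kobj,
			     struct kobj_attribute *attr,
			     const char *buf, size_t count)
{
	struct lp_patch *patch = container_of(kobj, struct lp_patch, kobj);

	/* ... parse buf, transition the patch to disabled ... */

	/* Cannot kobject_put() here: we are running from this kobject's
	 * own sysfs file and would wait for ourselves.  Defer instead. */
	schedule_work(&patch->free_work);
	return count;
}
```

This is why Miroslav says Luis's synchronization work would not directly help the current livepatch code: the put is no longer on the module_exit() path at all.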
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-10-26 8:48 ` Petr Mladek @ 2021-10-26 15:37 ` Ming Lei 2021-10-26 17:01 ` Luis Chamberlain ` (2 more replies) 0 siblings, 3 replies; 20+ messages in thread From: Ming Lei @ 2021-10-26 15:37 UTC (permalink / raw) To: Petr Mladek Cc: Miroslav Benes, Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching, ming.lei On Tue, Oct 26, 2021 at 10:48:18AM +0200, Petr Mladek wrote: > On Wed 2021-10-20 18:09:51, Ming Lei wrote: > > On Wed, Oct 20, 2021 at 10:19:27AM +0200, Miroslav Benes wrote: > > > On Wed, 20 Oct 2021, Ming Lei wrote: > > > > > > > On Wed, Oct 20, 2021 at 08:43:37AM +0200, Miroslav Benes wrote: > > > > > On Tue, 19 Oct 2021, Ming Lei wrote: > > > > > > > > > > > On Tue, Oct 19, 2021 at 08:23:51AM +0200, Miroslav Benes wrote: > > > > > > > > > By you only addressing the deadlock as a requirement on approach a) you are > > > > > > > > > forgetting that there *may* already be present drivers which *do* implement > > > > > > > > > such patterns in the kernel. I worked on addressing the deadlock because > > > > > > > > > I was informed livepatching *did* have that issue as well and so very > > > > > > > > > likely a generic solution to the deadlock could be beneficial to other > > > > > > > > > random drivers. > > > > > > > > > > > > > > > > In-tree zram doesn't have such deadlock, if livepatching has such AA deadlock, > > > > > > > > just fixed it, and seems it has been fixed by 3ec24776bfd0. > > > > > > > > > > > > > > I would not call it a fix. It is a kind of ugly workaround because the > > > > > > > generic infrastructure lacked (lacks) the proper support in my opinion. > > > > > > > Luis is trying to fix that. > > > > > > > > > > > > What is the proper support of the generic infrastructure? 
I am not > > > > > > familiar with livepatching's model(especially with module unload), you mean > > > > > > livepatching have to do the following way from sysfs: > > > > > > > > > > > > 1) during module exit: > > > > > > > > > > > > mutex_lock(lp_lock); > > > > > > kobject_put(lp_kobj); > > > > > > mutex_unlock(lp_lock); > > > > > > > > > > > > 2) show()/store() method of attributes of lp_kobj > > > > > > > > > > > > mutex_lock(lp_lock) > > > > > > ... > > > > > > mutex_unlock(lp_lock) > > > > > > > > > > Yes, this was exactly the case. We then reworked it a lot (see > > > > > 958ef1e39d24 ("livepatch: Simplify API by removing registration step"), so > > > > > now the call sequence is different. kobject_put() is basically offloaded > > > > > to a workqueue scheduled right from the store() method. Meaning that > > > > > Luis's work would probably not help us currently, but on the other hand > > > > > the issues with AA deadlock were one of the main drivers of the redesign > > > > > (if I remember correctly). There were other reasons too as the changelog > > > > > of the commit describes. > > > > > > > > > > So, from my perspective, if there was a way to easily synchronize between > > > > > a data cleanup from module_exit callback and sysfs/kernfs operations, it > > > > > could spare people many headaches. > > > > > > > > kobject_del() is supposed to do so, but you can't hold a shared lock > > > > which is required in show()/store() method. Once kobject_del() returns, > > > > no pending show()/store() any more. > > > > > > > > The question is that why one shared lock is required for livepatching to > > > > delete the kobject. What are you protecting when you delete one kobject? > > > > > > I think it boils down to the fact that we embed kobject statically to > > > structures which livepatch uses to maintain data. That is discouraged > > > generally, but all the attempts to implement it correctly were utter > > > failures. 
> > > > OK, then it isn't one common usage, in which kobject covers the release > > of the external object. What is the exact kobject in livepatching? > > Below are more details about the livepatch code. I hope that it will > help you to see if zram has similar problems or not. > > We have kobject in three structures: klp_func, klp_object, and > klp_patch, see include/linux/livepatch.h. > > These structures have to be statically defined in the module sources > because they define what is livepatched, see > samples/livepatch/livepatch-sample.c > > The kobject is used there to show information about the patch, patched > objects, and patched functions, in sysfs. And most importantly, > the sysfs interface can be used to disable the livepatch. > > The problem with static structures is that the module must stay > in memory as long as the sysfs interface exists. It can be > solved in the module_exit() callback. It could wait until the sysfs > interface is destroyed. > > kobject API does not support this scenario. The release() callbacks kobject_delete() is for supporting this scenario; that is why we don't need to grab the module refcnt before calling show()/store() of the kobject's attributes. kobject_delete() can be called in module_exit(); all pending show()/store() calls will have completed by the time kobject_delete() returns. > are called asynchronously. It expects that the structure is bundled > in a dynamically allocated structure. As a result, the sysfs > interface can be removed even after the module removal. That would be a bug; otherwise the store()/show() methods could be called after the module is unloaded. > > The livepatching might create the dynamic structures by duplicating > the structures defined in the module statically. It might save us > some headaches with kobject release. But it would also need extra code > that would need to be maintained. The structures contain strings > that need to be duplicated and later freed...
> > > > But kobject_del() won't release the kobject, you shouldn't need the lock > > to delete the kobject first. After the kobject is deleted, no show() or > > store() can run any more; isn't that the sync[1] you expected? > > Livepatch code never called kobject_del() under a lock. It would cause > the obvious deadlock. The historic code only waited in the > module_exit() callback until the sysfs interface was removed. OK, then Luis shouldn't consider livepatching as one such issue to solve with one generic solution. > > It has changed in the commit 958ef1e39d24d6cb8bf2a740 ("livepatch: > Simplify API by removing registration step"). The livepatch can > never get enabled again once it has been disabled. The sysfs interface > is removed when the livepatch gets disabled. The module can > be removed only after the sysfs interface is destroyed, see > the module_put() in klp_free_patch_finish(). OK, that is livepatching's implementation: all the kobjects are deleted and freed after the livepatch module is disabled. That looks like a kill-me operation rather than a plain disable, so this isn't a normal usage; scsi has a similar sysfs delete interface. Also, kobjects can't be removed directly from enable's store(), since that could deadlock; it looks like a workqueue has to be used here to avoid the deadlock. BTW, what is the livepatching module use model? try_module_get() is called in klp_init_patch_early()<-klp_enable_patch()<-module_init(), and module_put() is called in klp_free_patch_finish(), which seems to be called only after 'echo 0 > /sys/kernel/livepatch/$lp_mod/enabled'. Usually, when the module isn't used, module_exit() gets a chance to be called by userspace rmmod, and then all kobjects created in the module can be deleted in module_exit(). > > The livepatch code uses a workqueue because the livepatch can be > disabled via the sysfs interface. It obviously could not wait until > the sysfs interface is removed in the sysfs write() callback > that triggered the removal.
If klp_free_patch_* is moved into module_exit() and enable's store() is not allowed to kill kobjects, all kobjects can be deleted in module_exit(); then wait_for_completion(patch->finish) may be removed, and the workqueue isn't required for the async cleanup. Thanks, Ming ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-10-26 15:37 ` Ming Lei @ 2021-10-26 17:01 ` Luis Chamberlain 2021-10-27 11:57 ` Miroslav Benes 0 siblings, 1 reply; 20+ messages in thread From: Luis Chamberlain @ 2021-10-26 17:01 UTC (permalink / raw) To: Ming Lei, Julia Lawall Cc: Petr Mladek, Miroslav Benes, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching On Tue, Oct 26, 2021 at 11:37:30PM +0800, Ming Lei wrote: > On Tue, Oct 26, 2021 at 10:48:18AM +0200, Petr Mladek wrote: > > Livepatch code never called kobject_del() under a lock. It would cause > > the obvious deadlock. Never? > > The historic code only waited in the > > module_exit() callback until the sysfs interface was removed. > > OK, then Luis shouldn't consider livepatching as one such issue to solve > with one generic solution. It's not what I was told when the deadlock was found with zram, so I was informed quite the contrary. I'm working on a generic coccinelle patch which hunts for actual cases using iteration (a feature of coccinelle for complex searches). The search is pretty involved, so I don't think I'll have an answer to this soon. Since the question of how generic this deadlock is remains open, I think it makes sense to put the generic deadlock fix off the table for now, and address this once we have a more concrete search with coccinelle. But to say we *don't* have drivers which can cause this is obviously wrong as well, from a cursory search so far. But let's wait and see how big this list actually is. I'll drop the generic deadlock fixes and move on with at least a starter set of kernfs / sysfs tests. Luis ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-10-26 17:01 ` Luis Chamberlain @ 2021-10-27 11:57 ` Miroslav Benes 2021-10-27 14:27 ` Luis Chamberlain 2021-11-02 15:24 ` Petr Mladek 0 siblings, 2 replies; 20+ messages in thread From: Miroslav Benes @ 2021-10-27 11:57 UTC (permalink / raw) To: Luis Chamberlain Cc: Ming Lei, Julia Lawall, Petr Mladek, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching On Tue, 26 Oct 2021, Luis Chamberlain wrote: > On Tue, Oct 26, 2021 at 11:37:30PM +0800, Ming Lei wrote: > > On Tue, Oct 26, 2021 at 10:48:18AM +0200, Petr Mladek wrote: > > > Livepatch code never called kobject_del() under a lock. It would cause > > > the obvious deadlock. > > Never? kobject_put() to be precise. When I started working on the support for module/live patch removal, calling kobject_put() under our klp_mutex lock was the obvious first choice given how the code was structured, but I ran into problems with deadlocks immediately. So it was changed to an async approach with the workqueue. Thus the mainline code has never suffered from this, but we knew about the issues. > > > The historic code only waited in the > > > module_exit() callback until the sysfs interface was removed. > > > > OK, then Luis shouldn't consider livepatching as one such issue to solve > > with one generic solution. > > It's not what I was told when the deadlock was found with zram, so I was > informed quite the contrary. From my perspective, it is quite easy to get it wrong due to either a lack of generic support, or missing rules/documentation. So if this thread leads to a strict "do not share locks between a module removal and a sysfs operation" rule, it would at least be something. In the same manner as Luis proposed to document try_module_get() expectations.
> I'm working on a generic coccinelle patch which hunts for actual cases > using iteration (a feature of coccinelle for complex searches). The > search is pretty involved, so I don't think I'll have an answer to this > soon. > > Since the question of how generic this deadlock is remains questionable, > I think it makes sense to put the generic deadlock fix off the table for > now, and we address this once we have a more concrete search with > coccinelle. > > But to say we *don't* have drivers which can cause this is obviously > wrong as well, from a cursory search so far. But let's wait and see how > big this list actually is. > > I'll drop the deadlock generic fixes and move on with at least a starter > kernfs / sysfs tests. It makes sense to me. Thanks, Luis, for pursuing it. Miroslav ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-10-27 11:57 ` Miroslav Benes @ 2021-10-27 14:27 ` Luis Chamberlain 2021-11-02 15:24 ` Petr Mladek 1 sibling, 0 replies; 20+ messages in thread From: Luis Chamberlain @ 2021-10-27 14:27 UTC (permalink / raw) To: Miroslav Benes Cc: Ming Lei, Julia Lawall, Petr Mladek, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching On Wed, Oct 27, 2021 at 01:57:40PM +0200, Miroslav Benes wrote: > On Tue, 26 Oct 2021, Luis Chamberlain wrote: > > > On Tue, Oct 26, 2021 at 11:37:30PM +0800, Ming Lei wrote: > > > OK, then Luis shouldn't consider livepatching as one such issue to solve > > > with one generic solution. > > > > It's not what I was told when the deadlock was found with zram, so I was > > informed quite the contrary. > > From my perspective, it is quite easy to get it wrong due to either a lack > of generic support, or missing rules/documentation. Indeed. I agree some level of guidance is needed, even if subtle, rather than tribal knowledge. I'll start off with the test_sysfs demo'ing what not to do and documenting this there. I don't think it makes sense to formalize documentation yet for "thou shalt not do this" generically until a full-depth search is done with Coccinelle. > So if this thread > leads to "do not share locks between a module removal and a sysfs > operation" strict rule, it would be at least something. I think that's where we are at. I'll wait for my coccinelle deadlock hunt patch to complete the full search, and that could be useful to *warn* about new use cases, so as to prevent this deadlock in the future. Until then I agree that the complexity introduced is not worth it given the evidence of users, but the full evidence of actual users still remains to be determined.
A perfect job left to advances with Coccinelle. > In the same > manner as Luis proposed to document try_module_get() expectations. Right, and so sysfs ops using try_module_get() *still* remain safe, and so I will keep that patch in my next iteration because there *are* *many* use cases for that. Luis ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-10-27 11:57 ` Miroslav Benes 2021-10-27 14:27 ` Luis Chamberlain @ 2021-11-02 15:24 ` Petr Mladek 2021-11-02 16:25 ` Luis Chamberlain 1 sibling, 1 reply; 20+ messages in thread From: Petr Mladek @ 2021-11-02 15:24 UTC (permalink / raw) To: Miroslav Benes Cc: Luis Chamberlain, Ming Lei, Julia Lawall, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching On Wed 2021-10-27 13:57:40, Miroslav Benes wrote: > On Tue, 26 Oct 2021, Luis Chamberlain wrote: > > > On Tue, Oct 26, 2021 at 11:37:30PM +0800, Ming Lei wrote: > > > On Tue, Oct 26, 2021 at 10:48:18AM +0200, Petr Mladek wrote: > > > > Livepatch code never called kobject_del() under a lock. It would cause > > > > the obvious deadlock. I have to correct myself. IMHO, the deadlock is far from obvious. I always get lost in the code, and the documentation is not clear. > > > > Never? > > kobject_put() to be precise. IMHO, the problem is actually with kobject_del(), which gets blocked until the sysfs interface gets removed. kobject_put() will have the same problem only when the cleanup is not delayed. > When I started working on the support for module/live patches removal, > calling kobject_put() under our klp_mutex lock was the obvious first > choice given how the code was structured, but I ran into problems with > deadlocks immediately. So it was changed to async approach with the > workqueue. Thus the mainline code has never suffered from this, but we > knew about the issues. > > > > The historic code only waited in the > > > > module_exit() callback until the sysfs interface was removed. > > > > > > OK, then Luis shouldn't consider livepatching as one such issue to solve > > > with one generic solution.
> > > > It's not what I was told when the deadlock was found with zram, so I was > > informed quite the contrary. > > From my perspective, it is quite easy to get it wrong due to either a lack > of generic support, or missing rules/documentation. So if this thread > leads to "do not share locks between a module removal and a sysfs > operation" strict rule, it would be at least something. In the same > manner as Luis proposed to document try_module_get() expectations. The rule "do not share locks between a module removal and a sysfs operation" is not clear to me. IMHO, there are the following rules: 1. rule: kobject_del() or kobject_put() must not be called under a lock that is used by store()/show() callbacks. reason: kobject_del() waits until the sysfs interface is destroyed. It has to wait until all store()/show() callbacks are finished. 2. rule: kobject_del()/kobject_put() must not be called from the related store() callbacks. reason: same as in 1st rule. 3. rule: module_exit() must wait until all release() callbacks are called when kobjects are static. reason: kobject_put() must be called to clean up internal dependencies. The cleanup might be done asynchronously and needs access to the kobject structure. Best Regards, Petr PS: I am sorry if I am messing things up. I want to be sure that we are all talking about the same thing and understand it the same way. ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-11-02 15:24 ` Petr Mladek @ 2021-11-02 16:25 ` Luis Chamberlain 2021-11-03 0:01 ` Ming Lei 0 siblings, 1 reply; 20+ messages in thread From: Luis Chamberlain @ 2021-11-02 16:25 UTC (permalink / raw) To: Petr Mladek Cc: Miroslav Benes, Ming Lei, Julia Lawall, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching On Tue, Nov 02, 2021 at 04:24:06PM +0100, Petr Mladek wrote: > On Wed 2021-10-27 13:57:40, Miroslav Benes wrote: > > From my perspective, it is quite easy to get it wrong due to either a lack > > of generic support, or missing rules/documentation. So if this thread > > leads to "do not share locks between a module removal and a sysfs > > operation" strict rule, it would be at least something. In the same > > manner as Luis proposed to document try_module_get() expectations. > > The rule "do not share locks between a module removal and a sysfs > operation" is not clear to me. That's exactly it. It *is* not. The test_sysfs selftest will hopefully help with this. But I'll wait to take a final position on whether or not a generic fix should be merged until the Coccinelle patch which looks for all use cases completes. So I think that once that Coccinelle hunt is done for the deadlock, we should also remind folks of the potential deadlock and some of the rules you mentioned below so that if we take a position that we don't support this, we at least inform developers why and what to avoid. If Coccinelle finds quite a few cases, then perhaps the generic fix might be worth evaluating. > IMHO, there are the following rules: > > 1. rule: kobject_del() or kobject_put() must not be called under a lock that > is used by store()/show() callbacks.
> > reason: kobject_del() waits until the sysfs interface is destroyed. > It has to wait until all store()/show() callbacks are finished. Right, this is what actually started this entire conversation. Note that as Ming pointed out, the generic kernfs fix I proposed would only cover the case when kobject_del() ends up being called on module exit, so it would not cover the cases where perhaps kobject_del() might be called outside of module exit, and so the scope of the possible deadlock then increases. Likewise, the Coccinelle hunt I'm trying would only cover the module exit case. I'm a bit afraid of the complexity of a generic hunt as expressed in rule 1. > > 2. rule: kobject_del()/kobject_put() must not be called from the > related store() callbacks. > > reason: same as in 1st rule. Sensible corollary. Given that the exact kobject_del() / kobject_put() which must not be called from the respective sysfs ops depends on which kobject is underneath the device for which the sysfs ops is being created, it would make this hunt in Coccinelle a bit tricky. My current iteration of a coccinelle hunt cheats and looks at any sysfs-looking op and ensures a module exit exists. > 3. rule: module_exit() must wait until all release() callbacks are called > when kobjects are static. > > reason: kobject_put() must be called to clean up internal > dependencies. The cleanup might be done asynchronously > and needs access to the kobject structure. This might be an easier rule to implement a Coccinelle check for. Luis ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-11-02 16:25 ` Luis Chamberlain @ 2021-11-03 0:01 ` Ming Lei 2021-11-03 12:44 ` Luis Chamberlain 0 siblings, 1 reply; 20+ messages in thread From: Ming Lei @ 2021-11-03 0:01 UTC (permalink / raw) To: Luis Chamberlain Cc: Petr Mladek, Miroslav Benes, Julia Lawall, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching, ming.lei On Tue, Nov 02, 2021 at 09:25:44AM -0700, Luis Chamberlain wrote: > On Tue, Nov 02, 2021 at 04:24:06PM +0100, Petr Mladek wrote: > > On Wed 2021-10-27 13:57:40, Miroslav Benes wrote: > > > >From my perspective, it is quite easy to get it wrong due to either a lack > > > of generic support, or missing rules/documentation. So if this thread > > > leads to "do not share locks between a module removal and a sysfs > > > operation" strict rule, it would be at least something. In the same > > > manner as Luis proposed to document try_module_get() expectations. > > > > The rule "do not share locks between a module removal and a sysfs > > operation" is not clear to me. > > That's exactly it. It *is* not. The test_sysfs selftest will hopefully > help with this. But I'll wait to take a final position on whether or not > a generic fix should be merged until the Coccinelle patch which looks > for all uses cases completes. > > So I think that once that Coccinelle hunt is done for the deadlock, we > should also remind folks of the potential deadlock and some of the rules > you mentioned below so that if we take a position that we don't support > this, we at least inform developers why and what to avoid. If Coccinelle > finds quite a bit of cases, then perhaps evaluating the generic fix > might be worth evaluating. > > > IMHO, there are the following rules: > > > > 1. 
rule: kobject_del() or kobject_put() must not be called under a lock that > > is used by store()/show() callbacks. > > > > reason: kobject_del() waits until the sysfs interface is destroyed. > > It has to wait until all store()/show() callbacks are finished. > > Right, this is what actually started this entire conversation. > > Note that as Ming pointed out, the generic kernfs fix I proposed would > only cover the case when kobject_del() ends up being called on module > exit, so it would not cover the cases where perhaps kobject_del() might > be called outside of module exit, and so the scope of the possible > deadlock then increases. > > Likewise, the Coccinelle hunt I'm trying would only cover the module > exit case. I'm a bit afraid of the complexity of a generic hunt > as expressed in rule 1. The question is why one shared lock is required between kobject_del() and its show()/store(); neither zram nor livepatch needs that. Is it a common usage? > > > > > 2. rule: kobject_del()/kobject_put() must not be called from the > > related store() callbacks. > > > > reason: same as in 1st rule. > > Sensible corollary. > > Given that the exact kobject_del() / kobject_put() which must not be > called from the respective sysfs ops depends on which kobject is > underneath the device for which the sysfs ops is being created, > it would make this hunt in Coccinelle a bit tricky. My current iteration > of a coccinelle hunt cheats and looks at any sysfs-looking op and > ensures a module exit exists. Actually kernfs/sysfs provides an interface for deleting the kobject/attr from the attr's own show()/store(); see the example of sdev_store_delete(), and the livepatch example: https://lore.kernel.org/lkml/20211102145932.3623108-4-ming.lei@redhat.com/ > > > 3. rule: module_exit() must wait until all release() callbacks are called > > when kobjects are static. > > > > reason: kobject_put() must be called to clean up internal > > dependencies.
The cleanup might be done asynchronously > > and needs access to the kobject structure. > > This might be an easier rule to implement a Coccinelle check for. If kobject_del() is done in module_exit() or before module_exit(), the kobject should have been freed in module_exit() via kobject_put(). But yes, it can happen asynchronously because of CONFIG_DEBUG_KOBJECT_RELEASE; that seems like a real issue. Thanks, Ming ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-11-03 0:01 ` Ming Lei @ 2021-11-03 12:44 ` Luis Chamberlain 0 siblings, 0 replies; 20+ messages in thread From: Luis Chamberlain @ 2021-11-03 12:44 UTC (permalink / raw) To: Ming Lei Cc: Petr Mladek, Miroslav Benes, Julia Lawall, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching On Wed, Nov 03, 2021 at 08:01:45AM +0800, Ming Lei wrote: > On Tue, Nov 02, 2021 at 09:25:44AM -0700, Luis Chamberlain wrote: > > On Tue, Nov 02, 2021 at 04:24:06PM +0100, Petr Mladek wrote: > > > On Wed 2021-10-27 13:57:40, Miroslav Benes wrote: > > > > >From my perspective, it is quite easy to get it wrong due to either a lack > > > > of generic support, or missing rules/documentation. So if this thread > > > > leads to "do not share locks between a module removal and a sysfs > > > > operation" strict rule, it would be at least something. In the same > > > > manner as Luis proposed to document try_module_get() expectations. > > > > > > The rule "do not share locks between a module removal and a sysfs > > > operation" is not clear to me. > > > > That's exactly it. It *is* not. The test_sysfs selftest will hopefully > > help with this. But I'll wait to take a final position on whether or not > > a generic fix should be merged until the Coccinelle patch which looks > > for all uses cases completes. > > > > So I think that once that Coccinelle hunt is done for the deadlock, we > > should also remind folks of the potential deadlock and some of the rules > > you mentioned below so that if we take a position that we don't support > > this, we at least inform developers why and what to avoid. If Coccinelle > > finds quite a bit of cases, then perhaps evaluating the generic fix > > might be worth evaluating. 
> > > > > IMHO, there are the following rules: > > > > > > 1. rule: kobject_del() or kobject_put() must not be called under a lock that > > > is used by store()/show() callbacks. > > > > > > reason: kobject_del() waits until the sysfs interface is destroyed. > > > It has to wait until all store()/show() callbacks are finished. > > > > Right, this is what actually started this entire conversation. > > > > Note that as Ming pointed out, the generic kernfs fix I proposed would > > only cover the case when kobject_del() ends up being called on module > > exit, so it would not cover the cases where perhaps kobject_del() might > > be called outside of module exit, and so the scope of the possible > > deadlock then increases. > > > > Likewise, the Coccinelle hunt I'm trying would only cover the module > > exit case. I'm a bit afraid of the complexity of a generic hunt > > as expressed in rule 1. > > The question is why one shared lock is required between kobject_del() > and its show()/store(); neither zram nor livepatch needs that. Is it > a common usage? That is the question the coccinelle hunt is aimed at answering. Answering that in the context of module removal is easier than the generic case. But also note that I had mentioned before that we have semantics to check *when* we're in the module removal case, and as such can address that case. For the other cases we have no possible semantics to be able to address a generic fix. I tried though; refer to my reply in this thread and to the new kobject_being_removed() I'm adding: https://lkml.kernel.org/r/YWdMpv8lAFYtc18c@bombadil.infradead.org So we have semantics for knowing when we're about to remove a module, but my attempt with kobject_being_removed() isn't sufficient to address this generically. In either case, having a gauge of how common this is, either on module removal or generally, would be wonderful. It is easier to answer the question from a module removal perspective though. > > > 2.
rule: kobject_del()/kobject_put() must not be called from the > > > related store() callbacks. > > > > > > reason: same as in 1st rule. > > > > Sensible corollary. > > > > Given that the exact kobject_del() / kobject_put() which must not be > > called from the respective sysfs ops depends on which kobject is > > underneath the device for which the sysfs ops is being created, > > it would make this hunt in Coccinelle a bit tricky. My current iteration > > of a coccinelle hunt cheats and looks at any sysfs-looking op and > > ensures a module exit exists. > > Actually kernfs/sysfs provides an interface for deleting the > kobject/attr from the attr's own show()/store(); see the example of > sdev_store_delete(), and the livepatch example: > > https://lore.kernel.org/lkml/20211102145932.3623108-4-ming.lei@redhat.com/ Imagine that... is that the suicidal thing? > > > 3. rule: module_exit() must wait until all release() callbacks are called > > > when kobjects are static. > > > > > > reason: kobject_put() must be called to clean up internal > > > dependencies. The cleanup might be done asynchronously > > > and needs access to the kobject structure. > > > > This might be an easier rule to implement a Coccinelle check for. > > If kobject_del() is done in module_exit() or before module_exit(), > the kobject should have been freed in module_exit() via kobject_put(). > > But yes, it can happen asynchronously because of CONFIG_DEBUG_KOBJECT_RELEASE; > that seems like a real issue. Alright, thanks for confirming. Luis ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-10-26 15:37 ` Ming Lei 2021-10-26 17:01 ` Luis Chamberlain @ 2021-10-27 11:42 ` Miroslav Benes 2021-11-02 14:15 ` Petr Mladek 2 siblings, 0 replies; 20+ messages in thread From: Miroslav Benes @ 2021-10-27 11:42 UTC (permalink / raw) To: Ming Lei Cc: Petr Mladek, Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching > > > > The livepatch code uses workqueue because the livepatch can be > > disabled via sysfs interface. It obviously could not wait until > > the sysfs interface is removed in the sysfs write() callback > > that triggered the removal. > > If klp_free_patch_* is moved into module_exit() and not let enable > store() to kill kobjects, all kobjects can be deleted in module_exit(), > then wait_for_completion(patch->finish) may be removed, also wq isn't > required for the async cleanup. It sounds like a nice cleanup. If we combine kobject_del() to prevent any show()/store() accesses and free everything later in module_exit(), it could work. If I am not missing something around how we maintain internal lists of live patches and their modules. Thanks Miroslav ^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-10-26 15:37 ` Ming Lei 2021-10-26 17:01 ` Luis Chamberlain 2021-10-27 11:42 ` Miroslav Benes @ 2021-11-02 14:15 ` Petr Mladek 2021-11-02 14:51 ` Petr Mladek 2021-11-02 14:56 ` Ming Lei 2 siblings, 2 replies; 20+ messages in thread From: Petr Mladek @ 2021-11-02 14:15 UTC (permalink / raw) To: Ming Lei Cc: Miroslav Benes, Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching On Tue 2021-10-26 23:37:30, Ming Lei wrote: > On Tue, Oct 26, 2021 at 10:48:18AM +0200, Petr Mladek wrote: > > Below are more details about the livepatch code. I hope that it will > > help you to see if zram has similar problems or not. > > > > We have kobject in three structures: klp_func, klp_object, and > > klp_patch, see include/linux/livepatch.h. > > > > These structures have to be statically defined in the module sources > > because they define what is livepatched, see > > samples/livepatch/livepatch-sample.c > > > > The kobject is used there to show information about the patch, patched > > objects, and patched functions, in sysfs. And most importantly, > > the sysfs interface can be used to disable the livepatch. > > > > The problem with static structures is that the module must stay > > in memory as long as the sysfs interface exists. It can be > > solved in module_exit() callback. It could wait until the sysfs > > interface is destroyed. > > > > kobject API does not support this scenario. The release() callbacks > > kobject_delete() is for supporting this scenario, that is why we don't > need to grab module refcnt before calling show()/store() of the > kobject's attributes. > > kobject_delete() can be called in module_exit(), then any show()/store() > will be done after kobject_delete() returns. I am a bit confused.
I do not see kobject_delete() anywhere in kernel sources. I see only kobject_del() and kobject_put(). AFAIK, they do _not_ guarantee that either the sysfs interface was destroyed or the release callbacks were called. For example, see schedule_delayed_work(&kobj->release, delay) in kobject_release(). In other words, anyone could still be using either the sysfs interface or the related structures after kobject_del() or kobject_put() returns. IMHO, the kobject API does not support static structures and module removal. Best Regards, Petr ^ permalink raw reply [flat|nested] 20+ messages in thread
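[Editorial note: Petr's point — that kobject_put() can return while the actual release work is still pending on a work queue — can be illustrated with a small userspace model. This is a hedged Python sketch, not kernel code; the class and method names are invented for illustration:]

```python
import threading

class FakeKobject:
    """Models the behaviour Petr points at: put() drops the last
    reference and returns immediately, while the release work is
    deferred, like schedule_delayed_work() in kobject_release()."""
    def __init__(self):
        self.refcount = 1
        self.may_release = threading.Event()   # stands in for the work-queue delay
        self.released = threading.Event()

    def _deferred_release(self):
        self.may_release.wait()                # the delayed work has not run yet...
        self.released.set()                    # ...and only now "frees" the object

    def put(self):
        self.refcount -= 1
        if self.refcount == 0:
            threading.Thread(target=self._deferred_release, daemon=True).start()
        # put() returns here, before _deferred_release() has done anything

def demo():
    k = FakeKobject()
    k.put()
    released_when_put_returned = k.released.is_set()   # False: cleanup still pending
    k.may_release.set()                                # let the "work queue" run
    k.released.wait(timeout=5)
    return released_when_put_returned, k.released.is_set()
```

The gap between put() returning and the release running is exactly the window in which a static structure, or the module text containing it, must not be freed.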
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-11-02 14:15 ` Petr Mladek @ 2021-11-02 14:51 ` Petr Mladek 2021-11-02 15:17 ` Ming Lei 2021-11-02 14:56 ` Ming Lei 1 sibling, 1 reply; 20+ messages in thread From: Petr Mladek @ 2021-11-02 14:51 UTC (permalink / raw) To: Ming Lei Cc: Miroslav Benes, Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching On Tue 2021-11-02 15:15:19, Petr Mladek wrote: > On Tue 2021-10-26 23:37:30, Ming Lei wrote: > > On Tue, Oct 26, 2021 at 10:48:18AM +0200, Petr Mladek wrote: > > > Below are more details about the livepatch code. I hope that it will > > > help you to see if zram has similar problems or not. > > > > > > We have kobject in three structures: klp_func, klp_object, and > > > klp_patch, see include/linux/livepatch.h. > > > > > > These structures have to be statically defined in the module sources > > > because they define what is livepatched, see > > > samples/livepatch/livepatch-sample.c > > > > > > The kobject is used there to show information about the patch, patched > > > objects, and patched functions, in sysfs. And most importantly, > > > the sysfs interface can be used to disable the livepatch. > > > > > > The problem with static structures is that the module must stay > > > in the memory as long as the sysfs interface exists. It can be > > > solved in module_exit() callback. It could wait until the sysfs > > > interface is destroyed. > > > > > > kobject API does not support this scenario. The relase() callbacks > > > > kobject_delete() is for supporting this scenario, that is why we don't > > need to grab module refcnt before calling show()/store() of the > > kobject's attributes. 
> > > > kobject_delete() can be called in module_exit(), then any show()/store() > > will be done after kobject_delete() returns. > > I am a bit confused. I do not see kobject_delete() anywhere in kernel > sources. > > I see only kobject_del() and kobject_put(). AFAIK, they do _not_ > guarantee that either the sysfs interface was destroyed or > the release callbacks were called. For example, see > schedule_delayed_work(&kobj->release, delay) in kobject_release(). Grr, I always get confused by the code. kobject_del() actually waits until the sysfs interface gets destroyed. This is why there is the deadlock. But kobject_put() is _not_ synchronous. And the comment above kobject_add() repeats 3 times that kobject_put() must be called on success: * Return: If this function returns an error, kobject_put() must be * called to properly clean up the memory associated with the * object. Under no instance should the kobject that is passed * to this function be directly freed with a call to kfree(), * that can leak memory. * * If this function returns success, kobject_put() must also be called * in order to properly clean up the memory associated with the object. * * In short, once this function is called, kobject_put() MUST be called * when the use of the object is finished in order to properly free * everything. and similar text in Documentation/core-api/kobject.rst After a kobject has been registered with the kobject core successfully, it must be cleaned up when the code is finished with it. To do that, call kobject_put(). If I read the code correctly, kobject_put() calls kref_put(), which might call kobject_delayed_cleanup(). This function does a lot of things and needs to access struct kobject. > IMHO, kobject API does not support static structures and module > removal. If kobject_put() has to be called also for static structures then module_exit() must explicitly wait until the cleanup is finished.
Best Regards, Petr ^ permalink raw reply [flat|nested] 20+ messages in thread
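[Editorial note: Petr's conclusion — "module_exit() must explicitly wait until the cleanup is finished" — corresponds to the completion pattern: have the release() callback signal a completion, and wait for it after dropping the last reference. A hedged userspace sketch in Python, with threading.Event standing in for struct completion; the names are invented, not the actual kernel API:]

```python
import threading
import time

class StaticKobject:
    """A kobject embedded in a static structure, with release()
    deferred as in the kobject core; release_done plays the role
    of a struct completion that release() signals at the end."""
    def __init__(self):
        self.release_done = threading.Event()

    def _release(self):
        time.sleep(0.02)               # asynchronous cleanup (kobject_delayed_cleanup())
        self.release_done.set()        # complete(&release_done) from release()

    def put(self):
        # Drop the last reference; cleanup happens on another thread.
        threading.Thread(target=self._release, daemon=True).start()

def module_exit(obj):
    """The pattern Petr describes: drop the last reference, then
    explicitly wait until the cleanup has finished, so the static
    structure stays alive for the release handler."""
    obj.put()
    obj.release_done.wait()            # wait_for_completion(&release_done)
    return obj.release_done.is_set()
```

Only after module_exit() returns here can the memory holding the static structure (the module image, in the kernel case) safely disappear.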
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-11-02 14:51 ` Petr Mladek @ 2021-11-02 15:17 ` Ming Lei 0 siblings, 0 replies; 20+ messages in thread From: Ming Lei @ 2021-11-02 15:17 UTC (permalink / raw) To: Petr Mladek Cc: Miroslav Benes, Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching, ming.lei On Tue, Nov 02, 2021 at 03:51:33PM +0100, Petr Mladek wrote: > On Tue 2021-11-02 15:15:19, Petr Mladek wrote: > > On Tue 2021-10-26 23:37:30, Ming Lei wrote: > > > On Tue, Oct 26, 2021 at 10:48:18AM +0200, Petr Mladek wrote: > > > > Below are more details about the livepatch code. I hope that it will > > > > help you to see if zram has similar problems or not. > > > > > > > > We have kobject in three structures: klp_func, klp_object, and > > > > klp_patch, see include/linux/livepatch.h. > > > > > > > > These structures have to be statically defined in the module sources > > > > because they define what is livepatched, see > > > > samples/livepatch/livepatch-sample.c > > > > > > > > The kobject is used there to show information about the patch, patched > > > > objects, and patched functions, in sysfs. And most importantly, > > > > the sysfs interface can be used to disable the livepatch. > > > > > > > > The problem with static structures is that the module must stay > > > > in the memory as long as the sysfs interface exists. It can be > > > > solved in module_exit() callback. It could wait until the sysfs > > > > interface is destroyed. > > > > > > > > kobject API does not support this scenario. The relase() callbacks > > > > > > kobject_delete() is for supporting this scenario, that is why we don't > > > need to grab module refcnt before calling show()/store() of the > > > kobject's attributes. 
> > > > > > kobject_delete() can be called in module_exit(), then any show()/store() > > > will be done after kobject_delete() returns. > > > > I am a bit confused. I do not see kobject_delete() anywhere in kernel > > sources. > > > > I see only kobject_del() and kobject_put(). AFAIK, they do _not_ > > guarantee that either the sysfs interface was destroyed or > > the release callbacks were called. For example, see > > schedule_delayed_work(&kobj->release, delay) in kobject_release(). > > Grr, I always get confused by the code. kobject_del() actually waits > until the sysfs interface gets destroyed. This is why there is > the deadlock. Right. > > But kobject_put() is _not_ synchronous. And the comment above > kobject_add() repeat 3 times that kobject_put() must be called > on success: > > * Return: If this function returns an error, kobject_put() must be > * called to properly clean up the memory associated with the > * object. Under no instance should the kobject that is passed > * to this function be directly freed with a call to kfree(), > * that can leak memory. > * > * If this function returns success, kobject_put() must also be called > * in order to properly clean up the memory associated with the object. > * > * In short, once this function is called, kobject_put() MUST be called > * when the use of the object is finished in order to properly free > * everything. > > and similar text in Documentation/core-api/kobject.rst > > After a kobject has been registered with the kobject core successfully, it > must be cleaned up when the code is finished with it. To do that, call > kobject_put(). > > > If I read the code correctly then kobject_put() calls kref_put() > that might call kobject_delayed_cleanup(). This function does a lot > of things and need to access struct kobject. Yes, then what is the problem here wrt. kobject_put() which may not be synchronous? > > > IMHO, kobject API does not support static structures and module > > removal. 
> > If kobject_put() has to be called also for static structures then > module_exit() must explicitly wait until the clean up is finished. Right, that is exactly how the klp_patch kobject is implemented: the klp_patch kobject has to be disabled first, then the module refcnt can be dropped after the klp_patch kobject is released, and only then is module_exit() possible. Thanks, Ming ^ permalink raw reply [flat|nested] 20+ messages in thread
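[Editorial note: the ordering Ming describes — the patch kobject pins the module, and the pin is dropped only from the release handler — can be modeled the same way. A hedged Python sketch, not the actual livepatch code; all names are invented:]

```python
import threading
import time

class ModuleRef:
    """Models try_module_get()/module_put(): module_exit() can only
    finish once every reference has been dropped."""
    def __init__(self):
        self.count = 0
        self.cond = threading.Condition()
        self.exited = False

    def get(self):
        with self.cond:
            if self.exited:
                return False
            self.count += 1
            return True

    def put(self):
        with self.cond:
            self.count -= 1
            self.cond.notify_all()

    def wait_exit(self):
        with self.cond:
            while self.count:          # block until the refcnt hits zero
                self.cond.wait()
            self.exited = True

class PatchObject:
    """The kobject pins the module while it lives; the pin is dropped
    from the (asynchronous) release handler, so module_exit() cannot
    complete before release() has run."""
    def __init__(self, module):
        self.module = module
        assert self.module.get()       # pin the module
        self.release_ran = False

    def _release(self):
        time.sleep(0.02)               # asynchronous cleanup
        self.release_ran = True
        self.module.put()              # module_put() from release()

    def put(self):
        threading.Thread(target=self._release, daemon=True).start()

def demo():
    mod = ModuleRef()
    patch = PatchObject(mod)
    patch.put()
    mod.wait_exit()                    # module_exit() blocks on the refcnt
    return patch.release_ran           # release must have run by then
```
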
* Re: [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate 2021-11-02 14:15 ` Petr Mladek 2021-11-02 14:51 ` Petr Mladek @ 2021-11-02 14:56 ` Ming Lei 1 sibling, 0 replies; 20+ messages in thread From: Ming Lei @ 2021-11-02 14:56 UTC (permalink / raw) To: Petr Mladek Cc: Miroslav Benes, Luis Chamberlain, Benjamin Herrenschmidt, Paul Mackerras, tj, gregkh, akpm, minchan, jeyu, shuah, bvanassche, dan.j.williams, joe, tglx, keescook, rostedt, linux-spdx, linux-doc, linux-block, linux-fsdevel, linux-kselftest, linux-kernel, live-patching, ming.lei On Tue, Nov 02, 2021 at 03:15:15PM +0100, Petr Mladek wrote: > On Tue 2021-10-26 23:37:30, Ming Lei wrote: > > On Tue, Oct 26, 2021 at 10:48:18AM +0200, Petr Mladek wrote: > > > Below are more details about the livepatch code. I hope that it will > > > help you to see if zram has similar problems or not. > > > > > > We have kobject in three structures: klp_func, klp_object, and > > > klp_patch, see include/linux/livepatch.h. > > > > > > These structures have to be statically defined in the module sources > > > because they define what is livepatched, see > > > samples/livepatch/livepatch-sample.c > > > > > > The kobject is used there to show information about the patch, patched > > > objects, and patched functions, in sysfs. And most importantly, > > > the sysfs interface can be used to disable the livepatch. > > > > > > The problem with static structures is that the module must stay > > > in the memory as long as the sysfs interface exists. It can be > > > solved in module_exit() callback. It could wait until the sysfs > > > interface is destroyed. > > > > > > kobject API does not support this scenario. The relase() callbacks > > > > kobject_delete() is for supporting this scenario, that is why we don't > > need to grab module refcnt before calling show()/store() of the > > kobject's attributes. 
> > > > kobject_delete() can be called in module_exit(), then any show()/store() > > will be done after kobject_delete() returns. > > I am a bit confused. I do not see kobject_delete() anywhere in kernel > sources. > > I see only kobject_del() and kobject_put(). AFAIK, they do _not_ > guarantee that either the sysfs interface was destroyed or > the release callbacks were called. For example, see > schedule_delayed_work(&kobj->release, delay) in kobject_release(). After kobject_del() returns, no one can run into show()/store(), and all pending show()/store() are drained in the meantime. But yes, the release handler may still be called later, and the kobject has to be freed during or before module_exit(). https://lore.kernel.org/lkml/20211101112548.3364086-2-ming.lei@redhat.com/ > > By other words, anyone could still be using either the sysfs interface > or the related structures after kobject_del() or kobject_put() > returns. No, no one can do that after kobject_del() returns. > > IMHO, kobject API does not support static structures and module > removal. But so far klp_patch can only be defined as a static instance, and it depends on the implementation, especially the release handler. Thanks, Ming ^ permalink raw reply [flat|nested] 20+ messages in thread
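[Editorial note: the two guarantees Ming attributes to kobject_del() — no new show()/store() can start after it returns, and the ones already running are drained — amount to a mark-removed-then-drain protocol. A hedged userspace model in Python; the names are invented, and in the kernel the real draining is done by kernfs:]

```python
import threading
import time

class SysfsNode:
    """Models the removal semantics described for kobject_del():
    mark the node removed so new show()/store() callers are rejected,
    then wait until every in-flight handler has finished."""
    def __init__(self):
        self.cond = threading.Condition()
        self.active = 0
        self.removed = False

    def show(self):
        with self.cond:
            if self.removed:
                return False            # new callers cannot get in
            self.active += 1
        time.sleep(0.01)                # the attribute handler body
        with self.cond:
            self.active -= 1
            self.cond.notify_all()
        return True

    def delete(self):
        with self.cond:
            self.removed = True         # block new show()/store()
            while self.active:          # drain the pending ones
                self.cond.wait()

def demo():
    node = SysfsNode()
    readers = [threading.Thread(target=node.show) for _ in range(4)]
    for t in readers:
        t.start()
    time.sleep(0.005)                   # let some handlers get in flight
    node.delete()                       # returns only once they are drained
    drained = node.active == 0
    rejected = node.show() is False     # after delete(), show() cannot run
    for t in readers:
        t.join()
    return drained, rejected
```

Note what the model does not give you, matching Ming's caveat: delete() says nothing about when the release handler runs, so freeing the object itself still needs one of the patterns above.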
end of thread, other threads:[~2021-11-03 12:44 UTC | newest] Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- [not found] <YWeR4moCRh+ZHOmH@T590> [not found] ` <YWiSAN6xfYcUDJCb@bombadil.infradead.org> [not found] ` <YWjCpLUNPF3s4P2U@T590> [not found] ` <YWjJ0O7K+31Iz3ox@bombadil.infradead.org> [not found] ` <YWk9e957Hb+I7HvR@T590> [not found] ` <YWm68xUnAofop3PZ@bombadil.infradead.org> [not found] ` <YWq3Z++uoJ/kcp+3@T590> [not found] ` <YW3LuzaPhW96jSBK@bombadil.infradead.org> [not found] ` <YW4uwep3BCe9Vxq8@T590> [not found] ` <alpine.LSU.2.21.2110190820590.15009@pobox.suse.cz> [not found] ` <YW6OptglA6UykZg/@T590> 2021-10-20 6:43 ` [PATCH v8 11/12] zram: fix crashes with cpu hotplug multistate Miroslav Benes 2021-10-20 7:49 ` Ming Lei 2021-10-20 8:19 ` Miroslav Benes 2021-10-20 8:28 ` Greg KH 2021-10-25 9:58 ` Miroslav Benes 2021-10-20 10:09 ` Ming Lei 2021-10-26 8:48 ` Petr Mladek 2021-10-26 15:37 ` Ming Lei 2021-10-26 17:01 ` Luis Chamberlain 2021-10-27 11:57 ` Miroslav Benes 2021-10-27 14:27 ` Luis Chamberlain 2021-11-02 15:24 ` Petr Mladek 2021-11-02 16:25 ` Luis Chamberlain 2021-11-03 0:01 ` Ming Lei 2021-11-03 12:44 ` Luis Chamberlain 2021-10-27 11:42 ` Miroslav Benes 2021-11-02 14:15 ` Petr Mladek 2021-11-02 14:51 ` Petr Mladek 2021-11-02 15:17 ` Ming Lei 2021-11-02 14:56 ` Ming Lei