From: "Huang, Ying"
To: Miaohe Lin
Subject: Re: [PATCH 1/5] mm/swapfile: add percpu_ref support for swap
Date: Wed, 14 Apr 2021 09:17:47 +0800
Message-ID: <874kg9u0jo.fsf@yhuang6-desk1.ccr.corp.intel.com>
In-Reply-To: <46a51c49-2887-0c1a-bcf3-e1ebe9698ebf@huawei.com> (Miaohe Lin's message of "Tue, 13 Apr 2021 20:39:24 +0800")
References: <20210408130820.48233-1-linmiaohe@huawei.com>
 <20210408130820.48233-2-linmiaohe@huawei.com>
 <87fszww55d.fsf@yhuang6-desk1.ccr.corp.intel.com>
 <87zgy4ufr3.fsf@yhuang6-desk1.ccr.corp.intel.com>
 <46a51c49-2887-0c1a-bcf3-e1ebe9698ebf@huawei.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
X-Mailing-List: linux-kernel@vger.kernel.org

Miaohe Lin writes:

> On 2021/4/12 15:24, Huang, Ying wrote:
>> "Huang, Ying" writes:
>>
>>> Miaohe Lin writes:
>>>
>>>> We will use percpu-refcount to serialize against concurrent swapoff. This
>>>> patch adds the percpu_ref support for later fixup.
>>>>
>>>> Signed-off-by: Miaohe Lin
>>>> ---
>>>>  include/linux/swap.h |  2 ++
>>>>  mm/swapfile.c        | 25 ++++++++++++++++++++++---
>>>>  2 files changed, 24 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>>> index 144727041e78..849ba5265c11 100644
>>>> --- a/include/linux/swap.h
>>>> +++ b/include/linux/swap.h
>>>> @@ -240,6 +240,7 @@ struct swap_cluster_list {
>>>>   * The in-memory structure used to track swap areas.
>>>>   */
>>>>  struct swap_info_struct {
>>>> +	struct percpu_ref users;	/* serialization against concurrent swapoff */
>>>>  	unsigned long	flags;		/* SWP_USED etc: see above */
>>>>  	signed short	prio;		/* swap priority of this type */
>>>>  	struct plist_node list;		/* entry in swap_active_head */
>>>> @@ -260,6 +261,7 @@ struct swap_info_struct {
>>>>  	struct block_device *bdev;	/* swap device or bdev of swap file */
>>>>  	struct file *swap_file;	/* seldom referenced */
>>>>  	unsigned int old_block_size;	/* seldom referenced */
>>>> +	struct completion comp;	/* seldom referenced */
>>>>  #ifdef CONFIG_FRONTSWAP
>>>>  	unsigned long *frontswap_map;	/* frontswap in-use, one bit per page */
>>>>  	atomic_t frontswap_pages;	/* frontswap pages in-use counter */
>>>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>>>> index 149e77454e3c..724173cd7d0c 100644
>>>> --- a/mm/swapfile.c
>>>> +++ b/mm/swapfile.c
>>>> @@ -39,6 +39,7 @@
>>>>  #include
>>>>  #include
>>>>  #include
>>>> +#include
>>>>
>>>>  #include
>>>>  #include
>>>> @@ -511,6 +512,15 @@ static void swap_discard_work(struct work_struct *work)
>>>>  	spin_unlock(&si->lock);
>>>>  }
>>>>
>>>> +static void swap_users_ref_free(struct percpu_ref *ref)
>>>> +{
>>>> +	struct swap_info_struct *si;
>>>> +
>>>> +	si = container_of(ref, struct swap_info_struct, users);
>>>> +	complete(&si->comp);
>>>> +	percpu_ref_exit(&si->users);
>>>
>>> Because percpu_ref_exit() is used, we cannot use percpu_ref_tryget() in
>>> get_swap_device(); better to add comments there.
>>
>> I just noticed that the comments of percpu_ref_tryget_live() say,
>>
>>   * This function is safe to call as long as @ref is between init and exit.
>>
>> Since we need to call get_swap_device() at almost any time, it's better
>> to avoid calling percpu_ref_exit() at all.  This will waste some memory,
>> but we need to follow the API definition to avoid potential issues in
>> the long term.
>
> I have to admit that I'm not really familiar with percpu_ref.  So I read
> the implementation code of percpu_ref and found that percpu_ref_tryget_live()
> could currently be called after exit.  But you're right, we need to follow
> the API definition to avoid potential issues in the long term.
>
>>
>> And we need to call percpu_ref_init() before inserting the
>> swap_info_struct into swap_info[].
>
> If we remove the call to percpu_ref_exit(), we should not use percpu_ref_init()
> here, because *percpu_ref->data is assumed to be NULL* in percpu_ref_init(),
> which is not the case when we do not call percpu_ref_exit().  Maybe
> percpu_ref_reinit() or percpu_ref_resurrect() will do the work.
>
> One more thing: how could I distinguish a killed percpu_ref from a newly
> allocated one?  It seems percpu_ref_is_dying() is only safe to call when
> @ref is between init and exit.  Maybe I could do this in alloc_swap_info()?

Yes.  In alloc_swap_info(), you can distinguish a newly allocated
swap_info_struct from a reused one.

>>
>>>> +}
>>>> +
>>>>  static void alloc_cluster(struct swap_info_struct *si, unsigned long idx)
>>>>  {
>>>>  	struct swap_cluster_info *ci = si->cluster_info;
>>>> @@ -2500,7 +2510,7 @@ static void enable_swap_info(struct swap_info_struct *p, int prio,
>>>>  	 * Guarantee swap_map, cluster_info, etc. fields are valid
>>>>  	 * between get/put_swap_device() if SWP_VALID bit is set
>>>>  	 */
>>>> -	synchronize_rcu();
>>>> +	percpu_ref_reinit(&p->users);
>>>
>>> Although the effect is the same, I think it's better to use
>>> percpu_ref_resurrect() here to improve code readability.
>>
>> Check the original commit description for commit eb085574a752 "mm, swap:
>> fix race between swapoff and some swap operations" and the discussion
>> email thread as follows again,
>>
>> https://lore.kernel.org/linux-mm/20171219053650.GB7829@linux.vnet.ibm.com/
>>
>> I found that the synchronize_rcu() here is to avoid calling smp_rmb() or
>> smp_load_acquire() in get_swap_device().  Now we will use
>> percpu_ref_tryget_live() in get_swap_device(), so we will need to add
>> the necessary memory barrier, or make sure percpu_ref_tryget_live() has
>> ACQUIRE semantics.  Per my understanding, we need to change
>> percpu_ref_tryget_live() for that.
>>
>
> Do you mean the below scenario is possible?
>
> cpu1
> swapon()
>   ...
>   percpu_ref_init
>   ...
>   setup_swap_info
>   /* smp_store_release() is inside percpu_ref_reinit */
>   percpu_ref_reinit

spin_unlock() has RELEASE semantics already.

>   ...
>
> cpu2
> get_swap_device()
>   /* ignored smp_rmb() */
>   percpu_ref_tryget_live

Some kind of ACQUIRE is required here to guarantee that the refcount is
checked before fetching the other fields of swap_info_struct.  I have
sent out an RFC patch to the mailing list to discuss this.

>   ...
>
> There is indeed a missing smp_rmb() in percpu_ref_tryget_live().  So I
> think the above scenario is possible and we should fix this.
>
>>>>  	spin_lock(&swap_lock);
>>>>  	spin_lock(&p->lock);
>>>>  	_enable_swap_info(p);
>>>> @@ -2621,11 +2631,13 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
>>>>  	p->flags &= ~SWP_VALID;		/* mark swap device as invalid */
>>>>  	spin_unlock(&p->lock);
>>>>  	spin_unlock(&swap_lock);
>>>> +
>>>> +	percpu_ref_kill(&p->users);
>>>>  	/*
>>>>  	 * wait for swap operations protected by get/put_swap_device()
>>>>  	 * to complete
>>>>  	 */
>>>> -	synchronize_rcu();
>>>> +	wait_for_completion(&p->comp);
>>>
>>> Better to move percpu_ref_kill() after the comments.  And maybe revise
>>> the comments.
>>
>> After reading the original commit description as above, I found that we
>> need synchronize_rcu() here to protect the accesses to the swap cache
>> data structure.  Because there's a call_rcu() during percpu_ref_kill(),
>> it appears OK to keep the synchronize_rcu() here.  And we need to revise
>> the comments to make it clear what is protected by which operation.
>>
>
> Per my understanding, percpu_ref->data->release is called only after the
> refcnt reaches 0, which implies that a full grace period has already
> elapsed (otherwise the refcnt won't be 0 yet).  wait_for_completion() is
> used to wait for the last refcnt to be released.  So synchronize_rcu()
> is not necessary here?

Then we will depend on the implementation of percpu_ref.  If it changes
its implementation, it may take a long time to find out that we need to
change the code here.  I guess in most cases, even adding a
synchronize_rcu() here, we still only need to wait for one grace period.
So the overhead of calling synchronize_rcu() is low here.  And the code
is easier to maintain.
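
To make the above a bit more concrete, here is a rough sketch of the
reader side as I imagine it.  This is for illustration only, not the
actual patch: the function shape is an assumption, and as discussed
above it is only correct if percpu_ref_tryget_live() gets ACQUIRE
semantics (or an explicit smp_rmb() is added).

/*
 * Sketch only: pin the swap device against concurrent swapoff via
 * si->users instead of the old RCU + SWP_VALID scheme.
 */
struct swap_info_struct *get_swap_device(swp_entry_t entry)
{
	struct swap_info_struct *si;

	if (!entry.val)
		return NULL;
	si = swp_swap_info(entry);
	if (!si)
		return NULL;

	/*
	 * Fails once swapoff() has called percpu_ref_kill(), so a
	 * successful tryget keeps swap_map, cluster_info, etc. valid
	 * until put_swap_device().  The check must be ordered before
	 * the later loads from *si (the ACQUIRE issue above).
	 */
	if (!percpu_ref_tryget_live(&si->users))
		return NULL;

	return si;
}

static inline void put_swap_device(struct swap_info_struct *si)
{
	percpu_ref_put(&si->users);
}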
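
And a similar sketch for the swapon/swapoff side, just to show how the
pieces discussed above could fit together.  The helper names below are
made up only to group the fragments, and starting the ref dead with
PERCPU_REF_INIT_DEAD plus a "new vs. reused" check in alloc_swap_info()
is my assumption about one possible shape, not the actual series.

/* release callback: deliberately no percpu_ref_exit(), so the ref stays
 * "between init and exit" and percpu_ref_tryget_live() remains legal */
static void swap_users_ref_free(struct percpu_ref *ref)
{
	struct swap_info_struct *si;

	si = container_of(ref, struct swap_info_struct, users);
	complete(&si->comp);
}

/* called from alloc_swap_info() for a brand new entry only; a reused
 * entry keeps the (currently dead) ref from its previous swapoff */
static int swap_alloc_users(struct swap_info_struct *p)
{
	if (percpu_ref_init(&p->users, swap_users_ref_free,
			    PERCPU_REF_INIT_DEAD, GFP_KERNEL))
		return -ENOMEM;
	init_completion(&p->comp);
	return 0;
}

/* enable_swap_info(): after swap_map, cluster_info, etc. are published */
static void swap_revive_users(struct swap_info_struct *p)
{
	percpu_ref_resurrect(&p->users);
}

/* swapoff(), after clearing SWP_VALID and dropping the locks */
static void swap_quiesce_users(struct swap_info_struct *p)
{
	percpu_ref_kill(&p->users);	/* new get_swap_device() now fails */
	synchronize_rcu();		/* still covers the swap cache users */
	wait_for_completion(&p->comp);	/* wait for the last reference */
}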

Best Regards,
Huang, Ying