From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 27 May 2022 10:55:30 +0800
From: Muchun Song
To: Waiman Long
Cc: Johannes Weiner, mhocko@kernel.org, roman.gushchin@linux.dev,
	shakeelb@google.com, cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com
Subject: Re: [PATCH v4 03/11] mm: memcontrol: make lruvec lock safe when LRU
	pages are reparented
References: <20220524060551.80037-1-songmuchun@bytedance.com>
 <20220524060551.80037-4-songmuchun@bytedance.com>
 <9fe57cf7-9d21-3f91-ef27-e046b426c219@redhat.com>
In-Reply-To: <9fe57cf7-9d21-3f91-ef27-e046b426c219@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, May 26, 2022 at 04:17:27PM -0400, Waiman Long wrote:
> On 5/25/22 11:38, Muchun Song wrote:
> > On Wed, May 25, 2022 at 10:48:54AM -0400, Johannes Weiner wrote:
> > > On Wed, May 25, 2022 at 09:03:59PM +0800, Muchun Song wrote:
> > > > On Wed, May 25, 2022 at 08:30:15AM -0400, Johannes Weiner wrote:
> > > > > On Wed, May 25, 2022 at 05:53:30PM +0800, Muchun Song wrote:
> > > > > > On Tue, May 24, 2022 at 03:27:20PM -0400, Johannes Weiner wrote:
> > > > > > > On Tue, May 24, 2022 at 02:05:43PM +0800, Muchun Song wrote:
> > > > > > > > @@ -1230,10 +1213,23 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
> > > > > > > >   */
> > > > > > > >  struct lruvec *folio_lruvec_lock(struct folio *folio)
> > > > > > > >  {
> > > > > > > > -	struct lruvec *lruvec = folio_lruvec(folio);
> > > > > > > > +	struct lruvec *lruvec;
> > > > > > > > +	rcu_read_lock();
> > > > > > > > +retry:
> > > > > > > > +	lruvec = folio_lruvec(folio);
> > > > > > > >  	spin_lock(&lruvec->lru_lock);
> > > > > > > > -	lruvec_memcg_debug(lruvec, folio);
> > > > > > > > +
> > > > > > > > +	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
> > > > > > > > +		spin_unlock(&lruvec->lru_lock);
> > > > > > > > +		goto retry;
> > > > > > > > +	}
> > > > > > > > +
> > > > > > > > +	/*
> > > > > > > > +	 * Preemption is disabled in the internal of spin_lock, which can serve
> > > > > > > > +	 * as RCU read-side critical sections.
> > > > > > > > +	 */
> > > > > > > > +	rcu_read_unlock();
> > > > > > > The code looks right to me, but I don't understand the comment: why do
> > > > > > > we care that the rcu read-side continues? With the lru_lock held,
> > > > > > > reparenting is on hold and the lruvec cannot be rcu-freed anyway, no?
> > > > > > >
> > > > > > Right. We could hold the rcu read lock until the end of reparenting. So you mean
> > > > > > we do rcu_read_unlock in folio_lruvec_lock()?
> > > > > The comment seems to suggest that disabling preemption is what keeps
> > > > > the lruvec alive. But it's the lru_lock that keeps it alive. The
> > > > > cgroup destruction path tries to take the lru_lock long before it even
> > > > > gets to synchronize_rcu(). Once you hold the lru_lock, having an
> > > > > implied read-side critical section as well doesn't seem to matter.
> > > > >
> > > > Well, I thought that spinlocks have implicit read-side critical sections
> > > > because they disable preemption (I learned from the comments above
> > > > synchronize_rcu() that say having interrupts, preemption, or softirqs
> > > > disabled also serves as an RCU read-side critical section). So I have a
> > > > question: is it still true in a PREEMPT_RT kernel (I am not familiar with
> > > > this)?
> > > Yes, but you're missing my point.
> > >
> > > > Should the comment be deleted?
> > > > I think we could remove the comments. If the answer to the above question
> > > > is false, it seems like we should continue holding the rcu read lock.
> > > It's true.
> > >
> > Thanks for your answer.
> >
> > > But assume it's false for a second. Why would you need to continue
> > > holding it? What would it protect? The lruvec would be pinned by the
> > > spinlock even if it DIDN'T imply an RCU lock, right?
> > >
> > > So I don't understand the point of the comment. If the implied RCU
> > > lock is protecting something not covered by the bare spinlock itself,
> > > it should be added to the comment. Otherwise, the comment should go.
> > >
> > Got it. Thanks for your nice explanation. I'll remove
> > the comment here.
>
> Note that there is a similar comment in patch 6 which may have to be removed
> as well.
>
I have noticed that. Thank you for reminding me as well.
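
For reference, a minimal sketch of how the locking helper reads once the
comment is dropped as agreed above. This restates the v4 hunk quoted
earlier with the disputed comment removed; the trailing return statement
and the lookup comment are assumed from the surrounding (unquoted)
context rather than taken from the hunk itself:

struct lruvec *folio_lruvec_lock(struct folio *folio)
{
	struct lruvec *lruvec;

	rcu_read_lock();
retry:
	/* Lockless lookup: the folio's memcg can still change under us. */
	lruvec = folio_lruvec(folio);
	spin_lock(&lruvec->lru_lock);

	/*
	 * The folio could have been reparented between the lookup and
	 * taking lru_lock; revalidate the binding and retry if it moved.
	 * Once lru_lock is held, reparenting is blocked and the binding
	 * is stable.
	 */
	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
		spin_unlock(&lruvec->lru_lock);
		goto retry;
	}

	rcu_read_unlock();

	return lruvec;
}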
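
And a hypothetical caller, just to illustrate the intended pairing.
isolate_folio_example() is invented for illustration; lruvec_del_folio()
and unlock_page_lruvec() are the existing helpers from mm_inline.h and
memcontrol.h:

/* Hypothetical caller: remove a folio from its LRU list. */
static void isolate_folio_example(struct folio *folio)
{
	struct lruvec *lruvec;

	lruvec = folio_lruvec_lock(folio);
	/*
	 * Until the unlock below, reparenting is held off by lru_lock,
	 * so the folio -> memcg -> lruvec binding is stable here.
	 */
	lruvec_del_folio(lruvec, folio);
	unlock_page_lruvec(lruvec);
}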