Message-ID: <9e184cff-263a-d83a-0fc9-0ac7d453aa2a@redhat.com>
Date: Mon, 28 Mar 2022 17:20:16 -0400
Subject: Re: [PATCH-mm v3] mm/list_lru: Optimize memcg_reparent_list_lru_node()
To: Roman Gushchin
Cc: Muchun Song, Andrew Morton, Linux Memory Management List, LKML
References: <20220309144000.1470138-1-longman@redhat.com>
 <2263666d-5eef-b1fe-d5e3-b166a3185263@redhat.com>
 <5aa687c4-2888-7977-8c1a-d51384e685aa@redhat.com>
From: Waiman Long

On 3/28/22 17:12, Roman Gushchin wrote:
> On Mon, Mar 28, 2022 at 04:46:39PM -0400, Waiman Long wrote:
>> On 3/28/22 15:12, Roman Gushchin wrote:
>>> On Sun, Mar 27, 2022 at 08:57:15PM -0400, Waiman Long wrote:
>>>> On 3/22/22 22:12, Muchun Song wrote:
>>>>> On Wed, Mar 23, 2022 at 9:55 AM Waiman Long wrote:
>>>>>> On 3/22/22 21:06, Muchun Song wrote:
>>>>>>> On Wed, Mar 9, 2022 at 10:40 PM Waiman Long wrote:
>>>>>>>> Since commit 2c80cd57c743 ("mm/list_lru.c: fix list_lru_count_node()
>>>>>>>> to be race free"), we are tracking the total number of lru
>>>>>>>> entries in a list_lru_node in its nr_items field. In the case of
>>>>>>>> memcg_reparent_list_lru_node(), there is nothing to be done if nr_items
>>>>>>>> is 0. We don't even need to take the nlru->lock as no new lru entry
>>>>>>>> could be added by a racing list_lru_add() to the draining src_idx memcg
>>>>>>>> at this point.
>>>>>>> Hi Waiman,
>>>>>>>
>>>>>>> Sorry for the late reply. Quick question: what if there is an inflight
>>>>>>> list_lru_add()? How about the following race?
>>>>>>>
>>>>>>> CPU0:                               CPU1:
>>>>>>> list_lru_add()
>>>>>>>     spin_lock(&nlru->lock)
>>>>>>>     l = list_lru_from_kmem(memcg)
>>>>>>>                                     memcg_reparent_objcgs(memcg)
>>>>>>>                                     memcg_reparent_list_lrus(memcg)
>>>>>>>                                       memcg_reparent_list_lru()
>>>>>>>                                         memcg_reparent_list_lru_node()
>>>>>>>                                           if (!READ_ONCE(nlru->nr_items))
>>>>>>>                                               // Miss reparenting
>>>>>>>                                               return
>>>>>>>     // Assume 0->1
>>>>>>>     l->nr_items++
>>>>>>>     // Assume 0->1
>>>>>>>     nlru->nr_items++
>>>>>>>
>>>>>>> IIUC, we use nlru->lock to serialise this scenario.
>>>>>> I guess this race is theoretically possible but very unlikely since it
>>>>>> means a very long pause between list_lru_from_kmem() and the increment
>>>>>> of nr_items.
>>>>> It is more possible in a VM.
>>>>>
>>>>>> How about the following changes to make sure that this race can't happen?
>>>>>>
>>>>>> diff --git a/mm/list_lru.c b/mm/list_lru.c
>>>>>> index c669d87001a6..c31a0a8ad4e7 100644
>>>>>> --- a/mm/list_lru.c
>>>>>> +++ b/mm/list_lru.c
>>>>>> @@ -395,9 +395,10 @@ static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
>>>>>>          struct list_lru_one *src, *dst;
>>>>>>
>>>>>>          /*
>>>>>> -         * If there is no lru entry in this nlru, we can skip it immediately.
>>>>>> +         * If there is no lru entry in this nlru and the nlru->lock is free,
>>>>>> +         * we can skip it immediately.
>>>>>>          */
>>>>>> -        if (!READ_ONCE(nlru->nr_items))
>>>>>> +        if (!READ_ONCE(nlru->nr_items) && !spin_is_locked(&nlru->lock))
>>>>> I think we also should insert a smp_rmb() between those two loads.
>>>> Thinking about this some more, I believe that adding a spin_is_locked() check
>>>> will be enough for x86. However, that will likely not be enough for arches
>>>> with more relaxed memory semantics. So the safest way to avoid this
>>>> possible race is to move the check inside the lock critical section,
>>>> though that comes with a slightly higher overhead for the 0 nr_items case. I
>>>> will send out a patch to correct that. Thanks for bringing this possible race
>>>> to my attention.
>>> Yes, I think it's not enough:
>>>
>>> CPU0                                CPU1
>>> READ_ONCE(&nlru->nr_items) -> 0
>>>                                     spin_lock(&nlru->lock);
>>>                                     nlru->nr_items++;
>>>                                     spin_unlock(&nlru->lock);
>>> && !spin_is_locked(&nlru->lock) -> 0
>> I have actually thought of that. I am even thinking about reading nr_items
>> again after spin_is_locked(). Still, for arches with relaxed memory
>> semantics, when a memory write by one CPU will be propagated to another CPU
>> can be highly variable. It is very hard to prove that it is completely safe.
>>
>> x86 has stricter memory semantics and it is the only architecture where
>> I have enough confidence that doing the check without taking a lock can be
>> safe. Perhaps we could use this optimization just for x86 and do it inside
>> the lock for the rest.
> Hm, is this such a big problem in real life? Can you describe the setup?
> I'm somewhat resistant to the idea of having arch-specific optimizations here
> without a HUGE reason.

I am just throwing this idea out for discussion.
It does not mean that I want to do an arch-specific patch unless there is
performance data indicating a substantial gain in some use cases.

Cheers,
Longman
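
For readers following the archive, a rough sketch of the "check nr_items
under nlru->lock" direction discussed above is shown below. It is only an
illustration of the idea, not the actual follow-up patch: the reparenting
body is elided, the parameters after nid (src_idx, dst_memcg) are assumed
from the surrounding discussion, and the locking variant shown
(spin_lock_irq()) should be treated as illustrative.

/* Sketch only -- would sit in mm/list_lru.c alongside the code quoted above. */
static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
                                         int src_idx, struct mem_cgroup *dst_memcg)
{
        struct list_lru_node *nlru = &lru->node[nid];

        /*
         * Test nr_items only after taking nlru->lock.  A racing
         * list_lru_add() either completes its locked increment first
         * (so a non-zero nr_items is observed here) or waits on the
         * lock until the reparenting below has finished, so the
         * "skip if empty" shortcut cannot miss an in-flight entry.
         */
        spin_lock_irq(&nlru->lock);
        if (!nlru->nr_items)
                goto out;

        /* ... splice the src_idx lists over to dst_memcg as before ... */

out:
        spin_unlock_irq(&nlru->lock);
}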