Date: Tue, 14 Apr 2020 12:36:58 -0400
From: Johannes Weiner
To: Alex Shi
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
 hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
 yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com,
 Michal Hocko, Vladimir Davydov, Roman Gushchin, Chris Down,
 Thomas Gleixner, Vlastimil Babka, Qian Cai, Andrey Ryabinin,
 "Kirill A. Shutemov", Jérôme Glisse, Andrea Arcangeli, David Rientjes,
 "Aneesh Kumar K.V", swkhack, "Potyra, Stefan", Mike Rapoport,
 Stephen Rothwell, Colin Ian King, Jason Gunthorpe, Mauro Carvalho Chehab,
 Peng Fan, Nikolay Borisov, Ira Weiny, Kirill Tkhai, Yafang Shao, Wei Yang
Subject: Re: [PATCH v8 03/10] mm/lru: replace pgdat lru_lock with lruvec lock
Message-ID: <20200414163658.GB136578@cmpxchg.org>
In-Reply-To: <42d5c2cb-3019-993f-eba7-33a1d69ef699@linux.alibaba.com>
References: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com>
 <1579143909-156105-4-git-send-email-alex.shi@linux.alibaba.com>
 <20200116215222.GA64230@cmpxchg.org>
 <20200413180725.GA99267@cmpxchg.org>
 <42d5c2cb-3019-993f-eba7-33a1d69ef699@linux.alibaba.com>

On Tue, Apr 14, 2020 at 04:19:01PM +0800, Alex Shi wrote:
> 
> On 2020/4/14 2:07 AM, Johannes Weiner wrote:
> > But isolation actually needs to lock out charging, or it would operate
> > on the wrong list:
> >
> > isolation:                             commit_charge:
> > if (TestClearPageLRU(page))
> >                                        page->mem_cgroup = new
> >                                        // page is still physically on
> >                                        // the root_mem_cgroup's LRU. We're
> >                                        // updating the wrong list:
> > memcg = page->mem_cgroup
> > spin_lock(memcg->lru_lock)
> > del_page_from_lru_list(page, memcg)
> > spin_unlock(memcg->lru_lock)
> >
> > lrucare really is a mess. Even before this patch series, it makes
> > things tricky and subtle and error prone.
> >
> > The only reason we're doing it is for when there is swapping without
> > swap tracking, in which case swap readahead needs to put pages on the
> > LRU but cannot charge them until we have a faulting vma later.
> >
> > But it's not clear how practical such a configuration is.
> > Both memory and swap are shared resources, and isolation isn't really
> > effective when you restrict access to memory but then let workloads
> > swap freely.
> >
> > Plus, the overhead of tracking is tiny - 512k per G of swap (0.04%).
> >
> > Maybe we should just delete MEMCG_SWAP and unconditionally track swap
> > entry ownership when the memory controller is enabled. I don't see a
> > good reason not to, and it would simplify the entire swapin path, the
> > LRU locking, and the page->mem_cgroup stabilization rules.
> 
> Hi Johannes,
> 
> I think what you mean here is to keep the swap_cgroup id even after the
> page is swapped out; then when we read the page back from the swap disk,
> we don't need to charge it. That way all other memcg charging happens on
> pages that are not yet on an LRU list, so no isolation against charging
> is required in the awkward scenario above.

We don't need to change how swap recording works, we just need to
always do it when CONFIG_MEMCG && CONFIG_SWAP. We can uncharge the
page once it's swapped out.

The only difference is that with a swap record, we know who owned the
page and can charge readahead pages right away, before setting
PageLRU; whereas without a record, we read pages onto the LRU and then
wait until we hit a page fault with an mm to charge. That's why we
have this lrucare mess.