Subject: [RFC PATCH v2 5/5] mm: Split move_pages_to_lru into 3 separate passes
From: Alexander Duyck
To: alex.shi@linux.alibaba.com
Cc: yang.shi@linux.alibaba.com, lkp@intel.com, rong.a.chen@intel.com,
    khlebnikov@yandex-team.ru, kirill@shutemov.name, hughd@google.com,
    linux-kernel@vger.kernel.org, alexander.duyck@gmail.com,
    daniel.m.jordan@oracle.com, linux-mm@kvack.org, shakeelb@google.com,
    willy@infradead.org, hannes@cmpxchg.org, tj@kernel.org,
    cgroups@vger.kernel.org, akpm@linux-foundation.org,
    richard.weiyang@gmail.com, mgorman@techsingularity.net,
    iamjoonsoo.kim@lge.com
Date: Tue, 18 Aug 2020 21:27:38 -0700
Message-ID: <20200819042738.23414.60815.stgit@localhost.localdomain>
In-Reply-To: <20200819041852.23414.95939.stgit@localhost.localdomain>
References: <20200819041852.23414.95939.stgit@localhost.localdomain>
User-Agent: StGit/0.17.1-dirty

From: Alexander Duyck

The current move_pages_to_lru code releases the LRU lock every time it
encounters an unevictable page or a compound page that must be freed.
This results in a fair amount of code bulk because the lruvec has to be
reacquired every time the lock is released. Instead I believe we can
break the code up into 3 passes.
The first pass will identify the pages we can move to the LRU and move
those. In addition it will sort the remainder of the list, leaving the
unevictable pages in place and moving pages whose reference count has
dropped to zero to pages_to_free. The second pass will return the
unevictable pages to the LRU. The final pass will free any compound
pages in the pages_to_free list before we merge it back with the
original list and return from the function.

The advantage of doing it this way is that we only have to release the
lock between pass 1 and pass 2, and reacquire it once pass 3 has merged
pages_to_free back into the original list. As such we release the lock
at most once per call instead of testing whether we need to relock for
each page.

Signed-off-by: Alexander Duyck
---
 mm/vmscan.c | 68 ++++++++++++++++++++++++++++++++++-------------------------
 1 file changed, 39 insertions(+), 29 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3ebe3f9b653b..6a2bdbc1a9eb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1850,22 +1850,21 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 {
 	int nr_pages, nr_moved = 0;
 	LIST_HEAD(pages_to_free);
-	struct page *page;
-	struct lruvec *orig_lruvec = lruvec;
+	struct page *page, *next;
 	enum lru_list lru;
 
-	while (!list_empty(list)) {
-		page = lru_to_page(list);
+	list_for_each_entry_safe(page, next, list, lru) {
 		VM_BUG_ON_PAGE(PageLRU(page), page);
-		list_del(&page->lru);
-		if (unlikely(!page_evictable(page))) {
-			if (lruvec) {
-				spin_unlock_irq(&lruvec->lru_lock);
-				lruvec = NULL;
-			}
-			putback_lru_page(page);
+
+		/*
+		 * if page is unevictable leave it on the list to be returned
+		 * to the LRU after we have finished processing the other
+		 * entries in the list.
+		 */
+		if (unlikely(!page_evictable(page)))
 			continue;
-		}
+
+		list_del(&page->lru);
 
 		/*
 		 * The SetPageLRU needs to be kept here for list intergrity.
@@ -1878,20 +1877,14 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		 *                                        list_add(&page->lru,)
 		 *     list_add(&page->lru,)
 		 */
-		lruvec = relock_page_lruvec_irq(page, lruvec);
 		SetPageLRU(page);
 
 		if (unlikely(put_page_testzero(page))) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
-			if (unlikely(PageCompound(page))) {
-				spin_unlock_irq(&lruvec->lru_lock);
-				lruvec = NULL;
-				destroy_compound_page(page);
-			} else
-				list_add(&page->lru, &pages_to_free);
-
+			/* defer freeing until we can release lru_lock */
+			list_add(&page->lru, &pages_to_free);
 			continue;
 		}
@@ -1904,16 +1897,33 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		if (PageActive(page))
 			workingset_age_nonresident(lruvec, nr_pages);
 	}
 
-	if (orig_lruvec != lruvec) {
-		if (lruvec)
-			spin_unlock_irq(&lruvec->lru_lock);
-		spin_lock_irq(&orig_lruvec->lru_lock);
-	}
-
-	/*
-	 * To save our caller's stack, now use input list for pages to free.
-	 */
-	list_splice(&pages_to_free, list);
+	if (unlikely(!list_empty(list) || !list_empty(&pages_to_free))) {
+		spin_unlock_irq(&lruvec->lru_lock);
+
+		/* return any unevictable pages to the LRU list */
+		while (!list_empty(list)) {
+			page = lru_to_page(list);
+			list_del(&page->lru);
+			putback_lru_page(page);
+		}
+
+		/*
+		 * To save our caller's stack use input
+		 * list for pages to free.
+		 */
+		list_splice(&pages_to_free, list);
+
+		/* free any compound pages we have in the list */
+		list_for_each_entry_safe(page, next, list, lru) {
+			if (likely(!PageCompound(page)))
+				continue;
+			list_del(&page->lru);
+			destroy_compound_page(page);
+		}
+
+		spin_lock_irq(&lruvec->lru_lock);
+	}
 
 	return nr_moved;
 }