From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	hughd@google.com
Subject: [PATCH 18/59] mm: Convert do_swap_page()'s swapcache variable to a folio
Date: Mon, 8 Aug 2022 20:33:46 +0100
Message-Id: <20220808193430.3378317-19-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220808193430.3378317-1-willy@infradead.org>
References: <20220808193430.3378317-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The 'swapcache' variable is used to track whether the page is from the
swapcache or not.  It can do this equally well by being the folio of
the page rather than the page itself, and this saves a number of calls
to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/memory.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index f172b148e29b..471102f0cbf2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3718,8 +3718,8 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
 vm_fault_t do_swap_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct folio *folio;
-	struct page *page = NULL, *swapcache;
+	struct folio *swapcache, *folio = NULL;
+	struct page *page;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
 	bool exclusive = false;
@@ -3762,11 +3762,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out;
 
 	page = lookup_swap_cache(entry, vma, vmf->address);
-	swapcache = page;
 	if (page)
 		folio = page_folio(page);
+	swapcache = folio;
 
-	if (!page) {
+	if (!folio) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
 			/* skip swapcache */
@@ -3799,12 +3799,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		} else {
 			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
 						vmf);
-			swapcache = page;
 			if (page)
 				folio = page_folio(page);
+			swapcache = folio;
 		}
 
-		if (!page) {
+		if (!folio) {
 			/*
 			 * Back out if somebody else faulted in this pte
 			 * while we released the pte lock.
@@ -3856,10 +3856,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		page = ksm_might_need_to_copy(page, vma, vmf->address);
 		if (unlikely(!page)) {
 			ret = VM_FAULT_OOM;
-			page = swapcache;
 			goto out_page;
 		}
 		folio = page_folio(page);
+		swapcache = folio;
 
 		/*
 		 * If we want to map a page that's in the swapcache writable, we
@@ -3867,7 +3867,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * owner. Try removing the extra reference from the local LRU
 		 * pagevecs if required.
 		 */
-		if ((vmf->flags & FAULT_FLAG_WRITE) && page == swapcache &&
+		if ((vmf->flags & FAULT_FLAG_WRITE) && folio == swapcache &&
 		    !folio_test_ksm(folio) && !folio_test_lru(folio))
 			lru_add_drain();
 	}
@@ -3908,7 +3908,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 *	without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
 	 */
 	exclusive = pte_swp_exclusive(vmf->orig_pte);
-	if (page != swapcache) {
+	if (folio != swapcache) {
 		/*
 		 * We have a fresh page that is not exposed to the
 		 * swapcache -> certainly exclusive.
@@ -3976,7 +3976,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	vmf->orig_pte = pte;
 
 	/* ksm created a completely new copy */
-	if (unlikely(page != swapcache && swapcache)) {
+	if (unlikely(folio != swapcache && swapcache)) {
 		page_add_new_anon_rmap(page, vma, vmf->address);
 		folio_add_lru_vma(folio, vma);
 	} else {
@@ -3989,7 +3989,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
 
 	folio_unlock(folio);
-	if (page != swapcache && swapcache) {
+	if (folio != swapcache && swapcache) {
 		/*
 		 * Hold the lock to avoid the swap entry to be reused
 		 * until we take the PT lock for the pte_same() check
@@ -3998,8 +3998,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * so that the swap count won't change under a
 		 * parallel locked swapcache.
 		 */
-		unlock_page(swapcache);
-		put_page(swapcache);
+		folio_unlock(swapcache);
+		folio_put(swapcache);
 	}
 
 	if (vmf->flags & FAULT_FLAG_WRITE) {
@@ -4023,9 +4023,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	folio_unlock(folio);
 out_release:
 	folio_put(folio);
-	if (page != swapcache && swapcache) {
-		unlock_page(swapcache);
-		put_page(swapcache);
+	if (folio != swapcache && swapcache) {
+		folio_unlock(swapcache);
+		folio_put(swapcache);
 	}
 	if (si)
 		put_swap_device(si);
-- 
2.35.1
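
For readers following the folio conversion: the saving mentioned in the
commit message comes from the fact that page-based helpers such as
unlock_page() and put_page() must resolve the head page of a compound
page internally on every call, while the folio-typed helpers are handed
the head directly.  Below is a small, self-contained sketch of that
relationship; the struct layouts and model_* names are simplified
illustrations for this note only, not the kernel's actual definitions.

/*
 * Standalone, simplified model of why holding a folio avoids repeated
 * compound_head() lookups.  The struct layouts and model_* helpers are
 * illustrative stand-ins, not the kernel's real definitions.
 */
#include <stdio.h>

struct page {
	struct page *compound_head;	/* head page, or the page itself */
	int locked;
};

/* A "folio" here is just a head page seen through a distinct type. */
struct folio {
	struct page head;
};

/* Analogue of compound_head(): every page-based helper pays this lookup. */
static struct page *model_compound_head(struct page *page)
{
	return page->compound_head;
}

/* Analogue of page_folio(): resolve the head once, up front. */
static struct folio *model_page_folio(struct page *page)
{
	return (struct folio *)model_compound_head(page);
}

/* Page-based helper in the old style: finds the head on every call. */
static void model_unlock_page(struct page *page)
{
	model_compound_head(page)->locked = 0;
}

/* Folio-based helper: the caller already holds the head, no lookup. */
static void model_folio_unlock(struct folio *folio)
{
	folio->head.locked = 0;
}

int main(void)
{
	struct folio f;
	struct page tail;

	f.head.compound_head = &f.head;
	f.head.locked = 1;
	tail.compound_head = &f.head;
	tail.locked = 0;

	/* Old style: the head lookup is repeated inside each helper. */
	model_unlock_page(&tail);

	/* New style: resolve the folio once, then use folio helpers. */
	f.head.locked = 1;
	struct folio *folio = model_page_folio(&tail);
	model_folio_unlock(folio);

	printf("locked = %d\n", f.head.locked);
	return 0;
}

In the hunks above this is exactly the pattern: once 'swapcache' is a
struct folio *, the unlock_page(swapcache)/put_page(swapcache) pairs
become folio_unlock(swapcache)/folio_put(swapcache), and the hidden
head-page resolution those page-based calls would perform is no longer
needed.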