From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, hughd@google.com
Subject: [PATCH 17/59] mm: Convert do_swap_page() to use a folio
Date: Mon, 8 Aug 2022 20:33:45 +0100
Message-Id: <20220808193430.3378317-18-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220808193430.3378317-1-willy@infradead.org>
References: <20220808193430.3378317-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Removes quite a lot of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/memory.c | 57 +++++++++++++++++++++++++++++++----------------------
 1 file changed, 33 insertions(+), 24 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 4ba73f5aa8bb..f172b148e29b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3718,6 +3718,7 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
 vm_fault_t do_swap_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
+	struct folio *folio;
 	struct page *page = NULL, *swapcache;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
@@ -3762,19 +3763,23 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 
 	page = lookup_swap_cache(entry, vma, vmf->address);
 	swapcache = page;
+	if (page)
+		folio = page_folio(page);
 
 	if (!page) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
 			/* skip swapcache */
-			page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma,
-							vmf->address);
-			if (page) {
-				__SetPageLocked(page);
-				__SetPageSwapBacked(page);
+			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
+						vma, vmf->address, false);
+			page = &folio->page;
+			if (folio) {
+				__folio_set_locked(folio);
+				__folio_set_swapbacked(folio);
 
 				if (mem_cgroup_swapin_charge_page(page,
-					vma->vm_mm, GFP_KERNEL, entry)) {
+							vma->vm_mm, GFP_KERNEL,
+							entry)) {
 					ret = VM_FAULT_OOM;
 					goto out_page;
 				}
@@ -3782,20 +3787,21 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 
 				shadow = get_shadow_from_swap_cache(entry);
 				if (shadow)
-					workingset_refault(page_folio(page),
-								shadow);
+					workingset_refault(folio, shadow);
 
-				lru_cache_add(page);
+				folio_add_lru(folio);
 
 				/* To provide entry to swap_readpage() */
-				set_page_private(page, entry.val);
+				folio_set_swap_entry(folio, entry);
 				swap_readpage(page, true, NULL);
-				set_page_private(page, 0);
+				folio->private = NULL;
 			}
 		} else {
 			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
 						vmf);
 			swapcache = page;
+			if (page)
+				folio = page_folio(page);
 		}
 
 		if (!page) {
@@ -3838,7 +3844,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 * swapcache, we need to check that the page's swap has not
 	 * changed.
 	 */
-	if (unlikely(!PageSwapCache(page) ||
+	if (unlikely(!folio_test_swapcache(folio) ||
 			page_private(page) != entry.val))
 		goto out_page;
 
@@ -3853,6 +3859,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			page = swapcache;
 			goto out_page;
 		}
+		folio = page_folio(page);
 
 		/*
 		 * If we want to map a page that's in the swapcache writable, we
@@ -3861,7 +3868,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * pagevecs if required.
 		 */
 		if ((vmf->flags & FAULT_FLAG_WRITE) && page == swapcache &&
-		    !PageKsm(page) && !PageLRU(page))
+		    !folio_test_ksm(folio) && !folio_test_lru(folio))
 			lru_add_drain();
 	}
 
@@ -3875,7 +3882,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
 		goto out_nomap;
 
-	if (unlikely(!PageUptodate(page))) {
+	if (unlikely(!folio_test_uptodate(folio))) {
 		ret = VM_FAULT_SIGBUS;
 		goto out_nomap;
 	}
@@ -3888,14 +3895,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 * check after taking the PT lock and making sure that nobody
 	 * concurrently faulted in this page and set PG_anon_exclusive.
	 */
-	BUG_ON(!PageAnon(page) && PageMappedToDisk(page));
-	BUG_ON(PageAnon(page) && PageAnonExclusive(page));
+	BUG_ON(!folio_test_anon(folio) && folio_test_mappedtodisk(folio));
+	BUG_ON(folio_test_anon(folio) && PageAnonExclusive(page));
 
 	/*
 	 * Check under PT lock (to protect against concurrent fork() sharing
 	 * the swap entry concurrently) for certainly exclusive pages.
 	 */
-	if (!PageKsm(page)) {
+	if (!folio_test_ksm(folio)) {
 		/*
 		 * Note that pte_swp_exclusive() == false for architectures
 		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
@@ -3907,7 +3914,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			 * swapcache -> certainly exclusive.
 			 */
 			exclusive = true;
-		} else if (exclusive && PageWriteback(page) &&
+		} else if (exclusive && folio_test_writeback(folio) &&
 			   data_race(si->flags & SWP_STABLE_WRITES)) {
 			/*
 			 * This is tricky: not all swap backends support
@@ -3950,7 +3957,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 * exposing them to the swapcache or because the swap entry indicates
 	 * exclusivity.
 	 */
-	if (!PageKsm(page) && (exclusive || page_count(page) == 1)) {
+	if (!folio_test_ksm(folio) &&
+	    (exclusive || folio_ref_count(folio) == 1)) {
 		if (vmf->flags & FAULT_FLAG_WRITE) {
 			pte = maybe_mkwrite(pte_mkdirty(pte), vma);
 			vmf->flags &= ~FAULT_FLAG_WRITE;
@@ -3970,16 +3978,17 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	/* ksm created a completely new copy */
 	if (unlikely(page != swapcache && swapcache)) {
 		page_add_new_anon_rmap(page, vma, vmf->address);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		folio_add_lru_vma(folio, vma);
 	} else {
 		page_add_anon_rmap(page, vma, vmf->address, rmap_flags);
 	}
 
-	VM_BUG_ON(!PageAnon(page) || (pte_write(pte) && !PageAnonExclusive(page)));
+	VM_BUG_ON(!folio_test_anon(folio) ||
+			(pte_write(pte) && !PageAnonExclusive(page)));
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
 	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
 
-	unlock_page(page);
+	folio_unlock(folio);
 	if (page != swapcache && swapcache) {
 		/*
 		 * Hold the lock to avoid the swap entry to be reused
@@ -4011,9 +4020,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 out_nomap:
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 out_page:
-	unlock_page(page);
+	folio_unlock(folio);
 out_release:
-	put_page(page);
+	folio_put(folio);
 	if (page != swapcache && swapcache) {
 		unlock_page(swapcache);
 		put_page(swapcache);
-- 
2.35.1
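
For readers who have not worked with the folio API before, the sketch below is a minimal, self-contained userspace model (not kernel code; the struct layouts and helpers are simplified stand-ins that only borrow the kernel's names) of why replacing the Page*() / SetPage*() tests above with folio_test_*() / folio_set_*() removes calls to compound_head(): every page-flag helper has to resolve the head page first, whereas a folio is already the head, so the lookup happens once via page_folio() instead of inside every test.

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model: a "folio" stands in for a head page, and every page
 * (head or tail) carries a pointer to its folio, playing the role
 * that compound_head() plays in the real kernel.
 */
struct folio;
struct page { struct folio *head; };
struct folio { bool uptodate; };

/* analogue of page_folio()/compound_head(): resolve the head */
static struct folio *page_folio(struct page *page)
{
	return page->head;
}

/* page-based test: must resolve the head page on every call */
static bool PageUptodate(struct page *page)
{
	return page_folio(page)->uptodate;
}

/* folio-based test: already operating on the head, no lookup */
static bool folio_test_uptodate(struct folio *folio)
{
	return folio->uptodate;
}

int main(void)
{
	struct folio f = { .uptodate = true };
	struct page head = { .head = &f }, tail = { .head = &f };

	/* old style: each predicate repeats the head lookup internally */
	printf("%d %d\n", PageUptodate(&head), PageUptodate(&tail));

	/* new style: look the folio up once, then test it directly */
	struct folio *folio = page_folio(&tail);
	printf("%d\n", folio_test_uptodate(folio));
	return 0;
}

In do_swap_page() the same pattern appears as the handful of folio = page_folio(page) assignments added by the patch, after which the flag tests, the reference-count check, and the final unlock/put all operate on the folio directly.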