From: Matthew Wilcox <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v4 22/36] mm: Allow large pages to be added to the page cache
Date: Fri, 15 May 2020 06:16:42 -0700
Message-Id: <20200515131656.12890-23-willy@infradead.org>
X-Mailer: git-send-email 2.21.1
In-Reply-To: <20200515131656.12890-1-willy@infradead.org>
References: <20200515131656.12890-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

We return -EEXIST if there are any non-shadow entries in the page cache
in the range covered by the large page.  If there are multiple shadow
entries in the range, we set *shadowp to one of them (currently the one
at the highest index).  If that turns out to be the wrong answer, we can
implement something more complex.  This is mostly modelled after the
equivalent function in the shmem code.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/filemap.c | 44 +++++++++++++++++++++++++++++++-------------
 1 file changed, 31 insertions(+), 13 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 9abba062973a..437484d42b78 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -834,6 +834,7 @@ static int __add_to_page_cache_locked(struct page *page,
 	int huge = PageHuge(page);
 	struct mem_cgroup *memcg;
 	int error;
+	unsigned int nr = 1;
 	void *old;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
@@ -845,31 +846,48 @@ static int __add_to_page_cache_locked(struct page *page,
 					      gfp_mask, &memcg, false);
 		if (error)
 			return error;
+		xas_set_order(&xas, offset, thp_order(page));
+		nr = hpage_nr_pages(page);
 	}
 
-	get_page(page);
+	page_ref_add(page, nr);
 	page->mapping = mapping;
 	page->index = offset;
 
 	do {
+		unsigned long exceptional = 0;
+		unsigned int i = 0;
+
 		xas_lock_irq(&xas);
-		old = xas_load(&xas);
-		if (old && !xa_is_value(old))
-			xas_set_err(&xas, -EEXIST);
-		xas_store(&xas, page);
+		xas_for_each_conflict(&xas, old) {
+			if (!xa_is_value(old)) {
+				xas_set_err(&xas, -EEXIST);
+				break;
+			}
+			exceptional++;
+			if (shadowp)
+				*shadowp = old;
+		}
+		xas_create_range(&xas);
 		if (xas_error(&xas))
 			goto unlock;
 
-		if (xa_is_value(old)) {
-			mapping->nrexceptional--;
-			if (shadowp)
-				*shadowp = old;
+next:
+		xas_store(&xas, page);
+		if (++i < nr) {
+			xas_next(&xas);
+			goto next;
 		}
-		mapping->nrpages++;
+		mapping->nrexceptional -= exceptional;
+		mapping->nrpages += nr;
 
 		/* hugetlb pages do not participate in page cache accounting */
-		if (!huge)
-			__inc_node_page_state(page, NR_FILE_PAGES);
+		if (!huge) {
+			__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES,
+						nr);
+			if (nr > 1)
+				__inc_node_page_state(page, NR_FILE_THPS);
+		}
 unlock:
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));
@@ -886,7 +904,7 @@ static int __add_to_page_cache_locked(struct page *page,
 	/* Leave page->index set: truncation relies upon it */
 	if (!huge)
 		mem_cgroup_cancel_charge(page, memcg, false);
-	put_page(page);
+	page_ref_sub(page, nr);
 	return xas_error(&xas);
 }
 ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
-- 
2.26.2
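
[Editorial note, not part of the patch] The changelog's conflict rule (any real page in the range fails with -EEXIST; shadow entries are counted and one of them, the last seen, is reported through *shadowp) can be illustrated outside the kernel. The sketch below is a minimal user-space model, not the XArray code: scan_range, is_shadow, and SHADOW_TAG are hypothetical names standing in for xas_for_each_conflict() and xa_is_value() in the real patch.

/* gcc -Wall -o scan_model scan_model.c */
#include <stdio.h>
#include <stdint.h>
#include <errno.h>

#define SHADOW_TAG 1UL	/* model of a "value" (shadow) entry's tag bit */

static int is_shadow(void *entry)
{
	return ((uintptr_t)entry & SHADOW_TAG) != 0;
}

/*
 * Scan the slots that the large page would cover.  A real page in the
 * range aborts the insertion with -EEXIST; shadow entries are counted
 * and the last one seen is reported through *shadowp, mirroring the
 * "highest index" behaviour the changelog describes.
 */
static int scan_range(void **slots, unsigned int nr, void **shadowp,
		      unsigned long *exceptional)
{
	unsigned int i;

	*exceptional = 0;
	for (i = 0; i < nr; i++) {
		void *old = slots[i];

		if (!old)
			continue;		/* empty slot: no conflict */
		if (!is_shadow(old))
			return -EEXIST;		/* a real page is already there */
		(*exceptional)++;
		if (shadowp)
			*shadowp = old;
	}
	return 0;
}

int main(void)
{
	void *shadow_a = (void *)(uintptr_t)(0x10 | SHADOW_TAG);
	void *shadow_b = (void *)(uintptr_t)(0x20 | SHADOW_TAG);
	void *range[4] = { NULL, shadow_a, NULL, shadow_b };
	void *shadow = NULL;
	unsigned long exceptional;
	int err = scan_range(range, 4, &shadow, &exceptional);

	/* expect err=0, exceptional=2, shadow == shadow_b (highest index) */
	printf("err=%d exceptional=%lu shadow=%p\n", err, exceptional, shadow);
	return 0;
}

In the patch itself the same walk is done under xas_lock_irq(), and the exceptional count is only subtracted from mapping->nrexceptional once the multi-index store has succeeded.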