From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 09/15] mm: Allow large pages to be added to the page cache
Date: Tue, 24 Sep 2019 17:52:08 -0700
Message-Id: <20190925005214.27240-10-willy@infradead.org>
In-Reply-To: <20190925005214.27240-1-willy@infradead.org>
References: <20190925005214.27240-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

We return -EEXIST if there are any non-shadow entries in the page cache
in the range covered by the large page.  If there are multiple shadow
entries in the range, we set *shadowp to one of them (currently the one
at the highest index).  If that turns out to be the wrong answer, we can
implement something more complex.  This is mostly modelled after the
equivalent function in the shmem code.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 37 ++++++++++++++++++++++++++-----------
 1 file changed, 26 insertions(+), 11 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index bab97addbb1d..afe8f5d95810 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -855,6 +855,7 @@ static int __add_to_page_cache_locked(struct page *page,
 	int huge = PageHuge(page);
 	struct mem_cgroup *memcg;
 	int error;
+	unsigned int nr = 1;
 	void *old;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
@@ -866,31 +867,45 @@ static int __add_to_page_cache_locked(struct page *page,
 					  gfp_mask, &memcg, false);
 		if (error)
 			return error;
+		xas_set_order(&xas, offset, compound_order(page));
+		nr = compound_nr(page);
 	}
 
-	get_page(page);
+	page_ref_add(page, nr);
 	page->mapping = mapping;
 	page->index = offset;
 
 	do {
+		unsigned long exceptional = 0;
+		unsigned int i = 0;
+
 		xas_lock_irq(&xas);
-		old = xas_load(&xas);
-		if (old && !xa_is_value(old))
+		xas_for_each_conflict(&xas, old) {
+			if (!xa_is_value(old))
+				break;
+			exceptional++;
+			if (shadowp)
+				*shadowp = old;
+		}
+		if (old)
 			xas_set_err(&xas, -EEXIST);
-		xas_store(&xas, page);
+		xas_create_range(&xas);
 		if (xas_error(&xas))
 			goto unlock;
 
-		if (xa_is_value(old)) {
-			mapping->nrexceptional--;
-			if (shadowp)
-				*shadowp = old;
+next:
+		xas_store(&xas, page);
+		if (++i < nr) {
+			xas_next(&xas);
+			goto next;
 		}
-		mapping->nrpages++;
+		mapping->nrexceptional -= exceptional;
+		mapping->nrpages += nr;
 
 		/* hugetlb pages do not participate in page cache accounting */
 		if (!huge)
-			__inc_node_page_state(page, NR_FILE_PAGES);
+			__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES,
+						nr);
unlock:
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));
@@ -907,7 +922,7 @@ static int __add_to_page_cache_locked(struct page *page,
 	/* Leave page->index set: truncation relies upon it */
 	if (!huge)
 		mem_cgroup_cancel_charge(page, memcg, false);
-	put_page(page);
+	page_ref_sub(page, nr);
 	return xas_error(&xas);
 }
 ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
-- 
2.23.0