From: Matthew Wilcox
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Jan Kara, Jeff Layton, Lukas Czerner, Ross Zwisler,
    Christoph Hellwig, Goldwyn Rodrigues, Nicholas Piggin, Ryusuke Konishi,
    linux-nilfs@vger.kernel.org, Jaegeuk Kim, Chao Yu,
    linux-f2fs-devel@lists.sourceforge.net
Subject: [PATCH v14 37/74] mm: Convert truncate to XArray
Date: Sat, 16 Jun 2018 19:00:15 -0700
Message-Id: <20180617020052.4759-38-willy@infradead.org>
In-Reply-To: <20180617020052.4759-1-willy@infradead.org>
References: <20180617020052.4759-1-willy@infradead.org>

This is essentially xa_cmpxchg() with the locking handled above us, and
it doesn't have to handle replacing a NULL entry.

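For illustration only (this sketch is not part of the patch), the same
check-and-clear step could be written with xa_cmpxchg() in a context that
did not already hold the i_pages lock and did not need the
workingset_update_node callback; the helper name below is made up:

/*
 * Sketch, not part of this patch: xa_cmpxchg() takes the xa_lock
 * internally and cannot register an update callback, which is why
 * __clear_shadow_entry() uses the xas_* API instead.
 */
static void clear_shadow_entry_sketch(struct address_space *mapping,
				      pgoff_t index, void *entry)
{
	/* Storing NULL never allocates, so no GFP flags are needed. */
	if (xa_cmpxchg(&mapping->i_pages, index, entry, NULL, 0) == entry)
		mapping->nrexceptional--;
}
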
Signed-off-by: Matthew Wilcox
---
 mm/truncate.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index ed778555c9f3..45d68e90b703 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -33,15 +33,12 @@ static inline void __clear_shadow_entry(struct address_space *mapping,
 				pgoff_t index, void *entry)
 {
-	struct radix_tree_node *node;
-	void **slot;
+	XA_STATE(xas, &mapping->i_pages, index);
 
-	if (!__radix_tree_lookup(&mapping->i_pages, index, &node, &slot))
+	xas_set_update(&xas, workingset_update_node);
+	if (xas_load(&xas) != entry)
 		return;
-	if (*slot != entry)
-		return;
-	__radix_tree_replace(&mapping->i_pages, node, slot, NULL,
-			workingset_update_node);
+	xas_store(&xas, NULL);
 	mapping->nrexceptional--;
 }
@@ -738,10 +735,10 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 		index++;
 	}
 	/*
-	 * For DAX we invalidate page tables after invalidating radix tree. We
+	 * For DAX we invalidate page tables after invalidating page cache. We
 	 * could invalidate page tables while invalidating each entry however
 	 * that would be expensive. And doing range unmapping before doesn't
-	 * work as we have no cheap way to find whether radix tree entry didn't
+	 * work as we have no cheap way to find whether page cache entry didn't
 	 * get remapped later.
 	 */
 	if (dax_mapping(mapping)) {
-- 
2.17.1
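
For reference, the converted helper as it reads once the first hunk is
applied (reconstructed from the diff above, nothing new added):

static inline void __clear_shadow_entry(struct address_space *mapping,
					pgoff_t index, void *entry)
{
	XA_STATE(xas, &mapping->i_pages, index);

	xas_set_update(&xas, workingset_update_node);
	if (xas_load(&xas) != entry)
		return;
	xas_store(&xas, NULL);
	mapping->nrexceptional--;
}

As before, the shadow entry is cleared only if it is still the entry the
caller expects, and locking is left to the caller.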