From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matthew Wilcox <willy@infradead.org>
To: unlisted-recipients:; (no To-header on input)
Cc: Matthew Wilcox, Ross Zwisler, Jens Axboe, Rehas Sachdeva,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	linux-btrfs@vger.kernel.org, linux-xfs@vger.kernel.org,
	linux-usb@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 44/73] shmem: Convert shmem_tag_pins to XArray
Date: Tue, 5 Dec 2017 16:41:30 -0800
Message-Id: <20171206004159.3755-45-willy@infradead.org>
In-Reply-To: <20171206004159.3755-1-willy@infradead.org>
References: <20171206004159.3755-1-willy@infradead.org>

From: Matthew Wilcox

Simplify the locking by taking the spinlock while we walk the tree on
the assumption that many acquires and releases of the lock will be
worse than holding the lock for a (potentially) long time.

We could replicate the same locking behaviour with the xarray, but would
have to be careful that the xa_node wasn't RCU-freed under us before we
took the lock.

Signed-off-by: Matthew Wilcox
---
 mm/shmem.c | 39 ++++++++++++++++-----------------------
 1 file changed, 16 insertions(+), 23 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index ce285ae635ea..2f41c7ceea18 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2601,35 +2601,28 @@ static loff_t shmem_file_llseek(struct file *file, loff_t offset, int whence)
 
 static void shmem_tag_pins(struct address_space *mapping)
 {
-	struct radix_tree_iter iter;
-	void **slot;
-	pgoff_t start;
+	XA_STATE(xas, &mapping->pages, 0);
 	struct page *page;
+	unsigned int tagged = 0;
 
 	lru_add_drain();
 
-	start = 0;
-	rcu_read_lock();
-	radix_tree_for_each_slot(slot, &mapping->pages, &iter, start) {
-		page = radix_tree_deref_slot(slot);
-		if (!page || radix_tree_exception(page)) {
-			if (radix_tree_deref_retry(page)) {
-				slot = radix_tree_iter_retry(&iter);
-				continue;
-			}
-		} else if (page_count(page) - page_mapcount(page) > 1) {
-			xa_lock_irq(&mapping->pages);
-			radix_tree_tag_set(&mapping->pages, iter.index,
-					   SHMEM_TAG_PINNED);
-			xa_unlock_irq(&mapping->pages);
-		}
+	xas_lock_irq(&xas);
+	xas_for_each(&xas, page, ULONG_MAX) {
+		if (xa_is_value(page))
+			continue;
+		if (page_count(page) - page_mapcount(page) > 1)
+			xas_set_tag(&xas, SHMEM_TAG_PINNED);
 
-		if (need_resched()) {
-			slot = radix_tree_iter_resume(slot, &iter);
-			cond_resched_rcu();
-		}
+		if (++tagged % XA_CHECK_SCHED)
+			continue;
+
+		xas_pause(&xas);
+		xas_unlock_irq(&xas);
+		cond_resched();
+		xas_lock_irq(&xas);
 	}
-	rcu_read_unlock();
+	xas_unlock_irq(&xas);
 }
 
 /*
-- 
2.15.0
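
This conversion is one instance of a pattern the XArray series uses
repeatedly: walk the array under xa_lock, and every XA_CHECK_SCHED
entries call xas_pause() so the lock can be dropped for cond_resched().
The commit message argues that batching this way beats the old code's
lock acquire/release around each individual tag set. What follows is a
minimal standalone sketch of that pattern, not part of the patch:
walk_with_resched() and process_entry() are hypothetical names, and the
fallback value for XA_CHECK_SCHED is an assumption for illustration only.

#include <linux/sched.h>
#include <linux/xarray.h>

/* Assumed fallback; the XArray series defines XA_CHECK_SCHED itself. */
#ifndef XA_CHECK_SCHED
#define XA_CHECK_SCHED	4096
#endif

/*
 * Visit every present (non-value) entry in @xa under the xa_lock,
 * dropping the lock to reschedule every XA_CHECK_SCHED entries so
 * interrupts are never disabled for an unbounded stretch.
 */
static void walk_with_resched(struct xarray *xa,
			      void (*process_entry)(struct xa_state *, void *))
{
	XA_STATE(xas, xa, 0);
	void *entry;
	unsigned int seen = 0;

	xas_lock_irq(&xas);
	xas_for_each(&xas, entry, ULONG_MAX) {
		if (xa_is_value(entry))		/* skip exceptional entries */
			continue;
		process_entry(&xas, entry);	/* e.g. set a tag on the entry */

		if (++seen % XA_CHECK_SCHED)
			continue;

		/*
		 * xas_pause() turns the cursor into a safe restart point
		 * before the lock goes away; without it, the iterator
		 * could reference a node freed while we slept.
		 */
		xas_pause(&xas);
		xas_unlock_irq(&xas);
		cond_resched();
		xas_lock_irq(&xas);
	}
	xas_unlock_irq(&xas);
}

In shmem_tag_pins() the process_entry() step is the page_count() versus
page_mapcount() comparison followed by xas_set_tag(); the surrounding
lock discipline is exactly what the sketch shows.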