From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Christoph Hellwig <hch@lst.de>, Jeff Layton <jlayton@kernel.org>
Subject: [PATCH v8 19/27] mm/filemap: Add folio_lock_killable
Date: Fri, 30 Apr 2021 18:22:27 +0100	[thread overview]
Message-ID: <20210430172235.2695303-20-willy@infradead.org> (raw)
In-Reply-To: <20210430172235.2695303-1-willy@infradead.org>

This is like lock_page_killable() but for use by callers who
know they have a folio.  Convert __lock_page_killable() to be
__folio_lock_killable().  This saves one call to compound_head() per
contended call to lock_page_killable().
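
For illustration only (not part of the patch): a hypothetical caller that
already has a folio in hand can now take the killable lock directly,
without going through the page wrapper.  The function name below is made
up for the sketch; folio_unlock() comes from an earlier patch in this
series.

	#include <linux/pagemap.h>

	/* Sketch of a caller: lock the folio, backing off on a fatal signal. */
	static int example_touch_folio(struct folio *folio)
	{
		int err;

		err = folio_lock_killable(folio);
		if (err)
			return err;	/* -EINTR: fatal signal arrived while waiting */

		/* ... work on the locked folio ... */
		folio_unlock(folio);
		return 0;
	}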

__folio_lock_killable() is 20 bytes smaller than __lock_page_killable()
was.  lock_page_maybe_drop_mmap() shrinks by 68 bytes and
__lock_page_or_retry() shrinks by 66 bytes.  That's a total of 154 bytes
of text saved.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jeff Layton <jlayton@kernel.org>
---
 include/linux/pagemap.h | 15 ++++++++++-----
 mm/filemap.c            | 17 +++++++++--------
 2 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index ff81be103539..332731ee541a 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -715,7 +715,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 }
 
 void __folio_lock(struct folio *folio);
-extern int __lock_page_killable(struct page *page);
+int __folio_lock_killable(struct folio *folio);
 extern int __lock_page_async(struct page *page, struct wait_page_queue *wait);
 extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 				unsigned int flags);
@@ -755,6 +755,14 @@ static inline void lock_page(struct page *page)
 		__folio_lock(folio);
 }
 
+static inline int folio_lock_killable(struct folio *folio)
+{
+	might_sleep();
+	if (!folio_trylock(folio))
+		return __folio_lock_killable(folio);
+	return 0;
+}
+
 /*
  * lock_page_killable is like lock_page but can be interrupted by fatal
  * signals.  It returns 0 if it locked the page and -EINTR if it was
@@ -762,10 +770,7 @@ static inline void lock_page(struct page *page)
  */
 static inline int lock_page_killable(struct page *page)
 {
-	might_sleep();
-	if (!trylock_page(page))
-		return __lock_page_killable(page);
-	return 0;
+	return folio_lock_killable(page_folio(page));
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 6935b068856f..27a86d53dd89 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1587,14 +1587,13 @@ void __folio_lock(struct folio *folio)
 }
 EXPORT_SYMBOL(__folio_lock);
 
-int __lock_page_killable(struct page *__page)
+int __folio_lock_killable(struct folio *folio)
 {
-	struct page *page = compound_head(__page);
-	wait_queue_head_t *q = page_waitqueue(page);
-	return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE,
+	wait_queue_head_t *q = page_waitqueue(&folio->page);
+	return wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_KILLABLE,
 					EXCLUSIVE);
 }
-EXPORT_SYMBOL_GPL(__lock_page_killable);
+EXPORT_SYMBOL_GPL(__folio_lock_killable);
 
 int __lock_page_async(struct page *page, struct wait_page_queue *wait)
 {
@@ -1636,6 +1635,8 @@ int __lock_page_async(struct page *page, struct wait_page_queue *wait)
 int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 			 unsigned int flags)
 {
+	struct folio *folio = page_folio(page);
+
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
 		 * CAUTION! In this case, mmap_lock is not released
@@ -1654,13 +1655,13 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 	if (flags & FAULT_FLAG_KILLABLE) {
 		int ret;
 
-		ret = __lock_page_killable(page);
+		ret = __folio_lock_killable(folio);
 		if (ret) {
 			mmap_read_unlock(mm);
 			return 0;
 		}
 	} else {
-		__folio_lock(page_folio(page));
+		__folio_lock(folio);
 	}
 
 	return 1;
@@ -2829,7 +2830,7 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
 
 	*fpin = maybe_unlock_mmap_for_io(vmf, *fpin);
 	if (vmf->flags & FAULT_FLAG_KILLABLE) {
-		if (__lock_page_killable(&folio->page)) {
+		if (__folio_lock_killable(folio)) {
 			/*
 			 * We didn't have the right flags to drop the mmap_lock,
 			 * but all fault_handlers only check for fatal signals
-- 
2.30.2

