From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Christoph Hellwig <hch@lst.de>,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 04/14] fs: Reduce stack usage in do_mpage_readpage
Date: Fri, 15 Dec 2023 20:02:35 +0000	[thread overview]
Message-ID: <20231215200245.748418-5-willy@infradead.org> (raw)
In-Reply-To: <20231215200245.748418-1-willy@infradead.org>

Some architectures support a very large PAGE_SIZE, so instead of the 8
sector_t entries we see with a 4kB PAGE_SIZE, we can see 128 entries with a
64kB PAGE_SIZE, or enough on Hexagon to trip the compiler's warning about
exceeding the stack frame size limit.
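
(Rough arithmetic, assuming an 8-byte sector_t: the array costs 64 bytes of
stack with a 4kB page, 1kB with a 64kB page, and 4kB with a 256kB page, the
last of which alone exceeds the usual 1-2kB frame-size warning threshold.)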

All we're doing with this array is checking for block contiguity, which we
can do just as well by remembering the block number of the first block in
the page and checking that each new block sits at the appropriate offset
from it.
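
For illustration only, here is a standalone userspace sketch of that check
(not part of the patch; all names in it are made up for the example):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t sector_t;

struct page_map {
	sector_t first_block;	/* block number backing block 0 of the page */
	unsigned page_block;	/* blocks of the page mapped so far */
};

/* Returns false if the new block is not contiguous with what we have. */
static bool add_block(struct page_map *pm, sector_t blocknr)
{
	if (pm->page_block == 0)
		pm->first_block = blocknr;	/* first block: just remember it */
	else if (pm->first_block + pm->page_block != blocknr)
		return false;			/* discontiguous: give up ("confused") */
	pm->page_block++;
	return true;
}

int main(void)
{
	struct page_map pm = { 0 };
	sector_t blocks[] = { 100, 101, 102, 200 };	/* 200 breaks contiguity */

	for (unsigned i = 0; i < sizeof(blocks) / sizeof(blocks[0]); i++)
		printf("block %llu -> %s\n",
		       (unsigned long long)blocks[i],
		       add_block(&pm, blocks[i]) ? "contiguous" : "confused");
	return 0;
}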

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/mpage.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/mpage.c b/fs/mpage.c
index 84b02098e7a5..d4963f3d8051 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -166,7 +166,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	sector_t block_in_file;
 	sector_t last_block;
 	sector_t last_block_in_file;
-	sector_t blocks[MAX_BUF_PER_PAGE];
+	sector_t first_block;
 	unsigned page_block;
 	unsigned first_hole = blocks_per_page;
 	struct block_device *bdev = NULL;
@@ -205,6 +205,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 		unsigned map_offset = block_in_file - args->first_logical_block;
 		unsigned last = nblocks - map_offset;
 
+		first_block = map_bh->b_blocknr + map_offset;
 		for (relative_block = 0; ; relative_block++) {
 			if (relative_block == last) {
 				clear_buffer_mapped(map_bh);
@@ -212,8 +213,6 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 			}
 			if (page_block == blocks_per_page)
 				break;
-			blocks[page_block] = map_bh->b_blocknr + map_offset +
-						relative_block;
 			page_block++;
 			block_in_file++;
 		}
@@ -259,7 +258,9 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 			goto confused;		/* hole -> non-hole */
 
 		/* Contiguous blocks? */
-		if (page_block && blocks[page_block-1] != map_bh->b_blocknr-1)
+		if (!page_block)
+			first_block = map_bh->b_blocknr;
+		else if (first_block + page_block != map_bh->b_blocknr)
 			goto confused;
 		nblocks = map_bh->b_size >> blkbits;
 		for (relative_block = 0; ; relative_block++) {
@@ -268,7 +269,6 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 				break;
 			} else if (page_block == blocks_per_page)
 				break;
-			blocks[page_block] = map_bh->b_blocknr+relative_block;
 			page_block++;
 			block_in_file++;
 		}
@@ -289,7 +289,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	/*
 	 * This folio will go to BIO.  Do we need to send this BIO off first?
 	 */
-	if (args->bio && (args->last_block_in_bio != blocks[0] - 1))
+	if (args->bio && (args->last_block_in_bio != first_block - 1))
 		args->bio = mpage_bio_submit_read(args->bio);
 
 alloc_new:
@@ -298,7 +298,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 				      gfp);
 		if (args->bio == NULL)
 			goto confused;
-		args->bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9);
+		args->bio->bi_iter.bi_sector = first_block << (blkbits - 9);
 	}
 
 	length = first_hole << blkbits;
@@ -313,7 +313,7 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 	    (first_hole != blocks_per_page))
 		args->bio = mpage_bio_submit_read(args->bio);
 	else
-		args->last_block_in_bio = blocks[blocks_per_page - 1];
+		args->last_block_in_bio = first_block + blocks_per_page - 1;
 out:
 	return args->bio;
 
-- 
2.42.0



Thread overview: 36+ messages
2023-12-15 20:02 [PATCH 00/14] Clean up the writeback paths Matthew Wilcox (Oracle)
2023-12-15 20:02 ` [PATCH 01/14] fs: Remove clean_page_buffers() Matthew Wilcox (Oracle)
2023-12-16  4:27   ` Christoph Hellwig
2023-12-15 20:02 ` [PATCH 02/14] fs: Convert clean_buffers() to take a folio Matthew Wilcox (Oracle)
2023-12-16  4:27   ` Christoph Hellwig
2023-12-15 20:02 ` [PATCH 03/14] fs: Reduce stack usage in __mpage_writepage Matthew Wilcox (Oracle)
2023-12-16  4:29   ` Christoph Hellwig
2023-12-15 20:02 ` Matthew Wilcox (Oracle) [this message]
2023-12-16  4:29   ` [PATCH 04/14] fs: Reduce stack usage in do_mpage_readpage Christoph Hellwig
2023-12-15 20:02 ` [PATCH 05/14] adfs: Remove writepage implementation Matthew Wilcox (Oracle)
2023-12-16  4:31   ` Christoph Hellwig
2023-12-15 20:02 ` [PATCH 06/14] bfs: " Matthew Wilcox (Oracle)
2023-12-17 16:48   ` Christoph Hellwig
2023-12-15 20:02 ` [PATCH 07/14] hfs: Really remove hfs_writepage Matthew Wilcox (Oracle)
2023-12-18  4:31   ` Christoph Hellwig
2023-12-15 20:02 ` [PATCH 08/14] hfsplus: Really remove hfsplus_writepage Matthew Wilcox (Oracle)
2023-12-16  4:33   ` Christoph Hellwig
2023-12-18 10:41     ` Johannes Thumshirn
2023-12-18 15:04       ` Christoph Hellwig
2023-12-18 15:40         ` Johannes Thumshirn
2024-01-04 17:25           ` Jan Kara
2023-12-15 20:02 ` [PATCH 09/14] minix: Remove writepage implementation Matthew Wilcox (Oracle)
2023-12-17 16:48   ` Christoph Hellwig
2023-12-15 20:02 ` [PATCH 10/14] ocfs2: " Matthew Wilcox (Oracle)
2023-12-16  4:34   ` Christoph Hellwig
2023-12-15 20:02 ` [PATCH 11/14] sysv: " Matthew Wilcox (Oracle)
2023-12-17 16:48   ` Christoph Hellwig
2023-12-15 20:02 ` [PATCH 12/14] ufs: " Matthew Wilcox (Oracle)
2023-12-16  4:34   ` Christoph Hellwig
2023-12-15 20:02 ` [PATCH 13/14] fs: Convert block_write_full_page to block_write_full_folio Matthew Wilcox (Oracle)
2023-12-16  4:35   ` Christoph Hellwig
2023-12-15 20:02 ` [PATCH 14/14] fs: Remove the bh_end_io argument from __block_write_full_folio Matthew Wilcox (Oracle)
2023-12-20  6:36   ` Christoph Hellwig
2023-12-16 20:51 ` [PATCH 00/14] Clean up the writeback paths Jens Axboe
2023-12-19  0:31 [PATCH 04/14] fs: Reduce stack usage in do_mpage_readpage kernel test robot
2023-12-19 17:47 kernel test robot
