From: "Adam J. Richter" <adam@yggdrasil.com>
To: axboe@suse.de
Cc: linux-kernel@vger.kernel.org, dm-crypt@saout.de
Subject: [PATCH?] make __bio_add_page check q->max_hw_sectors
Date: Sun, 10 Oct 2004 11:50:04 -0700
Message-ID: <20041010115004.A5282@freya>
On a dm-crypt partition on an IDE disk, 2.6.9-rc3 and
2.6.9-rc3-bk9 repeatedly generate the following error, which
does not occur in 2.6.9-rc1:
bio too big for device dm-0 (256 > 255)
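(For context: this message appears to be printed by a sanity check in
generic_make_request() that arrived together with the new max_hw_sectors
field. A rough sketch of that check, reconstructed from my reading of
the sources rather than quoted verbatim:)

	/* Sketch of the max_hw_sectors check in
	 * drivers/block/ll_rw_blk.c:generic_make_request();
	 * names and message wording are approximate.
	 */
	if (unlikely(bio_sectors(bio) > q->max_hw_sectors)) {
		char b[BDEVNAME_SIZE];

		printk("bio too big device %s (%u > %u)\n",
		       bdevname(bio->bi_bdev, b),
		       bio_sectors(bio), q->max_hw_sectors);
		goto end_io;
	}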
The stack trace looked something like this:
submit_bio
mpage_bio_submit
mpage_readpages
readpages
do_page_cache_readahead
filemap_nopage
do_no_page
handle_mm_fault
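(That path grows the bio one page at a time: do_mpage_readpage() calls
bio_add_page(), which defers all limit checking to __bio_add_page() in
fs/bio.c. Roughly, as I read it:)

/* bio_add_page() just looks up the queue and lets __bio_add_page()
 * enforce the queue limits -- which, before the patch below, means
 * only q->max_sectors, never q->max_hw_sectors.
 */
int bio_add_page(struct bio *bio, struct page *page, unsigned int len,
		 unsigned int offset)
{
	request_queue_t *q = bdev_get_queue(bio->bi_bdev);

	return __bio_add_page(q, bio, page, len, offset);
}

So a bio can legally grow to 256 sectors as far as __bio_add_page is
concerned, and only generic_make_request notices that the queue's
hardware limit is 255.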
Around 2.6.9-rc3, a new field, q->max_hw_sectors, was
added to struct request_queue. I was able to make the
problem disappear with the following patch, which adds a
check against this new field to __bio_add_page. (I've edited
this patch to hide other differences in my fs/bio.c, so
it may be necessary to apply it by hand if patch fails.)
I do not understand the intended difference between
the new max_hw_sectors field and max_sectors, so it is unclear
to me whether it is a bug that my dm-crypt request_queue has
q->max_hw_sectors < q->max_sectors. If q->max_hw_sectors
is guaranteed to be greater than or equal to q->max_sectors,
then the real bug is elsewhere and my patch is unnecessary.
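(For what it is worth, drivers that set their limit through the
blk_queue_max_sectors() helper seem to keep the two fields equal; a
sketch of that helper as I understand it, so the details should be
treated as approximate:)

/* Sketch of drivers/block/ll_rw_blk.c:blk_queue_max_sectors().
 * It assigns max_sectors and max_hw_sectors together, so a queue
 * whose limits are only ever set through this helper should never
 * end up with max_hw_sectors < max_sectors the way my dm-crypt
 * queue apparently has.
 */
void blk_queue_max_sectors(request_queue_t *q, unsigned short max_sectors)
{
	if ((max_sectors << 9) < PAGE_CACHE_SIZE) {
		max_sectors = 1 << (PAGE_CACHE_SHIFT - 9);
		printk("%s: set to minimum %d\n", __FUNCTION__, max_sectors);
	}

	q->max_sectors = q->max_hw_sectors = max_sectors;
}

If that invariant is intended, then the real fix is presumably for
device-mapper to copy or stack max_hw_sectors when it assembles its
queue limits, and my patch below just papers over it.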
I am cc'ing the dm-crypt mailing list, since I
suspect that dm-crypt users who are running on a disk
partition (as opposed to a file via a loop device) and who
upgrade to 2.6.9-rc3 or later are affected, and I think the
bug could result in file system corruption. However, I've
only observed this problem in a harmless readahead situation.
--
Adam J. Richter
adam@yggdrasil.com | Yggdrasil
Index: linux/fs/bio.c
===================================================================
RCS file: /usr/src.repository/repository/linux/fs/bio.c,v
retrieving revision 1.19
diff -u -1 -8 -r1.19 bio.c
--- linux/fs/bio.c 2004/10/09 18:16:58 1.19
+++ linux/fs/bio.c 2004/10/10 18:18:36
@@ -289,36 +289,39 @@
 static int __bio_add_page(request_queue_t *q, struct bio *bio, struct page
 			  *page, unsigned int len, unsigned int offset)
 {
 	int retried_segments = 0;
 	struct bio_vec *bvec;
 
 	/*
 	 * cloned bio must not modify vec list
 	 */
 	if (unlikely(bio_flagged(bio, BIO_CLONED)))
 		return 0;
 
 	if (bio->bi_vcnt >= bio->bi_max_vecs)
 		return 0;
 
 	if (((bio->bi_size + len) >> 9) > q->max_sectors)
 		return 0;
 
+	if (((bio->bi_size + len) >> 9) > q->max_hw_sectors)
+		return 0;
+
 	/*
 	 * we might lose a segment or two here, but rather that than
 	 * make this too complex.
 	 */
 	while (bio->bi_phys_segments >= q->max_phys_segments
 	       || bio->bi_hw_segments >= q->max_hw_segments
 	       || BIOVEC_VIRT_OVERSIZE(bio->bi_size)) {
 
 		if (retried_segments)
 			return 0;
 
 		retried_segments = 1;
 		blk_recount_segments(q, bio);
 	}
 
 	/*
 	 * setup the new entry, we might clear it again later if we