From: Minchan Kim <minchan@kernel.org>
To: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Hannes Reinecke <hare@suse.de>, Jens Axboe <axboe@fb.com>,
	Nitin Gupta <ngupta@vflare.org>, Christoph Hellwig <hch@lst.de>,
	Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>,
	<yizhan@redhat.com>,
	Linux Block Layer Mailinglist <linux-block@vger.kernel.org>,
	Linux Kernel Mailinglist <linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH] zram: set physical queue limits to avoid array out of bounds accesses
Date: Thu, 9 Mar 2017 14:28:29 +0900
Message-ID: <20170309052829.GA854@bbox>
In-Reply-To: <1073055f-e71b-bb07-389a-53b60ccdee20@suse.de>

On Wed, Mar 08, 2017 at 08:58:02AM +0100, Johannes Thumshirn wrote:
> On 03/08/2017 06:11 AM, Minchan Kim wrote:
> > And could you test this patch? It avoids splitting the bio, so no new
> > bio allocations are needed, and it makes the zram code simpler.
> > 
> > From f778d7564d5cd772f25bb181329362c29548a257 Mon Sep 17 00:00:00 2001
> > From: Minchan Kim <minchan@kernel.org>
> > Date: Wed, 8 Mar 2017 13:35:29 +0900
> > Subject: [PATCH] fix
> > 
> > Not-yet-Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> 
> [...]
> 
> Yup, this works here.
> 
> I did a mkfs.xfs /dev/nvme0n1
> dd if=/dev/urandom of=/test.bin bs=1M count=128
> sha256sum test.bin
> mount /dev/nvme0n1 /dir
> mv test.bin /dir/
> sha256sum /dir/test.bin
> 
> No panics and sha256sum of the 128MB test file still matches
> 
> Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

Thanks a lot, Johannes and Hannes!!

> 
> Now that you've removed the one-page limit in zram_bvec_rw(), you can
> also add this hunk to remove the queue splitting:

Right. I added what you suggested, along with a detailed description.

> 
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 85f4df8..27b168f6 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -868,8 +868,6 @@ static blk_qc_t zram_make_request(struct request_queue *queue, struct bio *bio)
>  {
>         struct zram *zram = queue->queuedata;
> 
> -       blk_queue_split(queue, &bio, queue->bio_split);
> -
>         if (!valid_io_request(zram, bio->bi_iter.bi_sector,
>                                         bio->bi_iter.bi_size)) {
>                 atomic64_inc(&zram->stats.invalid_io);
> 
> Byte,
> 	Johannes
> 

Jens, could you replace the already-merged patch with this one? I don't
want to add a stable tag to this patch because I feel it needs more
testing on a 64K page system, which I don't have. ;(

From bb73e75ab0e21016f60858fd61e7dc6a6813e359 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@kernel.org>
Date: Thu, 9 Mar 2017 14:00:40 +0900
Subject: [PATCH] zram: handle multiple pages attached bio's bvec

Johannes Thumshirn reported that the system panics when using an NVMe
over Fabrics loopback target with zram.

The reason is that zram expects each bvec in a bio to contain a single
page, but nvme can attach a bvec spanning many pages to the bio, so
zram's index arithmetic goes wrong and the resulting out-of-bounds
access causes the panic.

This patch solves the problem by removing that limit: a bvec no longer
has to contain only a single page.

Cc: Hannes Reinecke <hare@suse.com>
Reported-by: Johannes Thumshirn <jthumshirn@suse.de>
Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
I intentionally don't add a stable tag because I think it's rather risky
without enough testing on a 64K page system (i.e., the partial IO path).

Thanks for the help, Johannes and Hannes!!

 drivers/block/zram/zram_drv.c | 37 ++++++++++---------------------------
 1 file changed, 10 insertions(+), 27 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 01944419b1f3..fefdf260503a 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -137,8 +137,7 @@ static inline bool valid_io_request(struct zram *zram,
 
 static void update_position(u32 *index, int *offset, struct bio_vec *bvec)
 {
-	if (*offset + bvec->bv_len >= PAGE_SIZE)
-		(*index)++;
+	*index  += (*offset + bvec->bv_len) / PAGE_SIZE;
 	*offset = (*offset + bvec->bv_len) % PAGE_SIZE;
 }
 
@@ -838,34 +837,20 @@ static void __zram_make_request(struct zram *zram, struct bio *bio)
 	}
 
 	bio_for_each_segment(bvec, bio, iter) {
-		int max_transfer_size = PAGE_SIZE - offset;
-
-		if (bvec.bv_len > max_transfer_size) {
-			/*
-			 * zram_bvec_rw() can only make operation on a single
-			 * zram page. Split the bio vector.
-			 */
-			struct bio_vec bv;
-
-			bv.bv_page = bvec.bv_page;
-			bv.bv_len = max_transfer_size;
-			bv.bv_offset = bvec.bv_offset;
+		struct bio_vec bv = bvec;
+		unsigned int remained = bvec.bv_len;
 
+		do {
+			bv.bv_len = min_t(unsigned int, PAGE_SIZE, remained);
 			if (zram_bvec_rw(zram, &bv, index, offset,
-					 op_is_write(bio_op(bio))) < 0)
+					op_is_write(bio_op(bio))) < 0)
 				goto out;
 
-			bv.bv_len = bvec.bv_len - max_transfer_size;
-			bv.bv_offset += max_transfer_size;
-			if (zram_bvec_rw(zram, &bv, index + 1, 0,
-					 op_is_write(bio_op(bio))) < 0)
-				goto out;
-		} else
-			if (zram_bvec_rw(zram, &bvec, index, offset,
-					 op_is_write(bio_op(bio))) < 0)
-				goto out;
+			bv.bv_offset += bv.bv_len;
+			remained -= bv.bv_len;
 
-		update_position(&index, &offset, &bvec);
+			update_position(&index, &offset, &bv);
+		} while (remained);
 	}
 
 	bio_endio(bio);
@@ -882,8 +867,6 @@ static blk_qc_t zram_make_request(struct request_queue *queue, struct bio *bio)
 {
 	struct zram *zram = queue->queuedata;
 
-	blk_queue_split(queue, &bio, queue->bio_split);
-
 	if (!valid_io_request(zram, bio->bi_iter.bi_sector,
 					bio->bi_iter.bi_size)) {
 		atomic64_inc(&zram->stats.invalid_io);
-- 
2.7.4
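
To make the commit's point about index arithmetic concrete, here is a
minimal userspace sketch of the old and new update_position() from the
hunk above (PAGE_SIZE and struct bio_vec are stubbed for illustration;
they are not the kernel definitions). With a bvec covering four pages,
the old code advances the zram page index by only one, while the new
formula advances it by four:

#include <stdio.h>

#define PAGE_SIZE 4096u

struct bio_vec {
	unsigned int bv_len;	/* length of this segment in bytes */
};

/* Old arithmetic: the index advances by at most one page per bvec. */
static void update_position_old(unsigned int *index, int *offset,
				struct bio_vec *bvec)
{
	if (*offset + bvec->bv_len >= PAGE_SIZE)
		(*index)++;
	*offset = (*offset + bvec->bv_len) % PAGE_SIZE;
}

/* New arithmetic: the index advances by every page the bvec covers. */
static void update_position_new(unsigned int *index, int *offset,
				struct bio_vec *bvec)
{
	*index += (*offset + bvec->bv_len) / PAGE_SIZE;
	*offset = (*offset + bvec->bv_len) % PAGE_SIZE;
}

int main(void)
{
	/* A bvec spanning four whole pages, as nvme can build. */
	struct bio_vec bvec = { .bv_len = 4 * PAGE_SIZE };
	unsigned int index_old = 0, index_new = 0;
	int offset_old = 0, offset_new = 0;

	update_position_old(&index_old, &offset_old, &bvec);
	update_position_new(&index_new, &offset_new, &bvec);

	printf("old: index=%u offset=%d\n", index_old, offset_old);	/* index=1 */
	printf("new: index=%u offset=%d\n", index_new, offset_new);	/* index=4 */
	return 0;
}

In the patched kernel code itself, the new do-while loop in
__zram_make_request() feeds update_position() at most one page at a
time, so index and offset stay consistent even for very large bvecs.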
