From: Minchan Kim <minchan@kernel.org>
To: Hannes Reinecke <hare@suse.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>,
Jens Axboe <axboe@fb.com>, Nitin Gupta <ngupta@vflare.org>,
Christoph Hellwig <hch@lst.de>,
Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>,
yizhan@redhat.com,
Linux Block Layer Mailinglist <linux-block@vger.kernel.org>,
Linux Kernel Mailinglist <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] zram: set physical queue limits to avoid array out of bounds accesses
Date: Tue, 7 Mar 2017 16:23:43 +0900
Message-ID: <CAEwNFnAb4x+sm6=vWd1W4Xj7Snp1V_LtRRx9ORFKpm1b-58kyw@mail.gmail.com>
In-Reply-To: <95c31a93-32cd-ad06-6cc0-e11b42ec2f68@suse.de>
Hi Hannes,
On Tue, Mar 7, 2017 at 4:00 PM, Hannes Reinecke <hare@suse.de> wrote:
> On 03/07/2017 06:22 AM, Minchan Kim wrote:
>> Hello Johannes,
>>
>> On Mon, Mar 06, 2017 at 11:23:35AM +0100, Johannes Thumshirn wrote:
>>> zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When using
>>> the NVMe over Fabrics loopback target which potentially sends a huge bulk of
>>> pages attached to the bio's bvec this results in a kernel panic because of
>>> array out of bounds accesses in zram_decompress_page().
>>
>> First of all, thanks for the report and fix up!
>> Unfortunately, I'm not familiar with that part of the block layer interface.
>>
>> It seems this is material for stable, so I want to understand it clearly.
>> Could you explain it more specifically to educate me?
>>
>> In what scenario, when, and how does this become a problem? That would help me understand!
>>
Thanks for the quick response!
> The problem is that zram as it currently stands can only handle bios
> where each bvec contains a single page (or, to be precise, a chunk of
> data with a length of a page).
Right.
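(For context: the per-bvec loop in the current __zram_make_request() only
knows how to split a bvec across two consecutive zram pages, i.e. it assumes
bv_len <= PAGE_SIZE. A paraphrased sketch of that logic -- not the verbatim
source, and with error handling dropped:)

static void __zram_make_request(struct zram *zram, struct bio *bio)
{
	u32 index = bio->bi_iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
	int offset = (bio->bi_iter.bi_sector & (SECTORS_PER_PAGE - 1))
					<< SECTOR_SHIFT;
	bool is_write = op_is_write(bio_op(bio));
	struct bio_vec bvec;
	struct bvec_iter iter;

	bio_for_each_segment(bvec, bio, iter) {
		int max_transfer_size = PAGE_SIZE - offset;

		if (bvec.bv_len > max_transfer_size) {
			/*
			 * Split the bvec into two pieces that each fit in
			 * one zram page.  This only works when bv_len is at
			 * most PAGE_SIZE: a multi-page bvec (as the NVMe
			 * over Fabrics loopback target can send) leaves the
			 * second piece still bigger than a page, so the
			 * index/offset passed down to zram_bvec_rw() and
			 * zram_decompress_page() no longer match the data
			 * and eventually run past zram's per-page table.
			 */
			struct bio_vec bv = bvec;

			bv.bv_len = max_transfer_size;
			zram_bvec_rw(zram, &bv, index, offset, is_write);

			bv.bv_len = bvec.bv_len - max_transfer_size;
			bv.bv_offset += max_transfer_size;
			zram_bvec_rw(zram, &bv, index + 1, offset, is_write);
		} else {
			zram_bvec_rw(zram, &bvec, index, offset, is_write);
		}

		update_position(&index, &offset, &bvec);
	}
}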
>
> This is not an automatic guarantee from the block layer (which is free to
> send us bios with arbitrary-sized bvecs), so we need to set the queue
> limits to ensure that.
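(Concretely, "setting the queue limits" presumably amounts to something like
the following in zram_add() -- a minimal sketch of the idea, not necessarily
the exact hunk of the patch:)

	/*
	 * Advertise that zram wants I/O in page-sized pieces, so the
	 * block layer becomes responsible for splitting oversized bios
	 * before they ever reach zram's per-bvec loop.
	 */
	blk_queue_max_hw_sectors(zram->disk->queue, SECTORS_PER_PAGE);
	blk_queue_max_segment_size(zram->disk->queue, PAGE_SIZE);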
What does "bios with arbitrary-sized bvecs" mean?
In what kinds of scenarios is that used or useful?
And how does setting the queue limits solve the problem?
Sorry for the many questions; my knowledge here is limited.
Thanks.