From: Minchan Kim <minchan.kim@lge.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Minchan Kim <minchan@kernel.org>, <linux-kernel@vger.kernel.org>,
	Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
	<kernel-team@lge.com>, Jens Axboe <axboe@kernel.dk>,
	Hannes Reinecke <hare@suse.com>,
	Johannes Thumshirn <jthumshirn@suse.de>
Subject: Re: [PATCH 1/5] zram: handle multiple pages attached bio's bvec
Date: Tue, 4 Apr 2017 08:13:15 +0900	[thread overview]
Message-ID: <20170403231315.GA8903@bbox> (raw)
In-Reply-To: <20170403154528.6470165dd791cf8a23ae57c8@linux-foundation.org>

Hi Andrew,

On Mon, Apr 03, 2017 at 03:45:28PM -0700, Andrew Morton wrote:
> On Mon, 3 Apr 2017 14:17:29 +0900 Minchan Kim <minchan@kernel.org> wrote:
> 
> > Johannes Thumshirn reported that the system panics when using an NVMe
> > over Fabrics loopback target with zram.
> > 
> > The reason is that zram expects each bvec in a bio to contain a single
> > page, but nvme can attach many pages to a bio's bvec, so zram's index
> > arithmetic goes wrong and the resulting out-of-bounds access causes
> > the panic.
> > 
> > This could be solved by limiting max_sectors with SECTORS_PER_PAGE as
> > in [1], but that makes zram slow because each bio must be split per
> > page. Instead, this patch makes zram aware of multiple pages in a
> > bvec, so the problem is solved without any regression.
> > 
> > [1] 0bc315381fe9, zram: set physical queue limits to avoid array out of
> >     bounds accesses
> 
> This isn't a cleanup - it fixes a panic (or is it a BUG or is it an
> oops, or...)

I should have written more carefully.
Johannes reported the problem, and Jens has already sent the fix [1] to
mainline. However, during the discussion we found a nicer way to solve
the problem, so this patch reverts [1] and instead solves the problem in
a different way that does not need to split the bio.

Thanks.

> 
> How serious is this bug?  Should the fix be backported into -stable
> kernels?  etc.
> 
> A better description of the bug's behaviour would be appropriate.
> 
> > Cc: Jens Axboe <axboe@kernel.dk>
> > Cc: Hannes Reinecke <hare@suse.com>
> > Reported-by: Johannes Thumshirn <jthumshirn@suse.de>
> > Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
> > Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
> > Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> 
> This signoff trail is confusing.  It somewhat implies that Johannes
> authored the patch which I don't think is the case?
> 
> 


Thread overview: 25+ messages
2017-04-03  5:17 [PATCH 0/5] zram clean up Minchan Kim
2017-04-03  5:17 ` [PATCH 1/5] zram: handle multiple pages attached bio's bvec Minchan Kim
2017-04-03 22:45   ` Andrew Morton
2017-04-03 23:13     ` Minchan Kim [this message]
2017-04-04  4:55   ` Sergey Senozhatsky
2017-04-03  5:17 ` [PATCH 2/5] zram: partial IO refactoring Minchan Kim
2017-04-03  5:52   ` Mika Penttilä
2017-04-03  6:12     ` Minchan Kim
2017-04-03  6:57       ` Mika Penttilä
2017-04-04  2:17   ` Sergey Senozhatsky
2017-04-04  4:50     ` Minchan Kim
2017-04-03  5:17 ` [PATCH 3/5] zram: use zram_slot_lock instead of raw bit_spin_lock op Minchan Kim
2017-04-03  6:08   ` Sergey Senozhatsky
2017-04-03  6:34     ` Minchan Kim
2017-04-03  8:06       ` Sergey Senozhatsky
2017-04-04  2:18   ` Sergey Senozhatsky
2017-04-04  4:50     ` Minchan Kim
2017-04-03  5:17 ` [PATCH 4/5] zram: remove zram_meta structure Minchan Kim
2017-04-04  2:31   ` Sergey Senozhatsky
2017-04-04  4:52     ` Minchan Kim
2017-04-04  5:40     ` Minchan Kim
2017-04-04  5:54       ` Sergey Senozhatsky
2017-04-03  5:17 ` [PATCH 5/5] zram: introduce zram data accessor Minchan Kim
2017-04-04  4:40   ` Sergey Senozhatsky
2017-04-11  5:38 ` [PATCH 0/5] zram clean up Minchan Kim
