From: Jens Axboe <axboe@kernel.dk>
To: Jaegeuk Kim <jaegeuk@kernel.org>, linux-kernel@vger.kernel.org
Cc: linux-block@vger.kernel.org
Subject: Re: [PATCH v2] loop: drop caches if offset or block_size are changed
Date: Tue, 18 Dec 2018 10:48:19 -0700	[thread overview]
Message-ID: <29369548-df14-a5a7-2bee-a40b3479df68@kernel.dk> (raw)
In-Reply-To: <20181217194236.GA50659@jaegeuk-macbookpro.roam.corp.google.com>

On 12/17/18 12:42 PM, Jaegeuk Kim wrote:
> If we don't drop the caches used with the old offset or block_size, reads
> at the new offset/block_size can return stale data, which exposes
> unexpected data to the user.
> 
> For example, Martijn found a loopback bug in the following scenario:
> 1) LOOP_SET_FD loads the first two pages of the loop file into the page cache
> 2) LOOP_SET_STATUS64 changes the offset on the loop file
> 3) mount fails because the cached pages contain the wrong superblock
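> 
> For illustration, a minimal userspace sketch of that sequence (the device
> path, backing file, and offset are placeholders; error handling omitted):
> 
>   #include <fcntl.h>
>   #include <string.h>
>   #include <unistd.h>
>   #include <sys/ioctl.h>
>   #include <linux/loop.h>
> 
>   int main(void)
>   {
>   	int loop_fd = open("/dev/loop0", O_RDWR);
>   	int file_fd = open("/path/to/backing_file", O_RDWR);
>   	struct loop_info64 info;
>   	char buf[4096];
> 
>   	ioctl(loop_fd, LOOP_SET_FD, file_fd);	/* 1) attach the backing file */
>   	read(loop_fd, buf, sizeof(buf));	/* populate the cache at offset 0 */
> 
>   	memset(&info, 0, sizeof(info));
>   	info.lo_offset = 4096;			/* 2) shift the mapping */
>   	ioctl(loop_fd, LOOP_SET_STATUS64, &info);
> 
>   	lseek(loop_fd, 0, SEEK_SET);
>   	read(loop_fd, buf, sizeof(buf));	/* 3) may still see the old page */
>   	return 0;
>   }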
> 
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: linux-block@vger.kernel.org
> Reported-by: Martijn Coenen <maco@google.com>
> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
> ---
> 
> v1 to v2:
>  - cover block_size change
> 
>  drivers/block/loop.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index cb0cc8685076..382557c81674 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -1154,6 +1154,12 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
>  
>  	if (lo->lo_offset != info->lo_offset ||
>  	    lo->lo_sizelimit != info->lo_sizelimit) {
> +		struct block_device *bdev = lo->lo_device;
> +
> +		/* drop stale caches used with the old offset */
> +		sync_blockdev(bdev);
> +		kill_bdev(bdev);
> +
>  		if (figure_loop_size(lo, info->lo_offset, info->lo_sizelimit)) {
>  			err = -EFBIG;
>  			goto exit;
> @@ -1388,6 +1394,15 @@ static int loop_set_block_size(struct loop_device *lo, unsigned long arg)
>  	blk_queue_io_min(lo->lo_queue, arg);
>  	loop_update_dio(lo);
>  
> +	/* no need to invalidate caches if the block size is unchanged */
> +	if (lo->lo_queue->limits.logical_block_size != arg) {
> +		struct block_device *bdev = lo->lo_device;
> +
> +		/* drop stale caches, as set_blocksize() does */
> +		sync_blockdev(bdev);
> +		kill_bdev(bdev);
> +	}
> +
>  	blk_mq_unfreeze_queue(lo->lo_queue);
>  
>  	return 0;

Looks fine to me; my only worry would be verifying that we're fine calling
sync/kill from those contexts. The queue is frozen at this point, so what
happens if we do need to flush out dirty data?
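
For concreteness: sync_blockdev() may have to submit writeback I/O, and in
loop_set_block_size() the new sync/kill calls sit inside the
blk_mq_freeze_queue()/blk_mq_unfreeze_queue() window, where new I/O will
block. A sketch of one ordering that would avoid this (an illustration
only, not anything posted in this thread; the helper name is made up):

  /*
   * Hypothetical helper: flush and drop the stale pages before the
   * queue is frozen, so the writeback I/O issued by sync_blockdev()
   * can still complete.
   */
  static void loop_sync_and_freeze(struct loop_device *lo)
  {
  	struct block_device *bdev = lo->lo_device;

  	sync_blockdev(bdev);	/* writeback runs while the queue is live */
  	kill_bdev(bdev);	/* then invalidate the now-clean pages */

  	blk_mq_freeze_queue(lo->lo_queue);	/* quiesce before changing geometry */
  }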

-- 
Jens Axboe


Thread overview: 6 messages
2018-12-14 20:32 [PATCH] loop: drop caches if offset is changed Jaegeuk Kim
2018-12-17 19:42 ` [PATCH v2] loop: drop caches if offset or block_size are changed Jaegeuk Kim
2018-12-18 17:48   ` Jens Axboe [this message]
2018-12-18 22:41     ` [PATCH v3] " Jaegeuk Kim
2019-01-09  5:22       ` Jaegeuk Kim
2019-01-09 20:51       ` Bart Van Assche
