From: Christoph Hellwig <hch@infradead.org>
To: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Jens Axboe <axboe@kernel.dk>,
	Christoph Hellwig <hch@infradead.org>,
	linux-block <linux-block@vger.kernel.org>,
	Damien Le Moal <Damien.LeMoal@wdc.com>,
	Keith Busch <kbusch@kernel.org>,
	"linux-scsi @ vger . kernel . org" <linux-scsi@vger.kernel.org>,
	"Martin K . Petersen" <martin.petersen@oracle.com>,
	"linux-fsdevel @ vger . kernel . org"
	<linux-fsdevel@vger.kernel.org>
Subject: Re: [PATCH v3 02/10] block: Introduce REQ_OP_ZONE_APPEND
Date: Fri, 27 Mar 2020 10:19:04 -0700
Message-ID: <20200327171904.GD11524@infradead.org>
In-Reply-To: <20200327165012.34443-3-johannes.thumshirn@wdc.com>

> +/**
> + * bio_full - check if the bio is full
> + * @bio:	bio to check
> + * @len:	length of one segment to be added
> + *
> + * Return true if @bio is full and one segment with @len bytes can't be
> + * added to the bio, otherwise return false
> + */
> +bool bio_full(struct bio *bio, unsigned len)
> +{
> +	if (bio->bi_vcnt >= bio->bi_max_vecs)
> +		return true;
> +
> +	if (bio->bi_iter.bi_size > UINT_MAX - len)
> +		return true;
> +
> +	if (bio_op(bio) == REQ_OP_ZONE_APPEND)
> +		return bio_can_zone_append(bio, len);
> +
> +	return false;
> +}

If you need to move bio_full out of line, that should be a separate
prep patch.  But I'd rather unshare a little more code than spread
zone append conditionals over lots of fast path functions.
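
Something along these lines is what I have in mind (rough sketch only,
untested; the name and the exact limit check are made up): give zone
append its own add-page entry point so the generic helpers stay
unconditional:

int bio_add_zone_append_page(struct bio *bio, struct page *page,
			     unsigned int len, unsigned int offset)
{
	struct request_queue *q = bio->bi_disk->queue;
	unsigned int max_bytes;

	if (WARN_ON_ONCE(bio_op(bio) != REQ_OP_ZONE_APPEND))
		return 0;

	/* enforce the zone append payload limit up front */
	max_bytes = queue_max_zone_append_sectors(q) << SECTOR_SHIFT;
	if (bio->bi_iter.bi_size + len > max_bytes)
		return 0;

	/* the generic fast path stays free of zone append conditionals */
	return bio_add_page(bio, page, len, offset);
}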

> +static bool bio_try_merge_zone_append_page(struct bio *bio, struct page *page,
> +					   unsigned int len, unsigned int off)
> +{
> +	struct request_queue *q = bio->bi_disk->queue;
> +	struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
> +	unsigned long mask = queue_segment_boundary(q);
> +	phys_addr_t addr1 = page_to_phys(bv->bv_page) + bv->bv_offset;
> +	phys_addr_t addr2 = page_to_phys(page) + off + len - 1;
> +
> +	if ((addr1 | mask) != (addr2 | mask))
> +		return false;
> +	if (bv->bv_len + len > queue_max_segment_size(q))
> +		return false;
> +	return true;
> +}

This seems to be identical to bio_try_merge_pc_page, except for not
passing an explicit queue argument, and for not calling
__bio_try_merge_page.  I'd rather factor out a new
__bio_can_merge_pc_page or similar helper in a prep patch and use
that in both functions.
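
Roughly like this (sketch, untested), which both bio_try_merge_pc_page
and the zone append path could then call:

static bool __bio_can_merge_pc_page(struct request_queue *q,
				    struct bio_vec *bv, struct page *page,
				    unsigned int len, unsigned int off)
{
	unsigned long mask = queue_segment_boundary(q);
	phys_addr_t addr1 = page_to_phys(bv->bv_page) + bv->bv_offset;
	phys_addr_t addr2 = page_to_phys(page) + off + len - 1;

	/* both segments must fall inside the same hardware boundary */
	if ((addr1 | mask) != (addr2 | mask))
		return false;
	/* and the merged segment must not exceed the hardware limit */
	if (bv->bv_len + len > queue_max_segment_size(q))
		return false;
	return true;
}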

>  /**
>   * __bio_try_merge_page - try appending data to an existing bvec.
>   * @bio: destination bio
> @@ -856,6 +911,12 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
>  	if (bio->bi_vcnt > 0) {
>  		struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
>  
> +		if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
> +			if (!bio_try_merge_zone_append_page(bio, page, len,
> +							    off))
> +				return false;
> +		}
> +
>  		if (page_is_mergeable(bv, page, len, off, same_page)) {
>  			if (bio->bi_iter.bi_size > UINT_MAX - len)
>  				return false;

I'd rather have a separate __bio_try_merge_append_page helper to avoid
the conditional in __bio_try_merge_page.
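
E.g. something like this (sketch only, name as suggested above), which
the zone append callers would use instead of touching the generic
helper:

static bool __bio_try_merge_append_page(struct bio *bio, struct page *page,
					unsigned int len, unsigned int off,
					bool *same_page)
{
	/* apply the hardware merge constraints to the last bvec, if any */
	if (bio->bi_vcnt > 0 &&
	    !bio_try_merge_zone_append_page(bio, page, len, off))
		return false;

	/* then fall through to the unchanged generic merge */
	return __bio_try_merge_page(bio, page, len, off, same_page);
}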

> @@ -916,6 +977,7 @@ int bio_add_page(struct bio *bio, struct page *page,
>  	if (!__bio_try_merge_page(bio, page, len, offset, &same_page)) {
>  		if (bio_full(bio, len))
>  			return 0;
> +
>  		__bio_add_page(bio, page, len, offset);
>  	}
>  	return len;
> @@ -948,7 +1010,7 @@ static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
>  
>  	len = min_t(size_t, bv->bv_len - iter->iov_offset, iter->count);
>  	size = bio_add_page(bio, bv->bv_page, len,
> -				bv->bv_offset + iter->iov_offset);
> +			    bv->bv_offset + iter->iov_offset);

Spurious whitespace changes.

>  	if (unlikely(size != len))
>  		return -EINVAL;
>  	iov_iter_advance(iter, size);
> @@ -1448,7 +1510,7 @@ struct bio *bio_copy_user_iov(struct request_queue *q,
>   */
>  struct bio *bio_map_user_iov(struct request_queue *q,
>  			     struct iov_iter *iter,
> -			     gfp_t gfp_mask)
> +			     gfp_t gfp_mask, unsigned int op)

Why do we need to pass the op here? bio_map_user_iov is only used
for SG_IO passthrough.

>  				if (!__bio_add_pc_page(q, bio, page, n, offs,
> -						&same_page)) {
> +						       &same_page)) {

Spurious whitespace change.

>  extern int bio_add_pc_page(struct request_queue *, struct bio *, struct page *,
>  			   unsigned int, unsigned int);
> +

Spurious whitespace change.

> +static inline unsigned int queue_max_zone_append_sectors(const struct request_queue *q)

This adds an overly long line.
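
Breaking after the return type keeps it within 80 columns; the body
below is only a guess that the limit lives in
q->limits.max_zone_append_sectors like the other queue_max_* helpers:

static inline unsigned int
queue_max_zone_append_sectors(const struct request_queue *q)
{
	return q->limits.max_zone_append_sectors;
}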


Thread overview: 31+ messages
2020-03-27 16:50 [PATCH v3 00/10] Introduce Zone Append for writing to zoned block devices Johannes Thumshirn
2020-03-27 16:50 ` [PATCH v3 01/10] block: provide fallbacks for blk_queue_zone_is_seq and blk_queue_zone_no Johannes Thumshirn
2020-03-27 17:10   ` Christoph Hellwig
2020-03-27 16:50 ` [PATCH v3 02/10] block: Introduce REQ_OP_ZONE_APPEND Johannes Thumshirn
2020-03-27 17:19   ` Christoph Hellwig [this message]
2020-03-31 15:23   ` Keith Busch
2020-03-31 15:35     ` Johannes Thumshirn
2020-03-27 16:50 ` [PATCH v3 03/10] block: introduce blk_req_zone_write_trylock Johannes Thumshirn
2020-03-27 17:19   ` Christoph Hellwig
2020-03-27 16:50 ` [PATCH v3 04/10] block: Introduce zone write pointer offset caching Johannes Thumshirn
2020-03-27 17:21   ` Christoph Hellwig
2020-03-27 16:50 ` [PATCH v3 05/10] scsi: sd_zbc: factor out sanity checks for zoned commands Johannes Thumshirn
2020-03-27 17:21   ` Christoph Hellwig
2020-03-27 16:50 ` [PATCH v3 06/10] scsi: sd_zbc: emulate ZONE_APPEND commands Johannes Thumshirn
2020-03-28  8:51   ` Christoph Hellwig
2020-03-28  9:02     ` Damien Le Moal
2020-03-28  9:07       ` hch
2020-03-28  9:18         ` Damien Le Moal
2020-03-28  9:21           ` hch
2020-03-27 16:50 ` [PATCH v3 07/10] null_blk: Cleanup zoned device initialization Johannes Thumshirn
2020-03-27 17:23   ` Christoph Hellwig
2020-03-27 16:50 ` [PATCH v3 08/10] null_blk: Support REQ_OP_ZONE_APPEND Johannes Thumshirn
2020-03-27 17:26   ` Christoph Hellwig
2020-03-28  8:51     ` Damien Le Moal
2020-03-28 14:17       ` Johannes Thumshirn
2020-03-27 16:50 ` [PATCH v3 09/10] block: export bio_release_pages and bio_iov_iter_get_pages Johannes Thumshirn
2020-03-27 17:07   ` Christoph Hellwig
2020-03-27 17:13     ` Johannes Thumshirn
2020-03-27 17:22       ` hch
2020-03-27 16:50 ` [PATCH v3 10/10] zonefs: use REQ_OP_ZONE_APPEND for sync DIO Johannes Thumshirn
2020-03-27 17:10   ` Christoph Hellwig
